Configuring Logstash

Ironstream Hub Administration

Product type
Software
Portfolio
Integrate
Product family
Ironstream
Product
Ironstream > Ironstream Hub
Ironstream > Ironstream for Elastic®
Ironstream > Ironstream for Splunk®
Ironstream > Ironstream for Kafka®
Version
1.3
Language
English
Content type
Administration
Product name
Ironstream Hub
Title
Ironstream Hub Administration
First publish date
2022
Last updated
2024-08-07
Published on
2024-08-07T14:51:48.776141

See the Logstash documentation to install, configure and connect it to a target, such as Elasticsearch.

When Logstash is available, and data is being forwarded from Logstash to Elasticsearch, follow these steps to add the Hub logs as a Source:

  1. Create a new Logstash configuration file.
  2. In the input section, modify the ‘path’ to reference the path to the logs as follows:
    On Linux:
    input {
      file {
        path => "<install location>/log/<hostname>\.<type of file>\.(.+)\.log"
        type => "insight"
        #start_position => "beginning"
      }
    }
    On Windows:
    input {
      file {
        path => "<install location>\log\<hostname>\.<type of file>\.(.+)\.log"
        type => "insight"
        #start_position => "beginning"
      }
    }

    If the installation path was changed, reference the new path in the path field. The start_position setting refers to where Logstash starts processing when it sees a new file. To ensure Logstash always reads a new file from the beginning, uncomment this line so that start_position is set to ‘beginning’.
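
    As a concrete illustration, the input section below assumes a hypothetical Linux installation under /opt/ironstream/hub and an audit file type (both the path and the file type are examples only; substitute the values for your own installation):

    input {
      file {
        path => "/opt/ironstream/hub/log/*.audit.*.log"
        type => "insight"
        start_position => "beginning"
      }
    }

    Note that start_position is uncommented here, so new files are read from the beginning.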

  3. The filter section needs to:
    1. Convert messages into a JSON data structure. This ensures Elasticsearch recognises the JSON key-value pairs, rather than treating the whole message as a single string.
      Note: Any messages that cannot be converted are discarded by Logstash, so we recommend adding a failure check to replace undesired characters if necessary.
    2. Extract the host name from the file name for Elasticsearch to ingest as an additional “hostname” field.
    3. Remove the @version field which is just the Logstash version and is not required.
    4. Elasticsearch uses a @timestamp field to order events. If it is not present in the event, Logstash sets it to the time it reads the record, rather than the time the event was written on the IBM i system. Therefore, the filter section needs to set @timestamp from the DATETIME field, which is written by Ironstream in RFC 3339 format (i.e. ISO 8601).
      Note: This also assumes the Logstash forwarder is in the same timezone as the LPAR.
      filter {
      # Convert into json and remove the original 'message'
            json {
              source => "message"
              remove_field => [ "message" ]
            }
      
      # May have failed. Use tags to check if failed
            if "_jsonparsefailure" in [tags] {
                # We failed the json parser!
                mutate {
                  remove_tag => [ "_jsonparsefailure" ]
                  # Replace any undesired character combinations
                  gsub => [
                    "message", "\\x", " x",
                    "message", "\x01", " "
                  ]
                }
                # Try the json parser again
                json {
                   source => "message"
                }
      # Check if it worked this time. If it did, then we now have the json fields
      # so we need to remove the original 'message' field again
                if "_jsonparsefailure" not in [tags] {
                  mutate {
                    remove_field => [ "message" ]
                  }
                }
              }
          grok {
              match => ["path","(?<hostname>.+)\.<type of file>\.(.+)\.log"]
          }
          mutate {
              remove_field => [ "@version"]
          }
          date {
              match => ["DATETIME", "ISO8601"]
              target => "@timestamp"
           }
      }
      

    If you wish to filter or transform the data further, the filter section can be enhanced to include additional filters that remove fields, add fields or apply other transformations.
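
    If, for example, a site-specific field is wanted in every event, a further mutate filter can be appended to the filter section (the field name and value here are purely illustrative):

          mutate {
            add_field => { "environment" => "production" }
          }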

  4. In the output section, modify these:
    1. The ‘hosts’ field needs to point to the site specific hosts.
    2. The ‘index’ field needs to point to the site specific index.
      output {
          elasticsearch {
            hosts => ["localhost:9200"]
            index => "ironstream"
          }
      }
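
      For example, assuming a hypothetical cluster of two Elasticsearch nodes and a daily index naming scheme, the output section could look like this:

      output {
          elasticsearch {
            hosts => ["es-node1:9200", "es-node2:9200"]
            index => "ironstream-%{+YYYY.MM.dd}"
          }
      }

      The %{+YYYY.MM.dd} reference makes Logstash substitute the event date into the index name, creating one index per day.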
      
  5. Start the Logstash forwarder, ensuring the correct path and name are provided for the configuration file on the command line or, for Windows, in the Service attributes or Started Task properties.
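
    For example, on Linux the forwarder might be started from the Logstash installation directory as follows, where the configuration file path is illustrative only:

    bin/logstash -f /etc/logstash/conf.d/ironstream.conf

    The -f (or --path.config) option tells Logstash which configuration file, or directory of configuration files, to load.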