syslog-ng Open Source Edition 3.22 - Administration Guide

Using filters as selector

To better control which log messages you add contextual data to, you can use filters as selectors. In this case, the first column of the CSV database file must contain the name of a filter. For each message, syslog-ng OSE evaluates the filters in the order they appear in the database file. If a filter matches the message, syslog-ng OSE adds the name-value pair related to the filter.

For example, the database file can contain the following entries. (For details on the accepted CSV format, see database().)

f_auth,domain,all
f_localhost,source,localhost
f_kern,domain,kernel

Note that syslog-ng OSE does not evaluate other filters after the first match. For example, if you use the previous database file, and a message matches both the f_auth and f_localhost filters, syslog-ng OSE adds only the name-value pair of f_auth to the message.

To add multiple name-value pairs to a message, include a separate line in the database for each name-value pair, for example:

f_localhost,host-role,firewall
f_localhost,contact-person,"John Doe"
f_localhost,contact-email,johndoe@example.com

Using the default-selector() option, you can also add data to messages that do not have a matching selector entry in the database.

You must store the filters you reference in a database in a separate file. This file is similar to a syslog-ng OSE configuration file, but must contain only a version string and filters (and optionally comments). You can use the syslog-ng --syntax-only <filename> command to ensure that the file is valid. For example, the content of such a file can be:

@version: 3.22
filter f_localhost { host("mymachine.example.com") };
filter f_auth { facility(4) };
filter f_kern { facility(0) };

Declaration:
parser p_add_context_data_filter {
    add-contextual-data(
        selector(filters("filters.conf")),
        database("context-info-db.csv"),
        prefix(".metadata.")
    );
};

If you modify the database file, or the file that contains the filters, you have to reload syslog-ng OSE for the changes to take effect. If reloading syslog-ng OSE or the files fails for some reason, syslog-ng OSE will keep using the last working version of the file.
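
For reference, the following is a minimal sketch of a complete setup that uses the parser declared above in a log path. The network() source, its port, and the output file name are only illustrative, and the template simply prints one of the added name-value pairs:

source s_network {
    network(port(5514));
};

# The pairs added by p_add_context_data_filter carry the ".metadata." prefix,
# so for example the host-role value from the database is available as ${.metadata.host-role}.
destination d_annotated {
    file("/var/log/annotated.log"
        template("${ISODATE} ${HOST} role=${.metadata.host-role} ${MESSAGE}\n"));
};

log {
    source(s_network);
    parser(p_add_context_data_filter);
    destination(d_annotated);
};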

Options add-contextual-data()

The add-contextual-data() parser has the following options.

Required options:

The following options are required: selector(), database().

database()
Type: <path-to-file>.csv
Default:

Description: Specifies the path to the CSV file, for example, /opt/syslog-ng/my-csv-database.csv. The file must have the .csv extension, and can use Windows-style (CRLF) or UNIX-style (LF) line breaks. You can use an absolute path, or a path relative to the syslog-ng OSE binary.

default-selector()
Synopsis: default-selector()

Description: Specifies the ID of the entry (line) that corresponds to log messages that do not have a selector matching an entry in the database. For example, if you add name-value pairs from the database based on the hostname from the log message (selector("${HOST}")), then you can include a line for unknown hosts in the database, and set default-selector() to the ID of the line for unknown hosts. In the CSV file:

unknown-hostname,host-role,unknown

In the syslog-ng OSE configuration file:

add-contextual-data(
    selector("$HOST")
    database("context-info-db.csv")
    default-selector("unknown-hostname")
);
ignore-case()
Synopsis: ignore-case()
Default: ignore-case(no)

Description: Specifies whether selectors are matched in a case-insensitive manner. By default, matching is case sensitive (ignore-case(no)). If you set the ignore-case() option to yes, syslog-ng OSE handles the selectors as case insensitive.
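
For example, a minimal sketch, reusing the selector and database file from the earlier examples:

parser p_add_context_data {
    add-contextual-data(
        # With ignore-case(yes), the value of ${HOST} is matched against the
        # entry IDs in the CSV file regardless of letter case.
        selector("${HOST}"),
        database("context-info-db.csv"),
        ignore-case(yes)
    );
};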

prefix()
Synopsis: prefix()

Description: Insert a prefix before the name part of the added name-value pairs (including the pairs added by the default-selector()) to help further processing.

selector()
Synopsis: selector()

Description: Specifies the string or macro that syslog-ng OSE evaluates for each message. If its value matches the ID of an entry in the database, syslog-ng OSE adds the name-value pair of every matching database entry to the log message. Currently, you can use strings and a single macro (for example, ${HOST}) in the selector() option; templates are not supported. To use filters as selectors, see Using filters as selector.
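
For comparison, a sketch of the two selector forms used in this section, reusing the file names from the earlier examples:

# Macro-based selector: the value of ${HOST} is matched against the first column of the CSV file.
parser p_ctx_by_host {
    add-contextual-data(
        selector("${HOST}"),
        database("context-info-db.csv")
    );
};

# Filter-based selector: the first column of the CSV file contains the names of filters defined in filters.conf.
parser p_ctx_by_filter {
    add-contextual-data(
        selector(filters("filters.conf")),
        database("context-info-db.csv")
    );
};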

Looking up GeoIP data from IP addresses (DEPRECATED)

This parser is deprecated. Use Looking up GeoIP2 data from IP addresses instead.

The syslog-ng OSE application can look up IPv4 addresses in an offline GeoIP database, and make the retrieved data available in name-value pairs. IPv6 addresses are not supported. Depending on the database used, you can access country code, longitude, and latitude information.

NOTE:

To access longitude and latitude information, download the GeoLiteCity database, and unzip it (for example, to the /usr/share/GeoIP/GeoLiteCity.dat file). The default databases available on Linux and other platforms usually contain only the country codes.

You can refer to the separated parts of the message using the key of the value as a macro. For example, if the message contains KEY1=value1,KEY2=value2, you can refer to the values as ${KEY1} and ${KEY2}.

Declaration:
parser parser_name {
    geoip(
        <macro-containing-the-IP-address-to-lookup>
        prefix()
        database("<path-to-database-file>")
    );
};

Example: Using the GeoIP parser

In the following example, syslog-ng OSE retrieves the GeoIP data of the IP address contained in the ${HOST} field of the incoming message, and includes the data (prefixed with the geoip. string) in the output JSON message.

@version: 3.7
@module geoip

options {
    keep-hostname(yes);
};

source s_file {
    file("/tmp/input");
};

parser p_geoip { geoip( "${HOST}", prefix( "geoip." ) database( "/usr/share/GeoIP/GeoLiteCity.dat" ) ); };

destination d_file {
    file( "/tmp/output" template("$(format-json --scope core --key geoip*)\n") );
};


log {
    source(s_file);
    parser(p_geoip);
    destination(d_file);
};

For example, for the message <38>Jan 1 14:45:22 192.168.1.1 prg00000[1234]: test message, the output will look like:

{"geoip":{"longitude":"47.460704","latitude":"19.049968","country_code":"HU"},"PROGRAM":"prg00000","PRIORITY":"info","PID":"1234","MESSAGE":"test message","HOST":"192.168.1.1","FACILITY":"auth","DATE":"Jan  1 14:45:22"}

If you are transferring your log messages into Elasticsearch, use the following rewrite rule to combine the longitude and latitude information into a single value (called geoip.location), and set the mapping in Elasticsearch accordingly. Do not forget to include the rewrite in your log path. For details on transferring your log messages to Elasticsearch, see elasticsearch2: Sending messages directly to Elasticsearch version 2.0 or higher (DEPRECATED).

rewrite r_geoip {
    set(
        "${geoip.latitude},${geoip.longitude}",
        value( "geoip.location" ),
        condition(not "${geoip.latitude}" == "")
    );
};

In your Elasticsearch configuration, set the appropriate mappings:

{
   "mappings" : {
      "_default_" : {
         "properties" : {
            "geoip" : {
               "properties" : {
                  "country_code" : {
                     "index" : "not_analyzed",
                     "type" : "string",
                     "doc_values" : true
                  },
                  "latitude" : {
                     "index" : "not_analyzed",
                     "type" : "string",
                     "doc_values" : true
                  },
                  "longitude" : {
                     "type" : "string",
                     "doc_values" : true,
                     "index" : "not_analyzed"
                  },
                  "location" : {
                     "type" : "geo_point"
                  }
               }
            }
         }
      }
   }
}

Options of geoip parsers

The geoip parser has the following options.

prefix()
Synopsis: prefix()

Description: Insert a prefix before the name part of the parsed name-value pairs to help further processing. For example:

  • To insert the my-parsed-data. prefix, use the prefix(my-parsed-data.) option.

  • To refer to a particular value that has a prefix, include the prefix in the name of the macro, for example, ${my-parsed-data.name}.

  • If you forward the parsed messages using the IETF-syslog protocol, you can insert all the parsed data into the SDATA part of the message using the prefix(.SDATA.my-parsed-data.) option.

Names starting with a dot (for example, .example) are reserved for use by syslog-ng OSE. If you use such a macro name as the name of a parsed value, syslog-ng OSE will attempt to replace the original value of the macro (note that only soft macros can be overwritten, see Hard vs. soft macros for details). To avoid such problems, use a prefix when naming the parsed values, for example, prefix(my-parsed-data.).

For example, to insert the geoip. prefix, use the prefix(geoip.) option. To refer to a particular value when using a prefix, include the prefix in the name of the macro, for example, ${geoip.country_code}.
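
As a sketch of the SDATA case mentioned above, the following configuration is one possible way to forward the parsed GeoIP data in the structured-data part of an IETF-syslog message; the log server address is a placeholder, and the s_file source is the one from the example earlier in this section:

parser p_geoip_sdata {
    geoip(
        "${HOST}"
        # Name-value pairs starting with .SDATA. are sent in the SDATA part
        # of the message when the syslog() destination is used.
        prefix(".SDATA.geoip.")
        database("/usr/share/GeoIP/GeoLiteCity.dat")
    );
};

destination d_ietf {
    syslog("logserver.example.com" transport("tcp") port(601));
};

log {
    source(s_file);
    parser(p_geoip_sdata);
    destination(d_ietf);
};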

database()
Synopsis: database()
Default: /usr/share/GeoIP/GeoIP.dat

Description: The full path to the GeoIP database to use. Note that syslog-ng OSE must have the required privileges to read this file. Do not modify or delete this file while syslog-ng OSE is running, because doing so can crash syslog-ng OSE.
