syslog-ng Premium Edition 7.0.34 - Administration Guide


Parsing key=value pairs

The syslog-ng PE application can separate a message consisting of whitespace- or comma-separated key=value pairs (for example, Postfix log messages) into name-value pairs. You can also specify a separator character other than the equal sign, for example, a colon (:) to parse MySQL log messages. The syslog-ng PE application automatically trims any leading or trailing whitespace characters from the keys and values, and also parses values that contain unquoted whitespace. For details on using value-pairs in syslog-ng PE, see Structuring macros, metadata, and other value-pairs.

You can refer to the separated parts of the message using the key of the value as a macro. For example, if the message contains KEY1=value1,KEY2=value2, you can refer to the values as ${KEY1} and ${KEY2}.
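
For example, the following minimal sketch (the port, the file path, and the demo. prefix are illustrative and not part of the original example) writes the two parsed values to a file by referencing them as macros:

source s_net_demo {
    network(port(5514));
};

parser p_kv_demo {
    kv-parser(prefix("demo."));
};

destination d_parsed {
    # For the message KEY1=value1,KEY2=value2, the template resolves to
    # "first=value1 second=value2" because of the prefix("demo.") option.
    file("/tmp/parsed.log"
        template("first=${demo.KEY1} second=${demo.KEY2}\n"));
};

log {
    source(s_net_demo);
    parser(p_kv_demo);
    destination(d_parsed);
};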

NOTE: If a log message contains the same key multiple times (for example, key1=value1, key2=value2, key1=value3, key3=value4, key1=value5), then syslog-ng PE stores only the last (rightmost) value for the key. Using the previous example, syslog-ng PE will store the following pairs: key1=value5, key2=value2, key3=value4.

Caution:

If the names of keys in the message are the same as the names of syslog-ng PE soft macros, the value from the parsed message will overwrite the value of the macro. For example, the PROGRAM=value1, MESSAGE=value2 content will overwrite the ${PROGRAM} and ${MESSAGE} macros. To avoid overwriting such macros, use the prefix() option.

Hard macros cannot be modified, so they will not be overwritten. For details on the macro types, see Hard versus soft macros.

The parser discards message sections that are not key=value pairs, even if they appear between key=value pairs that can be parsed.

The names of the keys can contain only the following characters: numbers (0-9), letters (a-z, A-Z), underscore (_), dot (.), and hyphen (-). Other special characters are not permitted.

To parse key=value pairs, define a parser that has the kv-parser() option. Defining the prefix is optional. By default, the parser will process the ${MESSAGE} part of the log message. You can also define the parser inline in the log path.

Declaration
parser parser_name {
    kv-parser(
        prefix()
    );
};
Example: Using a key=value parser

In the following example, the source is a log message consisting of comma-separated key=value pairs, for example, a Postfix log message:

Jun 20 12:05:12 mail.example.com <info> postfix/qmgr[35789]: EC2AC1947DA: from=<me@example.com>, size=807, nrcpt=1 (queue active)

The kv-parser inserts the ".kv." prefix before all extracted name-value pairs. The destination is a file that uses the format-json template function. Every name-value pair whose name begins with a dot (".") character is written to the file (the dot-nv-pairs scope). The log statement connects the source, the parser, and the destination.

source s_kv {
    network(port(21514));
};

destination d_json {
    file("/tmp/test.json"
        template("$(format-json --scope dot-nv-pairs)\n"));
};

parser p_kv {
    kv-parser (prefix(".kv."));
};

log {
    source(s_kv);
    parser(p_kv);
    destination(d_json);
};

You can also define the parser inline in the log path.

source s_kv {
    network(port(21514));
};

destination d_json {
    file("/tmp/test.json"
        template("$(format-json --scope dot-nv-pairs)\n"));
};

log {
    source(s_kv);
    parser {
        kv-parser (prefix(".kv."));
    };
    destination(d_json);
};

You can set the separator character between the key and the value, for example, to parse key:value pairs, such as MySQL logs:

Mar  7 12:39:25 myhost MysqlClient[20824]: SYSTEM_USER:'oscar', MYSQL_USER:'my_oscar', CONNECTION_ID:23, DB_SERVER:'127.0.0.1', DB:'--', QUERY:'USE test;'
parser p_mysql {
    kv-parser(value-separator(":") prefix(".mysql."));
};
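
The parsed values can then be referenced using the .mysql. prefix. The following minimal sketch (the destination file path and template are illustrative) writes some of the parsed MySQL fields to a file, assuming the s_kv source and the p_mysql parser defined above:

destination d_mysql_audit {
    # Write selected fields parsed from the MySQL log line to a file.
    file("/tmp/mysql-audit.log"
        template("${.mysql.SYSTEM_USER} ${.mysql.MYSQL_USER} ${.mysql.QUERY}\n"));
};

log {
    source(s_kv);
    parser(p_mysql);
    destination(d_mysql_audit);
};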

Options of key=value parsers

The kv-parser has the following options.

extract-stray-words-into()
Synopsis: extract-stray-words-into("<name-value-pair>")

Description: Specifies the name-value pair where syslog-ng PE stores any stray words that appear before or between the parsed key-value pairs (mainly when the pair-separator() option is also set). If multiple stray words appear in a message, then syslog-ng PE stores them as a comma-separated list. Note that the prefix() option does not affect the name-value pair storing the stray words. Default value: N/A

Example: Extracting stray words in key-value pairs

For example, consider the following message:

VSYS=public; Slot=5/1; protocol=17; source-ip=10.116.214.221; source-port=50989; destination-ip=172.16.236.16; destination-port=162;time=2016/02/18 16:00:07; interzone-emtn_s1_vpn-enodeb_om; inbound; policy=370;

This is a list of key-value pairs, where the value separator is = and the pair separator is ;. However, before the last key-value pair (policy=370), there are two stray words: interzone-emtn_s1_vpn-enodeb_om and inbound. If you want to store or process these, specify a name-value pair to store them in the extract-stray-words-into() option, for example, extract-stray-words-into("my-stray-words"). The value of ${my-stray-words} for this message will be interzone-emtn_s1_vpn-enodeb_om, inbound.
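
For example, the following parser is a minimal sketch that could parse the message above (the .fw. prefix and the parser name are illustrative); it uses the semicolon pair separator and stores the stray words in the my-stray-words name-value pair:

parser p_stray {
    kv-parser(
        value-separator("=")
        pair-separator(";")
        extract-stray-words-into("my-stray-words")
        # The prefix() option does not affect the name-value pair
        # storing the stray words, so it remains ${my-stray-words}.
        prefix(".fw.")
    );
};

With this parser, ${.fw.policy} contains 370, and ${my-stray-words} contains the two stray words as a comma-separated list.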

prefix()
Synopsis: prefix()

Description: Insert a prefix before the name part of the parsed name-value pairs to help further processing. For example:

  • To insert the my-parsed-data. prefix, use the prefix(my-parsed-data.) option.

  • To refer to a particular piece of data that has a prefix, use the prefix in the name of the macro, for example, ${my-parsed-data.name}.

  • If you forward the parsed messages using the IETF-syslog protocol, you can insert all the parsed data into the SDATA part of the message using the prefix(.SDATA.my-parsed-data.) option.

Names starting with a dot (for example, .example) are reserved for use by syslog-ng PE. If you use such a macro name as the name of a parsed value, the parser will attempt to replace the original value of the macro (note that only soft macros can be overwritten, see Hard versus soft macros for details). To avoid such problems, use a prefix when naming the parsed values, for example, prefix(my-parsed-data.).

By default, kv-parser() uses the .kv. prefix. To modify it, use the following format:

parser {
    kv-parser(prefix("myprefix."));
};
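
For example, the following minimal sketch (the log server address is illustrative) forwards the parsed key=value pairs in the SDATA part of IETF-syslog messages, as mentioned in the list above; it assumes the s_kv source defined earlier:

parser p_kv_sdata {
    kv-parser(prefix(".SDATA.my-parsed-data."));
};

destination d_ietf {
    # Illustrative log server, using the IETF-syslog protocol.
    syslog("logserver.example.com" transport("tcp"));
};

log {
    source(s_kv);
    parser(p_kv_sdata);
    destination(d_ietf);
};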
pair-separator()
Synopsis: pair-separator("<separator-string>")

Description: Specifies the character or string that separates the key-value pairs from each other. Default value: ", " (a comma followed by a whitespace character)

For example, to parse key1=value1;key2=value2 pairs, use kv-parser(pair-separator(";"));

template()
Synopsis: template("${<macroname>}")

Description: The macro that contains the part of the message that the parser will process. It can also be a macro created by a previous parser of the log path. By default, the parser processes the entire message (${MESSAGE}).
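
For example, the following minimal sketch assumes a hypothetical ${.raw.payload} name-value pair created by an earlier parser in the log path, and runs the key=value parser on that part only:

parser p_kv_payload {
    kv-parser(
        # ${.raw.payload} is a hypothetical name-value pair created
        # by a previous parser of the log path.
        template("${.raw.payload}")
        prefix(".payload.")
    );
};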

value-separator()
Synopsis: value-separator("<separator-character>")

Description: Specifies the character that separates the keys from the values. Default value: =

For example, to parse key:value pairs, use kv-parser(value-separator(":"));.

JSON parser

JavaScript Object Notation (JSON) is a text-based open standard designed for human-readable data interchange. It is used primarily to transmit data between a server and web application, serving as an alternative to XML. It is described in RFC 4627. The syslog-ng PE application can separate parts of incoming JSON-encoded log messages into name-value pairs. For details on using value-pairs in syslog-ng PE, see Structuring macros, metadata, and other value-pairs.

You can refer to the separated parts of the JSON message using the key of the JSON object as a macro. For example, if the JSON contains {"KEY1":"value1","KEY2":"value2"}, you can refer to the values as ${KEY1} and ${KEY2}. If the JSON content is structured, syslog-ng PE converts it to dot-notation format. For example, to access the value of the following structure {"KEY1": {"KEY2": "VALUE"}}, use the ${KEY1.KEY2} macro.
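
For example, the following minimal sketch (the port and file path are illustrative) parses the nested structure shown above and writes the nested value to a file using the dot-notation macro:

source s_json_nested {
    network(port(5515) flags(no-parse));
};

parser p_json_nested {
    json-parser();
};

destination d_nested {
    # For the input {"KEY1": {"KEY2": "VALUE"}}, ${KEY1.KEY2} resolves to VALUE.
    file("/tmp/nested.log"
        template("${KEY1.KEY2}\n"));
};

log {
    source(s_json_nested);
    parser(p_json_nested);
    destination(d_nested);
};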

Caution:

If the names of keys in the JSON content are the same as the names of syslog-ng PE soft macros, the value from the JSON content will overwrite the value of the macro. For example, the {"PROGRAM":"value1","MESSAGE":"value2"} JSON content will overwrite the ${PROGRAM} and ${MESSAGE} macros. To avoid overwriting such macros, use the prefix() option.

Hard macros cannot be modified, so they will not be overwritten. For details on the macro types, see Hard versus soft macros.

NOTE: The JSON parser currently supports only integer, double, and string values when interpreting JSON structures. As syslog-ng does not handle different data types internally, the JSON parser converts all JSON data to string values. In the case of boolean types, the value is converted to its string representation ('TRUE' or 'FALSE').

The JSON parser discards messages that it cannot parse as JSON, so it also acts as a JSON filter.

To create a JSON parser, define a parser that has the json-parser() option. Defining the prefix and the marker are optional. By default, the parser will process the ${MESSAGE} part of the log message. To process other parts of a log message with the JSON parser, use the template() option. You can also define the parser inline in the log path.

Declaration
parser parser_name {
    json-parser(
        marker()
        prefix()
    );
};
Example: Using a JSON parser

In the following example, the source is a JSON-encoded log message. The syslog parser is disabled with the flags(no-parse) option, so that syslog-ng PE does not parse the message. The json-parser inserts the ".json." prefix before all extracted name-value pairs. The destination is a file that uses the format-json template function. Every name-value pair whose name begins with a dot (".") character is written to the file (the dot-nv-pairs scope). The log statement connects the source, the parser, and the destination.

source s_json {
    network(port(21514) flags(no-parse));
};

destination d_json {
    file("/tmp/test.json"
        template("$(format-json --scope dot-nv-pairs)\n"));
};

parser p_json {
    json-parser (prefix(".json."));
};

log {
    source(s_json);
    parser(p_json);
    destination(d_json);
};

You can also define the parser inline in the log path.

source s_json {
    network(port(21514) flags(no-parse));
};

destination d_json {
    file("/tmp/test.json"
        template("$(format-json --scope dot-nv-pairs)\n"));
};

log {
    source(s_json);
    parser {
        json-parser (prefix(".json."));
    };
    destination(d_json);
};

Options of JSON parsers

The JSON parser has the following options.

extract-prefix()
Synopsis: extract-prefix()

Description: Extract only the specified subtree from the JSON message. Use dot-notation to specify the subtree. The rest of the message will be ignored. For example, assuming that the incoming object is named msg, the json-parser(extract-prefix("foo.bar[5]")); parser is equivalent to the msg.foo.bar[5] JavaScript code. Note that the resulting expression must be a JSON object in order to extract its members into name-value pairs.

This feature also works when the top-level object is an array, because you can use an array index at the first indirection level, for example: json-parser(extract-prefix("[5]")), which is equivalent to msg[5].

In addition to alphanumeric characters, the key of the JSON object can contain the following characters: !"#$%&'()*+,-/:;<=>?@\^_`{|}~

It cannot contain the following characters: .[]

Example: Convert logstash eventlog format v0 to v1

The following parser converts messages in the logstash eventlog v0 format to the v1 format.

parser p_jsoneventv0 {
  channel {
    parser { json-parser(extract-prefix("@fields")); };
    parser { json-parser(prefix(".json.")); };
    rewrite {
      set("1" value("@version"));
      set("${.json.@timestamp}" value("@timestamp"));
      set("${.json.@message}" value("message"));
    };
  };
};
marker()
Synopsis: marker()

Description: Use a marker in case of mixed log messages, to identify JSON encoded messages for the parser.

Some logging implementations require a marker to be set before the JSON payload. The JSON parser can find these markers and parses the message only if the marker is present.

Example: Using the marker option in JSON parser

This JSON parser parses log messages that use the "@cee:" marker in front of the JSON payload. It inserts the ".cee." prefix before the names of the parsed name-value pairs, so that it is easier to later identify the name-value pairs that were parsed using this parser. (For details on selecting name-value pairs, see value-pairs().)

parser {
        json-parser(
            marker("@cee:")
            prefix(".cee.")
        );
    };
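
For example, assuming an illustrative application log line such as the following, this parser extracts ${.cee.user} and ${.cee.action} from the JSON payload after the marker:

Jun 20 12:05:12 myhost myapp[1234]: @cee:{"user":"joe", "action":"login"}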
prefix()
Synopsis: prefix()

Description: Insert a prefix before the name part of the parsed name-value pairs to help further processing. For example:

  • To insert the my-parsed-data. prefix, use the prefix(my-parsed-data.) option.

  • To refer to a particular piece of data that has a prefix, use the prefix in the name of the macro, for example, ${my-parsed-data.name}.

  • If you forward the parsed messages using the IETF-syslog protocol, you can insert all the parsed data into the SDATA part of the message using the prefix(.SDATA.my-parsed-data.) option.

Names starting with a dot (for example, .example) are reserved for use by syslog-ng PE. If you use such a macro name as the name of a parsed value, the parser will attempt to replace the original value of the macro (note that only soft macros can be overwritten, see Hard versus soft macros for details). To avoid such problems, use a prefix when naming the parsed values, for example, prefix(my-parsed-data.).

This parser does not have a default prefix. To configure a custom prefix, use the following format:

parser {
    json-parser(prefix("myprefix."));
};
template()
Synopsis: template("${<macroname>}")

Description: The macro that contains the part of the message that the parser will process. It can also be a macro created by a previous parser of the log path. By default, the parser processes the entire message (${MESSAGE}).
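
For example, the following minimal sketch assumes a hypothetical ${.kv.details} name-value pair (for instance, created by an earlier kv-parser) that holds a JSON-encoded payload, and parses only that part of the message:

parser p_json_details {
    json-parser(
        # ${.kv.details} is a hypothetical name-value pair that holds
        # a JSON-encoded payload, created earlier in the log path.
        template("${.kv.details}")
        prefix(".details.")
    );
};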
