
syslog-ng Open Source Edition 3.38 - Administration Guide

Log paths

Log paths determine what happens with the incoming log messages. Messages coming from the sources listed in the log statement and matching all the filters are sent to the listed destinations.

To define a log path, add a log statement to the syslog-ng configuration file using the following syntax:

Declaration
log {
    source(s1); source(s2); ...
    optional_element(filter1|parser1|rewrite1);
    optional_element(filter2|parser2|rewrite2);
    ...
    destination(d1); destination(d2); ...
    flags(flag1[, flag2...]);
};

Caution:

Log statements are processed in the order they appear in the configuration file, thus the order of log paths may influence what happens to a message, especially when using filters and log flags.

NOTE: The order of filters, rewriting rules, and parsers in the log statement is important, as they are processed sequentially.
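
For example, a minimal sketch of why the order matters, assuming a source s_localhost and a file destination d_file1 like the ones used elsewhere in this chapter (the r_anonymize and f_redacted objects are hypothetical names used only for illustration):

rewrite r_anonymize {
    set("REDACTED", value("HOST"));
};
filter f_redacted {
    host("REDACTED");
};
log {
    source(s_localhost);
    rewrite(r_anonymize);   # runs first and overwrites the HOST macro
    filter(f_redacted);     # matches, because HOST is already "REDACTED"
    destination(d_file1);
};

If the filter() reference were placed before the rewrite() reference, the filter would see the original hostname and the messages would be discarded.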

Example: A simple log statement

The following log statement sends all messages arriving on the localhost to a remote server.

source s_localhost {
    network(
        ip(127.0.0.1)
        port(1999)
    );
};
destination d_tcp {
    network("10.1.2.3"
        port(1999)
        localport(999)
    );
};
log {
    source(s_localhost);
    destination(d_tcp);
};

By default, all matching log statements are processed, and the messages are sent to every matching destination. As a result, a single log message might be sent to the same destination several times (if the destination is listed in several log statements), and it can also be sent to several different destinations.

This default behavior can be changed using the flags() parameter. Flags apply to individual log paths; they are not global options. For details and examples on the available flags, see Log path flags. The effect and use of the flow-control flag is detailed in Managing incoming and outgoing messages with flow-control.
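
For example, a minimal sketch of how the final flag changes this default behavior, using the s_localhost source defined in the example above and two assumed file destinations (d_file1, d_file2):

# Messages from 192.168.1.1 are written only to d_file1, because
# flags(final) prevents them from matching later log statements.
log {
    source(s_localhost);
    filter { host("192.168.1.1"); };
    destination(d_file1);
    flags(final);
};

# This log path receives every message except those consumed above.
log {
    source(s_localhost);
    destination(d_file2);
};

Without flags(final), messages from 192.168.1.1 would be written to both destinations.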

Embedded log statements

Starting from version 3.0, syslog-ng can handle embedded log statements (also called log pipes). Embedded log statements are useful for creating complex, multi-level log paths that have several destinations and use filters, parsers, and rewrite rules.

For example, if you want to filter your incoming messages based on the facility parameter, and then use further filters to send messages arriving from different hosts to different destinations, you would use embedded log statements.

Figure 12: Embedded log statement

A top-level log statement includes sources, usually together with filters, parsers, rewrite rules, or destinations, and can contain other (embedded) log statements that in turn can include filters, parsers, rewrite rules, and destinations. The following rules apply to embedded log statements:

  • Only the beginning (also called top-level) log statement can include sources.

  • Embedded log statements can include multiple log statements on the same level (that is, a top-level log statement can include two or more log statements).

  • Embedded log statements can include several levels of log statements (that is, a top-level log statement can include a log statement that includes another log statement, and so on).

  • After an embedded log statement, you can write either another log statement or the flags() option of the original log statement. You cannot use filters or other configuration objects there. This also means that flags (except for the flow-control flag) apply to the entire log statement; you cannot use them only for the embedded log statement.

  • Embedded log statements that are on the same level receive the same messages from the higher-level log statement. For example, if the top-level log statement includes a filter, the lower-level log statements receive only the messages that pass the filter.

Figure 13: Embedded log statements

Embedded log statements can be used to optimize the processing of log messages, for example, to re-use the results of filtering and rewriting operations.

Using embedded log statements

Embedded log statements (for details, see Embedded log statements) re-use the results of processing messages (for example, the results of filtering or rewriting) to create complex log paths. Embedded log statements use the same syntax as regular log statements, but they cannot contain additional sources. To define embedded log statements, use the following syntax:

log {
    source(s1); source(s2); ...

    optional_element(filter1|parser1|rewrite1);
    optional_element(filter2|parser2|rewrite2);
    ...
    destination(d1); destination(d2); ...

    #embedded log statement
    log {
        optional_element(filter1|parser1|rewrite1);
        optional_element(filter2|parser2|rewrite2);
        ...
        destination(d1); destination(d2); ...

        #another embedded log statement
        log {
            optional_element(filter1|parser1|rewrite1);
            optional_element(filter2|parser2|rewrite2);
            ...
            destination(d1); destination(d2); ...
        };
    };
    #set flags after the embedded log statements
    flags(flag1[, flag2...]);
};

Example: Using embedded log paths

The following log path sends every message to the configured destinations: both the d_file1 and the d_file2 destinations receive every message of the source.

log {
    source(s_localhost);
    destination(d_file1);
    destination(d_file2);
};

The next example is equivalent to the one above, but uses an embedded log statement.

log {
    source(s_localhost);
    destination(d_file1);
    log {
        destination(d_file2);
    };
};

The following example uses two filters:

  • messages coming from the host 192.168.1.1 are sent to the d_file1 destination, and

  • messages coming from the host 192.168.1.1 and containing the string example are sent to the d_file2 destination.

log {
    source(s_localhost);
    filter {
        host("192.168.1.1");
    };
    destination(d_file1);
    log {
        filter {
            message("example");
        };
        destination(d_file2);
    };
};

The following example collects logs from multiple source groups and uses the source() filter in the embedded log statement to select messages of the s_network source group.

log {
    source(s_localhost);
    source(s_network);
    destination(d_file1);
    log {
        filter {
            source(s_network);
        };
        destination(d_file2);
    };
};

if-else-elif: Conditional expressions

You can use if {}, elif {}, and else {} blocks to configure conditional expressions.

Conditional expressions' format

Conditional expressions have two formats:

  • Explicit filter expression:

    if (message('foo')) {
        parser { date-parser(); };
    } else {
        ...
    };

    This format uses only the filter expression in if() as the condition. If the message does not contain 'foo', the else branch is taken.

    The else {} branch can be empty; you can use it to send the message to the default branch.

  • Condition embedded in the log path:

    if {
        filter { message('foo'); };
        parser { date-parser(); };
    } else {
        ...
    };

    This format treats all the filters and parsers in the block as one combined condition. If the message contains 'foo' but the date-parser() fails, the else branch is taken. Similarly, if the message does not contain 'foo', the else branch is taken.
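
The elif {} block can be chained between if {} and else {} in either format. A minimal sketch with the filter-expression form (the d_errors, d_warnings, and d_other destinations are assumed to be defined elsewhere):

if (message("error")) {
    destination(d_errors);
} elif (message("warning")) {
    destination(d_warnings);
} else {
    destination(d_other);
};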

Using the if {} and else {} blocks in your configuration

You can copy-paste the following example and use it as a template for using the if {} and else {} blocks in your configuration.

Example for using the if {} and else {} blocks in your configuration

The following configuration can be used as a template for using the if {} and else {} blocks:

log {
    source { example-msg-generator(num(1) template("...,STRING-TO-MATCH,...")); };
    source { example-msg-generator(num(1) template("...,NO-MATCH,...")); };

    if (message("STRING-TO-MATCH")) {
        destination { file("/dev/stdout" template("matched: $MSG\n") persist-name("1")); };
    } else {
        destination { file("/dev/stdout" template("unmatched: $MSG\n") persist-name("2")); };
    };
};

The configuration results in the following console printout:

matched: ...,STRING-TO-MATCH,...
unmatched: ...,NO-MATCH,...

An alternative, less straightforward way to implement conditional evaluation is to use junctions. For details on junctions and channels, see Junctions and channels.
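
For example, a rough sketch of the same kind of branching expressed with a junction (using the s_localhost source and the assumed d_file1 and d_file2 destinations from the earlier examples); the final flag keeps the matching messages out of the second channel:

log {
    source(s_localhost);
    junction {
        channel {
            filter { message("example"); };
            destination(d_file1);
            flags(final);
        };
        channel {
            destination(d_file2);
        };
    };
};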
