syslog-ng Premium Edition 7.0.34 - Administration Guide


Prerequisites

The following describes how to send messages from syslog-ng PE to HDFS.

To send messages from syslog-ng PE to HDFS

  1. If you want to use the Java-based modules of syslog-ng PE (for example, the Elasticsearch, HDFS, or Kafka destinations), download and install version 1.8 of the Java Runtime Environment (JRE).

    The Java-based modules of syslog-ng PE are tested and supported when using the Oracle implementation of Java. Other implementations are untested and unsupported; they may or may not work as expected.

  2. Download the Hadoop Distributed File System (HDFS) libraries (version 2.x) from http://hadoop.apache.org/releases.html.

  3. Extract the HDFS libraries into a target directory (for example, /opt/hadoop/lib/), then execute the classpath command of the hdfs script: bin/hdfs classpath

    Use the classpath that this command returns in the syslog-ng PE configuration file, in the client-lib-dir() option of the HDFS destination (see the example after this procedure).

  4. Remove all log4j and slf4j-log4j12 files from the Hadoop library directory, and download log4j-slf4j-impl (https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-slf4j-impl/2.17.1) with a version that matches the log4j2 version shipped with syslog-ng Premium Edition. The log4j-slf4j-impl file must be located in one of the directories that the bin/hdfs classpath command returned.
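
Example: Referencing the Hadoop libraries in client-lib-dir()

The following sketch shows how the directories prepared in the steps above can be referenced in the client-lib-dir() option. The NameNode address and file path are illustrative placeholders, and /opt/hadoop/lib/ stands in for the classpath that bin/hdfs classpath returned on your system:

@module mod-java
@include "scl.conf"

destination d_hdfs {
    hdfs(
        # The java-modules directory of syslog-ng PE, followed by the
        # directories returned by the bin/hdfs classpath command:
        client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:/opt/hadoop/lib/")
        hdfs-uri("hdfs://namenode.example.com:8020")
        hdfs-file("/var/hdfs/logfile.txt")
    );
};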

How syslog-ng PE interacts with HDFS

The syslog-ng PE application sends the log messages to the official HDFS client library, which forwards the data to the HDFS nodes. The following steps describe how syslog-ng PE interacts with HDFS.

  1. After syslog-ng PE is started and the first message arrives at the hdfs destination, the hdfs destination tries to connect to the HDFS NameNode. If the connection fails, syslog-ng PE repeatedly attempts to reconnect after the period set in time-reopen() expires. (The example after these steps shows the related options.)

  2. syslog-ng PE checks whether the path to the logfile exists. If a directory does not exist, syslog-ng PE automatically creates it. syslog-ng PE creates the destination file (using the filename set in the syslog-ng PE configuration file, with a UUID suffix to make it unique, for example, /usr/hadoop/logfile.txt.3dc1c59e-ab3b-4b71-9e81-93db477ed9d9) and writes the message into the file. After the file is created, syslog-ng PE writes all incoming messages into the hdfs destination.

    NOTE: When the hdfs-append-enabled() option is set to true, syslog-ng PE does not assign a new UUID suffix to an existing file, because it is then possible to reopen a closed file and append data to it.

    NOTE: You cannot configure when log messages are flushed. Hadoop performs this action automatically, depending on its configured block size and the amount of data received. There is no way for the syslog-ng PE application to influence when the messages are actually written to disk, so syslog-ng PE cannot guarantee that a message sent to HDFS has actually been written to disk. When using flow-control, syslog-ng PE acknowledges a message as written to disk when it passes the message to the HDFS client. This method is only as reliable as your HDFS environment.

  3. If the HDFS client returns an error, syslog-ng PE attempts to close the file, then opens a new file and tries to send the message again (reconnecting to HDFS and resending the message), as set in the retries() parameter. If sending the message fails retries() times, syslog-ng PE drops the message.

  4. The syslog-ng PE application closes the destination file in the following cases:

    • syslog-ng PE is reloaded

    • syslog-ng PE is restarted

    • The HDFS client returns an error.

  5. If the file is closed and you have set an archive directory, syslog-ng PE moves the file to this directory. If syslog-ng PE cannot move the file for some reason (for example, if it cannot connect to the HDFS NameNode), the file remains at its original location, and syslog-ng PE will not try to move it again.
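
Example: Reconnection, retries, appending, and archiving

The following sketch combines the options referenced in the steps above. The paths, NameNode address, and source name (s_local) are illustrative placeholders, and the sketch assumes that hdfs-archive-dir() is the option setting the archive directory mentioned in step 5:

options {
    time-reopen(10);       # wait 10 seconds between reconnection attempts (step 1)
};

destination d_hdfs {
    hdfs(
        client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:/opt/hadoop/lib/")
        hdfs-uri("hdfs://namenode.example.com:8020")
        hdfs-file("/var/hdfs/logfile.txt")
        hdfs-append-enabled(true)              # reuse existing files instead of creating new UUID-suffixed ones (step 2)
        hdfs-archive-dir("/var/hdfs/archive")  # closed files are moved here (step 5)
        retries(5)                             # drop a message after 5 failed attempts (step 3)
    );
};

log {
    source(s_local);
    destination(d_hdfs);
    flags(flow-control);   # acknowledge a message only after it is passed to the HDFS client
};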

Storing messages with MapR-FS

The syslog-ng PE application is also compatible with the MapR File System (MapR-FS). Starting with version 5.4, syslog-ng Premium Edition is MapR certified. MapR-FS provides better performance, reliability, efficiency, maintainability, and ease of use compared to the default Hadoop Distributed File System (HDFS). To use MapR-FS with syslog-ng PE, complete the following steps:

  1. Install the MapR libraries. MapR uses its own libraries instead of the official Apache HDFS libraries. The supported version is MapR 4.x.

    1. Download the libraries from the Maven Repository and Artifacts for MapR, or obtain them from an existing MapR installation.

    2. Install MapR. If you do not know how to install MapR, follow the instructions on the MapR website.

  2. In a default MapR installation, the required libraries are installed in the following path: /opt/mapr/lib.

    Enter the path where MapR was installed in the client-lib-dir() option of the hdfs destination, for example:

    client-lib-dir("/opt/mapr/lib/")

    If the libraries were downloaded from the Maven Repository, the following additional libraries are required. Note that the version numbers in the filenames can differ between Hadoop releases: commons-collections-3.2.1.jar, commons-configuration-1.6.jar, commons-lang-2.5.jar, commons-logging-1.1.3.jar, guava-13.0.1.jar, hadoop-0.20.2-dev-core.jar, hadoop-auth-2.5.1.jar, hadoop-common-2.5.1.jar, json-20080701.jar, log4j-1.2.15.jar, maprfs-4.0.2-mapr.jar, protobuf-java-2.5.0.jar, slf4j-api-1.7.5.jar, slf4j-log4j12-1.7.5.jar, zookeeper-3.4.5-mapr-1406.jar.

  3. Configure the hdfs destination in syslog-ng PE.

    Example: Storing logfiles with MapR-FS

    The following example defines an hdfs destination for MapR-FS using only the required parameters.

    @module mod-java
    @include "scl.conf"
    
    destination d_mapr {
        hdfs(
            client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:/opt/mapr/lib/")
            hdfs-uri("maprfs://10.140.32.80")
            hdfs-file("/user/log/logfile.txt")
        );
    };
    

Kerberos authentication with the syslog-ng hdfs() destination

Version 7.0.3 and later supports Kerberos authentication for the connection to your Hadoop cluster. syslog-ng PE assumes that you already have a working Hadoop and Kerberos infrastructure.

NOTE: If you configure Kerberos authentication for an hdfs() destination, it affects all hdfs() destinations. Kerberos and non-Kerberos hdfs() destinations cannot be mixed in a syslog-ng PE configuration: if one hdfs() destination uses Kerberos authentication, you must configure all other hdfs() destinations to use Kerberos authentication too.

Failing to do so results in non-Kerberos hdfs() destinations being unable to authenticate to the HDFS server.

NOTE: If you want your hdfs() destination to stop using Kerberos authentication (that is, if you remove the Kerberos-related options from the hdfs() destination configuration), make sure to restart syslog-ng PE for the changes to take effect.

Prerequisites
  • You have configured your Hadoop infrastructure to use Kerberos authentication.

  • You have a keytab file and a principal for the host running syslog-ng PE.

  • You have installed and configured the Kerberos client packages on the host running syslog-ng PE. (That is, Kerberos authentication works for the host, for example, from the command line using the kinit user@REALM -k -t <keytab_file> command.)
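
Example: Kerberos authentication with the hdfs() destination

The following example defines an hdfs() destination that authenticates to the Hadoop cluster using the keytab file and principal set in the kerberos-keytab-file() and kerberos-principal() options: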

destination d_hdfs {
    hdfs(
        client-lib-dir("/hdfs-libs/lib")
        hdfs-uri("hdfs://hdp-kerberos.syslog-ng.example:8020")
        kerberos-keytab-file("/opt/syslog-ng/etc/hdfs.headless.keytab")
        kerberos-principal("hdfs-hdpkerberos@MYREALM")
        hdfs-file("/var/hdfs/test.log")
    );
};