The following table describes the possible error messages that you may encounter while using the google_pubsub() destination.
status_code | Complete response while running with trace messages enabled | Possible reason(s) | Possible solution(s)
---|---|---|---
400 | "error": { "code": 400, "message": "The value for message_count is too large. You passed 1001 in the request, but the maximum value is 1000.", "status": "INVALID_ARGUMENT" } | There are too many messages in one batch. Google Pub/Sub allows a maximum of 1000 messages per batch. | Decrease the value of the batch-lines() option if you modified it previously.
400 | "error": { "code": 400, "message": "Request payload size exceeds the limit: 10485760 bytes.", "status": "INVALID_ARGUMENT" } | The batch size is too large. Google Pub/Sub allows a maximum of 10 MB per batch. | Decrease the batch size, for example, by lowering the batching-related options of the destination (see the sketch after this table).
403 | "error": { "code": 403, "message": "User not authorized to perform this action.", "status": "PERMISSION_DENIED" } | The credentials that syslog-ng PE uses are not authorized to publish to the given topic, for example, the account lacks the required Pub/Sub publisher permission. | Make sure that the account whose credentials you specified in the configuration has permission to publish to the topic.
404 | "error": { "code": 404, "message": "Requested project not found or user does not have access to it (project=YOUR_PROJECT). Make sure to specify the unique project identifier and not the Google Cloud Console display name.", "status": "NOT_FOUND" } | You have specified an incorrect project ID. The string YOUR_PROJECT is the project ID you provided in the configuration, and the project ID you have to correct. | The project name displayed on the Pub/Sub UI is not necessarily the same as the project ID. Make sure that the YOUR_PROJECT string in your configuration contains the unique project identifier, not the Google Cloud Console display name.
404 | "error": { "code": 404, "message": "Resource not found (resource=YOUR_TOPIC).", "status": "NOT_FOUND" } | You have specified an incorrect topic ID. The string YOUR_TOPIC is the topic ID you provided in the configuration, and the topic ID you have to correct. | Make sure that the YOUR_TOPIC string in your configuration contains the correct topic ID, and that you have sufficient permissions to access it.
429 | "error": { "code": 429, "message": "Quota exceeded for quota metric 'Regional publisher throughput, kB' and limit 'Regional publisher throughput, kB per minute per region' of service 'pubsub.googleapis.com' for consumer 'project_number:127287437417'", "status": "RESOURCE_EXHAUSTED" } | This error indicates that you have exceeded the quota for the given Google Cloud project. | Review your Google Cloud project's quota and adjust it according to Google's documentation if necessary.
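Both 400 responses above are batching issues. The following is a minimal sketch of a google_pubsub() destination that keeps batches within the Pub/Sub limits listed in the table. The project, topic, and credentials values are placeholders, and the exact set of authentication and batching options may differ in your syslog-ng PE version, so verify them against the option reference of the destination.

    destination d_google_pubsub {
        google_pubsub(
            project("YOUR_PROJECT")
            topic("YOUR_TOPIC")
            auth(service-account(key("/path/to/service-account-key.json")))
            # Stay at or below the 1000-message-per-batch limit.
            batch-lines(1000)
            # Stay at or below the 10 MB (10485760 bytes) per-batch payload limit.
            batch-bytes(10485760)
        );
    };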
NOTE: To use this destination, syslog-ng PE must run in server mode. Typically, only the central syslog-ng PE server uses this destination. For more information on the server mode, see Server mode.
Note the following limitations when using the syslog-ng PE hdfs destination:
This destination is only supported on the Linux platforms that use the Linux glibc 2.11 installer, including Red Hat ES 7 and Ubuntu 14.04 (Trusty Tahr).
Since syslog-ng PE uses the official Java HDFS client, the hdfs destination has significant memory usage (about 400MB).
NOTE: You cannot set when log messages are flushed. Hadoop performs this action automatically, depending on its configured block size, and the amount of data received. There is no way for the syslog-ng PE application to influence when the messages are actually written to disk. This means that syslog-ng PE cannot guarantee that a message sent to HDFS is actually written to disk. When using flow-control, syslog-ng PE acknowledges a message as written to disk when it passes the message to the HDFS client. This method is as reliable as your HDFS environment.
The log messages of the underlying client libraries are available in the internal() source of syslog-ng PE.
NOTE: The hdfs destination has been tested with Hortonworks Data Platform.
@module mod-java
@include "scl.conf"
hdfs(
    client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:<path-to-preinstalled-hadoop-libraries>")
    hdfs-uri("hdfs://NameNode:8020")
    hdfs-file("<path-to-logfile>")
);
The following example defines an hdfs destination using only the required parameters.
@module mod-java
@include "scl.conf"
destination d_hdfs {
    hdfs(
        client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:/opt/hadoop/libs")
        hdfs-uri("hdfs://10.140.32.80:8020")
        hdfs-file("/user/log/logfile.txt")
    );
};
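To actually route messages into this destination, reference it from a log path. A minimal sketch, assuming a source named s_local is already defined elsewhere in the configuration:

    log {
        source(s_local);
        destination(d_hdfs);
    };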
To install the software required for the hdfs destination, see Prerequisites.
For details on how the hdfs destination works, see How syslog-ng PE interacts with HDFS.
For details on using MapR-FS, see Storing messages with MapR-FS.
For the list of options, see HDFS destination options.
NOTE: If you delete all Java destinations from your configuration and reload syslog-ng, the JVM is not used anymore, but it is still running. If you want to stop the JVM, stop syslog-ng and then start it again.
The following describes how to send messages from syslog-ng PE to HDFS.
To send messages from syslog-ng PE to HDFS
If you want to use the Java-based modules of syslog-ng PE (for example, the Elasticsearch, HDFS, or Kafka destinations), download and install the Java Runtime Environment (JRE), 1.7 or 1.8.
Download the Hadoop Distributed File System (HDFS) libraries (version 2.x) from http://hadoop.apache.org/releases.html.
Extract the HDFS libraries into a target directory (for example, /opt/hadoop/lib/), then execute the classpath command of the hadoop script: bin/hdfs classpath
Use the classpath that this command returns in the syslog-ng PE configuration file, in the client-lib-dir() option of the HDFS destination.
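As a rough illustration of the last two steps, assume that the HDFS libraries were extracted to /opt/hadoop/libs and that bin/hdfs classpath returned that directory; the client-lib-dir() value would then look like the following sketch (the paths are example values only):

    destination d_hdfs {
        hdfs(
            # First entry: the syslog-ng PE Java modules directory.
            # Second entry: the classpath printed by 'bin/hdfs classpath'.
            client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:/opt/hadoop/libs")
            hdfs-uri("hdfs://NameNode:8020")
            hdfs-file("/user/log/logfile.txt")
        );
    };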
The syslog-ng PE application sends the log messages to the official HDFS client library, which forwards the data to the HDFS nodes. The following steps describe how syslog-ng PE interacts with HDFS.
After syslog-ng PE is started and the first message arrives at the hdfs destination, the hdfs destination tries to connect to the HDFS NameNode. If the connection fails, syslog-ng PE repeatedly attempts to connect again after the period set in time-reopen() expires.
syslog-ng PE checks if the path to the logfile exists. If a directory does not exist, syslog-ng PE automatically creates it. syslog-ng PE creates the destination file (using the filename set in the syslog-ng PE configuration file, with a UUID suffix to make it unique, for example, /usr/hadoop/logfile.txt.3dc1c59e-ab3b-4b71-9e81-93db477ed9d9) and writes the message into the file. After the file is created, syslog-ng PE writes all incoming messages into the hdfs destination.
NOTE: When the hdfs-append-enabled() option is set to true, syslog-ng PE does not assign a new UUID suffix to an existing file, because it is then possible to reopen a closed file and append data to it.
NOTE: You cannot set when log messages are flushed. Hadoop performs this action automatically, depending on its configured block size, and the amount of data received. There is no way for the syslog-ng PE application to influence when the messages are actually written to disk. This means that syslog-ng PE cannot guarantee that a message sent to HDFS is actually written to disk. When using flow-control, syslog-ng PE acknowledges a message as written to disk when it passes the message to the HDFS client. This method is as reliable as your HDFS environment.
If the HDFS client returns an error, syslog-ng PE attempts to close the file, then opens a new file and repeats sending the message (trying to connect to HDFS and send the message), as set in the retries() parameter. If sending the message fails for retries() times, syslog-ng PE drops the message.
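The retry behavior described above can be tuned in the configuration. A minimal sketch, assuming the example values below suit your environment; retries() is set on the destination, while time-reopen() is shown here as a global option that controls the reconnection interval:

    destination d_hdfs {
        hdfs(
            client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:/opt/hadoop/libs")
            hdfs-uri("hdfs://10.140.32.80:8020")
            hdfs-file("/user/log/logfile.txt")
            # Drop a message after this many failed attempts (example value).
            retries(3)
        );
    };

    options {
        # Wait this many seconds before reconnecting after a failure (example value).
        time-reopen(60);
    };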
The syslog-ng PE application closes the destination file in the following cases:
syslog-ng PE is reloaded
syslog-ng PE is restarted
The HDFS client returns an error.
If the file is closed and you have set an archive directory, syslog-ng PE moves the file to this directory. If syslog-ng PE cannot move the file for some reason (for example, syslog-ng PE cannot connect to the HDFS NameNode), the file remains at its original location, and syslog-ng PE will not try to move it again.
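The file handling described above (UUID suffixes, appending, and archiving) is controlled by options of the hdfs destination. A minimal sketch, assuming the archive path below exists on your cluster and that your syslog-ng PE version supports these options:

    destination d_hdfs {
        hdfs(
            client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:/opt/hadoop/libs")
            hdfs-uri("hdfs://10.140.32.80:8020")
            hdfs-file("/user/log/logfile.txt")
            # Reopen and append to an existing file instead of creating a new
            # file with a UUID suffix.
            hdfs-append-enabled(true)
            # Move closed files into this directory (example path).
            hdfs-archive-dir("/user/log/archive")
        );
    };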