syslog-ng Store Box 7.4.0 - Administration Guide


Understanding SSB cluster statuses

This section explains the possible statuses of the syslog-ng Store Box (SSB) cluster and its nodes, the DRBD data storage system, and the heartbeat interfaces (if configured). SSB displays this information on the Basic Settings > High Availability page.

Status

The Status field indicates whether the SSB nodes recognize each other properly and whether they are configured to operate in High Availability mode. The status of each individual SSB node is indicated in the Node HA status field of that node. The following statuses can occur:

  • Standalone: There is only one SSB unit running in standalone mode, or the units have not been converted to a cluster (the Node HA status of both nodes is standalone). Click Convert to Cluster to enable High Availability mode.

  • HA: The two SSB nodes are running in High Availability mode. Node HA status is HA on both nodes, and the Node HA UUID is the same on both nodes.

  • Half: High Availability mode is not configured properly: one node is in standalone mode, the other in HA mode. Connect to the node in HA mode, and click Join HA to enable High Availability mode.

  • Broken: The two SSB nodes are running in High Availability mode. Node HA status is HA on both nodes, but the Node HA UUID is different. Contact the One Identity Support Team for help.

  • Degraded: SSB was running in High Availability mode, but one of the nodes has disappeared (for example, it broke down or was removed from the network). Power on, reconnect, or repair the missing node.

  • Degraded (Disk Failure): A hard disk of the slave node is not functioning properly and must be replaced. To request a replacement hard disk and for details on replacing the hard disk, contact our Support Team.

  • Degraded Sync: Two SSB units were joined to High Availability mode, and the first-time synchronization of the disks is currently in progress. Wait for the synchronization to complete. Note that in case of large disks with lots of stored data, synchronizing the disks can take several hours.

  • Split brain: The two nodes lost the connection to each other, with the possibility of both nodes being active (master) for a time.

    Caution:

    Hazard of data loss! In this case, valuable log messages might be available on both SSB nodes, so special care must be taken to avoid data loss. For details on solving this problem, see Recovering from a split brain situation.

    Do NOT reboot or shut down the nodes.

  • Invalidated: The data on one of the nodes is considered out-of-sync and should be updated with data from the other node. This state usually occurs during the recovery of a split-brain situation when the DRBD is manually invalidated.

  • Converted: The nodes have been converted to a cluster (by clicking Convert to Cluster), or High Availability mode has been enabled (by clicking Join HA), but the node(s) have not been rebooted yet.

NOTE: If you experience problems because the nodes of the HA cluster do not find each other during system startup, navigate to Basic Settings > High Availability and select HA (Fix current). This fixes the IP address of the HA interfaces in the nodes, which helps if the HA connection between the nodes is slow.

DRBD status

The DRBD status field indicates whether the latest data (including SSB configuration, log files, and so on) is available on both SSB nodes. The master node (this node) must always be in consistent status to prevent data loss. Inconsistent status means that the data on the node is not up-to-date, and should be synchronized from the node having the latest data.

The DRBD status field also indicates the connection between the disk system of the SSB nodes. The following statuses are possible:

  • Connected: Both nodes are functioning properly.

  • Connected (Disk Failure): A hard disk of the slave node is not functioning properly and must be replaced. To request a replacement hard disk and for details on replacing the hard disk, contact our Support Team.

  • Invalidated: The data on one of the nodes is considered out-of-sync and should be updated with data from the other node. This state usually occurs during the recovery of a split-brain situation when the DRBD is manually invalidated.

  • Sync source or Sync target: One node (Sync target) is downloading data from the other node (Sync source).

    When synchronizing data, the progress and the remaining time are displayed in the System monitor. (To check the synchronization state from the shell as well, see the example after this list.)

    Caution:

    When the two nodes are synchronizing data, do not reboot or shut down the master node. If you absolutely must shut down the master node during synchronization, shut down the slave node first, and then the master node.

  • Split brain: The two nodes lost the connection to each other, with the possibility of both nodes being active (master) for a time.

    Caution:

    Hazard of data loss! In this case, valuable log messages might be available on both SSB nodes, so special care must be taken to avoid data loss. For details on solving this problem, see Recovering from a split brain situation.

  • WFConnection: One node is waiting for the other node. The connection between the nodes has not been established yet.
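
If you have shell access to a node (select Shells > Boot Shell in the console menu), you can also query the DRBD state directly. This is a minimal sketch, not an official procedure: the drbdadm utility and the r0 resource name appear later in this guide (see HA state recovery), but the exact commands and output depend on the DRBD version your SSB release ships.

    # Show this node's role (Primary or Secondary) for the DRBD resource:
    drbdadm role r0

    # Show the connection state and, during synchronization, the sync
    # progress. On DRBD 8 this is available in /proc/drbd; newer DRBD
    # versions report it via 'drbdadm status' instead.
    cat /proc/drbd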

Redundant Heartbeat status

If a redundant heartbeat interface is configured, its status is also displayed in the Redundant Heartbeat status field, and also in the HA > Redundant field of the System monitor. For a description of redundant heartbeat interfaces, see Redundant heartbeat interfaces.

The possible status messages are explained below.

  • NOT USED: There are no redundant heartbeat interfaces configured.

  • OK: Normal operation, every redundant heartbeat interface is working properly.

  • DEGRADED-WORKING: Two or more redundant heartbeat interfaces are configured, and at least one of them is functioning properly. This status is also displayed when a new redundant heartbeat interface has been configured, but the nodes of the SSB cluster have not been restarted yet.

  • DEGRADED: The connection between the redundant heartbeat interfaces has been lost. Investigate the problem to restore the connection.

  • INVALID: An error occurred with the redundant heartbeat interfaces. Contact the One Identity Support Team for help.

Recovering SSB if both nodes broke down

It can happen that both nodes break down simultaneously (for example because of a power failure), or the slave node breaks down before the original master node recovers. The following describes how to properly recover syslog-ng Store Box (SSB).

NOTE: When both nodes of a cluster boot up in parallel, the node with the 1.2.4.1 HA IP address will become the master node.

To properly recover SSB

  1. Power off both nodes by pressing and releasing the power button.

    Caution:

    Hazard of data loss! If SSB does not shut down, press and hold the power button for approximately 4 seconds. However, consider that this method terminates connections passing through SSB and might result in data loss.

  2. Power on the node that was the master before SSB broke down. Check the system logs to find out which node was the master before the incident: when a node boots as master, or when a takeover occurs, SSB sends a log message identifying the master node (see the example after this procedure).

    TIP: Configure remote logging to send the log messages of SSB to a remote server where the messages are available even if the logs stored on SSB become inaccessible. For details on configuring remote logging, see SNMP and email alerts.

  3. Wait until this node finishes the boot process.

  4. Power on the other node.
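
If you forward the logs of SSB to a remote server (see the TIP in step 2), a search like the following can help locate the relevant messages. This is only a hedged sketch: the log file path and the search pattern are assumptions, and the exact wording of the boot and takeover messages may differ between SSB versions.

    # On the remote log server: look for HA boot/takeover messages from
    # the SSB nodes (hypothetical path and pattern; adjust to your setup).
    grep -iE 'takeover|master' /var/log/ssb/*.log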

Recovering from a split brain situation

A split brain situation is caused by a temporary failure of the network link between the cluster nodes, resulting in both nodes switching to the active (master) role while disconnected. This might cause new data (for example, log messages) to be created on both nodes without being replicated to the other node. Thus, it is likely in this situation that two diverging sets of data have been created, which cannot be easily merged.

Caution:

Hazard of data loss In a split brain situation, valuable log messages might be available on both syslog-ng Store Box (SSB) nodes, so special care must be taken to avoid data loss.

The nodes of the SSB cluster automatically recognize the split brain situation once the connection between the nodes is re-established, and do not perform any data synchronization to prevent data loss. When a split brain situation is detected, it is visible on the SSB system monitor, in the system logs (Split-Brain detected, dropping connection!), on the Basic Settings > High Availability page, and SSB sends an alert as well.

NOTE: After the connection between the nodes has been restored, the split brain situation will persist. The core firmware will be active on one of the nodes, while it will not start on the other node.

Once the network connection between the nodes has been re-established, one of the nodes will become the master node, while the other one will be the slave node. Find out which node is the master node. There are two ways to identify the master node:

  • Locally: Log in to each SSB locally, and wait for the console menu to come up. The console menu only appears on the master node.

  • Remotely: Try connecting to each SSB using SSH. Only the master node can be reached directly via SSH; the slave node cannot be accessed externally, only via SSH from the master (see the example below).
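
    For example, assuming placeholder addresses for the two nodes (substitute the real IP addresses of your SSB units; the procedures in this guide log in as root):

        # Run from your workstation. Only the master node accepts the
        # connection; the attempt on the slave node will fail.
        ssh root@<ssb-node-1-ip>
        ssh root@<ssb-node-2-ip>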

To recover an SSB cluster from a split brain situation, complete the procedures described in Data recovery and HA state recovery.

Caution:

Do NOT shut down the nodes.

Data recovery

In the procedure described here, data will be saved from the host currently acting as the slave host. This is required because data on this host will later be overwritten by the data available on the current master.

NOTE: During data recovery, there will be no service provided by SSB.

To recover data from a split brain situation

  1. Log in to the master node as root locally (or remotely using SSH) to access the Console menu. If no menu appears after login, then this is the slave node. Try the other node.

  2. Select Shells > Boot Shell.

  3. Enter /usr/share/heartbeat/hb_standby. This will change the current slave node to master and the current master node to slave (HA failover).

  4. Exit the console.

  5. Wait a few seconds for the HA failover to complete.

  6. Log in on the other host. If no Console menu appears, the HA failover has not completed yet. Wait a few seconds and try logging in again.

  7. Select Shells > Core Shell.

  8. Issue the systemctl stop syslog-ng.service command to disable all traffic going through SSB.

  9. Save the files from /opt/ssb/var/logspace/ that you want to keep. Use scp or rsync to copy the data to your remote host (see the example after this procedure).

    TIP: To find the files modified in the last n*24 hours, use find . -mtime -n.

    To find the files modified in the last n minutes, use find . -mmin -n.

  10. Exit the console.

  11. Log in again, and select Shells > Boot Shell.

  12. To change the current slave node to master and the current master node to slave (HA failover), enter /usr/share/heartbeat/hb_standby.

  13. Exit the console.

  14. Wait a few minutes to let the failover happen, so the node you were using will become the slave node and the other node will become the master node.

    The nodes are still in a split-brain state but now you have all the data backed up from the slave node, and you can synchronize the data from the master node to the slave node, which will turn the HA state from "Split-brain" to "HA". For details on how to do that, see HA state recovery.
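
The following is a minimal sketch of the copy in step 9, assuming a reachable backup host named backuphost with a writable /backup/ssb/ directory (both hypothetical; substitute your own host and path):

    # Run from the Core Shell of the node that currently holds the data
    # (the former slave, which became master after the failover in step 3).
    # -a preserves file attributes, -z compresses data in transit.
    rsync -az /opt/ssb/var/logspace/ root@backuphost:/backup/ssb/

    # Alternatively, copy only selected files with scp, for example the
    # recently modified files found with the find commands in the TIP:
    scp -r /opt/ssb/var/logspace/<logspace-name> root@backuphost:/backup/ssb/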

HA state recovery

In the procedure described here, the "Split-brain" state will be turned to the "HA" state.

Caution:

Keep in mind that the data on the current master node will be copied to the current slave node and data that is available only on the slave node will be lost (as that data will be overwritten).

(Optional) To swap the nodes

NOTE: If you completed the procedure described in Data recovery, you do not have to swap the nodes. You can proceed to the steps about data synchronization.

If you want to swap the two nodes to make the master node the slave node and the slave node the master node, perform the following steps.

  1. Log in to the master node as root locally (or remotely using SSH) to access the Console menu. If no menu appears after login, then this is the slave node. Try the other node.

  2. Select Shells > Boot Shell.

  3. Enter /usr/share/heartbeat/hb_standby. This will output:

    Going standby [all]

  4. Exit the console.

  5. Wait a few minutes to let the failover happen, so the node you were using will become the slave node and the other node will be the master node.

To initialize data synchronization

  1. Log in to the slave node as root locally (or remotely using SSH) to access the Console menu. If the console menu appears, then this is the master node. Try logging in to the other node.

    NOTE: Once you log in, you will be in the boot shell, because only the boot shell is available on the slave node.

  2. Invalidate the DRBD with the following commands:

    # On the slave node: demote this node to DRBD secondary for the
    # r0 resource.
    drbdadm secondary r0

    # Reconnect, discarding this node's diverging data, so that it will
    # be synchronized from the other node.
    drbdadm connect --discard-my-data r0

    # Log in to the other node (the master).
    ssh ssb-other

    # On the master node: reconnect the r0 resource.
    drbdadm connect r0

  3. Reboot the slave node.

    Following this step, the master will be in Standalone state, while the slave's DRBD status will be WFConnection.

    NOTE: The console will display an Inconsistent (10) message. This is normal behavior, and you can ignore the message.

  4. Reboot the master node. The SSB cluster will now be functional, accepting traffic as before.

  5. After both nodes reboot, the cluster should be in Degraded Sync state, the master being SyncSource and the slave being SyncTarget. The master node should start synchronizing its data to the slave node. Depending on the amount of data, this can take a long time. To adjust the speed of synchronization, see Adjusting the synchronization speed.

  6. Enable all incoming traffic on the master node. Navigate to Basic Settings > System > Service control > Syslog traffic, indexing & search: and click Enable.

    If the web interface is not accessible or unstable, complete the following steps on the active SSB:

    1. Log in to SSB as root locally (or remotely using SSH) to access the console menu.

    2. Select Shells > Core Shell, and run the systemctl start syslog-ng.service command.

    3. Check the system date and time with the date command. If it is incorrect (for example, it displays 2000 January), replace the system battery. For details, see the hardware manual of the appliance.
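
    After completing these steps, you can confirm from the Core Shell that syslog-ng is running. A quick check (the service name matches the commands used above):

        systemctl status syslog-ng.service    # should report: active (running)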

Replacing a node in an SSB HA cluster

This section describes how to replace a unit in a syslog-ng Store Box (SSB) cluster with a new appliance.

To replace a unit in an SSB cluster with a new appliance

  1. Verify the HA status on the working node. Select Basic Settings > High Availability. If one of the nodes has broken down or is missing, the Status field displays DEGRADED.

  2. Take note of the IP addresses of the Heartbeat and the Next hop monitoring interfaces.

  3. Perform a full system backup. Before replacing the node, create a complete system backup of the working node. For details, see Data and configuration backups.

  4. Check which firmware version is running on the working node. Select Basic Settings > System > Version details and write down the exact version numbers.

  5. Log in to your support portal account and download the ISO installation media for the same SSB version that is running on your working node.

  6. Without connecting the replacement unit to the network, install the replacement unit from the ISO file. Use the IPMI interface if needed.

  7. When the installation is finished, connect the two SSB units with an Ethernet cable via the Ethernet connectors labeled as 4 (or HA).

  8. Reboot the replacement unit and wait until it finishes booting.

  9. Log in to the working node and verify the HA state. Select Basic Settings > High Availability. The Status field should display HALF.

  10. Reconfigure the IP addresses of the Heartbeat and the Next hop monitoring interfaces, and save your changes.

  11. Click Other node > Join HA.

  12. Click Other node > Reboot.

  13. The replacement unit will reboot and start synchronizing data from the working node. The Basic Settings > High Availability > Status field will display DEGRADED SYNC until the synchronization finishes. Depending on the size of the hard disks and the amount of data stored, this can take several hours.

  14. After the synchronization is finished, connect the other Ethernet cables to their respective interfaces (external to 1 or EXT, management to 2 or MGMT) as needed for your environment.

    Expected result:

    A node of the SSB cluster is replaced with a new appliance.
