syslog-ng Store Box 6.10.0 - Administration Guide


Recovering from a split brain situation

A split brain situation is caused by a temporary failure of the network link between the cluster nodes: while disconnected, both nodes switch to the active (master) role. As a result, new data (for example, log messages) might be created on both nodes without being replicated to the other node, leaving two diverging data sets that cannot be easily merged.

Caution:

Hazard of data loss

In a split brain situation, valuable log messages might be available on both syslog-ng Store Box (SSB) nodes, so special care must be taken to avoid data loss.

The nodes of the SSB cluster automatically recognize the split brain situation once the connection between the nodes is re-established, and do not perform any data synchronization to prevent data loss. When a split brain situation is detected, it is indicated on the SSB system monitor, in the system logs (Split-Brain detected, dropping connection!), and on the Basic Settings > High Availability page; SSB also sends an alert.

NOTE: The split brain situation persists even after the connection between the nodes has been restored: the core firmware will be active on one of the nodes, and will not start on the other node.

Once the network connection between the nodes has been re-established, one of the nodes becomes the master node and the other one becomes the slave node. First, find out which node is the master. There are two ways to identify it:

  • Locally: Log in to each SSB locally, and wait for the console menu to come up. The console menu only appears on the master node.

  • Remotely: Try connecting to each SSB using SSH. Only the master node accepts direct SSH connections; the slave node cannot be accessed externally, only via SSH from the master.
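
If you have shell access, the status of the DRBD storage subsystem offers a third way to check the state of the nodes. The following is a minimal sketch, assuming the appliance exposes the standard DRBD 8 status interface and that the replicated resource is named r0 (the name also used by the commands later in this chapter):

    # Print the DRBD status. In a split brain situation, both nodes
    # typically report StandAlone as the connection state.
    cat /proc/drbd

    # Query the connection state and the role (Primary/Secondary) of r0:
    drbdadm cstate r0
    drbdadm role r0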

To recover an SSB cluster from a split brain situation, complete the procedures described in Data recovery and HA state recovery.

Caution:

Do NOT shut down the nodes.

Data recovery

In the procedure described here, data will be saved from the host currently acting as the slave host. This is required because data on this host will later be overwritten by the data available on the current master.

NOTE: During data recovery, SSB provides no service.

To recover data after a split brain situation

  1. Log in to the master node as root, locally or remotely using SSH, to access the console menu. If no menu appears after login, this is the slave node: try the other node.

  2. Select Shells > Boot Shell.

  3. Enter /usr/share/heartbeat/hb_standby. This will change the current slave node to master and the current master node to slave (HA failover).

  4. Exit the console.

  5. Wait a few seconds for the HA failover to complete.

  6. Log in on the other host. If the console menu does not appear, the HA failover has not completed yet: wait a few seconds and try logging in again.

  7. Select Shells > Core Shell.

  8. Issue the systemctl stop syslog-ng.service command to disable all traffic going through SSB.

  9. Save the files from /opt/ssb/var/logspace/ that you want to keep. Use scp or rsync to copy the data to your remote host (see the sketch after this procedure).

    TIP: To find the files modified in the last n*24 hours, use find . -mtime -n.

    To find the files modified in the last n minutes, use find . -mmin -n.

  10. Exit the console.

  11. Log in again, and select Shells > Boot Shell.

  12. Enter /usr/share/heartbeat/hb_standby. This will change the current slave node to master and the current master node to slave (HA failover).

  13. Exit the console.

  14. Wait a few minutes for the failover to complete, so that the node you were using becomes the slave node and the other node becomes the master node.

    The nodes are still in a split-brain state but now you have all the data backed up from the slave node, and you can synchronize the data from the master node to the slave node, which will turn the HA state from "Split-brain" to "HA". For details on how to do that, see HA state recovery.
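
The following is a minimal sketch for saving the files in step 9, assuming a reachable remote host called backuphost and a destination directory /backup/ssb-slave/ (both placeholder names):

    # List the logspace files modified in the last 2*24 hours:
    find /opt/ssb/var/logspace/ -type f -mtime -2

    # Copy the logspace directory to the remote host with rsync:
    rsync -av /opt/ssb/var/logspace/ root@backuphost:/backup/ssb-slave/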

HA state recovery

In the procedure described here, the "Split-brain" state is turned into the "HA" state.

Caution:

Keep in mind that the data on the current master node will be copied to the current slave node and data that is available only on the slave node will be lost (as that data will be overwritten).

To swap the nodes (optional)

NOTE: If you completed the procedure described in Data recovery, you do not have to swap the nodes. You can proceed to the steps about data synchronization.

If you want to swap the roles of the two nodes, so that the current master node becomes the slave node and the current slave node becomes the master node, perform the following steps.

  1. Log in to the master node as root locally (or remotely using SSH) to access the Console menu. If no menu is showing up after login, then this is the slave node. Try the other node.

  2. Select Shells > Boot Shell.

  3. Enter /usr/share/heartbeat/hb_standby. This will output:

    Going standby [all]

  4. Exit the console.

  5. Wait a few minutes for the failover to complete, so that the node you were using becomes the slave node and the other node becomes the master node.

To initialize data synchronization

  1. Log in to the slave node as root, locally or remotely using SSH. If the console menu appears, this is the master node: try logging in to the other node.

    Note that you are now in the boot shell: on the slave node, only the boot shell is available.

  2. Invalidate the DRBD data on the slave node, then reconnect the nodes. Issue the following commands (the last command runs on the master node, after connecting to it with ssh ssb-other):

    drbdadm secondary r0

    drbdadm connect --discard-my-data r0

    ssh ssb-other

    drbdadm connect r0

  3. Reboot the slave node.

    Following this step, the master will be in Standalone state, while the slave's DRBD status will be WFConnection.

    The console will display an Inconsistent (10) message. This is normal behavior, and it is safe to ignore this message.

  4. Reboot the master node. The SSB cluster will now be functional, accepting traffic as before.

  5. After both nodes reboot, the cluster should be in Degraded Sync state, with the master being SyncSource and the slave being SyncTarget. The master node starts synchronizing its data to the slave node. Depending on the amount of data, this can take a long time; to follow the progress, see the sketch after this procedure. To adjust the speed of synchronization, see Adjusting the synchronization speed.

  6. Enable all incoming traffic on the master node. Navigate to Basic Settings > System > Service control > Syslog traffic, indexing & search: and click Enable.

    If the web interface is not accessible or unstable, complete the following steps on the active SSB:

    1. Log in to SSB as root locally (or remotely using SSH) to access the console menu.

    2. Select Shells > Core Shell, and issue the systemctl start syslog-ng.service command.

    3. Issue the date command and check the system date and time. If it is incorrect (for example, it displays 2000 January), replace the system battery. For details, see the hardware manual of the appliance.
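
As referenced in step 5, the progress of the synchronization can also be followed from the shell. A minimal sketch, assuming the standard DRBD 8 status interface and the watch utility are available on the node:

    # Re-print the DRBD status every 10 seconds. During synchronization,
    # the master reports SyncSource and the slave reports SyncTarget,
    # together with a completion percentage.
    watch -n 10 cat /proc/drbd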

Replacing a node in an SSB HA cluster

This section describes how to replace a unit in a syslog-ng Store Box (SSB) cluster with a new appliance.

To replace a unit in an SSB cluster with a new appliance

  1. Verify the HA status on the working node. Select Basic Settings > High Availability. If one of the nodes has broken down or is missing, the Status field displays DEGRADED.

  2. Note down the IP addresses of the Heartbeat and the Next hop monitoring interfaces.

  3. Perform a full system backup. Before replacing the node, create a complete system backup of the working node. For details, see Data and configuration backups.

  4. Check which firmware version is running on the working node. Select Basic Settings > System > Version details and write down the exact version numbers.

  5. Log in to your support portal account and download the CD ISO for the same SSB version that is running on your working node.

  6. Without connecting the replacement unit to the network, install the replacement unit from the ISO file. Use the IPMI interface if needed.

  7. When the installation is finished, connect the two SSB units with an Ethernet cable via the Ethernet connectors labeled as 4 (or HA).

  8. Reboot the replacement unit and wait until it finishes booting.

  9. Log in to the working node and verify the HA state. Select Basic Settings > High Availability. The Status field should display HALF.

  10. Reconfigure the IP addresses of the Heartbeat and the Next hop monitoring interfaces, and commit your changes.

  11. Click Other node > Join HA.

  12. Click Other node > Reboot.

  13. The replacement unit will reboot and start synchronizing data from the working node. The Basic Settings > High Availability > Status field will display DEGRADED SYNC until the synchronization finishes. Depending on the size of the hard disks and the amount of data stored, this can take several hours.

  14. After the synchronization is finished, connect the other Ethernet cables to their respective interfaces (external to 1 or EXT, management to 2 or MGMT) as needed for your environment.

    Expected result:

    A node of the SSB cluster is replaced with a new appliance.

Resolving an IP conflict between cluster nodes

The IP addresses of the HA interfaces connecting the two nodes are detected automatically during boot. When a node comes online, it attempts to connect to the IP address 1.2.4.1. If no other node responds before the timeout, the node sets the IP address of its HA interface to 1.2.4.1; otherwise (if there is a responding node on 1.2.4.1), it sets its own HA interface to 1.2.4.2.

A replacement node does not yet know the HA configuration (or any other HA settings), and attempts to negotiate it automatically in the same way. If the network is, for any reason, too slow to connect the nodes in time, the replacement node boots with the IP address 1.2.4.1, which causes an IP conflict if the other node has already claimed that address. In this case, the replacement node cannot join the HA cluster.
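
The following is a rough sketch of this negotiation logic, only for illustration (it is not SSB's actual implementation, and the interface name eth_ha is a placeholder):

    # If another node already answers on 1.2.4.1, take 1.2.4.2,
    # otherwise claim 1.2.4.1 for the HA interface.
    if ping -c 1 -W 5 1.2.4.1 > /dev/null 2>&1; then
        ip addr add 1.2.4.2/24 dev eth_ha
    else
        ip addr add 1.2.4.1/24 dev eth_ha
    fi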

To manually assign the correct IP address to the HA interface of a node, perform the following steps:

  1. Log in to the node using the IPMI interface or the physical console.

    Configuration changes have not been synchronized to the new (replacement) node, as it could not join the HA cluster. Use the default password of the root user of syslog-ng Store Box (SSB). For details, see "Installing the SSB hardware" in the Installation Guide.

  2. From the console menu, choose option 6, HA address.

    Figure 242: The console menu

  3. Choose the IP address of the node.

    Figure 243: The console menu

  4. Reboot the node.

Restoring SSB configuration and data

The following procedure describes how to restore the configuration and data of syslog-ng Store Box (SSB) from a complete backup, for example, after a hardware replacement.

NOTE: It is possible to receive indexer errors following data restore. Data that was still in the memory of SSB during backup might have been indexed, but as it was not on the hard drive, it was not copied to the remote server.

To make sure that all data is backed up (for example, before an upgrade), shut down syslog-ng before initiating the backup process.
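
For example, you can stop and later restart syslog-ng from the console, using the same commands as in the data recovery procedure. A minimal sketch, to be run in Shells > Core Shell:

    # Stop syslog-ng before initiating the backup:
    systemctl stop syslog-ng.service

    # Start it again once the backup has finished:
    systemctl start syslog-ng.service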

Caution:

Statistics about syslog-ng and logspace sizes are not backed up. As a result, following a data restore, the Basic Settings > Dashboard page will not show any syslog-ng and logspace statistics about the period before the backup.

To restore the configuration and data of SSB from a complete backup

  1. Connect to your backup server and locate the directory where SSB saves the backups. The configuration backups are stored in the config subdirectory, in timestamped files. Find the latest configuration file (the configuration files are called SSB-timestamp.config; see the sketch after this procedure for a quick way to find it).

  2. Connect to SSB.

    If you have not yet completed the Welcome Wizard, click Browse, select the configuration file, and click Import.

    If you have already completed the Welcome Wizard, navigate to Basic Settings > System > Import configuration > Browse, select the configuration file, and click Import.

  3. Navigate to Policies > Backup & Archive/Cleanup. Verify that the settings of the target servers and the backup protocols are correct.

  4. Navigate to Basic Settings > Management > System backup, click Restore now and wait for the process to finish. Depending on the amount of data stored in the backup, and the speed of the connection to the backup server, this may take a long time.

  5. Navigate to Log > Logspaces, and click Restore ALL. Depending on the amount of data stored in the backup, and the speed of the connection to the backup server, this may take a long time.
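
As referenced in step 1, a quick way to locate the newest configuration backup on the backup server is a sketch like the following, assuming the backups are stored under /backup/ssb/ (a placeholder path):

    # Print the most recently modified configuration backup file:
    ls -t /backup/ssb/config/SSB-*.config | head -n 1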
