You can configure your SPS cluster in the following ways:
Configuration synchronization without a central search.
This option allows you to make your configuration changes on the Central Management node. Managed Host nodes periodically fetch these settings and merge them into their own configuration (configuration synchronization). Central search is not configured, so you search for sessions on each node separately, including the Central Management node. A conceptual sketch of this fetch-and-merge cycle follows this list.
For more information, see Configuration synchronization without a central search.
Central search with configuration synchronization.
IMPORTANT: One Identity does not recommend having a central search configuration without configuration synchronization.
It allows you to use a Central Management node with a Search Master role to view session data recorded by the minion nodes of your cluster, as well as manage all the nodes in the cluster from one central location.
For more information, see Central search with configuration synchronization.
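The following sketch illustrates the fetch-and-merge cycle that configuration synchronization performs. It is not SPS code: the endpoint URL, the interval, and the merge rule are hypothetical placeholders, written in Python only to make the concept concrete.

```python
# Illustrative sketch of configuration synchronization (not SPS code).
# The endpoint URL, interval, and merge rule below are hypothetical.
import time
import requests

CENTRAL_MANAGEMENT_URL = "https://central.example.com/cluster-config"  # hypothetical endpoint
FETCH_INTERVAL_SECONDS = 600  # "periodically": example interval only

# Settings that stay specific to this Managed Host node.
local_config = {"hostname": "managed-host-1", "connections": {}}

while True:
    # Fetch the shared settings published by the Central Management node.
    shared_settings = requests.get(CENTRAL_MANAGEMENT_URL, timeout=30).json()
    # Merge them into the local configuration: shared, cluster-wide sections
    # overwrite the corresponding local sections, node-specific ones remain.
    local_config.update(shared_settings)
    time.sleep(FETCH_INTERVAL_SECONDS)
```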
The following figure shows a cluster configured for central search with configuration synchronization.
Figure 100: Central search with configuration synchronization
The figure above is an example of an SPS cluster configured as follows:
The Central Management node with a Search Master role and the connected Managed Host nodes with Search Minion roles require different configuration settings as described in the table below:
| Role | Use and configuration settings |
|---|---|
| Central Management node, Search Master | |
| Managed Host node, Search Minion | |
For more information on each role, see Cluster roles.
High availability (HA) clusters can stretch across long distances, such as nodes across buildings, cities or even continents. The goal of HA clusters is to support enterprise business continuity by providing location-independent failover and recovery.
To set up a high availability cluster, connect two One Identity Safeguard for Privileged Sessions (SPS) units with identical configurations in high availability mode. This creates a primary-secondary (that is, active-backup) node pair. Should the primary node stop functioning, the secondary node takes over the IP addresses of the primary node's interfaces. Gratuitous ARP requests are sent to inform hosts on the local network that the MAC addresses behind these IP addresses have changed.
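The gratuitous ARP announcement mentioned above is a standard network mechanism rather than anything specific to SPS. The sketch below, using the scapy library, shows what such an announcement looks like; the IP and MAC addresses are placeholder values for illustration only.

```python
# Sketch of a gratuitous ARP announcement (illustration only, not SPS code).
from scapy.all import ARP, Ether, sendp

TAKEN_OVER_IP = "192.0.2.10"    # example: IP address inherited from the failed primary node
NEW_MAC = "00:11:22:33:44:55"   # example: MAC address of the secondary node's interface

# A gratuitous ARP is an ARP reply (op=2) announcing the sender's own
# IP-to-MAC mapping; hosts on the segment update their ARP caches with it.
announcement = Ether(dst="ff:ff:ff:ff:ff:ff", src=NEW_MAC) / ARP(
    op=2,
    psrc=TAKEN_OVER_IP, hwsrc=NEW_MAC,
    pdst=TAKEN_OVER_IP, hwdst="ff:ff:ff:ff:ff:ff",
)
sendp(announcement, iface="eth0")  # broadcast on the local segment (requires root)
```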
The primary node shares all data with the secondary node using the HA network interface (labeled as 4 or HA on the SPS appliance). The disks of the primary and the secondary node must be synchronized for the HA support to operate correctly. Interrupting the connection between running nodes (unplugging the Ethernet cables, rebooting a switch or a router between the nodes, or disabling the HA interface) disables data synchronization and forces the secondary node to become active. This might result in data loss. You can find instructions to resolve such problems and recover an SPS cluster in Troubleshooting a One Identity Safeguard for Privileged Sessions (SPS) cluster.
NOTE:
HA functionality was designed for physical SPS units. If SPS is used in a virtual environment, use the fallback functionalities provided by the virtualization service instead.
The Basic Settings > High Availability page provides information about the status of the HA cluster and its nodes.
Figure 101: Basic Settings > High Availability — Managing a high availability cluster
The following information is available about the cluster:
Status: Indicates whether the SPS nodes recognize each other properly and whether they are configured to operate in high availability mode. For details, see Understanding One Identity Safeguard for Privileged Sessions (SPS) cluster statuses.
Current master: The MAC address of the high availability interface (4 or HA) of the primary node. This address is also printed on a label on the top cover of the SPS unit.
HA UUID: A unique identifier of the HA cluster. Only available in High Availability mode.
DRBD status: The status of data synchronization between the nodes. For details, see Understanding One Identity Safeguard for Privileged Sessions (SPS) cluster statuses.
DRBD sync rate limit: The maximum allowed synchronization speed between the primary and the secondary node. For details, see Adjusting the synchronization speed.
The active (that is, primary) SPS node is labeled as This node. This unit inspects the SSH traffic and provides the web interface. The SPS unit labeled as Other node is the secondary node that is activated if the primary node becomes unavailable.
The following information is available about each node:
Node ID: The MAC address of the HA interface of the node. This address is also printed on a label on the top cover of the SPS unit.
For SPS clusters, the IDs of both nodes are included in the internal log messages of SPS.
Node HA state: Indicates whether the SPS nodes recognize each other properly and whether they are configured to operate in high availability mode. For details, see Understanding One Identity Safeguard for Privileged Sessions (SPS) cluster statuses.
Node HA UUID: A unique identifier of the cluster. Only available in High Availability mode.
DRBD status: The status of data synchronization between the nodes. For details, see Understanding One Identity Safeguard for Privileged Sessions (SPS) cluster statuses.
RAID status: The status of the RAID device of the node. If it is not Optimal, there is a problem with the RAID device. For details, see Understanding One Identity Safeguard for Privileged Sessions (SPS) RAID status.
Boot firmware version: Version number of the boot firmware.
The boot firmware boots up SPS, provides high availability support, and starts the core firmware. The core firmware, in turn, handles everything else: provides the web interface, manages the connections, and so on.
HA link speed: The maximum allowed speed between the primary node and the secondary node. The HA link's speed must exceed the DRBD sync rate limit, otherwise the web UI might become unresponsive and data loss can occur.
Interfaces for Heartbeat: Virtual interface used only to detect that the other node is still available. This interface is not used to synchronize data between the nodes (only heartbeat messages are transferred).
You can find more information about configuring redundant heartbeat interfaces in Redundant heartbeat interfaces.
Next hop monitoring: IP addresses (usually next hop routers) to continuously monitor both the primary node and the secondary node by using ICMP echo (ping) messages. If any of the monitored addresses becomes unreachable from the primary node while being reachable from the secondary node (in other words, more monitored addresses are accessible from the secondary node), then it is assumed that the primary node is unreachable and a forced takeover occurs – even if the primary node is otherwise functional. For details, see Next-hop router monitoring.
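The decision logic behind next hop monitoring can be summarized with a short sketch. The following Python code is a hypothetical simplification, not SPS code: it only shows the idea of counting the monitored addresses each node can reach and forcing a takeover when the secondary node reaches more of them.

```python
# Hypothetical simplification of next hop monitoring (not SPS code).
import subprocess

MONITORED_ADDRESSES = ["192.0.2.1", "192.0.2.2"]  # example next-hop routers

def reachable(address: str, timeout_s: int = 1) -> bool:
    """Send one ICMP echo request and report whether a reply arrived."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), address],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def should_force_takeover(reachable_from_primary: int, reachable_from_secondary: int) -> bool:
    """Takeover is forced when the secondary node reaches more monitored addresses."""
    return reachable_from_secondary > reachable_from_primary

# Each node counts how many of the monitored addresses it can reach:
local_count = sum(reachable(address) for address in MONITORED_ADDRESSES)
```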
This section is about the available configuration and management options for HA clusters.
For detailed instructions about setting up a HA cluster, see "Installing two SPS units in HA mode" in the Installation Guide.
You can change the limit of the DRBD synchronization rate. Note that this does not change the speed of normal data replication. For details, see Adjusting the synchronization speed.
You can configure virtual interfaces for each HA node to monitor the availability of the other node. For details, see Redundant heartbeat interfaces.
You can provide IP addresses (usually next hop routers) to continuously monitor both the primary node and the secondary node by using ICMP echo (ping) messages. If any of the monitored addresses becomes unreachable from the primary node while being reachable from the secondary node (in other words, more monitored addresses are accessible from the secondary node), then it is assumed that the primary node is unreachable and a forced takeover occurs – even if the primary node is otherwise functional. For details, see Next-hop router monitoring.
To reboot both nodes, click Reboot Cluster. To prevent takeover, a token is placed on the secondary node. While this token persists, the secondary node halts its boot process to make sure that the primary node boots first. Following reboot, the primary node removes this token from the secondary node, allowing it to continue with the boot process.
If the token still persists on the secondary node following reboot, the Unblock Slave Node button is displayed. Clicking the button removes the token, and reboots the secondary node.
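The token mechanism behaves like a simple boot-ordering lock. The following sketch is a hypothetical simplification, not SPS code, and the file path is invented: the secondary node waits while the token exists, and the primary node removes it once it has booted.

```python
# Hypothetical simplification of the boot-ordering token (not SPS code).
import os
import time

TOKEN_PATH = "/var/lib/ha/boot-token"  # invented path for illustration

def secondary_node_boot() -> None:
    # The secondary node halts its boot process while the token persists,
    # which guarantees that the primary node boots first.
    while os.path.exists(TOKEN_PATH):
        time.sleep(5)
    print("token removed, secondary node continues booting")

def primary_node_after_boot() -> None:
    # After booting, the primary node removes the token (in reality it does
    # this on the secondary node over the HA link), releasing the secondary.
    if os.path.exists(TOKEN_PATH):
        os.remove(TOKEN_PATH)
```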
The Reboot option reboots the selected node.
When rebooting the nodes of a cluster, reboot the other node (that is, the secondary node) first to avoid unnecessary takeovers.
The Shutdown option forces the selected node to shut down.
When shutting down the nodes of a cluster, shut down the other node (that is, the secondary node) first. When powering on the nodes, start the primary node first to avoid unnecessary takeovers.
To activate the other node (that is, the secondary node) and disable the currently active node, click Activate slave.
Activating the secondary node terminates all connections of One Identity Safeguard for Privileged Sessions (SPS) and might result in data loss. The secondary node becomes active after about 60 seconds, during which the protected servers cannot be accessed.
One Identity Safeguard for Privileged Sessions (SPS) synchronizes the content of the hard disk of the primary node (previously also referred to as master node) and the secondary node (previously also referred to as slave node) in the following cases:
when you configure two SPS units to operate in High Availability mode (converting a single node to a high availability cluster),
when you replace a node in a cluster, or
when recovering from a split-brain situation.
Normal data replication (copying incoming data, for example, audit trails, from the primary node to the secondary node) is not synchronization.
Since this synchronization can use significant system resources, the maximum speed of the synchronization is limited to 10 Mbps by default. However, this means that synchronizing a large amount of data can take a very long time, so it is useful to increase the synchronization speed in certain situations.
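As a rough illustration of why the default limit matters, the following back-of-the-envelope calculation estimates how long a full synchronization takes at a given rate limit. The disk size is an arbitrary example value, not a property of any SPS model.

```python
# Back-of-the-envelope estimate of full synchronization time (illustration only).
DISK_SIZE_GB = 1000      # example: 1 TB of data to synchronize
SYNC_RATE_MBPS = 10      # default DRBD sync rate limit, in megabits per second

bits_to_copy = DISK_SIZE_GB * 8 * 10**9            # gigabytes -> bits
seconds = bits_to_copy / (SYNC_RATE_MBPS * 10**6)  # bits / (bits per second)
print(f"~{seconds / 3600:.0f} hours")              # roughly 222 hours at 10 Mbps
```

Raising the limit shortens this window, but, as noted below, it also increases the resources the synchronization process consumes.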
To change the limit of the DRBD synchronization rate, navigate to Basic Settings > High Availability > DRBD sync rate limit, and select the desired value. Note the following points before changing the DRBD sync rate limit option.
The Basic Settings > High Availability > DRBD sync rate limit option is visible only when synchronization is in progress, or when you have clicked Convert to Cluster but have not rebooted the cluster yet.
Changing this option does not change the limit of the data replication speed.
Set the sync rate carefully. A high value is not recommended if the load of SPS is very high, as increasing the resources used by the synchronization process may degrade the general performance of SPS. On the other hand, the HA link's speed must exceed the speed of the incoming data, otherwise the web UI might become unresponsive and data loss can occur.
The Basic Settings > High Availability > DRBD status field indicates whether the latest data (including SPS configuration, audit trails, and log files) is available on both SPS nodes.