One Identity Safeguard for Privileged Sessions 5.9.0 - Administration Guide


Replacing an HA node in an SPS cluster

Purpose:

To replace a unit in an SPS cluster with a new appliance, complete the following steps.

Steps:
  1. Verify the HA status on the working node. Select Basic Settings > High Availability. If one of the nodes has broken down or is missing, the Status field displays DEGRADED.

  2. Note down the Gateway IP addresses, and the IP addresses of the Heartbeat and the Next hop monitoring interfaces.

  3. Perform a full system backup. Before replacing the node, create a complete system backup of the working node. For details, see Data and configuration backups.

  4. Check which firmware version is running on the working node. Select Basic Settings > System > Version details and write down the exact version numbers.

  5. Log in to your support portal and download the CD ISO for the same SPS version that is running on your working node.

  6. Without connecting the replacement unit to the network, install the replacement unit from the ISO file. Use the IPMI interface if needed.

  7. When the installation is finished, connect the two SPS units with an Ethernet cable via the Ethernet connectors labeled as 4 or HA.

  8. Reboot the replacement unit and wait until it finishes booting.

  9. Log in to the working node and verify the HA state. Select Basic Settings > High Availability. The Status field should display HALF.

  10. Reconfigure the Gateway IP addresses, and the IP addresses of the Heartbeat and the Next hop monitoring interfaces. Click Commit.

  11. Click Other node > Join HA.

  12. Click Other node > Reboot.

  13. The replacement unit reboots and starts synchronizing data from the working node. The Basic Settings > High Availability > Status field displays DEGRADED SYNC until the synchronization finishes. Depending on the size of the hard disks and the amount of data stored, this can take several hours. (A scripted status check is sketched after this procedure.)

  14. After the synchronization is finished, connect the other Ethernet cables to their respective interfaces (external to 1 or EXT, internal to 3 or INT, management to 2 or MGMT) as needed for your environment.

    Expected result:

    A node of the SPS cluster is replaced with a new appliance.
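
If you want to monitor the resynchronization in step 13 from a script rather than the web interface, a minimal Python sketch like the following can poll the HA status through the SPS REST API. The /api/cluster/status endpoint and the response field names are assumptions made for illustration only; consult the SPS REST API chapter for the actual endpoints of your version.

    import time
    import requests

    SPS_HOST = "https://<working-node-ip>"  # replace with your node's address
    session = requests.Session()
    session.verify = False  # only for lab setups with self-signed certificates

    # Open a REST session by authenticating against /api/authentication.
    session.auth = ("admin", "<password>")
    session.get(SPS_HOST + "/api/authentication")

    while True:
        # Hypothetical status endpoint and field names; adjust to your version.
        reply = session.get(SPS_HOST + "/api/cluster/status").json()
        status = reply.get("status", "UNKNOWN")
        print("HA status:", status)
        if status != "DEGRADED SYNC":
            break
        time.sleep(60)  # resynchronization of large disks can take hours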

Resolving an IP conflict between cluster nodes

The IP addresses of the HA interfaces connecting the two nodes are detected automatically during boot. When a node comes online, it attempts to connect to the IP address 1.2.4.1. If no other node responds before the timeout, the node sets the IP address of its HA interface to 1.2.4.1; otherwise (if a node already responds on 1.2.4.1), it sets its own HA interface to 1.2.4.2.

A replacement node does not yet know the HA configuration (or any other HA settings), and attempts to negotiate it automatically in the same way. If the network is, for any reason, too slow to connect the nodes in time, the replacement node boots with the IP address 1.2.4.1, which causes an IP conflict if the other node has already claimed that address. In this case, the replacement node cannot join the HA cluster.
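
The following minimal Python sketch illustrates this negotiation logic. The probe_peer() helper, the probed port, and the timeout value are hypothetical stand-ins; the actual logic runs inside the SPS firmware during boot.

    import socket

    PRIMARY_HA_IP = "1.2.4.1"
    SECONDARY_HA_IP = "1.2.4.2"
    TIMEOUT = 5.0  # seconds; assumed value

    def probe_peer(ip, port=22, timeout=TIMEOUT):
        """Return True if another node already answers on the given HA IP."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    def choose_ha_ip():
        # If nobody answers on 1.2.4.1 before the timeout, claim it; otherwise
        # take 1.2.4.2. A slow HA link can make both nodes claim 1.2.4.1,
        # which is the IP conflict described above.
        if probe_peer(PRIMARY_HA_IP):
            return SECONDARY_HA_IP
        return PRIMARY_HA_IP

    print("HA interface IP:", choose_ha_ip())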

To manually assign the correct IP address to the HA interface of a node, perform the following steps:

  1. Log in to the node using the IPMI interface or the physical console.

    Configuration changes have not been synchronized to the new (replacement) node, as it could not join the HA cluster. Use the default password of the root user of SPS; for details, see "Installing the SPS hardware" in the Installation Guide.

  2. From the console menu, choose 10 HA address.

    Figure 290: The console menu

  3. Choose the IP address of the node.

    Figure 291: The console menu

  4. Reboot the node.

Understanding SPS RAID status

This section explains the possible statuses of the SPS RAID device and the underlying hard disks. SPS displays this information on the Basic Settings > High Availability page. The following statuses can occur:

  • Optimal: The hard disks are working as expected.

  • Degraded: One or more hard disks have reported an error and might have to be replaced. Contact the One Identity Support Team for help. For contact details, see About us.

  • Failed stripes: One or more stripes of data failed on the RAID device. Data loss may have occurred, but there is no way to determine the extent of the loss (if any).

  • Offline: The RAID device is not functioning, probably because several disks have broken down. SPS cannot operate properly in this case. Contact the One Identity Support Team for help. For contact details, see About us.
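
If you monitor SPS from an external system, a minimal Python sketch like the following can map the status strings listed above to operator actions. The severity labels and action texts are this sketch's own convention, not part of SPS:

    # Map the RAID statuses listed above to a suggested operator action.
    RAID_ACTIONS = {
        "Optimal": ("ok", "No action needed."),
        "Degraded": ("warning", "A disk reported an error; contact Support."),
        "Failed stripes": ("critical", "Possible data loss; contact Support."),
        "Offline": ("critical", "RAID is down, SPS cannot operate; contact Support."),
    }

    def triage(status):
        severity, action = RAID_ACTIONS.get(status, ("unknown", "Unrecognized status."))
        return "[%s] %s: %s" % (severity.upper(), status, action)

    print(triage("Degraded"))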

Restoring SPS configuration and data

Purpose:

The following procedure describes how to restore the configuration and data of SPS from a complete backup, for example, after a hardware replacement.

Caution:

Do not enable audited traffic to SPS until restoring the system backup is complete.

During the restore process, the REST-based search might not function properly, since the data to search in might still be incomplete.

To make sure that the import process has finished, check the logs.

Navigate to Basic Settings > Troubleshooting > View log files. Select syslog as Logtype, the day of the restore as Day, and enter Run metadb_importer.py in the Show only messages containing field. Click View.

If the import process has finished, the following line is displayed:

systemd[1]: Started Run metadb_importer.py to import data from metadb to elasticsearch if necessary...
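
If you prefer to verify this from a script, a minimal Python sketch like the following scans an exported syslog file for that line. The file name below is a hypothetical example; download the log through Basic Settings > Troubleshooting first.

    # Scan an exported syslog file for the metadb_importer.py completion
    # message quoted above.
    MARKER = "Started Run metadb_importer.py"

    def import_finished(logfile):
        with open(logfile, encoding="utf-8", errors="replace") as f:
            return any(MARKER in line for line in f)

    if import_finished("syslog-exported.log"):
        print("Import process has finished.")
    else:
        print("Import still running (or log line not present yet).")
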
Steps:
  1. Connect to your backup server and locate the directory where SPS saves the backups. The configuration backups are stored in the config subdirectory in timestamped files. Find the latest configuration file (the configuration files are named PSM-<timestamp>.config); a file-finding sketch follows this procedure.

  2. Connect to SPS.

    If you have not yet completed the Welcome Wizard, click Browse, select the configuration file, and click Import.

    If you have already completed the Welcome Wizard, navigate to Basic Settings > System > Import configuration > Browse, select the configuration file, and click Import.

  3. Navigate to Policies > Backup & Archive/Cleanup. Verify that the settings of the target servers and the backup protocols are correct.

  4. Navigate to Basic Settings > Management > System backup, click Restore now and wait for the process to finish. Depending on the amount of data stored in the backup, and the speed of the connection to the backup server, this may take a long time.

  5. Navigate to SSH Control > Connections, and click Restore ALL. Repeat this step for other traffic types. Depending on the amount of data stored in the backup, and the speed of the connection to the backup server, this may take a long time.
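
For step 1, a minimal Python sketch like the following locates the newest configuration backup in the backup server's config subdirectory. The backup root path is a hypothetical example; adjust it to wherever your backup server stores SPS backups.

    # Locate the newest PSM-<timestamp>.config file in the backup server's
    # config subdirectory, as described in step 1.
    from pathlib import Path

    backup_root = Path("/var/backups/sps")  # hypothetical mount point
    config_dir = backup_root / "config"

    candidates = sorted(config_dir.glob("PSM-*.config"),
                        key=lambda p: p.stat().st_mtime)
    if candidates:
        print("Latest configuration backup:", candidates[-1])
    else:
        print("No PSM-*.config files found in", config_dir)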
