One Identity Safeguard for Privileged Passwords 2.4 - Administration Guide


Patching cluster members

When an appliance update is released, apply the patch so all appliances in the cluster are on the same version. See About cluster patching for more information on how Safeguard for Privileged Passwords handles access requests and system failures during the cluster patching process.

Prior to installing an update patch to a cluster:
  • Ensure all appliances in the cluster are online and healthy. Any warnings or problems should be addressed before cluster patching. The patch install process will fail if any of the cluster members are unhealthy or cannot be contacted. (One way to script this health check is sketched after this list.)

    IMPORTANT: The primary appliance orchestrates the cluster upgrade; therefore, the primary appliance must stay online and have a solid network connection with all of the replica appliances in the cluster. If this cannot be reasonably assured, unjoin the replica appliances from the cluster, upgrade them individually, and then re-enroll them into the cluster.

  • It is highly recommended to take a backup of your primary appliance before applying a patch.
  • You may want to unjoin a replica from the cluster to serve as a backup appliance. In case of a catastrophic failure, you can activate the unjoined replica as the new primary. If the cluster patching process is successful, upgrade the unjoined replica and then re-enroll it into the cluster.
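
If you prefer to script the pre-patch health check rather than verify each member in the desktop client, a minimal sketch along the following lines is possible. It assumes Python with the requests library, an API bearer token obtained through your normal login flow, a client that trusts the appliance's TLS certificate, and a cluster-members endpoint under /service/core/v2; the exact path and the response field names are assumptions to confirm against the Swagger documentation on your appliance.

```python
# Pre-patch health check sketch (not an official tool).
# The endpoint path and the Name/State field names are assumptions;
# confirm them in the appliance Swagger UI before relying on this.
import requests

APPLIANCE = "sg-primary.example.com"   # hypothetical primary appliance address
TOKEN = "<api bearer token>"           # obtained via your normal API login flow

resp = requests.get(
    f"https://{APPLIANCE}/service/core/v2/Cluster/Members",  # assumed path
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

unhealthy = []
for member in resp.json():
    name = member.get("Name", "<unknown>")
    state = member.get("State", "<unknown>")
    print(f"{name}: {state}")
    if state not in ("Online", "OnlineReadOnly"):   # assumed state values
        unhealthy.append(name)

if unhealthy:
    raise SystemExit(f"Do not patch: unhealthy or unreachable members: {unhealthy}")
print("All cluster members report a healthy state; safe to stage the patch.")
```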

To patch appliances in a cluster

IMPORTANT: The following procedure applies to Safeguard for Privileged Passwords Appliances running version 2.1.x and later. If you need to patch appliances running an earlier version, you will need to unjoin replica appliances, install the patch on each appliance, and then enroll the replica appliances to rebuild your cluster. For more information, see Patching cluster members in the One Identity Safeguard for Privileged Passwords 2.0 Administration Guide.

  1. Log into the primary appliance as an Appliance Administrator.
  2. In Administrative Tools, select Settings | Appliance | Updates.
  3. Click (or tap) Upload a File and browse to select an update file.

    The patch will be uploaded and distributed to all of the appliances in the cluster.

    NOTE: If you make changes to the cluster, such as adding a new replica, while a patch is staged, the update file must be distributed to the new cluster member before the patch install process can begin. Safeguard for Privileged Passwords will not allow the patch install process to begin until all of the cluster members report that they have the update file stored locally.

    NOTE: Clicking the Cancel button during the distribution process stops the distribution of the update file to the replicas. At this point, you can click (or tap) one of the following buttons:

    • Remove to remove the update file from all of the appliances in the cluster.
    • Distribute to Cluster to continue distributing the update file to each replica in the cluster.
  4. Once the file has been successfully distributed to all of the replicas in the cluster, click (or tap) the Install Now button.

    The primary appliance will go into Maintenance mode to begin the update operation. Once the primary appliance is successfully updated, Safeguard for Privileged Passwords will perform the update operation on each replica, one at a time. During an update operation, the cluster will be locked so that no other cluster operations can interfere with the update operation. Once the update operation is completed on all cluster members, the cluster will automatically unlock so normal operations can resume.

    The Cluster view (Settings | Cluster | Cluster Management) shows that an update operation is in progress and which cluster members are locked and waiting to install the update file.

    In addition, the Updates view (Settings | Appliance | Updates) shows the cluster members involved in the update operation and the progress as cluster members are successfully updated.
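
If you want to watch the rolling update from a script instead of the Updates view, the following sketch is one option. It assumes each appliance exposes an unauthenticated notification Status endpoint reporting its current state and version; both the path and the response field names are assumptions to verify on your appliances.

```python
# Sketch: poll each cluster member's status during a rolling update.
# The /service/notification/v2/Status path and its response fields are
# assumptions; verify them on your appliances before relying on this.
import time
import requests

MEMBERS = ["sg-primary.example.com", "sg-replica1.example.com"]  # hypothetical hosts

def poll_once():
    for host in MEMBERS:
        try:
            resp = requests.get(
                f"https://{host}/service/notification/v2/Status", timeout=10
            )
            data = resp.json()
            # 'ApplianceCurrentState' and 'Version' are assumed field names
            print(host, data.get("ApplianceCurrentState"), data.get("Version"))
        except requests.RequestException as err:
            # A member in Maintenance mode may briefly refuse connections
            print(host, "unreachable:", err)

try:
    while True:
        poll_once()
        time.sleep(60)   # poll once a minute; stop with Ctrl+C when the update is done
except KeyboardInterrupt:
    pass
```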

About cluster patching

The following information provides some insight into how Safeguard for Privileged Passwords processes access requests during the cluster patching process. It also describes what happens if a cluster member loses power or network connectivity during the patching process.

Service guarantees:

During a cluster upgrade, the cluster is split logically into the current version (side A) and the upgrade version (side B). Access request workflow is only enabled on one side at a time. Audit logs run on both sides and merge when the cluster patch completes. Initially, access request workflow is only enabled on side A and replicas in PatchPending state can perform access requests. As appliances upgrade and move to side B, the access workflow migrates to side B when side B has a majority of the appliances. At this point in the upgrade process, replicas in PatchPending state can no longer perform access requests; however, all of the upgraded cluster members can perform access requests. There is a small window where access request workflow is unavailable as the data migrates from one side to the other.

Failure scenarios:

If the primary appliance loses power or loses network connectivity during the upgrade process, it will try to resume the upgrade on restart.

If a replica is disconnected or loses power during an upgrade process, the replica will most likely go into quarantine mode. The primary appliance will skip that appliance and remove it from the cluster. This replica will need to be reset, upgraded, and then re-enrolled into the cluster manually to recover.

Using a backup to restore a clustered appliance


NOTE: When a backup is created, the state of the sessions module is saved, which can be either the embedded sessions module (SPP) or the joined sessions module (SPS). Restoring a backup returns the sessions module to the state it was in when the backup was taken, regardless of its state when the restore was started.

In a clustered environment, the objective of a cluster backup is to preserve and allow the restoration of all operational data, including access request workflow, users/accounts, audit logs and so on. All appliances in a cluster (primary and replicas) can be backed up. However, a backup should only be restored to an appliance in the worst-case scenario where no appliance can be restored using the failover operation.

When a backup is restored to an appliance, all of the cluster configuration data is purged. The appliance is restored as a stand-alone primary appliance in Read-Only mode with no replicas. However, all the access request workflow, user/account, and audit log data that existed when the backup was taken is retained. This primary appliance can then be activated and replicas can be joined to recreate a cluster.

To take a backup of an appliance

  1. Log into the appliance as an Appliance Administrator.
  2. In Administrative Tools, select Settings | Backup and Restore.
  3. Click (or tap) Run Now to create a copy of the data currently on the primary appliance.

    For more information, see Run Now.

    Alternatively, click (or tap) Backup Settings in the upper right corner of the Backups page to configure an automatic backup schedule.

    For more information, see Backup and Restore.
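
A backup can also be started programmatically. The sketch below assumes a POST to a Backups endpoint under /service/core/v2 starts a backup in the same way Run Now does in the client; the path is an assumption to confirm in the Swagger documentation on your appliance.

```python
# Sketch: start a backup over the API, roughly what Run Now does in the client.
# The /service/core/v2/Backups path is an assumption; verify it via Swagger.
import requests

APPLIANCE = "sg-primary.example.com"   # hypothetical appliance address
TOKEN = "<api bearer token>"

resp = requests.post(
    f"https://{APPLIANCE}/service/core/v2/Backups",   # assumed endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=60,
)
resp.raise_for_status()
print("Backup started:", resp.json())
```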

To restore an appliance from a backup

NOTE: A backup can be restored to any appliance that is running the same version of Safeguard for Privileged Passwords.

  1. Log into the appliance to be restored as an Appliance Administrator.
  2. In Administrative Tools, select Settings | Backup and Restore.
  3. Select the backup to be used and click (or tap) Restore.

    NOTE: If you want to use a backup file taken on a different appliance, that backup file must first be downloaded from the appliance where the backup was taken and then uploaded to the appliance that will be restored. Only then can you use the Restore option. (This transfer can also be scripted; see the sketch after this procedure.)

  4. In the Restore dialog, enter the word Restore and click (or tap) OK.

    For more information, see Restore.

The appliance is restored as a stand-alone primary appliance in Read-Only mode with no replicas.
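
The transfer described in the note in step 3 can also be scripted. The sketch below assumes per-backup Download and Upload operations under a Backups endpoint; all of the paths shown are assumptions to confirm against the Swagger documentation before use.

```python
# Sketch: copy a backup archive from the appliance where it was taken to the
# appliance that will be restored. All endpoint paths are assumptions;
# confirm them in the Swagger UI before use.
import requests

SOURCE = "sg-old-primary.example.com"    # hypothetical source appliance
TARGET = "sg-new-primary.example.com"    # hypothetical target appliance
TOKEN_SRC = "<token for source appliance>"
TOKEN_DST = "<token for target appliance>"
BACKUP_ID = 42                           # hypothetical backup identifier

# Download the archive from the appliance where the backup was taken
dl = requests.get(
    f"https://{SOURCE}/service/core/v2/Backups/{BACKUP_ID}/Download",  # assumed
    headers={"Authorization": f"Bearer {TOKEN_SRC}"},
    timeout=600,
)
dl.raise_for_status()

# Upload it to the appliance that will use it for the restore
ul = requests.post(
    f"https://{TARGET}/service/core/v2/Backups/Upload",                # assumed
    headers={
        "Authorization": f"Bearer {TOKEN_DST}",
        "Content-Type": "application/octet-stream",
    },
    data=dl.content,
    timeout=600,
)
ul.raise_for_status()
print("Backup uploaded; it should now be selectable on the target appliance.")
```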

To rebuild a cluster

  1. Log into the primary appliance as an Appliance Administrator.
  2. Activate the Read-Only primary appliance.
    1. In Administrative Tools, navigate to Settings | Cluster | Cluster Management.
    2. Select the node to be activated from the cluster view (left pane).
    3. Click (or tap) Activate.
    4. Confirm the activate operation.

    For more information, see Activating a read-only appliance.

  3. One at a time, enroll the replica appliances to rebuild your cluster.
    1. In Administrative Tools, select Settings | Cluster.
    2. Click (or tap) Add Replica to join a replica appliance to the cluster.

    Once the enroll operation completes, repeat these steps to add each remaining appliance back into the cluster as a replica.

    NOTE: Enrolling a replica can take up to 24 hours depending on the amount of data to be replicated and your network.

    For more information, see Enrolling replicas into a cluster.
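
Because enrollment can run for many hours, you may want to script the wait instead of watching the client. The sketch below reuses the assumed cluster-members endpoint from the pre-patch check and polls until the named replica reports Online; the path, field names, and state value are assumptions to verify on your appliance.

```python
# Sketch: wait for a newly enrolled replica to finish replicating and report
# Online. The endpoint path, field names, and state value are assumptions.
import time
import requests

PRIMARY = "sg-primary.example.com"     # hypothetical primary appliance
REPLICA_NAME = "sg-replica2"           # hypothetical name of the enrolling replica
TOKEN = "<api bearer token>"

while True:
    resp = requests.get(
        f"https://{PRIMARY}/service/core/v2/Cluster/Members",   # assumed path
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    states = {m.get("Name"): m.get("State") for m in resp.json()}
    state = states.get(REPLICA_NAME)
    print(f"{REPLICA_NAME}: {state}")
    if state == "Online":
        break
    time.sleep(300)   # enrollment can take hours; check again in five minutes
```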

Related Topics

Backup and Restore

Resetting a cluster that has lost consensus

Resetting the cluster configuration allows you to recover a cluster that has lost consensus.

Caution: Resetting a cluster should be your last resort. It is recommended that you restore from a backup rather than reset a cluster.

A cluster has consensus when the majority of the nodes in the cluster are online and able to communicate. If a cluster loses consensus (also known as a quorum failure), the following automatically happens:

  • The primary appliance goes into Read-Only mode.
  • Password check and change is disabled.

If the cluster regains consensus automatically after connectivity is restored, the primary will return to Read-Write mode and password check and change will be re-enabled. However, if it does not regain consensus automatically, the Appliance Administrator must perform a cluster reset to force-remove nodes from the cluster.
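
To make "majority" concrete: a cluster has consensus only while strictly more than half of its members can communicate, so a three-member cluster tolerates one offline member and a five-member cluster tolerates two. The following sketch captures that arithmetic.

```python
# Quorum arithmetic: consensus requires strictly more than half of the members.
def has_consensus(total_members: int, reachable_members: int) -> bool:
    return reachable_members > total_members // 2

print(has_consensus(3, 2))   # True: a 3-member cluster tolerates 1 offline member
print(has_consensus(5, 2))   # False: 2 reachable of 5 is not a majority
```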

IMPORTANT: Only reset the cluster if you are certain that consensus has been lost; otherwise, you could introduce a split-brain scenario. (A split-brain scenario is one where a cluster is divided into smaller clusters, each of which believes it is the only active cluster; they may then access the same data, which could lead to data corruption.)

TIP: If you are concerned about network issues, reset the cluster with only the new primary appliance. Once the cluster reset operation is complete, enroll appliances one by one to create a new cluster.

To reset a cluster

  1. In Settings, select Cluster.
  2. Click (or tap) the Reset Cluster button.

    The Reset Cluster dialog displays, listing the appliances (primary and replicas) in the cluster.

  3. In the Reset Cluster dialog, select the nodes to be included in the reset operation and use the Set Primary button to designate the primary appliance in the cluster.

    NOTE: Nodes must have an Appliance State of Online or Online Read-Only and be able to communicate to be included in the reset operation. If you select a node that is offline or unavailable, you will get an error and the reset operation will fail.

  4. Click (or tap) Reset Cluster.
  5. In the confirmation dialog, enter the words Reset Cluster and click (or tap) OK.

    When connected to the new primary appliance, the Configuring Safeguard for Privileged Passwords Appliance progress page displays, showing the steps being performed as part of the maintenance task to reset the cluster.

  6. Once the maintenance tasks have completed, click (or tap) Restart Desktop Client.

Once reset, the cluster only contains the appliances that were included in the reset operation.
