
One Identity Safeguard for Privileged Passwords 7.5 - Administration Guide


Diagnosing a cluster member

The diagnostic tools are available to an Appliance Administrator or Operations Administrator for the currently connected appliance and any other appliances (replicas) in the cluster.

To run diagnostics on a clustered appliance

  1. Go to Cluster Management:
    • web client: Navigate to Cluster > Cluster Management
  2. From the cluster view (on the left) select the appliance to be diagnosed.
  3. Click Diagnose.

  4. Click Network Diagnostics.
  5. Choose the type of test to perform and complete the steps.

    • ARP: Use Address Resolution Protocol (ARP) to discover the Interface, Internet Address, Physical Address, and Type (dynamic or static).
    • Netstat: Use netstat to display the active connection protocol, local address, foreign address, and state.
    • NS Lookup: Use nslookup to resolve a host name to an IP address (or the reverse).
    • Ping: Use ping to verify network connectivity and response time.
    • Show Routes: Retrieve the appliance's routing table information.
    • Telnet: Use Telnet to test TCP connectivity to remote computers over TCP/IP networks.
    • Throughput: Test network throughput to other appliances in the cluster.
    • Trace Route: Use trace route to determine the path packets take from one IP address to another, including the routers along the way.
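
    The NS Lookup and Telnet checks above can also be cross-checked from an administrator's workstation. A minimal Python sketch (not part of SPP; the host names and ports you would pass in are illustrative):

```python
import socket

def ns_lookup(host):
    """NS Lookup equivalent: resolve a host name to an IPv4 address."""
    return socket.gethostbyname(host)

def telnet_check(host, port, timeout=3.0):
    """Telnet equivalent: report whether a TCP connection to host:port opens."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

    For example, `telnet_check("replica1.example.com", 443)` would confirm that a replica's web port is reachable from your workstation.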

Patching cluster members

When an appliance update is released, apply the patch so all appliances in the cluster are on the same version. See About cluster patching for more information on how SPP handles access requests and system failures during the cluster patching process.

Prior to installing an update patch to a cluster

  • Ensure all appliances in the cluster are online and healthy. Any warnings or problems should be addressed before cluster patching. The patch install process will fail if any of the cluster members are unhealthy or cannot be contacted.

    IMPORTANT: The primary appliance orchestrates the cluster upgrade; therefore, the primary appliance must stay online and have a solid network connection with all of the replica appliances in the cluster. If this cannot be reasonably assured, you should unjoin the replica appliances from the cluster, individually upgrade them, and then re-enroll them into the cluster.

  • It is highly recommended to take a backup of your primary appliance before applying a patch. For more information, see Backup and Restore.
  • You may want to unjoin a replica from the cluster to serve as a backup appliance. In case of a catastrophic failure, you can activate the unjoined replica to be the primary. If the cluster patching process is successful, upgrade the unjoined replica, and then re-enroll it back into the cluster.
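
    The pre-patch checklist above amounts to a simple gate: the install fails if any cluster member is offline or unhealthy. A minimal sketch of that gate (the state names and record fields are illustrative, not the SPP API):

```python
def ready_to_patch(members):
    """Return (ok, blockers): patching may begin only if every
    cluster member is online and healthy."""
    blockers = [m["name"] for m in members
                if m["state"] != "Online" or not m["healthy"]]
    return (not blockers, blockers)

# Example cluster state (illustrative values)
cluster = [
    {"name": "primary",  "state": "Online",  "healthy": True},
    {"name": "replica1", "state": "Online",  "healthy": True},
    {"name": "replica2", "state": "Offline", "healthy": False},
]
```

    Here `ready_to_patch(cluster)` reports `replica2` as a blocker to resolve (or unjoin) before uploading the patch.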

To patch appliances in a cluster

  1. Log in to the primary appliance as an Appliance Administrator.
  2. Go to the patch updates page:
    • web client: Navigate to Appliance > Patch Updates.
  3. Click Upload a File and browse to select an update file.

    The patch will be uploaded and distributed to all of the appliances in the cluster.

    NOTE: If you make changes to the cluster, such as adding a new replica, while a patch is staged, the update file must be distributed to the new cluster member before the patch install process can begin. SPP will not allow the patch install process to begin until all of the cluster members report that they have the update file stored locally.

    NOTE: Clicking the Cancel button during the distribution process stops the distribution of the update file to the replicas. At this point, you can click one of the following buttons:

    • Remove to remove the update file from all of the appliances in the cluster.
    • Distribute to Cluster to continue distributing the update file to each replica in the cluster.
  4. Once the file has been successfully distributed to all of the replicas in the cluster, click the Install Now button.

    The primary appliance will go into Maintenance mode to begin the update operation. Once the primary appliance is successfully updated, SPP will perform the update operation on each replica, one at a time. During an update operation, the cluster will be locked so that no other cluster operations can interfere with the update operation. Once the update operation is completed on all cluster members, the cluster will automatically unlock so normal operations can resume.

    The Cluster view shows that an update operation is in progress and which cluster members are locked, waiting to install the update file:

    • web client: Navigate to Cluster > Cluster Management.

    The progress is also shown on the Patch Updates page:

    • web client: Navigate to Appliance > Patch Updates.
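
    The install sequence in step 4 (primary first, then each replica one at a time, with the cluster locked for the duration) can be sketched as a simple simulation; the event strings are illustrative:

```python
def run_cluster_patch(primary, replicas):
    """Simulate the patch orchestration: lock the cluster, update the
    primary in Maintenance mode, then each replica one at a time,
    then unlock the cluster so normal operations resume."""
    events = ["cluster locked"]
    events.append(f"update {primary} (Maintenance mode)")
    for replica in replicas:
        events.append(f"update {replica}")
    events.append("cluster unlocked")
    return events

events = run_cluster_patch("primary", ["replica1", "replica2"])
```

    The lock bracketing the whole sequence is why no other cluster operations (enroll, unjoin, failover) can start mid-patch.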

About cluster patching

The following information provides insight into how SPP processes access requests during the cluster patching process. It also describes what happens if a cluster member loses power or network connectivity during the patching process.

Service guarantees

During a cluster upgrade, the cluster is split logically into the current version (side A) and the upgrade version (side B). Access request workflow is only enabled on one side at a time. Audit logs run on both sides and merge when the cluster patch completes. Initially, access request workflow is only enabled on side A, and replicas in PatchPending state can perform access requests. As appliances upgrade and move to side B, the access workflow migrates to side B when side B has a majority of the appliances.

At this point in the upgrade process, replicas in PatchPending state can no longer perform access requests; however, all of the upgraded cluster members can perform access requests. There is a small window where access request workflow is unavailable as the data migrates from one side to the other.
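
    The migration rule described above is a majority test: workflow stays on side A until the upgraded side B holds more than half of the appliances. A minimal sketch of that rule:

```python
def active_side(total, upgraded):
    """During a cluster patch, access-request workflow runs on the
    upgraded side (B) only once B holds a majority of the appliances;
    otherwise it stays on the current-version side (A)."""
    return "B" if upgraded > total - upgraded else "A"
```

    For a five-node cluster, workflow migrates when the third appliance finishes upgrading: `active_side(5, 2)` is `"A"` but `active_side(5, 3)` is `"B"`.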

Failure scenarios

If the primary appliance loses power or loses network connectivity during the upgrade process, it will try to resume the upgrade on restart.

If a replica is disconnected or loses power during an upgrade process, the replica will most likely go into quarantine mode. The primary appliance will skip that appliance and remove it from the cluster. This replica will need to be reset, upgraded, and then re-enrolled into the cluster manually to recover.

Configuration for password and SSH key check out

The policy may be configured such that a password or SSH key reset is required before the password or SSH key can be checked out again. If that is the case, the following settings can be temporarily configured prior to cluster patching so that a password or SSH key can still be checked out even if it has not been reset.

  • The policy can be set to allow multiple accesses.
  • The policy can be set to not require a password or SSH key change at check in.
  • Emergency requests can be allowed so the user does not have to wait for the password or SSH key to be reset.
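
    The three relaxations above can be read as a single checkout predicate. A sketch under assumed setting names (these keys are illustrative, not SPP configuration names):

```python
def can_check_out(policy, credential_was_reset):
    """A checkout is allowed if the credential was already reset,
    or if any of the temporary relaxations is in effect."""
    if credential_was_reset:
        return True
    return (policy.get("allow_simultaneous_access", False)       # multiple accesses
            or not policy.get("change_credential_on_checkin", True)  # no reset at check in
            or policy.get("emergency_access_enabled", False))    # emergency requests
```

    A strict policy (all relaxations off) blocks checkout until the reset completes; enabling any one of the three restores checkout availability during patching.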

Using a backup to restore a clustered appliance

In a clustered environment, the objective of a cluster backup is to preserve and allow the restoration of all operational data, including access request workflow, users/accounts, audit logs, and so on. All appliances in a cluster (primary and replicas) can be backed up. However, a backup should only be restored to an appliance in the worst-case scenario where no appliance can be restored using the failover operation.

When a backup is restored to an appliance, the restore on the primary clears the primary's cluster configuration but does not change the replicas' cluster configuration. To avoid issues:

  1. If possible, unjoin the replicas from the cluster prior to a backup restore.
  2. If the primary has been set to encrypt the cluster backups with a password or GPG key, you must have the password or GPG private key to complete the upload and restore operation. For more information, see Backup protection settings.

  3. Upload and restore the backup on the appliance that will be the primary.
  4. If you did not unjoin the replicas prior to the backup restore, perform a cluster reset on each replica so that it becomes a standalone appliance, then join the replicas back into the cluster.

The appliance is restored as a stand-alone primary appliance in Read-only mode with no replicas. However, all the access request workflow, user/account, and audit log data that existed when the backup was taken is retained. This primary appliance can then be activated and replicas can be joined to recreate a cluster.

To take a backup of a physical appliance

  1. Log in to the appliance as an Appliance Administrator.
  2. Go to Safeguard Backup and Restore:
    • web client: Navigate to Backup and Retention > Safeguard Backup and Restore.
  3. As needed, you can run a backup, set a backup schedule, and encrypt the backup for a cluster from the primary. For more information, see Backup and Restore.

To restore a physical appliance from a backup

An Appliance Administrator can restore backups as far back as SPP version 6.0.0.12276. Only the data is restored; the running version is not changed.

You cannot restore a backup from a version newer than the one running on the appliance. The restore will fail and a message like the following displays: Restore failed because backup version [version] is newer than the one currently running [version].

The backup version and the running version display in the Activity Center logs that are generated when Safeguard starts, completes, or fails a restore.
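
    The two version constraints above (no backups older than 6.0.0.12276, no backups newer than the running version) can be sketched as a simple bounds check; four-part version strings are assumed:

```python
MIN_BACKUP_VERSION = (6, 0, 0, 12276)  # oldest restorable backup per the guide

def parse_version(v):
    """Parse a dotted version string such as '7.5.0.0' into a tuple."""
    return tuple(int(part) for part in v.split("."))

def restore_allowed(backup_version, running_version):
    """A backup restores only if it is not newer than the running
    version and not older than the minimum supported backup version."""
    b = parse_version(backup_version)
    return MIN_BACKUP_VERSION <= b <= parse_version(running_version)
```

    For example, a 6.0.0.12276 backup restores onto a 7.5.0.0 appliance, but a 7.6.0.0 backup does not.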

NOTE: If you want to use a backup file taken on a different appliance, first download that backup file from the appliance where it was taken, then upload it to the appliance that will use it. Only then can you use the Restore option.

  1. Log in to the appliance to be restored as an Appliance Administrator.
  2. Go to Safeguard Backup and Restore:
    • web client: Navigate to Backup and Retention > Safeguard Backup and Restore.
  3. Select the backup to be used and click Restore. If a problematic condition is detected, Warning for Restore of Backup displays along with details in the Restore Warnings, Warning X of X message. Click Cancel to stop the restore process and address the warning or click Continue to move to the next warning (if any) or complete the process.

  4. If the backup is protected by a password, the Protected Backup Password dialog displays. Type the password in the Enter Backup Password text box. For more information, see Backup protection settings.

  5. When the Restore dialog displays, enter the word Restore and click OK. For more information, see Restore a backup.

The appliance is restored as a stand-alone primary appliance in Read-only mode with no replicas.

To rebuild a cluster

  1. Log in to the primary appliance as an Appliance Administrator.
  2. Activate the Read-only primary appliance.
    1. Go to Cluster Management:
      • web client: Navigate to Cluster > Cluster Management.
    2. Select the node to be activated from the cluster view (left pane).
    3. Click Activate.
    4. Confirm the activate operation.

    For more information, see Activating a read-only appliance.

  3. One at a time, enroll the replica appliances to rebuild your cluster.
    1. Go to Cluster Management:
      • web client: Navigate to Cluster > Cluster Management.
    2. Click Add Replica to join a replica appliance to the cluster.

    Once the enroll operation completes, repeat to add your appliances back into the cluster as replicas.

    NOTE: Enrolling a replica can take up to 24 hours depending on the amount of data to be replicated and your network.

    For more information, see Enrolling replicas into a cluster.
