Safeguard for Privileged Passwords On Demand Hosted - Administration Guide


Diagnosing a cluster member

The diagnostic tools are available to an Appliance Administrator or Operations Administrator for the currently connected appliance and any other appliances (replicas) in the cluster.

To run diagnostics on a clustered appliance

  1. Go to Cluster Management:
    • web client: Navigate to Cluster | Cluster Management.
    • desktop client: In Settings, select Cluster | Cluster Management.
  2. From the cluster view (on the left), select the appliance to be diagnosed.
  3. Click Diagnose.

  4. Click Network Diagnostics.
  5. Choose the type of test to perform and complete the steps. (A scripted alternative is sketched after this list.)

    • ARP: Use the Address Resolution Protocol (ARP) to discover the Interface, Internet Address, Physical Address, and Type (dynamic or static).
    • Netstat: Use netstat to display the active connection protocol, local address, foreign address, and state.
    • NS Lookup: Use nslookup to look up a domain name or IP address.
    • Ping: Use ping to verify network connectivity and response time.
    • Show Routes: Use show routes to retrieve routing table information.
    • Telnet: Use telnet to access remote computers over TCP/IP networks, such as the internet.
    • Throughput: Test the throughput to other appliances in the cluster.
    • Trace Route: Use trace route to obtain router information; trace route determines the paths packets take from one IP address to another.
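
If you prefer to script these checks, the same diagnostics are exposed through the Safeguard API. The following is a minimal sketch using the safeguard-ps PowerShell module; the relative URL and request body shown are illustrative assumptions, so confirm the actual diagnostics routes against the API documentation for your appliance version before relying on them.

```powershell
# Minimal sketch: run a network diagnostic through the Safeguard API instead
# of the UI. Requires the safeguard-ps module (Install-Module safeguard-ps).
Import-Module safeguard-ps

# Authenticate to the appliance you want to diagnose (prompts for a password).
Connect-Safeguard -Appliance "spp1.example.com" -IdentityProvider local -Username admin

# ASSUMPTION: the relative URL and body below are illustrative only; confirm
# the real diagnostics routes in your appliance's API documentation.
Invoke-SafeguardMethod Appliance POST "NetworkDiagnostics/Ping" -Body @{
    NetworkAddress = "10.10.10.20"  # hypothetical replica to ping
}

Disconnect-Safeguard
```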

Patching cluster members

When an appliance update is released, apply the patch so all appliances in the cluster are on the same version. See About cluster patching for more information on how Safeguard for Privileged Passwords handles access requests and system failures during the cluster patching process.

Prior to installing an update patch to a cluster

  • Ensure all appliances in the cluster are online and healthy. Any warnings or problems should be addressed before cluster patching. The patch install process will fail if any of the cluster members are unhealthy or cannot be contacted. (A scripted health check is sketched after this list.)

    IMPORTANT: The primary appliance orchestrates the cluster upgrade; therefore, the primary appliance must stay online and have a solid network connection with all of the replica appliances in the cluster. If this cannot be reasonably assured, unjoin the replica appliances from the cluster, upgrade them individually, and then re-enroll them into the cluster.

  • It is highly recommended to take a backup of your primary appliance before applying a patch. For more information, see Backup and Restore.
  • You may want to unjoin a replica from the cluster to serve as a backup appliance. In case of a catastrophic failure, you can activate the unjoined replica as the primary. If the cluster patching process is successful, upgrade the unjoined replica and re-enroll it into the cluster.
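
These pre-patch checks can also be scripted. The sketch below assumes the safeguard-ps module; the cluster and backup cmdlet names are assumptions based on recent module versions, so verify them with Get-Command before relying on them.

```powershell
# Minimal pre-patch sketch: verify every cluster member is online and healthy,
# then take a backup of the primary, before staging the patch.
Import-Module safeguard-ps
Connect-Safeguard -Appliance "primary.example.com" -IdentityProvider local -Username admin

# ASSUMPTION: this cmdlet name is from recent safeguard-ps releases; verify
# with Get-Command *-SafeguardCluster* for your module version.
Get-SafeguardClusterMember | Format-Table

# Back up the primary first (see Backup and Restore).
New-SafeguardBackup

Disconnect-Safeguard
```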

To patch appliances in a cluster

IMPORTANT: The following procedure applies to Safeguard for Privileged Passwords Appliances running version 2.1.x and later. If you need to patch appliances running an earlier version, you will need to unjoin replica appliances, install the patch on each appliance, and then enroll the replica appliances to rebuild your cluster. For more information, see Patching cluster members in the One Identity Safeguard for Privileged Passwords 2.0 Administration Guide.

  1. Log in to the primary appliance as an Appliance Administrator.
  2. Go to the patch updates page:
    • web client: Navigate to Appliance | Patch Updates.
    • desktop client: In Administrative Tools, select Settings | Appliance | Updates.
  3. Click Upload a File and browse to select an update file.

    The patch will be uploaded and distributed to all of the appliances in the cluster.

    NOTE: If you make changes to the cluster, such as adding a new replica, while a patch is staged, the update file must be distributed to the new cluster member before the patch install process can begin. Safeguard for Privileged Passwords will not allow the patch install process to begin until all of the cluster members report that they have the update file stored locally.

    NOTE: Clicking the Cancel button during the distribution process stops the distribution of the update file to the replicas. At this point, you can click one of the following buttons:

    • Remove to remove the update file from all of the appliances in the cluster.
    • Distribute to Cluster to continue distributing the update file to each replica in the cluster.
  4. Once the file has been successfully distributed to all of the replicas in the cluster, click the Install Now button. (A scripted alternative is sketched after this procedure.)

    The primary appliance will go into Maintenance mode to begin the update operation. Once the primary appliance is successfully updated, Safeguard for Privileged Passwords will perform the update operation on each replica, one at a time. During an update operation, the cluster will be locked so that no other cluster operations can interfere with the update operation. Once the update operation is completed on all cluster members, the cluster will automatically unlock so normal operations can resume.

    The Cluster view shows that an update operation is in progress and which cluster members are locked while they await installation of the update file. To monitor progress, go to:

    • web client: Navigate to Cluster | Cluster Management.
    • desktop client: Navigate to Administrative Tools | Settings | Cluster | Cluster Management.

    In addition, you can monitor the patch status from the Patch Updates page:

    • web client: Navigate to Appliance | Patch Updates.
    • desktop client: Navigate to Administrative Tools | Settings | Appliance | Updates.
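
The patch workflow above can also be driven from PowerShell. Recent safeguard-ps releases include an Install-SafeguardPatch cmdlet; treat its exact behavior and parameters as version-dependent assumptions and confirm them with Get-Help Install-SafeguardPatch before use.

```powershell
# Minimal sketch: stage and install a cluster patch from the primary appliance.
Import-Module safeguard-ps
Connect-Safeguard -Appliance "primary.example.com" -IdentityProvider local -Username admin

# ASSUMPTION: recent safeguard-ps releases provide Install-SafeguardPatch,
# which uploads the file, distributes it to every replica, and then starts
# the install once all members report the file locally. Confirm the
# parameter set for your module version before running.
$patchFile = "C:\patches\spp-update.sgp"   # path to the downloaded patch file
Install-SafeguardPatch -Patch $patchFile

Disconnect-Safeguard
```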

About cluster patching

The following information provides insight into how Safeguard for Privileged Passwords processes access requests during the cluster patching process. It also describes what happens if a cluster member loses power or network connectivity during the patching process.

Service guarantees

During a cluster upgrade, the cluster is split logically into the current version (side A) and the upgrade version (side B). Access request workflow is only enabled on one side at a time. Audit logs run on both sides and merge when the cluster patch completes. Initially, access request workflow is only enabled on side A, and replicas in the PatchPending state can perform access requests. As appliances upgrade and move to side B, the access workflow migrates to side B once side B holds a majority of the appliances. For example, in a five-appliance cluster, access request workflow moves to side B as soon as the third appliance finishes upgrading. At this point in the upgrade process, replicas in the PatchPending state can no longer perform access requests; however, all of the upgraded cluster members can. There is a small window where access request workflow is unavailable while the data migrates from one side to the other.

Failure scenarios

If the primary appliance loses power or loses network connectivity during the upgrade process, it will try to resume the upgrade on restart.

If a replica is disconnected or loses power during an upgrade process, the replica will most likely go into quarantine mode. The primary appliance will skip that appliance and remove it from the cluster. This replica will need to be reset, upgraded, and then re-enrolled into the cluster manually to recover.

Configuration for password and SSH key check out

The policy may be configured such that a password or SSH key reset is required before the password or SSH key can be checked out again. If that is the case, the following settings can be temporarily configured before cluster patching so that a password or SSH key can still be checked out even when it has not been reset. (An API sketch follows the list.)

  • The policy can be set to allow multiple accesses.
  • The policy can be set to not require a password or SSH key change at check in.
  • Emergency requests can be allowed so the user does not have to wait for the password or SSH key to be reset.
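
These policy settings can also be toggled through the core API. In the sketch below, the endpoint, the policy ID, and every property name are assumptions for illustration only; GET a real policy object first to learn the actual schema for your API version before scripting changes.

```powershell
# Minimal sketch: temporarily relax a policy before cluster patching so that
# passwords or SSH keys can be checked out without an intervening reset.
Import-Module safeguard-ps
Connect-Safeguard -Appliance "primary.example.com" -IdentityProvider local -Username admin

# ASSUMPTION: the endpoint, policy ID, and property names below are
# illustrative; inspect a real policy object first to learn the schema.
$policy = Invoke-SafeguardMethod Core GET "AccessPolicies/123"
$policy.RequesterProperties.AllowSimultaneousAccess = $true         # allow multiple accesses
$policy.AccessRequestProperties.ChangePasswordAfterCheckin = $false # skip reset at check-in
Invoke-SafeguardMethod Core PUT "AccessPolicies/123" -Body $policy

Disconnect-Safeguard
```

Remember to revert these settings once the cluster patch completes.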

Using a backup to restore a clustered appliance

In a clustered environment, the objective of a cluster backup is to preserve and allow the restoration of all operational data, including access request workflow, users/accounts, audit logs, and so on. All appliances in a cluster (primary and replicas) can be backed up. However, a backup should only be restored to an appliance in the worst-case scenario where no appliance can be restored using the failover operation.

When a backup is restored to an appliance, the restore on the primary clears the primary's cluster configuration but does not change the replicas' cluster configuration. To avoid issues:

  1. If possible, unjoin the replicas from the cluster prior to a backup restore.
  2. If the primary has been set to encrypt the cluster backups with a password or GPG key, you must have the password or GPG private key to complete the upload and restore operation. For more information, see Backup protection settings.

  3. Upload and restore the backup on the appliance that will be the primary.
  4. If you did not unjoin the replicas prior to the backup restore, perform a cluster reset on each replica so that each becomes a standalone appliance, and then join the replicas back into the cluster.

The appliance is restored as a stand-alone primary appliance in Read-only mode with no replicas. However, all the access request workflow, user/account, and audit log data that existed when the backup was taken is retained. This primary appliance can then be activated and replicas can be joined to recreate a cluster.

To take a backup of a physical appliance

  1. Log in to the appliance as an Appliance Administrator.
  2. Go to Safeguard Backup and Restore:
    • web client: Navigate to Backup and Retention | Safeguard Backup and Restore.
    • desktop client: Navigate to Administrative Tools | Settings | Backup and Retention | Safeguard Backup and Restore.
  3. As needed, you can run a backup, set a schedule for the backup, and encrypt the backup for a cluster from the primary. For more information, see Backup and Restore. (A scripted example follows.)
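
A backup can also be taken from PowerShell instead of the UI. The cmdlet names and parameters below are assumptions drawn from the safeguard-ps backup module in recent releases; verify them with Get-Command *-SafeguardBackup* for your module version.

```powershell
# Minimal sketch: run a backup on the primary and download it for safekeeping.
Import-Module safeguard-ps
Connect-Safeguard -Appliance "primary.example.com" -IdentityProvider local -Username admin

# ASSUMPTION: these backup cmdlets, their parameters, and the CreatedDate
# property are illustrative; confirm against your safeguard-ps version.
New-SafeguardBackup
$latest = Get-SafeguardBackup | Sort-Object CreatedDate -Descending | Select-Object -First 1
Export-SafeguardBackup -BackupId $latest.Id -OutFile "C:\backups\primary.sgb"

Disconnect-Safeguard
```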

To restore a physical appliance from a backup

An Appliance Administrator can restore backups as far back as Safeguard for Privileged Passwords version 2.2.0.6958. Only the data is restored; the running version is not changed.

If the administrator attempts to restore a version earlier than 2.2.0.6958, a message like the following displays: Restore failed because the backup version '[version]' is older than the minimum supported version '2.2.0.6958' for restore.

You cannot restore a backup from a version newer than the one running on the appliance. The restore will fail and a message like the following displays: Restore failed because backup version [version] is newer than the one currently running [version].

The backup version and the running version display in the Activity Center logs that are generated when Safeguard starts, completes, or fails a restore.

NOTE: If you want to use a backup file taken on a different appliance, that backup file must first be downloaded from the appliance where the backup was taken. The downloaded backup file must then be uploaded to the appliance where you want to use it before you can use the Restore option.

  1. Log in to the appliance to be restored as an Appliance Administrator.
  2. Go to Safeguard Backup and Restore:
    • web client: Navigate to Backup and Retention | Safeguard Backup and Restore.
    • desktop client: Navigate to Administrative Tools | Settings | Backup and Retention | Safeguard Backup and Restore.
  3. Select the backup to be used and click Restore. If a problematic condition is detected, the Warning for Restore of Backup dialog displays, along with details in the Restore Warnings, Warning X of X message. Click Cancel to stop the restore process and address the warning, or click Continue to move to the next warning (if any) or complete the process.

  4. If the backup is protected by a password, the Protected Backup Password dialog displays. Type the password in the Enter Backup Password text box. For more information, see Backup protection settings.

  5. When the Restore dialog displays, enter the word Restore and click OK. For more information, see Restore a backup.

The appliance is restored as a stand-alone primary appliance in Read-only mode with no replicas. (A scripted restore is sketched below.)
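
The restore can also be performed through the API, which is useful when rebuilding after a catastrophic failure. The sketch below uses hedged cmdlet names from the safeguard-ps backup module (Import-SafeguardBackup, Restore-SafeguardBackup); verify their availability and parameters in your module version.

```powershell
# Minimal sketch: upload a backup to the appliance being restored, then start
# the restore. Expect the appliance to come back as a stand-alone, read-only
# primary, as described above.
Import-Module safeguard-ps
Connect-Safeguard -Appliance "target.example.com" -IdentityProvider local -Username admin

# ASSUMPTION: cmdlet names and parameters are illustrative; if the backup is
# password- or GPG-protected, supply the secret as your module version requires.
$backup = Import-SafeguardBackup "C:\backups\primary.sgb"
Restore-SafeguardBackup -BackupId $backup.Id

Disconnect-Safeguard
```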

To rebuild a cluster

  1. Log in to the primary appliance as an Appliance Administrator.
  2. Activate the Read-only primary appliance.
    1. Go to Cluster Management:
      • web client: Navigate to Cluster | Cluster Management.
      • desktop client: In Administrative Tools, navigate to Settings | Cluster | Cluster Management.
    2. Select the node to be activated from the cluster view (left pane).
    3. Click Activate.
    4. Confirm the activate operation.

    For more information, see Activating a read-only appliance.

  3. One at a time, enroll the replica appliances to rebuild your cluster. (A scripted sketch follows this procedure.)
    1. Go to Cluster Management:
      • web client: Navigate to Cluster | Cluster Management.
      • desktop client: In Administrative Tools, navigate to Settings | Cluster | Cluster Management.
    2. Click Add Replica to join a replica appliance to the cluster.

    Once the enroll operation completes, repeat this step to add the remaining appliances back into the cluster as replicas.

    NOTE: Enrolling a replica can take up to 24 hours depending on the amount of data to be replicated and your network.

    For more information, see Enrolling replicas into a cluster.
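
Rebuilding can likewise be scripted. The cluster cmdlet and parameter names below (Enable-SafeguardClusterPrimary, Add-SafeguardClusterMember) are assumptions drawn from recent safeguard-ps releases; confirm them with Get-Command *-SafeguardCluster* and fall back to Invoke-SafeguardMethod against the cluster endpoints if they differ in your version.

```powershell
# Minimal sketch: activate the restored read-only primary, then enroll the
# replicas one at a time to rebuild the cluster.
Import-Module safeguard-ps
Connect-Safeguard -Appliance "primary.example.com" -IdentityProvider local -Username admin

# ASSUMPTION: cmdlet and parameter names are illustrative for your version.
Enable-SafeguardClusterPrimary                       # activate the read-only primary
Add-SafeguardClusterMember -ReplicaNetworkAddress "replica1.example.com"
# Repeat Add-SafeguardClusterMember for each remaining replica; each
# enrollment can take up to 24 hours depending on data volume and network.

Disconnect-Safeguard
```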
