
One Identity Safeguard for Privileged Passwords 7.0.3 LTS - Administration Guide


Disaster recovery and clusters

Safeguard for Privileged Passwords appliances can be clustered to ensure high availability. Clustering enables the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster, reducing downtime and data loss.

Another benefit of clustering is load distribution. In a managed network, clustering distributes the task load so that cluster traffic is minimized and the appliances closest to the target asset perform the work. The Appliance Administrator defines managed networks (network segments) to effectively manage asset, account, and service access requests in a clustered environment.

Primary and replica appliances

A Safeguard for Privileged Passwords cluster consists of three or five appliances. An appliance can only belong to a single cluster. One appliance in the cluster is designated as the primary. Non-primary appliances are referred to as replicas. All vital data stored on the primary appliance is also stored on the replicas. In the event of a disaster, where the primary appliance is no longer functioning, you can promote a replica to be the new primary appliance. Network configuration is done on each unique appliance, whether it is the primary or a replica.

The replicas provide a read-only view of the security policy configuration. You cannot add, delete, or modify the objects or security policy configuration on a replica appliance. On a replica, you can perform check and change operations for passwords and SSH keys, as well as set password and set SSH key (both imported and generated). Users can log in to replicas to request access, generate reports, or audit the data. Passwords, SSH keys, and sessions can be requested from any appliance in a Safeguard cluster.
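You can inspect the cluster topology programmatically. The following sketch uses the safeguard-ps PowerShell module; the Get-SafeguardClusterMember cmdlet and the property names shown are assumptions based on one version of the module, so verify them against yours.

```powershell
# Log in to any appliance in the cluster (prompts for the password).
Connect-Safeguard -Appliance sg-primary.example.com -IdentityProvider local -Username admin

# List every member and whether it is the primary (leader) or a replica.
# The IsLeader and Health.State property names are assumptions; adjust as needed.
Get-SafeguardClusterMember |
    Select-Object Name, Ipv4Address, IsLeader, @{ n = 'State'; e = { $_.Health.State } }
```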

Supported cluster configurations

The currently supported cluster configurations are:

  • 3 Node Cluster (1 Primary, 2 Replicas): Consensus is achieved when two of the three appliances are online and able to communicate. Valid states are: Online or ReplicaWithQuorum. For more information, see Appliance states.
  • 5 Node Cluster (1 Primary, 4 Replicas): Consensus is achieved when three of the five appliances are online and able to communicate. Valid states are: Online or ReplicaWithQuorum. For more information, see Appliance states.
Consensus and quorum failure

Some maintenance tasks require that the cluster has consensus (quorum). Consensus means that the majority of the members (primary or replica appliances) are online and able to communicate. Valid states are: Online or ReplicaWithQuorum. For more information, see Appliance states.

Supported clusters have an odd number of appliances so that a majority is always well defined: consensus requires that more than 50% of the appliances are online and able to communicate.

If a cluster loses consensus (also known as a quorum failure), the following automatically happens:

  • The primary appliance goes into Read-only mode.
  • Password and SSH key check and change is disabled.

When connectivity is restored between a majority of members in a cluster, consensus is automatically regained. If the consensus members include the primary appliance, it automatically converts to read-write mode and enables password and SSH key check and change.
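If you monitor the cluster with scripts, quorum loss can be detected by counting the members that are online and communicating. A minimal sketch, assuming the safeguard-ps Get-SafeguardClusterMember cmdlet and the Health.State property names; both should be verified against your module and API version.

```powershell
# Count members reporting a state that contributes to consensus.
$members = Get-SafeguardClusterMember
$online  = @($members | Where-Object { $_.Health.State -in 'Online', 'ReplicaWithQuorum' }).Count

# Consensus requires a strict majority of the cluster's members.
if ($online -le [math]::Floor($members.Count / 2)) {
    Write-Warning "Quorum lost: only $online of $($members.Count) members reachable; the primary is read-only."
}
```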

Health checks and diagnostics

The following tools are available to perform health checks and diagnose the cluster and appliances.

  • Perform a health check to monitor cluster health and appliance states. For more information, see Maintaining and diagnosing cluster members.
  • Diagnose the cluster and appliance. You can view appliance information, run diagnostic tests, view and edit network settings, and generate a support bundle. For more information, see Diagnosing a cluster member.
  • If you need to upload a diagnostic package but cannot access the UI or API, connect to the Management web kiosk (MGMT). The MGMT connection gives access to functions without authentication, such as pulling a support bundle or rebooting the appliance, so access should be restricted to as few users as possible. (A scripted alternative for support bundles follows this list.)
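When the API is reachable, a support bundle can also be pulled without the kiosk. A sketch using safeguard-ps; the Get-SafeguardSupportBundle cmdlet and its -OutFile parameter are assumptions to confirm in your module version.

```powershell
# Authenticate, then download a support bundle from the connected appliance.
Connect-Safeguard -Appliance sg1.example.com -IdentityProvider local -Username admin
Get-SafeguardSupportBundle -OutFile "C:\temp\sg1-support-$(Get-Date -Format yyyyMMdd).zip"
```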
Shut down and restart an appliance

You can shut down and restart an appliance.
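Both operations are also exposed through the appliance API. A sketch using safeguard-ps; the cmdlet names and the -Reason parameter are assumptions to verify against your module version.

```powershell
# Restart the connected appliance; the reason is recorded for auditing.
Invoke-SafeguardApplianceReboot -Reason "Applying network configuration changes"

# A shutdown works the same way, but someone needs console or physical
# access to power the appliance back on afterward:
# Invoke-SafeguardApplianceShutdown -Reason "Planned maintenance window"
```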

Run access request workflow on an isolated appliance in Offline Workflow Mode

You can enable Offline Workflow Mode either automatically or manually to force an appliance that no longer has quorum to process access requests using cached policy data, in isolation from the rest of the cluster; the appliance is then in Offline Workflow Mode.
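Manual control is available in the UI and, by our reading, through the API as well. The cmdlet names in this sketch (Enable- and Disable-SafeguardClusterOfflineWorkflow) are assumptions; check your safeguard-ps version before relying on them.

```powershell
# While connected to the isolated appliance, force it to serve access
# requests from its cached policy data.
Enable-SafeguardClusterOfflineWorkflow

# Once connectivity to the cluster is restored, resume online operations.
Disable-SafeguardClusterOfflineWorkflow
```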

Primary appliance failure: failover and backup restore

If the primary is not communicating, perform a manual failover. If that is not possible, you can use a backup to restore an appliance.
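When the remaining replicas can still reach each other, a manual failover amounts to promoting one replica to primary. A sketch using safeguard-ps; Enable-SafeguardClusterPrimary is an assumption to verify against your module version.

```powershell
# Connect to the replica that should become the new primary, then promote it.
Connect-Safeguard -Appliance sg-replica2.example.com -IdentityProvider local -Username admin
Enable-SafeguardClusterPrimary
```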

  • Unjoin and activate

    If the cluster appliances are able to communicate, you can unjoin the replica, then activate the primary so replicas can be joined.

  • Cluster reset

    If the appliance is offline or the cluster members are unable to communicate, you must use Cluster Reset to rebuild the cluster. If there are appliances that must be removed from the cluster but there is no quorum to safely unjoin, a cluster reset force-removes nodes from the cluster. For more information, see Resetting a cluster that has lost consensus.

  • Factory reset

    Perform a factory reset to recover from major problems or to clear the data and configuration settings on a hardware appliance. All data and audit history is lost and the hardware appliance goes into Maintenance mode.

Enrolling replicas into a cluster

Prior to the Appliance Administrator enrolling cluster members into a Safeguard for Privileged Passwords cluster, review the enrollment considerations that follow.

    Considerations to enroll cluster members

    • If there is an appliance in Offline Workflow Mode, resume online operations before adding another replica. For more information, see About Offline Workflow Mode.
    • Update all appliances to the same appliance build (patch) prior to building your cluster. During the cluster patch operation, access request workflow is available so authorized users can request password and SSH key releases and session access.
    • To enroll an appliance into a cluster, the appliance must communicate over port 655 UDP and port 443 TCP, and must have IPv4 or IPv6 network addresses (not mixed). If both IPv4 and IPv6 are available for the connection, IPv6 is used. For more information, see Safeguard ports. (A connectivity-check sketch follows this list.)
    • You can only enroll replica appliances to a cluster when logged in to the primary appliance (using an account with Appliance Administrator permissions).
    • You can only add one appliance at a time. The maintenance operation must be complete before adding additional replicas.
    • Enrolling a replica can take as little as five minutes or as long as 24 hours depending on the amount of data to be replicated and your network.
    • During an enroll replica operation, the replica appliance goes into Maintenance mode. The existing members of the cluster can still process access requests as long as the cluster has quorum. On the primary appliance, you will see an enrolling notice in the status bar of the cluster view, indicating that a cluster-wide operation is in progress. This cluster lock prevents you from performing additional maintenance activities.

      Once the maintenance operation (enroll replica operation) is complete, the diagram in the cluster view (left pane) shows the link latency on the connector. The appliances in the cluster are unlocked and users can once again use the features available in Safeguard for Privileged Passwords.

      TIP: The Activity Center contains events for the start and the completion of the enrollment process.

    • The primary appliance's objects and security policy configuration are replicated to all replica appliances in the cluster. Any objects (such as users and assets) or security policy configuration defined on the replica will be removed during enrollment. Existing configuration data from the primary is replicated to the replica during enrollment, and future configuration changes on the primary are replicated to all replicas.
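Before starting an enrollment, you can sanity-check TCP reachability from a Windows host with the built-in Test-NetConnection cmdlet. Note that it only probes TCP, so the UDP 655 requirement must be confirmed separately; the host name below is a placeholder.

```powershell
# Verify the replica answers on TCP 443 (API and enrollment traffic).
Test-NetConnection -ComputerName sg-replica1.example.com -Port 443

# Test-NetConnection cannot probe UDP, so check UDP 655 with your
# firewall tooling or a packet capture instead.
```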

    To enroll a replica

    1. It is recommended that you make a backup of your primary appliance before enrolling replicas to a cluster.
    2. Log in to the primary appliance as an Appliance Administrator.
    3. Go to Cluster Management:
      • web client: Navigate to Cluster > Cluster Management.
    4. Click Add Replica to join a Safeguard for Privileged Passwords Appliance to a cluster.
    5. In the Add Replica dialog, enter a network DNS name or the IP address of the replica appliance into the Network Address field, and click Connect.
    6. Your web browser redirects to the login page of the replica. Log in as normal, including any two-factor authentication. After a successful login, your web browser is redirected back to the web client.

      1. Enter a valid account with Appliance Administrator permissions.
      2. In the Add Replica confirmation dialog, enter the words Add Replica and click OK to proceed with the operation.

      Safeguard for Privileged Passwords displays a synchronizing icon and a lock icon next to the appliance being enrolled, and puts the replica appliance in Maintenance mode while it is enrolling into the cluster.

      On all of the appliances in the cluster, you will see an "enrolling" banner at the top of the cluster view, indicating that a cluster-wide operation is in progress and all appliances in the cluster are locked down.

    7. View the link latency: Once the maintenance operation (enroll replica operation) is complete, click on an appliance to see the link latency. The appliances in the cluster are unlocked and users can once again make access requests.

    8. Log in to the replica appliance as the Appliance Administrator.
      Notice that the appliance has a state of Replica (meaning it is in Read-only mode) and contains the objects and security policy configuration defined on the primary appliance.
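The same enrollment can be scripted end to end. A sketch with safeguard-ps; the Add-SafeguardClusterMember cmdlet and its parameter name are assumptions to verify against your module version.

```powershell
# From the primary, enroll a replica; you are prompted for credentials
# that are valid on the replica appliance.
Connect-Safeguard -Appliance sg-primary.example.com -IdentityProvider local -Username admin
Add-SafeguardClusterMember -ReplicaNetworkAddress sg-replica1.example.com
```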

Unjoining replicas from a cluster

Safeguard for Privileged Passwords allows the Appliance Administrator to unjoin replica appliances from a cluster. Prior to unjoining a replica from a Safeguard for Privileged Passwords cluster, review the unjoin considerations that follow.

    Considerations to unjoin cluster members
    • You can only unjoin replica appliances from a cluster.

    • To promote a replica to be the new primary and then unjoin the 'old' primary appliance, you can use the Failover option if the cluster has consensus (the majority of the appliances are online and able to communicate). For more information, see Failing over to a replica by promoting it to be the new primary. If the cluster does not have consensus, use the Cluster Reset option to rebuild your cluster. For more information, see Resetting a cluster that has lost consensus.
    • To perform an unjoin operation, the replica appliance to be unjoined can be in any state; however, the remaining appliances in the cluster must achieve consensus (online and able to communicate).
    • You can unjoin a replica appliance when logged in to any appliance in the cluster that is online, using an account with Appliance Administrator permissions.
    • When you unjoin a replica appliance from a cluster, the appliance is removed from the cluster as a stand-alone appliance that retains all of the data and security policy configuration information it contained prior to being unjoined. After the replica is unjoined, the appliance is placed in Read-only mode with the functionality identified in Read-only mode functionality. You can activate an appliance in Read-only mode so you can add, delete, and modify data, apply access request workflow, and so on. For more information, see Activating a read-only appliance.

    To unjoin a replica from a cluster

    1. Log in to an appliance in the cluster, as an Appliance Administrator.
    2. Go to Cluster Management:
      • web client: Navigate to Cluster > Cluster Management
    3. In the cluster view on the left, select the replica node to be unjoined from the cluster.
    4. In the details view on the right, click Unjoin.
    5. In the Unjoin confirmation dialog, enter the word Unjoin and click OK to proceed.

      Safeguard for Privileged Passwords displays a synchronizing icon and a lock icon next to the appliance being unjoined, and puts the replica appliance in Maintenance mode while it is unjoining from the cluster.

      Once the operation has completed, the replica appliance no longer appears in the cluster view.
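Unjoining can likewise be scripted. A sketch assuming safeguard-ps's Remove-SafeguardClusterMember cmdlet; verify the cmdlet and parameter names in your module version.

```powershell
# From any online cluster member, unjoin the replica by its network address.
Connect-Safeguard -Appliance sg-primary.example.com -IdentityProvider local -Username admin
Remove-SafeguardClusterMember -Member sg-replica2.example.com
```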

Login during Maintenance mode

If you log in to the replica appliance while Safeguard for Privileged Passwords is processing an unjoin operation, you will see the Maintenance mode screen. At the end of Maintenance mode, a button indicates that the unjoin operation completed successfully:

    • web client: Continue

Maintaining and diagnosing cluster members

Maintain and diagnose cluster members from Cluster Management:

    • web client: Navigate to Cluster > Cluster Management

When a node is selected in the Cluster view, the pane on the right displays details about the selected appliance. From this pane, you can run the following maintenance and diagnostic tasks against the selected appliance.

To fix more serious issues with a cluster, you can perform additional operations depending on the state of the cluster members, such as a failover to a new primary, unjoin and activate, a cluster reset, or a factory reset.
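Before attempting any of these, it helps to confirm whether a cluster-wide operation currently holds the cluster lock. A sketch assuming safeguard-ps's Get-SafeguardClusterOperationStatus and Unlock-SafeguardCluster cmdlets; verify both against your module version.

```powershell
# Check whether a cluster-wide operation (such as an enroll) is in progress.
Get-SafeguardClusterOperationStatus

# If a stuck operation is holding the cluster lock, it can be released.
# Use with care: interrupting a legitimate operation can leave members
# in an inconsistent state.
# Unlock-SafeguardCluster
```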
