
One Identity Safeguard for Privileged Passwords 6.10 - Administration Guide


Removing a trusted certificate

To remove certificates from the appliance

  1. Go to the following selection, based on your client:
    • web client: Navigate to Certificates | Trusted CA Certificates.
    • desktop client: Navigate to Administrative Tools | Settings | Certificates | Trusted Certificates.
  2. Select a certificate.
  3. Click Delete Trusted CA Certificate from the details toolbar.

    IMPORTANT: Safeguard for Privileged Passwords does not allow you to remove built-in certificate authorities.

Cluster settings

Use the Cluster settings to create a clustered environment, to monitor the health of the cluster and its members, and to define managed networks for high availability and load distribution.

It is the responsibility of the Appliance Administrator or the Operations Administrator to create a cluster, monitor the status of the cluster, and define managed networks.

Before creating a Safeguard for Privileged Passwords cluster, become familiar with the Disaster recovery and clusters chapter.

  • Go to Clusters:
    • web client: Navigate to Cluster.
    • desktop client: Navigate to Administrative Tools | Settings | Cluster.
    Table 157: Cluster settings

    Cluster Management

    Where you create and manage a cluster and monitor the health of the cluster and its members.

    Managed Networks

    Where you define managed networks to distribute the task load for the clustered environment.

    Offline Workflow (automatic)

    Where you configure Offline Workflow Mode to automatically trigger if an appliance has lost consensus (quorum) and, optionally, automatically resume online workflow. You can also manually Enable Offline Workflow and Resume Online Operations from this dialog. For more information, see About Offline Workflow Mode.

    Session Appliances with SPS link

    Where you view, edit, and delete link connections when a Safeguard for Privileged Sessions (SPS) cluster is linked to Safeguard for Privileged Passwords (SPP) for session recording and auditing. For more information, see SPP and SPS sessions appliance link guidance.

  • Cluster Management

    Cluster Management allows you to create and diagnose clusters.

    The display of Cluster Management is different in the desktop client and the web client. Refer to the instructions for the client you are using.

    web client: Cluster Management

    When using Cluster Management from the web client, performing operations against other members of the cluster will incur a Cross-Origin Resource Sharing (CORS) HTTP request. This may require you to change the Trusted Servers, CORS, and Redirects setting to allow the specific host name being used in your web browser.
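The allowlist behavior described above can be sketched as follows. This is an illustration of how an origin allowlist check behind a setting like Trusted Servers, CORS, and Redirects typically works, not Safeguard's actual implementation; the function name and host names are hypothetical.

```python
# Illustrative sketch (not Safeguard's actual implementation) of an
# origin allowlist check: a cross-origin request is accepted only when
# the host name in the browser's Origin header is on the trusted list.
from urllib.parse import urlparse

def is_origin_trusted(origin, trusted_hosts):
    """Return True if the browser's Origin header matches a trusted host."""
    host = urlparse(origin).hostname
    return host is not None and host.lower() in {h.lower() for h in trusted_hosts}

# Hypothetical host names for illustration.
print(is_origin_trusted("https://sg-primary.example.com", ["sg-primary.example.com"]))  # True
print(is_origin_trusted("https://evil.example.net", ["sg-primary.example.com"]))        # False
```

This is why the setting must list the specific host name shown in your browser's address bar: any other origin is rejected by the CORS check.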

  • Navigate to Cluster | Cluster Management.

    Cluster Management grid

    • Health indicators: Health indicators display in the first column in the Cluster Management grid. Cluster members periodically query other appliances in the cluster to obtain their health information. Cluster member information and health information is cached in memory, with the most recent results displayed.

      The health indicators on the nodes indicate if cluster members are in any of these states:

      • Error: Indicates a definite problem impacting the functionality of the cluster.

      • Warning: Indicates a potential issue with the cluster.

      • Locked: Indicates the cluster is locked.

      • Healthy (green): Indicates the cluster member is in a healthy state.

      Expand the View More section to see more details.

    • Name: The name of the appliance.
    • Network Address: The IPv4 address (or IPv6 address) of the appliance configuration interface. You can modify the appliance IP address. For more information, see How do I modify the appliance configuration settings.
    • Primary: Displays Yes if the appliance is the primary.
    • Appliance State: Indicates the appliance state. For a list of available states, see Appliance states.

    When you select an appliance, the details for the appliance display on the right. The grid information displays: name, network address, primary, and state. This additional information is available:

    • Disk Space: The amount of used and free disk space.
    • Version: The appliance version number.
    • Last Health Check: Last date and time the selected appliance's information was obtained.
    • Uptime: The amount of time (days, hours, and minutes) the appliance has been running.
    • If a replica is selected, this additional information displays for the Primary:
      • Network Address: The network DNS name or the IP address of the primary appliance in the cluster
      • MAC Address: The media access control address (MAC address), a unique identifier assigned to the network interface for communications

      • Link Present: Displays either Yes or No to indicate if there is an open communication link

      • Link Latency: The amount of time (in milliseconds) it takes for the primary to communicate with the replica. Network latency is an expression of how much time it takes for a packet of data to get from one designated point to another. Ideally, latency is as close to zero as possible.
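The Link Latency value above is a round-trip time measurement. As a hedged illustration (an assumption, not the product's measurement method), latency between two hosts can be approximated by timing how long a TCP connection takes to open:

```python
# Hypothetical sketch: approximate link latency as the round-trip time
# (in milliseconds) needed to open a TCP connection to a peer.
import socket
import time

def link_latency_ms(host, port, timeout=5.0):
    """Return the time in milliseconds taken to open a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0
```

A sustained rise in this number between primary and replica is the kind of condition the Link Latency field surfaces.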

    • Errors and warnings are reported:
      • Errors: Errors are reported. For example, if an appliance is disconnected from the primary (no quorum), an error message may be: Request Workflow: Cluster configuration database health could not be determined.

      • Warnings: Warnings are reported. For example, if an appliance is disconnected from the primary (no quorum), a warning message may be: Policy Data: There is a problem replicating policy data. Details: Policy database slave IO is not running. The Safeguard primary may be inaccessible from this appliance.
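The caching behavior described for the grid (periodic queries, most recent results kept in memory) can be sketched as below. The class name, fields, and refresh interval are assumptions for illustration, not Safeguard internals.

```python
# Illustrative sketch of the described caching: health results from
# periodic queries are held in memory, and the most recent result is
# returned between refreshes. Names and interval are assumptions.
import time

class HealthCache:
    def __init__(self, refresh_seconds=30.0):
        self.refresh_seconds = refresh_seconds
        self._results = {}  # appliance name -> (timestamp, health dict)

    def update(self, appliance, health):
        """Store the latest health result from a periodic query."""
        self._results[appliance] = (time.monotonic(), health)

    def latest(self, appliance):
        """Return the most recent cached health, flagged as stale if old."""
        entry = self._results.get(appliance)
        if entry is None:
            return None
        ts, health = entry
        return {**health, "stale": (time.monotonic() - ts) > self.refresh_seconds}

cache = HealthCache()
cache.update("replica-1", {"state": "Online"})
print(cache.latest("replica-1"))  # {'state': 'Online', 'stale': False}
```

This explains why the grid can briefly show outdated information: between refreshes you see the last cached result, not a live query.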

    Toolbar actions

    desktop client: Cluster Management

    Navigate to Administrative Tools | Settings | Cluster | Cluster Management.

    The Cluster Management page is divided into left and right panes. If you do not see the right pane, click an appliance node in the left pane.

    Health indicators

    The health indicators on the nodes indicate if cluster members are in any of these states:

    • Error: Indicates a definite problem impacting the functionality of the cluster.

    • Warning: Indicates a potential issue with the cluster.

    • Locked: Indicates the cluster is locked.

    • Healthy (green): Indicates the cluster member is in a healthy state.

    Expand the View More section to see more details.

    Cluster Management left pane (desktop client)

    In the left pane, you will initially see a single primary node for the appliance you are currently logged in to. As you join appliances to the cluster, replica nodes will be shown as being connected to the primary node.

    Toolbar buttons:

    • Add Replica: Join an appliance to the primary appliance as a replica. For more information, see Enrolling replicas into a cluster.
    • Refresh: Update the list of appliances in a cluster.
    • Reset Cluster: Reset a cluster to recover a cluster that has lost consensus. For more information, see Resetting a cluster that has lost consensus.

      Caution: Resetting a cluster should be your last resort. It is recommended that you restore from a backup rather than reset a cluster.
    • Enable Offline Workflow: This button is available if the appliance has lost consensus, you are logged into the selected appliance, and you have not already put the appliance in Offline Workflow Mode. The state of the appliance will be Isolated or Lost Quorum.
      Click Enable Offline Workflow to manually place the selected appliance in Offline Workflow Mode. The appliance will run in isolation from the rest of the cluster. For more information, see Manually control Offline Workflow Mode.
    • Resume Online Operations: This button is available if the appliance has lost consensus, you are logged into the selected appliance, and the appliance is in Offline Workflow Mode. The state of the appliance will be Isolated or Lost Quorum.
      Click Resume Online Operations to manually reintegrate the appliance with the cluster and merge audit logs. For more information, see To manually resume online operations.
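The "lost consensus (quorum)" condition behind Enable Offline Workflow and Resume Online Operations follows a majority rule. The sketch below is a generic illustration of that rule, not Safeguard's internal algorithm:

```python
# Hedged illustration of consensus (quorum): a cluster member retains
# quorum only while it can reach a strict majority of all cluster
# members, itself included. Generic majority rule, not product code.

def has_quorum(reachable_members, cluster_size):
    """True if the reachable members form a strict majority of the cluster."""
    return reachable_members > cluster_size // 2

# In a 5-node cluster, an appliance reaching 3 members (including itself)
# keeps quorum; one reaching only 2 has lost quorum and is a candidate
# for Offline Workflow Mode.
print(has_quorum(3, 5))  # True
print(has_quorum(2, 5))  # False
```

This is also why clusters are deployed with an odd number of members: an even split leaves no side with a strict majority.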

    Cluster Management right pane (desktop client)

    From this pane you can run maintenance and diagnostic tasks against the selected appliance.

    On the right, you see details about the appliance and the health of the cluster member selected. Cluster members periodically query other appliances in the cluster to obtain their health information. Cluster member information and health information is cached in memory, with the most recent results displayed.

    Toolbar buttons:

    Properties

    • Appliance name: The name of the appliance.
    • IP address: The IPv4 address (or IPv6 address) of the appliance configuration interface. You can modify the appliance IP address. For more information, see How do I modify the appliance configuration settings.
    • Appliance type: Indicates either Primary or Replica.
    • Appliance state: Indicates the appliance state. For a list of available states, see Appliance states.
    • Disk Space: The amount of used and free disk space.
    • Click View More to show or hide additional information.
    • Serial Number: The serial number of the appliance

    • Uptime: The amount of time (days, hours, and minutes) the appliance has been running.

    • Primary (displays on replicas)
      • Network Address: The network DNS name or the IP address of the primary appliance in the cluster

      • MAC Address: The media access control address (MAC address), a unique identifier assigned to the network interface for communications

      • Link Present: Displays either Yes or No to indicate if there is an open communication link

      • Link Latency: The amount of time (in milliseconds) it takes for the primary to communicate with the replica. Network latency is an expression of how much time it takes for a packet of data to get from one designated point to another. Ideally, latency is as close to zero as possible.

    • Information:
      • Last Health Check: Last date and time the selected appliance's information was obtained.
      • Version: The appliance version number.

      • Errors: Errors are reported. For example, if an appliance is disconnected from the primary (no quorum), an error message may be: Request Workflow: Cluster configuration database health could not be determined.

      • Warnings: Warnings are reported. For example, if an appliance is disconnected from the primary (no quorum), a warning message may be: Policy Data: There is a problem replicating policy data. Details: Policy database slave IO is not running. The Safeguard primary may be inaccessible from this appliance.

  • Unlocking a locked cluster

    In order to maintain consistency and stability, only one cluster operation can run at a time. To ensure this, Safeguard for Privileged Passwords locks the cluster while a cluster operation is running, such as enroll, unjoin, failover, patch, reset, session module join, update IP, and audit log maintenance. While the cluster is locked, changes to the cluster configuration are not allowed until the operation completes.

    The lock notification displays as follows:

    • web client: The Appliance State column shows a red lock icon.
    • desktop client: In the Cluster view, a banner at the top of the screen explains the operation in progress, and a red lock icon next to an appliance indicates that the appliance is locking the cluster.

    You should never cancel the cluster lock for an SPP unjoin, failover, cluster reset, restore, patch, or IP address update. Other considerations:

    • If an SPP join (enroll) is taking a long time, you may cancel it during the streaming audit data step.
    • If a patch distribution is taking a long time, you may cancel it and upload the patch to the replicas directly.
    • If an audit log synchronize operation is taking a long time, or you have reason to believe it will not complete due to a down appliance in the cluster, you may cancel it. Canceling this operation requires monitoring as detailed in Cancel Audit Log Maintenance from the Audit Log Maintenance page.
    • If an audit log archive or purge operation is taking a long time, or you have reason to believe it will not complete due to a down appliance in the cluster, you may cancel it. Canceling this operation requires monitoring as detailed in Cancel Audit Log Maintenance from the Audit Log Maintenance page.
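The one-operation-at-a-time rule that the cluster lock enforces can be sketched with a non-blocking lock. This is an assumption-level illustration of the behavior, not the product's implementation; the class and method names are hypothetical.

```python
# Sketch (assumption, not product code) of the cluster lock semantics:
# a second operation is rejected while one is already running, and the
# lock is released when the operation completes or is canceled.
import threading

class ClusterLock:
    def __init__(self):
        self._lock = threading.Lock()
        self.current_operation = None

    def begin(self, operation):
        """Try to start an operation; fail if the cluster is already locked."""
        if not self._lock.acquire(blocking=False):
            return False
        self.current_operation = operation
        return True

    def release(self):
        """Unlock the cluster when the operation finishes (or is canceled)."""
        self.current_operation = None
        self._lock.release()

cluster = ClusterLock()
print(cluster.begin("patch"))     # True: lock acquired
print(cluster.begin("failover"))  # False: patch still holds the lock
cluster.release()
print(cluster.begin("failover"))  # True: lock is free again
```

Forcing an unlock corresponds to calling release while an operation is still in flight on some appliance, which is why the procedure below warns that it can leave an appliance in an inconsistent state.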

    To unlock a locked cluster

    1. Go to Cluster Management:
      • web client: Navigate to Cluster | Cluster Management.
      • desktop client: Navigate to Administrative Tools | Settings | Cluster | Cluster Management.
    2. Click the lock icon in the upper right corner of the warning banner.
    3. In the Unlock Cluster confirmation dialog, enter Unlock Cluster and click OK.

      This will release the cluster lock that was placed on all of the appliances in the cluster and close the operation.

    IMPORTANT: Care should be taken when unlocking a locked cluster. It should only be used when you are sure that one or more appliances in the cluster are offline and will not finish the current operation. If you force the cluster unlock, you may cause instability on an appliance, requiring a factory reset and possibly the need to rebuild the cluster. If you are unsure about the operation in progress, do NOT unlock the cluster.
