When you add user groups to an entitlement, you are specifying which people can request access to the accounts and assets governed by an entitlement's policies. It is the responsibility of the Security Policy Administrator to add user groups to entitlements.
To add a user group to entitlements
- Navigate to Administrative Tools | User Groups.
- In User Groups, select a user group from the object list and open the Entitlements tab.
- Click Add Entitlement from the details toolbar.
- Select one or more entitlements from the Entitlements dialog and click OK.
If you do not see the entitlement you are looking for and you have Security Policy Administrator permissions, you can click Create New and add the entitlement. For more information about creating entitlements, see Adding an entitlement.
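The same operation can be scripted against the appliance's REST API. The following is a minimal sketch using Python and the requests library; the route, payload shape, and placeholder values are assumptions for illustration, not the documented API, so verify them against the Safeguard for Privileged Passwords API reference.

```python
# Sketch only: associate entitlements with a user group over REST.
# The route and payload below are assumptions, not the documented API.
import requests

APPLIANCE = "https://safeguard.example.com"    # hypothetical appliance address
HEADERS = {"Authorization": "Bearer <token>"}  # token from a prior authentication

def add_entitlements_to_group(group_id: int, entitlement_ids: list[int]) -> None:
    """Add one or more entitlements to a user group (requires
    Security Policy Administrator permissions)."""
    response = requests.post(
        f"{APPLIANCE}/service/core/v3/UserGroups/{group_id}/Entitlements",  # assumed route
        json=entitlement_ids,  # assumed payload: a list of entitlement IDs
        headers=HEADERS,
        timeout=30,
    )
    response.raise_for_status()

# Example: add_entitlements_to_group(42, [7, 13])
```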
Only the Security Policy Administrator can modify user groups.
To modify a user group
- Navigate to Administrative Tools | User Groups.
- In User Groups, select a user group.
- Select the view of the user group's information you want to modify (General, Users, or Entitlements).
For example:
- To change a local user group's name or description, double-click the General information box on the General tab or click the Edit icon.
Note: You can double-click a user group name to open the General settings edit window.
- To add users to (or remove users from) the selected local user group, click the Users tab. You can multi-select members to add or remove more than one at a time.
- To add the selected user group to (or remove it from) an entitlement, click the Entitlements tab.
- To view or export the details of each operation that has affected the selected user group, switch to the History tab. For more information, see History tab (user groups).
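For scripted changes, the General properties can also be updated over the REST API. The sketch below fetches a group, edits its name and description, and saves it back; the route and field names are assumptions for illustration, not the documented schema.

```python
# Sketch only: rename a user group over REST. The route and field names
# are assumptions; consult the product's API reference for the real schema.
import requests

APPLIANCE = "https://safeguard.example.com"    # hypothetical appliance address
HEADERS = {"Authorization": "Bearer <token>"}  # token from a prior authentication

def rename_user_group(group_id: int, new_name: str, description: str) -> None:
    """Fetch a user group, change its General properties, and save it back.
    Requires Security Policy Administrator permissions."""
    url = f"{APPLIANCE}/service/core/v3/UserGroups/{group_id}"  # assumed route
    response = requests.get(url, headers=HEADERS, timeout=30)
    response.raise_for_status()
    group = response.json()
    group["Name"] = new_name           # assumed field names
    group["Description"] = description
    requests.put(url, json=group, headers=HEADERS, timeout=30).raise_for_status()
```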
It is the responsibility of the Security Policy Administrator to delete groups of local users from Safeguard for Privileged Passwords. It is the responsibility of the Authorizer Administrator or the User Administrator to delete directory groups.
When you delete a user group, Safeguard for Privileged Passwords does not delete the users associated with it.
To delete a user group
- Navigate to Administrative Tools | User Groups.
- In User Groups, select a user group from the object list.
- Click Delete Selected.
- Confirm your request.
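Deletion can likewise be scripted. A minimal sketch follows, again with an assumed route; note that, as described above, deleting a group leaves its member users in place.

```python
# Sketch only: delete a user group over REST; its member users are not deleted.
# The route below is an assumption, not the documented API.
import requests

APPLIANCE = "https://safeguard.example.com"    # hypothetical appliance address
HEADERS = {"Authorization": "Bearer <token>"}  # token from a prior authentication

def delete_user_group(group_id: int) -> None:
    """Delete a user group without deleting the users associated with it."""
    response = requests.delete(
        f"{APPLIANCE}/service/core/v3/UserGroups/{group_id}",  # assumed route
        headers=HEADERS,
        timeout=30,
    )
    response.raise_for_status()
```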
Safeguard for Privileged Passwords Appliances can be clustered to ensure high availability. Clustering enables the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster, reducing downtime and data loss.
Another benefit of clustering is load distribution. In a managed network, clustering distributes tasks so that cluster traffic is kept to a minimum and the appliances closest to the target asset perform the work. The Appliance Administrator defines managed networks (network segments) to distribute the task load and to effectively manage asset, account, and service access requests in a clustered environment.
Primary and replica appliances
A Safeguard for Privileged Passwords cluster consists of three or five appliances. An appliance can only belong to a single cluster. One appliance in the cluster is designated as the primary. Non-primary appliances are referred to as replicas. All vital data stored on the primary appliance is also stored on the replicas. In the event of a disaster, where the primary appliance is no longer functioning, you can promote a replica to be the new primary appliance. Network configuration is done on each unique appliance, whether it is the primary or a replica.
The replicas provide a read-only view of the security policy configuration. You cannot add, delete, or modify the objects or security policy configuration on a replica appliance. You can perform password and SSH key change and check operations and make password and SSH key release and session access requests. Users can log in to replicas to request access, generate reports, or audit the data. Also, passwords, SSH keys, and sessions can be requested from any appliance in a Safeguard cluster.
Supported cluster configurations
The currently supported cluster configurations follow.
- 3 Node Cluster (1 Primary, 2 Replicas): Consensus is achieved when two of the three appliances are online and able to communicate. Valid states are: Online or ReplicaWithQuorum. For more information, see Appliance states.
- 5 Node Cluster (1 Primary, 4 Replicas): Consensus is achieved when three of the five appliances are online and able to communicate. Valid states are: Online or ReplicaWithQuorum. For more information, see Appliance states.
Consensus and quorum failure
Some maintenance tasks require that the cluster has consensus (quorum). Consensus means that the majority of the members (primary or replica appliances) are online and able to communicate. Valid states are: Online or ReplicaWithQuorum. For more information, see Appliance states.
Supported clusters have an odd number of appliances, so consensus is reached only when more than 50% of the appliances are online and able to communicate.
If a cluster loses consensus (also known as a quorum failure), the following automatically happens:
- The primary appliance goes into Read-only mode.
- Password and SSH key check and change is disabled.
When connectivity is restored between a majority of members in a cluster, consensus is automatically regained. If the consensus members include the primary appliance, it automatically converts to read-write mode and enables password and SSH key check and change.
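The consensus rule reduces to a strict-majority check, which is why supported clusters have an odd number of members. A minimal sketch of the arithmetic:

```python
# The consensus (quorum) rule described above: a cluster has quorum when a
# strict majority of its member appliances are online and able to communicate.
def has_quorum(total_members: int, reachable_members: int) -> bool:
    """Return True when a strict majority of appliances can communicate."""
    return reachable_members > total_members // 2

# A 3-node cluster keeps quorum with 2 reachable members; a 5-node cluster
# needs 3. Losing quorum puts the primary into read-only mode and disables
# password and SSH key check and change.
assert has_quorum(3, 2) and not has_quorum(3, 1)
assert has_quorum(5, 3) and not has_quorum(5, 2)
```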
Health checks and diagnostics
The following tools are available to perform health checks and diagnose the cluster and appliances.
- Perform a health check to monitor cluster health and appliance states. For more information, see Maintaining and diagnosing cluster members.
- Diagnose the cluster and appliance. You can view appliance information, run diagnostic tests, view and edit network settings, and generate a support bundle. For more information, see Diagnosing a cluster member.
- If you need to upload a diagnostic package but can't access the UI or API, connect to the Management web kiosk (MGMT). The MGMT connection gives access to functions without authentication, such as pulling a support bundle or rebooting the appliance, so access should be restricted to as few users as possible.
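Health checks can also be automated by polling each cluster member and flagging any appliance whose state is not Online or ReplicaWithQuorum. The endpoint path and response field in the sketch below are assumptions for illustration, not the documented API.

```python
# Sketch only: poll an appliance's state and flag unhealthy cluster members.
# The route and response field name are assumptions, not the documented API.
import requests

HEALTHY_STATES = {"Online", "ReplicaWithQuorum"}

def check_appliance(appliance_url: str, token: str) -> str:
    """Return the appliance state, printing a warning if it is unhealthy."""
    response = requests.get(
        f"{appliance_url}/service/appliance/v3/ApplianceStatus",  # assumed route
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    state = response.json().get("State", "Unknown")  # assumed field name
    if state not in HEALTHY_STATES:
        print(f"{appliance_url}: state {state} needs attention")
    return state
```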
Shut down and restart an appliance
You can shut down and restart an appliance.
Run access request workflow on an isolated appliance in Offline Workflow Mode
You can enable Offline Workflow Mode, either automatically or manually, to force an appliance that no longer has quorum to process access requests using cached policy data, in isolation from the remainder of the cluster.
Primary appliance failure: failover and backup restore
If a primary is not communicating, perform a manual failover. If that is not possible, you can use a backup to restore an appliance.
Unjoin and activate
If the cluster appliances are able to communicate, you can unjoin the replica, then activate the primary so replicas can be joined.
Cluster reset
If the appliance is offline or the cluster members are unable to communicate, you must use Cluster Reset to rebuild the cluster. If there are appliances that must be removed from the cluster but there is no quorum to safely unjoin, a cluster reset force-removes nodes from the cluster. For more information, see Resetting a cluster that has lost consensus.
Factory reset
Perform a factory reset to recover from major problems or to clear the data and configuration settings on a hardware appliance. All data and audit history are lost, and the hardware appliance goes into maintenance mode.
You can perform a factory reset from: