Unlocking a locked cluster
To maintain consistency and stability, only one cluster operation can run at a time. To enforce this, SPP locks the cluster while a cluster operation is running, such as enroll, unjoin, failover, patch, reset, session module join, IP address update, or audit log maintenance. While the cluster is locked, changes to the cluster configuration are not allowed until the operation completes.
The lock notification displays as follows:
- web client: The Appliance State will show a red lock icon.
You should never cancel the cluster lock for an SPP unjoin, failover, cluster reset, restore, patch, or IP address update. Other considerations:
- If an SPP join (enroll) is taking a long time, you may cancel it during the streaming audit data step.
- If a patch distribution is taking a long time, you may cancel it and upload the patch to the replicas directly.
- If an audit log synchronize operation is taking a long time, or you have reason to believe it will not complete due to a down appliance in the cluster, you may cancel it. Canceling this operation requires monitoring as detailed in Cancel Audit Log Maintenance from the Audit Log Maintenance page.
- If an audit log archive or purge operation is taking a long time, or you have reason to believe it will not complete due to a down appliance in the cluster, you may cancel it. Canceling this operation requires monitoring as detailed in Cancel Audit Log Maintenance from the Audit Log Maintenance page.
To unlock a locked cluster
- Go to Cluster Management:
- web client: Navigate to Cluster > Cluster Management.
- Click the lock icon in the upper right corner of the warning banner.
- In the Unlock Cluster confirmation dialog, enter Unlock Cluster and click OK.
This releases the cluster lock that was placed on all of the appliances in the cluster and ends the operation.
IMPORTANT: Use caution when unlocking a locked cluster. Only do so when you are sure that one or more appliances in the cluster are offline and will not finish the current operation. If you force the cluster unlock, you may cause instability on an appliance, requiring a factory reset and possibly a rebuild of the cluster. If you are unsure about the operation in progress, do NOT unlock the cluster.
Task delegation in a cluster
A Safeguard for Privileged Passwords cluster delegates platform management tasks (such as password and SSH key check and change) to appliances based on platform task load. The primary appliance performs the delegation and evaluates cluster member suitability using an internal fitness score, calculated by dividing the number of in-use platform task threads by the maximum number of allowed platform task threads.
The maximum number of allowed platform task threads can be adjusted by changing the MaxPlatformTaskThreads value through the Appliance/Settings API. By adjusting this number, you can tune task distribution.
IMPORTANT: Adjusting MaxPlatformTaskThreads will impact SPP's available resources for handling access requests and may impact user experience. Best practice is to engage Professional Services if you believe the value needs to be changed.
Increasing the maximum number of allowed platform task threads decreases the fitness score, thus increasing the number of tasks passed to that appliance.
The fitness score is cached and is recalculated in 8-minute intervals when the scheduler is not busy. When the scheduler is running tasks, the fitness score is calculated more frequently so the scheduler can dynamically adjust.
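To make this relationship concrete, the following sketch applies the calculation described above: the score is the ratio of in-use platform task threads to the maximum allowed, so raising MaxPlatformTaskThreads lowers the score and draws more tasks to that appliance. The helper function and the numbers are illustrative only, not part of the product.

```python
# Illustrative sketch only: the fitness score described above is the ratio of
# in-use platform task threads to the maximum allowed platform task threads.
# A lower score means the appliance is less busy and is handed more tasks.

def platform_task_fitness(in_use_threads: int, max_threads: int) -> float:
    """Hypothetical helper mirroring the documented calculation."""
    return in_use_threads / max_threads

# Two hypothetical appliances with the same workload but different
# MaxPlatformTaskThreads settings (example values only):
appliance_a = platform_task_fitness(in_use_threads=20, max_threads=50)   # 0.40
appliance_b = platform_task_fitness(in_use_threads=20, max_threads=100)  # 0.20

# Raising MaxPlatformTaskThreads lowers the score, so appliance B,
# although equally busy, is favored for new platform tasks.
print(f"A: {appliance_a:.2f}  B: {appliance_b:.2f}")
```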
The selection of a Safeguard for Privileged Sessions appliance is primarily dependent on managed network rules. However, if there are no managed network rules, or if the managed network rules result in more than one Safeguard for Privileged Sessions appliance being selected, a fitness score is used as the tie breaker. The fitness score is calculated as the percentage of disk available minus the overall load average of the Safeguard for Privileged Sessions appliance. (Load average is a Linux metric that provides a numerical indication of the overall resource capacity in use on the server.) The higher the fitness score, the more likely that the corresponding appliance will be selected.
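A minimal sketch of that tie breaker, with hypothetical appliance names and example disk and load figures (none of these values come from the product):

```python
# Illustrative sketch of the SPS tie breaker described above: when managed network
# rules leave more than one SPS appliance as a candidate, the appliance with the
# highest score (percent of disk available minus load average) is chosen.
# The data structure and values are hypothetical.

candidates = [
    {"name": "sps-1", "disk_available_pct": 70.0, "load_average": 1.5},
    {"name": "sps-2", "disk_available_pct": 55.0, "load_average": 0.3},
]

def sps_fitness(node: dict) -> float:
    return node["disk_available_pct"] - node["load_average"]

best = max(candidates, key=sps_fitness)
print(best["name"], sps_fitness(best))  # sps-1 scores 68.5 and wins the tie break
```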
Password distribution in a cluster
The Primary coordinates all password changes across the cluster. When the scheduler decides it is time to change a password, it chooses whichever node is least busy (based on its fitness score and its membership in the relevant Managed Network) and assigns the task to that cluster member. When the cluster member finishes and the change is successful, the new password is written immediately from that replica into the distributed database. The distributed database has consensus (quorum), which means that all nodes that are part of the quorum immediately receive the new password.
The same is true for sessions: if the Primary schedules a password change and someone then launches a session, the session should have the correct password.
There is a potential scenario where a password or session request has already been initiated and the Primary schedules a change during the request. In this case, the user will have to check out the password anew, but an existing live session should not be interrupted. If the user attempts to initiate the session at the exact moment the password change happens, they may get an error about the session.
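As a way to picture the assignment step, the sketch below is a hypothetical illustration, not product code (appliance names, subnets, and thread counts are made up): it filters cluster members to those serving the target's managed network and then hands the password change task to the least busy one, using the thread-based fitness score from Task delegation in a cluster.

```python
# Hypothetical illustration only: the primary assigns a password change task to the
# least busy cluster member (lowest in-use/max thread ratio) among the appliances
# that serve the asset's managed network. Names and numbers are made up.

members = [
    {"name": "spp-replica-1", "networks": {"10.1.0.0/16"}, "in_use": 12, "max": 50},
    {"name": "spp-replica-2", "networks": {"10.1.0.0/16"}, "in_use": 4,  "max": 50},
    {"name": "spp-replica-3", "networks": {"192.168.0.0/24"}, "in_use": 1, "max": 50},
]

def assign_task(target_network: str) -> dict:
    eligible = [m for m in members if target_network in m["networks"]]
    return min(eligible, key=lambda m: m["in_use"] / m["max"])

# The change for an asset in 10.1.0.0/16 goes to spp-replica-2 (score 0.08),
# even though spp-replica-3 is less busy, because it serves a different network.
print(assign_task("10.1.0.0/16")["name"])
```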
Password distribution changes when Offline Workflow is enabled on a node that has lost consensus (quorum). For more information, see About Offline Workflow Mode.
Managed Networks
Managed networks are named lists of network segments serviced by a specific Safeguard for Privileged Passwords (SPP) or Safeguard for Privileged Sessions (SPS) appliance in a clustered environment. Managed networks are used for scheduling tasks, such as password or SSH key change, account discovery, session recording, and asset discovery, to distribute the task load. Using managed networks, you can:
- Distribute the load so there is minimal cluster traffic.
- Use the appliances that are closest to the target asset to perform the actual task.
An SPP cluster has a default managed network that consists of all cluster members. Other managed networks can be defined.
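Conceptually, an asset's address is matched against the subnets of each managed network, and an address that matches nothing is covered by the default managed network. The sketch below is only an illustration of that mapping, with made-up network names, subnets, and appliance assignments:

```python
# Conceptual sketch (not product code) of how a managed network maps an asset's
# address to the network that services it: each named network is a list of
# subnets, and an unmatched asset falls back to the default managed network.
import ipaddress

managed_networks = {
    "Branch-Office": {"subnets": ["10.20.0.0/16"], "appliances": ["spp-branch-1"]},
    "Default Managed Network": {"subnets": [], "appliances": ["all cluster members"]},
}

def resolve_network(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    for name, net in managed_networks.items():
        if any(addr in ipaddress.ip_network(subnet) for subnet in net["subnets"]):
            return name
    return "Default Managed Network"

print(resolve_network("10.20.5.9"))   # Branch-Office
print(resolve_network("172.16.0.7"))  # Default Managed Network
```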
CAUTION: If the role of a managed host that belongs to a linked Safeguard for Privileged Sessions cluster is changed, or if a managed host is added to or removed from the cluster, SPP detects the change by querying each Central Management node and attempts to stay in sync with the Safeguard for Privileged Sessions cluster topology. If the Central Management node is down, SPP warns the administrator that there may be invalid policies with a message like: The session connection policy was not found, in addition to flagging each broken Access Request Policy with an Invalid notation (Security Policy Management > Entitlements > Access Request Policies tab). Based on the size of your network and other factors, this can take one to 10 minutes and, during this time window, an unavailable managed host may continue to appear on the Managed Networks page. Any requests made will be invalid and will not be able to launch sessions.
Precedence
The selection made on the Entitlement > Access Request Policy tab takes precedence over the selections on the Appliance Management > Cluster > Managed Networks page. If a Managed Networks rule includes nodes from different Safeguard for Privileged Sessions clusters, Safeguard for Privileged Passwords will only select the nodes from the cluster that was assigned on the Session Settings page of the Access Request Policy tab.
IMPORTANT: Discovery and password and SSH key check and change will not work if a managed network has been configured with a subnet but is not assigned to an appliance (the appliance is blank). If the managed network does not have an assigned appliance, a message like the following displays: No appliances in network '<NameOfEmptyNetwork>' available to execute platform task request. To resolve the issue, assign at least one appliance to manage the passwords, SSH keys, and/or sessions, or delete the managed network entry.
Go to Managed Networks:
- web client: Navigate to Appliance Management > Cluster > Managed Networks.
The Managed Networks page displays the following information about previously defined managed networks. Initially, this page contains the properties for the Default Managed Network, which implicitly includes all networks and is served by all appliances in the cluster.
Table 38: Managed Networks: Properties
- Name: The name assigned to the managed network when it was added to SPP.
- Subnets: A list of subnets included in the managed network. Double-click an entry in the Managed Networks grid to display details about the subnets associated with the selected managed network. If you have linked Safeguard for Privileged Sessions, the following apply:
  - Passwords Managed By: The SPP appliance ID, which includes the MAC address followed by the IP address of the node.
  - Sessions Managed By: If applicable, the Safeguard for Privileged Sessions appliance host name followed by the IP address of the Safeguard for Privileged Sessions node.
- Passwords Managed By: The host name and IP address of the appliances and the MAC address assigned to manage the specified subnets.
- Sessions Managed By: The host name and IP address of the cluster nodes.
- Description: The descriptive text entered when defining the managed network.
Use these toolbar buttons to define and maintain your managed networks.
Table 39: Managed Networks: Toolbar
- New: Add a managed network. For more information, see Adding a managed network.
- Delete Selected: Remove the selected managed network from SPP. You cannot delete the Default Managed Network.
- Refresh: Update the list of managed networks.
- Edit: Modify the selected managed network configuration. You cannot modify the Default Managed Network.
- Resolve Network text box: Locate an IP address in a managed network's list of subnets. For more information, see Resolving IP address.