If the failed device was a Primary, one of the Replicas should be promoted to take over as the authoritative primary.
1. Log on to the /admin interface of the replica that will become the new primary.
2. Select System Status/Settings | Cluster Management from the menu.
3. Make sure the replica is selected in the cluster member listing.
4. Select the Run Level check box and select Maintenance from the list. Click the Change Run Level button.
5. Click the Take Over Author. Primary button, then click the Continue with Change button.
6. Once the role of this appliance changes to Primary, select the Run Level check box, and select Operational from the list.
7. Click the Change Run Level button.
8. Refresh the page. If the old primary is still appearing in the Cluster Members list, select it in the list and click the Remove Cluster Member button.
9. In the pop-up box that appears, select Remove and Retain data.
10. Next, log on to the /admin interface of any other replicas in the cluster so that they recognize the new primary. Select System Status/Settings | Cluster Management from the menu.
11. Select the replica you are logged on to from the cluster member list.
12. Select the Run Level check box, select Maintenance from the list, and click the Change Run Level button.
13. Select the new primary from the cluster member list.
14. Click the Recognize New Author. Primary button.
15. Select the replica you are logged on to from the cluster member list.
16. Select the Run Level check box.
17. Select Operational from the list.
18. Click the Change Run Level button.
19. Repeat steps 10-18 for any additional replicas.
NOTE: TPAM does not automatically redirect users to a different Primary. Users will need to use a new IP address, or the DNS/load-balancing solution will need to be updated to point to the new primary.
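Because TPAM does not redirect users itself, it is worth verifying that your DNS entry (if you use one) now resolves to the new primary. A minimal sketch, using a hypothetical hostname and IP; the resolver is injectable so the check can be exercised without touching real DNS:

```python
import socket

def points_at_new_primary(hostname, expected_ip, resolve=socket.gethostbyname):
    """Return True if `hostname` currently resolves to the new primary's IP."""
    try:
        return resolve(hostname) == expected_ip
    except OSError:
        return False  # name not resolvable yet

# Stubbed example (real use: points_at_new_primary("tpam.example.com", "10.0.0.2")):
stub = {"tpam.example.com": "10.0.0.2"}.get
print(points_at_new_primary("tpam.example.com", "10.0.0.2", resolve=stub))  # True
```

Run this after updating DNS; once it returns True against the real resolver, clients following the hostname will reach the promoted appliance (subject to DNS TTLs).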
OR
If the failed device was a Replica, it should be removed from the Cluster.
1. Log on to the /admin interface of the primary appliance.
2. Select System Status/Settings | Cluster Management from the menu.
3. Select the member that is no longer part of the cluster and click the Remove Cluster Member button.
4. Select Remove and Reset Data
Take a backup of the primary
1. Log on to the /admin interface of the primary appliance.
2. Select Backup | Online Backups from the menu
3. Click "Backup Now"
4. Monitor the results from the Backup Log tab | Results
5. The message "The backup operation has completed." is shown when the backup has completed.
6. Select the "Online Backups" tab, select the most recent backup and click the "Download" button
NOTE: It is recommended to take daily backups and download and archive them off the appliance while running on only one node.
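Since the note above recommends daily backups and offline archiving while running on a single node, the retention side of that routine can be scripted. A minimal sketch, assuming a date-stamped filename convention of our own choosing (TPAM does not mandate these names):

```python
import re
from pathlib import Path

def prune_backups(backup_dir, keep=7):
    """Delete all but the `keep` newest date-stamped backup files in `backup_dir`.
    Assumes a hypothetical naming convention, tpam-backup-YYYYMMDD.dat,
    so a plain name sort is also a date sort."""
    files = sorted(
        (p for p in Path(backup_dir).iterdir()
         if re.fullmatch(r"tpam-backup-\d{8}\.dat", p.name)),
        key=lambda p: p.name,
        reverse=True,  # newest first
    )
    for old in files[keep:]:
        old.unlink()
    return [p.name for p in files[:keep]]  # what was kept, newest first
```

Point it at the directory where downloaded backup sets land; anything not matching the pattern is left untouched, so unrelated files in the archive location are safe.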
After the replacement console appliance arrives
1. Perform initial configuration of the replacement appliance. Refer to Knowledge Article 184398, TPAM 2.5 Initial Configuration, for further guidance.
2. Set the Network Settings | IP Settings and DNS settings
3. Access TPAM from the network interface at https://<appliance IP>:8443/config/
4. Configure the web certificates of the replacement if not using the self-signed certificate. Refer to pages 97 and 98 of the TPAM Administrator Guide for more information.
5. From the /admin interface, select System Status/Settings | Date/Time Configuration. Set the correct Date & Time and NTP server, if applicable.
6. Install the upgrade patch to bring the replacement to the same version as the current primary. See How to install an upgrade patch.
7. Install all hotfixes that were installed after the upgrade patch on the current primary.
8. Place the replica into Maintenance mode.
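Steps 6 and 7 require the replacement to end up at exactly the same patch level and hotfix set as the current primary. A quick parity check is easy to script; a minimal sketch, where the version strings and hotfix names are illustrative placeholders rather than values read from an appliance:

```python
def versions_match(primary, replica):
    """True when both appliances report the same patch level, e.g. '2.5.917'.
    Components are compared numerically, since as strings '2.5.9' sorts above '2.5.10'."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(primary) == parse(replica)

def missing_hotfixes(primary_hotfixes, replica_hotfixes):
    """Hotfixes present on the primary but not on the replica, in primary install order."""
    installed = set(replica_hotfixes)
    return [h for h in primary_hotfixes if h not in installed]

print(versions_match("2.5.917", "2.5.917"))                               # True
print(missing_hotfixes(["Hotfix_9080", "Hotfix_9123"], ["Hotfix_9080"]))  # ['Hotfix_9123']
```

Anything returned by `missing_hotfixes` still needs to be installed on the replacement before it is added to the cluster.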
Add the replacement console appliance into the cluster
1. Log on to the /admin interface of the primary appliance.
NOTE: If using 2.5.916 or 2.5.917, Hotfix_9080 needs to be installed on the primary first: How to install a hotfix patch.
2. Perform another backup from the Primary as above (this will reduce the number of backup sets / amount of replication data that must be sent to the Replica).
3. Select System Status/Settings | Cluster Management from the menu
4. Click the New Cluster Member button and enter the name for the replica
5. Enter the network address of the replica, and click the Check Address button.
6. Select Replica from the role list.
7. Enter the failover timeout, and failback timeout.
8. Click the Save button. You will see a message that "Appliance at address x.x.x.x is not yet registered in the cluster".
9. Click the Make Enrollment Bundle button. This generates the key file that will be used to communicate with the replica.
10. Click the Continue with Change button
11. You will be prompted to save the enrollment bundle file. Click the OK button and save the file locally. (Make sure you have enabled pop-ups in your browser for your TPAM appliance.)
12. Log on to the /admin interface of the replacement appliance.
13. Select System Status/Settings | Cluster Management from the menu.
14. Select the Run Level check box.
15. Select Maintenance from the Run Level list.
16. Click the Change Run Level button, click the Continue with Change button.
17. Click the Select File button, then click the Browse button and select the enrollment bundle file.
18. Click the Upload button, click the Apply button, and select Continue with Change.
19. Log off the replica appliance or close the browser
20. On the primary appliance select System Status/Settings | Cluster Management from the menu.
21. Select the replica in the cluster member list. Wait for the replica run level to change from Unknown to Maintenance, then proceed to the next step. (To refresh the page, click the Details tab.)
NOTE: The replica may be visible in the cluster list but its status may be unknown for quite some time until it has fully enrolled with the primary. The time it takes to complete enrollment is dependent on the size of the backup being applied to the replica from the primary.
22. Select the Run Level check box.
23. Select Operational from the Run Level list.
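The wait in step 21 can be long, so a small polling loop saves manual refreshing. A minimal sketch; the status source is injected (it could wrap whatever monitoring you have in place), so the loop itself can be exercised without an appliance:

```python
import time

def wait_for_run_level(get_run_level, target="Maintenance",
                       timeout=3600, interval=30, sleep=time.sleep):
    """Poll until the reported run level equals `target`, e.g. once the
    replica leaves Unknown. Returns True on success, False on timeout."""
    waited = 0
    while waited <= timeout:
        if get_run_level() == target:
            return True
        sleep(interval)
        waited += interval
    return False

# Stubbed example: the status flips to Maintenance on the third poll.
states = iter(["Unknown", "Unknown", "Maintenance"])
print(wait_for_run_level(lambda: next(states), interval=1, sleep=lambda s: None))  # True
```

As the NOTE above explains, how long the replica reports Unknown depends on the size of the backup being applied from the primary, so pick `timeout` generously for large databases.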