Issue:
vRA 8 (Aria) Displays "Bad Gateway" Instead of the Login Page After an IDM Restart
Cause:
IDM (VMware Identity Manager) was rebooted without using LCM (Lifecycle Manager).
Per VMware tech support, this is "expected behavior".
When IDM is rebooted without using LCM, the delegate IP (delegateIP) for the IDM cluster is not set on the primary node.
Remediation:
Follow the steps outlined in VMware KB article KB75080.
Notes:
Depending on how the cluster boots up, the primary node may not be the same node as it was before the restart.
1. SSH into one of the IDM appliances
$ ssh root@idm-node1.mindwatering.net
<enter password>
2. Get the PGPool Password Hash
[~] # cat /usr/local/etc/pgpool.pwd
a1b2c3d4bc5d6e7fa1b2c3d4bc5d6e7f
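Optionally, the hash can be stored in a shell variable so it does not have to be retyped in the later commands (a convenience sketch; PGPOOL_PWD is just an example name):
[~] # PGPOOL_PWD=$(cat /usr/local/etc/pgpool.pwd)
[~] # echo "$PGPOOL_PWD"
a1b2c3d4bc5d6e7fa1b2c3d4bc5d6e7f
The literal hash inside the quoted commands in steps 3 and 4 can then be replaced with $PGPOOL_PWD, since the outer shell expands the variable before su runs the command.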
3. Find which IDM node is the master node:
[~] # su root -c "echo -e 'a1b2c3d4bc5d6e7fa1b2c3d4bc5d6e7f'|/usr/local/bin/pcp_watchdog_info -p 9898 -h localhost -U pgpool"
idm-node1.mindwatering.net:9999 Linux idm-node1.mindwatering.net idm-node1.mindwatering.net 9999 9000 7 STANDBY
idm-node2.mindwatering.net:9999 Linux idm-node2.mindwatering.net idm-node2.mindwatering.net 9999 9000 4 MASTER
idm-node3.mindwatering.net:9999 Linux idm-node3.mindwatering.net idm-node3.mindwatering.net 9999 9000 7 STANDBY
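If you only want the master line from that output, the same command can be piped through grep (a minimal convenience sketch):
[~] # su root -c "echo -e 'a1b2c3d4bc5d6e7fa1b2c3d4bc5d6e7f'|/usr/local/bin/pcp_watchdog_info -p 9898 -h localhost -U pgpool" | grep MASTER
idm-node2.mindwatering.net:9999 Linux idm-node2.mindwatering.net idm-node2.mindwatering.net 9999 9000 4 MASTER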
4. Find the status of all of the IDM nodes
[~] # su root -c "echo -e 'a1b2c3d4bc5d6e7fa1b2c3d4bc5d6e7f' | /opt/vmware/vpostgres/current/bin/psql -h localhost -p 9999 -U pgpool postgres -c \"show pool_nodes\""
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay | last_status_change
0 | 192.168.99.100 | 5432 | up | 0.333333 | standby | 0 | false | 0 | 2023-01-02 01:02:03
1 | 192.168.99.101 | 5432 | up | 0.333333 | primary | 0 | false | 0 | 2023-01-02 01:02:03
2 | 192.168.99.102 | 5432 | up | 0.333333 | standby | 0 | false | 0 | 2023-01-02 01:02:03
(3 rows)
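To show just the primary database row from the same query (a convenience sketch built on the command above):
[~] # su root -c "echo -e 'a1b2c3d4bc5d6e7fa1b2c3d4bc5d6e7f' | /opt/vmware/vpostgres/current/bin/psql -h localhost -p 9999 -U pgpool postgres -c \"show pool_nodes\"" | grep primary
1 | 192.168.99.101 | 5432 | up | 0.333333 | primary | 0 | false | 0 | 2023-01-02 01:02:03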
5. Check whether the delegateIP entry exists in the ifconfig output.
It is likely that the delegateIP is not added to any of the 3 nodes. Check the master first. If not already on the master node, switch/ssh to the actual master node for the next command:
Log out of the current IDM node, if not already on the master node:
[~] # exit
Log in to the master node, if not already on it:
$ ssh root@idm-node2.mindwatering.net
<enter password>
[~] # ifconfig eth0:0 | grep 'inet addr:' | cut -d: -f2
<empty output>
Notes:
- If you see an IP address and Bcast value, this is not expected. Refer to the KB article; there should have been an error message in the status results in step 4 above.
- We can log in to the third/last node to confirm it also does not have the delegateIP. Since it is not the master, it should not, but we can check anyway.
Logout:
[~] # exit
Log in to the other STANDBY node to be extra sure that it does not have the master's delegateIP set.
$ ssh root@idm-node3.mindwatering.net
<enter password>
[~] # ifconfig eth0:0 | grep 'inet addr:' | cut -d: -f2
<empty output>
[~] # exit
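Alternatively, all three nodes can be checked from the admin workstation with a small ssh loop (a sketch; it assumes root ssh access to each appliance and prompts for each password in turn):
$ for n in idm-node1 idm-node2 idm-node3; do echo "== $n =="; ssh root@$n.mindwatering.net "ifconfig eth0:0 | grep 'inet addr:' | cut -d: -f2"; done
Only the node holding the delegateIP should print an address; the others should return empty output.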
6. Assign the delegateIP on the master node:
$ ssh root@idm-node2.mindwatering.net
<enter password>
Run ifconfig, look at the eth0 interface, and note the netmask:
[~] # ifconfig
... netmask 255.255.255.192 ...
Add the delegateIP designation to the eth0:0 interface:
[~] # ifconfig eth0:0 inet delegateIP netmask 255.255.255.192
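For example, if the cluster's delegateIP were 192.168.99.110 (a made-up value here; substitute your own delegateIP and the netmask noted above), the command would look like:
[~] # ifconfig eth0:0 inet 192.168.99.110 netmask 255.255.255.192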
Repeat the ifconfig command:
[~] # ifconfig eth0:0 | grep 'inet addr:' | cut -d: -f2
<delegateIP Bcast>
Restart the horizon workspace service:
[~] # service horizon-workspace restart
Notes:
- Connect to the other STANDBY nodes and restart their horizon workspace services (see the example after these notes).
- The vRA login prompt was immediately visible on the IDM VIP. However, logins against the AD domain still failed at first. We had to wait a while and try again; about 1 hour was evidently enough in our case, after which we could log in to vRA again.
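For the standby-node restarts mentioned in the first note, something like the following works from the admin workstation (a sketch; the node names match the earlier examples, with idm-node2 as the current master):
$ ssh root@idm-node1.mindwatering.net 'service horizon-workspace restart'
$ ssh root@idm-node3.mindwatering.net 'service horizon-workspace restart'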