I am managing a 3-node oVirt 4.3.7 cluster with a hosted-engine appliance; the nodes are also GlusterFS nodes. The systems are:
- ovirt1 (node at 192.168.40.193)
- ovirt2 (node at 192.168.40.194)
- ovirt3 (node at 192.168.40.195)
- ovirt-engine (engine at 192.168.40.196)
The ovirt-ha-agent and ovirt-ha-broker services are continually restarting on ovirt1 and ovirt3, and this does not seem healthy (the first notice we had of this problem was the logs for these services filling up on those systems).
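For reference, the restart loop can be confirmed directly from systemd on one of the affected hosts, e.g.:
[root@ovirt1 ~]# systemctl status ovirt-ha-agent.service ovirt-ha-broker.service
[root@ovirt1 ~]# journalctl -u ovirt-ha-agent -u ovirt-ha-broker --since "1 hour ago"
(systemctl shows the units cycling through automatic restarts, and the journal shows the same repeated start/exit pattern that is filling the service logs.)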
All indications from the GUI consoles are that ovirt-engine is running on ovirt3. I tried migrating ovirt-engine to ovirt2, but the migration failed without further explanation.
Users are able to create, start, and stop VMs on all three nodes without issue.
I am seeing the following output from gluster-eventsapi status and hosted-engine --vm-status on each of the nodes:
ovirt1:
[root@ovirt1 ~]# gluster-eventsapi status
Webhooks:
http://ovirt-engine.low.mdds.tcs-sec.com:80/ovirt-engine/services/glusterevents
+---------------+-------------+-----------------------+
| NODE | NODE STATUS | GLUSTEREVENTSD STATUS |
+---------------+-------------+-----------------------+
| 192.168.5.194 | UP | OK |
| 192.168.5.195 | UP | OK |
| localhost | UP | OK |
+---------------+-------------+-----------------------+
[root@ovirt1 ~]# hosted-engine --vm-status
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
ovirt2:
[root@ovirt2 ~]# gluster-eventsapi status
Webhooks:
http://ovirt-engine.low.mdds.tcs-sec.com:80/ovirt-engine/services/glusterevents
+---------------+-------------+-----------------------+
| NODE | NODE STATUS | GLUSTEREVENTSD STATUS |
+---------------+-------------+-----------------------+
| 192.168.5.195 | UP | OK |
| 192.168.5.193 | UP | OK |
| localhost | UP | OK |
+---------------+-------------+-----------------------+
[root@ovirt2 ~]# hosted-engine --vm-status
--== Host ovirt2.low.mdds.tcs-sec.com (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt2.low.mdds.tcs-sec.com
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score : 0
stopped : False
Local maintenance : False
crc32 : e564d06b
local_conf_timestamp : 9753700
Host timestamp : 9753700
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=9753700 (Wed Mar 25 17:45:50 2020)
host-id=1
score=0
vm_conf_refresh_time=9753700 (Wed Mar 25 17:45:50 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Apr 23 21:29:10 1970
--== Host ovirt3.low.mdds.tcs-sec.com (id: 3) status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt3.low.mdds.tcs-sec.com
Host ID : 3
Engine status : unknown stale-data
Score : 3400
stopped : False
Local maintenance : False
crc32 : 620c8566
local_conf_timestamp : 1208310
Host timestamp : 1208310
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1208310 (Mon Dec 16 21:14:24 2019)
host-id=3
score=3400
vm_conf_refresh_time=1208310 (Mon Dec 16 21:14:24 2019)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
ovirt3:
[root@ovirt3 ~]# gluster-eventsapi status
Webhooks:
http://ovirt-engine.low.mdds.tcs-sec.com:80/ovirt-engine/services/glusterevents
+---------------+-------------+-----------------------+
| NODE | NODE STATUS | GLUSTEREVENTSD STATUS |
+---------------+-------------+-----------------------+
| 192.168.5.193 | DOWN | NOT OK: N/A |
| 192.168.5.194 | UP | OK |
| localhost | UP | OK |
+---------------+-------------+-----------------------+
[root@ovirt3 ~]# hosted-engine --vm-status
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
The steps I've taken so far are:
- Found that the logs for the ovirt-ha-agent and ovirt-ha-broker services are not rotating correctly on nodes ovirt1 and ovirt3; the logs show the same failure on both nodes (see the storage-level checks after this list). The broker.log contains this statement repeated frequently:
MainThread::WARNING::2020-03-25 18:03:28,846::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: [Errno 5] Input/output error: '/rhev/data-center/mnt/glusterSD/ovirt2:_engine/182a4a94-743f-4941-89c1-dc2008ae1cf5/ha_agent/hosted-engine.lockspace'
- Found that the RHEV documentation suggests running hosted-engine --vm-status to understand the problem; that output (above) suggests that ovirt1 is not completely part of the cluster.
- Asked on the oVirt forum yesterday morning, but since I am new there my question needs moderator review, and that hasn't happened yet (if the users of this cluster weren't all suddenly working from home, and suddenly dependent upon it, I wouldn't be worried about waiting a few days).
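Since the broker.log error above points at the hosted-engine lockspace file on the gluster mount, one way to tell whether the failure is at the storage layer rather than in the HA services themselves is to probe that path directly on an affected host, for example:
[root@ovirt1 ~]# ls -l /rhev/data-center/mnt/glusterSD/ovirt2:_engine/182a4a94-743f-4941-89c1-dc2008ae1cf5/ha_agent/
[root@ovirt1 ~]# dd if=/rhev/data-center/mnt/glusterSD/ovirt2:_engine/182a4a94-743f-4941-89c1-dc2008ae1cf5/ha_agent/hosted-engine.lockspace of=/dev/null bs=1M count=1
If these also fail with an input/output error (the same Errno 5 reported in broker.log), the problem is in the gluster mount rather than in ovirt-ha-broker itself.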
How should I recover from this situation? (I suspect I need to repair something in the GlusterFS cluster first, but I haven't found a hint and don't have the vocabulary to form the right search query.)
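For the gluster side, the generic health checks I know of are (assuming the engine volume is named engine, as the mount path above suggests):
[root@ovirt1 ~]# gluster peer status
[root@ovirt1 ~]# gluster volume status engine
[root@ovirt1 ~]# gluster volume heal engine info
These should show whether a peer is disconnected, whether any brick is offline, and whether the volume has files pending heal or in split-brain.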
UPDATE: After restarting glusterd on ovirt3, the GlusterFS cluster appears to be healthy, but with no change in the behavior of the oVirt services.
The steps required to recover from the above situation amounted to running the following on ovirt3:
This caused ovirt-engine to start on ovirt2. After that, I restarted ovirt-ha-broker.service and ovirt-ha-agent.service on ovirt3.
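For completeness, that final restart and the follow-up check amount to:
[root@ovirt3 ~]# systemctl restart ovirt-ha-broker.service ovirt-ha-agent.service
[root@ovirt3 ~]# hosted-engine --vm-status
with --vm-status now expected to show live (non-stale) data for all three hosts.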