I am trying to set up an active/passive (2-node) Linux-HA cluster with corosync and pacemaker to keep a PostgreSQL database up and running. It works via DRBD and a service IP. If node1 fails, node2 should take over, and the same applies if PostgreSQL runs on node2 and node2 fails. Everything works fine except the STONITH part.
There is a dedicated HA connection between the nodes (10.10.10.X), so I have the following interface configuration:
eth0          eth1          host
10.10.10.251  172.10.10.1   node1
10.10.10.252  172.10.10.2   node2
STONITH is enabled, and I am testing with the SSH STONITH agent to kill nodes:
crm configure property stonith-enabled=true
crm configure property stonith-action=poweroff
crm configure rsc_defaults resource-stickiness=100
crm configure property no-quorum-policy=ignore
crm configure primitive stonith_postgres stonith:external/ssh \
params hostlist="node1 node2"
crm configure clone fencing_postgres stonith_postgres
crm_mon -1
shows:
============
Last updated: Mon Mar 19 15:21:11 2012
Stack: openais
Current DC: node2 - partition with quorum
Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
2 Nodes configured, 2 expected votes
4 Resources configured.
============
Online: [ node2 node1 ]
Full list of resources:
Master/Slave Set: ms_drbd_postgres
    Masters: [ node1 ]
    Slaves: [ node2 ]
Resource Group: postgres
    fs_postgres         (ocf::heartbeat:Filesystem):    Started node1
    virtual_ip_postgres (ocf::heartbeat:IPaddr2):       Started node1
    postgresql          (ocf::heartbeat:pgsql):         Started node1
Clone Set: fencing_postgres
    Started: [ node2 node1 ]
The problem is: when I cut the connection between the eth0 interfaces, it kills both nodes. I think this is a quorum problem, because there are only 2 nodes, but I don't want to add a third node just to get a proper quorum.
Are there any ideas to solve this problem?
This is a slightly older question, but the problem presented here is based on a misconception about how and when failover works in clusters, especially two-node clusters.
The gist is: you cannot do failover testing by disabling communication between the two nodes. Doing so will result in exactly what you are seeing: a split-brain scenario with additional, mutual STONITH. If you want to test the fencing capabilities, a simple
killall -9 corosync
on the active node will do. Other ways are
crm node fence
or
stonith_admin -F

From the not quite complete description of your cluster (where is the output of
crm configure show
and
cat /etc/corosync/corosync.conf
?) it seems you are using the 10.10.10.xx addresses for messaging, i.e. Corosync/cluster communication. The 172.10.10.xx addresses are your regular/service network addresses, and you would access a given node, for example via SSH, by its 172.10.10.xx address. DNS also seems to resolve a node hostname like node1 to 172.10.10.1.
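For reference, and purely as an assumption since the corosync.conf is not shown: with this layout, ring 0 would typically be bound to the dedicated HA network, so the totem section would look roughly like this (the mcastaddr/mcastport values are only placeholders):

totem {
        version: 2
        secauth: off
        interface {
                ringnumber: 0
                bindnetaddr: 10.10.10.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}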
You have STONITH configured to use SSH, which is not a very good idea in itself, but you are probably just testing. I haven't used it myself, but I assume the SSH STONITH agent logs into the other node and issues a shutdown command, like
ssh root@node2 "shutdown -h now"
or something equivalent.

Now, what happens when you cut cluster communication between the nodes? The nodes no longer see each other as alive and well, because there is no more communication between them. Thus each node assumes it is the only survivor of some unfortunate event and tries to become (or remain) the active or primary node. This is the classic and dreaded split-brain scenario.
Part of this is to make sure the other, obviously and presumably failed node is down for good, which is where STONITH comes in. Keep in mind that both nodes are now playing the same game: trying to become (or stay) active and take over all cluster resources, as well as shooting the other node in the head.
You can probably guess what happens now.
node1 runs
ssh root@node2 "shutdown -h now"
while node2 runs
ssh root@node1 "shutdown -h now"
This doesn't use the cluster communication network (10.10.10.xx) but the service network (172.10.10.xx). Since both nodes are in fact alive and well, they have no problem issuing commands or accepting SSH connections, so both nodes shoot each other at the same time. This kills both nodes.

If you don't use STONITH, a split-brain could have even worse consequences, especially with DRBD, where you could end up with both nodes becoming Primary. Data corruption is likely, and the split-brain must be resolved manually.
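For completeness: manually resolving a DRBD split-brain means choosing one node as the split-brain victim and discarding its changes. Assuming the DRBD resource is named postgres (check your drbd.conf, the name is only a guess) and a DRBD 8.3-era installation, the recovery looks roughly like this:

# On the node whose changes you are willing to throw away (the split-brain victim):
drbdadm secondary postgres
drbdadm -- --discard-my-data connect postgres

# On the surviving node, if it is in StandAlone state:
drbdadm connect postgres

# On DRBD 8.4 and later the victim-side command is instead:
# drbdadm connect --discard-my-data postgres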
I recommend reading the material on http://www.hastexo.com/resources/hints-and-kinks which is written and maintained by the guys who contributed (and still contribute) a large chunk of what we today call "the Linux HA stack".
TL;DR: If you are cutting cluster communication between your nodes in order to test your fencing setup, you are doing it wrong. Use killall -9 corosync, crm node fence or stonith_admin -F instead. Cutting cluster communication will only result in a split-brain scenario, which can and will lead to data corruption.

You could try adding
auto_tie_breaker: 1
into the quorum section of /etc/corosync/corosync.conf. Also try reading the "Quorum and two-node clusters" chapter of the Pacemaker documentation.
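Keep in mind that auto_tie_breaker is a votequorum option and therefore needs corosync 2.x; the crm_mon output above shows the older openais/corosync 1.x stack, where it does not exist. Assuming corosync 2.x, the quorum section would look roughly like this:

quorum {
        provider: corosync_votequorum
        expected_votes: 2
        # with auto_tie_breaker, the partition containing the node with the
        # lowest nodeid survives an even split instead of both losing quorum
        auto_tie_breaker: 1
}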
Check this guide to building an HA cluster with Pacemaker: http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/index.html
This is more a question than an answer. I'm trying to set up an HA cluster configuration very similar to Mmore's, and I would really appreciate getting his configuration files (the conf files, scripts or pcs commands used to configure and launch the cluster, etc.). At the moment I'm trying to find a STONITH configuration that is independent of the available hardware and of the underlying operating system (there are several sites to equip, with different infrastructures and Linux versions; I myself do the tests on a pair of notebooks running OpenIndiana: I got corosync and pacemaker running, but I currently have difficulties with STONITH and get error messages even if I disable it).