I'm trialling SLES 12 with HAE to run a two-node file server cluster. The idea was to use SCSI persistent reservations as the fencing method, via the fence_scsi script from the stonith fence agents. Both nodes (a, b) are connected via FC to the same LUN, which is then exported via NFS from the active node only.
The issue is with fence_scsi: crm fails and complains that nodename/key isn't supplied.
primitive storage-fence stonith:fence_scsi \
params action=off devices="/dev/mapper/mpath_test" \
op monitor interval=60s timeout=0s
I end up with:
storage-fence_start_0 on fs009a 'unknown error' (1): call=18, status=Error, last-rc-change='Wed Jun 17 00:51:40 2015', queued=0ms, exec=1093ms
storage-fence_start_0 on fs009b 'unknown error' (1): call=18, status=Error, last-rc-change='Wed Jun 17 00:56:42 2015', queued=0ms, exec=1101ms
and
2015-06-17T01:34:29.156751+02:00 fs009a stonithd[25547]: warning: log_operation: storage-fence:25670 [ ERROR:root:Failed: nodename or key is required ]
2015-06-17T01:34:29.156988+02:00 fs009a stonithd[25547]: warning: log_operation: storage-fence:25670 [ ]
2015-06-17T01:34:29.157234+02:00 fs009a stonithd[25547]: warning: log_operation: storage-fence:25670 [ ERROR:root:Please use '-h' for usage ]
2015-06-17T01:34:29.157460+02:00 fs009a stonithd[25547]: warning: log_operation: storage-fence:25670 [ ]
Now, if nodename is supplied, it doesn't complain. But then I don't understand the fencing configuration.
Should I set up two stonith:fence_scsi resources, each "stickied" to one of the two nodes?
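To make the question concrete, here is a sketch of what I imagine that two-resource setup would look like in crm syntax. This is my guess, not a tested config: I'm assuming nodename names the node to be fenced, and that each resource should be banned from the node it fences (the resource names are made up):

```shell
# Hypothetical sketch: one fence_scsi resource per node to be fenced.
# Assumes nodename = victim node; each resource is kept off its own victim.
crm configure primitive fence-fs009a stonith:fence_scsi \
    params devices="/dev/mapper/mpath_test" nodename=fs009a action=off \
    op monitor interval=60s
crm configure primitive fence-fs009b stonith:fence_scsi \
    params devices="/dev/mapper/mpath_test" nodename=fs009b action=off \
    op monitor interval=60s
crm configure location loc-fence-fs009a fence-fs009a -inf: fs009a
crm configure location loc-fence-fs009b fence-fs009b -inf: fs009b
```

Is this the intended pattern, or is there a single-resource way to do it?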
For comparison, this is an example from RHEL that takes care of the whole thing with no additional constraints (and it works!):
pcs stonith create my-scsi-shooter fence_scsi devices=/dev/sda meta provides=unfencing
Reference from RHEL documentation
Note that SLES 12 still uses crm while RHEL uses pcs. Also, in SLES the meta attribute provides
doesn't exist. Is there a way to translate the RHEL pcs command to SLES?
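The word-for-word translation I would expect (again, untested; I'm simply mapping the pcs arguments onto crm syntax) would be something like:

```shell
# Naive pcs-to-crm translation of the RHEL example.
# On my SLES 12 system, crm rejects the provides=unfencing meta attribute.
crm configure primitive my-scsi-shooter stonith:fence_scsi \
    params devices="/dev/sda" \
    meta provides=unfencing
```

Since crm doesn't accept provides here, I don't know what the SLES equivalent of meta provides=unfencing is supposed to be.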
Here is the complete config:
# crm configure show
node 739719956: fs009a \
attributes maintenance=off standby=off
node 739719957: fs009b \
attributes maintenance=off standby=off
primitive clusterIP IPaddr2 \
params ip=172.23.59.22 cidr_netmask=25 \
op monitor interval=10s timeout=20s \
op stop interval=0s timeout=20s \
op start interval=0 timeout=20s
primitive fs_storage_test Filesystem \
params device="/dev/mapper/mpath_test_part1" directory="/TEST" fstype=ext4 \
op monitor timeout=40 interval=20 \
op start timeout=60 interval=0 \
op stop timeout=60 interval=0 \
meta target-role=Started
primitive nfs-server systemd:nfsserver \
op monitor interval=60 timeout=15 \
op start interval=0 timeout=15 \
op stop interval=0 timeout=15
primitive storage-fence stonith:fence_scsi \
params action=off devices="/dev/mapper/mpath_test" verbose=false \
op monitor interval=60s timeout=0s \
meta target-role=Started
group nas-service clusterIP fs_storage_test nfs-server \
meta target-role=Started
location constraint-location-a nas-service 100: fs009a
property cib-bootstrap-options: \
dc-version=1.1.12-ad083a8 \
cluster-infrastructure=corosync \
cluster-name=fs009 \
stonith-enabled=true \
no-quorum-policy=stop \
last-lrm-refresh=1434493344
rsc_defaults rsc-options: \
resource-stickiness=100
corosync.conf http://pastebin.com/M5sr7htC
corosync 2.3.3
pacemaker 1.1.12