I have four nodes (A-D) with a service1 clone (clone-max=3) and three virtual IPs. I have constraints for clone-ip1, clone-ip2, and clone-ip3, and this works well as long as only Pacemaker handles the services.
I'd like Pacemaker to "audit" the nodes automatically, so that when a service is started where no constraint allows it, Pacemaker stops it.
It seems that as long as nodes A, B, and C are active, Pacemaker doesn't care what is going on on node D.
When I force a reprobe with crm_resource -P, it stops the unnecessary service on node D.
Is there a way to make Pacemaker check all nodes? (multiple-active doesn't seem to work...)
In a Pacemaker cluster, Pacemaker expects to manage all services.
In your example, Pacemaker has no record of starting the clone-ip* resources on node D, so it will not run monitor operations there either.
crm_resource -P
tells the cluster to check all nodes for all of the services defined in the cluster and then react accordingly; this is why your IP is removed. You might be able to accomplish your goal by increasing clone-max to 4 and placing a negative-infinity location constraint in your CIB to keep the service from running on node D:
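A minimal sketch of that configuration using the crm shell, assuming the clone is called service1-clone and the fourth node is node-d (both names are placeholders; substitute your own, or make the equivalent changes with pcs or by editing the CIB XML directly):

# Raise clone-max so Pacemaker accounts for (and probes) a fourth clone instance
crm resource meta service1-clone set clone-max 4

# Forbid that instance from actually running on node D
crm configure location loc-service1-avoid-node-d service1-clone -inf: node-d

The idea is that node D then becomes a placement Pacemaker actively manages for the clone, so a probe that finds the service already running there sees an unexpected active instance and stops it.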