While trying to configure an experimental Kubernetes cluster (a few VMs on my laptop) to be "highly available", I found the advice to do this using the combination of keepalived and haproxy ( https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing ).
Looking at the configuration settings, I read:
${STATE} is MASTER for one and BACKUP for all other hosts, hence the virtual IP will initially be assigned to the MASTER.
${PRIORITY} should be higher on the master than on the backups. Hence 101 and 100 respectively will suffice.
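For context, the keepalived configuration template from that document looks roughly like the fragment below (the variable names come from the linked document; the concrete values substituted here are illustrative):

```
vrrp_instance VI_1 {
    state MASTER                 # ${STATE}: MASTER on one node, BACKUP on the others
    interface eth0               # ${INTERFACE}: NIC that carries the virtual IP
    virtual_router_id 51         # ${ROUTER_ID}: must be identical on all nodes
    priority 101                 # ${PRIORITY}: 101 on the master, 100 on the backups
    authentication {
        auth_type PASS
        auth_pass mypassword     # ${AUTH_PASS}: shared secret, identical on all nodes
    }
    virtual_ipaddress {
        192.168.56.100           # ${APISERVER_VIP}: the floating virtual IP
    }
}
```

So apart from `state` and `priority`, the files are already identical on every node; it is precisely these two per-node values that I am asking about.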
These settings surprise me: apparently I have to choose which of these systems is to be the initial master, and I have to hard-code this choice in the nodes themselves.
To me, this "highly available" setup deviates from the "pets vs. cattle" analogy I find elsewhere in Kubernetes.
Other systems, such as HBase, have a similar setup (one active and multiple standby masters), yet all nodes are configured identically; leader election is done via ZooKeeper.
Is there a way to configure keepalived (for use with Kubernetes) such that all nodes have the same configuration and failover still works correctly?