If I were to do a sudo reboot on a Kubernetes master node, would Kubernetes be smart enough to drain itself from the cluster?
Say I have a web service running as a single pod that happened to land on the master node. If I were to "sudo reboot" the master node, would it tell another node to spin up a copy before it went down?
No, this is not expected behavior for Kubernetes. The master node acts as the control plane for all cluster management operations, while the main components running on every node are the kubelet, kube-proxy, and the container runtime (Docker, CRI-O, etc.).
When you reboot the OS on a particular node (master or worker), Kubernetes is not aware of that action; it only keeps the cluster state in the etcd key-value store, persisting the most recent data. If you want to prepare a node reboot carefully, you should put the node into maintenance first: cordon and drain it so it is removed from scheduling and all of its existing Pods are gracefully evicted, for example as shown below.
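A typical maintenance workflow looks roughly like this (the node name `master-1` is just a placeholder for your actual node name):

```
# Mark the node unschedulable and evict its Pods; DaemonSet-managed Pods are skipped
# (on older kubectl versions the second flag is called --delete-local-data)
kubectl drain master-1 --ignore-daemonsets --delete-emptydir-data

# Reboot the OS on that node
sudo reboot

# Once the node is back up and Ready, allow scheduling on it again
kubectl uncordon master-1
```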
If you define the relevant Kubernetes resource with a set of replicas, the ReplicationController (or, more commonly today, a Deployment's ReplicaSet) guarantees that the specified number of Pod replicas is running at any one time across the available nodes. It simply re-spawns Pods that fail their health checks, are deleted, or are terminated, so the desired replica count is maintained; see the sketch below.
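As a rough sketch (the deployment name `web` and image `nginx` are placeholders), running your web service with more than one replica means that losing a single node only reduces the replica count temporarily, and the controller reschedules replacement Pods onto the remaining nodes:

```
# Create the web service as a Deployment and scale it to 3 replicas
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# Check that the Pods are spread across the available nodes
kubectl get pods -o wide
```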