I've installed a Kubernetes (v1.20.0) cluster with 3 masters and 3 nodes using `kubeadm init` and `kubeadm join`, all on Ubuntu 20.04. Now I need to update the configuration and:

- Add the `--cloud-provider=external` kubelet startup flag on all nodes, as I'm going to use vsphere-csi-driver
- Change the `--service-cidr` due to network requirements (I assume this maps to the kubeadm configuration as sketched below)
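For the `--service-cidr` part, my assumption is that it corresponds to `networking.serviceSubnet` in the kubeadm ClusterConfiguration, i.e. something along these lines (the CIDR is just a placeholder):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  # placeholder for the new service CIDR I need to move to
  serviceSubnet: 10.112.0.0/12
```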
However, I'm not entirely sure what the proper way of making these changes is.
Kubelet
Looking at `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` there is a reference to `/etc/default/kubelet`, but the comment describes it as a last resort and recommends updating `.NodeRegistration.KubeletExtraArgs` instead:
```
...
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
...
```
Where is this `.NodeRegistration.KubeletExtraArgs` and how do I change it for all nodes in the cluster?
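If I understand it correctly, this field lives in the InitConfiguration/JoinConfiguration that is only consumed at `kubeadm init`/`kubeadm join` time, so something like this minimal sketch (discovery/bootstrap details omitted):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # extra flag I want the kubelet to start with
    cloud-provider: external
# discovery: ... (token/CA details omitted in this sketch)
```

But my nodes are already joined, so it's not obvious to me how to apply this retroactively.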
control-plane
From what I understand, the apiserver and controller-manager run as static pods on each master, reading their configuration from `/etc/kubernetes/manifests/kube-<type>.yaml`. My first thought was to make the necessary changes to these files, however according to the kubernetes docs on upgrading a kubeadm cluster, kubeadm will:
* Fetches the kubeadm ClusterConfiguration from the cluster.
* Optionally backs up the kube-apiserver certificate.
* Upgrades the static Pod manifests for the control plane components.
Because I've changed the manifests manually, they are not updated in the ClusterConfiguration (`kubectl -n kube-system get cm kubeadm-config -o yaml`), so would my changes survive an upgrade this way? I suppose I could also edit the ClusterConfiguration manually with `kubectl edit cm ...`, but this seems error prone and it's easy to forget to change it every time.
According to the docs there is a way to customize the control-plane configuration, but that only seems to apply when installing the cluster for the first time. For example, `kubeadm config print init-defaults`, as the name suggests, only gives me the default values, not what's currently running in the cluster.
Attempting to extract the ClusterConfiguration from `kubectl -n kube-system get cm kubeadm-config -o yaml` and run `kubeadm init --config <config>` fails in all kinds of ways because the cluster is already initialized.
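For reference, this is roughly how I extract the ClusterConfiguration into a file (`kubeadm.yaml` is just the name I picked):

```bash
kubectl -n kube-system get cm kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
```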
kubeadm can run `kubeadm init phase control-plane`, which updates the static pod manifests but leaves the ClusterConfiguration untouched, so I would need to run the `upload-config` phase as well.
Based on the above, the workflow seems to be:

- Extract the ClusterConfiguration from `kubectl -n kube-system get cm kubeadm-config` and save it to a yaml file
- Modify the yaml file with whatever changes you need
- Apply the changes with `kubeadm init phase control-plane all --config <yaml>`
- Upload the modified config with `kubeadm init phase upload-config all --config <yaml>`
- Distribute the modified yaml file to all masters
- On each master, apply with `kubeadm init phase control-plane all --config <yaml>`
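Spelled out as commands (reusing the `kubeadm.yaml` extracted above and assuming I've already edited it), I imagine it would look roughly like this; this is only my sketch, not something I've verified end to end:

```bash
# regenerate the control-plane static pod manifests from the modified config
sudo kubeadm init phase control-plane all --config kubeadm.yaml

# upload the modified config back into the kubeadm-config ConfigMap
sudo kubeadm init phase upload-config all --config kubeadm.yaml

# then copy kubeadm.yaml to the remaining masters and run on each of them:
sudo kubeadm init phase control-plane all --config kubeadm.yaml
```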
What I'm concerned about here is the apparent disconnect between the static pod manifests and the ClusterConfiguration. Changes aren't made particularly often, so it's quite easy to forget that changing one also requires manually changing the other.
Is there no way of updating the kubelet and control-plane settings that ensures consistency between the Kubernetes components and kubeadm? I'm still quite new to Kubernetes and there is a lot of documentation around it, so I'm sorry if I've missed something obvious here.