I've installed a Kubernetes (v1.20.0) cluster with 3 masters and 3 nodes using `kubeadm init` and `kubeadm join`, all on Ubuntu 20.04. Now I need to update the configuration and:

- Add the `--cloud-provider=external` kubelet startup flag on all nodes, as I'm going to use vsphere-csi-driver
- Change the `--service-cidr` due to network requirements
However, I'm not entirely sure of the proper way to make these changes.
Kubelet
Looking at `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`, there is a reference to `/etc/default/kubelet`, but it's considered a last resort and the comment recommends updating `.NodeRegistration.KubeletExtraArgs` instead:

```
...
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
...
```
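So the last-resort route would presumably be something like this (my sketch, using the flag from my use case):

```
# /etc/default/kubelet -- sourced by the kubeadm drop-in as KUBELET_EXTRA_ARGS
KUBELET_EXTRA_ARGS=--cloud-provider=external
```

followed by `systemctl restart kubelet` on each node.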
Where is this `.NodeRegistration.KubeletExtraArgs`, and how do I change it for all nodes in the cluster?
control-plane
From what I understand, the apiserver and controller-manager run as static pods on each master, reading their configuration from `/etc/kubernetes/manifests/kube-<type>.yaml`. My first thought was to make the necessary changes to these files; however, according to the Kubernetes docs on upgrading a kubeadm cluster, kubeadm:
* Fetches the kubeadm ClusterConfiguration from the cluster.
* Optionally backs up the kube-apiserver certificate.
* Upgrades the static Pod manifests for the control plane components.
Because I've changed the manifests manually, the changes are not reflected in the ClusterConfiguration (`kubectl -n kube-system get cm kubeadm-config -o yaml`). Would my changes survive an upgrade this way? I suppose I could also edit the ClusterConfiguration manually with `kubectl edit cm ...`, but this seems error prone and it's easy to forget to change it every time.
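For reference, the relevant parts of that ClusterConfiguration look roughly like this (my sketch based on the v1beta2 kubeadm API used by v1.20; the values shown are the kubeadm defaults):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  serviceSubnet: 10.96.0.0/12   # this is what --service-cidr maps to
apiServer:
  extraArgs: {}                 # extra kube-apiserver flags would live here
```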
According to the docs there is a way to customize the control-plane configuration, but that seems to apply only when installing the cluster for the first time. For example, `kubeadm config print init-defaults`, as the name suggests, only gives me the default values, not what's currently running in the cluster.
Attempting to extract the ClusterConfiguration from `kubectl -n kube-system get cm kubeadm-config -o yaml` and run `kubeadm init --config <config>` fails in all kinds of ways because the cluster is already initialized.
Kubeadm can run `kubeadm init phase control-plane`, which updates the static pod manifests but leaves the ClusterConfiguration untouched, so I would need to run the `upload-config` phase as well.
Based on the above, the workflow seems to be (sketched as commands after this list):

- Extract the ClusterConfiguration from `kubectl -n kube-system get cm kubeadm-config` and save it to a yaml file
- Modify the yaml file with whatever changes you need
- Apply the changes with `kubeadm init phase control-plane all --config <yaml>`
- Upload the modified config with `kubeadm init phase upload-config all --config <yaml>`
- Distribute the modified yaml file to all masters
- For each master, apply with `kubeadm init phase control-plane all --config <yaml>`
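A minimal sketch of those steps as commands (assuming the kubeadm-config ConfigMap's data key is `ClusterConfiguration`, as it is in v1.20; file names are illustrative):

```bash
# 1. Extract the live ClusterConfiguration into a file
kubectl -n kube-system get cm kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > clusterconfig.yaml

# 2. Edit clusterconfig.yaml with the required changes

# 3. Regenerate the control-plane static pod manifests from it
sudo kubeadm init phase control-plane all --config clusterconfig.yaml

# 4. Upload the modified config back to the kubeadm-config ConfigMap
sudo kubeadm init phase upload-config all --config clusterconfig.yaml

# 5. Copy clusterconfig.yaml to the other masters and repeat step 3 on each
```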
What I'm concerned about here is the apparent disconnect between the static pod manifests and the ClusterConfiguration. Changes aren't made particularly often, so it's quite easy to forget that changing one also requires changing the other, manually.

Is there no way of updating the kubelet and control-plane settings that ensures consistency between the Kubernetes components and kubeadm? I'm still quite new to Kubernetes and there is a lot of documentation around it, so I'm sorry if I've missed something obvious here.
I will try to address both of your questions.
1. Add the `--cloud-provider=external` kubelet startup flag on all nodes
`KubeletExtraArgs` can contain any arguments and parameters supported by the kubelet; they are documented here. You need to run the `kubelet` command with the proper flags in order to modify it. Also, notice that the flag you are about to use is going to be removed in k8s v1.23.

EDIT:
To better address your question regarding `.NodeRegistration.KubeletExtraArgs`: these are also elements of the kubeadm init configuration file, for example:
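A minimal sketch, assuming the v1beta2 kubeadm API that ships with v1.20 (`nodeRegistration.kubeletExtraArgs` is the YAML counterpart of `.NodeRegistration.KubeletExtraArgs`):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external   # becomes --cloud-provider=external on the kubelet
```

The same `nodeRegistration` block exists on `JoinConfiguration`, which is how you would set it for nodes joining the cluster.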
You can also find more details regarding NodeRegistrationOptions, as well as more information on the fields and usage of the configuration.
Also, note that:
EDIT2:
`kubeadm init` is supposed to be used only once, when creating a cluster, whether you use it with flags or with a config file. You cannot change the configs by executing it again with different values. Here you will find info regarding kubeadm and its usage. Once the cluster is set up, kubeadm should be dropped and changes should be made directly to the static pod manifests.

2. Change the `--service-cidr` due to network requirements
This is more complicated. You could try to do it similarly to what is described here or here, but that approach is prone to mistakes and rather not recommended.
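For context, such manual approaches typically boil down to hand-editing the service range flag in the control-plane static pod manifests on every master (a rough sketch below; the exact steps vary by guide, and existing Services keep their old ClusterIPs until recreated):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
# The kubelet watches this directory and restarts the pod when the file changes.
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-cluster-ip-range=10.96.0.0/12   # the cluster-side flag behind --service-cidr
```

The same range is also set on the kube-controller-manager, which is part of why doing this by hand is error prone.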
The more feasible and safer way would be to simply recreate the cluster with `kubeadm reset` and `kubeadm init --service-cidr <new-cidr>`. The option to automatically change the CIDRs was not even expected from the Kubernetes perspective. So in short, `kubeadm reset` is the way to go here.
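A minimal sketch of that recreation (destructive: it wipes the existing cluster; the CIDR value is just an example):

```bash
# On every node: tear down what kubeadm set up (this destroys the cluster!)
sudo kubeadm reset

# On the first master: re-initialize with the new service CIDR (example value)
sudo kubeadm init --service-cidr 10.112.0.0/12

# Re-join the remaining masters and workers with kubeadm join as before
```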
With respect to `$KUBELET_EXTRA_ARGS`: multiple sources, such as this one, point to adding lines like the one sketched below to the `10-kubeadm.conf` file, e.g. to set the environment for a custom directory for static pods (here for example), instead of using the CLI as it suggests in the docs. If you google for `$KUBELET_EXTRA_ARGS` you'll find a lot of examples wrt the aforementioned `10-kubeadm.conf` file.
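For illustration only, such an override line might look like this (a sketch; the first flag is the one from the question, and `--pod-manifest-path` stands in for the custom static pod directory those sources mention):

```
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
# An extra Environment line so the existing $KUBELET_EXTRA_ARGS expansion
# in ExecStart picks up additional kubelet flags.
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external --pod-manifest-path=/etc/kubernetes/custom-manifests"
```

After editing a unit drop-in, a `systemctl daemon-reload` and `systemctl restart kubelet` are needed for it to take effect.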