Resource Management for Pods and Containers describes how to set resource requests and limits for "regular" pods in Kubernetes. Is there a supported/recommended way to set these limits for control plane components such as kube-apiserver?
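For a regular pod, that is just the usual `resources` stanza — a minimal example for context (the image and values are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx            # arbitrary example image
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
      limits:
        memory: "512Mi"
```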
Things I considered:
- Modifying the static manifests, e.g. `/etc/kubernetes/manifests/kube-apiserver.yaml`. This could work, but it will be overwritten by kubeadm during the next upgrade.
- Setting the `kube-reserved` or `system-reserved` kubelet flags. This could work too, but again: they are defined in just one ConfigMap (e.g. `kubelet-config-1.21`) and will be overwritten by kubeadm during a node upgrade. Also, the same limits would apply to control plane nodes and worker nodes alike, and I don't want that.
I could overcome this with something like Ansible, but then Ansible would be "fighting" with kubeadm, and I'd like to avoid that.
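For reference, the kubelet settings from the second bullet look roughly like this (a sketch only; the reservation values are arbitrary):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Resources set aside for Kubernetes system daemons (kubelet, container runtime)
kubeReserved:
  cpu: "500m"
  memory: "1Gi"
# Resources set aside for OS daemons (systemd, sshd, ...)
systemReserved:
  cpu: "250m"
  memory: "512Mi"
```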
What problem am I trying to solve?
I have a small homelab Kubernetes installation. I'd like to allow running regular pods on the control plane node(s), but I also want to be able to reserve some resources (primarily memory) for the control plane components. That is, I'd like to set `requests` on things like kube-apiserver so that the scheduler knows not to place other pods (which will also have appropriate `requests`) in their place.
Yes, you can use `kubeadm init` with the `--patches` command-line flag. Take a look at this GitHub page; the documentation for this feature may also be of interest. See also the official documentation: Customizing the control plane with patches. Here's an example of how to set resources on kube-apiserver:
Create a `kube-apiserver.yaml` file in some directory (e.g. `/home/user/patches`) with the following contents:
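A minimal sketch of such a patch (strategic merge is kubeadm's default patch type; the request values below are placeholders to tune for your cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver      # matched by name against the generated manifest
    resources:
      requests:
        cpu: "250m"           # placeholder value
        memory: "512Mi"       # placeholder value
```

kubeadm applies the patch on top of the static Pod manifest it generates, so the resulting `/etc/kubernetes/manifests/kube-apiserver.yaml` carries the requests even after the manifest is regenerated — as long as you remember the flag on every upgrade.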
Then pass the `--patches` flag every time during a node upgrade: use `kubeadm upgrade node --patches /home/user/patches/` or `kubeadm upgrade apply v1.22.4 --patches /home/user/patches/`.
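The same patch directory can also be supplied when the cluster is first created, e.g. `kubeadm init --patches /home/user/patches/`.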
The other option is to supply extra flags to the control plane components. For this, check this guide: Customizing the control plane with flags in ClusterConfiguration:
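A sketch of what that looks like, assuming kubeadm's v1beta3 config API (the flag chosen here is purely illustrative, and note that this mechanism sets component command-line flags rather than Pod resources):

```yaml
# kubeadm-config.yaml -- pass it with: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.4
apiServer:
  extraArgs:
    # keys are kube-apiserver flag names without the leading dashes
    max-requests-inflight: "200"   # illustrative flag/value
```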