I am busy setting up a new k8s cluster. I am using RKE with max-pods set to 200 via the kubelet extra_args:
```yaml
kubelet: # https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/#extra-args
  extra_args:
    max-pods: 200 # https://forums.rancher.com/t/solved-setting-max-pods/11866/5
```
How do I check whether the running node has been created with the correct setting?
In the Kubernetes docs regarding building large clusters we can read that, at v1.17, Kubernetes supports clusters with no more than 5000 nodes, 150000 total pods, 300000 total containers, and 100 pods per node.
In GKE there is a hard limit of 110 pods per node because of the available IP addresses. This is described in Optimizing IP address allocation and in Quotas and limits.
As for setting max pods in Rancher, here is a solution: [Solved] Setting Max Pods. There is also a discussion about Increase maximum pods per node.
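For context, the shape of the cluster.yml change discussed there is roughly the following. This is a minimal sketch, assuming the services.kubelet.extra_args map from the RKE docs linked in the question; adjust the value to your needs:

```yaml
# cluster.yml (fragment) - pass the kubelet flag through RKE's extra_args map
services:
  kubelet:
    extra_args:
      max-pods: 200   # raise the kubelet default of 110 pods per node to 200
```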
I hope this provides a bit more insight into the limits.
The following command will return the maximum pods value for `<node_name>`:
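The exact command was not preserved in this copy; a typical way to read this value, assuming kubectl is configured against the cluster, is to query the node object with jsonpath:

```bash
# pod capacity the kubelet reports for a single node
kubectl get node <node_name> -o jsonpath='{.status.capacity.pods}{"\n"}'

# allocatable pods (capacity minus system reservations) is usually the same number
kubectl get node <node_name> -o jsonpath='{.status.allocatable.pods}{"\n"}'
```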
edit: fixed typo in my command, thanks @Shtlzut.
I found this is the best way. Refer to the table here: https://www.stackrox.com/post/2020/02/eks-vs-gke-vs-aks/ and https://learnk8s.io/kubernetes-node-size#:~:text=Most%20managed%20Kubernetes%20services%20even,of%20the%20type%20of%20node.
Most managed Kubernetes services even impose hard limits on the number of pods per node:

- On Amazon Elastic Kubernetes Service (EKS), the maximum number of pods per node depends on the node type and ranges from 4 to 737.
- On Google Kubernetes Engine (GKE), the limit is 100 pods per node, regardless of the type of node.
- On Azure Kubernetes Service (AKS), the default limit is 30 pods per node, but it can be increased up to 250.
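To compare these limits against your own cluster, one option (again assuming kubectl access) is to print the pod capacity reported by every node:

```bash
# list each node together with the pod capacity its kubelet reports
kubectl get nodes -o custom-columns='NAME:.metadata.name,MAX_PODS:.status.capacity.pods'
```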