I am trying to join a new node to an existing v1.21.3 cluster with Calico CNI. The join command gives a clusterCIDR warning.
How do I fix this subnet warning message?
```
# kubeadm join master-vip:8443 --token xxx --discovery-token-ca-cert-hash sha256:xxxx
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0809 14:10:27.717696 75868 utils.go:69] The recommended value for "clusterCIDR" in "KubeProxyConfiguration" is: 10.201.0.0/16; the provided value is: 10.203.0.0/16
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
```
Update: I was using 10.201.0.0/16 during the cluster setup and later changed it to 10.203.0.0/16. I am not sure where it is still getting the 10.201.0.0/16 subnet value from.
Here is the subnet value:
```
# sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr
- --cluster-cidr=10.203.0.0/16

# kubectl cluster-info dump | grep cluster-cidr
"--cluster-cidr=10.203.0.0/16",
"--cluster-cidr=10.203.0.0/16",
"--cluster-cidr=10.203.0.0/16",
```
Steps taken to update the pod CIDR from 10.201.0.0/16 to 10.203.0.0/16:

- Updated the `kubeadm-config` ConfigMap using `kubectl -n kube-system edit cm kubeadm-config` and set `podSubnet: 10.203.0.0/16`.
- Updated kube-controller-manager and restarted it: `sed -i 's/10.201.0.0/10.203.0.0/' /etc/kubernetes/manifests/kube-controller-manager.yaml`

After updating the IP, all config shows the subnet as 10.203.0.0, but pods are still being created in the `10.201.0.0` subnet.
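For reference, this is visible in the IP column when listing the pods:

```
kubectl get pods -A -o wide   # the IP column still shows 10.201.x.x addresses
```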
```
# kubectl get cm kube-proxy -n kube-system -o yaml | grep -i clusterCIDR
clusterCIDR: 10.203.0.0/16

# kubectl get no -o yaml | grep -i podcidr
podCIDR: 10.203.0.0/24
podCIDRs:
podCIDR: 10.203.1.0/24
podCIDRs:
podCIDR: 10.203.2.0/24
podCIDRs:
podCIDR: 10.203.3.0/24
podCIDRs:
podCIDR: 10.203.5.0/24
podCIDRs:
podCIDR: 10.203.4.0/24
podCIDRs:
podCIDR: 10.203.6.0/24
podCIDRs:
```
I managed to replicate your issue and got the same error. A few other configuration files need to be updated as well.

To fully change the pod and node IP pool, you need to update the `podCIDR` and `clusterCIDR` values in a few places:

- update the `kubeadm-config` ConfigMap - you did it already
- update the file `/etc/kubernetes/manifests/kube-controller-manager.yaml` - you did it already
- update the node(s) definition with the proper `podCIDR` value and re-add them to the cluster
- update the `kube-proxy` ConfigMap in the `kube-system` namespace
- add a new IP pool in Calico CNI, delete the old one, and re-create the deployments
Update the node(s) definition:

- Get the node name(s): `kubectl get no` - in my case it's `controller`.
- Dump the node definition to a file: `kubectl get no controller -o yaml > file.yaml`
- Edit `file.yaml` -> update the `podCIDR` and `podCIDRs` values with your new IP range, in your case `10.203.0.0` (see the sketch after this list).
- Delete the node and re-add it: `kubectl delete no controller && kubectl apply -f file.yaml`

Please note you need to do those steps for every node in your cluster.
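The fields to change sit under `spec` in the exported node manifest; a trimmed sketch (the node name and the exact per-node /24 are placeholders - every node gets its own block from the pool):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: controller          # your node name
spec:
  podCIDR: 10.203.0.0/24    # was 10.201.x.0/24
  podCIDRs:
  - 10.203.0.0/24           # was 10.201.x.0/24
```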
Update the `kube-proxy` ConfigMap in the `kube-system` namespace:

- Get the `kube-proxy` ConfigMap: `kubectl get cm kube-proxy -n kube-system -o yaml > kube-proxy.yaml`
- Edit `kube-proxy.yaml` -> update the `clusterCIDR` value with your new IP range, in your case `10.203.0.0` (see the sketch after this list).
- Delete and re-apply the `kube-proxy` ConfigMap: `kubectl delete cm kube-proxy -n kube-system && kubectl apply -f kube-proxy.yaml`
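The value lives inside the embedded `config.conf` of that ConfigMap; a trimmed sketch of the relevant part (all other fields stay as they are):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    clusterCIDR: 10.203.0.0/16   # update this line
    # ...remaining kube-proxy settings unchanged...
```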
Add a new IP pool in Calico and delete the old one:

Download the `calicoctl` binary and make it executable:
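A minimal sketch of this step, assuming a Linux amd64 host and a Calico v3.20.x installation (adjust the calicoctl version and URL to match your cluster):

```bash
curl -o calicoctl -L https://github.com/projectcalico/calicoctl/releases/download/v3.20.2/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/

# point calicoctl at the Kubernetes datastore
export DATASTORE_TYPE=kubernetes
export KUBECONFIG=~/.kube/config
```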
Add the new IP pool:
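A sketch of the new pool definition; the pool name `new-ipv4-ippool` is a placeholder, and `ipipMode`/`natOutgoing` should match what your existing pool uses:

```bash
calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-ipv4-ippool
spec:
  cidr: 10.203.0.0/16
  ipipMode: Always
  natOutgoing: true
EOF
```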
Check that the new IP pool is present: `calicoctl get ippool -o wide`
Get the current pool configuration so that the old pool can be disabled: `calicoctl get ippool -o yaml > pool.yaml`

Edit the configuration -> add `disabled: true` for `default-ipv4-ippool` in `pool.yaml`:
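Roughly, the old pool's entry should end up like this (the `IPPoolList` wrapper and export metadata are omitted here):

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.201.0.0/16
  ipipMode: Always
  natOutgoing: true
  disabled: true   # stops Calico from allocating addresses from the old pool
```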
Apply the new configuration: `calicoctl apply -f pool.yaml`
In the output of `calicoctl get ippool -o wide`, the old `default-ipv4-ippool` should now show as disabled while the new pool remains enabled.

Re-create the pods that are in the `10.201.0.0` network (in every namespace, including `kube-system`): just delete them and they should be re-created instantly in the new IP pool range, for example as sketched below. You can also delete and re-apply the Deployments.
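A sketch of that clean-up, with placeholder pod/namespace names:

```bash
kubectl get pods -A -o wide | grep '10\.201\.'   # find pods still addressed from the old pool
kubectl delete pod <pod-name> -n <namespace>     # the pod comes back with an IP from the new pool
```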
After applying those steps, there is no warning about the `clusterCIDR` value when adding a new node, and new pods are created in the proper IP pool range.