A pod in my Kubernetes cluster is stuck in "ContainerCreating" after running a create. How do I see logs for this operation in order to diagnose why it is stuck? kubectl logs doesn't seem to work, since the container needs to be in a non-pending state.
kubectl describe pods
will list some (probably most, but not all) of the events associated with the pod, including image pulls and container starts. More detail about why the pod is stuck is often provided in those events.
However, note that sorting of events might not work correctly due to this bug: https://github.com/kubernetes/kubernetes/issues/29838
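As a sketch of that flow (the pod name stuck-pod is a placeholder):
# Inspect the pod and its events
kubectl describe pod stuck-pod
# Workaround for the sorting bug: ask for events sorted by timestamp explicitly
kubectl get events --sort-by='.lastTimestamp'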
Alternatively:
From: https://github.com/kubernetes/kubernetes/issues/29838#issuecomment-789660546
In my case I had an event relating to the pod (see the linked comment for the event itself).
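As a general way to pull up only the events for a single pod (my-pod is a placeholder name), you can filter on the involved object:
# Show only the events whose subject is the given pod
kubectl get events --field-selector involvedObject.name=my-pod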
In my case, docker's access to the internet was blocked. It was solved by using a proxy (following sandylss's comment):
minikube stop
minikube delete
export http_proxy=http://user:pass@ip:port
export https_proxy=http://user:pass@ip:port
export no_proxy=192.168.99.0/24
export no_proxy=$no_proxy,$(minikube ip)
export NO_PROXY=$no_proxy,$(minikube ip)
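Note that these exports only affect the current shell. As a sketch (the user:pass@ip:port values are placeholders), one way to make the in-cluster Docker daemon pick up the proxy is to pass it when recreating the cluster:
# Recreate the cluster with the proxy settings baked into the Docker daemon
minikube start \
  --docker-env http_proxy=http://user:pass@ip:port \
  --docker-env https_proxy=http://user:pass@ip:port \
  --docker-env no_proxy=192.168.99.0/24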
Then, to check whether docker has access to the internet, try pulling an image with docker pull in the cluster (connect to the cluster using
minikube ssh
); stop the process if it starts downloading. My second problem was a slow internet connection. Since the required docker images are on the order of 100MB, both docker containers and Kubernetes pods remained in pause and ContainerCreating states for 30 minutes. To check whether docker is downloading the images, list docker's temporary download directory (e.g. ls /var/lib/docker/tmp) in the cluster, which shows the temporary image file(s) that are being downloaded, and is empty otherwise.
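Putting those two checks together (hello-world is just an arbitrary small test image):
# Connect to the cluster VM
minikube ssh
# Inside the VM: verify docker can reach the internet; interrupt it once it starts downloading
docker pull hello-world
# Inside the VM: list in-progress image downloads; empty output means nothing is downloading
ls -lh /var/lib/docker/tmp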
If you are developing in minikube and using a VPN, docker can use your VPN via Fiddler. That is, docker will be connected to Fiddler's ip:port, and Fiddler is connected to the VPN. Otherwise, the VPN is not shared between your host and the minikube VM.
The one time I hit this, it was because my resource declarations were accidentally very, very small.
resources:
  limits:
    cpu: 1000m
    memory: 1024M
  requests:
    cpu: 1000m
    memory: 1024M
vs
resources:
  limits:
    cpu: 1000m
    memory: 1024m
  requests:
    cpu: 1000m
    memory: 1024m
Capitalizing that m makes a very large difference in resource use: 1024M means 1024 megabytes, while 1024m means 1024 millibytes, i.e. roughly one byte. I was stuck on ContainerCreating because I had not given enough memory to my container.
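To sanity-check what the API server actually parsed (my-pod is a placeholder name), you can print the container's resources back out:
# Show the resources as stored on the pod spec
kubectl get pod my-pod -o jsonpath='{.spec.containers[0].resources}'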
In my case, a pod was stuck at "ContainerCreating" because a docker image pull was hung (some layers were downloaded, others were stuck at "downloading").
kubectl describe pod
showed an event "Pulling image". I tried to pull that image manually using docker image pull ... and saw that it was hanging.
It turned out that there is a bug in concurrent pulls of layers. Changing the docker config to limit concurrency solved the problem.
I added this to the docker config (on Windows: Docker Desktop UI, Settings, Docker Engine) to limit concurrency:
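The exact snippet isn't quoted above; a sketch of the standard option for this in Docker's daemon.json is max-concurrent-downloads (the default is 3, and 1 serializes layer downloads):
{
  "max-concurrent-downloads": 1
}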