I have an Ubuntu Server in my basement running MicroK8s, installed via Snap. I'm trying to create a simple pod using kubectl run
that I can exec into for debugging purposes. But I keep failing.
At first, I tried this command:
kubectl run -it --rm --restart=Never busybox --image=busybox -- /bin/ash
But every time I try to run that, I get this error:
pod "busybox deleted"
pod default/busybox terminated (ContainerCannotRun)
failed to create OCI runtime control socket: stat /run/user/0/snap.microk8s: no such file or directory: unknown
Then I decided to try and run the container first, and exec into second, as two separate commands. Surprisingly, the initial pod creation actually works, a la this command:
kubectl run --image=busybox --restart=Never busybox --command -- tail -f /dev/null
But then when I try to exec into it using this command...
kubectl exec -it busybox -- /bin/ash
...I wind up with this error:
failed to create runc console socket: stat /run/user/0/snap.microk8s: no such file or directory: unknown
command terminated with exit code 126
Both errors only come into play once I try to connect to a running pod, and both errors reference /run/user/0/snap.microk8s. I'm not really sure what those errors mean, though. Is that a problem with my configuration? Or am I missing some dependency? Or is the hard disk corrupt? Or something else entirely? And ultimately: how can I get this working?
You did not mention which version of MicroK8s you are using.
Since you are using kubectl instead of microk8s.kubectl, I assume you have created an alias. I have tested the last four versions of MicroK8s (1.11, 1.12, 1.13 and 1.14), and it seems this issue occurs only in version 1.11. To check which version you are currently running, please execute:
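snap list microk8s

This assumes the snap-based install from your question; microk8s.kubectl version will also report the client and server versions.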
If you are on 1.11, remove the old MicroK8s version and install the latest one.
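Again assuming the snap-based install from your question, something like this should work:

sudo snap remove microk8s
sudo snap install microk8s --classic

If you ever need to pin a particular release rather than the latest, snap's --channel flag (e.g. --channel=1.14/stable) lets you do that.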
As additional info: if you need a pod that stays in the Running state for a longer time, you can use the nginx image instead of busybox.
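For example, this creates a throwaway nginx pod that keeps running until you delete it:

kubectl run nginx --image=nginx --restart=Never

Unlike busybox, nginx's default entrypoint is a long-running server process, so no tail -f /dev/null workaround is needed.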
If you are new to MicroK8s, this document will be very helpful.
Try clearing out old containers and images; this seems to be a Docker (not Kubernetes) issue caused by lack of space. Some Docker pruning can fix that.
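For example, assuming Docker is the container runtime on your node (older MicroK8s releases bundled their own daemon, exposed as microk8s.docker), these commands remove stopped containers and unused images, so use them with care:

docker container prune
docker image prune -a

A more aggressive one-shot cleanup is docker system prune -a.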