I have a K3s cluster with system pods (`kube-system` namespace) and my application pods (`xyz-system` namespace) running.
I wanted to stop all of the K3s pods and reset the containerd state, so I ran the `/usr/local/bin/k3s-killall.sh` script, and all pods were stopped (at least nothing shows up in `watch kubectl get all -A`). K3s and kubectl are still installed, as `k3s -v` still prints its version output.
Can someone tell me how to start the K3s server again? Now when I run `kubectl get all -A` I get the message:

> The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
PS: When I run the `k3s server` command, for a fraction of a second I can see the same pods (with the same pod IDs) that I mentioned above while the command is running. After a few seconds the command exits, and the same "The connection to the..." message appears again. Does this mean that `k3s-killall.sh` has not actually deleted my pods, since it is showing the same pods with the same IDs (like `pod/some-app-xxx`)?
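For reference, this is the exact sequence of commands I ran (paths assume the default locations used by the k3s install script; this is a transcript of my session, not a script meant to be run blindly):

```shell
# Stop all K3s pods and reset the containerd state
sudo /usr/local/bin/k3s-killall.sh

# K3s itself is still installed -- this still prints the version
k3s -v

# This now fails with "connection refused" on 127.0.0.1:6443
kubectl get all -A

# Running the server in the foreground briefly lists the old pods
# (same pod IDs), then the process exits after a few seconds
sudo k3s server
```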