I've set up an Ubuntu Jammy Jellyfish (22.04) server from the official cloud image (cloudimg), provisioned with cloud-init. The idea is for this server to sit within a Kubernetes cluster powered by Rancher.
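For context, the provisioning is nothing exotic; roughly this (the hostname and file names below are placeholders, not my real values):

```bash
# Sketch of the NoCloud seed I build for cloud-init (values are placeholders)
cat > user-data <<'EOF'
#cloud-config
hostname: k8s-node1
package_update: true
packages:
  - qemu-guest-agent
EOF
cloud-localds seed.img user-data   # from the cloud-image-utils package
```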
The server itself is a KVM VM with 12 GB of RAM and 2 sockets of 4 cores each, with NUMA configured. I have multiple servers in this cluster running a near-identical setup with absolutely no problems, and I've tried varying the configuration (no NUMA, no ballooning, etc.).
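For reference, the relevant part of the guest definition looks roughly like this (the domain name is a placeholder and the XML is trimmed from memory, so treat it as illustrative):

```bash
virsh dumpxml k8s-node1 | grep -E -A4 'topology|numa'
# <topology sockets='2' cores='4' threads='1'/>
# <numa>
#   <cell id='0' cpus='0-3' memory='6' unit='GiB'/>
#   <cell id='1' cpus='4-7' memory='6' unit='GiB'/>
# </numa>
```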
All I've installed is docker.io and qemu-guest-agent, and I've confirmed that all packages are up to date. It's running Docker 20.10.12.
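In other words, just this, with nothing else touched:

```bash
sudo apt update && sudo apt full-upgrade -y    # everything already current
sudo apt install -y docker.io qemu-guest-agent
docker --version                               # reports 20.10.12
```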
I then added it to the Kubernetes cluster and shortly afterwards noticed it had become unresponsive. After resetting the VM and attempting to start Docker, the output of top reveals masses of agent processes being spawned: the load shoots through the roof, memory is rapidly consumed, and shortly after that the node becomes unresponsive again.
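This is roughly how I've been watching it in the brief window before the node locks up (standard tools, nothing exotic):

```bash
# Snapshot the biggest memory consumers and their parent processes
ps -eo pid,ppid,rss,etime,cmd --sort=-rss | head -n 20
# See what dockerd has forked (pgrep -o picks the oldest dockerd PID)
pstree -p "$(pgrep -o dockerd)"
```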
Has anyone experienced this behaviour before, or does anyone know why the agent processes are spawning, or could point me in the direction of debugging it? I'd like to inspect the Docker containers that are running, but I can't even do that because Docker itself becomes unresponsive.
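For what it's worth, my next plan is to go underneath the Docker CLI, on the assumption that containerd may still respond even when docker hangs:

```bash
# Docker's containers live in containerd's "moby" namespace, so ctr can
# list them even if the docker CLI itself is hung (assumption on my part)
sudo ctr --namespace moby containers list
# Recent daemon logs from both Docker and containerd
sudo journalctl -u docker.service -u containerd.service --since "10 minutes ago"
```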
Thanks!