I have a single-node cluster in Container Engine that sits at ~40% CPU when idle, according to the monitoring dashboard.
When I click through the monitoring, all of the pods are at 0% CPU.
When I ssh into the instance, I can see that docker, kubelet, and heapster are the main culprits, but I don't understand what work they're doing.
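For reference, I was checking per-process usage on the node with something like the following (standard tools, nothing GKE-specific; docker stats streams until you hit Ctrl-C):

# busiest processes on the node, sorted by CPU
ps aux --sort=-%cpu | head -n 15

# per-container CPU as reported by the Docker daemon
sudo docker stats $(sudo docker ps -q)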
If I look in the logs, I see lines like the following repeated many times per second, which I'm guessing is related.
gke-rogue-dev-7248e467-node-9hvh 2015-10-13 19:50:55.000 time="2015-10-13T23:50:55Z" level=info msg="-job containers() = OK (0)"
gke-rogue-dev-7248e467-node-9hvh 2015-10-13 19:50:55.000 time="2015-10-13T23:50:55Z" level=info msg="+job containers()"
gke-rogue-dev-7248e467-node-9hvh 2015-10-13 19:50:55.000 time="2015-10-13T23:50:55Z" level=info msg="GET /containers/json"
What should be my next step to figure out why this is happening?
I had the same question recently: https://serverfault.com/q/728211/310585
The answer to "what work they're doing" is "logging and monitoring".
To avoid this overhead, deselect those features when creating the cluster. In the Developers Console there are checkboxes for them; on the command line, add the flags

--no-enable-cloud-logging --no-enable-cloud-monitoring

to the gcloud container clusters create command.
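For example, a create command with both features disabled might look like this (the cluster name and zone are placeholders):

# create a cluster without Cloud Logging or Cloud Monitoring
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --no-enable-cloud-logging \
    --no-enable-cloud-monitoring

The trade-off is that the cluster's logs and metrics are no longer shipped to Cloud Logging/Monitoring, so you only have whatever is available on the node itself.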