I often have to rebuild Docker images, and I do that on the same host I run my containers on. Occasionally I see that this puts considerable pressure on the CPU, so I thought I could run `docker build` under `nice -n19`, but this seems to make no difference in terms of how much Docker yields when other processes need to run.

I'd rather not use Docker Hub repos, because this is private stuff and I'm trying to save every penny I can right now. I also know that I could set up another machine as a build/repo host (I could use a machine at the office, for example), but I'm not sure how.

So, the question: why does `nice -n19 docker build ...` seem to not be very helpful?

(Bonus points for pointing me at docs on how to set up my own private build/repo machine.)
The `docker` command is a client of the Docker daemon, which spawns the processes that run in the containers. When you give a lower priority to the `docker` command, the priority of the Docker daemon itself is unaffected, so your build and exec still run at their default priority. It's like how giving a lower priority to your web browser does not mean that the web server is going to service your requests at a lower priority.

For a basic separate build machine, you can just do `docker export`, `scp`, `docker import`. For a more serious build system, though, you may want to run a private Docker registry (the official Docker documentation covers deploying one). If you run your own private registry, you can do a build from your local workstation in the office, then use `docker push` and `docker pull` to upload the Docker image to your private registry and fetch it wherever it needs to be.

Instead of using `nice`, consider using the other CPU-management options the `docker` API gives you. For example, I've recently updated some of my builds to reserve one of the six CPU cores I have on one of our build machines.
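As a sketch of that kind of command (the image name and core range here are placeholders, not the actual build), pinning the build to five of the six cores leaves one core free for everything else on the host:

```shell
# Hard-limit the build's processes to cores 1-5,
# leaving core 0 free for the rest of the host.
docker build --cpuset-cpus 1-5 -t myimage .
```

Unlike `nice`, `--cpuset-cpus` is a hard cap on which cores the build may use; `--cpu-shares` is closer in spirit to `nice`, since it is a relative weight that only matters when the CPU is actually contended.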
It doesn't appear that any of these options are backed by a nice value, so you might want to experiment to see what works best for you. I posted some quick tests I ran, but these could be improved by running a longer test and having something using CPU on the host to better simulate sharing. The scenarios I compared were:

- `--cpu-period="100000" --cpu-quota="150000"`
- `--cpu-shares 500`
- `nice`
- `--cpuset-cpus 0-1`
- nothing (control)
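A rough harness for running such a comparison yourself might look like this (a sketch: the build context, image tag, and flag sets are placeholders, and it assumes a Docker daemon is available):

```shell
#!/bin/bash
# Time the same build under different CPU settings.
# --no-cache forces the work to actually happen on every run.
for flags in "" "--cpu-shares 500" "--cpuset-cpus 0-1" "--cpu-period=100000 --cpu-quota=150000"; do
  echo "== ${flags:-control (no flags)} =="
  time docker build --no-cache $flags -t cpu-test .
done
```

Running something CPU-hungry on the host at the same time (e.g. a `stress` process) would make the comparison more realistic, since these options mostly differ in how they behave under contention.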
Lie's answer gives the reason Docker won't `nice` your builds. I wanted to give you a practical workaround, below.

One way you can make your Docker builds more gentle is to identify the resource-intensive parts of your Dockerfile, and then just `nice` those parts in the Dockerfile itself. This will also make your build more gentle on remote build infrastructure (for example, Jenkins CI).

In my case, I was building a pretty heavy Apache mod_perl container with a ton of CPAN modules. Building all those modules was the expensive part, since it compiles a lot of code using GCC. Anyhow, I was able to `nice` this step of my build by changing the relevant line of my Dockerfile so that the same command runs under `nice`.
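The shape of the change was to prefix the expensive command with `nice` inside the `RUN` instruction. A sketch (the `cpanm` invocation and module names here are illustrative, not the actual line):

```dockerfile
# Before (illustrative): the CPAN build step runs at full priority
# RUN cpanm --notest Moose DBI JSON::XS

# After: the same step, yielding CPU to other processes on the host
RUN nice -n19 cpanm --notest Moose DBI JSON::XS
```

This works because each `RUN` instruction executes as an ordinary process inside the build container, so `nice` wraps exactly the expensive child processes, which is the piece you cannot reach by nicing the `docker` client.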
Even though this is janky and a bit hacky (I'm not sure I would publish software with `nice` in the Dockerfile), you said this was for private stuff anyway, so it's probably a fitting solution. You could generalize this to any other expensive operation in a Dockerfile.
Other examples:

- Python
- Node.js
- An expensive compression step
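Sketches of what those might look like (package lists and paths are placeholders):

```dockerfile
# Python: nice the dependency installation (may compile C extensions)
RUN nice -n19 pip install -r requirements.txt

# Node.js: nice the package installation
RUN nice -n19 npm ci

# An expensive compression step
RUN nice -n19 tar -cJf /tmp/assets.tar.xz /opt/assets
```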