I run gitlab in a docker container, and it separates its dependencies (MySQL, Redis, Mailserver) quite nicely into separate docker containers. Running them is not a problem; I start them in reverse order: the dependencies first, then gitlab itself.
From time to time I have to restart the docker host. Currently I ssh into the docker host and restart the containers manually. Is there a better way to do this? Like telling some service to start the gitlab container, and it takes care of starting the dependencies first? I know I could create individual init scripts for each docker container, but that's not what I'm looking for.
I think you can have a look at Decking.
Also, you can manage dependencies the way CoreOS does it: by writing a systemd `Unit` file for your main `gitlab` container that declares its dependencies, where `mysql.service` is the `Unit` file for the MySQL container, `redis.service` the one for Redis, etc.

You might even want to look into the 'official' Fig project, which has now been replaced by Docker Compose. It should be fairly easy to configure and set up.
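For illustration, the `Unit` file for the `gitlab` container could look roughly like this (a sketch only; the paths, image names, and exact `docker run` flags are assumptions, not from the original answer):

```ini
# /etc/systemd/system/gitlab.service -- hypothetical example
[Unit]
Description=GitLab container
Requires=mysql.service redis.service
After=mysql.service redis.service

[Service]
# Remove any stale container, then run a fresh one linked to its dependencies.
ExecStartPre=-/usr/bin/docker rm -f gitlab
ExecStart=/usr/bin/docker run --name gitlab --link mysql:mysql --link redis:redis gitlab/gitlab-ce
ExecStop=/usr/bin/docker stop gitlab

[Install]
WantedBy=multi-user.target
```

`Requires=` together with `After=` is what makes systemd bring up `mysql.service` and `redis.service` before starting gitlab, so a single `systemctl start gitlab` pulls in the whole chain.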
Your use case of running gitlab is basically the same as the Fig Wordpress example, or you can use the gitlab-compose script.
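As a sketch, a Compose file for this stack could look roughly like the following, in the Fig-era (v1) syntax; the image names and ports are assumptions:

```yaml
# docker-compose.yml -- hypothetical sketch
gitlab:
  image: gitlab/gitlab-ce
  links:
    - mysql
    - redis
  ports:
    - "8080:80"
mysql:
  image: mysql
redis:
  image: redis
```

A single `docker-compose up` then starts the linked containers in dependency order before gitlab itself.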
And if you're working on a Mac, you might want to have a look at the Docker Toolbox, which includes Compose, but also various other tools for getting up and running quickly!
In case anyone finds this useful, I wrote a `fish` shell script (should be easily portable to `bash`) that uses `docker inspect` to start all dependencies of my containers, with jq to parse the JSON.

Note that the script assumes there are subdirectories in the current directory which correspond to docker containers with the same names. It also doesn't deal with circular dependencies (I don't know if any of the other tools do), but then it was also written in under half an hour. If you only have a single container, you can simply call the `docker_links_lookup` function on it directly.
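A `bash` sketch of what such a function could look like (the name `docker_links_lookup` comes from the text above; the jq filter and the reliance on `HostConfig.Links` are my assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: read a container's links via `docker inspect`,
# recursively start each linked container first, then the container itself.
docker_links_lookup() {
    local container=$1
    local dep
    # HostConfig.Links entries look like "/mysql:/gitlab/mysql";
    # keep only the name of the linked (dependency) container.
    for dep in $(docker inspect "$container" |
            jq -r '.[0].HostConfig.Links // [] | .[] | split(":")[0] | ltrimstr("/")'); do
        docker_links_lookup "$dep"   # start the dependency's own links first
    done
    echo "Starting $container"
    docker start "$container" >/dev/null
}
```

The last line of the script then just loops over the subdirectories: `for dir in */; do docker_links_lookup "${dir%/}"; done`.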
Edit: I have since added another handy function to the above script.
Instead of just starting a container, it looks up the ports that the container exposes and tests whether it can connect to them. This is useful if you have something like a database container, which may perform cleanups when it is started and therefore take some time to actually be available on the network. Use it like this:
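A sketch of such a function and its use (the name `docker_start_and_wait` is my invention, and the `nc`-based probing is an assumption; the original only says it tests whether the exposed ports accept connections):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: start a container, then block until every port it
# exposes accepts a TCP connection.
docker_start_and_wait() {
    local container=$1
    docker start "$container" >/dev/null
    # The container's IP and exposed ports, via `docker inspect` + jq.
    local ip
    ip=$(docker inspect "$container" | jq -r '.[0].NetworkSettings.IPAddress')
    local port
    for port in $(docker inspect "$container" |
            jq -r '.[0].Config.ExposedPorts // {} | keys[] | split("/")[0]'); do
        # Probe with netcat until the port is reachable.
        until nc -z "$ip" "$port" 2>/dev/null; do
            sleep 1
        done
    done
}

# Use it like this:
# docker_start_and_wait mysql
#
# Or, in the dependency script, call it in place of plain `docker start`:
# docker_start_and_wait "$container"
```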
Or, in case you are using the above script, replace the plain `docker start` in the last line with it. That way, every container is started only after its dependencies, and only once those dependencies have actually completed their startup. Note that this is probably not applicable to every setup, as some servers might open their ports right away without actually being ready (I don't know of any servers that actually do this, but it is the reason the docker developers give when asked about such a feature).