I'm working on getting a Docker-based distributed task system running, and the major sticking point is how to get credentials into each Docker container.
Basically, each instance needs a unique name and either a password or an SSL cert. On startup it connects back home and starts processing tasks.
Making the instance is fairly straightforward, but what is a good approach to injecting the credentials into each one? The general consensus seems to be "use environment variables", but stuffing a 500+ character value (e.g. a whole SSL cert) into an environment variable seems crude.
Right now, the application I'm trying to package up uses a simple JSON file for configuration. Is there any way to add files to a Docker container at runtime, or something similar? Perhaps a last build step that takes a parametrically defined file?
You can add a Docker volume and point it, per container, at a folder on the host containing the SSL cert you want. You can even mount a single file as a volume.
Then whatever you're using to start the workers (a bash script, Ansible, etc.) can select the right cert on the host, while inside each launched container the situation is identical: the cert is always at the same path.
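A minimal launch script might look like this (worker names, the cert directory, the in-container path, and the image name are all illustrative; the leading `echo` makes it a dry run so you can inspect the commands before running them for real):

```shell
#!/bin/sh
# Start one container per worker, bind-mounting that worker's cert from
# the host to a fixed path inside the container.
launch_workers() {
    for worker in worker1 worker2 worker3; do
        # "echo" prints the command instead of running it; drop it to launch.
        echo docker run -d \
            --name "$worker" \
            -v "/etc/certs/$worker.pem:/app/cert.pem" \
            mytasksystem/worker
    done
}

launch_workers
```

Each container sees its own cert at `/app/cert.pem`, so the application's JSON config never has to change between workers.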
You might want to mount these single-file volumes read-only by appending `:ro` to the volume spec, to reflect that they are static configuration and not something the container is expected to, or should be able to, change.
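For example (paths and image name are again illustrative):

```shell
#!/bin/sh
# Same mount as before, but with :ro appended so the cert is read-only
# inside the container; writes to /app/cert.pem will fail.
docker_cmd="docker run -d --name worker1 \
  -v /etc/certs/worker1.pem:/app/cert.pem:ro \
  mytasksystem/worker"
echo "$docker_cmd"   # dry run; use eval "$docker_cmd" to actually launch
```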