On Debian servers, certificates are supposed to be stored in the /etc/ssl/certs directory and key files in /etc/ssl/private.
The problem is that SSL private key files are typically readable only by their owner. So I'm wondering: what is the best practice for making such a key readable by a Docker container?
I mean, I have a service running in a Docker container which needs to read the SSL cert and key files in order to expose itself via HTTPS. In its default setup, I get "permission denied" when accessing the /etc/ssl/private/server.key file.
To work around this I moved the file to another directory and set its permissions to 644. But is that right?
Any help would be appreciated
No, that is a really bad idea. Setting it to 644 makes the key file readable by everyone, and that's definitely something you don't want.
The preferred option is to use a reverse proxy like nginx, haproxy, etc., which is then responsible for establishing the HTTPS connections. The proxy then forwards the connection to the Docker container, which no longer needs to handle the HTTPS part itself.
The advantage of this is that reverse proxies like nginx and haproxy are written in a way that minimizes the possibility of someone stealing the key.
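As an illustration of that setup, here is a minimal nginx server block that terminates TLS on the host and forwards plain HTTP to the container. The backend address 127.0.0.1:8080 and the cert/key file names are assumptions, not taken from the question:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # The key stays on the host, readable only by the nginx master process (root)
    ssl_certificate     /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/private/server.key;

    location / {
        # Forward to the container's published plain-HTTP port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The container itself then only needs to publish a plain HTTP port and never touches the key.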
If there is really a need for a container to read the key file for some reason, then create a group on the container host dedicated to SSL keys. Choose a GID for that group that is also free in the containers. Set the group of /etc/ssl/private and of the key to this dedicated group, and allow that group to read the key. In each container that should be able to read the key, create the same group with the same GID, look at which user the process that reads the key runs as, and add that user to the created group.
But if you allow an application to read that key (especially one you wrote yourself), you need to ensure the key stays safe. E.g. if your application uses third-party libraries, you must fully trust that those libraries aren't malicious and don't search for such keys to send them to foreign servers.
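As a sketch of the host-side part, the group name "sslkeys" and GID 1500 below are assumptions, and the commands on the real /etc/ssl/private require root, so the permission bits are demonstrated on a scratch directory instead:

```shell
# On the real host (as root) this would be roughly:
#   groupadd -g 1500 sslkeys
#   chgrp sslkeys /etc/ssl/private /etc/ssl/private/server.key
#   chmod 750 /etc/ssl/private
#   chmod 640 /etc/ssl/private/server.key
#
# Demonstrated here on a scratch directory:
demo_dir=$(mktemp -d)
keyfile="$demo_dir/server.key"
touch "$keyfile"

chmod 750 "$demo_dir"   # dir: owner rwx, group r-x, others nothing
chmod 640 "$keyfile"    # key: owner rw-, group r--, others nothing

stat -c '%a' "$keyfile"   # prints 640
```

Inside each container image you would then create the matching group and add the service user to it, e.g. `groupadd -g 1500 sslkeys && usermod -aG sslkeys www-data` (the user `www-data` is only an example).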
I just figured out how to do it properly using acme.sh:
Set a label on the container; it will later be used to find the container:
docker run --rm -it -d --label=sh.acme.autoload.domain=example.com nginx:latest
Export variables in a form acme.sh can recognize, so it can deploy the files into the container:
# The label value to find the container
export DEPLOY_DOCKER_CONTAINER_LABEL=sh.acme.autoload.domain=example.com
# The target file paths in the container.
# The files will be copied to these positions in the container.
export DEPLOY_DOCKER_CONTAINER_KEY_FILE="/etc/nginx/ssl/example.com/key.pem"
export DEPLOY_DOCKER_CONTAINER_CERT_FILE="/etc/nginx/ssl/example.com/cert.pem"
export DEPLOY_DOCKER_CONTAINER_CA_FILE="/etc/nginx/ssl/example.com/ca.pem"
export DEPLOY_DOCKER_CONTAINER_FULLCHAIN_FILE="/etc/nginx/ssl/example.com/full.pem"
# The command to reload the service in the container.
export DEPLOY_DOCKER_CONTAINER_RELOAD_CMD="service nginx force-reload"
Execute the acme.sh deploy hook, which copies the files into the running container:
acme.sh --deploy --deploy-hook docker -d example.com
https://github.com/acmesh-official/acme.sh/wiki/deploy-to-docker-containers