We host many different domains and currently the certs live in a database. It's becoming increasingly tedious to get the certs out of there and into files just so nginx can load them.
It would be much nicer if we could do something like the following:
server {
    server_name www.example.com;
    listen 443 ssl;
    ssl_certificate_key https://cert-distributor.acme.com/$host/server.key;
    ssl_certificate https://cert-distributor.acme.com/$host/server.crt;
    ...
}
Bonus points if the cert is cached in case cert-distributor.acme.com
is down.
Answer. In short: it is not possible. At the time of writing, the documentation states that ssl_certificate and ssl_certificate_key expect a file path, not a URL.
Workarounds. There are several workarounds for your problem. Some are preferable to others, so order matters:

1. Deploy the certificates with Ansible, Salt or Chef, and keep them locally, on the host only.
2. Override the nginx systemd unit so that it downloads the files from a URL to a specific path before (re)starting nginx.
3. Create a systemd timer that periodically downloads the certificates and restarts nginx.

Note. Remember about the security of the endpoint and man-in-the-middle attacks. Moreover, how would you authenticate whether a specific host is allowed to download a file or not? Kerberos?
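The first workaround could be sketched as an Ansible task; the paths, the per-host directory layout on the controller, and the handler name are assumptions, not something from your setup:

```yaml
# Sketch: copy the cert from the Ansible controller to the host,
# then reload nginx via a handler. Paths are assumptions.
- name: Deploy TLS certificate
  ansible.builtin.copy:
    src: "certs/{{ inventory_hostname }}/server.crt"
    dest: /etc/nginx/ssl/server.crt
    owner: root
    group: root
    mode: "0644"
  notify: Reload nginx
```

The certs never leave your configuration management flow, which is why this option comes first.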
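The second workaround could look like a systemd drop-in; the distributor URL comes from your question, everything else (paths, flags) is an assumption. Downloading to a temporary file and renaming it only on success means a stale-but-valid copy survives when the distributor is down, which also covers your caching bonus:

```ini
# /etc/systemd/system/nginx.service.d/fetch-certs.conf  (sketch)
# %H expands to the host name. Repeat the two lines for server.key.
[Service]
ExecStartPre=/usr/bin/curl -fsS -o /etc/nginx/ssl/server.crt.tmp https://cert-distributor.acme.com/%H/server.crt
ExecStartPre=/bin/mv /etc/nginx/ssl/server.crt.tmp /etc/nginx/ssl/server.crt
```

Run `systemctl daemon-reload` after adding the drop-in.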
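The third workaround could be a timer plus a oneshot service; unit names and the download script path are assumptions:

```ini
# /etc/systemd/system/refresh-certs.timer  (sketch)
[Unit]
Description=Periodically refresh TLS certificates

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/refresh-certs.service  (sketch)
[Unit]
Description=Download certificates and reload nginx

[Service]
Type=oneshot
ExecStart=/usr/local/bin/fetch-certs.sh
ExecStartPost=/bin/systemctl reload nginx
```

Enable it with `systemctl enable --now refresh-certs.timer`.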
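As for authenticating hosts: one alternative to Kerberos would be mutual TLS on the distributor itself, so only hosts holding a client certificate signed by your internal CA can fetch files. A sketch of the distributor's server block, with assumed paths:

```nginx
server {
    listen 443 ssl;
    server_name cert-distributor.acme.com;
    ssl_certificate     /etc/nginx/ssl/distributor.crt;
    ssl_certificate_key /etc/nginx/ssl/distributor.key;
    # Require a client certificate signed by the internal CA,
    # so only provisioned hosts can download their files.
    ssl_client_certificate /etc/nginx/ssl/internal-ca.crt;
    ssl_verify_client on;
    ...
}
```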