I have a bunch of virtual hosts served behind an nginx reverse proxy. Behind each one is a server that holds valid certificates for its virtual domain.
E.g.
api.example.com -> proxy_pass https://api.example.com; # which resolves locally to a Docker instance that holds the certificates for api.example.com
Now, my problem is that the proxy server itself seems to need its own certificates, and I don't understand why. Since the requested hostname (the SNI field) isn't encrypted during the TLS handshake anyway, why can't I simply forward the certificate of each proxied server? Or can I? How?
This is what I have so far:
server {
    listen 80;
    listen [::]:80;
    server_name *.example.com;

    location / {
        proxy_pass http://$http_host$uri$is_args$args;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name *.example.com;

    location / {
        proxy_pass https://$http_host$uri$is_args$args;
    }
}
But the second server block requires certificates.
Think about what happens with a reverse proxy: the HTTPS connection from the client is terminated at the proxy, so the proxy needs a valid certificate for the domain it serves.
Why not simply deploy the certificate that currently lives in the Docker container to the proxy instead? That is the normal approach. The connection from the proxy to the backend doesn't need to be encrypted on a trusted network (as in your case, where the Docker container runs on the same host).
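As a minimal sketch, terminating TLS at the proxy and forwarding plain HTTP to the container could look like this (the certificate paths and the upstream port are assumptions; adjust them to your setup):

```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name api.example.com;

    # Certificate moved out of the container onto the proxy host
    # (paths are placeholders).
    ssl_certificate     /etc/nginx/certs/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/api.example.com/privkey.pem;

    location / {
        # Plain HTTP to the Docker container on the same host;
        # the port is an assumption.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

One such server block per subdomain, each pointing at its container's published port.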
That said, nginx is actually capable of doing what you asked for — not at the http level, but at the stream level, by passing the TLS connection through based on the SNI hostname. The configuration itself is not that difficult. Here is a full tutorial on how to do it.
Another example of how to achieve this can be found here. I'll copy the example here for completeness.
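A minimal sketch of such a stream-level passthrough is shown below (the backend ports are placeholders, not taken from the linked example). It requires nginx built with `ngx_stream_ssl_preread_module`, and since the `stream` block claims port 443, no `http` server can listen on 443 at the same time:

```nginx
stream {
    # Pick a backend from the SNI hostname the client sends in the
    # TLS ClientHello; TLS is NOT terminated here, so each backend
    # keeps serving its own certificate.
    map $ssl_preread_server_name $backend {
        api.example.com 127.0.0.1:8443;  # placeholder port
        www.example.com 127.0.0.1:9443;  # placeholder port
        default         127.0.0.1:8443;
    }

    server {
        listen 443;
        listen [::]:443;
        ssl_preread on;        # inspect SNI without decrypting
        proxy_pass $backend;
    }
}
```

The trade-off is that the proxy can only route on the SNI hostname — it never sees the decrypted HTTP request, so no per-path routing, header rewriting, or caching at the proxy.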
For those downvoting: I may not have explained this well enough in the question, but this is exactly what I wanted.