I'm trying to set up Nginx as a reverse proxy for all the dev servers sitting behind my static IP.
I've already read this question: Lets Encrypt with an nginx reverse proxy
It gets me part of the way (that is, it gets me the .well-known directory working on one of the virtual servers), but my setup is just different enough that there is another step needed to make this work, and so far I've not been able to work that step out. If there are any Nginx experts out there, I could sure use a hand here.
The Current Setup
So basically, at present I have a single static IP; for the purposes of this question let's call it
1.2.3.4
In my host provider's DNS, I have two wildcard DNS records set up (I have complete control of these myself, by the way, so if it ends up being easier to just modify the DNS records than to use a file, that's what I'll do). The two wildcard records are:
*.example.com -> 1.2.3.4
*.anotherexample.com -> 1.2.3.4
So basically no matter what is entered for those 2 domains, they ALWAYS end up at the same static IP.
http://example.com/ , http://doodah.example.com/ , http://biscuits.anotherexample.com/ and so on ALL arrive at 1.2.3.4
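Just as a quick sanity check (purely illustrative, using the placeholder names above), a dig from any outside machine shows both wildcards resolving to the single static IP:
# any made-up sub-domain of either zone should come back as the static IP
dig +short doodah.example.com A
dig +short biscuits.anotherexample.com A
# both should print: 1.2.3.4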
Behind that IP address, through a NAT-controlled port forward from 80 on the outside to 80 on the inside (I can open others as needed, e.g. 443), is a server that has one job, and one job only.
This one server runs Nginx on Ubuntu 16.04 LTS; it examines the inbound host name and then passes that connection off to another server.
So, for example:
http://www.example.com/ will be proxy forwarded to the server running that web site
http://application.anotherexample.com/ will be proxy forwarded to whichever server runs the application associated with that hostname
Two of the virtual servers are special, in that they are set to respond to
*.example.com and *.anotherexample.com
So if a request comes in for idonetexist.example.com it will end up serving the default website for example.com, and likewise noserverhere.anotherexample.com will do the same.
This means that no matter what the host/subdomain is that this proxy server receives, something will be delivered.
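A quick way to confirm that catch-all behaviour from outside the network (just a sketch, using the example hostnames above and assuming curl is available) is to force the Host header by hand against the proxy:
# an unknown sub-domain should still be answered by the default example.com site
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: idonetexist.example.com" http://1.2.3.4/
# and likewise for the other domain
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: noserverhere.anotherexample.com" http://1.2.3.4/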
So, what I have so far to handle those two wildcard cases is the following two configuration files:
server {
    listen 80;
    server_name
        example.com
        www.example.com
        *.example.com;

    location /.well-known {
        alias /var/www/html/letsencrypt;
    }

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://address.of-the.internal.example-webserver;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        access_log /var/log/nginx/example.com.log;
        error_log /var/log/nginx/example.com.log error;
    }
}
and
server {
    listen 80;
    server_name
        anotherexample.com
        www.anotherexample.com
        *.anotherexample.com;

    location /.well-known {
        alias /var/www/html/letsencrypt;
    }

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://address.of-the.internal.anotherexample-webserver;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        access_log /var/log/nginx/anotherexample.com.log;
        error_log /var/log/nginx/anotherexample.com.log error;
    }
}
So far this all works perfectly, and any requests from Let's Encrypt checking the ".well-known" directory during SSL cert validation work too. It's at this point, however, that my use case starts to deviate from the question I linked at the top.
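One thing worth noting here (a sketch using the paths above, not part of the original setup): because the location uses alias, a request for /.well-known/whatever is served from /var/www/html/letsencrypt/whatever, so the webroot path handed to certbot has to line up with that. The mapping itself is easy to verify:
# drop a test file where the alias points...
echo "ok" > /var/www/html/letsencrypt/alias-test.txt
# ...and confirm it is reachable on the path Let's Encrypt would request
curl -s http://example.com/.well-known/alias-test.txt
curl -s http://doodah.anotherexample.com/.well-known/alias-test.txt
rm /var/www/html/letsencrypt/alias-test.txt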
In order to implement the other virtual servers, such as those pointing at applications and other sub-domains of either of my two registered domains, I have multiple virtual server configurations. For example, I'll have other configs that look like the following:
server {
    listen 80;
    server_name application1.example.com;

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://internal.address.of-example-app-one;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        access_log /var/log/nginx/exampleappone.access.log;
        error_log /var/log/nginx/exampleappone.error.log error;
    }
}

server {
    listen 80;
    server_name application2.example.com;

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://internal.address.of-example-app-two;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        access_log /var/log/nginx/exampleapptwo.access.log;
        error_log /var/log/nginx/exampleapptwo.error.log error;
    }
}
and
server {
    listen 80;
    server_name application1.anotherexample.com;

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://internal.address.of-anotherexample-app-one;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        access_log /var/log/nginx/anotherexampleappone.access.log;
        error_log /var/log/nginx/anotherexampleappone.error.log error;
    }
}
When a new application is added to the internal servers, a new config file based on those shown above, with the appropriate changes made, is created and saved in /etc/nginx/sites-available/ along with all the other config files.
When Nginx is reloaded, it picks the new file up and activates it, so that application then becomes available.
The reason it's done this way is that there is an application generator running on the network; the app creation wizard writes this config file as part of its run, allowing internal users to log in and create applications using an application creator program.
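Purely as an illustration of what that generator does (all the names here are hypothetical, and the real wizard differs in its details), each run boils down to something like this:
#!/bin/bash
# hypothetical sketch of the per-application step the wizard performs
APP_HOST="application3.example.com"                         # hypothetical new hostname
APP_BACKEND="http://internal.address.of-example-app-three"  # hypothetical backend

cat > "/etc/nginx/sites-available/${APP_HOST}" <<EOF
server {
    listen 80;
    server_name ${APP_HOST};

    location / {
        proxy_pass ${APP_BACKEND};
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }
}
EOF

# only needed if nginx.conf includes sites-enabled/ rather than sites-available/ directly
ln -sf "/etc/nginx/sites-available/${APP_HOST}" "/etc/nginx/sites-enabled/${APP_HOST}"

# check the config and pick the new file up
nginx -t && systemctl reload nginx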
What I'm trying to achieve
The answer in the question I linked above got the "default" handling of sites working, that is, sites that DO NOT have a separate configuration proxying them to a different server, and it keeps Let's Encrypt happy because it can find its identity files to prove I own the two domains.
Unfortunately, the separate configurations are not covered by SSL, and what's more they are not covered by wildcard SSL.
What I'd like to do is find a way that I can have a *.example.com and a *.anotherexample.com wildcard certificate from Let's Encrypt, all automated using certbot, with automated renewals.
The original question shows me how to do this for the default wildcard sites, but this certificate doesn't cover the separate config files that point at the applications, even though they are all technically sub-domains of the master domain.
Something tells me that the key to this is to apply the SSL to the "default" server that Nginx creates, but since I can't assign multiple certs to that, I don't believe I'd be able to assign both the example.com and anotherexample.com certificates to the one virtual server.
What I need (I think) is to assign the certificates to each of the virtual servers handling the *.... hostnames, but then have that cert propagate to any sub-domains that share the same parent domain as the default (if that makes sense).
Another thought I had (that I'm going to have a play with after I finish typing this) is to put another proxy in front, whose sole responsibility is to accept the connection, serve the certificate, then pass the connection off to an Nginx that decides which virtual server to send it to; in this case maybe HAProxy or something similar.
In Summary
1) If I didn't have the separate config files overriding various sub-domains, the original answer to the original question would work perfectly.
2) I need to, in effect, merge the sub-domains into the master domain where the cert will be specified, while still keeping the config files separate.
3) I need the SSL to work transparently from a *... wildcard SSL certificate, and I don't want to have to create a new SSL request for every separate sub-domain created.
Any ideas on how I can do this, preferably by building on the original question, would be greatly appreciated.
Update (22/08/2018)
So, using the advice given by Drifter in the comments, I eventually came up with the following for my "wildcard virtual servers", i.e. the servers that will respond to anything in the domain if there is not a more specific sub-domain configuration for it.
server {
    listen 80;
    server_name
        example.com
        www.example.com
        *.example.com;

    location /.well-known {
        alias /var/www/html/letsencrypt;
    }

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://internal-default-web-server-for-wildcard;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        access_log /var/log/nginx/example.access.log;
        error_log /var/log/nginx/example.error.log error;
    }
}

server {
    listen 443 ssl;
    server_name
        example.com
        www.example.com
        *.example.com;

    include "example-cert.inc";

    ssl_stapling on;
    ssl_stapling_verify on;

    # maintain the .well-known directory alias for renewals
    location /.well-known {
        alias /var/www/html/letsencrypt;
    }

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://internal-default-web-server-for-wildcard;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        access_log /var/log/nginx/example.sslaccess.log;
        error_log /var/log/nginx/example.sslerror.log error;
    }
}
As you can see, the HTTP server block is now accompanied by a server block activating HTTPS, and instead of naming the server certs in the file there is an
include "example-cert.inc"
In the "example-cert.inc" file is the following:
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
This causes Nginx to pull in those cert paths when it loads the virtual server configuration handling those wildcard domains.
For all the other configs for sub-domains that have their own destinations, I've done a similar thing, the main differences being that the ".well-known" route is not included, and the proxy/server destinations and log file names are different.
Following the information in the linked question and linking the certs in this manner has, because the certs are wildcard ones, allowed me to use one set of certs for one domain and another set for the other domain.
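A quick way to check that each hostname is being handed the right wildcard cert (just a verification sketch, using openssl against the placeholder names and IP from earlier) is to ask for the certificate subject per SNI name:
# each of these should report the matching wildcard certificate
openssl s_client -connect 1.2.3.4:443 -servername application1.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect 1.2.3.4:443 -servername application1.anotherexample.com </dev/null 2>/dev/null | openssl x509 -noout -subject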
That actually answers my question; now I just need to figure out how to automate the certificate renewal with Let's Encrypt. Because it's a wildcard cert I'm using, I'm not allowed to use any verification method other than "DNS-01", and that can only be done manually at the moment unless you're using a DNS service from one of the providers supported by the certbot DNS plugins.
I'm not; my DNS is just standard DNS, so for every renewal I need to generate a new TXT record and change my DNS records before the renewal will work. I CAN fully automate it using the "webroot" method, but webroot does not allow you to request wildcard certs, only a fixed multi-domain list.
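For what it's worth, the route I'm looking at for automating this is certbot's manual hooks, which let you script the TXT record change yourself if the DNS host exposes any kind of API or scriptable update mechanism; the hook script names below are hypothetical, so this is only a sketch of the shape of the command:
# wildcard certs need the ACMEv2 endpoint; the two hook scripts would have to
# create and remove the _acme-challenge TXT record via whatever mechanism the
# DNS provider offers (both scripts are hypothetical placeholders)
certbot certonly --manual --preferred-challenges dns \
  --server https://acme-v02.api.letsencrypt.org/directory \
  --manual-auth-hook /usr/local/bin/dns-auth-hook.sh \
  --manual-cleanup-hook /usr/local/bin/dns-cleanup-hook.sh \
  -d example.com -d '*.example.com'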
Drifter, if you write your comments up as an answer, I'll give you the +10