I've got a bunch of websites on a server, all hosted through nginx. One site has a certificate, the others do not. Here's an example of two sites, using (fairly accurate) representations of real configuration:
server {
    listen 80;
    server_name ssl.example.com;
    return 301 https://ssl.example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name ssl.example.com;
}

server {
    listen 80;
    server_name nossl.example.com;
}
SSL works on ssl.example.com, great. If I visit http://nossl.example.com, that works great, but if I try to visit https://nossl.example.com (note the SSL), I get ugly warnings about the certificate being for ssl.example.com.
By the sounds of it, because ssl.example.com is the only site listening on port 443, all HTTPS requests are being sent to it, regardless of domain name.
Is there anything I can do to make sure an nginx server block only responds to domains it's responsible for?
Use a different IP address for the hosts which should never answer on SSL, and ensure that nginx only listens on port 443 for the appropriate IP addresses.
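A sketch of what that could look like, using the config from the question. The addresses 203.0.113.10 and 203.0.113.20 are placeholders for the two public IPs:

```nginx
# ssl.example.com binds only to its own address, on both ports.
server {
    listen 203.0.113.10:80;
    server_name ssl.example.com;
    return 301 https://ssl.example.com$request_uri;
}

server {
    listen 203.0.113.10:443 ssl;
    server_name ssl.example.com;
    # certificate directives as before
}

# nossl.example.com binds to a different address and never listens on 443.
server {
    listen 203.0.113.20:80;
    server_name nossl.example.com;
}
```

DNS for each hostname then points at the matching address, so connecting to https://nossl.example.com reaches an address with nothing listening on port 443 at all.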
The only way to really segregate SSL sites, short of a multi-domain/wildcard SSL certificate, is to add secondary public IPs to your box (you must request these from your provider).

Then you put each site/subdomain on its own IP via DNS. That way, domain:443 will be open for some sites, while the others will spin until timeout (by using DROP via iptables).
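For the DROP part, a minimal iptables sketch; 203.0.113.20 stands in for whichever address should never answer on SSL:

```shell
# Silently drop TCP SYNs to port 443 on the no-SSL address.
# Clients get no RST and no certificate; they just hang until timeout.
iptables -A INPUT -d 203.0.113.20 -p tcp --dport 443 -j DROP
```

Using REJECT instead of DROP would fail fast with a connection-refused error rather than a hang, which is usually friendlier to clients.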
Unfortunately, the domain (via the Host header) is part of the encrypted payload. Thus, nginx doesn't know which domain a request is for until after it has presented the certificate. This is a technical limitation of SSL, not of nginx. Server Name Indication (http://en.wikipedia.org/wiki/Server_Name_Indication) may help, but IE users on Windows XP (still a decent percentage of Internet users) can't use it.
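If breaking non-SNI clients is acceptable, one common workaround is a catch-all default server on port 443 that closes the connection for any name you don't serve over SSL. nginx still has to present *some* certificate before it can see the requested name, so this assumes a self-signed placeholder certificate exists (the paths below are hypothetical):

```nginx
# Catch-all for HTTPS requests to any hostname without a real certificate.
server {
    listen 443 ssl default_server;
    server_name _;

    # Self-signed placeholder; SNI clients asking for ssl.example.com
    # still match that site's server block and get the real certificate.
    ssl_certificate     /etc/nginx/placeholder.crt;
    ssl_certificate_key /etc/nginx/placeholder.key;

    return 444;  # nginx-specific code: close the connection without a response
}
```

Browsers will still warn about the placeholder certificate before the connection is dropped, so this hides content rather than avoiding the warning entirely.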