I'm after some assistance: I've run into a couple of issues implementing nginx as a reverse proxy in front of Windows servers. I'm using a config that works elsewhere without problems, but here I'm seeing more failed requests (404s) and the nginx-specific 499 status code.
We are seeing around 200 404 errors on POST requests in the nginx logs with this setup, versus only around 100 of the same errors when running against IIS. In most cases the files are there, and the same requests (the GETs, at least) work when the Windows server is hit directly.
My first thought was that something was wrong with my setup for POST requests through nginx, but since it's only a small percentage, I don't think there's anything fundamentally wrong with the nginx POST handling.
The 499 status code (client closed connection) seems a little weird. I can see around 150-300 499s per 20k requests. It may be that these were happening all along and we just weren't seeing them reported by IIS.
Here’s my nginx site config:
server {
    listen 80;
    server_name www.mydomain.com;

    access_log /var/log/nginx/www.mydomain.com/mydomain.com.log main_ext;

    location / {
        proxy_pass http://mydomain;
        health_check;
    }
}

upstream mydomain {
    zone mydomain 64k;
    sticky cookie srv_id expires=1h domain=.mydomain.com path=/;
    server IP;
}
I really need to start pushing traffic through these servers, as the IIS boxes are struggling with the load, but I need to get to the bottom of the errors before we can sign off the switch-over.
I've tried changing the proxy timeout settings and binding the proxy onto addresses on the private NICs.
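For context, the sort of tweaks I tried look roughly like this (the values and the bind address are placeholders, not what's actually deployed):

```nginx
location / {
    proxy_pass http://mydomain;
    health_check;

    # Timeouts experimented with (nginx defaults are 60s each)
    proxy_connect_timeout 30s;
    proxy_send_timeout    120s;
    proxy_read_timeout    120s;

    # Bind outgoing upstream connections to the private NIC
    proxy_bind 10.0.0.5;  # placeholder private address
}
```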
Anyone any ideas?
So far, what I have experienced with NGINX is that HTTP 499 errors (better known as `Client Connection Closed` or `Failed to load: Connection reset by peer`) were caused by wrong permissions under `/var/lib/nginx`.

Since I switched to a containerized web server environment using Docker, we picked NGINX to serve as a reverse proxy for our containers without having to map host ports in every container.
During the recycling of the main `proxy` container in our environment, the permissions on `/var/lib/nginx` get messed up, and that is the temporary folder NGINX uses to store/retrieve requests sent through the `proxy_pass` directive.

Recursively `chown`ing that directory to the `user:group` mapped to the nginx process and, where necessary, fixing permissions to `775` was the solution to the mysteriously closing connections while our applications loaded.

We also needed to proxy HTTP headers so that our applications would recognize that they had been handed legitimately proxied requests (and not drop/abort the connection), and so that they had enough information to respond properly.
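For the record, the permission fix described above boils down to something like the following (the `www-data` user is an assumption; check the `user` directive in your nginx.conf or your container image for the actual worker user):

```shell
# Fix ownership and permissions on nginx's proxy temp directory.
# www-data:www-data is a guess - substitute whatever user:group
# your nginx workers actually run as.
chown -R www-data:www-data /var/lib/nginx
chmod -R 775 /var/lib/nginx
```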
From our staging `/etc/nginx/snippets/proxy-headers.conf`:
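The original file contents aren't reproduced here, but a typical proxy-headers snippet looks roughly like this (a hedged reconstruction using the standard forwarding headers, not the author's exact file):

```nginx
# /etc/nginx/snippets/proxy-headers.conf (sketch)
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port  $server_port;
```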
Your needs might (and probably will) diverge; adjust and use those as a starting point if desired. We have also tried to improve our virtual-host configurations by using snippets to share common headers, SSL parameters (allowed ciphers, for example), and so on across every configuration that needs them.
To use a snippet like that, simply include it in your virtual host, given the snippet path above:
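For example (the server name and upstream here are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    # Pull in the shared proxy headers from the snippet above
    include snippets/proxy-headers.conf;

    location / {
        proxy_pass http://my_upstream;
    }
}
```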
PS: The proxy-headers.conf snippet above was taken from an SSL-enabled virtual host. The SSL-related headers are definitely not needed if you are talking plain HTTP.