I've got nginx configured as a front-end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data has been sent.
For example, I've got a view which looks like this:
    from django.http import HttpResponse

    def debug_big_file(request):
        return HttpResponse("x" * 500000)
But when I access that URL through nginx, I only get 65283 bytes:
$ curl https://example.com/debug/big-file | wc
…
curl: (18) transfer closed with outstanding read data remaining
0 1 65283
Note that everything works as expected when accessing gunicorn directly:
$ curl http://localhost:1234/debug/big-file | wc
…
0 1 500000
The relevant nginx config:
    location / {
        proxy_pass http://localhost:1234/;
        proxy_redirect off;
        proxy_headers_hash_bucket_size 96;
    }
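When disk buffering is the suspect, the relevant buffer and temp-path settings can be made explicit rather than left at their defaults. This is a sketch, not my actual config: the buffer values assume 8 KiB memory pages, and the proxy_temp_path shown is a common distro default that varies by build (check `nginx -V` for the compile-time value):

```nginx
location / {
    proxy_pass http://localhost:1234/;
    proxy_redirect off;
    proxy_headers_hash_bucket_size 96;

    # The documented defaults, spelled out (assuming 8 KiB pages):
    proxy_buffering on;
    proxy_buffer_size 8k;
    proxy_buffers 8 8k;
    # Assumed path -- your build's compile-time default may differ.
    proxy_temp_path /var/lib/nginx/proxy;
}
```

The directory given to proxy_temp_path must be writable by the nginx worker user, or responses larger than the in-memory buffers will fail partway through.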
The nginx version is 1.7.0.
Some other facts:
- The number of bytes is consistent from request to request, but it varies with the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283)
- 110k bytes are sent correctly (ie, "x" * 110000 returns all 110,000 bytes), but 120k bytes are not
- tcpdump suggests that nginx is sending a RST packet to gunicorn
Okay! After double-checking the nginx logs, this turned out to be the problem: somehow the permissions on the proxy_temp directory had gotten messed up, which prevented nginx from buffering responses to it.
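That also explains why the cutoff landed right around 64 KiB: nginx holds the first chunk of an upstream response in memory, and only spills the remainder to proxy_temp once those buffers fill. A back-of-the-envelope check, assuming the documented defaults (proxy_buffers "8 4k|8k", i.e. eight one-page buffers) and 8 KiB memory pages:

```python
# nginx's default in-memory proxy buffers, assuming 8 KiB pages.
page = 8 * 1024
proxy_buffers = 8 * page  # proxy_buffers default: "8 4k|8k"

print(proxy_buffers)
```

That prints 65536, just above the 65,283 bytes observed (response headers account for some of the difference). Once the response exceeds what fits in memory, nginx needs a writable proxy_temp directory, and an unwritable one kills the connection at exactly that boundary.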