I have this nginx config on a host (host1):
server {
    ...
    location /foo {
        proxy_pass http://127.0.0.1:8091/;
    }
}
The backend nginx config looks like this:
server {
    listen localhost:8091;
    root /store;
    autoindex on;
}
The backend is actually on a different host (host2), and due to our firewall config I need to reverse-tunnel the connections (this may be irrelevant to the question, but I'm including it for completeness):
ssh -R8091:localhost:8091 host1
This setup is for serving large-ish files (GBs). The problem is that downloads are abruptly terminating/truncating short of their full size, always a bit over 1GB (e.g. 1081376535, 1082474263, ...). There are no errors in the logs, however, and nothing jumps out from verbose debug logging. The response always includes a Content-Length, too.
After some digging I found that there's a proxy_max_temp_file_size directive that defaults to 1GB. Indeed, from inspecting the nginx worker's FDs in /proc, the temp file is being filled up to exactly 1073741824 bytes, at a rate much faster than the downstream client can receive. Lowering it to 1MB mostly makes the problem go away [1] (as would, I imagine, setting it to 0 to disable the temp file altogether). But why would this be a problem in the first place? If there's a timeout, why is there no error message? Why is 1GB the default? And why does the downstream client manage to receive a few additional (varying number of) bytes beyond the 1073741824th byte?
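In case it helps anyone reproduce this, the workaround is just the directive inside the proxying location on host1 (a sketch based on the config above; 1m is the value I tried):

```
location /foo {
    proxy_pass http://127.0.0.1:8091/;
    proxy_max_temp_file_size 1m;   # default is 1024m; 0 disables temp-file buffering
}
```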
Anyway, just wondering if anyone might have an inkling as to what's up. If it makes a difference, both hosts are Ubuntu 12.04.
[1] I say "mostly" because the problem is replaced by another one: downloads now all stop at exactly 2147484825 bytes, which happens to be 0x80000499, but I haven't done enough debugging to determine whether the truncation happens (as I suspect) between the frontend server and the client.
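For anyone checking my arithmetic, here's the quick shell math behind those offsets (nothing nginx-specific, just the numbers from the downloads above):

```shell
# 1073741824 is exactly 1 GiB, i.e. nginx's default proxy_max_temp_file_size:
echo $((1073741824 == (1 << 30)))   # prints 1 (true)

# The second stall point, 2147484825, is 1177 bytes past the 2 GiB mark:
printf '0x%X\n' 2147484825          # prints 0x80000499
echo $((2147484825 - (1 << 31)))    # prints 1177
```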