We have a setup where the front-facing HTTP/2 server is HAProxy and multiple backend servers run Apache+PHP. We have set the HAProxy configuration option http-buffer-request. End users upload files (using a basic file upload with <form> and method POST) and small files are handled correctly. However, when an end user uploads e.g. a 50 MB file, it seems that HAProxy does not buffer the uploaded file before connecting to the backend server.
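For reference, a minimal sketch of the relevant parts of the configuration (backend names, server addresses and the certificate path are placeholders, not our real values):

    frontend https-in
        bind :443 ssl crt /etc/haproxy/certs/example.pem alpn h2,http/1.1
        mode http
        # wait for the request body (up to the buffer size) before
        # selecting and connecting to a backend server
        option http-buffer-request
        default_backend apache-php

    backend apache-php
        mode http
        server app1 192.0.2.11:80 check
        server app2 192.0.2.12:80 check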
We have a pretty short timeout for the backend server because the expected use case is just transferring the file from HAProxy to the backend server over a 1 Gbps connection in the local network. The HAProxy->backend connection is currently limited to 35 seconds (timeout server 35s). However, when the end user's upload is big and his or her upload bandwidth is low, the upload fails unless it completes within the timeout server limit. In the case of Google Chrome, the browser automatically retries and submits the file a second and a third time, before finally displaying the error ERR_HTTP2_SERVER_REFUSED_STREAM.
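The timeout section currently looks roughly like this (a sketch; timeout server 35s is the real value, the others are illustrative):

    defaults
        mode http
        timeout connect 5s
        timeout client  60s
        # slow uploads hit this limit once HAProxy has already
        # opened the connection to the backend
        timeout server  35s
        # time allowed for receiving the request; this is the
        # timeout that option http-buffer-request waits against
        timeout http-request 60s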
If I've understood correctly, HAProxy's reverse proxy buffering is configured with the option tune.buffers.limit, which defaults to zero, meaning unlimited. Is there some other limit that prevents big uploads from being buffered at the HAProxy level? The HAProxy server only runs HAProxy, does TLS termination, and has 16 GB of RAM for this, so I'm okay with buffering in RAM if buffering uploads to disk is not supported. Maybe limiting buffering to 14 GB would be good, though.
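In other words, I expected globals along these lines to be enough (a sketch; both settings are real global options, and the values shown are, as far as I know, the documented defaults):

    global
        # 0 = no limit on the number of buffers HAProxy may allocate
        tune.buffers.limit 0
        # size of one buffer in bytes (default 16384); note this is
        # far smaller than a 50 MB upload, in case buffering of a
        # single request is capped at one buffer
        tune.bufsize 16384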
We want to allow uploads up to 250 MB, and preferably the whole upload should be buffered by HAProxy before HAProxy even connects to Apache to process the upload. In practice, HAProxy should just accept the incoming HTTP/2 connection, read all the request headers and the whole body up to e.g. 260 MB, and only then connect to the backend server. Currently, it seems that HAProxy connects to the backend server much sooner and then spends timeout server waiting for bytes from the end user, and the connection is terminated by HAProxy.
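Conceptually, this is the behavior we are after (a sketch; req.body_size is a real sample fetch, but it only sees the part of the body HAProxy has buffered, and whether HAProxy can actually buffer 260 MB like this is exactly my question):

    frontend https-in
        bind :443 ssl crt /etc/haproxy/certs/example.pem alpn h2,http/1.1
        mode http
        # ideally: absorb the whole body here before touching Apache
        option http-buffer-request
        # reject bodies over 260 MB (260 * 1024 * 1024 bytes) so
        # oversized uploads never reach the backend
        http-request deny if { req.body_size gt 272629760 }
        default_backend apache-php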