I have a LEMP stack, with Nginx sitting in front of PHP-FPM. Because some of the sites are heavy and there's opcode caching, PHP-FPM is set up with only 5 child processes. The aim is that each child can deal with any request in less than half a second and then move on to the next request.
One problem I've found is that if a big chunk of HTML is being sent out and the user has a slow connection, that PHP worker stays occupied until they've finished downloading.
Because of my current setup, I have a pretty unforgiving timeout inside PHP where the script is killed after 20 seconds. This is to make sure everybody gets a turn, but on a slow connection it can mean the user gets cut off with a 504 Gateway Timeout.
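For reference, the relevant bits of my FPM pool config look roughly like this (path and exact pm settings approximate; the 20-second limit may equally be enforced via max_execution_time in php.ini):

```
; /etc/php/fpm/pool.d/www.conf (approximate)
pm = static
pm.max_children = 5            ; only 5 workers, to keep the footprint small

; kill any script that runs longer than 20 seconds so everybody gets a turn
request_terminate_timeout = 20s
```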
I was wondering if there was some sort of buffering solution that I could implement within or just behind Nginx that passed the request through and then... well... buffered the response into its own cache and fed that to the client as and when they could download it. The aim being that the underlying PHP worker can be freed up.
What I'm asking for doesn't have to be PHP-specific. Anything that handles FastCGI, or indeed any Nginx upstream, could hit the same issue.
If you use nginx as a reverse proxy, it will handle your slow clients for you. With response buffering enabled (which is the default), it reads the result from the backend as quickly as the backend can produce it, and then continues streaming it to the client at whatever speed the client can handle. In general, a reverse proxy does this kind of connection management for you, sparing your backend from holding a worker for a long time.
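A minimal sketch of the relevant directives for a FastCGI backend (the socket path and buffer sizes are placeholders to adapt to your setup):

```
location ~ \.php$ {
    fastcgi_pass unix:/run/php-fpm.sock;   # adjust to your FPM socket or port
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    # Response buffering is on by default; these sizes control how much of the
    # backend's response nginx holds in memory before spilling to a temp file.
    fastcgi_buffering on;
    fastcgi_buffer_size 32k;
    fastcgi_buffers 16 16k;

    # Once the whole response fits in the buffers/temp file, the FPM worker is
    # released and nginx drip-feeds the result to the slow client on its own.
    fastcgi_max_temp_file_size 1024m;
}
```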
Have you considered using PHP's output buffering? See http://us2.php.net/ob_start
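A minimal sketch of what that looks like (render_page() is just a stand-in for whatever generates your output; note that the buffered output still has to travel back through FastCGI and nginx once it is flushed):

```
<?php
// Assemble the page in PHP's output buffer instead of pushing it to the
// client piecemeal as each echo/print happens.
ob_start();

echo render_page();   // hypothetical page-rendering function

// Send everything that was buffered in one go and stop buffering.
ob_end_flush();
```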