We have Apache set up to gzip-compress HTML pages before they are sent to the client browser.
However, some of our pages are slowish to generate, and it seems that Apache holds on until it has the complete page, compresses it, and then sends it to the browser.
There are big chunks of the page (the main important bits) that are actually generated and output fairly quickly.
Is it possible to configure Apache to start compressing and sending data for the page as soon as the script starts outputting something? If it is, can you offer any help on how to do this?
If not, can you suggest any other way to get gzip compression working for the server?
The scripts that generate the pages are written in PHP. We are using Apache 2.0 on Linux.
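
For context, the compression is enabled with mod_deflate along these lines (a rough sketch; the exact directives and MIME types in our config may differ):

    # Sketch of a typical mod_deflate setup on Apache 2.x
    LoadModule deflate_module modules/mod_deflate.so

    <IfModule mod_deflate.c>
        # Compress text responses before they leave the server
        AddOutputFilterByType DEFLATE text/html text/plain text/css
    </IfModule>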
We had this same problem with a Perl backend. I'm looking up the docs that pointed me at the solution. It ultimately had to do with gzip trying to compress the entire document in order to compute the Content-Length header.
We had some long-running scripts that this ruined the user experience for: instead of getting output incrementally, users were waiting 2-3 minutes with no activity!
Update:
I'm afraid I can't find it. Reviewing our activity logs, it seems I simply disabled gzip on our web server, then moved to an nginx-based frontend for this and other reasons (nginx will gzip with chunked encoding without having to send a Content-Length header).
Try upgrading to Apache 2.2; IIRC, newer versions of Apache handle the chunked/gzip combination much better.
It's called 'early flushing', and it's mostly a matter of just calling flush() before the entire page has been generated. Normally, an optimised PHP server will, as you've found, buffer the entire page, but a deliberate call to flush() overrides that. The performance rules page at Yahoo has more information: http://developer.yahoo.com/performance/rules.html#flush
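
A rough sketch of the idea (run_slow_query() is just a placeholder for whatever is slow on your pages):

    <?php
    // Send the top of the page as soon as it is ready so the browser
    // can start rendering and fetching CSS/JS while the rest is built.
    echo '<html><head><title>Report</title></head><body>';
    echo '<div id="header">The quick, important bits</div>';
    flush(); // push the output so far out to Apache and on to the client

    // The slow part runs after the user already sees the top of the page.
    $rows = run_slow_query(); // placeholder for the expensive work
    foreach ($rows as $row) {
        echo '<p>' . htmlspecialchars($row) . '</p>';
    }
    echo '</body></html>';
    ?>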
Call flush() from time to time, or after each of your blocks; unless you use ob_start(), Apache will send chunked responses.
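
For example (a sketch; generate_blocks() is a stand-in, and if an output buffer happens to be active you need ob_flush() as well):

    <?php
    // Emit each block as soon as it is ready; with no ob_start() in the
    // way, Apache can stream these to the client as chunked responses.
    foreach (generate_blocks() as $block) { // generate_blocks() is a placeholder
        echo $block;
        if (ob_get_level() > 0) {
            ob_flush(); // drain PHP's own output buffer first, if one exists
        }
        flush(); // then push the data out to the web server
    }
    ?>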