I've read a bunch of different questions about what Nginx configuration is appropriate for SSE, and they give confusing results about which settings to use:
- https://stackoverflow.com/questions/17529421/sending-server-sent-events-through-a-socket-in-c
- https://stackoverflow.com/questions/13672743/eventsource-server-sent-events-through-nginx
- https://stackoverflow.com/questions/21630509/server-sent-events-connection-timeout-on-node-js-via-nginx
So what's the right answer?
Long-running connection
Server-Sent Events (SSE) use a long-running HTTP connection, so for starters we need this:
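Something along these lines in the Nginx proxy configuration (a minimal sketch; the location path and upstream address are placeholders for your own setup):

```nginx
location /events {
    proxy_pass http://127.0.0.1:8080;   # your app server; address is a placeholder
    proxy_http_version 1.1;             # HTTP/1.1 keeps the upstream connection open
    proxy_set_header Connection "";     # clear the default "Connection: close" sent upstream
}
```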
Chunked Transfer-Encoding
Now, an aside: SSE responses don't set a Content-Length header because they cannot know how much data will be sent; instead they need to use the Transfer-Encoding header[0][1], which allows for a streaming connection. Also note: if you don't add a Content-Length, most HTTP servers will set
Transfer-Encoding: chunked;
for you. Strangely, HTTP chunking is warned against and causes confusion. The confusion stems from a somewhat vague warning in the Notes section of the W3 EventSource description[2], which cautions that HTTP chunking can have unexpected negative effects on the reliability of the protocol and suggests disabling it where possible. Which would lead one to believe `Transfer-Encoding: chunked;`
is a bad thing for SSE. However: this isn't necessarily the case; it's only a problem when your webserver is doing the chunking for you (not knowing information about your data). So, while most posts will suggest adding `chunked_transfer_encoding off;`, this isn't necessary in the typical case[3].
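If you do decide to turn Nginx's own chunking off anyway, it can at least be scoped to the SSE location instead of the whole server; a sketch (the path is illustrative):

```nginx
location /events {
    chunked_transfer_encoding off;  # usually unnecessary for SSE, per the above
}
```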
Buffering (the real problem)
Where most problems come from is having any type of buffering between the app server and the client. By default[4], Nginx uses `proxy_buffering on` (also take a look at `uwsgi_buffering` and `fastcgi_buffering`, depending on your application) and may choose to buffer the chunks that you want to get out to your client. This is a bad thing because it breaks the realtime nature of SSE.

However, instead of turning `proxy_buffering off` for everything, it's actually best (if you're able to) to add `X-Accel-Buffering: no` as a response header in your application server code, so that buffering is turned off only for the SSE response and not for all responses coming from your app server. Bonus: this will also work for `uwsgi` and `fastcgi`.
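For example, a minimal app-server sketch in Python with Flask (the framework, route, and port are illustrative; any framework that lets you set response headers works the same way):

```python
import time

from flask import Flask, Response

app = Flask(__name__)

def event_stream():
    # Emit one SSE-formatted message per second; this also doubles as the
    # ping mechanism mentioned below, so the connection never sits idle.
    while True:
        yield f"data: ping {time.time()}\n\n"
        time.sleep(1)

@app.route("/events")
def events():
    return Response(
        event_stream(),
        mimetype="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "X-Accel-Buffering": "no",  # tell Nginx not to buffer this response
        },
    )

if __name__ == "__main__":
    app.run(port=8080)
```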
Solution
And so the really important settings are actually the app-server response headers:
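Concretely, something along these lines on the SSE response itself (the Cache-Control value is a common choice rather than a strict requirement):

```http
Content-Type: text/event-stream
Cache-Control: no-cache
X-Accel-Buffering: no
```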
And potentially the implementation of some ping mechanism so that the connection doesn't stay idle for too long. The danger of an idle connection is that Nginx will close it, as set using the `keepalive` setting.

[0] https://www.rfc-editor.org/rfc/rfc2616#section-3.6
[1] https://en.wikipedia.org/wiki/Chunked_transfer_encoding
[2] https://www.w3.org/TR/2009/WD-eventsource-20091029/#text-event-stream
[3] https://github.com/whatwg/html/issues/515
[4] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
[5] https://www.rfc-editor.org/rfc/rfc7230#section-6.3
[6] https://gist.github.com/CMCDragonkai/6bfade6431e9ffb7fe88