I am about to deploy a brand new node.js application, and I need some help setting this up.
My current setup is as follows:
I have Varnish running on external_ip:80
I have Nginx behind running on internal_ip:80
Both listen on port 80, one on the internal interface and one on the external interface.
NOTE: the node.js app runs on WebSockets
Now I have my new node.js application, which will listen on port 8080.
Can I set up Varnish so that it sits in front of both Nginx and node.js?
Varnish has to proxy the WebSocket traffic to port 8080, but static files such as CSS, JS, etc. have to go through port 80 to Nginx.
Nginx does not support WebSockets out of the box, otherwise I would do a setup like:
varnish -> nginx -> node.js
Having just set up a project that is essentially identical to what you describe, I'll share my approach - no guarantees that it is 'the best', but it does work.
My server stack is Varnish in front of Nginx and Node.js.
My Node.js app uses WebSockets (socket.io - v0.9.0) and Express (v2.5.8) - and is launched using forever. (The same server also has other sites on it - primarily PHP which use the same instances of Nginx and Varnish).
The basic intention of my approach is as follows: Varnish listens on the public interface and pipes WebSocket connections straight through to Node.js, while every other request is handed to Nginx, which serves the static files itself and proxies the rest to the Node.js app.
Varnish config - /etc/varnish/default.vcl:
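Something along these lines - the backend names, hosts, and ports here are illustrative placeholders for your own values, and the syntax is Varnish 3.x:

```vcl
# Placeholder backends - adjust hosts/ports to match your setup
backend nginx {
    .host = "127.0.0.1";
    .port = "81";
}

backend nodejs {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # WebSocket handshakes carry an Upgrade header:
    # pipe those straight through to Node.js
    if (req.http.Upgrade ~ "(?i)websocket") {
        set req.backend = nodejs;
        return (pipe);
    }
    set req.backend = nginx;
}

sub vcl_pipe {
    # Copy the Upgrade header onto the backend request so
    # the WebSocket handshake survives the pipe
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
    return (pipe);
}
```

The key point is the `return (pipe)` for Upgrade requests - Varnish can't cache or meaningfully inspect a WebSocket connection, so it just shuttles bytes between client and Node.js.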
Nginx config - /etc/nginx/*/example.com.conf:
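Roughly along these lines - the port, paths, and file extensions are illustrative, not a verbatim copy of my file:

```nginx
server {
    listen 81;                    # internal port Varnish proxies to (placeholder)
    server_name example.com;
    root /var/www/example.com/public;

    # Static assets: serve from disk if present, otherwise fall
    # through to the Node.js app
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        try_files $uri @node;
        expires max;
    }

    # Everything else goes to the Node.js app
    location / {
        proxy_pass http://127.0.0.1:8080;
    }

    location @node {
        proxy_pass http://127.0.0.1:8080;
    }
}
```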
I am not particularly crazy about the repetition of the proxy_pass statement, but haven't gotten around to finding a cleaner alternative yet, unfortunately. One approach may be to have a location block specifying the static file extensions explicitly and to leave the proxy_pass statement outside of any location block.
A few settings from /etc/nginx/nginx.conf:
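The relevant lines are roughly as follows (values illustrative):

```nginx
http {
    # Compress text-based responses before they reach Varnish
    gzip              on;
    gzip_types        text/css application/javascript application/json;

    # Keep upstream connections open between requests
    keepalive_timeout 65;
}
```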
Among my other server blocks and settings, I also have gzip and keepalive enabled in my nginx config. (As an aside, I believe there is a TCP module for Nginx which would enable the use of websockets - however, I like using 'vanilla' versions of software (and their associated repositories), so that wasn't really an option for me).
A previous version of this setup resulted in an unusual 'blocking' behaviour with the piping in Varnish. Essentially, once a piped socket connection was established, the next request would be delayed until the pipe timed out (up to 60s). I haven't yet seen the same behaviour recur with this setup - but would be interested to know if you see anything similar.