I have discovered that I can set the TTL in Varnish as follows in my VCL file:
sub vcl_fetch {
    # 1 minute
    set obj.ttl = 1m;
}
But what is the default TTL (assuming the backend sends no Cache-Control header)?
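For reference, this is how I assume the running value can be inspected via varnishadm; the management address and the default_ttl parameter name are assumptions on my part from the docs, so correct me if that's the wrong place to look:

    # show the fallback TTL Varnish uses when the backend sends no
    # caching headers and the VCL does not set obj.ttl itself
    varnishadm -T localhost:6082 param.show default_ttl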
I currently have nginx set up to serve content through Varnish. Nginx listens on port 8000, and Varnish proxies users' requests from port 80 to 8000.
The problem is that on some occasions, particularly when hitting a directory such as site.com/2010, nginx redirects the request to site.com:8000/2010/.
How can I prevent this?
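For context, here is a stripped-down sketch of the relevant nginx server block (the server name and paths are placeholders, not my real config):

    server {
        listen 8000;
        server_name site.com;
        root /var/www/site;
        index index.html;

        location / {
            # requesting /2010 (no trailing slash) triggers nginx's automatic
            # directory redirect, and the Location header it sends back
            # includes the :8000 port that nginx itself listens on
            try_files $uri $uri/ =404;
        }
    }

My assumption is that the redirect comes from nginx's trailing-slash handling and that Varnish simply passes the 301 through to the client unchanged.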
I've seen people recommend chaining all of these together, but they seem to have a lot of overlapping features, so I'd like to dig into why you might want to pass through three different programs before hitting your actual web server.
nginx: web server, reverse proxy, SSL termination, compression, static file serving
varnish: caching reverse proxy (HTTP accelerator)
haproxy: TCP/HTTP load balancer
Is the intent of chaining all of these in front of your main web servers just to get the benefit of each one's primary feature?
It seems quite fragile to have so many daemons chained together doing similar things.
What is your deployment and ordering preference and why?
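To make the question concrete, this is the kind of chain I have in mind; the ports, hostnames and file layout below are made up, so treat it as a sketch of the topology rather than a working config:

    # haproxy.cfg -- terminates client connections on :80 and balances them
    frontend http-in
        bind *:80
        default_backend varnish_pool

    backend varnish_pool
        server varnish1 127.0.0.1:6081 check

    # default.vcl -- Varnish caches on :6081 and fetches misses from nginx
    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

    # nginx.conf -- nginx serves static files and hands the rest to the app
    server {
        listen 8080;
        location /static/ { root /var/www; }
        location / { proxy_pass http://127.0.0.1:9000; }  # application server
    }

My rough understanding is that each layer is kept to its primary job (haproxy for balancing, Varnish for caching, nginx for static files and as the gateway to the app), and that's exactly the assumption I'd like confirmed or challenged.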
I heard recently that Nginx has added caching to its reverse proxy feature. I looked around but couldn't find much info about it.
I want to set up Nginx as a caching reverse proxy in front of Apache/Django: to have Nginx proxy requests for some (but not all) dynamic pages to Apache, then cache the generated pages and serve subsequent requests for those pages from cache.
Ideally I'd want to invalidate the cache in two ways: by setting an expiration period on cached pages, and by explicitly invalidating a cached page when the underlying content changes.
Is it possible to set Nginx to do that? How?
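For what it's worth, this is the sort of configuration I had imagined after skimming the proxy_cache directives; the zone name, paths, upstream address and timings are placeholders, and I don't know whether this is the right approach at all:

    # in the http context
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=django_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;

        # dynamic pages that should be cached
        location /reports/ {
            proxy_pass http://127.0.0.1:8080;        # Apache/Django backend
            proxy_cache django_cache;
            proxy_cache_key "$scheme$host$request_uri";
            proxy_cache_valid 200 10m;               # time-based expiry
        }

        # everything else goes straight to Apache, uncached
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }

The time-based expiry seems covered by proxy_cache_valid, but the explicit invalidation is the part I can't figure out: as far as I can tell stock Nginx has no purge command, so I don't know whether that requires a third-party module or a different approach entirely.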