We have a Squid transparent proxy running; it's great, awesome dare I say. The problem is that a few seemingly random sites don't play well with Squid, Cox.com being one in particular. Right now we just add an iptables rule that sends requests to that site's IP out directly instead of redirecting them to the Squid cache.
It would be awesome to have an ACL of "bad" sites set up in Squid so that if a client asks for one of these sites, it lets them access it directly, avoiding the Squid proxy altogether. Is that possible? Or is iptables the best solution?
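For reference, the bypass we have today looks roughly like this (the interface, the destination IP, and the Squid port are placeholders for our real values):

    # Transparent proxy: redirect outbound HTTP from the LAN to Squid on port 3128
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128

    # Bypass: accept traffic to the problem IP before the REDIRECT rule can match it
    iptables -t nat -I PREROUTING -i eth0 -p tcp -d 192.0.2.10 --dport 80 -j ACCEPT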
I don't think the always_direct configuration directive is the appropriate choice here: it only tells Squid not to contact parent or peer caches for matching requests, it does not affect whether the web site itself will be cached.
Take a look at the Squid FAQ entry "How do I configure Squid not to cache a specific server?" for details on the cache directive and ACLs.
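If all you need is to stop caching those sites (requests still pass through Squid), something along these lines in squid.conf should do it; the ACL name and domain are only examples:

    # ACL matching the troublesome destination
    acl no_cache_sites dstdomain .cox.com

    # Never cache responses for matching requests
    cache deny no_cache_sites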
If you want to AVOID Squid completely, adding exceptions to the transparent proxy iptables redirect rule is the way to go.
You can, however, create an acl in Squid for the always_direct directive, which, per the Squid docs, forwards matching requests directly to the origin servers instead of through any cache peer. It doesn't work in all cases; sometimes just avoiding the proxy completely will do.
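A minimal sketch of that in squid.conf (the ACL name and domain are just examples):

    # Sites that must never go through a parent or sibling cache
    acl direct_sites dstdomain .cox.com

    # Forward matching requests straight to the origin servers
    always_direct allow direct_sites

Note that always_direct only changes peer selection; the requests still flow through your Squid.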
EDIT: If you use something like Shorewall you can create lists that make the exceptions to the redirect rule easier to manage, but that may be overkill.
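If I remember the Shorewall rules format correctly, the exception can live in the ORIGINAL DEST column of the REDIRECT rule in /etc/shorewall/rules (the IP is a placeholder; check the Shorewall documentation for the exact column layout):

    #ACTION    SOURCE  DEST  PROTO  DEST PORT(S)  SOURCE PORT(S)  ORIGINAL DEST
    REDIRECT   loc     3128  tcp    80            -               !192.0.2.10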
Use "cache" TAG in squid.conf
/etc/squid/acl-dstdomain-localnetsites.cfg
/etc/squid/squid.conf
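A sketch of how those two files might fit together (the ACL name and the choice of cache deny are assumptions; adapt them to your policy):

    # /etc/squid/acl-dstdomain-localnetsites.cfg -- one destination domain per line
    .cox.com
    .example.lan

    # /etc/squid/squid.conf
    acl localnetsites dstdomain "/etc/squid/acl-dstdomain-localnetsites.cfg"
    cache deny localnetsites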
At work we use lots of different proxies, depending on the traffic flow (which application is involved) and your location.
Instead of doing per-proxy configuration, the user's web browser does all the work of finding the correct proxy, with the help of an automatic proxy configuration (PAC) script.
It works quite well; however, you may have to rethink your transparent redirection to achieve it, since a PAC script only helps clients that are actually configured to use it.
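A minimal PAC sketch of the idea; the proxy host, port, and domain are placeholders:

    function FindProxyForURL(url, host) {
        // Problem sites bypass the proxy entirely
        if (dnsDomainIs(host, ".cox.com")) {
            return "DIRECT";
        }
        // Everything else goes through Squid, falling back to direct if it is down
        return "PROXY squid.example.com:3128; DIRECT";
    }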