We use Squid 3.0 on CentOS 5.3 and currently have 20 users (Internet Explorer) using this proxy. The problem is that access through the proxy is slow: even a simple web page like google.com(.au) takes about 5 seconds longer than a direct connection without the proxy, so there is roughly a 2-5 second delay. Disabling the cache for google.com(.au) didn't help, and explicitly defining dns_nameservers makes no difference.
Server parameters: Dual-Core AMD Opteron(tm) Processor 2220, 6 GB memory, 60 GB SCSI HDD
cache_mem 256 MB
cache_dir ufs /usr/local/squid/var/cache 30000 16 256
maximum_object_size_in_memory 256 KB
minimum_object_size 0 KB (tried values from 0 to 200 KB; no real difference, the delay is still there)
maximum_object_size 32 MB
How would you change these settings in squid.conf based on the server specifications? What can cause the delay? Also, for a bigger web page like yahoo.com.au, is there a way to receive part of the page from the cache first and then the rest (images last)? At the moment nothing appears for 15 seconds and then the whole page appears at once.
My first hunch would be to sniff the traffic using tcpdump and load the capture into Wireshark to see where the delay is happening.
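A capture command along these lines should do it (the interface name `eth0` and the output path are assumptions; adjust them for your server):

```shell
# Capture full packets (-s 0) on the proxy's interface and write them
# to a file that can be opened in Wireshark later.
tcpdump -i eth0 -s 0 -w /tmp/squid.pcap
```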
(If you're doing it over ssh, add "not port ssh" to the end.)
Once you load this into Wireshark you should be able to see where the delay appears to be. I'd recommend doing this during a quiet time so there isn't too much traffic obscuring your view. If you can be the only person accessing the proxy at the time, even better.
Likely delays are:
For some web pages, it is not possible to draw the page before nearly the entire page is downloaded, images and all. To speed up such a page, there are a few things you can do:
In days gone by, I used to browse with Internet Explorer for Macintosh (68k in those days). I well remember seeing the "newspaper" icon that told you to wait as IE was computing how to display the page (not getting data: computing...)
Another thing to be aware of: some pages explicitly request that they not be cached, and it is up to the cache administrator whether those requests are honoured. Typically these are pages that change often, or that the web admin does not want stored elsewhere. For such a page you incur extra overhead regardless: the web cache must still process the request on your behalf, even though the page is never stored in the cache at all.
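As an illustration, a site can mark its responses as uncacheable with headers like these (illustrative values, not from any particular site):

```
HTTP/1.1 200 OK
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
```

With `no-store`, the proxy must fetch the page from the origin server on every request, so those requests see no benefit from the cache.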
I would agree that sniffing traffic is a good way to determine why things are being delayed. The key question is: which part of the network stream is actually causing the delay?
Wireshark (and tcpdump) have a large set of filters that you can use to clean up the traffic: the only reason you'd really have to wait until a quiet time is to avoid a massive capture file. However, you can get a reasonable set of data just by limiting yourself to direct-to-proxy network traffic:
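Something like the following restricts the capture to traffic to and from the proxy port (again, `eth0` and the output path are assumptions):

```shell
# Capture only packets to/from the Squid listening port.
tcpdump -i eth0 -s 0 -w /tmp/proxy.pcap port 3128
```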
(Port 3128 is the standard squid port: use whatever is appropriate for you.)
Using Wireshark, you can instantly filter on a single TCP stream, so you don't have to worry about different streams being mixed together either.
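For example, a display filter like this isolates one conversation (the stream index is just whichever connection you're interested in):

```
tcp.stream eq 0
```

Alternatively, right-click a packet and choose "Follow TCP Stream", which applies the equivalent filter for you.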
Also look at the logs in /var/log/squid and examine what is happening to each request: is it served from the cache? Is it fetched from the remote site? Try repeated requests: does the page come up quicker the second time?
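In the native log format, the result code (TCP_HIT, TCP_MISS, etc.) and the elapsed time in milliseconds tell you this directly. A sketch of what to look for, using made-up sample lines rather than a real log (your log lives under /var/log/squid or your install's equivalent):

```shell
# Two illustrative access.log entries: field 2 is elapsed milliseconds,
# field 4 is the cache result code.
cat > /tmp/sample_access.log <<'EOF'
1244000001.123   5250 192.168.1.10 TCP_MISS/200 4512 GET http://www.google.com.au/ - DIRECT/74.125.0.1 text/html
1244000002.456      2 192.168.1.10 TCP_HIT/200 4512 GET http://www.google.com.au/ - NONE/- text/html
EOF
# A high MISS count plus large elapsed times points at upstream fetches
# (or DNS) rather than the cache itself.
grep -c 'TCP_MISS' /tmp/sample_access.log
grep -c 'TCP_HIT'  /tmp/sample_access.log
```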
I found that it was a DNS issue when I experienced the exact same problem. Once I changed the DNS servers in squid.conf to our ISP's, the lag was gone.
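The relevant directive is `dns_nameservers` (the addresses below are documentation placeholders; substitute your ISP's resolvers):

```
# squid.conf: point Squid directly at fast resolvers instead of
# whatever /etc/resolv.conf provides.
dns_nameservers 203.0.113.53 203.0.113.54
```

Restart or reconfigure Squid after the change for it to take effect.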