I have a 1TB disk on one server that 4 other servers frequently access via NFS to distribute the files over HTTP. I'm seeing a high load on the central server and want to cache these files on the local servers as they rarely change. Is NFS caching suitable or should I be looking at something else?
Thanks
The NFS way:
FS-Cache would likely help if the client servers don't have enough space to keep full copies of all your large files (e.g. you only want or need to cache the most frequently accessed ones).
There are some caveats (noted in Red Hat's documentation). In particular, the cache on the NFS client has to live on a local filesystem that supports the extended attributes FS-Cache uses to keep track of its objects (ext3 with user_xattr, ext4, btrfs, or xfs).
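To give a sense of what the client-side setup looks like, here is a rough sketch; the export path, mount point, and install/start commands are assumptions and vary by distribution:

```
# Install and start the cache daemon (RHEL/CentOS shown; Debian/Ubuntu is similar)
yum install cachefilesd
systemctl enable --now cachefilesd

# The cache directory is set in /etc/cachefilesd.conf (default: /var/cache/fscache)
# and must sit on ext3 (with user_xattr), ext4, btrfs, or xfs.

# Mount the export with the fsc option so NFS reads are cached locally
mount -t nfs -o ro,fsc central-server:/srv/files /mnt/files
```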
The rsync way:
Another alternative is to use rsync and keep a full copy of the files on each system. If the files only change periodically (say, daily or weekly), this may be more attractive simply because there is less complexity to manage and debug.
The downside is that you're now keeping N + 1 copies, where N is the number of systems serving the files, and you'll need some mechanism to run the rsyncs periodically (e.g. scripts plus cron).
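As a rough illustration (the hostname, paths, and schedule are made up), each web server could pull a fresh copy on a schedule like this:

```
# /etc/cron.d/sync-files on each web server: pull an updated copy hourly.
# Assumes passwordless SSH from the web servers to central-server;
# --delete keeps the local tree an exact mirror of the central one.
15 * * * *  root  rsync -a --delete central-server:/srv/files/ /srv/files/
```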
I see you've tagged your question with squid, so you clearly know about it. How are you using it? I think your problem can be solved by using Squid in reverse proxy (httpd-accelerator) mode. If you have that set up properly, you shouldn't have to worry about the NFS side at all.
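For illustration only, a minimal squid.conf for accelerator mode might look roughly like this; the hostnames, site name, and cache size are placeholders, and it assumes the central server can serve the files over HTTP itself:

```
# Listen for client traffic and act as an accelerator for one site
http_port 80 accel defaultsite=files.example.com

# Forward cache misses to the central origin server
cache_peer central-server parent 80 0 no-query originserver name=origin

# 10 GB of local disk cache for the static files
cache_dir ufs /var/spool/squid 10240 16 256

# Only accept requests for our own site and send them to the origin
acl our_site dstdomain files.example.com
http_access allow our_site
http_access deny all
cache_peer_access origin allow our_site
cache_peer_access origin deny all
```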