I'm planning to deploy some kiosk computers and would like to leave them with a small pendrive as the boot disk, keeping the rest on an easy-to-back-up server, à la LTSP.
Right now I'm pondering two options: an NFSed /home, or a local copy of ~/ that is copied on login and rsynced on logout.
My fears are that working with files might get too slow, or that my network might get clogged.
I use NFS for my home directories in our production environment. There are a couple of tricks.
Don't NFS mount to /home - that way you can have a local user that allows you in in the event that the NFS server goes down. We mount to /mnt/nfs/home instead.
Use soft mounts and a very short timeout - this will prevent processes from blocking forever.
Use the automounter. This will keep resource usage down and also means that you don't need to worry about restarting services when the NFS server comes back up after going down for some reason.
Use a single sign-on system so you don't run into permission related issues. I have an OpenLDAP server.
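A minimal autofs sketch for the layout above (the server name nfsserver and the export path /export/home are placeholders; adjust the options to taste):

    # /etc/auto.master - hand /mnt/nfs/home over to the automounter
    /mnt/nfs/home  /etc/auto.home  --timeout=60

    # /etc/auto.home - mount each user's directory on demand
    # (soft mount with a short timeout, as suggested above)
    *  -fstype=nfs,soft,timeo=15,retrans=2  nfsserver:/export/home/&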
HowtoForge posted an article titled Creating An NFS-Like Standalone Storage Server With GlusterFS On Debian Lenny, you may want to check it out.
The GlusterFS project page has a short description of why it's a feasible alternative to NFS; more information can be found in the project documentation.
Another nice thing about GlusterFS is that if you need more space on your SAN, you just add another storage brick (server node) and can grow your storage in parallel as the need arises.
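With GlusterFS versions that ship the gluster CLI, adding a brick looks roughly like the sketch below (the volume name homevol, the server name and the brick path are made up for illustration):

    # add a brick from a new server node to the existing volume
    gluster volume add-brick homevol server3:/export/brick1
    # spread existing files across the enlarged volume
    gluster volume rebalance homevol start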
Be careful with the soft mounts! Soft mounting an NFS filesystem means IO will fail after a timeout occurs. Be very sure that is what you want on users' home directories! My guess is you don't. Using a hard mount on home directories in combination with the intr option feels a lot safer here.
A hard mount will not time out: IO operations are retried indefinitely. The intr option makes it possible to interrupt hung NFS operations. So if you mount the export hard and the server fails, your session can lock up, but intr lets you break out of it; the combination is pretty safe and ensures you will not easily lose a user's data.
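For example, a hard mount with intr in /etc/fstab might look like this (the server name and export path are placeholders):

    # hard mount: IO is retried forever, intr lets you kill hung processes
    nfsserver:/export/home  /mnt/nfs/home  nfs  hard,intr  0  0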
Anyway, autofs makes this all even easier.
One thing to note is that when the NFS server is out, your mounts will freeze. A soft mount will not block, so the freeze itself can be avoided, but that does not solve the problem for home directories: without a home directory, the user is screwed anyway.
Even when the NFS server recovers, unless you do something about it, the freeze problem will remain - you'll have to kill the process on the mounting machine and remount. The reason for this is that when the NFS server comes back up, it assigns a different fsid - so you can at least fix this problem by hard-coding the fsids on the NFS server, as described in the exports(5) man page. While that suggests the fsid should stay stable as long as the major/minor numbers do not change (which they usually don't, except when you're exporting SAN/multipath volumes, where they may change), I've found that hard-coding it completely removed the problem - i.e., if the NFS server comes back, the connection is restored quickly - and I still don't really know why this has made a difference for devices such as /dev/sdaX. I should point out that my argument is largely anecdotal - it doesn't actually make sense why it has fixed the problem, but it "seems" to have fixed it - somehow - there are probably other variables at play here that I've not yet discovered. =)
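As a sketch, hard-coding the fsid in /etc/exports might look like this (the export path and client network are placeholders):

    # /etc/exports - pin the filesystem id so it survives server restarts
    /export/home  192.168.0.0/24(rw,sync,no_subtree_check,fsid=1)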
Some general advice that will apply no matter which network filesystem you adopt: many programs cache data in the user's home directory, which usually does more harm than good when the home directory is accessed over a network.
These days, you can tell many programs to store their caches elsewhere (e.g., on a local disk) by setting the XDG_CACHE_HOME environment variable in a login script; a sketch follows below. Lots of programs (e.g., Firefox) still require manual configuration, however, so you will probably have to do some extra work to identify and configure them in a uniform manner for all your users.

A lot of places I have worked use NFS-mounted home directories. There usually isn't a huge difference in performance (and kiosk users are probably a bit less demanding than developers who know how to get hold of their local IT guy). One problem I have seen is what happens when I'm logged into a Gnome desktop and the NFS server goes away for whatever reason. Things get really unresponsive.
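Regarding the XDG_CACHE_HOME suggestion above, a minimal login-script sketch, assuming /var/tmp sits on a local disk (the path is illustrative):

    # /etc/profile.d/local-cache.sh - keep per-user caches off the network
    export XDG_CACHE_HOME="/var/tmp/${USER}/cache"
    mkdir -p "$XDG_CACHE_HOME"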
I use an NFSed home and it works fine, but you must make sure the network is fast enough and that it will never be down.
On a practical basis, NFS performs well for home directories if there's a 100 Mbit switched network or better. For more than 10-20 kiosks, the server should have gigabit connectivity. You won't win performance contests, but things like Firefox and OpenOffice will work okay.
Copying the home directory at login will be a major pain in terms of delays (on a 100 Mbit network that's at most about 12 MB/s, so a 100 MB home directory takes close to 10 seconds). Rsync will thrash you syncing the web browser cache: 10 minutes and 500 files hurt.
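If you do go the copy/rsync route, excluding cache directories takes most of the sting out of it; a rough sketch of a logout sync (the server name homeserver and the paths are made up):

    # push the local copy back to the server at logout,
    # skipping browser caches and other throwaway data
    rsync -a --delete \
        --exclude='.cache/' \
        --exclude='.mozilla/firefox/*/Cache/' \
        --exclude='.thumbnails/' \
        "$HOME/" "homeserver:/export/home/$USER/"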
Have a look at cachefilesd. I haven't used it myself, but it looks promising.
Also, don't forget to tune the rsize and wsize parameters and use Jumbo frames if possible.
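As an illustration, the relevant knobs might look like this (values are examples rather than recommendations, and the fsc option only helps if cachefilesd is actually running; server and paths are placeholders):

    # /etc/fstab - larger read/write sizes plus fsc, so that
    # cachefilesd can cache file data on the local disk
    nfsserver:/export/home  /mnt/nfs/home  nfs  hard,intr,fsc,rsize=32768,wsize=32768  0  0

    # jumbo frames: raise the MTU on the kiosk NIC (the switch and
    # server must be configured to match, or this will hurt instead)
    ip link set dev eth0 mtu 9000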
Note that in some cases (e.g., private home use or a kiosk config), the simplest thing to do might be to use symlinks for important top-level directories inside the home directory. That makes it a lot easier to nuke the "satellite" home if the settings there get screwed up, need to be kept local, etc.
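For instance, with the bulky directories kept on the server and only the settings kept local, the setup might look like this (paths are illustrative):

    # keep the big, important directories on the NFS share, so the
    # local "satellite" home can be wiped without losing anything
    ln -s /mnt/nfs/home/$USER/Documents ~/Documents
    ln -s /mnt/nfs/home/$USER/Pictures  ~/Pictures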