We are currently considering a solution that results in a separate mount point for every user's home directory. Whereas we used to have at most a couple of mounts per file server on a client, we could now potentially have hundreds of mounts, many from the same file server. This obviously impacts the client, and it also affects the file server, which has far more exports to keep track of. In our environment, we are talking about hundreds of clients per file server and hundreds of users per client (i.e., probably no more than 10,000 exports on a file server).
My question is specifically about the efficacy of this solution. There are other solutions we could implement if this one is bad, but for various reasons, some political, this one has risen to the top. The clients are all Linux, and the file servers are a mix of Linux and Solaris systems. My concern is that the resources the kernel uses to track mounts and shares are finite, but I do not have a good sense of what the actual limits are.
To lessen the load on the client side, if you do go the NFS route, consider using automount (autofs). This mounts the NFS exports on demand as they are requested, rather than keeping them all mounted. Here is a short automount tutorial, and here is an explanation of why.
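A minimal sketch of what that looks like for per-user home directories (the server name and paths here are placeholders, not your environment): a wildcard map mounts each user's home on first access and expires it after the timeout.

    # /etc/auto.master: delegate /home to the auto.home map and unmount
    # entries that have been idle for more than 300 seconds
    /home   /etc/auto.home  --timeout=300

    # /etc/auto.home: wildcard map; "&" expands to the key being looked up,
    # so accessing /home/alice mounts fileserver:/export/home/alice on demand
    *   -rw,hard,intr   fileserver:/export/home/&

With a map like this, a client only ever holds mounts for the users actually logged in, instead of one mount per user all the time.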
On sarge, I think we generally ran into issues at around 30-40 mounts, and we had to restructure our maps so we did fewer mounts.
Just a cut and paste from: http://nfs.sourceforge.net/
Why can't I mount more than 255 NFS file systems on my client? Why is it sometimes even less than 255?
A. On Linux, each mounted file system is assigned a major number, which indicates what file system type it is (e.g., ext3, nfs, isofs), and a minor number, which makes it unique among the file systems of the same type. In kernels prior to 2.6, Linux major and minor numbers have only 8 bits, so they may range numerically from zero to 255. Because a minor number has only 8 bits, a system can mount only 255 file systems of the same type. So a system can mount up to 255 NFS file systems, another 255 ext3 file systems, 255 more isofs file systems, and so on. Kernels after 2.6 have 20-bit-wide minor numbers, which alleviates this restriction.
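(An aside, not part of the quoted FAQ: if you want to see the device numbers in question on a client, something like the following works, assuming mountpoint(1) from util-linux is available.)

    # Print the major:minor device number for every mounted NFS file system;
    # on a pre-2.6 kernel the 8-bit minor is what caps you at 255 mounts
    awk '$3 == "nfs" { print $2 }' /proc/mounts |
    while read -r mnt; do
        printf '%s\t%s\n' "$(mountpoint -d "$mnt")" "$mnt"
    done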
For the Linux NFS client, however, the problem is somewhat worse because it is an anonymous file system. Local disk-based file systems have a block device associated with them, but anonymous file systems do not. /proc, for example, is an anonymous file system, and so are other network file systems like AFS. All anonymous file systems share the same major number, so there can be a maximum of only 255 anonymous file systems mounted on a single host.
Usually you won't need more than ten or twenty total NFS mounts on any given client. In some large enterprises, though, your work and users might be spread across hundreds of NFS file servers. To work around the limitation on the number of NFS file systems you can mount on a single host, we recommend that you set up and run one of the automounter daemons for Linux. An automounter finds and mounts file systems as they are needed, and unmounts any that it finds are inactive. You can find more information on Linux automounters here.
You may also run into a limit on the number of privileged network ports on your system. The NFS client uses a unique socket with its own port number for each NFS mount point. Using an automounter helps address the limited number of available ports by automatically unmounting file systems that are not in use, thus freeing their network ports. NFS version 4 support in the Linux NFS client uses a single socket per client-server pair, which also helps increase the allowable number of NFS mount points on a client.
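(Another aside: a rough way to check the socket usage yourself, assuming the usual netstat is installed, is to count the client's established TCP connections to the NFS port. With NFSv4 you should see roughly one per server rather than one per mount.)

    # Count TCP connections from this client to NFS servers (port 2049);
    # each connection consumes one local port
    netstat -tn | awk '$5 ~ /:2049$/' | wc -l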
If you have a really large number of NFS mount requests being issued, your servers might have a problem with the fact that the mount requests may start coming from non-privileged ports (i.e., ports >= 1024).
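On the Linux file servers, that is typically addressed with the insecure export option, which tells the server to accept requests from non-privileged source ports (the export path and subnet below are made up for illustration):

    # /etc/exports: "insecure" accepts requests that originate
    # from source ports >= 1024
    /export/home  192.0.2.0/24(rw,sync,insecure)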
This is a note I had about a NetApp filer: