So we have a bunch of SSDs and would like to put them in a zpool on a Solaris 10 system. The file system will be exported via NFS to about 50 Ubuntu clients, which will mount it as their $HOME. I expect the bandwidth will not exceed 1 Gbit/s, but latency should be as low as possible because of the desktop environments running on the clients.
- What is a good configuration for such a zpool? Currently we have one `raidz2` with 8 disks + 2 hot spares, but I've read that since a single `raidz` is just a single vdev, its performance is limited to the speed of a single disk.
- What are the critical NFS server/client parameters that can be tuned? Currently we use NFSv3 with `noatime` and the default `rsize`/`wsize`, but perhaps there is a better choice for the clients running on Ubuntu.
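For context, this is roughly what the client-side mount looks like today; the server name and export path below are placeholders, not our real ones:

```shell
# /etc/fstab entry on an Ubuntu client (hypothetical server/export names).
# noatime suppresses access-time updates; rsize/wsize are left at the
# values negotiated with the server.
nfsserver:/export/home  /home  nfs  vers=3,noatime,hard,intr  0  0
```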
Do you really want to use Solaris?
Ubuntu can run ZFS. Other Linux variants can support ZFS. There are other Solaris-derived operating systems that can do it (OmniOS, OpenIndiana, etc.). Oh, and FreeBSD... Not to mention the appliance solutions: Zetavault, QuantaStor, napp-it, Nexenta, Cloudbyte...
Anyway, I would use ZFS mirrors. Multiple mirrored vdevs give you better random I/O and an easy option to expand later. I'm not a fan of RAIDZ1/2/3 unless capacity is a concern, but since you have "a bunch" of disks, maybe that's not an issue. What type of disks are they?
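As a sketch (pool name and device names are hypothetical), a striped-mirror layout of your 8 disks + 2 spares would look something like this; each `mirror` pair is an independent vdev, so random IOPS scale with the number of pairs rather than being pinned to one raidz vdev:

```shell
# Four mirrored pairs plus two hot spares (device names are illustrative).
zpool create tank \
  mirror c0t0d0 c0t1d0 \
  mirror c0t2d0 c0t3d0 \
  mirror c0t4d0 c0t5d0 \
  mirror c0t6d0 c0t7d0 \
  spare c0t8d0 c0t9d0
```

You give up capacity versus raidz2 (50% usable), but expanding later is a single `zpool add tank mirror <diskA> <diskB>`.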
As far as NFS settings go, that will depend on the OS you choose. Also think about your NFS export settings (sync versus async) and possibly increase the number of NFS server threads.
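If you do stay on Solaris 10, the server thread count is set via `NFSD_SERVERS` in `/etc/default/nfs`; the value below is only an illustrative starting point for ~50 desktop clients, not a tuned recommendation:

```shell
# Check the current nfsd thread limit (Solaris 10 keeps it in /etc/default/nfs).
grep NFSD_SERVERS /etc/default/nfs

# Raise it, e.g. NFSD_SERVERS=256, then restart the NFS server service:
svcadm restart svc:/network/nfs/server
```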