Short version: mounted datasets are RO, even though I'm trying to make them RW.
Long version: we currently have a working implementation of ZFS on Linux on Wheezy (which I inherited when the previous SE left) and we want to upgrade to Jessie...because...just because. Before upgrading production, I'm trying to replicate that environment (as much as possible) in a VM running on my local machine.
I've created the pool, and created a new dataset via:
root@zfstest1:~# zfs set sharenfs=rw=@10.1.2.3,insecure tank/vmware
root@zfstest1:~# zfs share -a
root@zfstest1:~# showmount -e
Export list for zfstest1:
/tank/vmware 10.1.2.3
I've compared permissions, properties, and everything else I can think of against production, but regardless of whether I mount the dataset on my Mac or on a VMware host, the volume/datastore ends up RO. Default permissions for /tank and its child objects appear to be 755, and the only way I've found to make the mounted volumes on my Mac or VMware host RW is to chmod them to 777.
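One quick way to tell root squashing apart from a plain permissions problem is to try writes as both root and a regular user on the client. This is just a sketch; `/mnt/vmware` is a hypothetical mount point for the exported dataset:

```shell
# On the NFS client, after mounting the export (mount point hypothetical).
mount | grep vmware               # confirm the export is actually mounted rw, not ro

sudo touch /mnt/vmware/root-test  # with the default root_squash, root is mapped to
                                  # nobody, so this fails against a 755 directory

touch /mnt/vmware/user-test       # an ordinary user also needs write permission,
                                  # which is why chmod 777 appears to "fix" it
```

If root's write fails while the mount itself shows `rw`, the server is squashing root rather than exporting read-only.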
Some things that I know are different are the zfs-debian, spl-dkms, etc. package versions. I just couldn't find a repository with the older ones (0.6.3-1~wheezy vs 0.6.5.2-2-wheezy).
Help on what I can look for to get this working would be greatly appreciated.
So I've found that running `exportfs -v` gives me a different result on my test VM than on my production server. The thing I'm missing is that /etc/exports is default on both, with the same contents, yet the /var/lib/nfs/etab file on production clearly has entries populated from somewhere... I just don't know from where, or how.
Ultimately, the difference is that the etab file has no_root_squash in it.
So I guess this actual issue can be considered solved, but now I have to figure out the other issue of where /var/lib/nfs/etab is getting that information from, and why our production server is populating it that way.
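For reference, the difference is visible in the options column of etab. Here is a minimal sketch against a fabricated one-line sample (on a real server you would inspect /var/lib/nfs/etab itself, and the exact field layout may differ):

```shell
# Fabricated etab-style line illustrating the option that matters here;
# the real file is /var/lib/nfs/etab on the NFS server.
sample='/tank/vmware 10.1.2.3(rw,insecure,no_root_squash,sync,wdelay,no_subtree_check)'

if printf '%s\n' "$sample" | grep -q 'no_root_squash'; then
  echo 'root writes allowed (no_root_squash present)'
else
  echo 'root squashed to nobody (default root_squash)'
fi
```

Without `no_root_squash`, root on the client is mapped to nobody, which against a 755 directory looks exactly like a read-only mount.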
UPDATE: So, the `exportfs -v` command is getting its information from the `zfs share -a` (or `zfs share [dataset]`) command. Doing a `service nfs-kernel-server restart` wipes that, or at least, running `showmount -e` or `exportfs -v` after restarting that service shows nothing. After running the share command, the etab is repopulated with the sharenfs entries... sigh... continually learning.
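In other words, the sequence that bit me looks like this (the comments describe what I observed on my boxes, not guaranteed verbatim output):

```shell
service nfs-kernel-server restart   # wipes the zfs-generated export entries
exportfs -v                         # now shows nothing for tank/vmware
zfs share -a                        # re-exports every dataset with sharenfs set
exportfs -v                         # the sharenfs-derived entries are back in
                                    # /var/lib/nfs/etab
```

So after any NFS service restart, the shares have to be re-published from the ZFS side.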
Note: `sharenfs` will appear to fail if the ZFS dataset does not have a `mountpoint` set.
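A quick guard for that last gotcha: check that the dataset actually has a usable mountpoint before sharing. This is a sketch using the dataset name from my setup:

```shell
# Sharing misbehaves when mountpoint is "none" or "legacy", so check first.
mp=$(zfs get -H -o value mountpoint tank/vmware)
case "$mp" in
  none|legacy)
    echo "tank/vmware has mountpoint=$mp; set a real one first, e.g.:"
    echo "  zfs set mountpoint=/tank/vmware tank/vmware"
    ;;
  *)
    zfs share tank/vmware
    ;;
esac
```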