I mount a storage bucket to a local directory, /share.
Then I try to make this directory, now populated with the contents of the object store, available to another machine.
The goal is that this other machine doesn't have to run a gcsfuse client itself but can rely on NFS or something similar.
I tried to expose the gcsfuse directory /share via NFS.
The NFS share itself worked. But when I mounted the cloud storage bucket into the NFS-exported directory, the remote machine (the NFS client) never saw the files from the object store.
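For reference, the export on the machine running gcsfuse looked roughly like this (the client network range and export options here are approximations, not copied verbatim from my config):

```
# /etc/exports on the gcsfuse machine (illustrative values)
/share  10.0.0.0/8(rw,sync,no_subtree_check)
```

followed by exportfs -ra to apply it.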
So both parts of the chain work separately from each other:
- I can mount an object store to a local directory.
- I can export a directory via NFS to a second machine.
But I can NOT 'pass through' the contents of the object store to the second machine.
All of this is happening inside a Kubernetes cluster.
To rule NFS out of the equation, I used the sidecar pattern to expose /share to a second container in the same pod.
Same result: the second container never sees the contents of the object store.
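For context, the sidecar setup looked roughly like the sketch below (container names, images, and the shared volume are placeholders for illustration, not my actual manifest): one container runs gcsfuse into /share on a shared emptyDir volume, and the second container mounts the same volume but, as described above, never sees the bucket's contents.

```yaml
# Illustrative pod spec (names and images are made up)
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-share
spec:
  volumes:
    - name: share
      emptyDir: {}
  containers:
    - name: gcsfuse
      image: example/gcsfuse:latest        # assumption: an image with gcsfuse installed
      securityContext:
        privileged: true                   # so the container can access /dev/fuse
      command: ["gcsfuse", "-o", "allow_other", "--foreground", "video-storage-dev", "/share"]
      volumeMounts:
        - name: share
          mountPath: /share
    - name: consumer
      image: busybox
      command: ["sh", "-c", "ls /share && sleep 3600"]   # only ever sees an empty directory
      volumeMounts:
        - name: share
          mountPath: /share
```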
I don't know a lot about the (gcs)fuse filesystem, but people call it a 'user-space' file system. Is that the reason I cannot pass the contents of a gcsfuse mount through to a second machine?
Edit: I tried multiple different options when mounting the object storage bucket; here they are:
gcsfuse -o nonempty -o allow_other --implicit-dirs --gid 0 --uid 0 --file-mode 777 --dir-mode 777 video-storage-dev /share
gcsfuse -o nonempty -o allow_other video-storage-dev /share
Possible solutions for this issue:
You might not have the user_allow_other config option enabled in the FUSE config. To do this, make sure that in the file /etc/fuse.conf you have the following line uncommented: user_allow_other. Then try again.

If the above does not work, you might want to take a look at this unix question, which addresses exporting FUSE via NFS. As a TL;DR, most Linux distros do not allow exporting a FUSE-mounted filesystem via NFSv2 or NFSv3, so you will need to use NFSv4.
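Put together, the relevant server-side config would look something like this (the export options, the fsid value, and the client network range are illustrative assumptions, not taken from the question):

```
# /etc/fuse.conf: allow non-root users to pass -o allow_other when mounting,
# so users/processes other than the mounting one can read the gcsfuse mount
user_allow_other

# /etc/exports: FUSE mounts generally need an explicit fsid=<n> to be
# exportable, and the export should be consumed via NFSv4
/share  10.0.0.0/8(rw,sync,fsid=1,no_subtree_check)
```

After re-exporting (exportfs -ra), the client can force NFSv4 with something like mount -t nfs4 <server>:/share /mnt/share.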