We're implementing a service that needs access to three disks that reside on different servers. I have mounted the disks (two are on Windows file servers via CIFS; one is on another Linux server via NFS) and the scripts now work fine.
My manager is not very pleased with the "permanent mount" and considers this a security risk: if any script on the server is compromised, the mounted disks could be accessed. He suggests that each script that needs the disks mounts them at the start of the script and unmounts them at the end.
I'm not very fond of this idea, but I can't convince him. Some arguments I've tried:
- mount and umount have a cost - doing them each time will make scripts run slower. (alas, I can't pinpoint the exact cost, beyond the crude timing shown after this list)
- What if two scripts run simultaneously, one finishes and umounts, and the other still needs the disks? (granted, semaphores or mutexes could solve that)
- If anyone compromises the server, he can access the scripts that mount the disks, and thus mount them himself as well. (He claims this is an 'extra layer of defense to breach'.)
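For reference, the crude timing I would use to measure that cost - the share path, mount point, and credentials file below are placeholders, not our real setup:

```sh
# Time a single mount/umount cycle (hypothetical share and mount point;
# substitute your own). Run it a few times to see warm vs. cold behaviour.
time sudo mount -t cifs //fileserver1/data /mnt/data -o credentials=/root/creds.txt
time sudo umount /mnt/data
```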
Can someone tell me if I'm right to be wary of this mount/umount each time, or if I'm wrong and it really is the way to go - and more specifically: why?
Sorry, but you are not right.
Regards
Automounting network shares means storing credentials in a file somewhere where `mount` can read them (via the `credentials=filename` mount option in the case of CIFS filesystems). This is not necessarily a problem, but it could be a security concern beyond even an exploit giving someone access to the script. Storing the credentials in a file like this is more secure than keeping them in the script directly, because that way there is no chance they'll appear on command lines when users search the task list with `ps` or similar, but it is vitally important that you make sure that only the script can read the file. Your concern about users who compromise the server is no different in this situation: if they compromise it while it is live, they will potentially have access to the shares anyway.
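As a minimal sketch of that point - the paths, share name, and account here are assumptions, not anything from your setup - a locked-down credentials file and the corresponding mount call might look like:

```sh
# /etc/cifs-script.cred (contents):
#   username=svc_scripts
#   password=secret
#   domain=EXAMPLE

# Make sure only the account that runs the script (assumed here to be root)
# can read the credentials file.
chown root:root /etc/cifs-script.cred
chmod 600 /etc/cifs-script.cred

# Mount using the credentials file so the password never appears in `ps` output.
mount -t cifs //fileserver1/data /mnt/data -o credentials=/etc/cifs-script.cred
```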
The cost of mounting and unmounting shares should not be significant - a couple of seconds (if that) at most, unless you are talking to the shares over a very slow, high-latency connection. Mounting on demand may even be preferable from a security standpoint: you are not keeping the share mounted (potentially in a way which might leave it readable to other users with access) when it is not needed, and the other end will be able to log when your scripts authenticate and disconnect (so if a problem is detected at the other end, it is easier to track what might have had access at the time the problem seems to have occurred).
Keeping the shares mounted despite overlapping script executions could be done by dropping a file with a random (or otherwise sufficiently unlikely to be duplicated by another script invocation) name in some location (say a directory under `/var/run`) and deleting that file after the actual job of the script has ended - then, at the very end, check that there are no other files in that directory before running `umount`.
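A sketch of that idea, assuming a marker directory under /var/run and the same hypothetical share as above (a production version would want proper locking, e.g. flock, around the mount/umount steps to close the small race between checking and unmounting):

```sh
#!/bin/sh
# Each invocation drops a uniquely named marker file, does its work, removes
# its marker, and only unmounts when no markers from other invocations remain.
LOCKDIR=/var/run/share-users
MARKER="$LOCKDIR/$$.$(date +%s)"

mkdir -p "$LOCKDIR"
touch "$MARKER"

# Mount only if not already mounted by another invocation.
mountpoint -q /mnt/data || \
    mount -t cifs //fileserver1/data /mnt/data -o credentials=/etc/cifs-script.cred

# ... the actual work on /mnt/data goes here ...

rm -f "$MARKER"
# Unmount only if no other invocation still holds a marker file.
if [ -z "$(ls -A "$LOCKDIR")" ]; then
    umount /mnt/data
fi
```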
Another method would be to allow multiple mounts to the same share by giving each script its own mount point. That way each script has its own mount of the share and does not interfere with the others, and you could also give each script its own authentication credentials, so you have the option of managing permissions in a more fine-grained manner on the other side (perhaps some of the scripts need read+write access and some only need read-only, for instance).
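For example (hypothetical fstab entries, mount points, and credentials files), something along these lines gives each script its own mount point and its own credentials, one read-write and one read-only:

```sh
# /etc/fstab (noauto so nothing is mounted at boot; each script mounts its own entry):
//fileserver1/data  /mnt/script-a  cifs  noauto,rw,credentials=/etc/script-a.cred  0  0
//fileserver1/data  /mnt/script-b  cifs  noauto,ro,credentials=/etc/script-b.cred  0  0

# Inside script A:
mount /mnt/script-a
# ... work ...
umount /mnt/script-a
```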