We're implementing a service that needs access to three disks that reside on different servers. I have mounted the disks (two are on Windows file servers via CIFS; one is on another Linux server via NFS) and the scripts now work fine.
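For context, the permanent mounts look roughly like this (server names, shares, mount points and credential files are placeholders, not the real ones):

```bash
# Done once at boot (or via /etc/fstab); all names below are placeholders.
mount -t cifs //winserver1/share1 /mnt/share1 -o credentials=/root/.smbcred1
mount -t cifs //winserver2/share2 /mnt/share2 -o credentials=/root/.smbcred2
mount -t nfs  linuxserver:/export/data /mnt/data
```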
My manager is not very pleased with the "permanent mount" and considers it a security risk: if any script on the server is compromised, the mounted disks could be accessed. He suggests that each script that needs the disks mount them at the start of the script and unmount them at the end.
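If I understand his proposal correctly, every script would be wrapped in something like this (share, mount point and credentials file are placeholders; the trap is my addition so the share isn't left mounted if a script dies halfway through):

```bash
#!/bin/bash
set -euo pipefail

MNT=/mnt/share1        # placeholder mount point

# Mount at the start of the script...
mount -t cifs //winserver1/share1 "$MNT" -o credentials=/root/.smbcred1

# ...and make sure it is unmounted again at the end, even if the script fails.
trap 'umount "$MNT"' EXIT

# ... actual work with the files under "$MNT" goes here ...
```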
I'm not very fond of this idea, but I can't convince him. Some arguments I've tried:
- Mounting and unmounting have a cost: doing it on every run will make the scripts slower. (Alas, I can't pinpoint the exact cost.)
- What if two scripts run simultaneously, one finishes and unmounts, and the other still needs the disks? (Granted, semaphores or mutexes could solve that; see the locking sketch after this list.)
- If anyone compromises the server, he can read the scripts that mount the disks, and thus mount them himself as well. (He claims this is an 'extra layer of defense to breach'.)
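To illustrate the second argument: for the per-script mount/unmount pattern to be safe at all, every script would have to share some locking scheme along these lines (a sketch using flock and placeholder paths, not something we actually run):

```bash
#!/bin/bash
set -euo pipefail

MNT=/mnt/share1
LOCK=/var/lock/share1.lock     # placeholder lock file shared by all scripts

(
    # Exclusive lock while mounting, so only one script performs the mount.
    flock -x 9
    mountpoint -q "$MNT" || mount -t cifs //winserver1/share1 "$MNT" -o credentials=/root/.smbcred1

    # Downgrade to a shared lock so other scripts can use the mount concurrently.
    flock -s 9

    # ... actual work with the files under "$MNT" ...
) 9>"$LOCK"

(
    # Unmount only if no other script still holds the shared lock:
    # -n makes flock give up immediately instead of waiting.
    flock -x -n 9 || exit 0
    if mountpoint -q "$MNT"; then
        umount "$MNT"
    fi
) 9>"$LOCK"
```

That is exactly the kind of extra machinery I'd rather not bolt onto every script.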
Can someone tell me whether I'm right to be wary of this mount/unmount-each-time approach, or whether I'm wrong and it really is the way to go, and more specifically: why?