Summary
We want to support this scenario: if we sense any security threat that might expose our core data (all stored on a /home NFS mount from our NFS server), we immediately "hard" power off the NFS server. Any subsequent, unauthorized reboot of the server can neither read nor write /home, because the system will (we hope) require a password/key_challenge/secret from a human (as initially proposed below), most likely via a simple ssh login to the NFS server machine, before the system can mount the /home volume.
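Concretely, the post-reboot recovery flow we imagine looks something like this (a sketch only; the host, path, and mapping names are placeholders of our own invention):

```
# After any (unauthorized) reboot, /home stays locked and unexported
# until a human logs in and supplies the passphrase:
ssh admin@nfs-server
sudo cryptsetup open /srv/home.img home_crypt   # prompts for the LUKS passphrase
sudo mount /dev/mapper/home_crypt /home
sudo exportfs -ra                               # re-publish the NFS export
```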
Is this feasible, and if so, how do we do it? Is this the "normal/default" mode for LUKS-based mounts (or some similar technology, toolset, gizmo)? And/or are there alternative (better?) ways to serve our intent?
Details
My team plans to implement an NFS export of a LUKS-based, "sparse-file-like" loop-device file, similar to or exactly like a /home export (for NFS client mounting by other servers/machines/VMs/containers in our VPN/network).
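For reference, the one-time setup we have in mind would look roughly like this (a sketch; sizes and paths are placeholders):

```
# Create a sparse backing file (blocks allocate only as data is written),
# attach it to a loop device, format it as LUKS, and build a filesystem:
truncate -s 500G /srv/home.img
losetup --find --show /srv/home.img        # prints e.g. /dev/loop0
cryptsetup luksFormat /dev/loop0           # sets the passphrase/key
cryptsetup open /dev/loop0 home_crypt
mkfs.ext4 /dev/mapper/home_crypt
```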
Our current target implementation: require a password (or key-based challenge) before the system can mount the LUKS /home volume (i.e., when mounting the volume on the NFS server itself), presumably after a normal bootup of the NFS-server machine/container.
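From our reading of the crypttab(5)/fstab(5) man pages, this "stay locked until a human intervenes" behavior appears to be expressible with the noauto option; a sketch, reusing the placeholder names above:

```
# /etc/crypttab -- "none" means no key file stored on disk; "noauto"
# means the volume is NOT unlocked at boot, so it stays locked after
# any reboot until someone runs cryptsetup open interactively.
home_crypt  /srv/home.img  none  luks,noauto

# /etc/fstab -- likewise, don't attempt to mount /home at boot.
/dev/mapper/home_crypt  /home  ext4  noauto,nofail  0  2
```

(If that's not the idiomatic pattern for this, corrections are welcome.)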
However, if there's a better way... please do suggest it. We want to serve the goal/intent described in the summary in the best way(s) possible and do not wish to presume a limited scope of potential solutions.
Currently we anticipate the NFS server being an Ubuntu 18.04.4 Docker container running on a same-version Ubuntu Docker host, with the NFS-exported mount points being Docker volumes served by the Docker host (to the NFS Docker container). But this design is far from "set," and we're flexible and open to alternative suggestions.
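To make that design concrete (every name, address, and option below is a placeholder, not a tested config):

```
# Host side: run the NFS container with the Docker volume attached.
# Kernel NFS inside a container generally needs extended privileges.
docker run -d --name nfs-server --privileged \
  -v home_volume:/export/home \
  -p 2049:2049 \
  our-nfs-image

# Container side, /etc/exports (restricted to our VPN subnet):
/export/home  10.0.0.0/24(rw,sync,no_subtree_check,root_squash)
```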
And while we are indeed serving an NFS application (in this system design) employing a loop-device ("filesystem in a file," as we understand it) mapping, we do not yet view this as an NFS- or loop-device-specific problem; i.e., the same scenario might arise in other designs with no NFS or loop-device components.
System constraints
In our current environment, we can NOT employ a SAN-like solution. (We understand LUKS embedded in networked-storage/SAN solutions is popular nowadays; we've received a lot of comments along the lines of "just use a LUKS-based SAN." That will not work for us in this case.) Our current constraint: this NFS-server rig runs entirely within a KVM-based virtual machine, typically deployed by a hosting provider like DigitalOcean, Linode, AWS, etc.
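Within that constraint, the "hard power off" itself seems doable from inside the VM via the kernel's magic SysRq interface (a sketch; where the hosting provider offers a power-off API, we'd likely trigger that as well):

```
# Immediate, ungraceful power-off: no shutdown scripts, no sync, no
# unmount -- the LUKS mapping vanishes along with the machine's RAM.
# (Writing to /proc/sysrq-trigger requires root.)
echo o > /proc/sysrq-trigger
```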
Consequences and team background
Yes, we realize a hard power-off / ungraceful shutdown of the NFS server will break all the NFS clients; we plan to require that they all get rebooted (in the worst-case scenario). And yes, we realize that an ungraceful shutdown can munge data, filesystems, databases, etc.
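For the record, how painful that client breakage is depends partly on the client-side mount options; a sketch of the trade-off (server name and options are illustrative, not a recommendation):

```
# /etc/fstab on an NFS client. The default "hard" option makes client
# I/O hang until the server returns; "soft" lets I/O fail with errors
# instead, which risks application-level data errors but can let a
# client recover without a forced reboot.
nfs-server:/export/home  /home  nfs4  hard,timeo=600,retrans=2  0  0
```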
We have architected our network to handle all these scenarios robustly and (relatively) easily, within reason. Further, we expect that 1) this "kill switch" scenario will rarely happen, and 2) we'll rarely have to reboot the NFS server system (Docker container or otherwise) for any reason; we'll keep it very stable and keep all of our more-dynamically-changing software (OS, apps, etc.) on other machines. The NFS server gets a big, ol' config lockdown and a super-simple setup, with few dependencies.
In any case, we're signing up for all this rig entails, as we are significantly experienced block-and-filesystem storage-server admins (from long before LUKS was around) and have "been around this block," at least from long ago. (Specifically, from the pre- and post-Fibre-Channel-SAN eras of the mid-1990s to the early 2000s, before Linux "was big," with a bit more systems experience in the late 2000s to early 2010s.)
And while we're seasoned, grizzled veterans of the old days, we are also LUKS newbies. Nonetheless, the LUKS concepts all seem straightforward, with parallels in systems we've managed previously. Our initial research shows there are many StackExchange/ServerFault questions asking specifically how to avoid having to manually provide a password/key_challenge/secret of some sort; we effectively (we think?) want the opposite. Before we jump into implementing all this, we're posting this question in hopes we can learn from those who've gone before us, and maybe save a bit of project time.