LXC/LXD used to be shipped as native (deb) packages until Ubuntu 18.04. Since 18.04, LXD is installed as a snap. I wonder if this imposes a runtime overhead, which would negate the idea of having lightweight containers. So my question is whether it is worth it, performance-wise, to run sandboxed environments as LXC containers versus having a fully virtualized environment.
About twenty years ago, it was common to install several network services onto the same server. For example, if you had fifty websites, they were all hosted on the same server, using virtual hosts (Apache2 terminology). There was not much separation between the websites, apart from Unix permissions. You can still do such a thing, and in terms of performance/density it would be unrivaled. But you tend not to do this anymore, because it becomes a management nightmare and also a security risk. How can you easily remove the five websites of a specific customer? How do you separate the five websites of one customer from those of other customers?
The message is that, as computers keep getting more powerful, it is OK to trade some of the density/performance for other features.
Hardware virtualization takes a significant toll on CPU and memory resources, but people still use VMs. This is where system containers come in: you get some of the benefits of a virtual machine, but the isolation is achieved with Linux kernel security features (namespaces, cgroups and so on), at a much lower cost. You can even use system containers inside a VM to further separate your network services.
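For example, getting a working system container with LXD takes a couple of commands. This is a sketch: the image and the container name `web1` are arbitrary, and the block assumes the `lxc` client is installed, printing a message instead if it is not:

```shell
# Only proceed if the lxc client is available (assumes the LXD snap is installed)
if command -v lxc >/dev/null 2>&1; then
    lxc launch ubuntu:22.04 web1   # create and start a system container
    lxc list                       # show containers and their IP addresses
    lxc exec web1 -- hostname      # run a command inside, as in a small VM
else
    echo "lxc not installed"
fi
```

Inside such a container you get a full init system and package manager, so it behaves much like a small VM, while its processes run directly on the host kernel.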
Snap packages have many advantages that are discussed elsewhere. The `snapd` service only does the management of snap packages, deciding when to update a particular package. It takes some memory, which you can measure. You can configure when to update packages, whether to delay an update (by up to two months), and which channel to track (like `4.0/stable`, which gets minimal updates for the next five years).
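As a sketch of both points, the following measures snapd's resident memory on a running system, and lists (in comments, since they change system state) the sort of commands that configure the refresh schedule:

```shell
# Sum the resident memory (RSS, in kB) of any running snapd processes;
# prints 0 if snapd is not running on this machine.
rss_kb=$(ps -C snapd -o rss= | awk '{s+=$1} END {print s+0}')
echo "snapd RSS: ${rss_kb} kB"

# Refresh scheduling is configured via system snap settings, for example:
#   snap refresh --time                                # show the current schedule
#   sudo snap set system refresh.timer=fri,23:00-01:00 # refresh in a weekly window
#   sudo snap set system refresh.hold=2025-01-01T00:00:00Z  # hold updates until a date
```

The memory figure varies between releases, but it gives you a concrete number to weigh against the convenience of automatic updates.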