(I can't find a similar question that has already been answered, but maybe I'm not using the right words as I'm a French student ;))
The closest answer to what I'm looking for is Scalable Web Application Hardware Topology Best Practice, but it doesn't answer everything.
I have built a small private cloud (OpenStack) on which I run KVM VMs, most of the time one VM per domain/website, for my dozens of websites and for some clients too.
I plan to test whether I can evolve to a "hybrid cloud", running some things inside my cloud and some others on EC2, so I want to find out if my way of doing things is best suited for this use.
Those VMs run CoreOS, which then runs the different services as Docker containers (one container for Nginx, one container for pgsql, etc.). If one service starts to run short on something, I either create a bigger VM, copy the old VM onto it and delete the old one, or I create a dedicated VM for the service in need (for example a second VM dedicated to Nginx to handle more connections).
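To illustrate, a typical per-domain VM currently runs something like this (a rough sketch; the names, image tags, and volume paths are just placeholders, not my real setup):

```
# One CoreOS VM per domain; each service runs in its own container.
# Names, image tags, and volume paths are illustrative.
docker network create example_net

docker run -d --name example_pgsql --network example_net \
  -v /srv/example/pgdata:/var/lib/postgresql/data \
  postgres:15

docker run -d --name example_nginx --network example_net \
  -p 80:80 -p 443:443 \
  -v /srv/example/site:/usr/share/nginx/html:ro \
  nginx:stable
```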
However, I'm wondering whether I'm doing things the wrong way.
I chose this "model" because I want strong isolation between the different domains/clients, because back then I wanted to play with Docker and get more used to it, and because I find Docker to be one of the most effective ways to deploy services quickly.
Should I rather use only VMs (so no containers), with one VM per service (rather than one VM per domain)?
Or should I instead use only containers to separate the different services, and run all of them mixed together on my nodes? Like dozens and dozens of containers for different services and different clients? Then how can I effectively isolate the different domains/clients? And how do I scale those services? Just by adding more nodes?
Or should I create a cluster of big VMs or bare-metal machines and use them to build one big CoreOS cluster, which could grow by adding more bare-metal nodes to it? Then the same questions as for containers apply.
Sorry if my question seems too dumb, too newbie, or off-topic, but I'd rather ask it now than when it's too late to take a step back ;)
Any suggestions welcome :)
It's totally fine to use both VMs and containers, especially in this kind of scenario.
VMs provide a secure isolation layer that is both cheap and expensive:

- cheap in labor, because you don't have to work very hard to achieve good security with VMs;
- expensive in resources, because the overhead of VMs can be significant, especially for small services requiring modest amounts of RAM.
(The "virtualization tax" can be considered to be a small constant; for big services, that constant is negligible, but for small services, it becomes a significant fraction of the total footprint.)
Containers, on the other hand, provide you with a cheap and efficient software isolation and deployment method (in the sense that you can deploy multiple containers side by side without worrying about conflicting versions).
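For instance (image tags and names below are purely illustrative), two clients needing different versions of the same software can run side by side on one host without any conflict:

```
# Two different PostgreSQL major versions on the same Docker host,
# each self-contained; names and host ports are illustrative.
docker run -d --name client_a_db -p 5433:5432 postgres:12
docker run -d --name client_b_db -p 5434:5432 postgres:15
```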
Moreover, if you want to implement hybrid cloud (i.e. spillover from a private cloud to a public one), containers are a very easy way to bridge both environments, abstracting their differences.
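As a sketch of that (the registry and image name are placeholders), the exact same commands work whether the Docker host is a KVM guest on your OpenStack cloud or an EC2 instance, which is what makes spillover straightforward:

```
# Same image, same commands, same behaviour on either side of the hybrid cloud.
docker pull registry.example.com/myapp:1.4.2
docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:1.4.2
```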
My personal strategy (assuming that I'm understanding your needs correctly) would be to isolate tenants with VMs and rely on a simple private cloud (OpenStack or other), deploy in containers, and move those containers between your private cloud and a public cloud as necessary. You can of course resize your VMs (on either cloud) to accommodate fluctuations in resource requirements.
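On the OpenStack side, that resize is just a flavor change (the flavor and server names below are examples, and the confirm syntax varies slightly between client versions):

```
# Move a tenant VM to a larger flavor, then confirm once the resize is done.
openstack server resize --flavor m1.large tenant-a-vm
openstack server resize confirm tenant-a-vm
```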