Is there a standard as to how one sets up networking on a Server Host that will have virtual servers within?
In the past I've used Virtualmin and set up the NIC with multiple aliases such as eth0:1 and eth0:2, linking an IP to each alias, but in general we routed all our websites through a single IP.
Now we are switching over to a KVM setup where the host virtualizes the physical hardware for each sub-server rather than virtualizing a whole new machine/server, and each sub-server has its own Virtualmin installation inside. I'm not as familiar with this kind of layering, and I'm wondering what the best practices for networking are.
I was thinking along these three lines, but I would like recommendations on what the "de-facto" standard is for how these are generally set up.
The first method I thought about was where each virtualized server would simply request its respective IP directly from the colocation's router.
I think the pro would be that each server has its own IP, and so long as the IPs are tracked, no issues will occur. (We will manage these servers from top to bottom; they are all for our use, so I'm not worried, as there is no third party that can go change the networking.)
The con is that I potentially have four or more firewalls/servers to maintain, but that's part of normal operations anyway and there's no escape from that, period.
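For what it's worth, this first option is usually done on KVM with a host bridge that the guests attach to; each guest then configures its public IP as if it were plugged straight into the colo's switch. A minimal sketch using iproute2 (the interface names eth0/br0 and all addresses here are placeholders, not from your setup):

```shell
# Create a bridge and enslave the physical NIC to it (run as root).
# The host's own IP moves from eth0 to br0.
ip link add name br0 type bridge
ip link set eth0 master br0
ip addr flush dev eth0                 # remove the address from the NIC itself
ip addr add 203.0.113.10/24 dev br0    # example host IP, replace with yours
ip link set eth0 up
ip link set br0 up
ip route add default via 203.0.113.1   # example colo router address
```

Each guest's NIC is then attached to br0 (in libvirt domain XML: `<interface type='bridge'><source bridge='br0'/></interface>`), and the guest configures its own public IP exactly as a physical server would.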
My second thought was a setup similar to what I have here on my laptop, where I virtualize a router (pfSense) and all the virtual machines on the laptop, including the laptop itself, are connected through that router.
The pro I see here is a centralized firewall.
The cons I see here are that firewall settings are easy to mess up, and the virtual servers already have csf+lfd+iptables. If the physical host doesn't have a backup network connection to the NIC and for some reason the pfSense VM doesn't start up, the box loses connectivity; the server is down and requires a physical trip to the server farm where it's co-located.
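If you do go the virtual-router route, the usual KVM shape is one bridge uplinked to the colo (for the pfSense VM's WAN side) and a second bridge with no physical NIC attached, which the pfSense LAN interface and all the guests share. A hedged sketch of the internal side as a libvirt network (the names vmlan/virbr-lan are assumptions):

```shell
# Define an isolated libvirt network for the "LAN" side. No physical NIC
# is enslaved to it, so guest traffic can only leave through the pfSense VM.
cat > vmlan.xml <<'EOF'
<network>
  <name>vmlan</name>
  <bridge name='virbr-lan'/>
</network>
EOF
virsh net-define vmlan.xml
virsh net-start vmlan
virsh net-autostart vmlan
```

The pfSense VM then gets two interfaces: WAN on the uplinked bridge, LAN on vmlan. One mitigation for the lockout con: keep the host's own management IP on the uplinked bridge, so the host stays reachable even if pfSense fails to boot.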
Lastly, and I'm not quite sure how to do this at the moment, but I guess it's similar to creating eth0:1 and eth0:2: I could have the host take all the IPs available from the colocation and then have the host machine allocate each server its own link tied to its respective IP.
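That third option amounts to a routed setup: the host answers for all the colo IPs on its upstream NIC and forwards each one inward to the guest that owns it. A sketch using proxy ARP and one host route per guest (all interface names and addresses are made up for illustration):

```shell
# Host keeps eth0 facing the colo; guests sit on an internal bridge (virbr-lan).
echo 1 > /proc/sys/net/ipv4/ip_forward            # let the host route packets
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp   # answer ARP for guest IPs

# Route one public IP to the internal bridge (repeat per guest).
ip route add 203.0.113.21/32 dev virbr-lan
```

Each guest then configures its public IP on its own interface and uses the host as its gateway; the colo router only ever sees the host's MAC address.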
This is what comes to mind, but I'm a little too inexperienced/lacking in confidence to simply go with these plans without feedback from people who have actually done this before.
I'd recommend that you get the second NIC on the physical host connected.
Use one of them for "Management" purposes of the host itself, and the other for "VM traffic".
They'll probably uplink to the same place, but this at least gives you access to the host even if something goes sideways with your VMs.
Once you've got that configured, you can easily do option 1, 2, or 3, since the VMs have their own NIC and you can do whatever you feel is best.
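On a KVM host, the split above maps naturally to two bridges: a management bridge carrying the host's own IP, and a VM-traffic bridge that just enslaves the second NIC. A sketch (eth0/eth1 and the address are assumed names, not from your environment):

```shell
# Management bridge: the host's IP lives here, on the first NIC.
ip link add name br-mgmt type bridge
ip link set eth0 master br-mgmt
ip addr add 203.0.113.10/24 dev br-mgmt   # example management IP
ip link set eth0 up
ip link set br-mgmt up

# VM-traffic bridge: second NIC, guests attach here.
ip link add name br-vm type bridge
ip link set eth1 master br-vm
ip link set eth1 up
ip link set br-vm up
```

This way a misconfiguration on br-vm (or inside a guest) can't cost you SSH access to the host itself.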
While the vast majority of my virtualization experience is with VMware, this is the way that we set up our hosts and it works very effectively.
Even more interesting is that, in the VMware world, they've decoupled the IP stack from the hardware, meaning that a "VM traffic" NIC doesn't have an IP and acts like a switch uplink. I suspect that this is possible here as well, but my KVM/Linux-fu isn't strong enough to provide you with any actual insight into how you might do it. Our good friend Google likely can, though.
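For what it's worth, the same decoupling works with a plain Linux bridge: leave the bridge and its enslaved NIC without any IP address and it behaves purely as a layer-2 uplink for the guests, just like the VMware case. A sketch under assumed names (eth1, br-vm):

```shell
# An address-less bridge: pure layer-2 forwarding, like a switch uplink.
ip link add name br-vm type bridge
ip link set eth1 master br-vm   # second NIC; no IP on it or on the bridge
ip link set eth1 up
ip link set br-vm up

# Guests attach to it via their libvirt domain XML:
#   <interface type='bridge'><source bridge='br-vm'/></interface>
```

Since neither eth1 nor br-vm holds an address, the host itself isn't reachable over that path; only the guests' own IPs are.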
I am familiar with the VMware environment, where each virtual server is allocated a virtual NIC and has VMware Tools installed. Virtual servers can be assigned either a static IP or an automatic one, as long as there is a DHCP server on the same network. Apologies that I couldn't be more specific to your question, but I thought I would share my experience with VMware.