I have an ESX host with 32 GB of RAM.
I'm placing three VMs on it, allocating 16 GB of RAM to each one - knowingly overcommitting the total.
Each VM has two network interfaces.
The first connects to a virtual switch called frontend and has an uplink NIC to a physical network.
The second connects to a virtual "switch-in-a-box" - i.e. no physical uplinks for this switch, and we call this network backend.
When the three machines are under load, the ESX vmkernel begins to swap some RAM out to disk - up to 6.5GB.
I cannot find any documentation or explanation for the backend network's performance degrading under the heavier load caused by memory swapping, but that is basically the impact I'm seeing.
Is there any clear reference regarding virtual switch speeds without any uplinks?
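For reference, the layout (and the fact that the backend switch really has no uplinks) can be confirmed over the vSphere API. This is only a rough pyVmomi sketch; the hostname and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder hostname/credentials; the host uses a self-signed certificate.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Grab the (single) host and walk its standard vSwitches.
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

for vswitch in host.config.network.vswitch:
    uplinks = vswitch.pnic or []          # physical NIC keys bound to this switch
    print(f"{vswitch.name}: {len(uplinks)} uplink(s)")
    # 'frontend' reports 1 uplink, 'backend' reports 0 - purely internal.

Disconnect(si)
```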
The internal networking is handled by the vmkernel, so I'd imagine that the harder the vmkernel is working, the more performance degradation you'll see. This article mentions it, but there's not really a whole lot to work with. This document goes a bit more into the specifics of VMware networking, while this blog article gets a little into the nitty-gritty of networking and overhead.
Indeed, all VM network I/O is processed by the CPU. I've had systems we couldn't virtualise because of their network I/O, as opposed to their CPU, memory, or disk load.
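If you want to quantify it rather than just eyeball it, run a crude throughput test between two guests on the backend switch once while the host is idle and once while it's swapping; the difference is the overhead you're paying in the vmkernel. A minimal sketch with plain Python sockets, where the port and backend IP are placeholders:

```python
# receiver.py - run on one VM attached to the backend network
import socket
import time

BUF = 1 << 16
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 5001))
srv.listen(1)
conn, _ = srv.accept()

total, start = 0, time.time()
while True:
    chunk = conn.recv(BUF)
    if not chunk:
        break
    total += len(chunk)

elapsed = time.time() - start
print(f"received {total / 1e6:.0f} MB at {total / elapsed / 1e6:.1f} MB/s")
```

```python
# sender.py - run on a second VM, pointing at the receiver's backend IP
import socket

PAYLOAD = b"\x00" * (1 << 16)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("192.168.100.2", 5001))   # placeholder backend address
for _ in range(20000):                  # roughly 1.3 GB of traffic
    sock.sendall(PAYLOAD)
sock.close()
```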
Well, data flowing through the guests' virtual network cards has to end up somewhere, and it passes through the guests' RAM first (and then on to disk or CPU).
Overcommitting is supposed to be used when you have several VMs with largely identical memory pages that can be shared via transparent page sharing (say, 200 identical XP guests on a single host).
I can understand your host having to swap to disk when you first power everything on (it doesn't yet know which memory can be shared), but sudden swap usage when load goes up suggests that the memory contents of these virtual servers aren't actually the same, so there is little to share.
Buy more RAM perhaps?
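Before you do, it's worth checking whether page sharing is doing anything for these guests at all. The per-VM quick stats expose shared, ballooned, and swapped memory; here is a small pyVmomi sketch (hostname and credentials are placeholders, figures are reported in MB):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view

for vm in vms:
    qs = vm.summary.quickStats            # values are in MB
    print(f"{vm.name}: shared={qs.sharedMemory} "
          f"ballooned={qs.balloonedMemory} swapped={qs.swappedMemory}")

Disconnect(si)
```

If shared stays near zero while swapped climbs under load, the guests simply don't have enough identical pages to share, and more RAM is the honest answer.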