Currently I have one vSwitch with a few networks on it, which talk to other devices and use VLAN tags. I have Internet, Internal, Management (VMkernel), and WWW (LB pool). However, for things like SQL and WWW, the VMs mostly talk to each other entirely within this host. I don't do vMotion, iSCSI, NFS, etc. I have two standalone "redundant" vhosts which don't need to talk at the vhost level (the VMs do that themselves if necessary).
I asked this a few years ago (around ESX 3) on IRC, and at that time I was told no: traffic between VMs on the same host won't leave the host they're on, regardless of vSwitch, IP range, etc. In other words, it should act like a normal switch.
Is that true, or is it still the case with vSphere 5+? In this environment is there any reason to create a separate vSwitch and/or network for communication between VMs on a single host? The only thing I could think of would be to take load off the NIC, but if it's virtual and doesn't hit the NIC, then that's moot.
Network traffic between VMs on the same host will not flow down to layer 1. The virtual Ethernet module (VEM) has an ARP cache built into it; when the traffic from the VM goes down into the hypervisor (ESXi), the VEM takes over and makes the decision whether or not to keep moving the frames down the OSI stack.

As for the VDS or the Cisco Nexus 1000V: frames are always moved along the memory bus from the VEM to the VSM. When requests are made, switching happens at the VSM, and frames only go out onto the physical infrastructure when the target/source is external. Note that there is one exception: the VSM does use the directly connected uplink to move frames between hosts.
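To put that decision in concrete terms, here is a deliberately simplified sketch in Python. It is a toy model of the "local or uplink" choice a virtual switch makes, not VMware's actual code, and the names (`vnic_table`, `uplink`) are made up for the illustration: if the destination MAC belongs to a vNIC registered on this host, the frame is handed over in memory and never touches the physical NIC.

```python
# Toy model of the local-vs-uplink forwarding decision (illustration only,
# not VMware code; all names here are hypothetical).

def forward(frame, vnic_table, uplink):
    """Deliver a frame in host memory if the destination vNIC is registered
    on this host's virtual switch; otherwise send it out the physical uplink."""
    dst = frame["dst_mac"]
    if dst in vnic_table:                 # destination VM lives on this host
        vnic_table[dst].append(frame)     # in-memory copy, never hits layer 1
    else:                                 # external or unknown destination
        uplink.append(frame)              # leaves via the physical NIC

# Example: two local vNICs; the first frame stays internal, the second goes out.
vnic_table = {"00:50:56:aa:bb:01": [], "00:50:56:aa:bb:02": []}
uplink = []
forward({"dst_mac": "00:50:56:aa:bb:02", "payload": b"sql"}, vnic_table, uplink)
forward({"dst_mac": "00:0c:29:ff:00:99", "payload": b"www"}, vnic_table, uplink)
```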
As I see it, the question you are asking is: is there any reason to put the traffic between VMs that only talk to each other on a single host onto their own vSwitch or network?
When I have questions like this, I always start off by asking myself the following: how would I handle this if there were no virtualization involved?
To me, the analog to your environment in a non-virtualization scenario is this: several systems are plugged into the same "mostly dumb switch"; some of them only talk to each other, and some talk to everyone else, including the world. So the question could be: should the systems that only talk to each other get their own separate switch?
Of course, the answer to this question is basically the same as the answer to all good questions: it depends. It depends on a lot of different factors, that to me fall into the following categories:
So, let's take each category on its own:
1. Performance
Unless you are pushing maximum speeds on many of the connected systems, or have a lot of CPU-intensive traffic such as multicast, probably not.
2. Security
Do any of the machines have critically important stuff on them? Are any of the "private" machines vulnerable to a malicious attack from one of the other systems or even from outside the network?
3. Privacy
Should the traffic between the hosts that communicate among themselves be hidden from the other systems on the network? Is there any chance that a system put into promiscuous mode could listen in on the other traffic? (This is where the "mostly dumb switch" that is a stock vSwitch comes into play: it can be configured to allow hosts to enable promiscuous mode.) Also, keep in mind that all of the systems will generate some amount of broadcast traffic that the others will see, even if it is only ARP.
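If you want to see how that policy is actually set on your port groups, here is a rough sketch using pyVmomi. It assumes pyVmomi is installed and reachable API access to the host; the host name and credentials are placeholders, not anything from your environment:

```python
# Rough sketch: list each port group's "allow promiscuous" security setting.
# A value of None means the port group inherits the vSwitch-level policy.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    for pg in host.config.network.portgroup:
        sec = pg.spec.policy.security
        print(pg.spec.name, "allowPromiscuous =",
              sec.allowPromiscuous if sec else None)
Disconnect(si)
```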
4. Reliability
Does adding complexity increase the reliability of the system or reduce it? If a switch fails or is accidentally misconfigured, are you OK with it taking down all of the communication between systems?
5. Ease of troubleshooting
If something goes wrong and you need to troubleshoot, will you be able to easily isolate the different systems?
6. Elegance
Can you explain the setup to someone else easily? Will the big picture be growing or changing over time? If you are like me: will you remember how it is set up in six months, a year, two years, five years? Can you figure out how it is set up at a glance? If you have to move the whole thing, or relocate some of the systems, will it be easy, or hard?
7. Practicality
Does any of the above actually matter? Are you able to realistically implement any of the above? If it is a lab that you are going to tear down in a couple of weeks, does anything other than performance matter? If you don't know how to configure additional vSwitches or can't get an additional physical switch in the "non-virtualization analog", then does it make a difference?
You'll have to weigh the pros and cons of each scenario yourself and implement it as you see fit, or provide WAY more information to us so we can make a recommendation.
But, based on the information I have from your post so far, if I were in your shoes, I'd separate them. I'd do this even if it only gets me clear boundaries and lines of communication between systems, and so I can understand what is and isn't going on. I'd configure a new vSwitch for the "private" systems, not connected to any physical NICs on the host, and rename the port group in it to something like "private network for host-to-host only" (I can't remember if the name field will accept that many characters, and I'm too lazy to fire up vSphere just to check, sorry). This will also make it easier in the future if you end up growing to multiple hosts.
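For what it's worth, here is roughly what that setup looks like when scripted with pyVmomi; the same thing takes a few clicks in the vSphere Client. The host name, credentials, vSwitch name and port group label below are examples only, so adjust them for your environment:

```python
# Rough sketch: create a vSwitch with no physical uplinks and a port group
# on it for the "private" host-internal VMs. Names are examples only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True).view[0]
net = host.configManager.networkSystem

# New vSwitch with no physical NICs attached: traffic on it can never leave the host.
net.AddVirtualSwitch(vswitchName="vSwitchInternal",
                     spec=vim.host.VirtualSwitch.Specification(numPorts=64))

# Port group the private VMs connect their vNICs to.
net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="Private-HostInternal",
    vlanId=0,
    vswitchName="vSwitchInternal",
    policy=vim.host.NetworkPolicy()))

Disconnect(si)
```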