We are in the process of purchasing a virtualized solution from HP (c3000 enclosure, HP Virtual Connect Flex-10 10Gb Ethernet Module, HP BL460c G6, SB40c with PVA).
My concern is the following: assume I have virtual machines such as DNS, web, and email servers distributed across 4 ESX hosts, each configured with a public IP address, and vMotion is enabled along with DRS and HA. Now one or more of these VMs gets vMotioned to a different ESX host.
- How can I guarantee that the network profile also moves with the vMotion? I believe that if the network profile (the public IP and other virtual NIC configuration for that virtual machine) doesn't move as well, those public servers will lose their functionality and appear to be down.
- Does the HP Virtual Connect switch have the ability to do this, or do we need to consider other elements?
Thanks
We have virtually identical vSphere setups all over (actually we use BL490c G6's because they have more memory slots, and we use XP/EVA arrays, but close enough). It's all much simpler than you might think.
Firstly, all you need to do is configure your two Flex-10 NICs as either 1Gbps or 10Gbps trunks (depending on your switch speed, obviously) through to ESX/ESXi (we use ESXi 4 U2 booting from 4GB SDHC cards, by the way), with all your required VLANs exposed up these trunks. That's all you need to do from a Virtual Connect perspective - don't get into anything more complex.
ESX/i will then just see two pNICs at whatever speed. Use these to set up your vSwitch0 (increase its port count to >=120, by the way), let the Management Console use the appropriate VLAN, then set up your VM-facing port groups on their own VLANs. This way the Virtual Connect will perform all the intra-VLAN switching it can, only trunking the inter-VLAN traffic to your switches for routing.
Good luck.
If you do this, vMotion will simply work just fine: the destination host re-announces each vNIC's MAC (via broadcast RARP frames) back to the Virtual Connect and upstream switches, so you'll see few if any dropped frames.
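To make that MAC re-announcement concrete, here's a small Python sketch of what one of those broadcast frames looks like on the wire. This is purely illustrative (the MAC address is a made-up example in the VMware OUI range), but the layout matches a standard RARP request-reverse frame, which is the mechanism ESX uses to get switches to relearn the VM's MAC on the new uplink:

```python
import struct

def build_rarp_announcement(vm_mac):
    """Build a broadcast RARP frame like the one the destination ESX host
    sends after a vMotion, so physical switches relearn the VM's MAC on
    the new uplink. Illustrative sketch only."""
    mac = bytes.fromhex(vm_mac.replace(":", ""))
    # Ethernet header: broadcast destination, VM's MAC as source,
    # EtherType 0x8035 (RARP)
    header = b"\xff" * 6 + mac + struct.pack("!H", 0x8035)
    # RARP payload: hardware type 1 (Ethernet), protocol 0x0800 (IPv4),
    # hw addr len 6, proto addr len 4, opcode 3 (request-reverse),
    # with the VM's MAC as both sender and target hardware address
    payload = (struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)
               + mac + b"\x00" * 4    # sender MAC / sender IP (unknown)
               + mac + b"\x00" * 4)   # target MAC / target IP (unknown)
    return header + payload

frame = build_rarp_announcement("00:50:56:aa:bb:cc")
```

The key point for your question: the frame carries no IP information at all. The VM's public IP never has to "move" anywhere, because it lives inside the guest OS; all the network needs to learn is which physical port the MAC is now behind, and this broadcast does exactly that.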
If you have no intention of using Fibre Channel/FCoE, then go ahead with this purchase knowing it'll work and others are happy with it. If you are an FC type of person, it might be worth waiting, as the Intel-based G7s are about to be released and their CNAs are pretty compelling (the AMD G7s are already out). My only concern would be your storage solution - I'd suggest something better, faster, and more expandable, maybe an EVA4400 or a P2000 G3?
If your switch will support a regular old network failover for a single server (with 2 NICs in an active/passive configuration), then it'll support vMotion just as well.
The only thing you have to ensure is that all the networks/VLANs you make available on one ESX host are also available, with the same names and VLAN IDs, on the other hosts.
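That consistency check is easy to automate once you've gathered each host's port-group-to-VLAN mapping (for example from `esxcfg-vswitch -l` output or the vSphere Client). Here's a hedged Python sketch - the host names, port group names, and VLAN IDs are invented examples, not from any real environment:

```python
def find_portgroup_mismatches(hosts):
    """hosts: dict of host name -> dict of port group name -> VLAN ID.
    Returns port groups that are missing on some host or carry
    different VLAN IDs across hosts - exactly the mismatches that
    break vMotion placement."""
    all_pgs = set()
    for pgs in hosts.values():
        all_pgs.update(pgs)
    problems = []
    for pg in sorted(all_pgs):
        # None means the port group is absent on that host
        vlans = {host: pgs.get(pg) for host, pgs in hosts.items()}
        if len(set(vlans.values())) > 1:
            problems.append((pg, vlans))
    return problems

# Example: esx2 has a typo'd VLAN ID on its vMotion port group
hosts = {
    "esx1": {"VM-Public": 20, "vMotion": 30},
    "esx2": {"VM-Public": 20, "vMotion": 31},
}
for pg, vlans in find_portgroup_mismatches(hosts):
    print(pg, vlans)
```

Running something like this across your cluster before enabling DRS catches the classic silent failure: a port group that exists everywhere by name but is tagged onto the wrong VLAN on one host.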
With 3.5, you'd typically see the VM drop a single ping when flipping between machines during a vMotion. With vSphere, we're not even seeing a single dropped ping - it's practically seamless.