I am looking to purchase a new server to be used for virtualization and am wondering how many physical network cards to get for the machine. Is there a basic rule of thumb saying something like, "1 network card can support the traffic of 4 virtual servers"?
Hi, it depends on the network traffic those VMs will generate, and to a lesser extent on the network card itself (for example, whether the card and its NIC driver support TCP checksum offloading, interrupt coalescing, etc.).
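If you want to see what your current hardware supports, here's a minimal sketch, assuming a Linux host with ethtool installed and an interface named "eth0" (both of which are my assumptions, not anything from the question):

```python
# Query the NIC features mentioned above: offloads and interrupt coalescing.
# Assumes Linux with ethtool installed; "eth0" is a placeholder interface name.
import subprocess

iface = "eth0"  # replace with your interface name

# "-k" lists offload features (TCP checksum offload, TSO, GSO, ...)
subprocess.run(["ethtool", "-k", iface], check=True)

# "-c" shows the interrupt-coalescing parameters
subprocess.run(["ethtool", "-c", iface], check=True)
```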
There's no hard-and-fast rule of thumb; you have to measure the characteristics of the servers you want to virtualize and work out how many NICs you need from that.
With GigE, since Ethernet stacks are pretty good these days, you can safely assume you'll be able to deliver 100 Mbit/sec (or near enough) to 10 servers concurrently - and it's not that long ago (in my head, at any rate) that 100 Mbit/sec was an entire Fast Ethernet network.
If you are talking about 10GigE NICs, then even accounting for some inefficiencies in the existing implementations, you can still happily assume you'll be able to provide 7 or 8 servers with at least a gigabit of capacity concurrently, and maybe even 9 with good hardware.
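To put rough numbers on that, here's a back-of-envelope sketch for the worst case where every VM transmits at once; the function name, the 95% efficiency figure, and the VM counts are my own illustrative assumptions, not measured data:

```python
# Back-of-envelope: how much bandwidth each VM gets if they all
# transmit at once on a shared uplink. Figures are illustrative only.

def per_vm_bandwidth(link_mbps, vm_count, efficiency=0.95):
    """Usable bandwidth per VM, assuming the link only achieves
    `efficiency` of its nominal rate (framing overhead, driver limits)."""
    return link_mbps * efficiency / vm_count

# GigE shared by 10 VMs: roughly 95 Mbit/s each
print(per_vm_bandwidth(1_000, 10))

# 10GigE shared by 8 VMs: well over 1 Gbit/s each
print(per_vm_bandwidth(10_000, 8))
```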
In reality very few servers actually saturate network links - on average, most servers that I deal with only need about 1 Mbit/sec when you spread it out over 24 hours. However, the devil is in the detail: all servers burst up to a significant fraction of whatever bandwidth they have available at some point or points during the day, typically during backups. If you know all the bursts tend to happen at different times, you can consolidate more safely; if not, then you can't. I'm never comfortable oversubscribing GigE links by more than 4:1 unless I've got data to tell me that it's OK, and I'll only ever make that sort of assumption when I'm sure the class of server is not something that is likely to (need to) saturate network links.
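As a rough illustration of that 4:1 comfort level, here's a small sanity-check sketch; the VM names, burst figures, and the idea of summing measured bursts against a single uplink are my own assumptions about how you might apply the rule:

```python
# Oversubscription sanity check: sum the measured burst rates of the VMs
# you plan to put behind one uplink and compare against link capacity.
# VM names and figures are hypothetical.

LINK_MBPS = 1_000          # one GigE uplink
MAX_RATIO = 4.0            # the 4:1 comfort threshold mentioned above

measured_bursts_mbps = {
    "web01": 250,          # e.g. peak seen during nightly backup
    "web02": 180,
    "db01": 600,
    "app01": 120,
}

ratio = sum(measured_bursts_mbps.values()) / LINK_MBPS
print(f"Oversubscription ratio: {ratio:.1f}:1")
if ratio > MAX_RATIO:
    print("Over the comfort threshold - check whether the bursts overlap.")
```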
In terms of general advice, good NICs are pretty cheap (if you have available slots in your motherboard, at any rate), and you should put in more than you need now, because both the demand from your servers and the number of servers will almost certainly go up.
We run approximately 60 virtual machines over 10 blades, and each blade has just one gigabit NIC available for public traffic flowing to the VMs, with absolutely no room for expansion (have you SEEN the price for 10 daughterboards and the backplane?!). We've never had a bandwidth bottleneck that caused any issues.
Our VMs cover a vast array of workloads.
But as everyone has said, it really comes down to how much traffic you expect your VMs to handle.