I'm trying to get two new Dell R620 servers up and running on 10Gbit interfaces. The servers were delivered with two PCI network-cards for 10Gbit.
At the moment, the network cards installed and reported by iDRAC are:
- NIC SLOT 1 - Intel(R) Ethernet 10G 2P X520 Adapter
- NIC SLOT 2 - Intel(R) Ethernet 10G 2P X520 Adapter
- INTEGRATED NIC 1 - Intel(R) 2P X520/2P I350 rNDC
The integrated card has 4 ports; each of the two add-in cards has 2 ports for 10Gbit. That makes 8 available ports in total.
Problem number 1: I can only see 7 of the 8 ports in VMware.
Vmnic4 is missing for some reason.
The two NICs that actually link up are RJ45 ports used for management.
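To figure out whether the missing vmnic is a driver problem or a hardware problem, it might help to compare what ESXi has claimed against what sits on the PCI bus. These are standard commands from the 5.5 host shell (the `82599` string is an assumption based on the X520 being an 82599-based adapter):

```shell
# List the NICs ESXi has actually claimed with a driver.
# A port present in lspci but absent here points at the driver, not the cable.
esxcli network nic list

# List all PCI devices the host sees; the X520 ports should show up
# as Intel 82599-family Ethernet controllers even if no vmnic exists.
lspci | grep -i ethernet
```

If all 8 ports appear in `lspci` but only 7 in `esxcli network nic list`, the card itself is detected and the ixgbe driver is the suspect.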
Problem number 2: The switch that one of the vmnics is connected to shows link at 10Gbit towards the server, but VMware clearly states something else!
From the picture above, only two of the three connected ports have link. Two RJ45 and one optical 10Gbit are currently wired up.
The switch this is connected to is an Alcatel-Lucent 7210 SAS-M. Status on the port that is connected to this server is as follows:
Port ID: 1/2/2
Admin state: Up
Link: Yes
Port state: Up
How and why would I get an OK on the switchport, but a no-go in VMware? I'm currently running ESXi 5.5, downloaded from Dell's software page.
The SFPs have been tested against the ALU switch by putting them in an EqualLogic disk enclosure and checking that both sides link up at 10G. The EQL uses the same kind of SFPs and works fine.
Could this be a network-card firmware issue? Drivers in vmware?
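One way to narrow down the firmware/driver question is to read the versions straight off the host and check the VMkernel log, since the ixgbe driver logs an explicit message when it rejects an SFP module. The vmnic name below is an assumption; substitute the port in question:

```shell
# Driver name, driver version and firmware version for one X520 port
# (vmnic4 is a placeholder for whichever port is affected).
esxcli network nic get -n vmnic4

# Look for SFP-related complaints from the driver; a rejected module
# typically shows up here as an "unsupported SFP+ module" style message.
grep -i sfp /var/log/vmkernel.log
```

Comparing the reported driver/firmware pair against Dell's and VMware's HCL entries for the X520 would then tell you whether an update is even on the table.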
Any pointers would be very helpful.
I've done some testing and research, and it seems the X520 10Gbit network adapters don't support any SFPs other than Intel's own.
Dell sent me Dell-branded SFPs, which then didn't work as they should together with the Intel network adapters.
I guess I'm going shopping!
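Before buying Intel optics, it may be worth checking one thing: some builds of the ixgbe driver expose an `allow_unsupported_sfp` module parameter that relaxes the vendor lock on third-party modules. Whether the Dell-supplied ESXi 5.5 driver build includes it is an assumption you'd have to verify on the host; a sketch:

```shell
# See whether this driver build exposes the parameter at all.
esxcli system module parameters list -m ixgbe

# If it is listed, set it and reboot the host for it to take effect.
# (Assumption: this build honours the parameter; use at your own risk,
# as it is unsupported territory.)
esxcli system module parameters set -m ixgbe -p "allow_unsupported_sfp=1"
reboot
```

If the parameter isn't there, Intel-branded SFPs are the safe route.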