We plan to run our test environment inside ESXi 4 hosts on a couple of Core i7 920s with X58 motherboards.
The hardware is very much off the HCL, which also means we forfeit VMware support. No biggie for a test environment that has already been running two whitebox ESXi 3.5 hosts for almost a year without problems.
We don't need the onboard NICs or VMFS on SATA, although those would be a bonus. We just need to get ESXi 4 installed and load drivers for some dual-port Intel PT adapters. The PTs are on the HCL.
If anyone has this working on Core i7 (or not), I would be very interested.
Are all the components in these systems on the VMware HCL? If so, happiness. If not, all bets are off. It's really that simple. Ideally you'll be looking for the exact motherboard, exact disk controller, etc., rather than just chipsets. I know it's not as much fun, but this is why I honestly think it's best to buy a complete server that's on the list.
EDIT: After LEAT's comment, I had a check, and the HCL lists so few motherboards that at first glance it fooled both LEAT and myself into thinking it didn't list any at all. I'm not sure if that means VMware isn't certifying individual components like that, but they do certify some motherboards... See the Intel entry, which includes this little number.
For me, such a tight list of individual components just confirms my feeling that for production use of any kind (anything other than evaluating VMware itself), it makes sense to buy complete servers.
Or see if the likes of Xen are a bit more amenable to what you're trying to do.
I'd like to document my experience with ESXi 4.0 and Core i7.
Problems in order:
I connected a single 500GB SATA hard drive so that it showed up as SATA device 0. I disabled all unused peripherals in the BIOS including the onboard Realtek NIC and IDE.
Once I had the Intel NIC and the USB keyboard installed, installation went flawlessly. ESXi picks up the important hardware, including hyperthreading. To test, I created a 64-bit Windows Server 2003 machine on the local SATA hard drive and booted it up. I also imported VMs from an NFS share.
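For reference, attaching an NFS share as a datastore can be done from the console with `esxcfg-nas` (the hostname, share path, and datastore label below are placeholders, not from my actual setup):

```shell
# Add an NFS export as a datastore on the ESXi host
# (run from Tech Support Mode or the remote vSphere CLI as vicfg-nas)
esxcfg-nas -a -o nfs-server.example.com -s /exports/vms nfs-datastore

# List configured NFS datastores to confirm it mounted
esxcfg-nas -l
```

Once the datastore shows up, the imported VMs can be registered and powered on through the vSphere Client as usual.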
So far it is working great, except that the Intel NIC shows up at speed "100 Full". This may be an issue with our infrastructure, since my notebook's NIC also connected at 100 Mbit/s. I have built both Core i7s and will move them into our data centre tomorrow. Hopefully the link speed issue will disappear. I'm also interested to see if DRS will work between Nehalem and Northwood.
This was the fifth and most trouble free whitebox ESX installation I have been through. I'll give more feedback once these new servers have done some real work.
On my Dell Studio XPS 435 MT, I needed to turn off half of the CPU cores in the BIOS to get past the PSOD. After that, I was able to turn them all back on, and ESXi 4.1 U1 booted cleanly every time without incident. It took me a long time to figure this out, and I already had the latest BIOS installed, so I thought I'd share it.
Bob W
I have ESX 4.0 "classic" installed on a Core i7-based Supermicro server, and it runs very nicely. I haven't tried the embedded ESXi 4.0, but I don't believe they are that different.