Here is our goal:
Set up new servers to consolidate our entire physical computer network into three physical groups:
- Server 1 - NAS - Openfiler/NexentaStor CE/FreeNAS/(Other Suggestions)
- Server 2 - ESXi server with the following VMs:
- VM1 - AD/DNS/DHCP - Windows SBS 2008
- VM2 - SQL Server 2008 r2 / Database Tier - Windows Server 2008 R2
- VM3 - SharePoint 2010 / Application Server Tier - Windows Server 2008 R2
- VM4 - IIS / Web Front End Tier - Windows Server 2008 R2
- VM5 - Windows MultiPoint Server 2011 - supporting 10 clients, some running 2D CAD
- 10 Clients - Atrust M220 WMS Zero Clients
Question:
For servers 1 & 2, I would like to know what configuration ensures maximum performance?
Configuration means:
- Hot Swap Hard Disk Options
- iSCSI targets
- Regular volumes
- 7.2K (7200 RPM) SAS disks
- 10/15K SAS disks
- VMs on an iSCSI target on the NAS machine
- VMs on DAS
- RAID 0/1/5/10
- No RAID and ZFS file system
- Memory Configurations
- 2/4/8/16/32 GB DDR3 memory
- CPU Configuration
- Xeon / Opteron
- 2/4/8 Cores
- 1/2 Physical CPUs
- OS
- For the NAS server: Openfiler, NexentaStor CE, FreeNAS, or some other free option
It's quite a bit to digest. This is a solution that could work, and I am a proponent of ZFS-based solutions, but I'd initially ask why you wish to have a storage server with only one VM host. Granted, you could expand to multiple hosts over time... But looking at your setup plan, I'd almost recommend a large standalone server with robust local storage. The NAS wouldn't buy you anything with one VM host.
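That said, if you do end up going the ZFS route for a shared NAS, a minimal sketch of the pool layout I'd aim for looks like this. The disk device names and zvol size are placeholders (Solaris-style names here; FreeBSD-based FreeNAS names disks differently), and how the zvol actually gets exported as an iSCSI LUN depends on the platform, so treat it as an illustration of striped mirrors (ZFS's equivalent of RAID 1+0), not a finished build:

```
# Assumed device names (c1t0d0 ...) -- substitute your own
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# A zvol that could later be exported as an iSCSI LUN to the ESXi host
zfs create -V 500G tank/vmstore
zfs set compression=on tank/vmstore

# Verify the layout
zpool status tank
```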
Think something like a current-model HP ProLiant DL380 with 8 or more disks (RAID 1+0, please), running ESXi with plenty of RAM to handle your setups without oversubscribing. Two 6-core CPUs should round it out.
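To put a rough number on "plenty of RAM", here's a back-of-the-envelope allocation. Every per-VM figure below is an assumption on my part rather than a sizing requirement, but it shows why I'd aim for the top of your memory list (32 GB), or beyond if the board allows:

```
# Illustrative allocation only -- adjust to your actual workloads
#  VM1  SBS 2008 (AD/DNS/DHCP)             6 GB
#  VM2  SQL Server 2008 R2                 6 GB
#  VM3  SharePoint 2010                    6 GB
#  VM4  IIS web front end                  2 GB
#  VM5  MultiPoint Server 2011, 10 users   6 GB
#  Hypervisor overhead + headroom         ~6 GB
#  Total                                 ~32 GB
```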
You will need a separate network segment for your iSCSI traffic. Don't run data and disk traffic on the same network, and preferably run dedicated switches for the Storage Area Network. Whatever you do, never mix the two types of traffic on the same physical ports.
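As a concrete illustration of that separation, this is roughly what a dedicated iSCSI vSwitch and VMkernel port look like from the ESXi command line. The NIC name (vmnic2), the addresses, and the esxcli syntax (the ESXi 5.x namespace; older releases use the esxcfg-* tools) are all assumptions for the sketch:

```
# Dedicated vSwitch for storage traffic, on its own physical uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

# Port group and VMkernel interface used only for iSCSI
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
    --ipv4=10.10.10.11 --netmask=255.255.255.0
```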
Two ESXi servers would be a good choice for reliability.
You can install ESXi onto an SD card in the host and boot from that, then serve all of your disks from the box you're configuring as a storage appliance.
Ensure your choice of iSCSI target is on VMware's HCL. I've seen all kinds of problems using OpenFiler to serve iSCSI to ESX; it doesn't work correctly and shouldn't be used for this.
You may have more success with a Windows Server 2008 R2 box running Microsoft's iSCSI Software Target, although I'm not sure whether the free version of that is on the HCL either.
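Whichever target you settle on, the initiator side on ESXi looks roughly like the following. The adapter name (vmhba33), the VMkernel interface, and the target address are placeholders, and again the syntax shown is the ESXi 5.x esxcli namespace, so treat it as a sketch rather than copy-paste commands:

```
# Enable the software iSCSI initiator and bind it to the storage VMkernel port
esxcli iscsi software set --enabled=true
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# Point it at whatever box ends up serving the LUNs, then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.20:3260
esxcli storage core adapter rescan --adapter=vmhba33
```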