Please provide your tips and best practices for virtualizing SQL Server on VMware ESX. I am interested in advanced configurations and settings.
Please provide the reasoning behind your recommendations.
Edit: Just to clarify, I already have over 70 virtual SQL Servers in separate clusters using an iSCSI EqualLogic SAN.
What I am really looking for are those advanced configurations like:
How you configured your disks / RDMs
Do you make use of settings like Mem.ShareScanGHz (http://communities.vmware.com/thread/143828) that are not well documented? (See the sketch below for the kind of tweak I mean.)
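
For illustration, here is a minimal sketch of reading (and optionally changing) a host advanced option such as Mem.ShareScanGHz, assuming pyVmomi (the Python vSphere SDK) is available; the host name, credentials and example value are placeholders, not a recommendation:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder host and credentials -- substitute your own.
    si = SmartConnect(host="esx01.example.com", user="root", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            opt_mgr = host.configManager.advancedOption
            # Read the current value of the advanced option on each host.
            for opt in opt_mgr.QueryOptions("Mem.ShareScanGHz"):
                print(host.name, opt.key, opt.value)
            # To change it (left commented out on purpose -- pick a value that
            # suits your environment and test it first):
            # opt_mgr.UpdateOptions(changedValue=[
            #     vim.option.OptionValue(key="Mem.ShareScanGHz", value=8)])
    finally:
        Disconnect(si)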
Normally I hate linking to vendor whitepapers, but VMware has published one on this that is surprisingly good:
http://www.vmware.com/files/pdf/solutions/sql_server_virtual_bp.pdf
Everything in there is important, and I can vouch for a lot of 'em personally. Fewer virtual CPUs really are better than more: if you're not saturating 100% of a single CPU, then you don't want to add a second virtual CPU. It comes down to the way VMware CPU scheduling works: the hypervisor has to co-schedule a VM's vCPUs onto physical cores, so an extra, idle vCPU can actually make the VM wait longer for CPU time.
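
As a rough illustration of that point, the sketch below (assuming pyVmomi and placeholder vCenter credentials) lists multi-vCPU VMs whose current CPU demand would fit in a single core, i.e. candidates for dropping back to one vCPU. It only looks at an instantaneous usage figure, so treat it as a starting point, not a verdict:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter and credentials -- substitute your own.
    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        CORE_MHZ = 1800  # assumed per-core clock speed; adjust to your hosts
        for vm in view.view:
            if vm.config is None or vm.config.template:
                continue  # skip templates and VMs with no config yet
            ncpu = vm.config.hardware.numCPU
            used = vm.summary.quickStats.overallCpuUsage  # point-in-time demand in MHz
            if ncpu > 1 and used is not None and used < CORE_MHZ:
                print(f"{vm.name}: {ncpu} vCPUs but only ~{used} MHz in use")
    finally:
        Disconnect(si)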
The only thing that isn't covered explicitly in the doc is multipathing. With ESX 3.5 and earlier, you don't get true active/active multipathing to your SAN. If a single virtual machine needs more than one HBA's worth of bandwidth, it needs to stay as a physical machine until vSphere 4 comes out, and even then you have to buy the top-of-the-line edition to get true multipathing.
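
If you want to see what you're currently getting, here is a hedged sketch (again pyVmomi, with placeholder host and credentials; the exact property layout can differ between ESX versions) that reports the path selection policy and path count on each LUN:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder host and credentials -- substitute your own.
    si = SmartConnect(host="esx01.example.com", user="root", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            mpinfo = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
            for lun in mpinfo.lun:
                # Policy strings differ by release, e.g. fixed/mru on older ESX
                # versus VMW_PSP_RR and friends on vSphere.
                policy = getattr(lun.policy, "policy", None)
                print(f"{host.name} {lun.id}: policy={policy}, paths={len(lun.path)}")
    finally:
        Disconnect(si)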
We have a 3-node VMware cluster, with each node being an HP 365 (2 CPUs x 4 cores, 1.8 GHz), 32 GB of RAM per server, and Fibre Channel connections to gobs of disk.
We support over 20 SQL VMs (SQL Server 2000, 2005 and 2008; live, dev and test) plus other generic Windows 2003 servers (IIS 6, application, file and print, etc.), and we have at least double that still to migrate onto it. I don't expect to have any performance problems.
The three nodes give us "over the top" redundancy and resiliency: if one node goes down (or is taken down for maintenance), the other two can still provide the same performance. Three is a luxury (or "best practice", as they like to call it); two would be sufficient.
The actual VMs are quite light, usually 1 vCPU and 1 GB of RAM, and they host 20 or 30 databases each. One or two of the busier VMs are double that, but that's usually because of badly written applications using SQL Server as a toy (not using stored procedures etc., but that's been discussed to death on the "other" site! ;)
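
If you want a quick feel for how much each consolidated instance is carrying, a minimal sketch along these lines works (assuming pyodbc, Windows authentication and SQL Server 2005 or later for the system views; the instance names are hypothetical):

    import pyodbc

    # Hypothetical instance names -- replace with your own.
    INSTANCES = [r"SQLVM01", r"SQLVM02\INST1"]

    for instance in INSTANCES:
        conn = pyodbc.connect(
            f"DRIVER={{SQL Server}};SERVER={instance};Trusted_Connection=yes")
        cur = conn.cursor()
        # Count the databases hosted on the instance.
        db_count = cur.execute("SELECT COUNT(*) FROM sys.databases").fetchval()
        # size is reported in 8 KB pages, so size * 8 / 1024 gives MB.
        total_mb = cur.execute(
            "SELECT SUM(CAST(size AS bigint)) * 8 / 1024 FROM sys.master_files"
        ).fetchval()
        print(f"{instance}: {db_count} databases, ~{total_mb} MB allocated")
        conn.close()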
Virtualization gives us the flexibility to create more, smaller servers that group together systems with similar usage patterns (keep the big data movers away from the lightweight website systems) and/or SLA requirements (keep all the important stuff together under common, standardised operational practices).
Given that we have gobs of RAM, CPU and fast disk, we've not had to micro-tune the system. The disk is multiple RAID 10 arrays (roughly split between OS, transaction logs and databases) with some big RAID 5 arrays for backups and dumps.
Lots of redundant 1 Gb network connections into redundant edge switches.
This topic was covered in depth in a recent VMware Communities podcast. Please see here: http://blogs.vmware.com/vmtn/2009/03/virtualizing-sql-server-podcast-white-paper.html, and listen to episode #42 here: http://www.talkshoe.com/talkshoe/web/talkCast.jsp?masterId=19367