We just got approval to purchase a new database server. My manager is worried that, after we deploy the new server, some users might not perceive the improvement in performance. I've been tasked with coming up with a way to measure the improvement. But first, some details on the upgrade:
The old server has dual Intel Xeon 3.40GHz single-core processors and 4 GB of RAM. It's running Windows 2000 Server Standard 32-bit and SQL Server 2000.
The new server will have dual Intel Xeon 2.4GHz quad-core processors and 18 GB of RAM. It will run Windows Server 2008 Hyper-V as a bare-metal hypervisor. Initially, we'll virtualize Windows Server 2003 Premium (until we can upgrade our ERP software to the latest version) and keep SQL Server 2000 (there isn't enough money in the budget to upgrade SQL Server yet).
Having never done this sort of thing before, my idea was to create a SQL query that hits some of the really large database tables, such that it takes a minute or two to run, and measure how long it takes to execute by capturing two date/time values, one just before the big query and one just after. I would then store both values in a table and schedule the job to run each night, when activity is minimal and roughly equivalent from day to day. This should factor out the load variability we see during the day.
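A minimal sketch of what I have in mind (the table and query names are placeholders, not our real schema):

```sql
-- One-time setup: a log table for the nightly timing runs.
CREATE TABLE dbo.BenchmarkLog (
    run_id      int IDENTITY(1,1) PRIMARY KEY,
    started_at  datetime NOT NULL,
    finished_at datetime NOT NULL,
    elapsed_ms  int      NOT NULL
)
GO

-- Nightly job step: time one representative "big" query.
DECLARE @start datetime, @finish datetime
SET @start = GETDATE()

-- Placeholder: substitute the long-running query against the large tables.
SELECT COUNT(*) FROM dbo.SomeVeryLargeTable

SET @finish = GETDATE()

INSERT INTO dbo.BenchmarkLog (started_at, finished_at, elapsed_ms)
VALUES (@start, @finish, DATEDIFF(ms, @start, @finish))
```

One thing I noticed while writing this: SQL Server 2000's datetime type is only accurate to roughly 3 ms, but that should be plenty of precision for a query that runs for minutes.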
Comparing the elapsed times recorded on the current server with those recorded after the upgrade should give an idea of the time savings, so that we can say the upgrade resulted in an improvement of x%. Does my plan seem solid, or did I take a wrong turn at Albuquerque? Is there anything else I should be measuring?
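To be explicit, the figure I plan to report is:

```
improvement % = (old elapsed time - new elapsed time) / old elapsed time * 100
```

so, for example, a query that drops from 120 seconds to 90 seconds would be a 25% improvement.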
Here is a set of videos on how to use PerfMon and create a baseline for SQL Server, which will help you track and compare different counters. You can also benchmark your disks for random and sequential I/O using IOmeter, and check out SQLIO to measure your new storage's I/O performance. Hope this gets you started.
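Since you're stuck on SQL Server 2000, which predates the DMVs, you can also snapshot counters from inside the engine via the master..sysperfinfo view. A rough sketch; the counters shown are just common examples, so swap in whatever you decide to baseline:

```sql
-- Snapshot a few SQL Server 2000 performance counters from sysperfinfo.
-- Schedule this alongside the nightly timing job so the same counters
-- can be compared before and after the upgrade.
SELECT  GETDATE()           AS sample_time,
        RTRIM(object_name)  AS object_name,
        RTRIM(counter_name) AS counter_name,
        cntr_value
FROM    master.dbo.sysperfinfo
WHERE   object_name LIKE 'SQLServer:Buffer Manager%'
  AND   counter_name IN ('Buffer cache hit ratio',
                         'Buffer cache hit ratio base',
                         'Page life expectancy')
```

Keep in mind that 'Buffer cache hit ratio' is a ratio counter: divide its value by the 'Buffer cache hit ratio base' value from the same snapshot to get the actual percentage.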
EDIT: I also found this article on storage I/O benchmarking.