We are setting up a new piece of hardware to virtualize several of our servers on. Choices are RAID 5, RAID 6, and RAID 0+1.
We want to benchmark all three before we go live with the machine, but I'm not sure how to test the speed.
- Since we will be using it to host VMs, what will the actual disk traffic look like?
- What can I use to see if RAID 6 is too slow?
- Short of setting up the system with all the VMs on it, running it that way, and then redoing all the work, I'm not sure how to test it. It then becomes more of a subjective test than an objective one.
- I'm worried that RAID 6 will have too much overhead, that RAID 5 will be too fragile with 3 TB drives, and I've never worked with 0+1 at all.
So in short I'd like to set up the base machine (which will be running Linux) and then test the underlying software RAID for speed. What kind of tool exists to simulate this kind of load? Barring a specific tool, is there a generic filesystem testing tool that will simulate different loads?
On the subject of RAID levels: yes, RAID 5 isn't robust enough, and RAID 6 without a hardware controller is almost certain to perform worse than RAID 10. So you can skip most of your benchmarking and just go with RAID 10. I'd also seriously rethink the use of software RAID; whilst I'm a huge fan of Linux MD myself, for a large random-write load (such as you'll often see on a busy VM server) a battery-backed write cache can be a godsend for performance.
There are still some tunable parameters with RAID 10 (stripe size, primarily), so you'll still want to benchmark things. However, without knowing the VM workload, it's impossible to predict the best parameters. In general (very, very general) I'd expect smallish block sizes with reasonable concurrency. Choosing a tool is hard, because most of them are designed for testing on top of a filesystem, and you'll (hopefully) be using raw block devices over LVM. Quite honestly, just running the VMs in a staging mode and measuring the performance isn't the world's worst benchmark...
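If you want something repeatable before the real VMs exist, you can approximate that "smallish blocks, reasonable concurrency" guess with a fio job file run directly against the raw array. Everything below (the `/dev/md0` path, the block size, the read/write mix, the queue depth) is an assumed starting point to tune, not a measured VM profile:

```ini
; Sketch of a fio job approximating a VM-host workload.
; All numbers are assumptions -- adjust to your array and workload.
[global]
ioengine=libaio
direct=1            ; bypass the page cache so you measure the array
runtime=300
time_based
group_reporting

[vm-host-guess]
filename=/dev/md0   ; WARNING: writing here destroys data on the device
rw=randrw           ; mixed random reads and writes
rwmixread=70        ; assumed 70/30 read/write split
bs=8k               ; "smallish" block size
iodepth=32          ; reasonable concurrency per job
numjobs=4           ; several guests hitting the array at once
```

Run it with `fio jobfile.fio` against each candidate RAID layout and compare the reported IOPS and latency percentiles, since latency under load usually matters more to guests than raw throughput.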
Give fio a try. It has several sample configurations included. The "iometer-file-access-server" one is particularly brutal on storage. I've worked with it a bit. It's much easier for me to interpret and use than bonnie++.
If you're running Linux, Iozone is a filesystem benchmark tool that is very interesting, although keep in mind that it works on top of the filesystem.
http://www.iozone.org/
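For reference, an invocation might look like the sketch below. The sizes are assumptions: pick a file size larger than the machine's RAM so the page cache doesn't dominate the numbers, and a record size near your expected VM I/O size.

```shell
# Hypothetical sizes -- adjust to your hardware.
# -i selects tests (0 = write/rewrite, 2 = random read/write)
# -s sets the file size, -r the record size, -f the test file path
iozone -i 0 -i 2 -s 64g -r 8k -f /mnt/array/iozone.tmp
```

Because Iozone tests through the filesystem, the results include filesystem and cache effects, which is fine as long as you test each RAID layout with the same filesystem and mount options.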