I have an HP ProLiant DL160 Gen9 server with an HP H240 Host Bus Adapter. Six 1 TB Samsung SSDs are configured in a RAID 5 directly on the controller, using the machine's internal storage. After installing a VM on it with VMware ESXi 6.0, I ran a benchmark with the following result:
After some research I came to the following conclusion:
A controller without a cache will struggle to calculate the RAID 5 parity, and I pay for that in write performance. Still, 630 MB/s read and 40 MB/s write seem rather poor. In any case, I found others with the same problem.
Since I can't change the controller today, is there a way to test whether the controller is the bottleneck? Or do I really have to try a better one and compare the results? What are my options? I'm fairly new to server hardware and installation, since at my previous company this was handled by an outsourced hosting provider.
EDIT UPDATE
Here is the performance with the write cache enabled. The read speed went up even before I made that change; I'm not sure what happened, I had only played around with the BIOS settings of the Windows machine. Today I'll update the firmware to the latest version and see what that gives us.
Here is a screenshot of a benchmark with the new P440 controller and its 4 GB cache activated (enabling HP SSD Smart Path did not bring any performance improvement, by the way). With a cache we get much better results. Of course I tested with files > 4 GB, to make sure I was testing the disks and not the cache.
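If anyone wants to reproduce that "bigger than the cache" idea without a dedicated benchmark tool, here is a minimal sketch in Python (the file path and sizes are placeholders; it simply writes several times more data than the 4 GB cache can hold and times it):

```python
import os
import time

# Hypothetical test file on the RAID volume; the total size is chosen to be
# several times larger than the controller's 4 GB cache, so the measurement
# reflects the disks rather than the cache.
TEST_FILE = "write_test.bin"            # placeholder path on the volume under test
BLOCK_SIZE = 1024 * 1024                # 1 MiB per write
TOTAL_SIZE = 16 * 1024 ** 3             # 16 GiB total

buf = os.urandom(BLOCK_SIZE)            # incompressible data
start = time.time()
with open(TEST_FILE, "wb", buffering=0) as f:
    written = 0
    while written < TOTAL_SIZE:
        f.write(buf)
        written += BLOCK_SIZE
    os.fsync(f.fileno())                # flush the OS page cache before stopping the clock
elapsed = time.time() - start
print(f"sustained sequential write: {TOTAL_SIZE / elapsed / 1024 ** 2:.0f} MiB/s")
os.remove(TEST_FILE)
```

A dedicated tool such as fio or CrystalDiskMark gives more detailed numbers, but the principle is the same: once the data volume exceeds the controller cache, you are measuring the disks.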
The HP H240 is not a RAID controller. It's a host bus adapter that's intended to provide direct disk access to a host operating system. This applies to people using software RAID, ZFS, Hadoop, Windows Storage Spaces, etc. It has some limited RAID capability, but as you can see, it's not sufficient.
For VMware purposes, you want an HP Smart Array RAID controller like the HP Smart Array P440.
As you already discovered, the low write speed had nothing to do with slow parity calculation (modern CPUs are very fast at that); it was due to the disks' private DRAM cache being disabled, and more precisely to how badly flash memory needs that cache to deliver good sustained write performance.
Bottom line: while enabling the disks' private cache can greatly increase your I/O speed, please be sure (by means of testing) that a power outage will not cause any unexpected data loss.
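The test can be as simple as a writer that appends fsync'ed sequence numbers until you cut the power, plus a checker you run after reboot (the same idea as tools like diskchecker.pl). A minimal sketch, assuming a throwaway file on the volume under test:

```python
import os
import struct
import sys

# "write" appends fsync'ed sequence numbers until you pull the plug; after the
# reboot, "verify" checks that every record the storage acknowledged is really
# there. Path is a placeholder; delete the file between test runs.
PATH = "powerloss_test.dat"
REC = struct.Struct("<Q")               # one 8-byte little-endian counter per record

def write():
    with open(PATH, "ab", buffering=0) as f:
        n = 0
        while True:
            f.write(REC.pack(n))
            os.fsync(f.fileno())        # count only writes the storage acknowledged
            if n % 1000 == 0:
                print(f"acknowledged up to record {n}", flush=True)
            n += 1

def verify():
    with open(PATH, "rb") as f:
        data = f.read()
    count = len(data) // REC.size
    for i in range(count):
        (val,) = REC.unpack_from(data, i * REC.size)
        if val != i:
            print(f"mismatch at record {i}: found {val}")
            return
    print(f"{count} records intact; compare with the last acknowledged counter")

if __name__ == "__main__":
    write() if sys.argv[1:] == ["write"] else verify()
```

If fewer records survive the power cut than were acknowledged, the cache is losing writes it already confirmed, and you should not enable it without battery/flash backup.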
RAID 5 always has comparatively poor write performance; I suggest using RAID 10. Apart from that, did you install the HP drivers for VMware ESXi from the HP website? Also consider doing a firmware update. If the RAID is still building/initializing the array, performance is temporarily degraded; this can take up to a couple of days if it's a full initialization.
http://h20565.www2.hpe.com/hpsc/swd/public/readIndex?sp4ts.oid=7553524&swLangOid=18&swEnvOid=4183
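To check the build/initialization state from the command line instead of the GUI, you can ask the controller directly. A small sketch using HPE's Smart Storage Administrator CLI (the tool is called `ssacli` in current bundles and `hpssacli` in older ones; the exact output fields depend on the version, so treat the filter below as a starting point):

```python
import subprocess

# Assumes HPE's Smart Storage Administrator CLI is installed and on the PATH.
SSACLI = "ssacli"

out = subprocess.run(
    [SSACLI, "ctrl", "all", "show", "config", "detail"],
    capture_output=True, text=True, check=True,
).stdout

# Print only the lines that indicate array/logical drive state and cache settings.
for line in out.splitlines():
    if any(key in line for key in ("Status", "Parity Initialization", "Cache")):
        print(line.strip())
```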
Does the H240 have a real ROC (RAID-on-chip) processor? You don't need an FBWC for RAID 5 with SSDs, because the cache RAM is slower than the SSD RAID itself. With my 8x 256 GB 850 Pro drives I get 2.9 Gb/s on an old LSI 9260 with the write cache disabled; with the write cache enabled I get only 900 Mb/s.