I previously asked this Server Fault question: Does anyone have experience with LeftHand's VSA SAN?
The general consensus seems to be that it does not perform well enough for a production SQL Server, even at a light load.
So the new question is: how does LeftHand's SAN perform on the dedicated HP or Dell hardware boxes?
We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL Server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008).
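To put rough numbers on that layout, here is a back-of-the-envelope sketch; the node capacity and per-VM allocations are made-up placeholders for illustration, not LeftHand specs or our actual sizing:

```python
# Rough capacity sketch for the proposed layout.
# RAW_PER_NODE_GB and the per-VM allocations below are assumptions, purely illustrative.

RAW_PER_NODE_GB = 3600          # e.g. a node with 12 x 300GB drives (hypothetical)
NODES = 2
REPLICATION_COPIES = 2          # 2-way replication: every block stored twice

raw_total = RAW_PER_NODE_GB * NODES
usable = raw_total / REPLICATION_COPIES

planned_vms_gb = {
    "AD-01": 60, "AD-02": 60,   # Active Directory servers
    "SQL-01": 500,              # MS SQL Server
    "FILE-01": 1000,            # file server
    "UTIL-01": 100,             # general purpose / virus scan
}

allocated = sum(planned_vms_gb.values())
print(f"Usable space with 2-way replication: {usable:.0f} GB")
print(f"Planned allocation: {allocated} GB ({allocated / usable:.0%} of usable)")
```

The main point of the sketch is simply that 2-way replication halves the raw capacity you buy.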
The reason I am looking at LeftHand is the complete software package. I plan to have a DR site, and I like that the SAN can perform async replication to the offsite location without having to go back to the vendor for more licenses.
I also like the redundancy built into the Network RAID architecture.
I have looked at other SANs and found different faults with them.
For example, with Dell's EqualLogic I found that although the individual box is very redundant in hardware, the data is not redundant once it is spanned across multiple boxes: if a node goes down, you have lost the only copy of the data sitting on that hardware. (One thing is certain: all hardware fails. The only question is when.)
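To illustrate the distinction I am drawing, here is a toy model only, not either vendor's actual block-placement algorithm:

```python
# Toy model: spanning blocks across nodes with a single copy vs. keeping two
# copies (the "network RAID" idea). Not a real placement algorithm.

def place_blocks(num_blocks, nodes, copies):
    """Round-robin each block (and its extra copies) across the nodes."""
    placement = {n: set() for n in nodes}
    for b in range(num_blocks):
        for c in range(copies):
            node = nodes[(b + c) % len(nodes)]
            placement[node].add(b)
    return placement

def survives_node_loss(placement, failed_node, num_blocks):
    """True if every block still exists somewhere after one node fails."""
    surviving = set()
    for node, blocks in placement.items():
        if node != failed_node:
            surviving |= blocks
    return len(surviving) == num_blocks

nodes = ["node-A", "node-B"]
blocks = 1000

spanned  = place_blocks(blocks, nodes, copies=1)   # striped, single copy
mirrored = place_blocks(blocks, nodes, copies=2)   # 2-way replication

print("single copy, node-A fails:", survives_node_loss(spanned,  "node-A", blocks))  # False
print("two copies,  node-A fails:", survives_node_loss(mirrored, "node-A", blocks))  # True
```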
I have used a XioTech SAN as well. Well worth the money, BTW, but I think it is overkill for the size of the office I am targeting. The cost to get the hardware redundancy in the XioTech puts it a little out of reach for the budget I am working with.
Thank you,
Keith
I got several of 'em on HP hardware about a year ago for performance benchmark testing, and some of the same things I said in the LeftHand VSA SAN question apply here too.
At the time, LeftHand's iSCSI multipathing wasn't truly active/active. Say you have:
- a SQL Server with two gigabit NICs dedicated to iSCSI, and
- a SAN node with two gigabit NICs.
When you run a query on the SQL Server that accesses the data files, you will only get 1Gb of read throughput despite the fact that you're using four network cards. The LeftHand devices (and indeed, all of the iSCSI SAN gear I've seen) will only send data from the SAN to one specific MAC address on the SQL Server.
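Rough math on that single-path ceiling; the 80% efficiency figure is just an assumed allowance for iSCSI/TCP overhead, not a measured LeftHand result:

```python
# Back-of-the-envelope numbers for the single-path read limit described above.

GBIT = 1_000_000_000          # bits per second on one gigabit link
EFFICIENCY = 0.80             # assumed protocol/TCP overhead factor

def read_throughput_mb_s(active_paths, link_gbps=1):
    """Approximate usable MB/s when reads flow over `active_paths` links."""
    bits = active_paths * link_gbps * GBIT * EFFICIENCY
    return bits / 8 / 1_000_000

print("4 NICs, reads pinned to one path:", read_throughput_mb_s(1), "MB/s")  # ~100
print("4 NICs, true active/active MPIO :", read_throughput_mb_s(4), "MB/s")  # ~400
```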
You can work around this by:
If you need more than 1Gb of throughput, then until you hear from someone who's actually pulled it off and can show it to you (not just say "oh yeah, it works great on my l337 b0xx0r"), don't invest your money.
Fibre Channel isn't necessarily different: it's just that you can easily get 4Gb fiber-optic connections instead of 1Gb Ethernet. You still have the same pathing challenges. Coincidentally, I'm doing a presentation on this next week at IndyPASS - if you're in the area, swing by.
Maybe we can get some more input from Brent here, but I don't think async SAN replication will work with SQL Server data files; you'll have to do something like database mirroring or log shipping. Or at least, that's what I've always thought, and I haven't yet had the privilege to test it myself.
Can anyone confirm or deny?
Well, if the concern is that replication falls behind when done at the SAN block level, you can't just assume that log shipping won't also fall behind. Both replication technologies run at some interval, right? Is the concern then lost data versus corrupt data? I wouldn't think the SAN replica would be presented as usable until the entire differential update had been received. So is the data corrupt, or simply lost? If SQL is virtualized, you're not sending just the database or just the transaction logs; I'm just not sure how the data would end up corrupt.
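Here's a sketch of the "lost vs. corrupt" framing, under the assumption stated above that the replica only exposes a replicated image once the whole consistent update has arrived; the cycle and transfer times are hypothetical:

```python
# Worst-case data loss (RPO) for interval-based replication schemes.
# The interval/transfer numbers are made up for illustration.

def worst_case_data_loss_minutes(interval_min, transfer_min):
    """If the source dies just before the next cycle completes, you lose
    everything since the last *completed* cycle."""
    return interval_min + transfer_min

scenarios = {
    "SAN async replication (15 min cycle, 5 min to ship delta)": (15, 5),
    "Log shipping (15 min log backup, 5 min copy/restore)":      (15, 5),
    "Database mirroring, async (near-continuous send)":          (0, 1),
}

for name, (interval, transfer) in scenarios.items():
    loss = worst_case_data_loss_minutes(interval, transfer)
    print(f"{name}: up to ~{loss} min of data lost")
```

In both interval-based cases the failure mode is lost data rather than a corrupt database, provided the replica only ever sees a complete, write-order-consistent image (SAN side) or a fully restored log backup (log shipping side), which matches the reasoning above.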