I have a question on planning. I recently got a request to create two 625 GB LUNs (CLARiiON CX-240), FC-connected, for ESX Server 4.0 (four new hosts). I don't know what application they will be running. My only option was to carve the LUNs from RAID 5 groups, all FC disks, and I only looked at capacity, not at the number of disks in each RAID group. These existing RAID groups already serve many applications: DB2, SAP, and others. Would that make any difference, since I'm taking LUNs from 20 RGs?
Here's what I did: I created two metas. To start, the RAID groups I had were, as I said, all RAID 5. So I took ten 63 GB LUNs for one meta and another ten 63 GB LUNs for the second, created a meta from each, allocated the storage, and assigned all four hosts to that storage group.
So I've presented 630 + 630 GB, a total of 1260 GB, against a request for 1250 GB. My question is: is it best practice to convert the GBs to blocks before allocating, so that the exact requirement is met?
Another thing I want to clarify: I deliberately kept the LUNs small so the I/O impact would be limited. Could I have provisioned this request better (specifically, the LUN size used when creating the metas)?
Please advise me so that I can make sure I allocate the best way next time...
Thank you for reading!
I want to serve my customers well and I'm a novice, so please don't ignore my post. Help me understand the best approach, if there is one, so I can work more intelligently next time.
I had trouble following some of what you wrote, but I'll start with the questions.
First, your decision to present more, smaller LUNs (volumes) was not a bad one. The general rule of storage is to go wide before going deep: each volume gets its own command queue, which can become a limit on IO/s. Spreading the work across more LUNs also spreads it more evenly across SPs and ports, which reduces the likelihood that those become a bottleneck. That said, smaller LUNs don't mean less work for the system. The servers will issue the same number of reads and writes no matter how you lay out their storage.
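To put a rough number on the queueing point, here's a minimal sketch. It assumes a hypothetical per-LUN queue depth of 32 (the real figure depends on your HBA and host settings), and simply shows how the total outstanding I/O a host can keep in flight scales with the number of LUNs backing the same capacity.

```python
# Illustration only, not vendor figures: why more, smaller LUNs can
# sustain more outstanding I/Os. The per-LUN queue depth of 32 is an
# assumed value; check your HBA and host configuration.
PER_LUN_QUEUE_DEPTH = 32

def aggregate_queue_depth(num_luns, per_lun_depth=PER_LUN_QUEUE_DEPTH):
    """Total outstanding I/Os the host can keep in flight across LUNs."""
    return num_luns * per_lun_depth

# One 630 GB LUN vs ten 63 GB LUNs backing the same capacity:
print(aggregate_queue_depth(1))   # 32 outstanding I/Os
print(aggregate_queue_depth(10))  # 320 outstanding I/Os
```

The capacity is identical either way; only the number of command queues changes, which is exactly the "wide before deep" argument.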
As for reusing the same RAID groups as your other applications: if that's your only choice, don't worry. They're another shared resource, and the direction most admins are going now is to stripe all workloads across all drives. Even when there's contention, each workload's performance won't be worse than if it sat on only a subset of the drives, and when there's no contention, every workload runs much faster.
On EMC, for highly transactional databases (which tend to be as much as 70% random reads), I'd recommend RAID-10 over RAID-5. It halves the usable space but improves disk access speed. I'm no vendor crusader, but even EMC themselves recommend RAID-10 for disk-intensive applications.
As for what to do with the extra 10 GB, that's a question for whoever pays for the storage.
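On the blocks question itself, the conversion is simple arithmetic. A minimal sketch, assuming the array counts capacity in binary GB (GiB) and 512-byte blocks; check your array's convention first, since some tools use decimal GB:

```python
# Size a request in 512-byte blocks so the allocation matches exactly
# instead of overshooting by rounding up per LUN.
BLOCK_SIZE = 512  # bytes per block (assumed; verify on your array)

def gb_to_blocks(gb):
    """Convert binary GB (GiB) to 512-byte blocks."""
    return gb * 1024**3 // BLOCK_SIZE

requested = gb_to_blocks(1250)   # blocks for exactly 1250 GB
per_lun = requested // 20        # split evenly across 20 member LUNs
print(requested, per_lun)
```

Sizing the member LUNs in blocks this way would have delivered exactly 1250 GB instead of 1260 GB, though whether the 10 GB is worth the extra precision is, again, a question for whoever pays.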
Perhaps your biggest oversight was not gathering requirements for the storage before allocating it. You have a set of what appear to be low-latency storage applications, and this new demand could degrade the service level they previously enjoyed. If the requesters genuinely didn't know what their application was, I would have put their volumes on the lowest storage tier available and monitored their performance (and how often they cried out).
There are myriad options for storage solutions. You need a high-level view of what service you actually provide, and then the tools necessary to meet the service-level agreements you make with your customers. In the end, you're managing a complex shared resource that will have a full spectrum of demands placed on it. Once you understand that, the path toward better tactics and equipment should present itself as your expertise grows over time.