This is a follow-on to How can I grow a 3Ware 9650SE RAID1 under ESXi 5.0?
I've successively replaced the 1TB drives in my RAID1 with 2TB drives, hoping that I can grow the datastore I've got in ESXi 5.0. After replacing the drives and letting the rebuild finish, I can boot into ESXi (the RAID is also the boot partition), but partition tools (both partedUtil from the ESXi maintenance console and a gParted boot disk) show the RAID at its original, sub-1TB size.
What do I need to do to allow OSs, particularly ESXi, to see the unused portions of the drives?
EDIT As MDMarra suggested below, I had tried the CLI KB article, but got confusing results. I think my question still stands. Worded differently: why are partition tools unable to read the full size of the drives in a RAID, and how can I enable them to?
/dev/disks # partedUtil getptbl /vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000
gpt
121575 255 63 1953103872
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
2 1843200 10229759 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
3 10229760 1953103838 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
Looking at the line 121575 255 63 1953103872
the last number is supposed to be the LBA size of the disk (in 512-byte sectors), in this case just under 1TB. Forging ahead anyway ...
~ # vmkfstools --growfs "/vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000:3" "/vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000:3"
Underlying device has no free space
Error: No space left on device
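That error is consistent with the geometry above. A rough check, assuming 512-byte sectors:
1953103872 x 512 bytes = 999,989,182,464 bytes, roughly 931 GiB - the old ~1TB size, not 2TB
1953103872 - 1953103838 = 34 sectors (~17 KB) left beyond the end of partition 3
So partition 3 already runs to the end of what the device reports, and growfs has nothing to grow into.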
So I'm left thinking I need to do something to allow the OS to see the true size of the RAID array.
EDIT 2 Output of tw_cli
~ # /tmp/tw_cli /c0
Error: (CLI:003) Specified controller does not exist.
~ # /tmp/tw_cli show
Ctl Model (V)Ports Drives Units NotOpt RRate VRate BBU
------------------------------------------------------------------------
c6 9650SE-4LPML 4 2 1 0 1 1 -
~ # /tmp/tw_cli /c6 show
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
------------------------------------------------------------------------------
u0 RAID-1 OK - - - 931.312 RiW ON
Port Status Unit Size Blocks Serial
---------------------------------------------------------------
p0 OK u0 1.82 TB 3907029168 WD-WCAY00283502
p1 OK u0 1.82 TB 3907029168 WD-WCAY00286752
p2 NOT-PRESENT - - - -
p3 NOT-PRESENT - - - -
~ #
Your expansion attempt has not been successful so far.
It may have failed - this would have produced an appropriate entry in the controller's logs. Take a look at the "Controller log" section of the
tw_cli /c6 show diag
output. Or you may have used the wrong command set. In your special case it seems somewhat tricky. Intuitively, a RAID1-to-RAID1 migration (see the sketch just below) should launch the expansion, but a migration from raid1 to raid1 is unsupported according to the migration matrix in the latest/greatest CLI guide for 10.2 (which seems to date from 2010).
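As a rough sketch only - the exact invocation was not preserved here, type=raid1 is an assumption, and the controller may reject it as unsupported - the intuitive attempt would be a migrate on the controller and unit from your output:
tw_cli /c6/u0 migrate type=raid1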
As I would not be too sure that this is still current and correct information, I would simply try the migration command sketched above first. Should this fail, the route to go would probably be to break the mirror, run a show on the controller to see which disk has ended up in u0 and which has been separated out into a new unit, delete that newly created unit, and then re-mirror u0 onto the freed disk, which should finally expand the capacity of the array; a sketch of that sequence follows below. But honestly, this is where I would open a call with LSI tech support just to make sure I don't screw up the array with a careless move.
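A minimal sketch of that sequence, assuming controller /c6 and the ports from your tw_cli output. The original commands were not preserved here, the type=single split in particular is an assumption that may not be supported by your firmware, and the unit and port numbers (u1, disk 1) are guesses that must be confirmed from the show output before deleting anything:
# split the RAID-1 into single-disk units (assumed migration path)
tw_cli /c6/u0 migrate type=single
# see which disk stayed in u0 and which got split off into a new unit (u1 assumed below)
tw_cli /c6 show
# delete the newly created unit to free its disk
tw_cli /c6/u1 del
# re-mirror u0 onto the freed port (port 1 assumed)
tw_cli /c6/u0 migrate type=raid1 disk=1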
And one more important point: make sure you have recent backups you can restore from.
You simply need to increase the size of your logical disk/unit (u0).
Some form of the
tw_cli /c6/u0 migrate
command would seem to work for you, but see this knowledge base article, which gives conflicting information.
You need to use the CLI to extend the partition and grow the VMFS volume. You can't do this from the GUI with local storage, so you'll have to get your hands dirty with the vCLI.
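Once the controller actually exposes the larger unit, the ESXi side would look roughly like this. The device path and partition number are taken from your partedUtil output; <new_end_sector> is a placeholder you have to recompute from the grown device's partedUtil getptbl geometry, and partition 3 must keep its original start sector:
partedUtil resize "/vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000" 3 10229760 <new_end_sector>
vmkfstools --growfs "/vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000:3" "/vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000:3"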