I have a 3ware 9650SE with 2x 2TB disks in a RAID-1 topology.
I recently replaced the disks with 2 larger (3TB) ones, one by one. The whole migration went smoothly. The problem is that I don't know what else I have to do to make the system aware of the increased size of this unit.
Some info:
root@samothraki:~# tw_cli /c0 show all
/c0 Model = 9650SE-4LPML
/c0 Firmware Version = FE9X 4.10.00.024
/c0 Driver Version = 2.26.02.014
/c0 Bios Version = BE9X 4.08.00.004
/c0 Boot Loader Version = BL9X 3.08.00.001
....
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
------------------------------------------------------------------------------
u0 RAID-1 OK - - - 139.688 Ri ON
u1 RAID-1 OK - - - **1862.63** Ri ON
VPort Status Unit Size Type Phy Encl-Slot Model
------------------------------------------------------------------------------
p0 OK u0 139.73 GB SATA 0 - WDC WD1500HLFS-01G6
p1 OK u0 139.73 GB SATA 1 - WDC WD1500HLFS-01G6
p2 OK u1 **2.73 TB** SATA 2 - WDC WD30EFRX-68EUZN0
p3 OK u1 **2.73 TB** SATA 3 - WDC WD30EFRX-68EUZN0
Note that the disks p2 & p3 are correctly identified as 3TB, but the RAID-1 unit u1 still reports the old 2TB size.
I followed the guide in the LSI 3ware 9650SE 10.2 codeset documentation (note: the codeset 9.5.3 user guide contains exactly the same procedure). I triple-sync my data and umount the RAID unit u1. Next I remove the unit from the command line:
tw_cli /c0/u1 remove
and finally I rescan the controller to find the unit again:
tw_cli /c0 rescan
Unfortunately the re-detected u1 unit still reports the 2TB size.
What could be wrong?
Some extra info: the u1 unit corresponds to /dev/sdb, which in turn is a physical volume of a larger LVM volume group. Now that I have replaced both drives, the partition table appears to be empty, yet the LVM volume works fine. Is that normal?!
root@samothraki:~# fdisk -l /dev/sdb
Disk /dev/sdb: 2000.0 GB, 1999988850688 bytes
255 heads, 63 sectors/track, 243151 cylinders, total 3906228224 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@samothraki:~#
You would need to update the u1 size before increasing the filesystem from within the OS; the OS will not "see" the new size until the 3ware controller announces it. Unit capacity expansion in 3ware terms is called migration. I am certain it works for RAID-5 and RAID-6; I didn't try it with RAID-1. Here is an example of a migration command to run:
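(The example command block appears to have been lost from this post; purely as a hedged illustration of the migrate syntax, where the type and disk values depend entirely on your target layout:)
tw_cli /c0/u1 migrate type=<new RAID type> disk=<ports to include>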
When this completes, fdisk -l /dev/sdb should report 3TB and vgdisplay <VG name> will list some free space. From there you would increase the VG size, then the respective LV, and finally the filesystem within the LV.
Edit: I think you are out of luck - see page 129 of the User Guide.
You could migrate your RAID-1 to a different array type. Here is an alternative (it carries some risk, so make sure your backups are good):
tw_cli /c0/u1 migrate type=single
- this will break apart your u1 unit into two single drives;
tw_cli /c0/u1 migrate type=raid1 disk=2-3
- this should migrate your single unit back to RAID-1 with the correct size.
Of course, there are alternative approaches to this; the one listed above is for the case where you want your data online all the time.
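Whichever route you take, once the unit finally reports the full 3TB, the LVM side mentioned above would go roughly like this (a hedged sketch; vg0 and lvdata are placeholder names, and resize2fs assumes an ext2/3/4 filesystem):
pvresize /dev/sdb                      # grow the physical volume to the new unit size
lvextend -l +100%FREE /dev/vg0/lvdata  # grow the logical volume into the freed space
resize2fs /dev/vg0/lvdata              # finally grow the filesystem inside the LV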
OK, this answer builds on grs's answer, so credit for a good 70% of it goes there.
Notes, summing up the situation: the key is to delete one drive of the mirror at a time and recreate a new unit each time (a consolidated command sketch follows this list). Overall:
1. Split the RAID-1 array. This will generate 2 units with the old disk size (2TB in my case). The precious /dev/sdX which was pointing to the RAID-1 /u1 should still exist (and work!), and you'll also get a new unit /u2 based on the 2nd drive of the mirror.
2. Delete the disk of the mirror that is no longer used (it belongs to the new unit /u2 in my case, and will have acquired a new /dev/sdX device node after a restart).
3. Create a new single unit with the unused disk. NOTE: I did this step from the BIOS, so I am not sure this is how it should be done from the CLI; in the BIOS I did "create unit", not "migrate". Someone please verify this.
4. The new /u2 unit should 'see' all of the 3TB. Go ahead and transfer the data from the 2TB disk to the 3TB disk.
5. Once the data are on the new unit, update all references to the new /dev/sdX.
6. The remaining 2TB disk is (should be!) now unused, so go ahead and delete it.
7. Create a new single unit with that freed disk. The new /u1 unit should have 3TB of space now, too.
8. Finally, take a deep breath and merge the 2 single units into the new, expanded RAID-1. /u1 should now disappear and unit /u2 should start rebuilding.
9. Enjoy life. Like, seriously.
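For reference, here is a hedged sketch of the tw_cli side of the steps above. The unit and port numbers (u1/u2, disks 2 and 3) are assumptions taken from this thread, so check them against tw_cli /c0 show before running anything, and remember that I actually did step 3 from the BIOS:
tw_cli /c0/u1 migrate type=single        # step 1: break the mirror into two single units
tw_cli /c0/u2 del                        # step 2: drop the unit on the now-unused half (the data stays on u1)
tw_cli /c0 add type=single disk=3        # step 3: re-create a single unit on that disk, now at its full 3TB
# steps 4-5: copy the data across (dd, rsync, ...) and repoint mounts to the new /dev/sdX
tw_cli /c0/u1 del                        # step 6: delete the old 2TB unit, freeing its disk
tw_cli /c0/u2 migrate type=raid1 disk=2  # steps 7-8 folded together: pull the freed disk back in as the mirror (see the follow-up notes below)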
Maybe your kernel did not receive updates from the controller.
Try to update the disk info by typing:
It will force the kernel to re-read the partition tables and disk properties.
Also try:
and/or:
because partprobe does not always work...
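(The actual command blocks did not survive in this answer; going by the partprobe mention, a hedged guess at the usual candidates would be:)
partprobe /dev/sdb                       # ask the kernel to re-read the partition table
blockdev --rereadpt /dev/sdb             # same idea via blockdev
echo 1 > /sys/block/sdb/device/rescan    # rescan the SCSI device so a changed size is picked up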
These are just some notes adding to nass's answer. Going from memory here, so this might not all be completely correct, and there was some rebooting done throughout these steps.
Steps 1-2: ?
Step 3: Adding a new single unit from the cli:
tw_cli /c0 add type=single disk=3
Step 4: I used dd if=/dev/sdX of=/dev/sdY bs=64K to clone the disk. To determine which were the correct devices, before Step 3 I tried mounting some devices (e.g. sudo mount -t ntfs /dev/sda1 /mnt/a) and exploring the contents to see which was my source device from unit /c0/u1. (There's probably a better way of determining this; a hedged sketch of one follows these notes.) Also before Step 3, I ran ls /dev/sd*, noted which device had an existing sdY1 but no sdY, and then after Step 3 checked again for which sdY had been created. I also used sudo hdparm -I /dev/sdY on each device before/after Step 3 to confirm things looked right. NOTE: Rebooting might change which device is which, so avoid doing that between checking and dd'ing.
Steps 5-6: ?
Steps 7-8: Creating a new single unit from the unused disk and then migrating didn't work for me (an "Invalid disk" error or something along those lines). Instead, skipping Step 7 and going straight to Step 8 should work.
Step 9: Will do. Thanks for the help!
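On the "better way of determining this" in Step 4: a hedged alternative that avoids the trial mounting (assuming lsblk is available on the live CD) would be:
lsblk -o NAME,SIZE,MODEL,SERIAL   # sizes/models make it easy to tell the old and new units apart
ls -l /dev/disk/by-id/            # persistent names that, unlike /dev/sdX, do not shuffle across reboots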
Some other notes from my experience with this:
I used a Knoppix Live CD to do most of this. To install tw-cli on it:
sudo nano /etc/apt/sources.list
(add deb http://hwraid.le-vert.net/ubuntu precise main at the top)
sudo apt-get update
sudo apt-get install tw-cli 3dm2
I was doing this on the boot drive of a Windows installation, going from 2TB drives to 4TB drives. One thing I forgot to check before starting was whether the disk was MBR or GPT. Turns out it was MBR, meaning that I can't access most of the extra space on the drive without converting to GPT.
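If anyone repeats this, that is worth checking before cloning; a hedged one-liner (any partitioning tool that prints the table type will do, and sdX is a placeholder) is:
sudo parted /dev/sdX print | grep 'Partition Table'   # prints "msdos" for MBR or "gpt" for GPT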
The long and short of this post is: contact LSI support to get a migration script.
I'm pretty sure I've got the same controller in both the 2- and 4-port configurations, and when I wanted to grow a 1TB RAID-1 to 2TB, I replaced one of the disks with a 2TB one, and then replaced the other disk after the rebuild.
At this point I still had a 1TB RAID-1, but sitting on 2TB disks. I then sent some drive dimension specifics off to LSI as a support request and they, in turn, sent me a (very technical) script that, when executed, did the migration for me.
I never was satisfied why this migration couldn't be done without LSI support, but in the end it worked out fine.