I've created a RAID0 array from two 1GB EBS volumes using mdadm, exposed as /dev/md0 and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its 2GB capacity.
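For reference, the setup described above might look like the following sketch. The device names and mount point are illustrative, not taken from the question:

```shell
# Build a RAID0 array from the two 1GB EBS volumes
# (device names are placeholders; yours may differ).
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdf /dev/sdg

# Format with XFS and mount it.
mkfs.xfs /dev/md0
mkdir -p /vol
mount /dev/md0 /vol
```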
I then created snapshots of the volumes using ec2-consistent-snapshot, and created new volumes from those snapshots, specifying the volume size as 2GB (effectively doubling the capacity of each disk).
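A sketch of the snapshot-and-enlarge step, using the old EC2 API tools. All IDs are placeholders, and the ec2-consistent-snapshot flag shown is from older releases of the tool, so check your version's documentation:

```shell
# Snapshot both members of the array consistently; the tool freezes
# the XFS filesystem at /vol while the snapshots are initiated.
ec2-consistent-snapshot --xfs-filesystem /vol vol-11111111 vol-22222222

# Create new, larger (2GB) volumes from the resulting snapshots.
ec2-create-volume --snapshot snap-11111111 --size 2 -z us-east-1a
ec2-create-volume --snapshot snap-22222222 --size 2 -z us-east-1a
```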
I then spun up a new instance, assembled the RAID0 array as /dev/md0 from the two new volumes, and mounted it at /vol.
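The assemble-and-mount step on the new instance might look like this (device names are placeholders):

```shell
# Attach the two new volumes to the instance first, then assemble the
# existing array rather than creating a new one (creating would wipe it).
mdadm --assemble /dev/md0 /dev/sdf /dev/sdg

mkdir -p /vol
mount /dev/md0 /vol
df -hT /vol   # reports the filesystem size, not the raw device size
```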
df -hT showed /vol as 2GB (as expected).
Now I ran sudo xfs_growfs -d /vol. The command completed normally, but reported data blocks changed only from 523776 to 524160 (at 4KB per block, that is still barely 2GB), and df -hT still showed /vol as 2GB instead of the expected 4GB.
I rebooted, remounted, reassembled the RAID but it still reports the old size.
EDIT: trying to grow the RAID using mdadm --grow yields: mdadm: raid0 array /dev/md0 cannot be reshaped
Is there any other way I can grow a RAID0 array?
Use the --update=devicesize option when assembling your array.
However, this requires that you are using at least v1.1 metadata, which puts the superblocks on the front of the devices. If you expand volumes using v0.90 or v1.0 metadata, you will have to recreate the array to put the new superblocks at the end of the devices. This is non-destructive to filesystems as long as you use the same options when creating the array as you did originally.
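A sketch of that assemble step, assuming v1.1+ metadata; the device names are placeholders:

```shell
# Stop the array if it is already assembled.
mdadm --stop /dev/md0

# Re-assemble, telling mdadm to re-read each component's size from the
# device, so the array picks up the enlarged EBS volumes.
mdadm --assemble --update=devicesize /dev/md0 /dev/sdf /dev/sdg

# Then grow the filesystem into the enlarged array
# (an XFS filesystem must be mounted to be grown).
mount /dev/md0 /vol
xfs_growfs -d /vol
```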
One solution is to create a new set of volumes from scratch (2GB each) and assemble a new RAID0 array in parallel (say, /dev/md1). Then copy the files from the old array (/dev/md0) to the new one (/dev/md1).
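That workaround might be sketched as follows; device names, the second mount point, and the rsync flags are assumptions:

```shell
# Create the new array from two fresh 2GB volumes and format it.
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdh /dev/sdi
mkfs.xfs /dev/md1

# Mount both arrays and copy the data across, preserving
# permissions, hard links, ACLs, and extended attributes.
mkdir -p /vol2
mount /dev/md1 /vol2
rsync -aHAX /vol/ /vol2/
```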