I'm using a combination of mdadm, lvm2, and XFS on Amazon EC2.
So far, I've had success running a RAID 5 array built from a number of EBS volumes: the EBS volumes are attached and assembled into the RAID 5 array with mdadm, and then LVM presents the resulting array as a single physical volume and a single logical volume.
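For context, here's a rough sketch of that initial setup; the device names /dev/xvdf through /dev/xvdh are placeholders, while the volume group and logical volume names match the commands further down:

# assemble the attached EBS volumes into a RAID 5 array
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/xvdf /dev/xvdg /dev/xvdh
# layer LVM on top: one physical volume, one volume group, one logical volume
pvcreate /dev/md0
vgcreate data_vg /dev/md0
lvcreate -l 100%FREE -n data_lv data_vg
# create the XFS file system on the logical volume
mkfs.xfs /dev/data_vg/data_lv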
In the past, I've been able to grow the file system by adding a new EBS volume, attaching it, and then running the following process:
mdadm --add /dev/md0 /dev/xvdi
# grow the raid... (could take a while for a large disk!)
mdadm --grow /dev/md0 --raid-devices=4
# grow the LVM physical volume
pvresize /dev/md0
# grow the LVM logical volume (fairly certain about this one):
# -l100%PVS makes the LV's extents use as much space as possible
# on the physical volumes (and hopefully won't overwrite anything)
lvresize -l100%PVS /dev/data_vg/data_lv
# resize the file system, too!
xfs_growfs /dev/data_vg/data_lv
# yay!
df -h
My most recent attempt at doing this has, ostensibly, worked just fine. Running df -i and df -h shows that I have a mounted file system with an additional terabyte available, as expected, and that only 1% of the inodes are in use. pvdisplay and lvdisplay also show the proper volume sizes.
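Roughly, those checks looked like this (assuming the file system is mounted at /data, a placeholder path):

df -h /data                      # block usage: the extra terabyte shows up
df -i /data                      # inode usage: only 1% used
pvdisplay /dev/md0               # physical volume reports the new size
lvdisplay /dev/data_vg/data_lv   # logical volume reports the new size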
I've even been able to add some data (about 150GB) to the volume since growing it. However, today I attempted to create a directory and got
mkdir: no space left on device
Why would I encounter this problem if I allegedly have plenty of inodes available?
I've unmounted the disk and run xfs_check, but it did not report any issues.
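For reference, that check was run offline, roughly like this (again using the placeholder /data mount point; on newer xfsprogs, where xfs_check is deprecated, xfs_repair -n is the equivalent read-only check):

umount /data
xfs_check /dev/data_vg/data_lv
# on newer xfsprogs:
# xfs_repair -n /dev/data_vg/data_lv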
Thanks!
I was able to resolve the issue in the following way:
Apparently, with the default allocation mode (32-bit inodes, i.e. inode32), XFS stores all inodes in the first 1TB of the disk. This means that if the first 1TB is full, you'll run into "no space left on device" errors even though it appears you have plenty of space and inodes available. By adding the inode64 mount option, inodes can be stored anywhere on the disk, if I understand correctly.
Source: the XFS FAQ.
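Concretely, the fix looked something like the following, assuming the file system is mounted at /data (a placeholder mount point; adjust the fstab line to your own setup):

# remount with 64-bit inode allocation enabled
umount /data
mount -o inode64 /dev/data_vg/data_lv /data
# make it persistent across reboots via /etc/fstab, e.g.:
# /dev/data_vg/data_lv  /data  xfs  defaults,inode64  0 0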