We have a large ~18TB hardware RAID array on a Dell R720xd. The RAID5 array currently consists of 6x4TB disks, and I needed to extend it.
Step 1: expand the hardware RAID array.
Simple enough if you have the Dell admin tools installed.
omconfig storage vdisk action=reconfigure controller=0 vdisk=1 raid=r5 pdisk=0:1:0,0:1:1,0:1:2,0:1:3,0:1:4,0:1:5,0:1:8,0:1:9
(The new disks were the last two, which can be confirmed using the omreport tool.) That all went fine, though it takes a while, and I was able to confirm the array had been expanded:
% omreport storage vdisk controller=0 vdisk=1
Virtual Disk 1 on Controller PERC H710P Mini (Embedded)
Controller PERC H710P Mini (Embedded)
ID : 1
Status : Ok
Name : bak
State : Ready
Hot Spare Policy violated : Not Assigned
Encrypted : No
Layout : RAID-5
Size : 26,078.50 GB (28001576157184 bytes)
...
Device Name : /dev/sdb
...
Step 2: new partition
So the vdisk is now reporting the increased (26TB) size, and fdisk concurs:
Disk /dev/sdb: 25.5 TiB, 28001576157184 bytes, 54690578432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A2D20632-37D1-4607-9AA0-B0ED6E457F91
Device Start End Sectors Size Type
/dev/sdb1 2048 39064698846 39064696799 18.2T Linux LVM
However, when I go to add an additional partition to the disk, the following happens:
Command (m for help): n
Partition number (2-128, default 2): 2
First sector (34-2047):
I now have about 16 billion more sectors on the disk, but I can't use them: I am only offered sectors 34-2047, so I cannot allocate the ~8TB of new space even though the disk currently holds just a single partition.
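The numbers are consistent with fdisk trusting the (stale) GPT label rather than the kernel's view of the disk: the label's last usable LBA is still 39064698846, /dev/sdb1 ends exactly there, so the only free gap inside the label is sectors 34-2047. A quick back-of-envelope check with the values from the output above:

```shell
# Values copied from the fdisk output above (this box's numbers)
FIRST_LBA=34                 # first usable sector per the GPT label
PART_START=2048              # where sdb1 starts
LAST_LBA=39064698846         # last usable sector per the (stale) label; sdb1 ends here
DISK_SECTORS=54690578432     # sectors the kernel now reports

# the only free gap fdisk can offer, inside the label
echo "free gap inside the label: ${FIRST_LBA}-$((PART_START - 1))"
# sectors that exist on the disk but lie beyond the label's last LBA
echo "sectors invisible to the label: $((DISK_SECTORS - 1 - LAST_LBA))"
echo "roughly $(( (DISK_SECTORS - 1 - LAST_LBA) * 512 / 1000000000000 )) TB"
```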
The other thing that struck me as odd was that I was offered partition numbers 2-128, not simply 2-4. The partition table doesn't show any extended partition, so I would have expected to be limited to just 4 partitions initially.
Is there anything I am missing?
- The machine has been rebooted since the drive array was expanded. Before that, fdisk reported only the original 18TB.
- Trying cfdisk instead just reports 2015 sectors available in the 39-billion range, despite reporting 25TB overall.
- We don't want to delete and re-create the partition if we can avoid it, given we could lose all the data. We would prefer to simply extend the LVM volume group with the new partition once done.
- It's a similar issue to another Server Fault question, but I am not limited by having run out of partitions, and I don't think I'm being restricted by an extended partition.
- It's not the sector size being changed by the drive expansion; if it were, I would have thought fdisk would not be reporting the increased sector count. Plus pvs and vgs are not reporting any additional unallocated space under LVM.
- I ran this as a dry run on a virtual machine and did not experience the problem. However, I was shutting down the VM and increasing its disk device size, so it was not online during the size increase; plus the drive sizes were many orders of magnitude smaller on the VM.
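As an aside on the reboot point: with disks presented through the sd driver (as the PERC vdisk is), you can often get the kernel to pick up a grown device without rebooting by rescanning it. A sketch, hedged because behaviour varies by controller and kernel version:

```shell
# Ask the SCSI layer to re-read the capacity of sdb; in many cases
# this avoids the reboot that was needed here
echo 1 > /sys/block/sdb/device/rescan

# The kernel log should then show a line along the lines of
# "sdb: detected capacity change" if the resize was picked up
dmesg | tail
```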
Update 1: fdisk e'x'pert mode output, requested by Micheal:
Command (m for help): x
Expert command (m for help): p
Disk /dev/sdb: 25.5 TiB, 28001576157184 bytes, 54690578432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A2D20632-37D1-4607-9AA0-B0ED6E457F91
First LBA: 34
Last LBA: 39064698846
Alternative LBA: 39064698879
Partitions entries LBA: 2
Allocated partition entries: 128
Device Start End Sectors Type-UUID UUID Name Attrs
/dev/sdb1 2048 39064698846 39064696799 E6D6D379-F507-44C2-A23C-238F2A3DF928 E9CB58BF-F170-4480-A230-6E2A238367D1 Linux LVM
Expert command (m for help): v
MyLBA mismatch with real position at backup header.
1 error detected.
So a possible LBA error?
The problem was the location of the backup partition table. Normally you expect the primary GPT table at the start of the disk and the backup table at the end; the disk resize made more sectors available but never moved the backup table. fdisk did not like this, and I believe that is what the
MyLBA mismatch with real position at backup header.
error message was getting at, though it is not exactly clear. I switched from fdisk to gdisk and its output was a little different: loading the disk and then running the 'v'erify command gave a more helpful error message. Under gdisk's expert mode there is an option to relocate the backup data structures to the end of the disk; that ran successfully, and verify then reported no problems.
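For anyone following along, the gdisk sequence was roughly the following; the menu letters are from gdisk's expert menu, and sgdisk (from the same package) offers the equivalent fix non-interactively. Run it against the right disk, and dump the table somewhere safe first:

```shell
# Interactive fix via the gdisk expert menu:
#   gdisk /dev/sdb
#     x   enter expert mode
#     e   relocate backup data structures to the end of the disk
#     v   verify; should now report no problems found
#     w   write the corrected GPT and exit

# Non-interactive equivalent:
sgdisk -e /dev/sdb   # --move-second-header: move backup GPT to end of disk
```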
Printing the partition table now showed the last usable sector up around 54 billion rather than 39 billion, and I was able to create the new partition and add it into LVM.
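The LVM side is the standard grow sequence. A sketch, assuming the new partition came up as /dev/sdb2 and using bak_vg/bak_lv as stand-ins for the real volume group and logical volume names (adjust to your setup):

```shell
partprobe /dev/sdb            # make the kernel re-read the partition table
pvcreate /dev/sdb2            # initialise the new partition as a physical volume
vgextend bak_vg /dev/sdb2     # add it to the existing volume group
lvextend -l +100%FREE /dev/bak_vg/bak_lv   # grow the logical volume into the new space
resize2fs /dev/bak_vg/bak_lv  # grow the filesystem (ext4; use xfs_growfs for XFS)
```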
The key to this snafu is this:
Last LBA: 39064698846
Your GPT label does not reflect the medium size, which has changed. fdisk searches for free space in a manner which ain't perfect, but at least logical: it looks for the first available sector in the largest free area between the GPT label's first and last LBAs. One way around it may be to use sfdisk to dump the label, edit it appropriately to your medium size, and write it back; or better, use parted, which should take care of that issue IMO.
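A sketch of the two routes mentioned, assuming /dev/sdb; both rewrite the label in place, so keep the dump as a backup regardless of which you use:

```shell
# sfdisk route: dump the label, keep the dump safe, write it back;
# rewriting the label should recreate the backup GPT at the disk's end
sfdisk --dump /dev/sdb > sdb-gpt.dump
sfdisk /dev/sdb < sdb-gpt.dump

# parted route: printing the table makes parted notice the size
# mismatch and offer to "Fix" the GPT to use all of the space
parted /dev/sdb print
```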