I have a ZFS zpool on Linux under kernel 2.6.32-431.11.2.el6.x86_64 which has a single vdev. The vdev is a SAN device. I expanded the size of the SAN, and despite the zpool having autoexpand set to on, even after rebooting the machine, exporting/importing the pool, and using zpool online -e, I was unable to get the pool to expand. I am sure the vdev is larger because fdisk shows it has increased from 215 GiB to 250 GiB. Here's a sample of what I did:
[root@timestandstill ~]# zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
dfbackup   214G   207G  7.49G  96%  1.00x  ONLINE  -
[root@timestandstill ~]# zpool import -d /dev/disk/by-id/
   pool: dfbackup
     id: 12129781223864362535
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        dfbackup             ONLINE
          virtio-sbs-XLPH83  ONLINE
[root@timestandstill ~]# zpool import -d /dev/disk/by-id/ dfbackup
[root@timestandstill ~]# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
dfbackup      214G   207G  7.49G  96%  1.00x  ONLINE  -
venuebackup   248G   244G  3.87G  98%  1.00x  ONLINE  -
[root@timestandstill ~]# zpool get autoexpand dfbackup
NAME      PROPERTY    VALUE  SOURCE
dfbackup  autoexpand  on     local
[root@timestandstill ~]# zpool set autoexpand=off dfbackup
[root@timestandstill ~]# zpool set autoexpand=on dfbackup
[root@timestandstill ~]# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
dfbackup      214G   207G  7.49G  96%  1.00x  ONLINE  -
venuebackup   248G   244G  3.87G  98%  1.00x  ONLINE  -
[root@timestandstill ~]# zpool status -v dfbackup
  pool: dfbackup
 state: ONLINE
  scan: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        dfbackup             ONLINE       0     0     0
          virtio-sbs-XLPH83  ONLINE       0     0     0

errors: No known data errors
[root@timestandstill ~]# fdisk /dev/disk/by-id/virtio-sbs-XLPH83
WARNING: GPT (GUID Partition Table) detected on '/dev/disk/by-id/virtio-sbs-XLPH83'! The util fdisk doesn't support GPT. Use GNU Parted.
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): p
Disk /dev/disk/by-id/virtio-sbs-XLPH83: 268.4 GB, 268435456000 bytes
256 heads, 63 sectors/track, 32507 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
                                 Device Boot      Start        End      Blocks   Id  System
/dev/disk/by-id/virtio-sbs-XLPH83-part1              1      27957  225443839+   ee  GPT
Command (m for help): q
[root@timestandstill ~]# zpool online -e dfbackup /dev/disk/by-id/virtio-sbs-XLPH83
[root@timestandstill ~]# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
dfbackup      214G   207G  7.49G  96%  1.00x  ONLINE  -
venuebackup   248G   244G  3.87G  98%  1.00x  ONLINE  -
[root@timestandstill ~]# zpool status -v dfbackup
  pool: dfbackup
 state: ONLINE
  scan: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        dfbackup             ONLINE       0     0     0
          virtio-sbs-XLPH83  ONLINE       0     0     0

errors: No known data errors
How can I expand this zpool?
I'm running ZFS on Ubuntu 16.04 and, after much trial and error, this is what worked for expanding the disk and pool size without rebooting. My system is hosted in the cloud at Profitbricks and uses libvirt (not SCSI) drives.
Get pool and device details:
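Assuming the pool is named pool and sits on the libvirt disk /dev/vdb (the same names the later commands use), something like:

# pool/device names here are assumptions carried through the rest of this answer
$ sudo zpool list
$ sudo zpool status -v pool
$ lsblk /dev/vdb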
Activate autoexpand:
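With the same assumed pool name:

$ sudo zpool set autoexpand=on pool
$ sudo zpool get autoexpand pool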
Now login to Profitbricks control panel and increase disk size from 40GB to 50GB.
Notify system of disk size change and expand pool:
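partprobe asks the kernel to re-read the now-larger disk, and zpool online -e tells ZFS to grow the vdev into the new space (again with the assumed pool/vdb names):

$ sudo partprobe
$ sudo zpool online -e pool vdb
$ sudo zpool list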
I'm not sure why, but it is sometimes necessary to run partprobe and/or zpool online -e pool vdb twice in order to make the changes take effect.

I read a post on the FreeBSD forums which suggested using zpool online -e <pool> <vdev> (without needing to offline the vdev first). This ultimately was the solution, but it required that ZFS autoexpand be disabled first: zpool set autoexpand=off followed by zpool online -e was what finally got the zpool to expand for me, using ZFS on Linux (in-kernel, not FUSE).
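For the pool in my question, that boils down to something like:

# autoexpand must be off before the expand, per the note above
zpool set autoexpand=off dfbackup
zpool online -e dfbackup /dev/disk/by-id/virtio-sbs-XLPH83
zpool list dfbackup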