I installed Ubuntu 20.04 with the experimental ZFS option enabled; indeed, in Disks and GParted I can see that most of my current SSD is occupied by a zfs_member partition.
I have only minor experience with ZFS. I set up a pool on a previous Ubuntu 18 install, but the OS itself was not on ZFS.
Anyway: how do I add another SSD (not of the same size or brand) to my current ZFS pool?
I just want the most basic "add space" setup. I don't really care about 2x, 3x, or 10x redundancy (in fact I don't want to alter the current ZFS redundancy setup, whatever it is); I just want extra space.
I found this: https://unix.stackexchange.com/questions/530968/adding-disks-to-zfs-pool
but it doesn't answer my question at my level of experience.
For example, neither of the two people who answered specified whether it should be:

zpool create addonpool /dev/sdb
zpool add addonpool mirror /dev/sda /dev/sdb

or just:

zpool add rpool mirror /dev/sda /dev/sdb    # "rpool" being the name of my existing pool, apparently
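As far as I can tell from man zpool (paraphrasing here, I haven't run anything yet), zpool create makes a brand-new, separate pool, while zpool add attaches more vdevs to an existing pool, so presumably I want something of the second form. The general shapes seem to be roughly:

zpool create <newpool> <vdev> ...
zpool add <existingpool> <vdev> ...

What I can't tell is which vdev layout (a bare disk versus a mirror) is the right thing to feed it in my case.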
Nor is it clear what syntax should be used to point at the drives. I want to expand rpool, but all the links I found reference Solaris-style device names like c0t3d0, c1t3d0, and c1t1d0.
I can't find any identifier like that on my system. This guide: https://www.thegeekdiary.com/zfs-tutorials-creating-zfs-pools-and-file-systems/
uses echo | format, which does not work on Ubuntu 20.04.
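If I understand correctly, the cXtYdZ form is Solaris device naming, and on Linux the equivalent would be either the plain /dev/sdX node or, more robustly (since the sdX letters can change between boots), one of the persistent paths under /dev/disk/by-id/. I assume I can list the candidates with something like:

ls -l /dev/disk/by-id/
lsblk -o NAME,SIZE,MODEL,SERIAL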
I do know their GUIDs:
t@tsu:~$ sudo lshw -class disk
[sudo] password for t:
*-disk:0
description: ATA Disk
product: Samsung SSD 850
physical id: 0
bus info: scsi@2:0.0.0
logical name: /dev/sda
version: 2B6Q
serial: S2RBNX0J524197X
size: 465GiB (500GB)
capabilities: gpt-1.00 partitioned partitioned:gpt
configuration: ansiversion=5 guid=32f4df93-2b50-4a68-a888-f0570adac413 logicalsectorsize=512 sectorsize=512
*-disk:1
description: ATA Disk
product: Crucial_CT525MX3
physical id: 1
bus info: scsi@4:0.0.0
logical name: /dev/sdb
version: R040
serial: 172918010661
size: 489GiB (525GB)
capabilities: gpt-1.00 partitioned partitioned:gpt
configuration: ansiversion=5 guid=d3e2b4ab-2c44-4da8-ac0c-fdb8053d35da logicalsectorsize=512 sectorsize=512
I did test running just zpool on its own to get the usage summary, and that works, so I know I'd be able to run the commands above; I just want to not mess it up.
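From the man page there also seems to be a dry-run flag, so my plan was to first preview the resulting layout with something like the following (the <new-disk> placeholder standing in for whichever identifier turns out to be correct) before committing:

sudo zpool add -n rpool <new-disk>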
Also, I'm planning on doing this by logging out and running my commands in a TTY. It does nag me that, technically, at that point I still haven't really exited the environment that is using my ZFS pool, so will that work, or should this be done from a live USB?
t@tsu:~$ zpool status
  pool: bpool
 state: ONLINE
  scan: none requested
config:

        NAME                                      STATE     READ WRITE CKSUM
        bpool                                     ONLINE       0     0     0
          73ea4055-b5ea-894b-a861-907bb222d9ea    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                      STATE     READ WRITE CKSUM
        rpool                                     ONLINE       0     0     0
          7905bb43-ac9f-a843-b1bb-8809744d9025    ONLINE       0     0     0

errors: No known data errors
t@tsu:~$ blkid
/dev/sda2: UUID="53c19176-f03e-4c40-a6ed-3a2627160647" TYPE="swap" PARTUUID="7a5a6a79-1359-e04f-a783-1845b8bff78f"
/dev/sda1: UUID="3B30-7656" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="f1ae5c48-1ad3-4aff-9161-f0379f64d556"
/dev/sda3: LABEL="bpool" UUID="3543073794614485280" UUID_SUB="6877096781256962450" TYPE="zfs_member" PARTUUID="73ea4055-b5ea-894b-a861-907bb222d9ea"
/dev/sda4: LABEL="rpool" UUID="9443649997029540364" UUID_SUB="15472508558080641563" TYPE="zfs_member" PARTUUID="7905bb43-ac9f-a843-b1bb-8809744d9025"
t@tsu:~$ zfs list
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
bpool                                              442M  1,32G    96K  /boot
bpool/BOOT                                         440M  1,32G    96K  none
bpool/BOOT/ubuntu_38tazy                           440M  1,32G   178M  /boot
rpool                                              169G   276G    96K  /
rpool/ROOT                                        8,66G   276G    96K  none
rpool/ROOT/ubuntu_38tazy                          8,66G   276G  3,65G  /
rpool/ROOT/ubuntu_38tazy/srv                       152K   276G    96K  /srv
rpool/ROOT/ubuntu_38tazy/usr                       480K   276G    96K  /usr
rpool/ROOT/ubuntu_38tazy/usr/local                 384K   276G   128K  /usr/local
rpool/ROOT/ubuntu_38tazy/var                      3,17G   276G    96K  /var
rpool/ROOT/ubuntu_38tazy/var/games                 152K   276G    96K  /var/games
rpool/ROOT/ubuntu_38tazy/var/lib                  3,10G   276G  2,53G  /var/lib
rpool/ROOT/ubuntu_38tazy/var/lib/AccountsService   464K   276G   112K  /var/lib/AccountsService
rpool/ROOT/ubuntu_38tazy/var/lib/NetworkManager   2,37M   276G   208K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_38tazy/var/lib/apt              75,1M   276G  65,6M  /var/lib/apt
rpool/ROOT/ubuntu_38tazy/var/lib/dpkg             97,3M   276G  38,1M  /var/lib/dpkg
rpool/ROOT/ubuntu_38tazy/var/log                  69,7M   276G  34,5M  /var/log
rpool/ROOT/ubuntu_38tazy/var/mail                  152K   276G    96K  /var/mail
rpool/ROOT/ubuntu_38tazy/var/snap                 1016K   276G   160K  /var/snap
rpool/ROOT/ubuntu_38tazy/var/spool                 512K   276G   112K  /var/spool
rpool/ROOT/ubuntu_38tazy/var/www                   152K   276G    96K  /var/www
rpool/USERDATA                                     160G   276G    96K  /
rpool/USERDATA/root_mh3805                         956K   276G   208K  /root
rpool/USERDATA/t_mh3805                            160G   276G   142G  /home/t
For those searching for a solution to a similar issue in the future, see my answer, which the OP accepted, on Unix & Linux Stack Exchange: https://unix.stackexchange.com/a/597275/151609
In this particular case the solution adds additional space, but not redundancy.
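For quick reference, a space-only addition of a second whole disk generally boils down to something of this shape (the by-id path is illustrative rather than the exact device on the asker's machine, and note that a pool made of plain single-disk vdevs has no redundancy, so losing either disk loses the pool):

sudo zpool add rpool /dev/disk/by-id/<id-of-the-new-ssd>
zpool status rpool    # confirm the new vdev shows up

See the linked answer for the details that apply to this specific setup.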