Currently I have an Ubuntu 18.04.6 LTS server with two 6TB HDDs set up in RAID1, like so:
~$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdd2[0] sdb2[1]
      5859412992 blocks super 1.2 [2/2] [UU]
      bitmap: 1/44 pages [4KB], 65536KB chunk
The space on the drives will run out soon, so I bought two 16TB HDDs that I want to add (already physically connected in the server, but not set up). From what I understand, I cannot add these as a separate RAID1 array (a 16TB mirror plus the existing 6TB mirror) and instead need to move to RAID10. Is this true? Can't I just put the two 16TB drives in their own RAID1 and mount them as a different folder?
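To illustrate what I mean, this is roughly the layout I had in mind (a sketch only; /dev/sda and /dev/sdc are the new 16TB disks per the lsblk output further down, and partitioning them first rather than using the raw disks is my own assumption, not from any guide):
~$ sudo parted -s /dev/sda mklabel gpt mkpart primary 0% 100%
~$ sudo parted -s /dev/sdc mklabel gpt mkpart primary 0% 100%
~$ sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1
~$ sudo mkfs.ext4 /dev/md1
~$ sudo mkdir -p /mnt/newraid      # mount point name is hypothetical
~$ sudo mount /dev/md1 /mnt/newraid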
Can I use the two 16TB HDDs in combination with the two 6TB ones in a RAID10, or do all the member drives have to be the same size?
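For context, my back-of-the-envelope math, assuming mdadm RAID10 sizes every member at the smallest disk (my assumption, I haven't verified it):
usable = min(6, 6, 16, 16)TB x 4 drives / 2 mirrors = 12TB
i.e. 10TB of each 16TB drive would be wasted, if that assumption is right.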
How do I go about adding the two new drives and migrating to the new RAID setup without losing the existing data?
Business requirements:
- Redundancy / Fault tolerance
- Fast read/write (big data)
- Increase HD space, does not necessarily have to act as one drive (can be a new mount point / folder if easier)
UPDATE:
Following the instructions at the link below, I added the two new drives as an additional RAID1 array using the following commands, rebooted the machine, and now I can't ssh into it.
~$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME           SIZE  FSTYPE             TYPE   MOUNTPOINT
sda           14.6T                     disk
sdb            5.5T                     disk
├─sdb1         953M  vfat               part
└─sdb2         5.5T  linux_raid_member  part
  └─md0        5.5T  LVM2_member        raid1
    ├─vg-swap 186.3G swap               lvm    [SWAP]
    ├─vg-root  93.1G ext4               lvm    /
    ├─vg-tmp   46.6G ext4               lvm    /tmp
    ├─vg-var   23.3G ext4               lvm    /var
    └─vg-home   5.1T ext4               lvm    /home
sdc           14.6T                     disk
sdd            5.5T                     disk
├─sdd1         953M  vfat               part   /boot/efi
└─sdd2         5.5T  linux_raid_member  part
  └─md0        5.5T  LVM2_member        raid1
    ├─vg-swap 186.3G swap               lvm    [SWAP]
    ├─vg-root  93.1G ext4               lvm    /
    ├─vg-tmp   46.6G ext4               lvm    /tmp
    ├─vg-var   23.3G ext4               lvm    /var
    └─vg-home   5.1T ext4               lvm    /home
~$ sudo mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdc
~$ sudo mkfs.ext4 -F /dev/md1
~$ sudo mkdir -p /mnt/md1
~$ sudo mount /dev/md1 /mnt/md1
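In hindsight I should have captured the array state before rebooting; these are the standard mdadm status checks (nothing non-obvious, just for completeness — I didn't save their output):
~$ cat /proc/mdstat
~$ sudo mdadm --detail /dev/md1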
~$ df -h -x devtmpfs -x tmpfs
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/vg-root   92G  7.5G   79G   9% /
/dev/sdd1            952M  4.4M  947M   1% /boot/efi
/dev/mapper/vg-var    23G  6.0G   16G  28% /var
/dev/mapper/vg-tmp    46G   54M   44G   1% /tmp
/dev/mapper/vg-home  5.1T  2.5T  2.4T  51% /home
/dev/md1              15T   19M   14T   1% /mnt/md1
~$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
ARRAY /dev/md/0 metadata=1.2 name=mypc:0 UUID=someweirdhash
ARRAY /dev/md1 metadata=1.2 name=mypc:1 UUID=someweirdhash
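Re-reading this, I notice /etc/mdadm/mdadm.conf presumably already contained an ARRAY line for md0 before I appended the scan output, so md0 may now be defined twice. I don't know whether a duplicate entry alone can break boot, but it would be the first thing I'd check from a console (my guess, not from the guide I followed):
~$ grep ^ARRAY /etc/mdadm/mdadm.conf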
~$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.15.0-166-generic
~$ sudo reboot
Cannot ssh into server after reboot.
DID NOT DO THIS (what are the last two zeros below?): I wasn't sure what this command does and imagined it could set the new array to be the one the machine boots from, so maybe not running it is what broke things:
~$ echo '/dev/md1 /mnt/md1 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
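Looking at fstab(5) afterwards, my reading of that line's fields:
# <device>   <mountpoint>  <type>  <options>                <dump> <pass>
# /dev/md1   /mnt/md1      ext4    defaults,nofail,discard  0      0
# The two trailing zeros disable dump(8) backups and boot-time fsck
# ordering for this entry; they have nothing to do with which array
# the machine boots from.
So if that reading is right, skipping this line should only mean /mnt/md1 doesn't get mounted at boot, not a failure to boot.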