I have created a RAID 1 array consisting of two devices (on Ubuntu 18.04 LTS):
mdadm --create /dev/md6 -l 1 --raid-devices=2 /dev/sda9 /dev/sdb6
mkfs.ext4 /dev/md6
mkdir /raid
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
vim /etc/fstab # next line is what I've put into the fstab file
/dev/md6 /raid ext4 defaults 0 0
mount /raid
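For reference, a minimal sanity check before rebooting (just a sketch, not part of my original steps) would be something like:
# check that the array is assembled and in sync
cat /proc/mdstat
mdadm --detail /dev/md6
# confirm the filesystem is mounted where fstab expects it
findmnt /raid
# confirm the new ARRAY line actually landed in the config the initramfs uses
grep md6 /etc/mdadm/mdadm.conf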
After the reboot I can't connect to my server (via SSH). Only when I boot into rescue mode and comment out the fstab and mdadm.conf lines containing info about md6 am I able to connect to my server again.
mdadm --detail /dev/md6
/dev/md6:
Version : 1.2
Creation Time : Sun Jan 26 14:52:22 2020
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Jan 26 17:06:17 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : popaja:6 (local to host popaja)
UUID : 67d43386:09285115:0c33fcec:68fb2054
Events : 17
    Number   Major   Minor   RaidDevice   State
       0       8        9        0        active sync   /dev/sda9
       1       8       22        1        active sync   /dev/sdb6
This is what mdadm.conf looks like:
# This configuration was auto-generated on Sat, 24 Nov 2018 16:01:31 +0000 by mkconf
ARRAY /dev/md1 UUID=e402909e:fd60a086:a4d2adc2:26fd5302
ARRAY /dev/md2 UUID=a0e2960f:72c3523c:a4d2adc2:26fd5302
ARRAY /dev/md5 UUID=d97cc04c:2812a744:a4d2adc2:26fd5302
ARRAY /dev/md1 metadata=0.90 UUID=e402909e:fd60a086:a4d2adc2:26fd5302
ARRAY /dev/md2 metadata=0.90 UUID=a0e2960f:72c3523c:a4d2adc2:26fd5302
ARRAY /dev/md5 metadata=0.90 UUID=d97cc04c:2812a744:a4d2adc2:26fd5302
ARRAY /dev/md6 metadata=1.2 name=popaja:6 UUID=67d43386:09285115:0c33fcec:68fb2054
As you can see, I have other RAID devices and they have never caused any trouble. Am I missing something?
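A possible mitigation I have not tested yet would be marking the mount nofail in fstab, so that a failed assembly of md6 doesn't block the boot (and SSH) entirely:
# hypothetical variant of my fstab line; 'nofail' lets the boot continue if /dev/md6 is missing
/dev/md6 /raid ext4 defaults,nofail 0 0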
UPDATE:
I went to my server provider's control panel and made a fresh installation, putting /boot, /, and /home on RAID 1 arrays. I also added two more RAID arrays as /raid and /raid1. What's weird is that after this installation everything works as expected, but the trouble comes as soon as I try to create RAID arrays myself with the mdadm command.
This is how everything looks right now:
vim /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/md2 / ext4 errors=remount-ro 0 1
/dev/md1 /boot ext4 errors=remount-ro 0 1
/dev/md5 /home ext4 defaults 1 2
/dev/md6 /raid ext4 defaults 1 2
/dev/md7 /raid1 ext4 defaults 1 2
/dev/sda3 swap swap defaults 0 0
/dev/sdb3 swap swap defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
vim /etc/mdadm/mdadm.conf
# This configuration was auto-generated on Sat, 24 Nov 2018 16:01:31 +0000 by mkconf
ARRAY /dev/md1 UUID=44642845:2ace7714:a4d2adc2:26fd5302
ARRAY /dev/md2 UUID=92b6b281:c860fcce:a4d2adc2:26fd5302
ARRAY /dev/md5 UUID=67bd76dc:3d620e68:a4d2adc2:26fd5302
ARRAY /dev/md6 UUID=5e9aacb7:13436f9e:a4d2adc2:26fd5302
ARRAY /dev/md7 UUID=36a9f6dc:2b8e82d8:a4d2adc2:26fd5302
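If I try the manual route again, my rough plan (only a sketch; /dev/md8, /dev/sdX1, /dev/sdY1 and /raid2 are placeholder names) is to repeat the same steps but reference the filesystem by UUID in fstab, in case the md device gets renumbered when it is assembled outside the initramfs:
mdadm --create /dev/md8 -l 1 --raid-devices=2 /dev/sdX1 /dev/sdY1
mkfs.ext4 /dev/md8
mdadm --detail --scan | grep '/dev/md8' >> /etc/mdadm/mdadm.conf   # append only the new ARRAY line
update-initramfs -u                                                # so the initramfs knows about the array
blkid /dev/md8                                                     # filesystem UUID for fstab
# fstab entry by UUID instead of the md device name, with nofail as a safety net:
# UUID=<uuid-from-blkid> /raid2 ext4 defaults,nofail 0 2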
I've decided not to mess around with RAID for the time being, as I don't need it right now. But it's a strange case.