I have a newly built machine with a fresh Gentoo Linux install and a software RAID 5 array from another machine (4 IDE disks connected to off-board PCI controllers). I've successfully moved the controllers to the new machine; the drives are detected by the kernel; and I've used mdadm --examine and verified that the single RAID partition is detected, clean, and even in the "right" order (hde1 == drive 0, hdg1 == drive 1, etc).
What I don't have access to is the original configuration files from the older machine. How should I proceed to reactivate this array without losing the data?
You really kinda need the original mdadm.conf file, but since you don't have it, you'll have to recreate it. First, before doing anything else, read up on mdadm via its man page. Why chance losing your data to a situation or command you don't have a grasp of?
That being said, following this advice is at your own risk: you can easily lose all your data with the wrong commands. Before you run anything, double-check its ramifications. I cannot be held responsible for data loss or other issues resulting from any actions you take, so double-check everything.
You can try this:
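For example, something along these lines (the md device number and the last two member partitions are assumptions for illustration; use whichever partitions your mdadm --examine run identified as members):

# assemble the array from its member partitions, printing what mdadm finds along the way
mdadm --assemble --verbose /dev/md0 /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1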
This should give you some info to start working with, along with the array's ID. It will also create a new array device, /dev/md{number}; from there you should be able to find any mounts. Do not use the --auto option: the man page verbiage implies that under certain circumstances it may overwrite your array settings on the drives. That is probably not the case, and the page probably just needs to be rewritten for clarity, but why chance it?

If the array assembles correctly and everything is "normal", be sure to write your mdadm.conf and store it in /etc so you'll have it at boot time. Include the new ID from the array in the file to help it along.

Just wanted to add my full answer, for Debian at least:
sudo apt-get install mdadm
Scan for the old raid disks via -->
sudo mdadm --assemble --scan
At this point, I like to check blkid and mount the RAID manually to confirm:

blkid
mount /dev/md0 /mnt
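Before or alongside the mount, it can also help to confirm the array itself came up cleanly; a quick check, assuming the array appeared as /dev/md0:

# kernel's view of active arrays and their state
cat /proc/mdstat
# level, member devices, and the array UUID
mdadm --detail /dev/md0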
Once the data looks right, append the array definition to the config via -->

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Update initramfs via -->
update-initramfs -u
Troubleshooting:
Make sure the output of mdadm --detail --scan matches your /etc/mdadm/mdadm.conf.
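The lines being compared look roughly like this (the UUID is a placeholder; the metadata= and name= fields vary with the superblock version):

ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd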
Example FSTAB
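For instance, an entry along these lines (the mount point, filesystem type, and options are assumptions for illustration; you can also use the UUID= value reported by blkid instead of the device name):

/dev/md0   /mnt/raid   ext4   defaults,nofail   0   2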
Related reading:
https://unix.stackexchange.com/questions/23879/using-mdadm-examine-to-write-mdadm-conf/52935#52935
https://askubuntu.com/questions/729370/can-i-transfer-my-mdadm-software-raid-to-a-new-system-in-case-of-hardware-failur
From the mdadm man page: "Scan all partitions and devices listed in /proc/partitions and assemble /dev/md0 out of all such devices with a RAID superblock with a minor number of 0."
If that worked, you can append the output of mdadm --detail --scan to /etc/mdadm/mdadm.conf so the array is picked up on boot.
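Putting those two steps together, a sketch using the long option names (the /dev/md0 device and minor number 0 come from the quoted description and only apply to arrays with old 0.90 superblocks, which store a preferred minor):

# assemble /dev/md0 from any device in /proc/partitions whose RAID superblock records minor 0
mdadm --assemble --config=partitions --super-minor=0 /dev/md0
# once it comes up cleanly, persist the definition so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf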
An issue with a 4-disk data RAID 0 array (separate from the OS disk) when updating the OS from CentOS 6.2 to CentOS 8.2 brought me here.
I was able to use Avery's accepted answer above (https://serverfault.com/a/32721/551746), but ran into problems due to the RAID 0 layout confusion introduced in kernel 3.14.
Following this post (https://www.reddit.com/r/linuxquestions/comments/debx7w/mdadm_raid0_default_layout/), I had to change the default layout (/sys/module/raid0/parameters/default_layout) for the new kernel to use the old RAID 0 layout.
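Concretely, that runtime change is just a write to the module parameter (the value 2 here is one of the "2 (or 1 or 0)" choices mentioned below; use whichever layout your array was actually created with), for example:

# show the current setting, then change it as root; it takes effect for arrays assembled afterwards
cat /sys/module/raid0/parameters/default_layout
echo 2 > /sys/module/raid0/parameters/default_layout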
If that works, add the kernel parameter so the raid0 default layout is still 2 (or 1 or 0) after a reboot, by editing /etc/default/grub and setting
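something along these lines (GRUB_CMDLINE_LINUX is the usual variable on CentOS/RHEL; keep whatever options are already in it, shown here as a placeholder):

GRUB_CMDLINE_LINUX="<existing options> raid0.default_layout=2"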
Rebuild grub.cfg, add the mount to /etc/fstab, and reboot!
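On CentOS 8 those last steps typically look like this (the grub.cfg path assumes BIOS boot; UEFI systems use the copy under /boot/efi instead):

# regenerate the GRUB configuration so the new kernel parameter is included
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot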