I have a Linux server with a 4-disk mdraid array attached to an HBA (LSI SAS9300-8i). If I relocate the disks to an external JBOD enclosure attached to a different HBA (Broadcom SAS3008), will the system still recognize the array? Running Fedora Server 40.
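For reference, here is how I plan to sanity-check the move (a sketch; /dev/md0 and the post-move device names /dev/sd[a-d] are assumptions):

    # Before the move: record the array layout and make sure mdadm.conf matches it
    mdadm --detail /dev/md0
    mdadm --detail --scan        # compare against /etc/mdadm.conf

    # After the move: the md superblocks travel with the disks, so the array should
    # assemble by UUID no matter which HBA presents them
    mdadm --examine /dev/sd[a-d]
    mdadm --assemble --scan
    cat /proc/mdstat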
We are in a special case where we have 2 offline Ubuntu servers in 2 DCs, which should be kept offline when not in use. When we need to use them, we power one of them up, do whatever we want physically via KVM and then power it off again. Network connectivity will be absent at all times. We need a way to easily replicate the changes to the 2nd offline server, so that they will have the same data every time. We came up with 3 candidate solutions:
- A 3-way ZFS mirror on the 1st server. Disk 1 remains attached, disk 2 is kept in a safe, and disk 3 is attached to the 2nd server. When operations are to be made on the 1st server, we plug in disk 2 (from the safe), do the operation, unplug disk 2 from the mirror, plug it into the 2nd server and resilver. In short, the 3-way mirror will always be degraded on purpose. Alternatively, avoid plugging/unplugging disks and use ZFS send/receive, storing the snapshot stream as a file on an external USB drive (see the sketch after this list).
- mdraid (software RAID 1), doing the same as in (1) (unplug/re-plug a disk and resync).
- Clonezilla (or any other 3rd-party bare-metal imaging solution) to take an image of the 1st server and apply it on the 2nd (the HW & partition setup will be identical).
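For the send/receive variant of (1), a minimal sketch of one replication round, assuming a pool named tank, the data in a dataset tank/data, the USB drive mounted at /mnt/usb, and a common baseline snapshot @sync1 already present on both servers:

    # On server 1: take a new snapshot and dump the incremental stream to the USB drive
    zfs snapshot tank/data@sync2
    zfs send -i tank/data@sync1 tank/data@sync2 > /mnt/usb/data_sync1-sync2.zfs

    # On server 2: replay the stream; -F first rolls the dataset back to the common snapshot
    zfs receive -F tank/data < /mnt/usb/data_sync1-sync2.zfs

The very first transfer would be a full zfs send (without -i); after that, each round only carries the delta since the previous snapshot.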
Do you think that (1) would be too complex for a simple need like this? Any other opinions?
I recently found that one of our production servers (running RHEL7) doesn't have its root disk mirrored. So we thought of using one of the spare disks (which holds some junk data) to mirror the root disk using mdraid.
I wanted to do it without disturbing anything on the server, as it's in production. I was checking how to do this with mdraid but couldn't find anything helpful.
Can someone please let me know how I can mirror the existing root disk with RAID 1 using mdraid?
Thanks in advance.
-Ram
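For context, the rough direction I have pieced together so far is: build a degraded RAID 1 on the spare disk, copy the root filesystem onto it, boot from the degraded array, and only then add the original disk. A sketch of that, assuming /dev/sda is the current root disk, /dev/sdb is the spare, and XFS like the existing root; it does require at least one reboot:

    # Copy the partition layout from the root disk to the spare
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # Create a degraded RAID 1 from the spare's partition plus a 'missing' member
    # (--metadata=1.0 keeps the superblock at the end of the partition; whether that
    #  is needed depends on the bootloader setup and is an assumption here)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 missing /dev/sdb1

    # Filesystem, copy of the running root, and array config
    mkfs.xfs /dev/md0
    mount /dev/md0 /mnt
    rsync -aAXH --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/mnt/*"} / /mnt/
    mdadm --detail --scan >> /mnt/etc/mdadm.conf

    # Point /mnt/etc/fstab and the GRUB config at /dev/md0, rebuild the initramfs so
    # it includes the md driver, install GRUB on both disks, then reboot into the array.

    # After booting from the degraded array, add the old root partition to complete the mirror
    mdadm --add /dev/md0 /dev/sda1
    cat /proc/mdstat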
I have 4 Samsung 850 EVO SSDs in an mdraid 10 array inside a simple high-performance NAS running CentOS 6.8 minimal. I have found that the SSD temperatures start to increase on their own without any disk I/O. This happens to only some of the drives at a time. Once a drive reaches 40°C, the activity LED starts to flash. The drives are housed with good airflow. top and iotop show no processes accessing the RAID at all. It happens at random after boot, but one or two drives are always doing this, sometimes all of them. I have a fifth SSD as the system drive with the OS, and that one never has the problem. I cannot find what is causing this. Is this something to do with the 850 EVO's Dynamic Temp Control, and why would it increase the temperature? I'm at a loss; any help is much appreciated to get to the bottom of this. Occasionally I have had 1 or 2 of the drives drop out of the array; the logs show only the effect, not the cause.
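To gather more data, I plan to log each drive's SMART temperature next to its real I/O counters and see whether the two move together (a sketch; the member device names /dev/sd{a,b,c,d} are assumptions):

    # Log SMART temperature and raw I/O counters for each array member once a minute
    while true; do
        for d in /dev/sd{a,b,c,d}; do
            temp=$(smartctl -A "$d" | awk '/Temperature_Cel|Airflow_Temperature/ {print $10; exit}')
            echo "$(date -Is) $d temp=${temp}C $(grep " ${d##*/} " /proc/diskstats)"
        done >> /var/log/ssd-temp.log
        sleep 60
    done

If the temperatures climb while the /proc/diskstats counters stay flat, whatever is happening is internal to the drives rather than host I/O.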
2012-03-31 Debian Wheezy daily build in VirtualBox 4.1.2, 6 disk devices.
My steps to reproduce so far (a command-level sketch of these follows the list):
- Set up one partition per disk, using the entire disk, as a physical volume for RAID
- Set up a single RAID6 mdraid array out of all of those
- Use the resulting md0 as the only physical volume for the volume group
- Set up your logical volumes, filesystems and mount points as you wish
- Install your system
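In command form, the storage part of those steps is roughly equivalent to the following (done through the installer's partitioner in practice; the device names /dev/sd[a-f] and the VG/LV names are placeholders):

    # One full-disk partition of type "Linux RAID" on each of the six disks, then:
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[a-f]1

    # LVM stacked on top of the single array
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 10G -n root vg0
    lvcreate -L 20G -n usr vg0
    mkfs.ext4 /dev/vg0/root
    mkfs.ext4 /dev/vg0/usr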
Both / and /boot will be in this stack. I've chosen EXT4 as my filesystem for this setup.
I can get as far as GRUB2 rescue console, which can see the mdraid, the volume group and the LVM logical volumes (all named appropriately on all levels) on it, but I cannot ls the filesystem contents of any of those and I cannot boot from them.
As far as I can see from the documentation, the version of GRUB2 shipped there should handle all of this gracefully.
http://packages.debian.org/wheezy/grub-pc (1.99-17 at the time of writing.)
According to the generated grub.cfg, it is loading the ext2, raid, raid6rec, dosmbr (this one appears in the module list once per disk) and lvm modules. The grub.cfg also defines the list of modules to be loaded twice, and from some quick googling this seems to be normal and OK for GRUB2.
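For reference, this is roughly how I have been poking at it from the rescue prompt (the volume group and LV names below are placeholders for my own; the exact device naming GRUB uses may differ):

    grub rescue> ls
    # expect entries like (md/0) for the array and (vg0-root) for each logical volume
    grub rescue> ls (vg0-root)/
    # if this fails, GRUB sees the LV but cannot read the ext4 filesystem inside it
    grub rescue> set root=(vg0-root)
    grub rescue> set prefix=(vg0-root)/boot/grub
    grub rescue> insmod normal
    grub rescue> normal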
How can I get further, i.e. get GRUB2 to actually read the contents of the filesystems and boot the system?
What am I wrong about in my assumptions of functionality here?
EDIT (2012-04-01) My generated grub.cfg:
It seems it first makes my /usr logical volume the root, and that might be the source of the failure? A grub-mkconfig bug? Or is it supposed to access stuff from /usr before / and /boot? /boot is on / for me - no separate boot logical volume.
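If it does turn out to be a matter of what got embedded into core.img, my plan is to redo the bootloader installation from the installer's rescue shell with the needed modules preloaded (a sketch; the device names /dev/sd{a..f} are assumptions, and the module list is my guess at what LVM on RAID6 needs):

    # From the installer's rescue shell, with the installed system mounted at /target
    # and /dev, /proc and /sys bind-mounted into it:
    chroot /target /bin/bash
    for d in /dev/sd{a,b,c,d,e,f}; do
        grub-install --modules="part_msdos raid raid6rec mdraid1x lvm ext2" "$d"
    done
    update-grub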