I recently resized a volume group and a logical volume to fit into a RAID-0 configuration on a CentOS 6.10 system. Everything seems to be working fine.
However, the disk utility shows two RAID arrays: one with a status of "clean" and "running", the other with a status of "inactive" and "not running, partially assembled".
The clean one is named /dev/md125 and holds all the good stuff, e.g. the root volume and the LVM physical volume.
The second one is named /dev/md126, and the disk utility reveals little detail other than what I've previously described.
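For what it's worth, the same two devices should also be visible from a shell; the commands below are the standard queries I'd use to cross-check what the disk utility shows (I've left their output out of this post):

# list the md devices the kernel currently has assembled
cat /proc/mdstat

# summarize every detected array/container in mdadm.conf format
mdadm --detail --scan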
Here's the output of the "mdadm --detail" command for both devices:
[root@Centos6svr guest]# mdadm --detail /dev/md125
/dev/md125:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid0
     Array Size : 1937872896 (1848.10 GiB 1984.38 GB)
   Raid Devices : 2
  Total Devices : 2

          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 128K

           UUID : 2eac1934:ec8965c9:96e64de0:00020788
    Number   Major   Minor   RaidDevice State
       1       8        0        0      active sync   /dev/sda
       0       8       16        1      active sync   /dev/sdb
[root@Centos6svr guest]# mdadm --detail /dev/md126
/dev/md126:
        Version : imsm
     Raid Level : container
  Total Devices : 2
Working Devices : 2

           UUID : ec0c211b:e1d9358d:38d5ecf1:2a09f082
  Member Arrays : /dev/md/Volume1_0

    Number   Major   Minor   RaidDevice

       0       8        0        -        /dev/sda
       1       8       16        -        /dev/sdb
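If the on-disk metadata is relevant, I can also post what the member disks themselves report; I'd collect that with something along these lines:

# show the IMSM superblock/metadata recorded on each member disk
mdadm --examine /dev/sda
mdadm --examine /dev/sdb

# or scan everything and report which superblocks are found where
mdadm --examine --scan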
I'm not sure how this got there, or whether it existed on the previous image. I did have to delete and re-create the RAID volume initially, using the Ctrl-I option from the boot-up screen.
It seems like it's harmless, but all the same I'd like to get rid of it. Any ideas how?
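The only idea I've come up with is the usual stop-and-don't-reassemble routine sketched below, but I haven't run it: md126 reports itself as an imsm container and md125 says it's a member of /dev/md/imsm0, so I can't tell whether stopping or removing md126 would drag the active array down with it. (This assumes the config file lives at /etc/mdadm.conf, as it normally does on CentOS.)

# NOT run yet -- just what I was considering:

# stop the seemingly inactive device
mdadm --stop /dev/md126

# regenerate mdadm.conf from whatever is still assembled, then rebuild
# the initramfs so the stale entry isn't reassembled at the next boot
mdadm --detail --scan > /etc/mdadm.conf
dracut -f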