I have a 4x 2TB RAID 5 mdadm array which I had begun to grow onto a 5th disk:
~# mdadm --add /dev/md1 /dev/sdb
~# mdadm --grow /dev/md1 --raid-devices=5
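(In hindsight I probably should have started the reshape with a backup file so an interrupted grow could be restarted; if I understand mdadm correctly that would look something like this, with the path only an example:)
~# mdadm --grow /dev/md1 --raid-devices=5 --backup-file=/root/md1-grow.backup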
Midway through the grow I received this in an email:
A Fail event had been detected on md device /dev/md/fubox:1.
It could be related to component device /dev/sdf.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sdb[5] sdd[0] sdc[1] sdf[2](F) sde[4]
7814051840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/3] [UU_U_]
[=======>.............] recovery = 39.8% (779268708/1953512960) finish=212.4min speed=92101K/sec
~# mdadm --detail /dev/md1
Number Major Minor RaidDevice State
0 8 48 0 active sync /dev/sdd
1 8 32 1 active sync /dev/sdc
2 0 0 2 removed
4 8 64 3 active sync /dev/sde
4 0 0 4 removed
2 8 80 - faulty spare /dev/sdf
5 8 16 - spare /dev/sdb
I rebooted and tried to assemble:
~# mdadm --assemble /dev/md1 /dev/sdd /dev/sdc /dev/sdf /dev/sde /dev/sdb
mdadm: /dev/md1 assembled from 3 drives and 1 spare - not enough to start the array.
~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdd[0](S) sdb[5](S) sde[4](S) sdf[2](S) sdc[1](S)
9767567800 blocks super 1.2
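To be able to retry the assemble, my understanding is that the inactive array has to be stopped first:
~# mdadm --stop /dev/md1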
Output from --examine on each drive:
~# mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 43ea18e9:0e1e3eac:d45b539d:5ec452b8
Name : fubox:1 (local to host fubox)
Creation Time : Sat May 7 18:42:12 2011
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
Array Size : 7814051840 (7452.06 GiB 8001.59 GB)
Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 6ec847bf:49a3522a:6c4a3a61:c1743bac
Update Time : Wed Jan 30 14:15:08 2013
Checksum : de59745 - correct
Events : 57664
Layout : left-symmetric
Chunk Size : 512K
Device Role : spare
Array State : AA.A. ('A' == active, '.' == missing)
~# mdadm --examine /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 43ea18e9:0e1e3eac:d45b539d:5ec452b8
Name : fubox:1 (local to host fubox)
Creation Time : Sat May 7 18:42:12 2011
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
Array Size : 7814051840 (7452.06 GiB 8001.59 GB)
Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 62d8fc8f:20feb8ee:357d32b5:e7bf9223
Update Time : Wed Jan 30 14:15:08 2013
Checksum : 6e507ffb - correct
Events : 57664
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AA.A. ('A' == active, '.' == missing)
~# mdadm --examine /dev/sdf
/dev/sdf:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 43ea18e9:0e1e3eac:d45b539d:5ec452b8
Name : fubox:1 (local to host fubox)
Creation Time : Sat May 7 18:42:12 2011
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
Array Size : 7814051840 (7452.06 GiB 8001.59 GB)
Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 9a10cb04:01283998:f460f9ee:e350d313
Update Time : Wed Jan 30 13:14:20 2013
Checksum : b6aa487c - correct
Events : 57642
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAAA ('A' == active, '.' == missing)
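For a quick side-by-side of the event counts on all members (same device names as above):
~# mdadm --examine /dev/sd[bcdef] | egrep 'dev|Events'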
The event count is lower on sdf, so I tried assembling with --force:
~# mdadm --assemble /dev/md1 /dev/sdd /dev/sdc /dev/sdf /dev/sde /dev/sdb --force
mdadm: forcing event count in /dev/sdf(2) from 57642 upto 57664
mdadm: clearing FAULTY flag for device 2 in /dev/md1 for /dev/sdf
mdadm: Marking array /dev/md1 as 'clean'
mdadm: /dev/md1 has been started with 4 drives (out of 5) and 1 spare.
~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sdd[0] sdb[5] sde[4] sdf[2] sdc[1]
7814051840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
[>....................] recovery = 0.2% (4355440/1953512960) finish=361.2min speed=89929K/sec
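(Progress can be watched with something along the lines of:)
~# watch -n 30 cat /proc/mdstat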
However, at 39.8% /dev/sdf dropped out again. I'm now running smartctl -H /dev/sdf.
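Beyond the quick health check, the plan is to run a long self-test and look at the reallocated / pending sector counts, roughly:
~# smartctl -t long /dev/sdf
~# smartctl -l selftest /dev/sdf
~# smartctl -A /dev/sdf | egrep 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'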
The four original drives are Samsung HD204UI 2TB; the new drive is a WD 2TB Red.
HD204UI drives manufactured December 2010 or later already include the firmware patch for the known data-corruption bug (mine are stamped 2011 or later).
Any tips on how to proceed?