I'm getting a rather odd result from df on one of my disks managed by mdadm. It's set up as four 2TB disks in RAID 10.
# df
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        3.6T   40G  3.4T   2% /          <------ this one
tmpfs           7.8G     0  7.8G   0% /dev/shm
/dev/md0        4.9G  189M  4.4G   5% /boot
The actual disk usage should be roughly 2TB, yet it's reporting only 40GB used.
I see errors like these in /var/log/messages as well as /var/log/dmesg (the same ones in both):
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24449: 0 blocks in bitmap, 32768 in gd
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24450: 3 blocks in bitmap, 32771 in gd
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24451: 6 blocks in bitmap, 32766 in gd
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24452: 50 blocks in bitmap, 32742 in gd
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24453: 43 blocks in bitmap, 32768 in gd
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24454: 30 blocks in bitmap, 32768 in gd
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24455: 77 blocks in bitmap, 32768 in gd
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24456: 27 blocks in bitmap, 32744 in gd
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24457: 68 blocks in bitmap, 32265 in gd
Feb 13 05:46:00 las kernel: EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 24458: 32 blocks in bitmap, 1804 in gd
Feb 13 05:46:00 las kernel: JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
Feb 13 05:46:00 las kernel: JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
Feb 13 05:46:00 las kernel: JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
But I'm unsure what to make of them.
mdadm --detail shows:
# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.1
  Creation Time : Thu Oct 18 22:20:38 2012
     Raid Level : raid10
     Array Size : 3896783872 (3716.26 GiB 3990.31 GB)
  Used Dev Size : 1948391936 (1858.13 GiB 1995.15 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Feb 13 05:58:12 2013
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : i2274.userdns.net:2
           UUID : f64e69c7:8342cdd1:0a275bbf:3ba052f4
         Events : 275873

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
Lastly, I tried to force an fsck with echo y > /forcefsck and a reboot (the file's contents are ignored; touch /forcefsck is the conventional form, since init only checks that the file exists), but nothing seems to have changed. I think the filesystem is corrupted, but I'm uncertain how to proceed.
Just to bring closure to this issue: I had it run a complete fsck on boot, in non-interactive mode, as @kormoc suggested. That resolved the issue.
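For anyone hitting the same errors: the non-interactive complete check comes down to e2fsck's -f flag (force a full check even if the filesystem is marked clean) and -y flag (assume "yes" to every repair prompt). A minimal sketch exercising those flags against a scratch image file, so no real disk is touched (the image path and size here are arbitrary):

```shell
# Create a small scratch ext4 image; -F lets mkfs write to a
# regular file instead of a block device, -q keeps it quiet.
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=16 status=none
mkfs.ext4 -q -F /tmp/scratch.img

# -f: check the whole filesystem even though it is marked clean.
# -y: answer "yes" to every repair question (non-interactive).
e2fsck -f -y /tmp/scratch.img
```

On the real array the same flags apply, but the root filesystem must not be mounted read-write while it is checked, which is why the check has to happen early at boot (or from rescue media) rather than on the live system.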