It's a relatively common problem when something goes wrong in a SAN for ext3 to detect the disk write errors and remount the filesystem read-only. That's all well and good, but once the SAN is fixed I can't figure out how to re-mount the filesystem read-write without rebooting.
Behold:
[root@localhost ~]# multipath -ll
mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
[size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:0:1 sdb 8:16 [active][ready]
\_ 2:0:0:1 sdc 8:32 [active][ready]
[root@localhost ~]# mount /dev/mapper/mpath0 /mnt/foo
[root@localhost ~]# touch /mnt/foo/blah
All good, now I yank the LUN out from under it.
[root@localhost ~]# touch /mnt/foo/blah
[root@localhost ~]# touch /mnt/foo/blah
touch: cannot touch `/mnt/foo/blah': Read-only file system
[root@localhost ~]# tail /var/log/messages
Mar 18 13:17:33 localhost multipathd: sdb: tur checker reports path is down
Mar 18 13:17:34 localhost multipathd: sdc: tur checker reports path is down
Mar 18 13:17:35 localhost kernel: Aborting journal on device dm-2.
Mar 18 13:17:35 localhost kernel: Buffer I/O error on device dm-2, logical block 1545
Mar 18 13:17:35 localhost kernel: lost page write due to I/O error on dm-2
Mar 18 13:17:36 localhost kernel: ext3_abort called.
Mar 18 13:17:36 localhost kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
Mar 18 13:17:36 localhost kernel: Remounting filesystem read-only
It only thinks it's read-only; in reality it's not even there.
[root@localhost ~]# multipath -ll
sdb: checker msg is "tur checker reports path is down"
sdc: checker msg is "tur checker reports path is down"
mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
[size=1.1T][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:1 sdb 8:16 [failed][faulty]
\_ 2:0:0:1 sdc 8:32 [failed][faulty]
[root@localhost ~]# ll /mnt/foo/
ls: reading directory /mnt/foo/: Input/output error
total 20
-rw-r--r-- 1 root root 0 Mar 18 13:11 bar
How it still remembers that 'bar' file being there is a mystery, but not important right now. Now I re-present the LUN:
[root@localhost ~]# tail /var/log/messages
Mar 18 13:23:58 localhost multipathd: sdb: tur checker reports path is up
Mar 18 13:23:58 localhost multipathd: 8:16: reinstated
Mar 18 13:23:58 localhost multipathd: mpath0: queue_if_no_path enabled
Mar 18 13:23:58 localhost multipathd: mpath0: Recovered to normal mode
Mar 18 13:23:58 localhost multipathd: mpath0: remaining active paths: 1
Mar 18 13:23:58 localhost multipathd: dm-2: add map (uevent)
Mar 18 13:23:58 localhost multipathd: dm-2: devmap already registered
Mar 18 13:23:59 localhost multipathd: sdc: tur checker reports path is up
Mar 18 13:23:59 localhost multipathd: 8:32: reinstated
Mar 18 13:23:59 localhost multipathd: mpath0: remaining active paths: 2
Mar 18 13:23:59 localhost multipathd: dm-2: add map (uevent)
Mar 18 13:23:59 localhost multipathd: dm-2: devmap already registered
[root@localhost ~]# multipath -ll
mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
[size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][enabled]
\_ 1:0:0:1 sdb 8:16 [active][ready]
\_ 2:0:0:1 sdc 8:32 [active][ready]
Great, right? It says [rw] right there. Not so fast:
[root@localhost ~]# touch /mnt/foo/blah
touch: cannot touch `/mnt/foo/blah': Read-only file system
OK, it doesn't do it automatically, so I'll just give it a little push:
[root@localhost ~]# mount -o remount /mnt/foo
mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only
The hell you are:
[root@localhost ~]# mount -o remount,rw /mnt/foo
mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only
Noooooooooo.
I have tried all sorts of different mount/tune2fs/dmsetup commands and I cannot figure out how to get it to un-flag the block device as write-protected. Rebooting will fix it, but I'd much rather do it on-line. An hour of googling has gotten me nowhere either. Save me ServerFault.
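For reference, these are the kinds of checks that show where the kernel has the write-protect flag set, using the same devices as in the transcripts above (illustrative only, not a fix in itself):
blockdev --getro /dev/mapper/mpath0       # 1 = kernel flags the mapper device read-only
dmsetup info mpath0                       # shows "State: ACTIVE (READ-ONLY)" if the dm table is read-only
cat /sys/block/sdb/ro /sys/block/sdc/ro   # per-path read-only flags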
I just recently ran into this problem and solved it by rebooting, but after further investigation it appears that issuing the following command might fix it.
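From memory it was something along these lines, forcing the SCSI paths back to the running state and reloading the multipath map (sdb and sdc here are just the path devices from the question; substitute your own):
echo running > /sys/block/sdb/device/state   # bring each SCSI path back to the running state
echo running > /sys/block/sdc/device/state
multipath -r                                 # force a reload of the multipath maps
mount -o remount,rw /mnt/foo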
I think you might want to look at section 25.14.4, "Changing the Read/Write State of an Online Logical Unit", in this document; however, I recommend rebooting.
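If it helps, the sequence that section describes boils down to roughly this (my paraphrase from memory, not a quote from the guide; adjust the device names to yours):
blockdev --getro /dev/mapper/mpath0       # 1 means the kernel still flags the device read-only
echo 1 > /sys/block/sdb/device/rescan     # rescan each underlying path once the array is writable again
echo 1 > /sys/block/sdc/device/rescan
multipath -r                              # rebuild the multipath map with the new state
mount -o remount,rw /mnt/foo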
Try using:
I am a fan of preventing the issue in the first place. Most enterprise UNIX boxes will retry filesystem operations more or less forever, and as an administrator you need to do some homework before tuning your MPIO configuration. If your application should simply wait until the device returns to a usable state, then here is a solution: in your /etc/multipath.conf, make sure that the device type you care about has "no_path_retry" set to "queue". Setting this causes failed I/Os to queue until there is a valid path again.

We have done this for our EMC Symmetrix/DMX boxes to work around hiccups during certain drive/controller/SRDF path failures and recoveries. When you want to fail the device manually during an outage it gets more complicated, as you will need tools like dmsetup to flush or fail the queued I/Os, or to temporarily change the multipath.conf file and rescan devices, etc.
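As a concrete example, something along these lines in /etc/multipath.conf (the vendor/product strings are illustrative; match them against your own multipath -ll output):
devices {
    device {
        vendor          "XIOTECH"      # match the array shown by multipath -ll
        product         "ISE1400"
        no_path_retry   queue          # queue failed I/O until a path comes back
    }
}
Reload the configuration afterwards (run multipathd -k and type reconfigure at the prompt). If you later need to fail the queued I/O by hand during an outage, the corresponding dmsetup message is along the lines of: dmsetup message mpath0 0 "fail_if_no_path".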
This approach has saved our bacon countless times and is our standard for hundreds of boxes on a multicabinet/multivendor SAN with replication for disaster recovery.
Just thought I might share with you all. Take care.
I had the same issue, which I resolved using hdparm with the -r option on the underlying devices of the logical multipath device.
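In case it helps, the flag can be checked and cleared per path like this (sdb/sdc as in the question; untested on your exact setup):
hdparm -r /dev/sdb      # show the current read-only flag, e.g. "readonly = 1 (on)"
hdparm -r0 /dev/sdb     # clear the read-only flag on each path
hdparm -r0 /dev/sdc
mount -o remount,rw /mnt/foo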
Do you think it's related to the section in this document titled "Why does the ext3 filesystem on my Storage Area Network (SAN) repeatedly become read-only?"
It's quite an old article, and is talking about fibre channel, but it may be related to your problem.
File system corruption? Try checking the filesystem state first. If it comes back "clean with errors", then you need to scan and clean.
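For example, something along these lines (assuming the superblock is still readable; e2fsck has to be run with the filesystem unmounted):
dumpe2fs -h /dev/mapper/mpath0 | grep -i state   # look for "clean with errors"
umount /mnt/foo
e2fsck -f /dev/mapper/mpath0                     # force a full check and repair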
Linux simply doesn't cope well with medium-to-large-scale SANs out of the box. You MUST give it some care and fine-tune the I/O timeouts and the multipath timeout handling; they're all pretty much at desktop-ready defaults.
(Remember "rejecting I/O to dead device"?)
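A few of the knobs I mean, as an illustration (the sysfs paths and values are examples; check what your HBA driver actually exposes):
cat /sys/block/sdb/device/timeout                                   # per-command SCSI timeout, in seconds
echo 60 > /sys/block/sdb/device/timeout
cat /sys/class/fc_remote_ports/rport-1:0-0/dev_loss_tmo             # how long an FC remote port may be missing before its devices are removed
echo 5 > /sys/class/fc_remote_ports/rport-1:0-0/fast_io_fail_tmo    # fail I/O quickly so multipath can switch paths
Plus the multipath.conf settings (no_path_retry and friends) mentioned in the other answer.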