Recently, on a shared host, the filesystem containing my home folder was remounted read-only for somewhere between 45 minutes and an hour. Technical support did not know about the outage and evaded direct questions. After a little more than three days I got this answer:
There are many possible explanations, but most of the time this is caused by a server issue at the filesystem level.
Somehow I am not pleased by this in-depth analysis, as my normal work environment runs on RAID1 (mdadm) and I have never encountered such issues.
The shared host is supposedly also running on RAID1, and I only became aware of the issue because a cron job that runs uptime every 15 minutes emailed me about it.
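(Roughly speaking, the cron entry works like the sketch below; the log path is only an illustration. When the home filesystem flips to read-only, appending to the log fails and cron mails me the error output.)

    # Runs every 15 minutes; cron mails any output/errors of the job to the user.
    # If $HOME is mounted read-only, the append fails and the shell's error
    # message is what ends up in the mail.
    */15 * * * * uptime >> "$HOME/uptime.log"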
I would really like to know what you, the more experienced folks here, think of this.
I've had this happen in the following scenario:
A VMware server connected to shared storage.
It should be rare, but it happens.
They could/should have been more forthcoming with you about the causes.
I am going to assume that you are talking about a Linux system with ext2/ext3/ext4 filesystems (or ReiserFS, if you dare).
Anyway, when a new filesystem is created on a disk, a field in its metadata (the superblock) tells the Linux kernel what to do if an error is detected in that filesystem during operation.
From what I have seen, this is usually left at a default that tells the operating system to remount the problem filesystem READ ONLY.
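If you want to see what a given filesystem is currently set to, tune2fs can print the stored value; the device name below is only an example:

    # Show the error behaviour recorded in the ext* superblock.
    # /dev/sdb3 is an example device; substitute the one backing your filesystem.
    tune2fs -l /dev/sdb3 | grep -i 'errors behavior'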
I had this happen on a number of VMs and found it most annoying. What I did was change the setting so that, if a serious filesystem error occurs, the kernel panics instead, which (provided the system is configured to reboot on panic, e.g. via the kernel.panic sysctl) causes a reboot.
Assuming ext* filesystems, you can change the setting with tune2fs even while the filesystem is mounted:
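    # Set the ext* superblock error behaviour to "panic": the kernel will
    # panic instead of remounting read-only when it detects corruption.
    tune2fs -e panic /dev/sdX#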
where sdX# is the partition holding your filesystem, for example /dev/sdb3. This also works for LVM volumes, using the appropriate /dev/ path of the logical volume containing the filesystem.
You must do this for each filesystem separately; changing one filesystem does not change any of the others.
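If you have more than a couple of filesystems, a loop like this sketch can apply the setting to every mounted ext* filesystem in one go (check the list it produces before relying on it):

    # Apply panic-on-error to every currently mounted ext2/3/4 filesystem.
    for dev in $(awk '$3 ~ /^ext[234]$/ {print $1}' /proc/mounts); do
        tune2fs -e panic "$dev"
    done

The same behaviour can also be requested per mount with the errors=panic option in /etc/fstab, if you would rather not change the superblock.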
After making this change on all my VM filesystems, I am very happy.
Enjoy