After deciding to use LVM2 as the volume manager on our servers, we also wanted a filesystem that can be resized online. After reading a few articles I decided to use JFS over XFS.
Today I had a power outage on our office server and discovered that one file on the JFS volume had been completely corrupted. While that can happen, the system fooled me into believing everything was all right by not indicating any filesystem problems during the boot after the power failure. All filesystems were clean after replaying the journal.
This leaves me with a bad taste. I don't want a filesystem that recovers poorly after a power outage, but even more, I don't want a filesystem that fails to tell me there might be a problem.
So I thought I'd give it a shot and ask: which filesystem do you prefer, and why? I'm looking for the following features:
- robust
- online growable
- good performance for usual workloads (normal file sizes; nothing special like millions of small files)
- available in the CentOS 5.4 distribution, but that's optional
I'd also like to know if you have used JFS and had bad experiences with it, and of course if there are success stories using JFS. And ultimately: would you prefer XFS over JFS or vice versa (as mentioned, for everyday use, not for specific workloads)?
XFS.
JFS is basically dead/unmaintained now.
Most, if not all, of the XFS developers now work at Red Hat, and kernel support for XFS is available out of the box in RHEL 5.4.
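And since you're already on LVM2, growing XFS online is a two-step job. A minimal sketch, assuming a logical volume /dev/vg0/data mounted at /srv/data (both names are placeholders):

    # Extend the logical volume by 10 GiB
    lvextend -L +10G /dev/vg0/data
    # Grow the mounted XFS filesystem to fill the new space;
    # note that XFS can only grow, never shrink
    xfs_growfs /srv/data

On CentOS 5.4 you may need to install the user-space tools (xfsprogs) separately from the kernel support.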
Replaying a journal just means that the metadata gets put back to a clean state. It makes no guarantees about the data itself. This is true of any journaled file system, at least any that doesn't also do other tricks like COW (copy on write). So there is a potential for data corruption like this any time a server gets shut down uncleanly, regardless of which file system you select. Your file system did its job and was able to get the file system back to a clean state, minimizing data loss/corruption.
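To make the metadata-versus-data distinction concrete: some journaled file systems let you opt into journaling file data as well, at a performance cost. A hedged example with ext3 (device and mount point are placeholders; the mount options themselves are standard ext3 options):

    # /etc/fstab entry: data=journal writes file data through the journal too,
    # and barrier=1 keeps journal ordering honest against the disk's write cache
    /dev/vg0/data  /srv/data  ext3  data=journal,barrier=1  0 2

Neither option makes an unclean shutdown risk-free, but it narrows the window in which file contents can be lost.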
So the lesson learned from this should be: always have your servers on a UPS that can instruct the server to shut down cleanly when its battery is low. And always have good backups.
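A minimal sketch of that setup using Network UPS Tools (NUT), assuming a USB-connected UPS; the UPS name, user, and password below are placeholders, and the matching upsd.users entry is omitted:

    # /etc/ups/ups.conf -- define the UPS
    [myups]
        driver = usbhid-ups
        port = auto

    # /etc/ups/upsmon.conf -- shut down cleanly when the battery runs low
    MONITOR myups@localhost 1 upsmon secret master
    SHUTDOWNCMD "/sbin/shutdown -h +0"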
If you're really worried about data integrity, you'll have to move to a more robust file system like ZFS on OpenSolaris or BSD. It's the only production-ready free solution that I know of at this time. BTRFS on Linux will be a decent solution in a few years once it's mature and tested, but I wouldn't recommend using it in a production environment right now. Even these more robust file systems are not a replacement for backups.
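To illustrate what that extra robustness buys you, a sketch on OpenSolaris with placeholder disk names: every block in a ZFS pool is checksummed, so you can ask the pool to verify all of its data after the fact.

    # Create a mirrored pool from two disks
    zpool create tank mirror c0t0d0 c0t1d0
    # Walk the whole pool and verify every checksum; corruption is
    # reported explicitly and repaired from the mirror where possible
    zpool scrub tank
    zpool status -v tank

That explicit report is exactly what the original poster found missing after the JFS journal replay.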
Same experience as Brad here.
JFS was really nice performance- and feature-wise, but I just lost three partitions' worth of data after a forced shutdown.
I've thus tossed JFS in the bin and will use XFS in the future (and wait for ZFS on Linux as well as BTRFS).
I wonder whether anyone among the above-mentioned "losers" tried running fsck before giving up on JFS. JFS on Linux has no built-in kernel journal-recovery code, and thus requires the appropriate user-space tool (fsck.jfs) for that.
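For anyone who hit this, a hedged sketch with the jfsutils tools (device name is a placeholder; check fsck.jfs(8) for the options your version supports):

    # The filesystem must be unmounted first
    umount /dev/vg0/data
    # Replay the journal only
    fsck.jfs --replay_journal_only /dev/vg0/data
    # Then force a full check so any remaining damage is actually reported
    fsck.jfs -f /dev/vg0/data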
The standard Linux ext3 filesystem supports online growth and meets your other requirements as well. Unless you have other special needs, that's really the right answer. Even though XFS has a long history and a good reputation, using it still puts you into a special case, which is fine, but comes with an inherent cost in increased complexity; why "pay" for something you don't need?
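For completeness, online growth with ext3 on your LVM setup looks like this (placeholder names again; online resizing needs a sufficiently recent kernel and e2fsprogs, which CentOS 5.4 has):

    # Grow the logical volume by 10 GiB
    lvextend -L +10G /dev/vg0/data
    # resize2fs grows ext3 online when pointed at a mounted filesystem
    resize2fs /dev/vg0/data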
I used JFS in a large-scale professional environment and got burned badly. We had massive corruption problems that would not show up right away; sometimes all files would end up in lost+found after nothing more than a clean reboot.
I changed over to XFS and never looked back. I've been using it for five years now on hundreds of multi-terabyte systems.