There are many file systems for a Linux server administrator to choose from, and it is not always easy to figure out the appropriate layout and the optimal file system for your needs.
Can you suggest some guidelines for choosing the "optimal" file system for a Linux server? I know this is a general question, so let me be more specific:
1- I have a caching web proxy server. The proxy mostly stores many small objects, plus some medium and large ones. I expect the web server case to be similar.
2- I have an email server. It needs to save the users' messages.
3- I have a database server.
All three of these servers are very I/O-intensive (both reads and writes).
For other types of servers, I think the choice is less critical. For example, a Linux-based firewall does not hit the disk the way a proxy or web server does; most of its processing happens in memory.
I think the best answer will suggest an appropriate file system for each server (preferably with justifications). Let us focus on performance as the factor to base our suggestions on.
Most of my day-to-day work involves spec'ing and managing multi-TB storage on setups where I don't want a lot of management overhead, and where none of the higher-level management knows a word about filesystems. For those reasons, I need dependable, efficient, and easy systems. I did lots of pull-the-plug disaster simulations and copied tens of millions of files around on (simulated) flaky hardware. ext3 worked admirably; XFS not quite as well, but still well above requirements. JFS and ReiserFS failed miserably.
I've had great success with XFS, with both small and large files, whether many or few. The various benchmarks around usually point to it as the combined winner on read/write speed. EXT4 is also quite fast, and perhaps easier to find documentation for. Just make sure you run a fresh kernel.
One good example of where XFS shines is MySQL databases. I tried to import an 8 GB .sql file into MySQL, which used EXT3 for storage. It nearly timed out after filling up its buffers (setting the buffer size to 16M or larger didn't help), and the system showed excessive I/O wait time. I switched to XFS, and every problem simply vanished: everything ran at full speed. XFS seems to take almost zero penalty from fragmentation (which is a fairly common problem with large databases).
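If you want to try the same switch, here is a minimal sketch of moving a MySQL data directory onto a freshly made XFS volume. The spare device /dev/sdb1, the default datadir /var/lib/mysql, and the mysql service name are all assumptions; adapt them to your setup:

```
mkfs.xfs /dev/sdb1                            # create the XFS filesystem on a spare device (assumed)

service mysql stop                            # stop MySQL before touching its datadir
mount /dev/sdb1 /mnt
cp -a /var/lib/mysql/. /mnt/                  # -a preserves ownership, permissions, timestamps
umount /mnt

mount -o noatime /dev/sdb1 /var/lib/mysql     # noatime saves a metadata write per read
echo '/dev/sdb1 /var/lib/mysql xfs noatime 0 0' >> /etc/fstab   # make it permanent

service mysql start
```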
If performance is your only criterion, then XFS is without doubt a good option. Just make sure you have a UPS, good backups, and a disaster plan ready whenever you use a file system that does lazy updates/journaling (which is just about every modern filesystem on the planet).
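One concrete hedge against that lazy-journaling risk is to keep write barriers on (XFS enables them by default; ext3 historically did not) or to disable the drive's volatile write cache outright. The device and mount point below are assumptions:

```
# ext3/ext4: force journal writes to reach the platter in order
mount -o remount,barrier=1 /var/lib/mysql

# Alternatively, with no battery-backed controller cache, turn off the
# disk's volatile write cache entirely; slower, but safe even without barriers:
hdparm -W0 /dev/sdb
```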
You can't go wrong with Ext3 for almost any medium workload. Plus, I've never lost data with it. Its journal recovery is solid.
Update: Ext4 is fine, if you have solid, 'idiot-proof' power protection.
I've lost data to the default 'delayed write' behavior of EXT4 on a local home+office server that got rebooted by a cable snag. Just my experience with ext4. Once bitten, twice shy.
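For what it's worth, ext4 lets you trade some of that speed back for a smaller loss window. A quick sketch, assuming the filesystem in question is mounted at /srv (an assumption):

```
# Disable delayed allocation and shorten the journal commit interval from
# the default 5 seconds to 1, narrowing what a sudden power cut can eat:
mount -o remount,nodelalloc,commit=1 /srv
```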
Which distro are you using? With RHEL 5, ext3 is the only supported option (well, with the AP you get GFS, and in some cases you apparently get support for XFS too); for newer distros, ext4 or XFS are IMHO better options.
Generally, ext4 and XFS are pretty close. If you want to make a distinction:

- XFS is slower on metadata-intensive workloads.
- XFS supports filesystems > 16 TB.
- XFS has project quotas in addition to user and group quotas (see the sketch below).
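Project quotas are handy for capping a whole directory tree (say, one proxy cache or one customer) regardless of who owns the files. Here's a rough sketch; the project name webcache, the ID 42, the device /dev/sdb1, and the mount point /srv are all assumptions:

```
mount -o prjquota /dev/sdb1 /srv              # project quotas must be enabled at mount time

echo '42:/srv/webcache' >> /etc/projects      # map the project ID to a directory tree
echo 'webcache:42' >> /etc/projid             # give the ID a human-readable name

xfs_quota -x -c 'project -s webcache' /srv            # tag the tree with the project ID
xfs_quota -x -c 'limit -p bhard=50g webcache' /srv    # hard-cap the tree at 50 GB
xfs_quota -x -c 'report -p' /srv                      # report per-project usage
```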