The book "HBase: The definitive guide" states that
Installing different filesystems on a single server is not recommended. This can have adverse effects on performance as the kernel may have to split buffer caches to support the different filesystems. It has been reported that, for certain operating systems, this can have a devastating performance impact.
Does this really apply to Linux? I have never seen the buffer cache grow beyond about 300 MB, and most modern servers have gigabytes of RAM, so splitting the buffer cache between different filesystems should not be an issue. Am I missing something?
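For reference, here is a minimal sketch of how one can check those figures on Linux by reading /proc/meminfo (Buffers and Cached are standard fields there; the script itself is just illustrative):

    # Report the kernel's buffer/page cache sizes from /proc/meminfo.
    def meminfo():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.split()[0])  # values are reported in kB
        return info

    m = meminfo()
    print(f"Buffers: {m['Buffers'] / 1024:.1f} MB (block-device buffer cache)")
    print(f"Cached:  {m['Cached'] / 1024:.1f} MB (page cache, across all filesystems)")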
Splitting the buffer cache is detrimental, but the effect is minimal; I'd guess it's so small as to be practically impossible to measure.
You also have to remember that cached data cannot be shared between different mount points.
While different file systems use different allocation buffers, it's not as if the memory is allocated just to sit there and look pretty. Looking at slabtop output for a system running three different file systems (XFS, ext4, btrfs), any really sizeable cache had a utilisation level of over 90%. As such, if you're using multiple file systems in parallel, the cost is roughly equal to losing 5% of system memory, and less if the computer is not a dedicated file server.
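For what it's worth, here is a minimal sketch of how one could reproduce those utilisation figures without slabtop, by reading /proc/slabinfo directly (this normally requires root; the filesystem-related cache name prefixes matched below are the conventional ones, but exact names vary by kernel version):

    # Per-slab utilisation (active objects / total objects), the figure
    # slabtop reports as USE%. /proc/slabinfo is normally root-readable only.
    def slab_usage(prefixes=("xfs", "ext4", "btrfs", "dentry", "inode")):
        rows = []
        with open("/proc/slabinfo") as f:
            for line in f:
                if line.startswith(("slabinfo", "# name")):
                    continue  # skip the two header lines
                fields = line.split()
                name = fields[0]
                active, total, objsize = int(fields[1]), int(fields[2]), int(fields[3])
                if total and name.startswith(prefixes):
                    rows.append((name, total * objsize, 100.0 * active / total))
        for name, alloc, pct in sorted(rows, key=lambda r: -r[1]):
            print(f"{name:30s} {alloc / 2**20:8.1f} MB allocated  {pct:5.1f}% in use")

    slab_usage()

The allocated-bytes column (total objects times object size) is the memory each cache actually pins, which is the quantity the rough 5% estimate above refers to.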
I don't think there's a negative impact. I often mix ext3/ext4 with XFS (and even ZFS) on the same server, and I would not describe the performance as anything less than expected, given the hardware I'm running on.
Are you concerned about a specific scenario? What filesystems would be in play? What distribution are you on?