I'm thinking of using ZFS for my home-built NAS array. I would have 4 HDDs in raidz on an Ubuntu Server 10.04 machine.
I'd like to use the snapshot capability and dedup when storing data. I'm not too concerned about speed, since the machine is accessed over an 802.11n wireless network and that is probably going to be the bottleneck anyway. Roughly the setup I have in mind:
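    # Sketch only; device names are placeholders, and dedup assumes
    # the installed zfs-fuse's pool version actually supports it
    sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
    sudo zfs set dedup=on tank
    sudo zfs snapshot tank@first-snapshot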
So does anyone have any practical experience with zfs-fuse 0.6.9 on such (or a similar) configuration?
I have two 500GB drives in a zfs-fuse mirror setup on my home NAS (Debian Lenny). It has been running for almost 6 months now, and I have not had any problems. More details here on my blog.
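For reference, a two-disk mirror like mine boils down to something like this (device names below are examples, not my actual layout):

    # Assuming the zfs-fuse daemon is installed and running
    sudo zpool create tank mirror /dev/sdb /dev/sdc
    zpool status tank    # both devices should show ONLINE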
There is now a native Linux port of ZFS. I only learned of this recently, and as such have not had a chance to test it. It is under active development, though, which is a good sign. It's probably worth trying, as long as you're not scared off by having to compile the kernel module and tools yourself.
If you can get it working, it will, without a doubt, perform much better than zfs-fuse does.
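From the project's instructions (I haven't verified this myself), the build is the usual two-stage autotools process: the SPL layer first, then ZFS itself. Version numbers below are placeholders:

    # Build the SPL (Solaris Porting Layer) first, then ZFS;
    # x.y.z stands in for whatever release you download
    tar xzf spl-x.y.z.tar.gz && cd spl-x.y.z
    ./configure && make && sudo make install
    cd .. && tar xzf zfs-x.y.z.tar.gz && cd zfs-x.y.z
    ./configure && make && sudo make install
    sudo modprobe zfs    # load the freshly built module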
I know this thread is ancient, but things have changed quite a bit since then. (E.g. the state of ZFS-FUSE and in-kernel options, the arguable disappearance of "Open" Solaris, etc.)
First of all, the kernel port of ZFS won't necessarily perform much better than ZFS-FUSE "without a doubt". That reply echoes the common misconception that FUSE filesystems always perform worse than in-kernel ones. (In case you don't already know, in short: in theory kernel filesystems perform better, all else being equal, but many other factors affect performance with a bigger impact than kernel vs. user space.) That said, benchmarks do show that ZFS-FUSE is in some cases significantly slower than native ZFS (or BTRFS). For my uses, though, it is fine.
Ubuntu now has an "ubuntu-zfs" package through their PPA repository system, which is just a nice packaging and automatic module build of the native zfs-on-linux project. It runs in kernel space and currently supports a higher zpool version than zfs-fuse.
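If I remember the PPA name correctly (double-check it on Launchpad before trusting me), installation is just:

    # Add the zfs-native PPA and install; the kernel module is
    # built automatically during the install
    sudo add-apt-repository ppa:zfs-native/stable
    sudo apt-get update
    sudo apt-get install ubuntu-zfs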
I used to run OpenSolaris on a big redundant 20 TB server, and now run Oracle Solaris 11 on it. Solaris has some significant problems and challenges (especially if you are comfortable with configuring and administering Linux rather than old-school UNIX), and they have drastically changed many of the hardware-management and other configuration interfaces between OS versions and even updates, making it an often highly frustrating moving target (even after finally mastering one version prior to upgrading to the next). But with the right (compatible) hardware and a lot of patience for changes, learning, and tweaking, it can be an amazing choice in terms of the file system.
One more word of advice: don't use the built-in CIFS support; use Samba. The built-in support is broken and may never be ready for prime time. The last time I checked, there were plenty of enterprise installs using Samba, but not a single one using the built-in CIFS server, due to permissions-management nightmares.
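As a minimal sketch of the Samba side (share name, path, and user below are made up), it's just a stanza in /etc/samba/smb.conf pointing at the mounted dataset:

    [tank]
        path = /tank/share
        read only = no
        valid users = yourname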
I also use ZFS-FUSE on Ubuntu on a daily basis (on a personal workstation), and have found it to be rock-solid and an awesome solution. The only problems I can think of with ZFS-FUSE specifically are:
You can't disable the ZIL (write cache), at least not without setting a flag in the source code and compiling it yourself. BTW, disabling the ZIL, contrary to a common misconception, will not cause you to lose your pool on a crash; you only lose whatever was being written at the time, which is no different from most filesystems. It may not be ideal for many mission-critical server scenarios (in which case you should probably be running native Oracle Solaris anyway), but it is usually a very worthwhile tradeoff for most workstation/personal use cases. For a small-scale setup the ZIL can be a huge write-performance problem, because by default it is spread across the pool itself, which can be quite slow, especially with a parity stripe setup (RAIDZx). On Oracle Solaris disabling it is easy; IIRC it is done via the "sync" property (see the sketch after this list). (I don't know if it can be disabled as easily on the native Linux kernel version.)
Also, with ZFS-FUSE the zpool version isn't high enough to support the better pool-recovery options of more recent versions, so if you do decide to offload the write cache onto, say, one or more SSDs or RAM drives, be wary, and always mirror it! If you lose the ZIL, you have almost certainly also lost your entire pool. (This happened to me, disastrously, back on OpenSolaris.) More recent zpool versions on Oracle Solaris have mitigated that problem; I have been unable to determine whether the kernel-level Linux port incorporates that mitigation or not.
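To make those two points concrete, here's roughly what it looks like on Solaris (pool, dataset, and device names are made up; as noted, the sync property isn't available on ZFS-FUSE):

    # Trade the ZIL's sync guarantees for write speed, per dataset
    zfs set sync=disabled tank/scratch
    zfs get sync tank/scratch          # confirm the setting

    # If you offload the ZIL instead, always mirror the log devices
    zpool add tank log mirror c7t0d0 c7t1d0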
Also, you can safely disregard the "ZFS ARC bug" alarm that gets spammed across these discussions (see another answer in this thread). My server gets hammered hard, as do countless production servers around the world, and I have never experienced it.
Personally, while I strongly dislike Solaris, ZFS is just amazing, and now that I've come to depend on its features, I can't do without it. I use it even on Windows notebooks. (Via a complex but very reliable virtualization solution and USB drives velcro'ed to the lid.)
Edit: A few minor edits for clarity, relevance, and acknowledging ZFS-FUSE performance limitations.
Why don't you simply use OpenSolaris?
You get everything you need and the best performance.
I ran ZFS-FUSE under Ubuntu for nearly a year without any issues before migrating the pool to OpenSolaris. That said, the memory requirements for dedup on a multi-TB pool will likely exceed the memory of your home Linux server. Dedup performance is terrible once the deduplication tables spill out of the ARC (the primary in-memory cache), unless you have an SSD for L2ARC to keep them readily available. Without the dedup tables in memory, a number of operations become unbelievably slow (deleting a directory of files, destroying snapshots, etc.). Snapshots work fine without dedup and have almost no overhead on their own, so unless you're storing a lot of redundant data and have 8-16 GB of RAM and/or an SSD to throw at the problem, I'd skip dedup.
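If you want to size this before committing, you can simulate dedup on an existing pool without enabling it (the ~320 bytes per unique block figure is a commonly quoted rule of thumb, not an exact number):

    # Print a simulated dedup-table (DDT) histogram for the pool
    sudo zdb -S tank
    # Rough RAM estimate: (unique blocks) x ~320 bytes per DDT entry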
There's also a bug in the ZFS ARC that has been open for more than three years and still persists!
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6522017
(This one is nasty, as the ARC can also grow beyond the memory limits a hypervisor sets for the VM!)
I have no idea whether zfs-fuse addresses this one...