I have a couple of virtualized file servers running in QEMU/KVM on Proxmox VE.
The physical host has four storage tiers with significant performance differences, attached both locally and via NFS.
These will be provided to the fileserver(s) as local disks, abstracted into pools, and handling multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers.
There's a similar post on the site here: Home-brew automatic tiered storage solutions with Linux? (Memory -> SSD -> HDD -> remote storage) in which the accepted answer was a suggestion to abandon a linux solution for NexentaStor.
I like the idea of running NexentaStor. It almost fits the bill.
NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle.
I don't know whether ZFS pools are adaptive or dynamically allocated based on load, but it's irrelevant: NexentaStor doesn't support virtio network or block drivers, which are a must in my environment.
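For context, virtio is what the guests here already rely on. On Proxmox VE, a guest's disk and NIC are attached as virtio devices with `qm` roughly like this (VM ID 100 and the storage name are hypothetical):

```shell
# Attach a 32 GB disk on storage "local" as a virtio block device
# (VM ID and storage name are examples, not from my actual setup).
qm set 100 --virtio0 local:32

# Give the guest a virtio NIC on bridge vmbr0
qm set 100 --net0 virtio,bridge=vmbr0
```

A guest OS without virtio drivers can't use devices attached this way, which is what rules NexentaStor out for me.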
Then I saw a commercial solution called SmartMove: http://www.enigmadata.com/smartmove.html
And it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option.
I'm after a Linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.
One way to get this on a Linux server is the flashcache kernel module. It really only gives you one tier, say the SSD on top of the Drobo and/or local discs. I have been using it experimentally over the last few weeks here at home with a 500 GB SATA drive and an X25-E SSD to provide an LVM volume group that I then slice up and serve via iSCSI. So far it's been working very well.
FlashCache offers two modes: write-through and write-back. Write-back caches writes as well as reads, but it has an unresolved design flaw: after a hard failure of the system, some data may not be correctly preserved. Write-through has no such issue, but writes are always flushed through to the backing disc.
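A minimal sketch of that setup, with hypothetical device names (`/dev/sdb` as the SSD, `/dev/sda` as the SATA drive); `flashcache_create` takes the caching mode via `-p`:

```shell
# Write-through ("thru"): safe across a hard failure, since writes
# always reach the backing disc. Device names are examples only.
flashcache_create -p thru cachedev /dev/sdb /dev/sda
# (Write-back would be "-p back", subject to the data-loss caveat above.)

# Layer LVM on top of the cached device and carve out a volume
# that can then be exported over iSCSI like any other block device:
pvcreate /dev/mapper/cachedev
vgcreate vg_cached /dev/mapper/cachedev
lvcreate -L 100G -n lun0 vg_cached
```

The point of putting LVM above the cache, rather than below it, is that every logical volume you slice out gets the SSD tier for free.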
I don't think this would be appropriate for layering on top of NFS though.
A few notes about FlashCache: you currently have to build it from source, you have to run a 64-bit kernel (the module just doesn't load properly on 32-bit), and in my testing so far it's worked great. Again, that's only been around a week or two so far.
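The build from source looks roughly like this (a sketch; the repository URL may have moved since, and you need the kernel headers for your running 64-bit kernel installed):

```shell
# Build and load the flashcache module from source.
# Repo URL and paths are from memory; verify before use.
git clone https://github.com/facebook/flashcache.git
cd flashcache
make
sudo make install
sudo modprobe flashcache

# Confirm the module actually loaded (fails silently on 32-bit):
lsmod | grep flashcache
```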
You could try and extend this experimental project on github: https://github.com/tomato42/lvmts
It contains a daemon that detects which LVM extents are used most and moves them up the tiered-storage chain.
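What lvmts automates can also be done by hand with stock LVM, which shows the underlying mechanism: `pvmove` relocates a range of physical extents from one PV to another inside the same volume group. Device names and the extent range below are hypothetical:

```shell
# Move "hot" extents 1000-1999 from the slow PV to the fast PV
# (both PVs must already be in the same volume group).
pvmove /dev/slow_pv:1000-1999 /dev/fast_pv

# Inspect where each LV's extents now live, per physical volume:
pvs --segments -o pv_name,lv_name,pvseg_start,pvseg_size
```

lvmts essentially watches I/O statistics and issues moves like this for you, so the hot extents migrate toward the fast tier over time.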