Let's say I've got a somewhat important database, but one where I don't actually care about every transaction. Before you get snarky -- it's a database storing Zabbix status data, so I honestly don't care if I lose a minute or two of transactions when the ASCII control codes hit the TTY.
If I want to replicate that database to another host, I could do traditional database replication, which essentially replays the transaction log across a pipe. There are some issues with that (like the fact that I'll have lots of writers on the primary while the slave has only a single thread applying the replicated data), but I'd also get a read-only slave for running my reports against, and anyone else who comes to town would understand what's going on. Both are valuable, I completely agree.
But -- what if I want to be clever? Being clever, I'll run my PostgreSQL database on a ZFS filesystem with compression turned on. Now that the database is on ZFS, can I just `zfs send` that filesystem to another box and either apply it there, or archive the send stream for later replay if I decide I need it?
Would the resulting database be usable after a "clean up after an unclean shutdown and replay the transaction logs" pass on the target machine? And let's say I also want a read-only slave to run my reports against: could I replay the ZFS stream, take a snapshot, and start up a database instance against that snapshot?
ZFS send/receive works from ZFS snapshots. Unless you quiesce the database, the snapshot you take as the source for the ZFS send will only be crash-consistent -- equivalent to the state after a power pull. PostgreSQL is designed to recover from exactly that via WAL replay, but only if the snapshot captures the data directory and the WAL atomically, so they should live on the same dataset (or be snapshotted atomically together).
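As a minimal sketch of the snapshot step -- the pool name `tank` and dataset `tank/pgdata` (holding the entire PostgreSQL data directory, WAL included) are hypothetical:

```shell
# Take an atomic, crash-consistent snapshot of the running database.
# Because data and WAL are on the same dataset, the snapshot is
# equivalent to the on-disk state after a sudden power loss.
zfs snapshot tank/pgdata@$(date +%Y%m%d-%H%M%S)

# List snapshots to confirm
zfs list -t snapshot -r tank/pgdata
```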
You may use ZFS send/receive to ship that snapshot to another host or ZFS filesystem. As for archiving: the output of `zfs send` is just a byte stream, so you can redirect it to a file and replay it later with `zfs receive`. Be aware that `zfs receive` applies a stream atomically -- it's all or nothing, and a single corrupted bit in an archived stream makes the entire stream unusable -- so long-term archival of raw send streams is generally discouraged in favor of receiving them into a pool somewhere.
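A sketch of both approaches, again with hypothetical pool, dataset, and host names (`tank/pgdata`, `replica`):

```shell
# Initial full replication to another host
zfs snapshot tank/pgdata@repl1
zfs send tank/pgdata@repl1 | ssh replica zfs receive -F tank/pgdata

# Subsequent incremental sends only ship the blocks changed
# between the two snapshots
zfs snapshot tank/pgdata@repl2
zfs send -i tank/pgdata@repl1 tank/pgdata@repl2 | ssh replica zfs receive tank/pgdata

# Archiving: the stream is just bytes, so it can go to a file...
zfs send tank/pgdata@repl1 > /backup/pgdata-repl1.zfs

# ...and be replayed later. The receive either applies completely
# or not at all.
zfs receive tank/pgdata-restored < /backup/pgdata-repl1.zfs
```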
If you need a read-only copy of a ZFS filesystem, you can take a ZFS snapshot and clone that snapshot to a new filesystem mount point. At that point, you can run your reporting instance against the clone.