It has already been established in this question that `tar` cannot read its input from stdin.
How else can `dd` output be archived directly, possibly without using any compression? The point of doing everything in a single task is to avoid writing the `dd` output to the target disk twice (once as a raw file and once as an archive), and to avoid performing two separate tasks, which wastes time (the input must be read and written, then the output read, processed and written again) and can even be impossible if the target drive is almost full.
I'm planning to do multiple backups of drives, partitions and folders, and I'd like to benefit both from the ease of having everything stored into a single file and from the speed of each backup / potential restore task.
If you want to dump a whole block device to a file, `tar` won't be of any use, because it doesn't work with block devices. Instead you'll need to use `dd` or similar.

Even then it would be better to use at least a little compression, as long as it doesn't slow down the transfer too much. In short, you need a compression algorithm whose throughput is not much lower than that of your slowest storage medium. There are several such algorithms; the best known are Lempel–Ziv–Oberhumer (LZO), its derivative LZ4, and Snappy. There's a comparison of various compression algorithms, including those three, on the LZ4 project page.
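As a starting point, a plain uncompressed dump is just `dd` from the device into a file. This is a minimal sketch: in real use `if=` would be a block device such as `/dev/sdX` (a placeholder, and reading it requires root), so a scratch file stands in for the device here to keep the commands runnable.

```shell
# /dev/sdX is what you would really read (as root); a 4 MiB scratch file
# stands in for it here so the pipeline can actually be executed.
dd if=/dev/zero of=src.img bs=1M count=4 status=none   # stand-in "device"
dd if=src.img of=backup.img bs=1M status=none          # the raw dump itself
cmp -s src.img backup.img && echo "images identical"
```

Restoring is the same command with `if=` and `of=` swapped.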
For the sake of this answer, I'll use LZO as the example, because it's readily available in Canonical's repositories in the form of `lzop`; ultimately, though, all of these stream compressors have front-ends that read from standard input and write to standard output.
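Because the compressor reads stdin and writes stdout, `dd` can be piped straight into it, so the image lands on the target disk only once, already archived. A sketch follows; `gzip -1` is used in the runnable part purely because it is installed everywhere, while `lzop` is invoked the same way, and `/dev/zero` stands in for the real device.

```shell
# Real usage with lzop would be:  dd if=/dev/sdX bs=1M | lzop > disk.img.lzo
# Here /dev/zero and gzip -1 stand in so the pipeline actually runs.
dd if=/dev/zero bs=1M count=4 status=none | gzip -1 > disk.img.gz
gzip -t disk.img.gz && echo "archive OK"
# Restoring is the reverse pipe:  gzip -dc disk.img.gz | dd of=/dev/sdX bs=1M
```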
If you want to keep working on the same machine during the backup, you may want to use `ionice` and/or `nice`/`schedtool`:
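For instance, the pipeline above can be prefixed so that `dd` runs in the idle I/O class (`ionice -c3`) and at the lowest CPU priority (`nice -n19`), leaving interactive work largely undisturbed. This is a sketch with the same stand-ins as before (`/dev/zero` for the real device, `gzip -1` for `lzop`); it assumes `ionice` from util-linux is available.

```shell
# Idle I/O class plus lowest CPU priority, so the dump yields to foreground
# work; /dev/zero and gzip -1 stand in for the real device and lzop.
ionice -c3 nice -n19 dd if=/dev/zero bs=1M count=4 status=none \
  | gzip -1 > lowprio.img.gz
gzip -t lowprio.img.gz && echo "done"
```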