I need to make a temporary backup of about 1 TB before moving a local server. The only other storage location I have is a remote HPC cluster with enough storage quota but a cap on the file count, and I have too many files. Creating a tar file on the local machine first is too slow (limited by write speed?).
So how can I archive the local files directly into a tar file on the remote machine? I was thinking of mounting the remote file system locally (with sshfs?) and then running something like tar -cf /mnt/remote/backup.tar local_folder against the mount (should that work? see the sketch below). But can this be done without mounting, perhaps with some magic pipe of ssh, scp and tar?
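For concreteness, the mount-based approach I have in mind would be roughly this (a sketch; it assumes sshfs is installed, and user@remotehost and the remote path are placeholders):

    mkdir -p /mnt/remote
    sshfs user@remotehost:/remote/backup /mnt/remote    # mount the remote filesystem locally
    tar -cf /mnt/remote/backup.tar local_folder         # the archive is written straight to the remote mount
    fusermount -u /mnt/remote                           # unmount when done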
If I can get this to work, is it also possible to update the remote archive with updated local files like a proper backup solution? (This is not necessary for the current task.)
You can use a command like the following (a sketch; user@remotehost and the remote path are placeholders for your setup):
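    # run on the machine that holds the files; stream a gzipped tar over ssh
    # user@remotehost and /remote/path are placeholders
    tar czf - * | ssh user@remotehost 'cat > /remote/path/backup.tar.gz'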
The command is executed on the host with the many files, and ssh connects to the desired destination host. Of course you can replace the wildcard (*) with the files you want to archive. With this method you can only create the archive, not update it.
Use the tar -f - option to make tar send its output to stdout, which you can then pipe over ssh to the remote filesystem. Using tar's native gzip or xz compression (at a low compression level) will often only slightly increase CPU load, but can significantly reduce the amount of data you need to transfer and write.
You then end up with something along the lines of:
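    # -c: create, -z: gzip-compress the stream, -f -: write the archive to stdout
    # user@remotehost and /remote/path are placeholders; local_folder is the data to back up
    tar -czf - local_folder | ssh user@remotehost 'cat > /remote/path/backup.tar.gz'

If you prefer xz, -J replaces -z, and the xz compression level can be lowered via the XZ_OPT environment variable (e.g. XZ_OPT=-1).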