I need to make a temporary backup of ca. 1 TB before moving a local server, and the only other storage location I have is a remote HPC cluster with enough storage quota but a cap on the file count — and I have too many files. Creating the tar file on the local machine first and then copying it over is too slow (limited by local write speed, presumably).
So how can I stream the local files directly into a tar file on the remote side? I was thinking of mounting the remote file system locally (with `sshfs`?) and then running something like `tar -cf /mnt/remote/backup.tar local_folder`.
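Concretely, I imagine the mount-based version looking something like this (untested; `user@cluster` and the paths are placeholders):

```
# Mount the remote cluster storage locally via sshfs (hypothetical host/paths)
mkdir -p /mnt/remote
sshfs user@cluster:/scratch/backups /mnt/remote

# Write the archive directly onto the remote file system as a single file
tar -cf /mnt/remote/backup.tar local_folder

# Unmount when done
fusermount -u /mnt/remote
```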
(Should that work?) But can this be done without mounting? Maybe using some magic pipe of `ssh`, `scp`, and `tar`?
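For the pipe variant, I was picturing something along these lines (again untested; host and remote path are placeholders):

```
# Stream the archive over ssh so nothing large is ever written locally;
# tar writes to stdout (-f -) and the remote side just captures the stream
tar -cf - local_folder | ssh user@cluster 'cat > /scratch/backups/backup.tar'
```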
If I can get this to work, is it also possible to later update the remote archive with changed local files, like a proper backup solution? (This is not necessary for the current task.)
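For the update part, I gather GNU tar can append to an uncompressed archive with `-r` and has `--listed-incremental` for incremental snapshots, so maybe something like this over the sshfs mount (hypothetical paths; I realise `-r` appends new copies rather than replacing old ones):

```
# Append a changed file to the existing archive; -r needs a seekable,
# uncompressed archive, so this would require the sshfs mount, not a pipe
tar -rf /mnt/remote/backup.tar local_folder/changed_file

# Alternatively, GNU tar's incremental mode tracks state in a snapshot file
tar --listed-incremental=backup.snar -cf /mnt/remote/backup.1.tar local_folder
```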