Is there an easy way to transfer files between two SSH/SFTP servers? The perfect solution would be FileZilla, but it only lets you create a connection between local and remote, not between remote and remote.
Theoretically I could open two Nautilus windows, connect them to ssh://server1/path/to/folder and ssh://server2/path/to/folder, and just pull the files from one side to the other. My experience is that this is very unstable: transferring files totaling, e.g., 10 MB is no problem, but transferring, e.g., 10 GB often resulted in Nautilus hanging itself up and remaining there, in need of a `ps -e | grep nautilus` followed by `kill -9 <pid>`. I also tested the same thing with Nemo and Caja. While Nemo tends to be more stable than the other two, it still is not perfect and also breaks from time to time. FileZilla is extremely stable (I never really got it to break), but it is not very flexible, due to the mentioned fact that it can only connect to a single SSH server.
Of course I could also mount a folder with `sshfs`, but that is a somewhat inconvenient solution: too much setup work for a simple transfer.
Is there any app that can handle transfers between two SSH servers without breaking? Perfect would be something like FileZilla that picks the job up again if the connection gets interrupted.
If you are on an Ubuntu version that is still supported, your `scp` command provides the `-3` switch, which enables copying files from remote1 to remote2 via localhost. You can also omit the `-3` switch, but then you will need the public key (`id_rsa.pub`) of `user1@remote1` in the file `authorized_keys` of `user2@remote2`.
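A minimal sketch of both variants (user names, hosts, and paths are placeholders of my own; key-based authentication is assumed):

```bash
# Variant 1: relay the file through localhost with -3;
# localhost needs SSH access to both remotes.
scp -3 user1@remote1:/path/to/file1 user2@remote2:/path/to/file2

# Variant 2: omit -3; remote1 then connects to remote2 itself,
# so remote1's public key must be installed on remote2 first:
ssh-copy-id user2@remote2                                      # run this on remote1
scp user1@remote1:/path/to/file1 user2@remote2:/path/to/file2  # run this on localhost
```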
`scp` then under the hood does an `ssh user1@remote1` first and from there an `scp /path/to/file1 user2@remote2:/path/to/file2`. That is why the credentials must be distributed differently than with the `-3` solution.

In other words:
`scp -3 remote1:file1 remote2:file2` transfers the file from remote1 to localhost and then back to remote2. The data travels remote1 → localhost → remote2. localhost is the third party in this scenario, hence `-3`. For this to work, you will need the credentials of localhost on both remote1 and remote2, because localhost connects to both of them.

`scp remote1:file1 remote2:file2`
copies the file directly from remote1 to remote2 at the speed at which they are connected to each other. localhost is not involved here (besides issuing the command). The data travels remote1 → remote2. For this to work, you will need the credentials of localhost only on remote1, but additionally you need the credentials of remote1 on remote2, because localhost connects to remote1 only and remote1 then connects to remote2.

If possible, I would choose the second approach. As some comments already say: often the network cable between remote1 and remote2 is far thicker than the cable between them and localhost.

In most cases two SSH servers can reach each other (or at least one can reach the other), and again in most cases the workstation's internet connection is far worse than either server's.
If so, ordering one server to transfer to the other one is the way to go.
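A sketch of what that can look like (hypothetical users and paths; key-based authentication from server1 to server2 is assumed):

```bash
# On server1: push the data to server2 in the background.
# nohup keeps the transfer running after you log out;
# output and errors land in nohup.out.
nohup scp -r /path/to/data user@server2:/path/to/target &
```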
Check `nohup.out` on server1 for errors.

If server reachability is the other way around, you can reverse which machine is the master:
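For example, pulling from server2 instead (again with hypothetical names):

```bash
# On server2: pull the data from server1.
nohup scp -r user@server1:/path/to/data /path/to/target &
```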
Perhaps you could use one of several GUI front-ends to rsync:
Is there any GUI application for command rsync?
Or perhaps you could use rsync directly from the command line to connect to both remote servers:
"How to rsync files between two remotes"
I often log in to one server with ssh, then from that server's command line use rsync to push or pull files to the other remote server; that's generally much quicker than trying to transfer the files through some third computer.
rsync is smart enough to do part of the work, and if anything goes wrong and interrupts the process, a later run can resume right where it left off.
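A sketch of that workflow with placeholder hosts and paths; `--partial` keeps partially transferred files so a rerun picks them up where they stopped:

```bash
# On server1 (reached via ssh): push to server2.
# If the connection drops, simply run the same command again to resume.
rsync -av --partial /path/to/data/ user@server2:/path/to/target/
```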
You can use `scp`:

```bash
scp file_you_want_to_transfer login@address_of_second_server:/path_where_you_want_to_save
```
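For instance (file name, user, and host are made up):

```bash
scp backup.tar.gz alice@server2.example.com:/home/alice/backups/
```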