I often copy many files between two servers connected via 1 Gbit Ethernet while I am connected via VPN and SSH over DSL. The problem is that the verbose output (one line per copied file) has to be sent over the slow connection to my SSH client, and this slows down the copy operation (at least it feels that way to me).
I tested this with a test file created via
dd if=/dev/urandom | base64 | dd of=testfile count=10M bs=1
Testing run A (no output)
# time sh -c 'cat testfile > /dev/null'
sh -c 'cat testfile > /dev/null' 0.00s user 0.02s system 97% cpu 0.025 total
Testing run B (all output via ssh/vpn)
# time sh -c 'cat testfile'
sh -c 'cat testfile' 0.00s user 0.45s system 0% cpu 4:31.10 total
(I know it's not a good test, but it demonstrates the problem.)
Is there a way to get the output asynchronously, without slowing down the operation? I imagined something like dropping all lines except a specified number per second.
At the moment I start screen and detach while the operation is running.
I am using PuTTY on Windows and the OpenSSH client for Linux access.
Any ideas?
You don't say what you're using to do the copy, but I suppose it doesn't really matter.
You might try piping the copy command's output through pv. That gives you a low-overhead progress indicator which shows the number of lines of filenames (not file contents) that you would otherwise have seen. Adding tee to the pipeline saves this output on the remote system in case you do need to see it; after that, the output is discarded. You can do something similar without pv by piping through a small "progress" script instead.

What happens if you pipe the output into less, as in cp a b | less?
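The pv and progress-script ideas above might be sketched roughly like this; copy_cmd is a hypothetical stand-in for whatever copy command is actually used (cp -rv, scp -v, rsync -v, etc.), and the progress filter is an illustrative guess at what such a script could look like:

```shell
# Hypothetical stand-in for the real copy command -- substitute your own
# cp -rv / scp -v / rsync -v invocation here.
copy_cmd() { for i in 1 2 3; do echo "copied file$i"; done; }

# With pv: keep the full per-file listing in a log on the remote side and
# send only a running line count over the slow link (-l makes pv count
# lines instead of bytes, -b prints just the cumulative total).
if command -v pv >/dev/null 2>&1; then
    copy_cmd 2>&1 | tee copy.log | pv -bl > /dev/null
fi

# Without pv: a tiny "progress" filter that emits one dot per 100 lines,
# so only a trickle of output ever crosses the SSH connection.
progress() {
    n=0
    while IFS= read -r _; do
        n=$((n + 1))
        [ $((n % 100)) -eq 0 ] && printf '.'
    done
    printf '\ndone: %d lines\n' "$n"
}
copy_cmd 2>&1 | tee copy.log | progress
```

Either way, the full listing survives in copy.log on the remote machine, so nothing is lost by suppressing it locally.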
If I'm understanding this correctly, it sounds like you're slowing down due to terminal buffering!
Do you need to see the output of the transfer? If not, use -q or the appropriate option to silence the per-file transfer output. If you do need that data, redirect the output to a file for later review; you can then tail -f that file.
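A minimal sketch of the redirect-and-review approach, using cp -v on a throwaway source tree as a stand-in for the real transfer:

```shell
# Set up a small throwaway source tree (stand-in for the real data).
mkdir -p src dst
echo hello > src/a
echo world > src/b

# Run the copy with its per-file chatter redirected to a file instead of
# the terminal, so nothing has to cross the slow SSH link.
cp -v src/a src/b dst/ > copy.log 2>&1

# Later, from another shell (or after reattaching screen), follow the
# log at your own pace:
#   tail -f copy.log
```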