I have to transfer a big directory to another server.
My problem is that I cannot use the full available bandwidth, because a single TCP stream does not get fast enough. Basically, I want the functionality that any download manager supports nowadays. Therefore I would like to use multiple concurrent data streams.
However, I cannot find a program that supports this, so I thought about just running multiple instances of rsync at once.
Is this a good idea, or can you point me toward a more suitable tool?
By the way, the problem you raise, that of a single TCP connection being unable to fill your whole network bandwidth, only occurs where the bandwidth-delay product is large, which is the case on only very few networks. On LANs the delay is small, which keeps the product small, and on WANs the bandwidth is small, again keeping the product small. For example, a 1 Gbit/s path with 100 ms of round-trip latency has a bandwidth-delay product of about 12.5 MB, far more than a default-sized TCP window can keep in flight.
If you have a network with a very large bandwidth-delay product, there are some things you can do to tweak TCP to work with it, starting with increasing the window size and increasing the path MTU (well, that's an IP tweak, not a TCP tweak, but it applies!). Look at papers written about research networks for more on this. For further help in tuning TCP to your scenario, you might need to describe your specific network.
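As a starting point, here is a minimal Linux sketch of those tweaks. The buffer sizes are illustrative assumptions (size them to roughly bandwidth times RTT), and eth0 is a placeholder interface name:

```
# Raise the maximum socket buffer sizes (bytes); values are examples.
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864

# Let TCP autotuning grow windows up to those limits (min/default/max).
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"

# Optional: jumbo frames, but only if every hop on the path supports them.
ip link set dev eth0 mtu 9000
```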
As for rsync, you can't usefully run two simultaneous rsync instances copying the same files.
The only thing I can think of at the rsync level is to break your directory into multiple subdirectories and transfer them in parallel, one rsync instance per subdirectory.
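A rough sketch of that approach, assuming GNU xargs and that the data splits into top-level subdirectories; /data, user@remote, and the parallelism level are placeholders:

```
# Launch up to 4 rsync instances at once, one per top-level subdirectory.
cd /data
ls -1 | xargs -P4 -I{} rsync -a "{}" user@remote:/data/
```

Note this only helps if the subdirectories are of comparable size; one huge subdirectory will still be limited to a single stream.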
You could replace rsync with lftp - see my post on Superuser:
https://superuser.com/questions/75681/inverse-multiplexing-to-speed-up-file-transfer/305236#305236
The only issue may be that lftp doesn't just transfer the changed/added files the way rsync does. But I can assure you, it's the fastest way I've seen to transfer data over multiple parallel streams.
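For reference, an upload along those lines using lftp's mirror command; the host and paths are placeholders:

```
# Reverse mirror (upload) local /data to remote /data, running up to
# 4 file transfers in parallel.
lftp -e "mirror -R --parallel=4 /data /data; bye" sftp://user@remote
```

lftp's pget can also split a single file across several connections (mirror --use-pget-n=N), though as far as I know that applies to downloads rather than uploads.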