The Linux kernel before 3.6 used route caching for IPv4 multipath routing, which made routing across two separate lines/ISPs quite easy. From 3.6 the algorithm changed to per-packet routing, meaning that some route-table/rule/iptables-marker tricks were required to balance traffic across two lines/ISPs.
However, if you had two lines with the same ISP, and that ISP could route a single IP down both lines on a per-packet basis in a balanced/failover fashion, then from 3.6 you could easily achieve line bonding (at the IP level) because of the per-packet routing in both directions.
From 4.4, the kernel changed again to flow-based load balancing based on a hash over the source and destination addresses.
I am currently running kernel 4.4.36, using multipath routing over PPPoE connections. My downstream traffic from the ISP is routed across the two separate lines on a per-packet basis (one IP routed down both lines). This gives me a download speed faster than the speed of any individual line, nearly the speed of both lines added together. It works really well: Skype video, VoIP (UDP), YouTube etc. all work great.
Because the downstream experience is so good, I want to try the same upstream, but my upstream traffic is routed according to the newer flow-based algorithm across both ppp devices (which have the same IP address). This means that I cannot achieve an upload speed faster than the speed of a single line.
Is there a way to configure the current Kernel to use the per-packet algorithm? Or some other method to achieve per-packet multipath routing? Would I need to revert to an older Kernel (which I don't want to do for various other reasons)?
My ISP does not support multi-link ppp.
In case it is relevant, I am currently running Arch Linux ARMv7 on a Raspberry Pi 3.
OK, so after having had more time to investigate this, I found a way to do it using Linux TEQL (True Link Equalizer). Here is the link I loosely followed, though with some tweaks:
http://lartc.org/howto/lartc.loadshare.html
This is how I got it working on Arch Linux ARMv7 (Raspberry Pi 3).
On boot:
The following command should be run on boot to load the appropriate Kernel module.
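The TEQL scheduler lives in the `sch_teql` module, so a minimal version of that boot command would be:

```shell
# Load the TEQL queueing discipline; this makes the teql0 device available
modprobe sch_teql
```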
The following commands should also be run on boot, assuming you want to NAT from a local network on eth0.
The FORWARD rule for return traffic matches ppp+, and the POSTROUTING MASQUERADE is on teql+, because the outgoing traffic goes out on teql and the return traffic comes back on ppp.
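A sketch of what those NAT rules can look like, assuming eth0 is the LAN side as described above (the exact matches and rule order in my script may differ):

```shell
# Enable IPv4 forwarding between the LAN and the bonded link
sysctl -w net.ipv4.ip_forward=1

# LAN traffic goes out via the bonded teql0 device
iptables -A FORWARD -i eth0 -o teql0 -j ACCEPT
# Return traffic comes back in on the individual ppp links
iptables -A FORWARD -i ppp+ -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# Masquerade on teql+, since that is where outgoing traffic leaves
iptables -t nat -A POSTROUTING -o teql+ -j MASQUERADE
```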
When ppp links come up:
Assuming the links to be load-balanced are the ppp links, the following commands should be run from a script in /etc/ppp/ip-up.d/, where 1.1.1.1 is your ISP-facing public IP address. Additional public IPs can be assigned to the teql0 device, but don't need to be assigned to the ppp devices. In my setup the two ppp links share the same IP (negotiated by pppoe etc.), while the teql0 address is assigned manually in the script. The ISP needs to send traffic for the IP equally down both links.
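As a sketch of what such an ip-up.d script can contain, assuming the two links come up as ppp0 and ppp1 (the device names are placeholders, as is 1.1.1.1):

```shell
#!/bin/sh
# Enslave each ppp link to the TEQL equalizer; traffic queued on
# teql0 is then dealt out across the enslaved devices round-robin
tc qdisc add dev ppp0 root teql0
tc qdisc add dev ppp1 root teql0

# Assign the public IP to teql0 and bring it up
ip address add 1.1.1.1/32 dev teql0
ip link set dev teql0 up

# Send outgoing traffic via the bonded device
ip route replace default dev teql0

# Loose reverse-path filtering (mode 2), so replies arriving on the
# ppp interfaces rather than teql0 are not dropped
sysctl -w net.ipv4.conf.ppp0.rp_filter=2
sysctl -w net.ipv4.conf.ppp1.rp_filter=2
```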
The reverse-path filter (rp_filter) is set to 2 (loose) in the script so that the return packets are not dropped for coming back in on the ppp interfaces rather than teql0.

I have set it up that way, and it works perfectly. Very easy! When the links fail, there is seamless failover, and when they come up, they just start working again. There seems to be no packet loss or delay on failover, and none when the links come back up either.
Also, one of the commenters suggested the link below, which uses policy routing with iptables to mark every other packet. I will try it in a few days to see whether it works any better than the above, and will provide feedback here accordingly.
http://support.aa.net.uk/Router_-_Linux_upload_bonding_using_policy_routing