Lately I have been working on uploading big files to a website; the upload eats the whole bandwidth and cripples my network. So I implemented chunking, one 1 MB chunk per second, and it works, but now I'm wondering whether I could use traffic control to achieve the same thing with better results.
My question: is traffic control smart enough to distinguish between www browsing and bulk www uploads/downloads of big files?
tc qdisc add dev imq0 root handle 1:0 htb default 666
tc class add dev imq0 parent 1:0 classid 1:1 htb rate 90000kbit ceil 90000kbit
tc class add dev imq0 parent 1:1 classid 1:888 htb rate 10000kbit ceil 40000kbit prio 0 # browse www traffic
tc class add dev imq0 parent 1:1 classid 1:666 htb rate 10000kbit ceil 40000kbit prio 1 # bulk www traffic
tc filter add dev imq0 protocol ip parent 1:1 prio 2 u32 match ip tos 0x08 0xff flowid 1:666 # bulk www traffic
tc filter add dev imq0 protocol ip parent 1:0 prio 2 u32 match ip sport 80 0xffff flowid 1:888 # http
tc filter add dev imq0 protocol ip parent 1:0 prio 2 u32 match ip sport 443 0xffff flowid 1:888 # https
Would something like this work? (I can't check it right now.)
Your solution relies on the web browser setting the IP ToS field accordingly. That can be relied upon only in a trusted environment.
I can easily write mangle rules on my machine that flatten all ToS values into a single one, and your tc rules would no longer be able to distinguish my bulk transfers from my browsing.
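For illustration, a single mangle rule like this (the 0x00 value is just an example) is enough to overwrite whatever ToS the application has set on outgoing packets:
iptables -t mangle -A OUTPUT -j TOS --set-tos 0x00 # rewrite ToS of every outgoing packet to one value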
Also, I wouldn't be sure that all browsers and other HTTP(S) software always use the same ToS for the same things. There is plenty of such software: wget and other downloaders; web-site mirroring tools (wget can do that too, but specialised software exists), which download not only large files but all page requisites as well; plenty of browsers, including Firefox, Lynx, Links, IE, Edge and the Chrome-based ones, which use different engines and may show different network behaviour; and REST-related tools, which just send small messages back and forth over HTTP. In the web world there are also WebSockets, which behave completely differently from the HTTP you know.
So, relying on ToS is useful only if you carefully control your network.
In the general case, the difference is that browsing produces short bursts of traffic, while downloading fills the capacity of the channel. You can set a fairly low average HTB rate, corresponding to the average browsing and downloading speed you want to allow, while keeping the bucket (burst) size large enough to accommodate a complete web page. Web browsing will then feel fast (because of the large bucket), and since page fetches are infrequent there is enough time to refill the bucket before the next page. A download, on the other hand, will deplete the bucket and leave no idle time to refill it, so it ends up as slow as your limit.
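A rough sketch of that idea (the device name and the numbers are placeholders to be tuned, not a drop-in config):
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 4mbit ceil 4mbit burst 2mb cburst 2mb # low average rate, bucket big enough for a full page load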
Many popular uploading tools provide a mechanism to throttle the amount of bandwidth the transfer is allowed to consume.
If you're using rsync to upload your big files, use the --bwlimit switch. If you're simply using scp, use the -l switch. If you're using wput, it also recognizes the -l switch (as well as --limit-rate=).
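For example (the hostnames, paths and rates here are only placeholders; rsync's --bwlimit is in kilobytes per second, while scp's -l is in Kbit/s):
rsync --bwlimit=1000 bigfile user@example.com:/uploads/
scp -l 8192 bigfile user@example.com:/uploads/
wput --limit-rate=1M bigfile ftp://example.com/uploads/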
If you're using something else, consult the man page for your uploader of choice to see whether it offers a similar capability.