Is there any way to divide an outgoing channel between different traffic classes, for example in a 30/70 ratio, if I don't know the channel width? HTB requires exact numbers, and so does CBQ.
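A setup along these lines should do it (a sketch; eth0, the handle numbers, and which port maps to which class are assumptions, while the quanta match the description below):

# root drr qdisc with three classes
tc qdisc add dev eth0 root handle 1: drr
tc class add dev eth0 parent 1: classid 1:1 drr quantum 600
tc class add dev eth0 parent 1: classid 1:2 drr quantum 1400
tc class add dev eth0 parent 1: classid 1:3 drr
# sfq leaves for per-flow fairness inside each class (optional, see below)
tc qdisc add dev eth0 parent 1:1 sfq
tc qdisc add dev eth0 parent 1:2 sfq
tc qdisc add dev eth0 parent 1:3 sfq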
I thought this would do the trick, but unless something limits its outflow, the queue empties too fast and it never exhibits the drr weightings (I'm not exactly sure why this is). As soon as there is selection pressure, however, it works exactly as it is supposed to.
If you wrap it all in an HTB with an 800kbps rate, you get a nice 70KB/s : 30KB/s split, tested by running parallel instances of
pv -ar /dev/zero | nc differenthost port
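The HTB wrap might look roughly like this (a sketch; only the 800kbit figure comes from my test, the handle numbers are assumptions), with the drr classes from above (and the filters shown further down) then attached under 2: instead of 1::

# htb root capping total throughput at 800kbit
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 800kbit
# the drr qdisc now hangs off the rate-limited htb class
tc qdisc add dev eth0 parent 1:10 handle 2: drr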
Incidentally, don't try to test this from a remote machine at line speed without an out-of-band control mechanism. (Oops.)
Maybe this answer will help anyway.
Edit: searching the internet turns up no other Linux deficit round robin (DRR) qdisc example besides the one included in the manpage, which seemed incomplete to me: it omits the filter rules (and their caveats) and is generally non-obvious.
What is DRR
Deficit round robin is a scheduling algorithm which can be imagined as a number of parallel queues. The scheduler iterates over these queues in sequence. Each time a queue comes up, if it is not empty, the scheduler checks the size of the next packet in the queue against a per-queue number it tracks, called the deficit counter. If the packet is smaller than the deficit counter, DRR removes it from the queue, sends it along to be put on the wire, and subtracts its size from the deficit counter (hence the deficit). It then repeats with the next packet in the queue until either that packet is bigger than the deficit counter or the queue is empty. If the queue is not empty when the loop ends, a queue-specific value called the quantum is added to the deficit counter before moving on to the next queue.
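To put made-up numbers on that: say a queue has a quantum of 600 bytes and a 1500-byte packet at its head. The scheduler passes the queue by while the deficit counter climbs by 600 each round; once it reaches 1800 the packet finally fits, gets sent, and the counter drops to 300.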
This is actually not all that different from HTB, except that HTB limits the maximum amount that can be added to a given queue in any particular time interval (I'm pretty sure Linux measures this in bytes per jiffy, though that may no longer be true with tickless kernels) and also keeps a deficit counter for all queues combined (which is likewise filled with some number of bytes per time interval).
Example usage description
So what the above example does is create a drr qdisc as root and add three classes to it: one with a quantum of 600 bytes per pass, a second with 1400 bytes per pass, and a third which defaults to the MTU size. For reasons I will explain in a bit, it doesn't matter too much what the values are, only what the ratio is.
Because I like being fair, I added some sfqs to the leaves; this is not necessary, because if you don't, it should default to pfifo_fast (or whatever your default scheduler is, I think; I would have to read the sch_drr.c source to be 100% sure). It also doesn't change my tests, since I was using a single TCP connection per port.
tc filter rules for testing description
When I was testing the above, the filter rules actually gave me some trouble. drr doesn't have a default flow like a lot of the other qdiscs have, nor does it allow you to assign one as a qdisc option (that I'm aware of; if it does, edit this answer). So it's rather amusing, in a frustrating way, when it starts dropping your packets on the floor because it can't queue things like ARP requests or replies, and it doesn't say why your interface is spontaneously down.
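The rules themselves would look roughly like this (a sketch; as before, the port-to-class mapping is an assumption):

# prio 1: classify the two test flows by destination port
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 8111 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 8112 0xffff flowid 1:2
# prio 2: catch-all default so ARP and friends don't get dropped
tc filter add dev eth0 parent 1: protocol all prio 2 u32 match u8 0 0 flowid 1:3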
So the first two rules perform the test classification: they match the TCP/UDP dport (it's at the same offset in both) against 8111 and 8112, put matches in the appropriate queue, and stop matching once an appropriate rule is found.
The third rule says "match any protocol whose first byte (offset 0) matches 0x0 with a mask of 0" and puts matches in the third queue. It is prio 2, so it is evaluated after the first-pass rules and catches any unmatched packets. If you know a better way to make a default classid, I would surely like to know.
Quantum selection
As I mentioned earlier, the actual values don't matter so much as the ratio, though larger-than-necessary values will probably make the queues more bursty (by which I mean the scheduler sticks in one queue for X packets per pass), and smaller ones will use more CPU time. For that reason, I chose values near the same order of magnitude as the MTU, whose relation to the 30/70 target ratio is visible. Since the quantum determines the fill rate per pass, and thus the number of bytes per pass, the ratio of the quanta will be the ratio of bytes per pass relative to each other. If one queue is empty, the others don't so much absorb its bandwidth as just spend more time skipping the empty queue and filling themselves.
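Concretely, with the numbers from the example above: 600:1400 reduces to 30:70, so under the 800kbps HTB cap (roughly 100KB/s of payload) the two classes settle at about 30KB/s and 70KB/s as long as both stay backlogged, which matches the split observed in testing.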
The relative dearth of documentation compared to HTB or CBQ suggests to me that DRR isn't a particularly popular qdisc, so unfortunately, if you do decide to go this route, I expect support will be pretty sparse, which makes it hard to recommend for use.
Did you check this link?
http://www.tldp.org/HOWTO/html_single/Traffic-Control-HOWTO/#r-unknown-bandwidth