I've implemented simple rate limiting using HAProxy, similar to the way StackExchange does it. I'm trying to make it a bit more advanced so that there are multiple thresholds of rate limiting.
For example, limit clients that request more than:
15/minute
60/hour
360/day
It seems like I need multiple stick-tables to store the same data over different rate periods. The documentation states:
There is only one stick-table per proxy. At the moment of writing this doc, it does not seem useful to have multiple tables per proxy. If this happens to be required, simply create a dummy backend with a stick-table in it and reference it.
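As I read it, that suggestion would look something like this (a sketch only; the names, sizes and periods are just placeholders):

    # the real backend keeps its own stick-table as usual
    backend be_app
        stick-table type ip size 100k expire 1h store http_req_rate(1m)
        server app1 10.0.0.10:80 check

    # a "dummy" backend that never serves traffic; it exists only so that
    # its stick-table can be referenced by name from elsewhere
    backend st_per_hour
        stick-table type ip size 100k expire 2h store http_req_rate(1h)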
Unfortunately I'm having a devil of a time trying to figure out how to store the data into the dummy backend tables.
I'm also open to other methods; HAProxy simply seemed like a promising road, and since we already have it in the environment it made sense. Any suggestions are appreciated.
I was just trying to do this myself, was having no luck, and decided to resort to my google-fu. The top result for me when looking for multiple levels of rate limiting was this, and I got really excited. Then I saw it had no answers and initially fell into an existential pit of despair. After digging myself out, I kept hacking, and by some stroke of luck, I seem to have figured out how to do it at least for what I needed. Maybe it will work for you too.
HAProxy is really, really cool, and I'm excited to start using it in place of our current load balancing solution, but stick-tables are a bit of a monster to wrap your head around. On that front, I've found one general principle that seems to be helping me, and that's to explicitly refer to every stick table by name when you're doing a setup with multiple stick tables. The default behaviour, where the name is implicit (assumed to be the backend you're in), is great... except when you start trying to get fancy with multiple stick tables. That's why some of my configuration below is more verbose than it has to be; I just find it easier to follow the logic that way. Anyway, here goes (note that this counts based on cookies for a Moodle application, not the IP, and it's using HAProxy v1.5.11):
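Something along these lines (a sketch of the shape of it; the MoodleSession cookie name, table sizes, timeouts and server address are placeholders rather than my exact config):

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend fe_moodle
        bind :80
        default_backend dynamic

    backend dynamic
        # serving backend: owns the 10s table and ALL of the stick / track / acl rules
        stick-table type string len 64 size 100k expire 10m store gpc0,gpc0_rate(10s)
        stick on req.cook(MoodleSession) table dynamic
        tcp-request content track-sc0 req.cook(MoodleSession) table dynamic
        tcp-request content track-sc1 req.cook(MoodleSession) table dynamic_60
        # referencing these acls is what actually bumps gpc0 in each table;
        # sc*_inc_gpc0 returns the new value, so "gt 0" is always true
        acl bump_10s sc0_inc_gpc0(dynamic)    gt 0
        acl bump_60s sc1_inc_gpc0(dynamic_60) gt 0
        # FALSE never matches, so nothing is rejected yet, but the condition
        # forces both counters to be evaluated on every request
        tcp-request content reject if bump_10s bump_60s FALSE
        server app1 10.0.0.10:80 check

    backend dynamic_60
        # dummy backend: never serves traffic, only holds the 60s table
        stick-table type string len 64 size 100k expire 10m store gpc0,gpc0_rate(60s)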
So, what this is doing is setting one counter to record the rate per 10s and another to record the rate per 60s. Note that it's not actually using these counters to do any rate limiting yet, but you can verify over the stats socket that the two rate counters are being maintained separately.
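For example, assuming a stats socket is configured in the global section (the socket path below is a placeholder):

    # global section needs something like: stats socket /var/run/haproxy.sock level admin
    echo "show table dynamic"    | socat unix-connect:/var/run/haproxy.sock stdio
    echo "show table dynamic_60" | socat unix-connect:/var/run/haproxy.sock stdio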
I wanted to find out the minimal configuration I needed to get those counters to actually increment, which is why you see "FALSE" at the end of the "tcp-request content reject" statement. Just defining the acls with the counters won't get them to increment; you have to actually use the acl in a rule. Putting "FALSE" (a predefined acl that never matches) at the end simply lets me use the acls without ever satisfying the condition to actually reject the request. I'll probably just take out the "FALSE" once I decide on some real numbers for those acls.
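For example, one possible way to do that later might be to keep the increment rule and add a separate reject rule on the measured rates (the thresholds here are placeholders, not numbers I've settled on):

    # bump_10s / bump_60s are the sc*_inc_gpc0 acls defined above
    acl over_10s sc0_gpc0_rate(dynamic)    gt 15
    acl over_60s sc1_gpc0_rate(dynamic_60) gt 60
    tcp-request content reject if bump_10s bump_60s FALSE
    tcp-request content reject if over_10s or over_60s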
The real key to getting multiple stick tables to work seems to be doing the "stick on", "track-sc{0|1|2}", and acl definitions using "sc{0,1,2}_inc_gpc0" in the backend where you're actually handling the request. Moving any of those into the dynamic_60 backend caused that table's count to stop working. I guess the reasoning is that it makes no sense to track or apply acls in a backend that's not serving requests, because it doesn't actually have requests going through it to pull information from. That said, I'm sure others will have better explanations; I'm pretty new to HAProxy.
The next question I asked was: am I limited to tracking just 3 things (since the "track-sc" settings only go from 0 to 2)? I believe that, yes, you can only track three things, but importantly, it's 3 things per backend that actually serves a request. So, for example, if like me you want to do different rate limiting for static content than for dynamic content, you can make the decision on whether to go to a "static" or "dynamic" backend in your frontend, based on something in the request. Then in the "static" backend, you define track-sc0 and track-sc1 against the "static" and "static_60" tables (if you happened to be following a similar naming scheme to the config above), as sketched below. That gives you 4 stick tables to make rate limiting decisions with: 10s and 60s rates for both dynamic and static content. Use the 3rd counter (track-sc2) and I'd think you could get your 3 levels in, but I think that would be the limit.
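A rough sketch of that split, alongside the dynamic/dynamic_60 pair above (the paths and names are placeholders):

    frontend fe_moodle
        bind :80
        # route obviously-static requests to their own backend
        acl is_static path_beg /theme/ /pix/ /lib/javascript/
        use_backend static if is_static
        default_backend dynamic

    backend static
        # 10s table for static content, with its own tracking rules; sc0/sc1 are
        # reused here because the limit of three trackers is per serving backend
        stick-table type string len 64 size 100k expire 10m store gpc0,gpc0_rate(10s)
        tcp-request content track-sc0 req.cook(MoodleSession) table static
        tcp-request content track-sc1 req.cook(MoodleSession) table static_60
        acl bump_s10 sc0_inc_gpc0(static)    gt 0
        acl bump_s60 sc1_inc_gpc0(static_60) gt 0
        tcp-request content reject if bump_s10 bump_s60 FALSE
        server app1 10.0.0.10:80 check

    backend static_60
        # dummy backend holding the 60s table for static content
        stick-table type string len 64 size 100k expire 10m store gpc0,gpc0_rate(60s)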
I also tinkered with this problem for some time. The solution from David Ackerman works fine, but it can be simplified by using a second general-purpose counter (gpc1, available since HAProxy 1.9) if you only need two limits. This is my solution to limit the requests per minute and per day:
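Something like the following sketch (HAProxy 1.9 or newer; the thresholds, the source-IP key and the 24h window are placeholders):

    frontend fe_main
        bind :80
        # one table per source IP, with two independent rate windows:
        # gpc0_rate for the per-minute limit, gpc1_rate for the per-day limit
        stick-table type ip size 100k expire 24h store gpc0,gpc0_rate(1m),gpc1,gpc1_rate(24h)
        http-request track-sc0 src
        # deny first, so blocked requests are not counted toward either limit
        http-request deny deny_status 429 if { sc0_gpc0_rate gt 60 } || { sc0_gpc1_rate gt 5000 }
        # count this (allowed) request in both windows
        http-request sc-inc-gpc0(0)
        http-request sc-inc-gpc1(0)
        default_backend be_app

    backend be_app
        server app1 10.0.0.10:80 check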
This solution also doesn't count blocked requests toward the limits.