(To address the "is a duplicate" flag: I don't see many requests; the number is rather small. Instead, each request downloads a lot of data.)
The server I'm talking about has 2x10 GBit/sec of Internet connectivity, with a backend of 40 GBit/sec. It serves around 20 TByte of data to the public, using nginx/vsftpd/rsyncd on a Debian Stable system. In addition, apache2 is used to serve some non-static content, but this can be disregarded.
The hardware is beefy enough to serve up to around 18 GBit/sec (as observed once), and traffic is free. As the server is a mirror of open source software and other public software, there's also not an issue of downtime being a critical problem.
However, I observe a specific pattern of DDoS attack I'd like to stop affecting the server. Whenever the attack is ongoing, most of the DVD ISOs of Debian (around 300 GByte, so way more than fits in RAM) are downloaded by multiple hosts, with individual files being downloaded repeatedly. Depending on how organized the attack is, this increases bandwidth usage quite a lot and puts some stress on the hardware, while degrading the experience for legitimate users of the server at the same time.
In these attacks, typically 2-3 networks are coordinated, each downloading files as described. Most of the time it seems one-click hosters or file caches of some sort are being abused, tricked into downloading the same file over and over, with this automated to fetch a number of different files as part of the attack.
Is there any way I can configure nginx to auto-ban certain IP ranges? Or limit traffic rates to, say, 1 GBit/sec for these networks (for some time)?
I don't want to impose a general limit, as the server actually should be used, even for high-speed transfers (mirror to mirror, most likely).
As a remark, a clever attacker, whatever the motivation might be, could start to abuse FTP/RSYNC instead of HTTP, working around the solutions this question might produce.
Currently, when I realize a DDoS attack is going on, I scan the log files, identify the abusing networks, and ban them manually.
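That manual process can be sketched as a short shell pipeline. The paths and sample log lines below are illustrative stand-ins, not taken from the server in question:

```shell
# Stand-in for /var/log/nginx/access.log; on the real server the actual log is used.
cat > /tmp/access.log.sample <<'EOF'
198.51.100.7 - - [01/Jan/2024:00:00:01 +0000] "GET /debian-cd/dvd-1.iso HTTP/1.1" 200 4700000000
198.51.100.9 - - [01/Jan/2024:00:00:02 +0000] "GET /debian-cd/dvd-2.iso HTTP/1.1" 200 4700000000
203.0.113.5 - - [01/Jan/2024:00:00:03 +0000] "GET /index.html HTTP/1.1" 200 1024
EOF

# Group client IPs into /24 networks and rank them by request count.
awk '{ split($1, o, "."); print o[1] "." o[2] "." o[3] ".0/24" }' /tmp/access.log.sample \
  | sort | uniq -c | sort -rn

# A network identified this way can then be banned by hand, e.g.:
#   iptables -I INPUT -s 198.51.100.0/24 -j DROP
```

For a bandwidth-heavy attack like this one, ranking by bytes transferred (the last field of the common log format) instead of request count may identify the abusing networks more reliably.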
You can use the nginx limit_req module and also limit_conn.
Both modules can limit the number of connections from a specific source and the rate of requests made per IP, and in your case this may be very helpful.
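A minimal sketch of the request-rate side; the zone name, zone size, rate, and location are illustrative values, not from this answer:

```nginx
http {
    # Track the request rate per client IP in a 10 MB shared zone (1 request/sec).
    limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;

    server {
        location /debian-cd/ {
            # Allow a short burst of 5 requests, then reject the excess
            # (HTTP 503 by default; see limit_req_status).
            limit_req zone=perip burst=5 nodelay;
        }
    }
}
```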
As per the request, nginx can also be used to limit bandwidth.
In this example:

limit_rate_after 100m;
limit_rate 150k;

nginx will throttle each connection (per connection, be aware of this) to a maximum of 150 KByte/sec once 100 MByte have been transferred. So, e.g., if you need to allow up to 100m at full bandwidth and then restrict the speed, this can help you. Be aware that this solution limits the nginx download speed per connection, so if one user opens multiple files, he will be able to download 150k times the number of connections he opened. If you need to set a limit on the number of connections, you can do it with the limit_conn_zone and limit_conn directives (limit_zone is the deprecated name of the former). Example:
Inside your server block configuration:
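Presumably a limit_conn_zone definition, along these lines; the zone name and size are assumptions:

```nginx
# 10 MB shared memory zone, keyed by client IP, for counting active connections.
limit_conn_zone $binary_remote_addr zone=addr:10m;
```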
Inside your location block configuration:
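Presumably the per-location directives, matching the "10 connections per IP with 1 Mbit each" description below. Note that nginx rates are expressed in bytes per second, so roughly 1 MBit/sec corresponds to about 128k:

```nginx
# At most 10 simultaneous connections per client IP (zone "addr" from the server block).
limit_conn addr 10;
# Throttle each of those connections to ~128 KByte/sec (~1 MBit/sec).
limit_rate 128k;
```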
In this example, it would allow 10 connections per IP with 1 Mbit each.
You could use fail2ban with configuration scanning nginx access logs. There are lots of guides around the web that help with this.
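A jail for this might look roughly like the following; the jail name, thresholds, and the referenced filter are assumptions, and a matching regex in filter.d/nginx-req-limit.conf would still have to be written:

```ini
# /etc/fail2ban/jail.d/nginx-req-limit.local (illustrative values)
[nginx-req-limit]
enabled  = true
filter   = nginx-req-limit
logpath  = /var/log/nginx/access.log
# Ban an IP that produces 100 matching requests within 10 minutes, for 2 hours.
findtime = 600
maxretry = 100
bantime  = 7200
```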
It's really hard to block these kinds of attacks without specialized software or hardware that does pattern recognition. You could probably use some kind of blacklist, although I'm not sure there is one that fits your use case.
Another solution would be some kind of JavaScript wall that blocks downloads unless the browser executes JavaScript, but that's bad practice and blocks legitimate users of curl/wget.