Goal: Reduce Server Load from Malicious Requests
Sometimes servers receive a large number of requests for pages that don't exist. Usually the requests are malicious in nature, because the attacker is brute-forcing paths looking for an admin panel or other files.
These requests can severely degrade server performance because Apache spins up so many connections. When tools such as Fail2ban aren't able to stop them (e.g. attacks coming from multiple subnets instead of single IPs), what are the options?
Is there a way to configure Apache or another add-on tool to override the web application and return a non-load-intensive 404 page (a static .html file, etc.)?
It is rather specific to your web app.
If it uses a front-controller pattern, i.e. a single index.php processing a virtually unlimited number of different SEO URLs such as /foo/bar, and there are too many different "category" (first-level) pages to list in the configuration, then you are out of luck there.
However, suppose that your website handles just / (the homepage), plus /shop/<product name> and /blog/<name-of-the-article>.
You can then construct a simple rule (I'm not an Apache man, sorry) in NGINX like the following:
This has the web server deliver a plain "not found" error directly, without invoking PHP at all, for requests which are known not to be part of your app's URL "scheme".
Another technique that is efficient in this regard is honeypots.
Typically bots are probing for vulnerable software/plugins that are not even present on your website to begin with.
You can leverage the fact that you know you don't have them, and instantly ban whoever tries to load those endpoints (e.g. see this honeypot technique for NGINX).
You can implement the same with Apache. Essentially, you list locations which do not belong to your website but are commonly probed for exploits.
E.g. you know you have a Magento website, but plenty of bots will still try to log in to it as if it were WordPress, so /wp-login.php is one of your honeypot locations.
Once those locations are defined in the config, you pass the matching requests to a FastCGI script which talks to your firewall and bans the client immediately.
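As a rough illustration in Apache terms (the directives are real, but the cgi-bin path and the ban.cgi script name are hypothetical; the script itself would add the client IP to your firewall via ipset, nftables, a cloud API, or similar):

```
# Hypothetical vhost-level sketch, assuming mod_alias, mod_cgi and
# mod_rewrite are enabled. ban.cgi is an illustrative name for a tiny
# script that blocks REMOTE_ADDR at the firewall and returns a terse 403.
ScriptAlias "/cgi-bin/" "/usr/local/apache2/cgi-bin/"

RewriteEngine On
# URLs that cannot legitimately exist on this (e.g. Magento) site:
RewriteRule ^/wp-login\.php$    /cgi-bin/ban.cgi [PT,L]
RewriteRule ^/xmlrpc\.php$      /cgi-bin/ban.cgi [PT,L]
RewriteRule ^/wp-admin(/.*)?$   /cgi-bin/ban.cgi [PT,L]
```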
Not only does this avoid any PHP load, it also triggers an instant ban in the firewall.
It therefore fires much faster than Fail2ban (which monitors logs for, e.g., repeated login failures), because the ban happens as soon as the request arrives. It can still work as a complementary measure to Fail2ban, though.
Paging out to swap space is deadly to performance. Tune the Apache MPM so that the maximum possible number of concurrently served connections fits comfortably in memory, notably via the MaxRequestWorkers directive. Doing capacity planning like this helps with legitimate spikes in traffic as well as malicious ones.
Monitor response time. The number of bogus requests doesn't really matter if none of them breach your security, and performance is still acceptable.
Banning IPs helps with repeat offenders. However, there are unlikely to be many repeats, which reduces the effectiveness of fail2ban. Blasting the entire Internet for the web app vulnerability of the month is often more useful to an attacker than scanning you in detail. Also, one attacker can come from many source addresses.
If the web server is still falling over, consider putting it behind a security-oriented proxy.
Move the evaluation to your firewall. What you are looking for is the frequency of 404 responses on top-level URLs, per IP. Examine your current logs to understand the arrival rate of these creatures, and then tailor your rule to their behavior.
You have a couple of options at this point:
* Send them to a slow honeypot (think of a free cloud tier where pages randomly 503).
* At the firewall, 503 the IP for between 5 and 30 minutes.
What you want to do is corral the bot and stop its operator from changing the bot code. The most difficult thing for a developer to check is code which works sometimes... and 503 is a legitimate return code.
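A minimal sketch of the timed-ban half of this, using an nftables set with a timeout; the set name is illustrative, and it has to be fed by whatever detects the 404 bursts (a log watcher, your edge device, etc.). Note that a plain packet filter can only drop or reject traffic, so actually answering 503 would have to happen at a proxy or the web server:

```
# Timed "corral" set: entries added here are blocked for 15 minutes and
# then expire automatically, so the offender is kept out for a while
# without you maintaining a permanent blacklist. In practice this chain
# would be merged into your existing nftables ruleset.
table inet filter {
    set http_offenders {
        type ipv4_addr
        timeout 15m
    }
    chain input {
        type filter hook input priority 0; policy accept;
        ip saddr @http_offenders tcp dport { 80, 443 } drop
    }
}
```

The detector then adds an offender with something like nft add element inet filter http_offenders '{ 203.0.113.7 }'.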