I am the reluctant "sysadmin" of a medium-sized cooking blog that runs on a t2.small EC2 instance and uses WordPress on the LAMP stack. Despite my being a fairly embarrassingly bad "sysadmin", it actually runs really reliably and efficiently while shipping a LOT of traffic. I've done everything I can, but every now and then (about once a year) someone manages to hammer it with a DDoS-type attack. What tends to happen is that the Apache server dies and anyone requesting anything from the website gets an HTTP timeout. Restarting the httpd service immediately fixes the problem.
What I'm thinking of doing is scheduling a cron job to call the website every 5ish minutes, and if it gets a timeout more than twice, restart the httpd service and send me an email. Now, this is NOT one of those "I can't set up my Apache server properly so I'll restart it every hour" solutions. Apache is tweaked and runs beautifully when it isn't being hacked, and I've put in as much firewall anti-hacking best practice as I possibly can. My rationale is basically "hackers and crazies are always finding new ways to hammer me, and when they do I'd rather have 10 minutes of outage and an email than potentially hours of downtime (if I'm asleep or away)". The question is: is this a good solution, or is there a better, more sysadminny way to do this using some existing technology?
NOTE: When I say the Apache server dies, it actually looks fine when you run top -U apache. What it does is write about 10 lines of

(12)Cannot allocate memory: AH00159: fork: Unable to fork new process

to the error_log and then just do nothing at all until restarted.
You can do this pretty easily by checking the return code of a curl command. The idea is that if the return code does not equal 0, i.e., the request failed, then restart Apache.
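A minimal sketch of such a check, assuming the site answers on http://localhost/ and that Apache is restarted with service httpd restart (the URL, timeout, and restart command are placeholders for your setup):

```
#!/bin/sh
# Probe the site; a non-zero curl exit code (timeout, connection refused, etc.)
# means the request failed, so restart Apache and print a note for the log.
if ! curl --silent --max-time 30 --output /dev/null http://localhost/; then
    echo "$(date): site did not respond, restarting httpd"
    service httpd restart
fi
```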
You would put it in cron something like this:
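For instance, assuming the check above is saved as /usr/local/bin/check_apache.sh (the path and log file are placeholders), root's crontab could contain:

```
# Run the check every 5 minutes and append its output to a log file
*/5 * * * * /usr/local/bin/check_apache.sh >> /var/log/check_apache.log 2>&1
```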
This would make sure that it is logging the results.
Note: This is a band-aid. It is a good idea to get on the box and make sure this isn't being triggered repeatedly. Perhaps put in a "send me an email" line?
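One way to do that, assuming a working local MTA and the mailx-style mail command (the address is a placeholder), is to add a line like this right after the restart:

```
echo "httpd restarted at $(date) after a failed check" | mail -s "Apache restarted" you@example.com
```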
For a while, Microsoft was ignoring robots.txt and hammering my server with their search engine crawler. When things got slow, I would see as many as 200 httpd instances, most of them Microsoft crawlers! So I implemented the following script, which checks to see how many httpd instances are running, and if over a certain number, kills them all.
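The original script isn't shown here, but a rough sketch of that kind of watchdog might look like the following (the httpd process name is an assumption, and the restart after the kill is added in this sketch so the site comes back up; the text above only describes killing the processes):

```
#!/bin/sh
# Rough sketch of the watchdog described above. The "httpd" process name and
# the restart after the kill are assumptions; adjust for your own setup.
while true; do
    if [ "$(pgrep -c httpd)" -gt 30 ]; then
        killall httpd              # too many instances: kill them all
        service httpd restart      # assumption: bring Apache back afterwards
    fi
    sleep 30                       # seconds between checks
done
```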
This is pretty unsophisticated. You can change the allowed number of httpd instances on line 5 from "30" to anything you like. You can change the time between checks by changing "sleep 30" to any number of seconds you prefer. I looked into only killing Microsoft crawler instances, but it would have taken parsing the Apache logs, which I didn't feel like getting into.