I'm trying to keep away attackers who probe my website for XSS vulnerabilities. I have found that most of the malicious attempts start with the classic "alert(document.cookie);" test. The site is not vulnerable to XSS, but I want to block the offending IP addresses before they find a real vulnerability, and also to keep the logs clean.
My first thought is to have a script constantly checking the Apache logs for IP addresses that send that probe and feeding those addresses into an iptables DROP rule. With something like this:
grep "alert(document.cookie);" /var/log/httpd/access_log | awk '{print $1}' | sort -u
What would be an effective way to send the output of that command to iptables?
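For context, this is the kind of crude loop I was imagining (untested sketch; it assumes root, the default httpd log path, and that dropping each probing source outright is acceptable):

```shell
# Untested sketch: extract probing IPs from the access log and DROP them.
grep 'alert(document.cookie);' /var/log/httpd/access_log \
  | awk '{print $1}' \
  | sort -u \
  | while read -r ip; do
      # -C tests whether the rule already exists, so re-runs don't add duplicates
      iptables -C INPUT -s "$ip" -j DROP 2>/dev/null \
        || iptables -A INPUT -s "$ip" -j DROP
    done
```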
Thanks in advance for any input!
You'll be happy to know that you don't have to write a program; fail2ban already does this.
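A sketch of what a fail2ban setup for this might look like (the filter name and regex here are illustrative, not copied from a shipped config; adjust the regex to your actual log format before relying on it):

```ini
# /etc/fail2ban/filter.d/apache-xss-probe.conf  (illustrative name)
[Definition]
failregex = ^<HOST> .*alert\(document\.cookie\)
ignoreregex =

# /etc/fail2ban/jail.local
[apache-xss-probe]
enabled  = true
filter   = apache-xss-probe
logpath  = /var/log/httpd/access_log
maxretry = 1
bantime  = 86400
```

fail2ban then watches the log for you and handles the banning and (after `bantime`) the unbanning, so you never have to touch iptables directly.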
Something I do, mainly because of my ignorance of a more elegant solution, is to check my Nginx logs every 4 hours and the mail server logs every 2 minutes for excessive access by individual IPs. I run a few scripts together that:

- scan access.log (or the mail log),
- list off the top 10 IPs ordered by how many hits they have made to the server,
- ban the offenders with iptables and save the resulting rules (iptables.save).
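The counting part of that boils down to a single pipeline; a minimal version (this assumes the client IP is the first whitespace-separated field, as in the default combined log format):

```shell
# Top 10 IPs by hit count. Assumes the client IP is the first field
# of each log line (default Apache/Nginx combined format).
awk '{print $1}' /var/log/nginx/access.log \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -10
```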
Here's what it looks like:

autoBanIPs_mail.sh
checkBadIPs_mail.sh

One thing that is VERY important to note here is that you NEED to set up a whitelist, or you are going to start blocking a lot of authentic IPs from servers that you just receive a lot of email from, or, in the case of other logs, IPs that just hit your server a lot for legitimate reasons. My whitelist is just built into this script by adding grep pipes right after | grep ']' | that look something like this: grep -v 127.0 |.
BlockIP

You need to take the time to teach your server which high-traffic IPs are legit and which aren't. For me this meant spending the first week or so checking my logs manually every couple of hours, looking up high-traffic IPs on iplocation.net, and then adding the legit ones, like Amazon, box.com, or even my home/office IP ranges, to this whitelist. If you don't, you will likely be blocked from your own server, or you are going to start blocking legit mail/web servers and cause interruptions in email or traffic.
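As a sketch of what those whitelist pipes look like in context (the patterns below are placeholders, not my real ranges; substitute your own trusted sources):

```shell
# Hypothetical whitelist fragment: discard trusted sources before
# anything gets counted or banned. Example patterns only.
grep ']' /var/log/mail.log \
  | grep -v '127\.0\.'   \
  | grep -v '192\.168\.' \
  | sort | uniq -c | sort -rn
```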
I have some logs checked every 2 minutes, mainly my ssh auth log and the mail log, as they were getting pounded :(.
I set up specific scripts for each log file, although it would be easy enough to adapt them from the manual script I use myself when I want to inspect logs. It looks like this:
This requires two inputs when run: the log file you want to scan, and how many minutes into the past you want to scan.
So if I wanted to check mail.log for the IP counts, say, 75 minutes into the past, I would run:
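A rough sketch of the idea behind that two-argument checker (this is a reconstruction, not my exact file; it assumes syslog-style timestamps like "Jun  1 12:34:56" and GNU date, and it is slow on big logs):

```shell
#!/bin/sh
# Sketch of a two-argument log checker:
#   $1 = log file to scan, $2 = minutes back to look.
# Assumes syslog-style timestamps ("Jun  1 12:34:56") and GNU date.
LOG=$1
MINUTES=$2
SINCE=$(date -d "-$MINUTES minutes" '+%s')

while read -r line; do
  # The first three fields are the syslog timestamp
  ts=$(echo "$line" | awk '{print $1, $2, $3}')
  when=$(date -d "$ts" '+%s' 2>/dev/null) || continue
  [ "$when" -ge "$SINCE" ] && echo "$line"
done < "$LOG" \
  | grep -o '[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}' \
  | sort | uniq -c | sort -rn | head -10
```

You'd invoke it as, for example, ./checkBadIPs_mail.sh /var/log/mail.log 75 to get the top talkers from the last 75 minutes.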
Again, I know this is crude as hell, and there is probably a nice, clean, efficient tool that does all of this, but I didn't know about it, and this thing has been running for a year or two now and keeping the bad guys at bay. The one thing I would very SERIOUSLY recommend is that you have a proxy or another server in the wings that you can use to access your main server. The reason is that if you are doing web development one day and, out of the blue, you ping yourself 2000 times in 5 hours for some testing, you could get blocked with no way back in except through that proxy.
You can see that in checkBadIPs.sh I've put grep -v 127.0, and in my actual files I have a ton of ignore rules for my own IPs and other trusted IP ranges, but sometimes your IP changes, you forget to update, and then you're locked out of your own server.

Anyways, hope that helps.