What I basically want is to write all tcpdump-captured packets to a file, rotating every 3 days. So tcpdump should run for 24 hours on day 1 and write its output to Day1.log, and similarly for Day2 and Day3. On the 4th day it should start over and write to Day1 again. This is to check for DDoS attempts on my server and to find out the type of attack, including the attacker's IP, as my machines were DDoS'd in the last 7 days and I expect it to happen again. I know it's done with cron jobs, but I need the actual commands to put there.
I also want to know which IP made the most input in MB/s, as I have high traffic and it would take me almost 6 hours to search those files for the attacker's IP. Is there anything in Wireshark, when analysing those files, that can tell me how much input in MB/s a given IP made to my server? If not, how should I find that?
Edit: --------------------------------------------
You guys are free to post your ideas for countering this as well. All I need is to find the attacker's IP, the packet data he sent, and the input in MB/s made to my server. My clients do not make more than 300 kB/s of input, so if we set a filter to capture anything above 1 MB/s, we could catch it.
It's right there in the man pages: tcpdump has -G. So,
tcpdump -i eth0 -s 65535 -G 86400 -w /var/log/caps/%F.pcap
will write to /var/log/caps/%F.pcap (where %F expands to 2012-05-10, 2012-05-11, 2012-05-12, etc.). Keep in mind it rotates 24 hours from the time you start the capture, so it's not technically per-day unless you start it at midnight. I'm not saying what you're planning to do is a good idea, just that this is the solution you're asking for.
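To line the files up with calendar days, a crontab entry could start the capture at midnight. A sketch (the interface and paths are assumptions; note that a bare % is special in crontab lines and must be escaped):

```
# /etc/crontab sketch: start a rotating capture at 00:00 if one
# isn't already running; \% because cron treats % as a newline.
0 0 * * * root pgrep -x tcpdump >/dev/null || tcpdump -i eth0 -s 65535 -G 86400 -w /var/log/caps/\%F.pcap
```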
Instead of logging all traffic, I would suggest the following: monitor the number of packets sent to your server. If it exceeds a certain threshold, log a couple of thousand packets, then wait for a longer time.
That packet trace should contain plenty of information for analysis, and it will not impose much additional load on your server while everything is fine. You could use the following hacked-together bash code as a starting point (it could be started in screen, for example). Feel free to adapt it to your needs.
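The original script was not preserved here; a minimal sketch of the same idea, assuming interface eth0, a writable /var/log/caps, and placeholder thresholds you would tune to your own traffic:

```shell
#!/bin/bash
# Watch the packet rate and only capture when it spikes (sketch;
# IFACE, THRESHOLD, INTERVAL and COOLDOWN are assumed values).
IFACE=eth0
THRESHOLD=100000   # packets per interval that count as a flood
INTERVAL=10        # seconds between checks
COOLDOWN=600       # seconds to back off after taking a sample

# Received-packet counter for an interface, from /proc/net/dev
# (an optional second argument substitutes a file, for testing).
rx_packets() {
    awk -F: -v ifc="$1" '{ gsub(/ /, "", $1) }
        $1 == ifc { split($2, c, " "); print c[2] }' "${2:-/proc/net/dev}"
}

monitor() {
    local prev cur
    prev=$(rx_packets "$IFACE")
    while true; do
        sleep "$INTERVAL"
        cur=$(rx_packets "$IFACE")
        if (( cur - prev > THRESHOLD )); then
            # Flood suspected: keep a few thousand packets for
            # analysis, then back off so we don't fill the disk.
            tcpdump -i "$IFACE" -s 65535 -c 5000 \
                -w "/var/log/caps/burst-$(date +%F-%H%M%S).pcap"
            sleep "$COOLDOWN"
            prev=$(rx_packets "$IFACE")
        else
            prev=$cur
        fi
    done
}

# monitor   # uncomment (or run inside screen) to start watching
```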
You can certainly get that data from tcpdump, but it's not entirely straightforward.
First, tcpdump writes a special binary capture format rather than a plain log file, so you would need either another instance of tcpdump or Wireshark to analyze the captures. But here's a basic suggestion:
Be warned that tcpdump gives a lot of output, so you'll need a fair amount of free disk space!
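As a sketch of reading such a capture back to find heavy senders (the capture path is an example), tcpdump's one-line text output can be tallied per source IP:

```shell
# Count packets per source IP in tcpdump's one-line-per-packet text;
# in "IP" lines, field 3 is "a.b.c.d.port", so the port is stripped.
top_talkers() {
    awk '$2 == "IP" { split($3, a, ".");
                      count[a[1]"."a[2]"."a[3]"."a[4]]++ }
         END { for (ip in count) print count[ip], ip }' | sort -rn
}

# Typical use against a saved capture (path is an example):
# tcpdump -nn -q -r /var/log/caps/ddos.pcap | top_talkers | head
```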
If you are on Linux you could use logrotate.
Something like
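The configuration itself is not shown above; a sketch of what it might look like, matching the rotation described below (interface and binary paths are assumptions):

```
/var/log/dump.pcap {
    daily
    rotate 2
    nocompress
    missingok
    postrotate
        # tcpdump keeps writing to the renamed file, so kill it and
        # start a fresh capture on the original path (assumed eth0)
        /usr/bin/pkill -x tcpdump || true
        /usr/sbin/tcpdump -i eth0 -s 65535 -w /var/log/dump.pcap >/dev/null 2>&1 &
    endscript
}
```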
This logrotate configuration would go into e.g. /etc/logrotate.d/tcpdump. You probably have either a line in /etc/crontab or, like me, a script /etc/cron.daily/logrotate that calls logrotate. When logrotate processes this file, it renames /var/log/dump.pcap.1 to /var/log/dump.pcap.2, /var/log/dump.pcap to /var/log/dump.pcap.1, and so on. Once all those files are renamed and the oldest ones removed (in this example /var/log/dump.pcap.2 would be removed before renaming .1 to .2), it executes the commands in postrotate. Unfortunately tcpdump does not survive the kill -HUP that works on other daemons like httpd, so this recipe kills it and then starts a new capture. Note that on the first day you may want to start tcpdump manually.
This is untested but should do the trick.
Something like darkstat might be more useful to identify high traffic hosts, although it won't store the actual traffic (it does record port numbers though).
I've used tshark to do this, but you need to be careful.
Or you can set complex output-format options and redirect stdout. The problem is that tshark never discards received packets, so it eventually runs out of memory; shorter runs are better.
Another technique I like is to use iptables and ULOG. There are several ulog daemons around that can send things to ordinary log files. I've also used specter to convert ulog reports to messages.
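For the per-IP volume question specifically, tshark's conversation statistics are handy on a saved capture; a sketch (the capture path is an example):

```
# -q suppresses per-packet output; -z conv,ip prints byte and packet
# totals per address pair, so the top talker is easy to spot.
tshark -q -r /var/log/caps/ddos.pcap -z conv,ip
```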
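As a sketch of that approach (the ULOG target on pre-nftables kernels; the netlink group number and rate limit are arbitrary choices), rate-limiting the rule means a flood logs a sample rather than everything:

```
# Send a sample of inbound packets to netlink group 1, where a ulog
# daemon can write them to a plain file; --ulog-qthreshold batches them.
iptables -A INPUT -m limit --limit 50/second \
         -j ULOG --ulog-nlgroup 1 --ulog-prefix "flood-sample: " --ulog-qthreshold 50
```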