What are the tell-tale signs that a Linux server has been hacked? Are there any tools that can generate and email an audit report on a scheduled basis?
Keep a pristine copy of critical system files (such as ls, ps, netstat, md5sum) somewhere, with an md5sum of them, and compare them to the live versions regularly (see the sketch after this list). Rootkits will invariably modify these files. Use these copies if you suspect the originals have been compromised.
AIDE or Tripwire will tell you about any files that have been modified - assuming their databases have not been tampered with.
Configure syslog to send your logfiles to a remote log server where they can't be tampered with by an intruder. Watch these remote logfiles for suspicious activity.
Read your logs regularly - use logwatch or logcheck to synthesize the critical information.
Know your servers. Know what kinds of activities and logs are normal.
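As a minimal sketch of the checksum comparison described in the first point above - the baseline location is a placeholder, and it should live somewhere the server cannot overwrite (read-only media or another host):

    # Build the baseline once, from a known-good state (exact binary paths vary by distro):
    md5sum /bin/ls /bin/ps /bin/netstat /usr/bin/md5sum > /mnt/readonly/baseline.md5

    # Later, verify the live binaries against the baseline; -c reports any mismatch:
    md5sum -c /mnt/readonly/baseline.md5

If you suspect the live md5sum binary itself, run the check with the pristine copy mentioned above.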
Some things that have tipped me off in the past:
Weird segfaults, e.g. from standard utilities like ls (this can happen with broken rootkits)
Hidden directories in / or /var/ (most script kiddies are too stupid or lazy to cover their tracks)
netstat shows open ports that shouldn't be there (see the sketch after this list)
Daemons in the process list for services you normally run a different flavour of (e.g. bind, when you always use djbdns)
Additionally, I've found that there's one reliable sign that a box is compromised: if you have a bad feeling about the diligence (with updates, etc.) of the admin from whom you inherited a system, keep a close eye on it!
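For the open-ports sign, one low-tech check is to save a baseline of listening sockets while the box is known-good and diff against it later. ss is used here in case netstat itself has been replaced (the baseline path is a placeholder, and a kernel-level rootkit can of course lie to ss as well):

    # Record listening TCP/UDP sockets while the machine is known-good:
    ss -tulpn | sort > /root/ports.baseline

    # Later, compare the current listeners against the baseline (expect some noise
    # from changing PIDs; eyeball the diff rather than trusting it blindly):
    ss -tulpn | sort | diff /root/ports.baseline -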
You don't.
I know, I know - but it's the paranoid, sad truth, really ;) There are plenty of hints of course, but if the system was targeted specifically it might be impossible to tell. It's good to understand that nothing is ever completely secure, but we still need to work toward more secure, so I will point at all the other answers instead ;)
If your system was compromised, none of your system tools can be trusted to reveal the truth.
Tripwire is a commonly used tool - it notifies you when system files have changed, although obviously you need to have it installed beforehand. Otherwise items such as new user accounts you don't know about, weird processes and files you don't recognize, or increased bandwidth usage for no apparent reason are the usual signs.
Other monitoring systems such as Zabbix can be configured to alert you when files such as /etc/passwd are changed.
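If you don't run a full monitoring system, a rough stand-in for that /etc/passwd alert is a cron job that mails a diff whenever the file changes. This is only a sketch; the snapshot path, script location, and e-mail address are placeholders:

    #!/bin/sh
    # e.g. saved as /etc/cron.daily/passwd-watch (assumed location)
    SNAP=/root/passwd.snapshot
    [ -f "$SNAP" ] || cp /etc/passwd "$SNAP"
    if ! cmp -s /etc/passwd "$SNAP"; then
        diff -u "$SNAP" /etc/passwd | mail -s "/etc/passwd changed on $(hostname)" admin@example.com
        cp /etc/passwd "$SNAP"
    fi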
There's a method of checking for hacked servers via kill.
Essentially, when you run "kill -0 $PID" you are sending a no-op signal to process identifier $PID. If the process is running, the kill command will exit successfully. (FWIW, since you're passing a no-op signal, nothing will happen to the process.) If the process isn't running, the kill command will fail with a non-zero exit status.
When your server is hacked and a rootkit is installed, one of the first things it does is tell the kernel to hide the affected processes from the process tables, and it can do all sorts of other things in kernel space to muck around with the processes. This means that:
a) This check isn't an exhaustive one, since well-coded/intelligent rootkits will ensure that the kernel replies with "process doesn't exist", making this check useless.
b) Either way, when a hacked server has a "bad" process running, its PID usually won't show under /proc.
So, if you've read this far, the method is to kill -0 every possible process ID on the system (anything from 1 to /proc/sys/kernel/pid_max) and see if there are processes that are running but not reported in /proc.
If some processes do come up as running but are not reported in /proc, you probably do have a problem any way you look at it.
Here's a bash script that implements all that - https://gist.github.com/1032229 . Save it to a file and execute it; if you find a process that answers kill -0 but is unreported in /proc, you have a lead to start digging into.
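For reference, here's a minimal sketch of the same idea (this is not the linked gist). Run it as root, otherwise permission errors make other users' processes look absent, and re-check any hits by hand, since short-lived processes can race the test:

    #!/bin/bash
    max=$(cat /proc/sys/kernel/pid_max)
    for (( pid=1; pid<=max; pid++ )); do
        # kill -0 succeeds if the kernel says the PID exists...
        if kill -0 "$pid" 2>/dev/null && [ ! -d "/proc/$pid" ]; then
            # ...but there is no matching /proc entry: worth investigating
            echo "PID $pid answers kill -0 but has no /proc entry"
        fi
    done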
From How can I detect unwanted intrusions on my servers?
Use an IDS
SNORT® is an open source network intrusion prevention and detection system utilizing a rule-driven language, which combines the benefits of signature, protocol and anomaly based inspection methods. With millions of downloads to date, Snort is the most widely deployed intrusion detection and prevention technology worldwide and has become the de facto standard for the industry.
Snort reads network traffic and can look for things like "drive-by pen testing", where someone just runs an entire Metasploit scan against your servers. Good to know about these sorts of things, in my opinion.
Use the logs...
Depending on your usage you can set it up so you know whenever a user logs in, or logs in from an odd IP, or whenever root logs in, or whenever someone attempts to log in. I actually have the server e-mail me every log message higher than Debug. Yes, even Notice. I filter some of them of course, but every morning when I get 10 emails about stuff, it makes me want to fix it so it stops happening. (A scheduled, e-mailed summary is sketched after this list.)
Monitor your configuration - I actually keep my entire /etc in subversion so I can track revisions.
Run scans. Tools like Lynis and Rootkit Hunter can give you alerts to possible security holes in your applications. There are programs that maintain a hash or hash tree of all your bins and can alert you to changes.
Monitor your server - Just like you mentioned diskspace - graphs can give you a hint if something is unusual. I use Cacti to keep an eye on CPU, network traffic, disk space, temperatures, etc. If something looks odd it is odd and you should find out why it's odd.
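To get the scheduled, e-mailed audit report the question asks about, one option is to drive logwatch (mentioned in an answer above) from cron. Many distributions already install a daily logwatch job; the entry below just makes the recipient and detail level explicit, and the file path, binary path, and address are placeholders:

    # e.g. /etc/cron.d/daily-audit-report (assumed path)
    # Mail yesterday's logwatch summary to the admin every morning at 06:30.
    30 6 * * * root /usr/sbin/logwatch --output mail --mailto admin@example.com --detail high --range yesterday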
I'd just like to add to this:
Check your bash history; if it's empty and you haven't unset it or emptied it yourself, there's a good possibility someone has compromised your server.
Check last. Either you will see unknown IPs or it will look very empty.
Then, as the accepted answer stated, system files are often changed; check their modification dates. However, attackers often tamper with the modification dates too.
They often install another version of ssh running on a random port, often hidden in some really odd places, and it will normally be renamed to something other than ssh. So check netstat (it might not work, as they often replace it) and use iptables to block any unknown ports.
In any case, this is a situation where prevention is better than cure. If you have been compromised, it's best to just format and start again; it's almost impossible to confirm you have successfully cleaned the hack.
Take note of the following to prevent your server from being compromised (a sample sshd_config sketch follows the list):
Change ssh port
Prevent root from being able to log in
Only allow certain users
Prevent password login
Use ssh keys, preferably password-protected keys
Where possible, blacklist all IPs and whitelist only the required IPs.
Install and configure fail2ban
Use tripwire to detect intrusions
Monitor the number of users logged in with Nagios or Zabbix. Even if you get notified every time you log in, at least you will know when someone else is playing.
If possible, keep your server on a VPN and only allow SSH via the VPN IP. Secure your VPN.
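A minimal sshd_config sketch covering several items from this list; the port number and usernames are placeholders, not recommendations:

    # /etc/ssh/sshd_config (excerpt)
    Port 2222                   # non-default SSH port
    PermitRootLogin no          # prevent root from logging in directly
    AllowUsers alice bob        # only allow certain users
    PasswordAuthentication no   # prevent password login...
    PubkeyAuthentication yes    # ...and use SSH keys instead

Reload the SSH daemon after editing and keep an existing session open while you test, so a mistake doesn't lock you out.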
It's worthwhile noting that once they're into one server, they will check through your bash history and look for other servers you've connected to via ssh from that server, and they will then attempt to connect to those servers. So if you get brute-forced due to a poor password, it's very possible they will be able to connect to those other servers and compromise them too.
It's an ugly world out there; I reiterate, prevention is better than cure.
After searching around a bit, there's also this; it does what I've listed above, amongst some other stuff: http://www.chkrootkit.org/ and http://www.rootkit.nl/projects/rootkit_hunter.html
I'll second the responses given here and add one of my own: checking whether any of your main system files have changed in the last 2 days will give you a quick indication that something is wrong.
This is from an article on hack detection, How to detect if your server has been hacked.
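The command itself didn't survive the copy here; as an assumption, a check along those lines (key system directories, files modified within the last 2 days) might look like this:

    # Assumed reconstruction, not the article's exact command:
    find /etc /bin /sbin /usr/bin /usr/sbin -type f -mtime -2 -ls

Pipe the output to mail -s from a daily cron job if you also want the scheduled, e-mailed report the question asks for.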
You should check out GuardRail. It can scan your server on a daily basis and tell you what's changed in a nice visual way. It doesn't require an agent and can connect over SSH so you don't need to junk up your machine and resources with an agent.
Best of all, it's free for up to 5 servers.
Check it out here:
https://www.scriptrock.com/