I just got a dedicated server with Windows 2008 Standard Edition and am trying to do the necessary configuration to run my web app on it.
Was wondering, is it a good idea to install an antivirus on the web server? In the app, users can't upload any files except images (and these are checked for being images in the app code before being saved on the server). I'm encouraged not to install an antivirus so as not to affect performance or cause any trouble with the app; will I miss anything by doing this?
Thanks
A well-run web server should IMHO not have a commercial anti-virus (AV) package installed. The kind of Office macro viruses and mass-market trojans that AV packages are optimized for are a poor match for the threats a web server faces.
What you should do is install a host-based intrusion detection system (H-IDS).
There is a lot of confusion about the terms; the words are often used in many different ways here. To be clear, what I mean by an H-IDS is software that monitors the files on the server and warns you when they change.
Actually, a good H-IDS will do a bit more than this, such as monitoring file permissions, Registry access, etc., but the above gives the gist of it.
A host intrusion detection system takes some configuration, since it can generate a lot of false alarms if not set up properly. But once it's up and running, it will catch more intrusions than an AV package. In particular, an H-IDS should detect a one-of-a-kind hacker backdoor, which a commercial AV package probably will not.
An H-IDS is also lighter on server load, but that's a secondary benefit -- the main benefit is a better detection rate.
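The core of the file-monitoring idea can be sketched in a few lines: take a baseline of cryptographic hashes of the files you care about, then periodically re-hash and report anything added, removed, or modified. This is only an illustrative sketch, not a replacement for a real H-IDS; the use of SHA-256 and the function names are my own assumptions.

```python
import hashlib
import os

def snapshot(root):
    """Map each file under root to the SHA-256 hash of its contents."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def diff(baseline, current):
    """Report files added, removed, or modified since the baseline was taken."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    modified = sorted(p for p in baseline
                      if p in current and baseline[p] != current[p])
    return added, removed, modified
```

A real H-IDS would additionally store the baseline somewhere an attacker can't reach, and watch permissions and the Registry as noted above.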
Now, if resources are limited and the choice is between a commercial AV package and doing nothing, then I'd install the AV. But know that it isn't ideal.
If it's Windows based, which you said it is, I would. I would also try finding some form of host intrusion detection (a program that monitors/audits files that are changing on the server and alerts you to the changes).
Just because you aren't changing files on the server doesn't mean that there isn't a buffer overflow or vulnerability that will allow someone else to change files on the server remotely.
When a vulnerability is found, an exploit usually appears in the window between discovery and a fix being distributed, and then there's another window until you actually get the fix and apply it. In that time there's usually some form of automated exploit available, and script kiddies are running it to expand their botnets.
Note that this also affects AVs, since the cycle is: new malware is created, the malware is distributed, a sample goes to your AV company, the AV company analyzes it and releases a new signature, you update the signature, and you're supposedly "safe" -- then the cycle repeats. There's still a window where it spreads automatically before you're "inoculated".
Ideally you could just run something that checks for file changes and alerts you, like Tripwire or similar, and keep the logs on another machine that is isolated from everyday use, so that if the system is compromised the logs aren't altered. The trouble is that by the time a file is detected as new or altered, you are already infected, and once you're infected or an intruder is in, it's too late to trust that the machine hasn't had other changes. If someone has cracked the system, they could have altered other binaries.
Then it becomes a question of: do you trust the checksums, the host intrusion logs, and your own skill at cleaning up everything, including rootkits and Alternate Data Stream files that may be in there? Or do you follow best practice, wipe the machine, and restore from backup, since the intrusion logs should at least tell you when it happened?
Any system connected to the Internet and running a service can potentially be exploited. If you have a system connected to the Internet but not actually running any services, I'd say you're most likely safe. Web servers do not fall under this category :-)
It depends. If you are not executing any unknown code, then it may be unnecessary.
If you have a virus infected file, the file itself is harmless while it's on the hard drive. It only gets harmful once you execute it. Do you control everything that gets executed on the server?
A slight variation is the upload of files. They are harmless to your server -- if I upload a manipulated image or a trojan-infested .exe, nothing will happen (unless you execute it). However, if other people then download those infected files (or if the manipulated image is used on a page), their PCs might become infected.
If your site allows users to upload anything that is shown to or downloadable by other users, then you might want to either install a virus scanner on the web server or have some sort of "virus scanning server" in your network that scans every file.
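Separately from virus scanning, the "check that uploads really are images" step the question mentions is worth doing by inspecting the file's magic bytes rather than trusting its extension. A minimal sketch; the set of accepted formats and the function name are assumptions for illustration:

```python
# Magic-byte signatures for a few common image formats.
IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def detect_image_type(data):
    """Return the image type if data starts with a known signature, else None."""
    for signature, kind in IMAGE_SIGNATURES.items():
        if data.startswith(signature):
            return kind
    return None
```

Note that this only rejects files that aren't images at all; a syntactically valid image can still carry malicious content, so such a check complements scanning rather than replacing it.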
A third option would be to install Anti-Virus but disable On-Access scanning in favor of a scheduled scan during off-peak times.
And to completely turn this answer 180° around: it's usually better to be safe than sorry. If you work on the web server, it's easy to accidentally click a bad file and wreak havoc. Sure, you can connect to it a thousand times over RDP without touching any file, but the 1001st time you will accidentally execute that .exe and regret it, because you cannot even know for sure what a virus does (nowadays they download new code from the Internet as well), and you would have to perform some intensive forensics on your whole network.
Yes, always. Quoting my answer from superuser:
If it's connected to any machines that may be connected to the Internet, then absolutely yes.
There are many options available. While I personally don't like McAfee or Norton, they are out there. There's also AVG, F-Secure, ClamAV (though the win32 port is no longer active), and I'm sure hundreds more :)
Microsoft has even been working on one - I don't know if it's available yet outside of beta, but it does exist.
ClamWin, mentioned by @J Pablo.