Suppose you have a Windows Server machine running various sensitive services. Suppose that one of these services is pretty simple, maintaining a small amount of information in a text file, but as a result of being badly coded, has an (unknown) arbitrary code execution vulnerability.
Is it possible to set up a user account for that service such that if a hacker were to exploit this vulnerability successfully, the most damage they could do is read/write this text file, mess up this specific service, and perhaps list the files in C:\Windows, but nothing else?
A naive attempt at doing this immediately runs into a problem: anyone in "Users" can write to C:\Program Files, and removing "Users" from that directory's ACL results in a permission error, which makes me wonder whether doing so is a very bad idea.
Or is the game already lost if the attacker can execute arbitrary code, regardless of which user account is used? I've always thought Windows NT descendants make it possible to contain this, but now that I've tried, I'm no longer so sure.
By default, users cannot write to C:\Program Files. They have Read, List Folder Contents, and Read & Execute. If that's not the case on your machine, then someone or something has modified those permissions. A limited user can read much of the filesystem, but can only write to locations it has explicitly been granted access to, such as its own user profile.
If you grant Modify only to that one text file, the only thing that account will be able to write to on the filesystem is that file, plus things in its profile directory (My Documents, etc) which should be of no consequence.
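As a rough sketch of the ACL side of that (the account name and data file path here are made up for illustration), from an elevated command prompt:

    rem Create a limited local account for the service; by default it joins only the Users group
    net user svc_simple * /add

    rem Grant Modify on just the one data file; no other ACLs are touched
    icacls "C:\ServiceData\state.txt" /grant "svc_simple:M"

    rem Confirm what the file's ACL now looks like
    icacls "C:\ServiceData\state.txt"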
If the built-in Users group has Modify permissions all over your filesystem, then that is non-standard. Out of the box, limited user accounts can do very little damage.
By itself, an arbitrary code execution bug is not especially harmful if the service account running the process has limited privileges. The problem is that there are so many privilege escalation exploits out there that once you can execute arbitrary code, you can simply run something that lets you break out of your privilege level. In isolation, arbitrary code execution isn't a big deal, but in the real world it's almost always bundled with a privilege escalation exploit. So, yes, I'd be concerned.
Some quite interesting answers here.
Taking an empirical stance, with numerous post-penetration-test "things to fix" lists under my belt, I would actually say that Microsoft have done a great job in recent years of providing the tools and options to harden a server very well.
Of course, MS still fuzz their code, and they and the community continue to find remote execution / privilege escalation exploits, but I've got to say, their patching is on the ball compared to some other vendors (SonicWall, Tivoli and Oracle spring to mind).
My recommendations would be:
It's important to remember that there is no such thing as a completely secure OS, application or network. It's all about layers of prevention, and identifying when things look different. Don't get sold on Intrusion Detection either, unless, of course, you a) are susceptible to hyped-up sales pitches, b) have lots of free time on your hands and c) have plenty of spare $$$.
Finally, the most devious "attacks" these days aren't about disruption; quite the opposite. The focus (and funding!) has switched to data exfiltration.
In the Linux world, we would use SELinux or another mandatory access control mechanism to mitigate this sort of threat.
Windows doesn't have anything quite so robust, but since Vista/2008 it does have a basic integrity mechanism which you might be able to use. (Though, this has a rather high learning curve and explaining it fully would require more length than is permitted here.)
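If you do want to experiment with it, integrity labels can be inspected and set with icacls. The rough idea, using hypothetical paths, is that a process whose executable carries a Low label starts at Low integrity, and the default no-write-up policy then blocks it from writing to anything labelled (or implicitly treated as) Medium or higher, regardless of its NTFS permissions:

    rem Label the service executable Low so processes started from it run at Low integrity
    icacls "C:\SimpleSvc\service.exe" /setintegritylevel Low

    rem Label the one data folder Low as well, otherwise the Low-integrity process cannot write its own file
    icacls "C:\ServiceData" /setintegritylevel Low

    rem Show the labels; objects with no explicit label are treated as Medium
    icacls "C:\ServiceData"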
I think your best short-term mitigation would be to isolate the service in a virtual machine.
If "execute arbitrary code" means the process can create directories, the attacker may be able to create them in the current directory, in the service account's user profile, or in the root of the C:\ drive, or simply search for any directory where the account is allowed to create directories.
Last time I checked, Windows grants permission to create directories in the root of C:\ by default. You may not see this in the GUI; you would need to view the Advanced property page or use ICACLS to get a complete list of permissions.
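To check this on your own machine, dump the root ACL from a command prompt; on a stock install there is typically an entry along the lines of NT AUTHORITY\Authenticated Users:(AD), i.e. permission to add subdirectories:

    rem List the full ACL on the drive root, including entries the basic Security tab does not show
    icacls C:\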
Even if the C:\ root directory permissions have been hardened, it is trivial to search every directory on C:\ and test for one where the permissions allow the service account to create directories. Chances are there will be one.
If the exploited process can create directories somewhere on C:\, then it is fairly straightforward to disable the system by creating millions or billions of empty directories and subdirectories.
Empty directories contain zero bytes of data, so they are not subject to quotas. These directories are also stored directly in the MFT (due to their small size), so even if the process could be stopped and the directories deleted, the MFT is effectively trashed - so large and/or fragmented that the system may need to be restored from backup.
Use DEP for all processes. It'll stop a majority of exploits, but there are plenty that can defeat DEP anyway.
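For reference, the machine-wide DEP policy can be set with bcdedit; the change takes effect after a reboot:

    rem Apply DEP to all processes with no opt-out (reboot required)
    bcdedit /set nx AlwaysOn

    rem Inspect the current boot entry to confirm the nx setting
    bcdedit /enum {current}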
Preventing abuse has to be done from the ground up - there is no point locking down a single account on an insecure server with a large attack surface.
So:
1) Make sure the server is patched.
2) You can use the SCW (Security Configuration Wizard) to harden the server. It will do a faster and better job than most admins would.
It allows you to undo the changes it made, but it will only undo one step. So if you run it and lock things down, then run it again (say, because you missed something), and only afterwards notice something is broken, and the breakage happened during the first lockdown, you can't easily undo it; you have to fix it manually.
So, after running it and applying it, test functionality thoroughly.
3) Then, make sure you are using accounts which aren't admins, Power Users, etc., and that they have only the NTFS permissions needed for the bare minimum of what they have to do - nothing else (see the sketch after this list).
4) You can also use local security policy (or domain policy, if the server is a domain member) to lock down other aspects of the GUI.
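To tie point 3 back to the original question, a minimal sketch of pointing a service at such a limited account (the service and account names are hypothetical; note that sc requires a space after each =):

    rem Run the existing service under the limited local account
    sc config SimpleSvc obj= ".\svc_simple" password= "TheAccountPassword"

    rem Confirm the service configuration
    sc qc SimpleSvc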
You could certainly try to create a locked-down user, similar to the built-in accounts that come with Windows, that can't be logged into interactively and only has access to certain files on the machine. The con is that a privilege escalation exploit could still work from that user, depending on the situation.
With Linux, we chroot-jail processes and services like this as part of a security mitigation strategy. I'm not a Windows guy, but how would you do this in Windows?
One thing you could do to sandbox the service is run it in a VM. VirtualBox is my choice, since you can install it on the same box if you want and make it start with the server, even headless, so it runs almost like a service. It's also free. It's very difficult for an exploit to escape the VM. The con to this approach is that you would be running another whole instance of Windows, which would itself need to be kept up to date and would consume resources.
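If you go this route, the headless start can be scripted, for example from a task that runs at server boot (the VM name here is made up):

    rem Start the guest without a console window (VM name is hypothetical)
    VBoxManage startvm "SimpleSvcVM" --type headless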
Another thing that would work is a dedicated sandbox program such as Sandboxie, which can sandbox any process and make it harder for a compromised process to escape.