With my IT outfit, we have templates to deploy servers with a dinky C: drive/partition (10GB) and a larger D: drive/partition. Why do this? Windows (at least until recently, and even then only minimally) makes no real use of dynamic mount points in general server deployments.
Edit
A synopsis of the comments below:
- It's faster to recover a smaller partition. This includes NTFS corruption, which would be confined to a particular partition instead of messing up the entire system.
- You get some protection from runaway processes. This includes the ability to set quotas.
- It provides some cost savings for RAID configurations.
- It's a religious holdover from the days before virtualization, RAID, and high-bandwidth networks.
Aside from #3 (which, I think, is an argument against partitions), I still see no reason to have separate partitions. If you want to protect your data, wouldn't you just put it on another set of real or virtual disks, or otherwise map to a shared resource somewhere else (NAS, SAN, whatever)?
To stop data filling up your operating system volume and crashing the server.
File servers benefit from separate volumes if you use quotas, as these are usually set per volume (e.g. put your users' home directories on one volume, profiles on another, company data on another, etc.).
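For what it's worth, here's a minimal sketch of how those per-volume quotas might be scripted with the built-in fsutil tool (assumptions: a Windows host, an elevated prompt, data volumes D: and E:, and a hypothetical account CONTOSO\jsmith; FSRM is the friendlier option if you have it):

```python
import subprocess

VOLUMES = ["D:", "E:"]          # data volumes; quotas are per volume, so C: is untouched
THRESHOLD = str(4 * 1024**3)    # warn at 4 GB (value in bytes)
LIMIT = str(5 * 1024**3)        # hard limit at 5 GB (value in bytes)
USER = r"CONTOSO\jsmith"        # hypothetical account

for vol in VOLUMES:
    # Enable quota tracking and enforcement on this volume only.
    subprocess.run(["fsutil", "quota", "enforce", vol], check=True)
    # Set the user's warning threshold and hard limit on this volume only.
    subprocess.run(["fsutil", "quota", "modify", vol, THRESHOLD, LIMIT, USER], check=True)
    # Show the resulting quota entries for the volume.
    subprocess.run(["fsutil", "quota", "query", vol], check=True)
```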
p.s. 10GB sounds too small for a system volume. After several years of Windows updates and service packs, it will soon fill up.
Restoring from backup becomes easier when program/data files are separated from the OS installation. I like to give at least 25GB to the OS partition, but the point remains the same.
Typically I don't find an advantage to making partitions.
Applications (Microsoft and others) are notorious for demanding space on %SystemDrive% even if they allow you to choose a destination directory. Since the Automatic Updates service can't be told not to save backups of patched files, the size of the "$Uninstall$" directories under %SystemRoot% grows and grows. An artificially constrained %SystemDrive% has been nothing but make-work for me.
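To put a number on that bloat, here's a minimal read-only sketch that totals up those backup folders; it assumes the pre-Vista "$NtUninstallKB...$" naming convention under %SystemRoot% and deletes nothing:

```python
import os

windir = os.environ.get("SystemRoot", r"C:\Windows")
total = 0
for name in os.listdir(windir):
    path = os.path.join(windir, name)
    # Hotfix uninstall backups live in folders like $NtUninstallKB958644$.
    if name.startswith("$NtUninstall") and os.path.isdir(path):
        size = sum(os.path.getsize(os.path.join(root, f))
                   for root, _, files in os.walk(path)
                   for f in files)
        total += size
        print(f"{name}: {size / 1024**2:.1f} MB")
print(f"Total hotfix uninstall backups: {total / 1024**2:.1f} MB")
```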
I typically put shared directories and data under a single root-level subdirectory. That satisfies my needs to keep applications and data apart.
Having said all this, generally this is a "religious" issue and I don't argue with people about it. Do what you want with your servers. Not having "data" partitions has served me well.
(Now, having separate physical volumes / spindles... that's another story.)
Part of the reason that we do this is that if you have some sort of runaway process that fills up the drive, Windows doesn't crash to the ground when the disk runs out of space.
The second reason we do this is to allow for different sized drives/different RAID levels for our OS and data partitions. For example, we would get (I'm rounding numbers and pulling them out of thin air here) 2x100GB SAS drives for an OS mirror partition, and then 6x700GB SAS drives for a RAID 10 data partition. Doing that could easily save you $1000 on the cost of the system at the end of the day.
The third reason is actually quite simple: whoever built the server with the Dell CD wasn't paying attention, and by default it creates a 10GB OS partition (20GB on newer releases, I believe).
Now, as Evan has said, this is really a personal preference that borders on "religious" belief. Honestly, with the size of today's drives, either way will work fine. Do what you are comfortable with ... or what your corporate standards dictate.
EDIT (based on the original asker bringing up virtualization):
The thought of virtualization brings up an interesting topic. As Evan pointed out, most of what I had to say was about different RAID containers. However, in my VMware environment I have a base template of 20GB. Now the interesting part comes here: all of my servers are hosted on a SAN and I have two volumes presented:
- the 20GB drive that is part of my template, and
- a variably sized data drive that I attach per the requirements of the system.
90% of the time these two disks are on the same RAID set, but are two different "physical" drives to the machine. As usual virtualization brings a layer of obscurity to the "standard" IT thought process.
I don't have an exact answer to your question, but I do have several anecdotes that you might find useful in designing your drive/partition setup.
(1) The corrupted NTFS
I had a server with two partitions, one for OS and one for data. At some point over the years, something went wrong with the data partition, and a single file nested about 6 levels deep became impossible to delete or rename. In the end, the only solution was to wipe the partition and reload the data back on. Obviously, it would have been much more painful without partitions.
(2) The full data partition
The same server as above, at another point in its life, managed to end up with a completely full data partition while there were dozens of GB available on the OS partition. As a stop-gap measure, I used a junction point to temporarily store data on the OS partition until the new server arrived. It was ugly, but it worked. Avoiding partitions would have meant avoiding ugly fixes.
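For anyone curious what that stop-gap looks like, here's a minimal sketch (the paths are made up; run it elevated): move a directory off the full data volume onto spare space on the OS volume, then leave an NTFS junction behind so existing paths keep working.

```python
import os
import shutil
import subprocess

old_path = r"D:\Shares\Archive"    # hypothetical directory on the full data volume
new_path = r"C:\Overflow\Archive"  # spare space on the OS volume

# Make sure the destination parent exists, then relocate the data.
os.makedirs(os.path.dirname(new_path), exist_ok=True)
shutil.move(old_path, new_path)

# mklink is a cmd.exe built-in, so call it through cmd /c.
# /J creates a junction at the old path pointing at the new location.
subprocess.run(["cmd", "/c", "mklink", "/J", old_path, new_path], check=True)
```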
(3) The Server 2008 UAC
On a newer server, I discovered that you may have trouble administering any drive except the C: drive, unless you are the local Administrator or Domain Administrator. Being in the Administrators group is not sufficient. This is due to an oddity with UAC, which I have disabled for now.
(4) The Volume Shadow Copy
Shadow Copy (aka Previous Versions) is toggled on/off on a per-partition basis. If you don't want to waste space storing previous versions for a particular data set, partitions are your best ally.
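As a small illustration of that per-volume toggle, here's a sketch using vssadmin (Windows Server, elevated prompt; the volume letters and the 10GB cap are just examples):

```python
import subprocess

# Give the data volume its own shadow-copy storage area, capped at 10GB,
# while leaving other volumes without one.
subprocess.run(["vssadmin", "add", "shadowstorage",
                "/For=D:", "/On=D:", "/MaxSize=10GB"], check=True)

# Report what each volume currently allocates for shadow copies.
subprocess.run(["vssadmin", "list", "shadowstorage"], check=True)
```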
My preferred course of action is to completely separate OS and data by having a separate RAID 1 array just for the operating system. This allows a great deal of flexibility. For example, I could upgrade all the hard drives used for data storage without having to change the OS installation at all.
We use multiple partitions on our servers with the C: drive dedicated to the OS. Our other partitions we use mainly for storage of data such as databases, user files/folders, shared files/folders, etc.
It depends on the service, of course, but there is value in this. As alluded to elsewhere, different partitions can have different underlying storage characteristics. As such, different drive letters should represent different underlying drives rather than partitions. Once upon a time it was a wise move to put your swap file on its own partition, but that's no longer as beneficial as it once was. Otherwise, keep your C: drive for the OS and the obstreperous applications that refuse to go anywhere else, and put your relocatable apps elsewhere.
With virtualization, you can have your C: drive be file-backed storage and yet have your D:, E:, F:, etc. drives really be NPIV direct presentations of block-level storage. Or have your OS drive be the mirrored pair of disks (which may be 72GB or 144GB at that) and your non-OS drives be a RAID10 set, or even something else entirely.
When running a Windows IIS server, we separated the OS drive from the drive where the hosted website files were put, to prevent directory traversals.
This was mainly a Windows 2000 issue, though.
If your system partition is small, it takes less time to run diagnostics and repairs on that partition, resulting in less downtime. For example, if you have an unexpected disk or filesystem problem and have to reboot to run chkdsk on a 2 TB combined system+data partition, you might not have the server back online until tomorrow. If that partition is only 20 GB, you could be back up and running in less than half an hour. You can also backup or image the partition in less time.
That said, the 10 GB limit you mentioned seems alarmingly small, since you'll quickly consume that space with service packs and hotfixes. 20-30 GB would be more suitable.
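If you want to see that difference for yourself, here's a minimal sketch that times a read-only chkdsk pass per volume (no /f, so nothing gets locked or repaired; the volume letters are just examples; run elevated):

```python
import subprocess
import time

for vol in ["C:", "D:"]:
    start = time.monotonic()
    # Without /f, chkdsk only scans and reports; it won't lock or fix the volume.
    subprocess.run(["chkdsk", vol], check=False)
    print(f"{vol} scanned in {time.monotonic() - start:.0f} s")
```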
I saw no mention of Short Stroking so I'll add it here.
Smaller volumes/partitions on mechanical drives reduce access time/latency. This can noticeably increase performance in some cases.
10GB boot partitions seem overly small nowadays; 25 to 80GB volumes seem more comfortable to me. No matter the actual size, I don't format drives to their full capacity, for multiple reasons, most of which go back to performance concerns.
If the D: partition on the same physical drive is something for rarely used data or emergency use only, then C: still gets the benefit of the short-stroke effect. The key there is keeping others from using that space as though it were primary storage. Any regular use of the secondary partition wastes this advantage.
I would also add that small partitions may allow you to use a spare 36GB or 73GB drive as a replacement in a degraded RAID 1, as opposed to having to leave the array degraded until a new drive arrives. You can also use SSDs that might be on the smaller side to take over a small partition, if you haven't sized yourself out of that option.