"Back in the day" we always segregated our OS drives (in Windows) from our Data drives. In the Linux world, although I am much less familiar with it, I am aware that the wisdom dictates even more volumes defined and used in a best-practice configuration.
Now that server storage is just as likely to be on a SAN (where the disk resources are shared by many individual operating systems and applications), does it really matter any more that the OS and Data partitions be segregated at the volume level?
What are your thoughts?
Yes, most certainly separate OS from data. I've seen it time and time again: with a shared partition, the partition fills up, making it impossible to patch the OS, impossible to extend the partition (for various reasons), and so on.
IMO, the overhead of managing two partitions is a small price to pay for the isolation provided.
With regards to the SAN-backed systems you referred to, that still won't protect you from data filling up your OS partition. With fully-virtualized storage, you don't need to worry as much about ensuring that OS and data live on separate spindles.
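To catch a filling OS partition before it blocks patching, even something as simple as a cron-driven check can help. A minimal sketch, assuming a POSIX system and an arbitrary 90% threshold (both the threshold and the alert wording are illustrative assumptions, not recommendations):

```shell
#!/bin/sh
# Warn when the root (OS) filesystem crosses a usage threshold.
# THRESHOLD is an illustrative assumption; tune it for your environment.
THRESHOLD=90
# df -P gives stable POSIX output; field 5 is "Use%", e.g. "42%".
USED=$(df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "WARNING: / is ${USED}% full"
else
    echo "OK: / is ${USED}% full"
fi
```

On a Windows box the same idea applies via perfmon counters or, as mentioned later in this thread, a proper monitoring suite like SCOM.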
There are several drivers for keeping OS and Data separated storage-wise.
Also, some operating systems (Windows among them) don't take kindly to resizing the OS volume, which means you generally need to give it as much space as it will need over its lifetime when you build the server. Contrast this with Data volumes, which can be and frequently are resized many times over the lifetime of a server. Even in fully virtualized environments, where the OS and Data volumes themselves are housed on the same actual storage, not being able to resize your OS volume can be a major handicap. Windows 2008+ now recommends 30GB for the C:\ drive, a far cry from the 10GB we were using on Server 2003; this is something that will nail many Windows admins as they make the conversion from 2003 to 2008.
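By contrast, growing a Data volume is usually painless. A minimal diskpart sketch, assuming the Data volume is D: and that free space has already been presented to the server (both are assumptions for this example):

```
rem Illustrative diskpart script: grow an existing Data volume
rem into unallocated space already presented to the server.
rem The volume letter D is an assumption for this example.
select volume D
extend
```

This is exactly the flexibility you give up on the OS volume if the platform (or your vendor) won't let you resize it.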
I would say it depends on what you are doing with the system. If you may need to reinstall the OS, you might save yourself some hassle by putting all of your data on a separate partition. Otherwise I don't see the necessity any more. My two cents.
In general principle, I think segregating the default OS space (such as C:) from the Data (D:) is a good idea, but I would also recommend creating a smaller partition for log files (L:) to keep them a little more secure and prevent some types of denial-of-service attacks — an attacker who floods your logs can fill L: without taking the OS down with it.
Linux is very nice in that the file system remains a single hierarchy under one root directory no matter how many physical disks or virtual partitions back it. I would definitely partition the disk, but not necessarily for data vs. OS separation (since very often the two get mixed up anyway).
I would look at:
Historical Linux (well, Unix really) partitioning recommendations are partly due to its origins as a (networked) mainframe server OS, which in turn I suspect was influenced by the (then) relative unreliability of hardware. For instance, logs and temporary data were typically separated because those storage areas got a lot of wear-and-tear, but it wasn't much of an issue if they were lost.
If you're building a desktop system, I'd go for the data/non-data/swap split. Unless you are building a server that's expecting to take serious abuse, stuff like separate /usr/local and /var/tmp just becomes a space allocation headache.
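As a concrete sketch, the data/non-data/swap split above might look like this in /etc/fstab (the device names are assumptions for illustration; in practice you'd use UUIDs or labels):

```
# Illustrative /etc/fstab for a simple data / non-data / swap split.
# Device names are assumptions; prefer UUID= or LABEL= entries in practice.
/dev/sda1   /       ext4    defaults   0 1
/dev/sda2   /home   ext4    defaults   0 2
/dev/sda3   none    swap    sw         0 0
```

Everything outside /home (the OS, applications, logs) lives on the root filesystem, and user data survives a reinstall of /.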
I'd say it's still nice to have. If you have 100GB of data (too much pr0n, dude :) ) and you need to reinstall the OS (or, in keeping with Windows history, reinstall it regularly to remove built-up cruft), then it's a very simple matter to keep the data intact — much simpler than if it were on the C: partition as well.
However, I'd say there is a problem there, as Windows especially likes to scatter all kinds of things in directories on the C: drive — it's not just the Users directory, but all the app data and various bits and pieces that end up stuck in ProgramData too.
Also, there is another factor: apart from the really big stuff (yup, that pr0n again), there are plenty of online backup tools (or local backup utilities) that perform continuous backups. Given these, it's not such a priority to separate the data, as you can easily restore it from the backup location.
Personally, I try to split data + OS. I also try to put apps on a different partition too, so that my OS backups are a lot smaller.
I'll be the devil's advocate for a different school of thought.
Suppose that, for performance reasons, your vendor recommends the OS partition not be "sparse" and wants you to allocate the full OS partition upfront. This results in 10GB to 20GB (or more) of unused space on the SAN drive.
This is fine for a single VM, but chances are you will have several "performance critical" servers, each with its own 10 to 20GB of whitespace overhead. In our environment this whitespace accounted for 20% of our SAN disk. Keep in mind that there are limits to how full we should let a SAN disk get (but that is another story).
Management had a choice:
1) Absorb the 20% wasted space on the SAN, which is in addition to other requirements of "white space", and isolate any "full disk" scenario that might occur
2) Put everything on the C:\ drive and risk the drive filling up due to application logs.
What did they do?
Considering that Windows 2008 R2 can dynamically expand the host OS's C:\ drive online, even when it has filled up, management took the cost "savings" and reinvested it in monitoring tools like SCOM.
Now we get more than just simple protection against a C:\ drive filling up; we have more complete systems monitoring in place to address other concerns before they happen.