I have often wondered why there is such a passion for partitioning drives, especially on Unixy OSes (/usr, /var, et al). This does not seem to be a common theme with Windows installations.
It seems that partitioning greatly increases the likelihood of filling one partition while others have a great deal of free space. Obviously this can be prevented by careful design and planning, but things can change. I've experienced this on machines many times, mostly on ones set up by others, or by the default install settings of the OS in question.
Another argument I've heard is that it simplifies backup. How does it simplify backup? I've also heard that it improves reliability. Again, how?
Almost 100% of the problems I have encountered with disk storage have been physical failure of the disk. Could it be argued that partitioning can potentially accelerate hardware failure, because of the thrashing a disk does when moving or copying data from one partition to another on the same disk?
I'm not trying to rock the boat too much, I would just like to see justification for an age-old admin practice.
I don't think setting up lots of partitions is something you should do for every system. Personally, on most of my Linux servers I just set up one big partition, since most of my systems have smallish drives, are single-purpose, and serve some infrastructure role (DNS, DHCP, firewall, router, etc.). On my file servers I do set up partitions to separate the data from the system.
I highly doubt a well-partitioned system would have any increased likelihood of failure.
One reason to keep /home separate is that you can reinstall the operating system and never worry about losing user data. Beyond that, there's a lot of security to be had in mounting everything either read-only or noexec. If users can't run code anywhere that they can write code, it's one less attack vector.
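As an illustration, those mount options might look like this in /etc/fstab (a sketch only; the device names, filesystem types, and exact option sets are assumptions):

```
# /etc/fstab fragment (illustrative): system tree read-only,
# user-writable filesystems get noexec/nosuid/nodev
/dev/sda2  /usr   ext4  ro,nodev                 0 2
/dev/sda3  /home  ext4  rw,noexec,nosuid,nodev   0 2
/dev/sda4  /tmp   ext4  rw,noexec,nosuid,nodev   0 2
```

With a layout like this, a user can write files under /home and /tmp but cannot execute binaries there, and cannot modify anything under /usr without the filesystem first being remounted read-write.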
I'd only bother with that on a public machine though, as the downside of running out of disk space in one partition but having it in another is a serious annoyance. There are ways to work around this like doing software raid or ZFS where you should be able to dynamically resize partitions easily, but I have no experience with them.
You can make backups (via dump or similar) of the things you want, not the things you don't. dump(8) is a better backup system than tar(1).
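For instance, backing up just the home filesystem with dump might look like this (a sketch; the device path and dump file location are assumptions, and these commands require root):

```shell
# Full (level 0) dump of the /home filesystem only,
# recording the date in /etc/dumpdates (-u) for later incrementals
dump -0u -f /backup/home.dump /dev/sda3

# Browse and restore files interactively from that dump
restore -i -f /backup/home.dump
```

Because dump works per-filesystem, putting /home on its own partition is exactly what makes this kind of targeted backup possible.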
That's an argument for partitioning as well. Users filling up their homedirs doesn't wreck the server, take the web server down, keep logs from happening, keep root from logging in, etc.
It also allows you to more transparently move a section of your data (say, /home) onto another disk: copy it over, mount it. If you're using something that allows shadow copies / snapshots/ whatever, you can even do that live.
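The move itself is just a copy plus a mount, sketched here with assumed device and mount-point names (requires root):

```shell
# Prepare a filesystem on the new disk and copy /home onto it,
# preserving permissions, hard links, ACLs, and extended attributes
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt/newhome
rsync -aHAX /home/ /mnt/newhome/

# Swap the copy into place (add an /etc/fstab entry to make it permanent)
umount /mnt/newhome
mount /dev/sdb1 /home
```

Applications see the same /home path afterwards; only the underlying device has changed.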
I have always been taught to keep /var on a separate partition so that if you get an out-of-control log file, you clog up a single partition rather than the entire drive. If it's on the same partition as the rest of the system and you fill your entire disk 100%, the system can crash and make for a nasty restore.
All of the arguments that Zoredache puts forward are valid; one might quibble with the details a bit (having a machine up faster so you can do other things while fsck'ing other filesystems doesn't do you much good if the system's reason for existing in the first place is on those other filesystems). However, they are all a bit of justification-after-the-fact.
In the really old-school days, you didn't have filesystems on separate partitions -- you had them on separate disks, because disks were really small. Think 10MB.(1) So you had a tiny / partition, a /var disk, a /usr disk, a /tmp disk, and a /home disk. If you needed more space, you bought another disk.
Then "big" 50MB disks started costing less than the moon program, and suddenly it became possible to put an entire system on one disk with a usable amount of user space.
Still, with disk sizes small compared to what the computer could generate, isolating /var and /opt and /home so that filling one didn't bring down the computer was still a good idea.
Today, in an enterprise situation, I don't partition the OS. Data gets partitioned off, especially if it is user-generated; but frequently that's because it's on high-speed and/or redundant disk arrays of some kind. /var and /usr, however, all live in the same partition as /.
In a home environment, same thing -- /home should probably be on a separate disk/array, so that one can install/upgrade/break/fix whatever OS flavors are desired.
The reason for this is that no matter how big you guess your /var or /usr or whatever tree might get, you'll either be hilariously wrong or you'll ridiculously over-commit. One of my older-school colleagues swears by partitioning, and I always get grief from him when he ends up sitting through a 180-day fsck on a system I've created. But I can count on one hand, over my entire career, the number of times something has filled up / and brought down the system, while I can count on one hand the number of times so far this year that I've been staring at a system where someone decided /var would never need to be more than (say) 1GB and was wrong, leaving me staring at a full /var and hundreds of free GB elsewhere on the system, all of which might as well have been on the moon for all the good they do me.
In today's world of big disks, I don't see that there's any real reason to partition the OS tree. User data, yes. But separate partitions for /var and /usr and /var/spool etc etc etc? No.
(1) And I know that just by picking that size, I'm going to get someone in the comments saying, "10MB? Luxury! Why, our disks were merely..."
On a Linux machine, LVM (logical volume management) is used to prevent this. Most filesystems allow resizing (some even online). I create different partitions for different uses and format them with different filesystems (e.g., xfs for large download files that I can quickly delete). Need more space? Mount a new drive, move the data to it, then mount it where the data used to be. It's completely seamless to users and applications.
With LVM, you can add disks or partitions into the volume group, then create logical volumes in that group. If you leave free space in the volume group, you can then grow partitions that are filling up. If the filesystem supports it (ext3, ext4, reiserfs) you can shrink a partition that you've over allocated.
For example: make a boot partition on /dev/sda1, and make a second, unformatted partition, /dev/sda2.
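The LVM setup for that layout might look like this (a sketch; the volume group and logical volume names, sizes, and filesystem choices are assumptions, and these commands require root):

```shell
# Turn the unformatted partition into an LVM physical volume
# and put it in a volume group
pvcreate /dev/sda2
vgcreate vg0 /dev/sda2

# Carve out logical volumes, deliberately leaving free space
# in vg0 so they can be grown later
lvcreate -L 100G -n downloads vg0
lvcreate -L 200G -n home vg0

mkfs.xfs  /dev/vg0/downloads   # xfs for large files that delete quickly
mkfs.ext4 /dev/vg0/home

mount /dev/vg0/downloads /downloads
mount /dev/vg0/home      /home
```

The unallocated space left in the volume group is what gives you room to grow whichever volume fills up first.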
When you need more space on /downloads (while filesystem is mounted):
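For example (the volume group and logical volume names are assumptions; XFS supports growing while mounted):

```shell
# Grow the logical volume to 150GB, then grow the filesystem
# online, without unmounting /downloads
lvextend -L 150G /dev/vg0/downloads
xfs_growfs /downloads
```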
And you now have a 150GB download partition. Similarly for home. In fact, I just resized an ext4 LVM "partition" today. On the other hand, logical volumes aren't really partitions, and what you say about partitions being the wrong size jibes with my personal experience (more trouble than they're worth).
The traditional Unix partitioning scheme is definitely an old-school practice that isn't as useful as it once was. Back in the day, when Unix system uptime was measured in years and you had dozens or hundreds of users futzing around with shells, mounting /usr read-only was a useful way to protect the system. Now, re-mounting filesystems to patch seems more labor-intensive and not so useful.
At my university back in the good old days, the Unix clusters had read-only filesystems with the standard unix tools, and add-on applications were in /usr/local, which was an NFS and later an AFS filesystem. Part of that was convenience... who wanted to recompile software on a dozen boxes in the cluster when you could run apps over a high-speed, 4Mb or 10Mb network? Today, with decent package managers and lots of cheap disk, it isn't that big of a deal.
I think thought processes started to change for me on Sun boxes with Veritas Volume Manager back around 1999, which reduced the pain threshold for moving disks around considerably.
Today, when I think partitioning, I'm thinking in terms of data protection and performance. Illustrative example:
These considerations apply to Windows as well. We have an SCCM server that manages around 40k clients. The database and logs are on mega-buck IBM DS8000 disk. The software packages are on an EMC Celerra with large, slow SATA disks that cost 60% less per GB.
(Assuming a single large disk is available,) I put home and var on separate partitions to control the "out of control [user|log file] filling up all the space" problem, and to allow easy OS upgrades without touching home, but leave the rest together.
On older hardware it was sometimes necessary to have a separate boot partition to ensure that the kernel image was accessible to the boot loader.
I understand that this question is not OS-specific, right?
Under Windows, I tend to give all my machines as few partitions as possible, but no less than two - SYSTEM and DATA. If the machine has two physical disks, then one (smaller) will be SYSTEM, the other DATA. If there is just one disk, I split it in two partitions.
The reason for that is just one: when I need to reinstall the machine (and there will be such a time), I don't have to worry about the contents of the SYSTEM partition; I just do a full format on it and a clean install. This of course means that my Documents folder (and preferably Desktop too) has to be mapped to a folder on DATA, but that's easy to do, especially on Vista and later.
I've also tried making more partitions (like GAMES, MUSIC, MOVIES, etc.), but that only resulted in some of them overflowing into others, creating more mess than order.
You mention one partition filling while another has free space -- that's one of the reasons I partition: I can ensure that certain partitions don't fill up. Although, the way that quotas used to be managed, you'd have to assign all of the users a 0 quota on the other partitions, just to make sure that they didn't start hiding files away if they managed to find a directory they could write to.
As for simplifying backup -- if I know what the max size of each partition will be, I can make sure that it's a size that neatly fits onto a single tape, and can be completed in a fixed amount of time.
As for reliability, the only thing I can think of is monitoring -- I can more easily see when a given partition is growing more than it should, giving me reason to look into it.
... now, all of that being said, we're far from the days of each user being given their little 20MB quota on a shared machine. Some of the old habits don't make sense -- but when you have a process go crazy and fill /var, which in turn fills /, and things grind to a halt, it's not all that bad of protection to have on production machines.
For home, I have partitions, but it's just to make it easier to manage the installed OSes.