I saw a presentation years ago that said hard drives perform best when they are less than 50% full, and that for busy servers you want to keep your drives under 80% used. The reasoning was that tracks are written from the inside out and that access, especially random access, was quicker on inner tracks than on outer ones, since rotational latency was lower.
On the other hand, with today's caching, and read-ahead in products like SQL Server, a longer outer track with no track-to-track movement might negate those factors.
Is this true? Is there a reason to keep space free on a modern hard disk system? Is it different for Windows than *Nix?
In my experience, worrying about outer track versus inner track is no longer worth the effort. The difference in performance is just too small when weighed against the other things that affect performance (RAID, caching, filesystem fragmentation, etc.).
However, to answer your question directly, there is definitely still a reason to keep a decent amount of free space on a modern hard disk, especially a rotational (non-SSD) disk, and that reason is file fragmentation and seek time. When there is a good amount of free space, files can be written sequentially and read back without multiple seeks. That lets a file be retrieved much faster than if the disk head has to seek all over the platter to pick up little chunks of it.
This article/blog post is more targeted at file fragmentation than disk performance, but it offers one of the better explanations I've found of file fragmentation and why available free space affects it: Why doesn't Linux need defragmenting?
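If you want to see how fragmented a particular file already is, here is a rough sketch of my own (not from the linked article), assuming a Linux box with the `filefrag` utility from e2fsprogs installed; the paths you pass in are up to you:

```python
#!/usr/bin/env python3
# Rough sketch: report how many extents a file occupies on a Linux filesystem
# using the `filefrag` utility from e2fsprogs. A file spread across many
# extents forces extra seeks on a rotational disk.
import subprocess
import sys

def extent_report(path: str) -> str:
    # `filefrag FILE` prints a one-line summary like "FILE: 3 extents found"
    result = subprocess.run(["filefrag", path],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(extent_report(path))
```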
The more a disk fills up, the more files (especially large files) become fragmented and the slower they are to read and access. This is also the reason that Linux filesystems reserve a percentage of space (usually 5%) that is only available to root. That reserved space is useful in emergencies (a regular user can't completely fill the disk and cause problems), but it is primarily intended to reduce fragmentation as the disk fills up. When dealing with very large files, as is common with databases, the fragmentation problem can be reduced by pre-allocating your data files, assuming the database or other application supports it.
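As a minimal sketch of that pre-allocation idea, assuming a Unix-like system and a filesystem that honours posix_fallocate (ext4, XFS, etc.); the path and the 10 GiB size below are made-up examples, not anything a particular database requires:

```python
#!/usr/bin/env python3
# Minimal sketch: reserve a data file's full size up front so the filesystem
# can pick a contiguous run of blocks instead of growing the file in fragments
# as data arrives. Path and size are placeholders for illustration.
import os

def preallocate(path: str, size_bytes: int) -> None:
    # O_CREAT | O_WRONLY: create the file if needed and open it for writing.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        # Ask the filesystem to reserve size_bytes of real blocks from offset 0.
        os.posix_fallocate(fd, 0, size_bytes)
    finally:
        os.close(fd)

if __name__ == "__main__":
    preallocate("/var/lib/mydb/data01.dat", 10 * 1024**3)  # reserve 10 GiB
```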
In these days of very large and relatively inexpensive disks, there is rarely a valid justification for letting a filesystem reach capacity. This is even more true in situations where performance matters.
I agree with Mr. Cashell (and have voted him up) but would like to add two other factors.
First, depending on your OS, the system will want some free space for swap and other temporary files. Linux, of course, has a dedicated swap volume, but Windows, OS X, and NetWare all want to use the system volume for their temporary storage. Keeping at least a gigabyte, or 10% of the volume (and up to 20% if you can manage it), free at all times on the system volume is good practice.
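Here is a quick sketch of turning that rule of thumb into a check; the 10% threshold and the mount points are my own choices, not a standard:

```python
#!/usr/bin/env python3
# Quick sketch of the "keep 10-20% free" rule of thumb: warn when free space
# on a volume drops below a chosen fraction. Threshold and paths are
# assumptions for illustration.
import shutil

def check_free(path: str, min_free_fraction: float = 0.10) -> None:
    usage = shutil.disk_usage(path)           # named tuple: total, used, free
    free_fraction = usage.free / usage.total
    status = "OK" if free_fraction >= min_free_fraction else "LOW"
    print(f"{path}: {free_fraction:.1%} free "
          f"({usage.free // 2**30} GiB) [{status}]")

if __name__ == "__main__":
    for volume in ("/", "/var"):
        check_free(volume)
```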
Second, the rule for servers is now and always will be that you combat slow disk performance with more RAM. OS schedulers and caches are getting ever more sophisticated and will hold writes in RAM until a convenient moment to flush them to disk. Some applications, in the interest of data integrity, will also write temporary "rollback" files to disk that are eventually merged with the master data set. The more RAM you have, the fewer reads have to come from disk (since frequently accessed files will generally be cached in RAM) and the better the OS can hide the slow writes.
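To see the read-cache half of this for yourself, here is an illustrative sketch that reads the same file twice; on most systems the second pass is served largely from the page cache in RAM rather than from disk (the file path is just a placeholder):

```python
#!/usr/bin/env python3
# Illustrative sketch: time two passes over the same file. The first read may
# hit disk; the second is usually served from the OS page cache in RAM.
import time

def timed_read(path: str) -> float:
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):  # read in 1 MiB chunks, discard the data
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    target = "/var/log/syslog"   # any reasonably large file will do
    cold = timed_read(target)    # may hit disk if not already cached
    warm = timed_read(target)    # typically served from the page cache
    print(f"first read: {cold:.3f}s, second read: {warm:.3f}s")
```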