I understand that performance degrades on single-disk volumes and small RAID volumes as they fill up. Some admins say they perceive slowdowns at 80% and use that as a general rule of thumb for when to increase available storage. Others say 90%. I'll also accept that different workloads behave differently: say, video files versus small database files versus general Office-type documents, as do the partition table and filesystem format. There are, I'm sure, other contributing factors that affect performance, like RAID vs SAN vs virtualized storage, RAID block size, NFS vs CIFS, and other aspects.
My question is: on large, direct-attached, non-SAN, RAID 60 HFS+J volumes of, say, 50TB, attached via Fibre Channel and made available to an office of 100 concurrent users over typical network filesharing protocols -- does that same rule of thumb (80% or 90% used) hold true, or can administrators expect performance degradation to occur much later, at, say, 95% volume usage? In other words, if my 50TB RAID volume is at 80% now, at what point am I pressed to increase storage? When should I be alarmed?
I would guess that the degradation you mention is caused by the filesystem, not by the storage device. In that case, any block device (single disks, RAIDs, whatever) would behave similarly.
The only exception that comes to my mind is SSDs: on one hand, the lack of significant seek times makes the fragmentation and non-locality that plague mostly-full filesystems largely irrelevant; on the other hand, non-enterprise SSDs tend to have few (if any) spare, non-accessible storage cells, which are crucial for the wear-leveling and erase-cycle-hiding algorithms, so as they fill up they may start to behave more and more like cheap USB flash drives.
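As for when to be alarmed: rather than waiting to notice slowdowns, you can simply monitor the usage percentage against whatever threshold you settle on. A minimal sketch, assuming Python 3 is available on the fileserver; the mount point and the 90% cutoff are placeholders, not values from your setup:

```
#!/usr/bin/env python3
"""Alert when a volume crosses a chosen usage threshold (sketch)."""
import shutil
import sys

MOUNT_POINT = "/Volumes/raid60"   # hypothetical mount point -- adjust
THRESHOLD_PCT = 90                # alert threshold in percent -- adjust

usage = shutil.disk_usage(MOUNT_POINT)
pct_used = usage.used / usage.total * 100

print(f"{MOUNT_POINT}: {pct_used:.1f}% used "
      f"({usage.free / 1e12:.2f} TB free of {usage.total / 1e12:.2f} TB)")

# Exit non-zero so cron or a monitoring wrapper can raise an alert.
if pct_used >= THRESHOLD_PCT:
    sys.exit(1)
```

Run it from cron or your existing monitoring system; the non-zero exit code makes it easy to hook into whatever alerting you already have.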