On Unix or Linux, individual device volumes are manifested through device files, traditionally held in the /dev directory. Depending on the device driver architecture or disk controller hardware you may be constrained in the number of devices actually supported, but this is a hardware or device driver constraint rather than an O/S-specific one. The theoretical maximum is constrained by the number of bits available for the major number, and will be 2^8, 2^16 or some other fairly large number. Note that a disk can be partitioned and therefore carry multiple volumes. The name space for device drivers is similarly large on Windows (note that Windows NT-based systems support mount points, so you aren't constrained by drive letters).
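As a quick illustration, here is a minimal Python sketch (assuming a Linux host; /dev/sda is just an example path) that reads the major and minor device numbers out of a device file's inode:

    import os
    import stat

    dev_path = "/dev/sda"  # example block device; substitute one that exists

    st = os.stat(dev_path)
    if stat.S_ISBLK(st.st_mode):
        # st_rdev holds the device number for device special files;
        # os.major()/os.minor() split it into its two halves.
        print(dev_path, "major:", os.major(st.st_rdev), "minor:", os.minor(st.st_rdev))
    else:
        print(dev_path, "is not a block device")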
If you use hardware RAID or SAN based disks, a volume will be based on a group of disks presented by the controller, so the number of physical disks can be even larger.
In practice, the limitations of physical hardware are going to be a constraint before the number of available device handles becomes a problem. This will be the case on most O/S platforms.
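On Linux you can see how sparsely that name space is actually populated; a quick sketch, assuming the standard sysfs layout:

    import os

    # Each entry under /sys/block is a block device known to the kernel
    # (whole disks and RAID/LVM volumes; partitions live beneath them).
    devices = sorted(os.listdir("/sys/block"))
    print(len(devices), "block devices:", ", ".join(devices))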
To take a more practical example, a typical SAS disk array such as an HP MSA70 has (IIRC) four SAS ports, with internal port multipliers connecting multiple SAS disks to each port. It will also allow a second array to be daisy-chained off it. These arrays hold 25 disks each, so a group of 4 SAS ports can support up to 50 disks across two shelves.
A typical SAS RAID controller has 8-24 ports, so a single controller could take up to 4-12 arrays, or 100-300 disks. A large server such as an HP DL785 might be able to take several such controllers, so you could in theory put 1,000 or more disks on the machine.
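That fan-out arithmetic is easy to sanity-check; here is a small Python sketch using the rough figures above (they are illustrative, not vendor specs):

    DISKS_PER_SHELF = 25       # e.g. HP MSA70
    SHELVES_PER_GROUP = 2      # one shelf plus one daisy-chained shelf
    PORTS_PER_GROUP = 4        # four SAS ports drive each shelf pair

    def max_disks(controller_ports):
        # Each group of 4 ports supports two 25-disk shelves.
        groups = controller_ports // PORTS_PER_GROUP
        return groups * SHELVES_PER_GROUP * DISKS_PER_SHELF

    print(max_disks(8))       # 100 disks on a small 8-port controller
    print(max_disks(24))      # 300 disks on a large 24-port controller
    print(4 * max_disks(24))  # 1,200 disks with four 24-port controllers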
However, this is probably not a very useful configuration. Dedicated SAN or NAS hardware or parallel file systems are much more likely to be appropriate for storage requirements needing 1,000+ physical disks. Database servers with 1,000+ direct-attach disks are pretty rare outside of TPC-C benchmark configurations, and the next few years will probably see SSDs taking over the market for storage on high-volume transaction processing applications.
Large SANs can scale to several thousand physical disks. A single Fibre Channel loop can support up to 254 disks, and a high-end SAN controller can support many F/C loop interfaces. A logical volume manager can concatenate multiple physical volumes into a large file system, so a machine could potentially consolidate data from multiple SAN controllers into a single global volume.
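On Linux the usual tool for this is LVM2. The sketch below (in Python, driving the standard pvcreate/vgcreate/lvcreate commands; the device and volume names are made-up examples) concatenates two LUNs into one logical volume:

    import subprocess

    # Hypothetical LUNs presented by one or more SAN controllers.
    devices = ["/dev/sdb", "/dev/sdc"]

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    for dev in devices:
        run(["pvcreate", dev])            # label each LUN as a physical volume

    run(["vgcreate", "bigvg"] + devices)  # pool them into one volume group

    # Carve a single logical volume spanning all free space; a file
    # system can then be created on /dev/bigvg/biglv.
    run(["lvcreate", "-l", "100%FREE", "-n", "biglv", "bigvg"])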
The largest SAN I have seen documented had around 6,000 physical disks on it, but the limit depends on the individual hardware.
Parallel file systems can scale outward by adding more nodes. With hardware like a Sun X4500 (Thumper), which holds 48 disks per server, you can keep adding servers until you run out of network ports. The largest InfiniBand switches have several hundred ports, so a parallel file system based on Sun X4500s could support tens of thousands of physical disks.
However, any of these large-scale storage architectures will present RAID volumes spanning multiple physical disks to the host, so the number of logical units (devices) seen by the host will typically be much smaller. In almost all cases the physical limits of the hardware will restrict the number of disks before the name space on the host is exhausted.
These configurations can all be purchased off-the-shelf (for a price) from specialist vendors without having to go to any sort of exotic proprietary supercomputer architecture, so the answer to your question is:
Thousands or tens of thousands at the top end (without needing custom hardware). In fact, clustered file systems based on Sun X4500s or X4540s appear quite frequently as the storage components of Top 500 supercomputers.
Somewhere between 100 and perhaps 1,000-1,500 on a Wintel or Lintel server (at a guess, based on four 28-port SAS RAID controllers with 2 shelves per 4 ports: 4 x 7 x 2 x 25 = 1,400 disks). Obviously this will vary depending on the specific hardware.
External arrays notwithstanding, a desktop PC will be limited by the number of drives you can fit in the case. External desktop arrays might extend that limit to a few dozen, but this is niche market hardware.
Well, in DOS you're limited to the number of available drive letters (which can be extended up to "Z" with LASTDRIVE=Z in CONFIG.SYS). So, that'd be 26...