My understanding of SCSI timeouts is that every command (read, write, flush, and so on) has a limited time to complete. If that time is exceeded, the command is aborted and an error is reported to the upper layer. While the command is pending, any application depending on the I/O will stall.
My next layer would be mdraid, the Linux software RAID. From what I read, mdraid has no timeouts of its own but relies on the lower layer to time out commands.
The default SCSI timeout value is 90 seconds on kernel 3.2 (Debian).
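For reference, the value I mean is the per-device timeout exposed in sysfs. A minimal sketch to print it (assuming a Linux system with `sd*` disks; the path is the standard sysfs location):

```python
from pathlib import Path

# Print the block layer's SCSI command timeout (in seconds) for each SATA/SCSI disk.
for dev in sorted(Path("/sys/block").glob("sd*")):
    timeout_file = dev / "device" / "timeout"
    if timeout_file.exists():
        print(f"{dev.name}: {timeout_file.read_text().strip()} s")
```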
A hard disk that encounters a read error will try hard to correct it within a time frame defined by its firmware. That limit is set high for desktop drives (typically stand-alone, so correction has high priority) and low for server drives (typically in a RAID, so report the bad sector quickly and let another drive answer). On some drives it can be adjusted via smartctl (SCT ERC, a.k.a. TLER, etc.).
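If I read the smartctl documentation correctly, on drives that support SCT ERC the setting can be inspected and changed like this (a sketch only; /dev/sda and the 7-second value are just examples, and on many drives the setting does not survive a power cycle):

```python
import subprocess

DEVICE = "/dev/sda"  # example device; a real RAID member would go here

# Show the drive's current SCT Error Recovery Control setting (if supported).
subprocess.run(["smartctl", "-l", "scterc", DEVICE], check=True)

# Limit read and write error recovery to 7.0 seconds; smartctl takes the
# value in tenths of a second. Needs root and usually has to be reapplied
# after every power cycle (e.g. from a boot script).
subprocess.run(["smartctl", "-l", "scterc,70,70", DEVICE], check=True)
```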
So I guess that if an HDD is set to a high ERC timeout, the kernel will wait the full 90 seconds by default before aborting the request. Only then will mdraid redirect the application's request to another disk.
90 seconds is a loooong time for a webpage to load.
Is it correct to assume that the default SCSI timeout is meant for desktop purposes or non-HDD SCSI equipment (tape drives and tape libraries come to mind), and that it is safe to tune it down to, say, 7 seconds for RAID usage?
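If it is safe, what I have in mind is roughly the following (just a sketch; the member names and the 7-second figure are my own assumptions, it needs root, and the value does not persist across reboots, so it would live in a boot script or udev rule):

```python
from pathlib import Path

TIMEOUT_SECONDS = 7            # the value I'm considering
RAID_MEMBERS = ["sda", "sdb"]  # example member disks of the md array

# Lower the SCSI command timeout for each array member via sysfs.
for name in RAID_MEMBERS:
    path = Path("/sys/block") / name / "device" / "timeout"
    path.write_text(str(TIMEOUT_SECONDS))
    print(f"{name}: timeout set to {path.read_text().strip()} s")
```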
Suitability depends on your needs. For you, it sounds like 90 seconds is not a good fit.
I have seen vendor documentation in the past recommending that Fibre Channel HBA timeouts be set over 60 seconds in order to better handle things like array failover, controller firmware updates, and suchlike. The downside is, as you point out, that it can lead to very long lags before storage is returned.
And actually that's not a bad thing. Many operating systems will forcibly dismount a LUN if they get HBA timeouts on it, which can be far more disruptive than an occasional long lag to return a block. The trick is to balance how long you are willing to wait for a slow block against how the rest of the stack reacts to aborted commands.
In general, the disks you put into a RAID array should have a low timeout value, since a quick error report lets the RAID controller know to satisfy the block request elsewhere. This is one big reason why consumer-grade drives are a bad idea with hardware RAID cards; their timeouts are very long, which leads to exactly the problem you don't want.
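For your mdraid case, one way to apply that principle is to make the drive give up well before the kernel-side timeout fires, so the drive (not the kernel) is the one reporting the bad sector. A rough sketch, with purely illustrative numbers and assuming the member drives honour SCT ERC:

```python
import subprocess
from pathlib import Path

# Illustrative values: the drive gives up after 7 s, while the kernel waits
# longer, so a slow sector comes back as a drive error rather than an abort.
ERC_DECISECONDS = 70    # 7.0 s; smartctl takes tenths of a second
KERNEL_TIMEOUT_S = 30   # keep the SCSI timeout above the drive's ERC limit

for name in ["sda", "sdb"]:  # example md member disks
    # Ask the drive to cap its internal error recovery (ignored if unsupported).
    subprocess.run(
        ["smartctl", "-l", f"scterc,{ERC_DECISECONDS},{ERC_DECISECONDS}", f"/dev/{name}"],
        check=False,
    )
    # Keep the kernel-side timeout comfortably above the drive's ERC limit.
    (Path("/sys/block") / name / "device" / "timeout").write_text(str(KERNEL_TIMEOUT_S))
```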