I have a Dell PE1950 running the latest OpenSolaris, connected to a Dell MD1000 enclosure with 15 disks in it. I am not using a PERC to control the disks; instead I use a simple SAS 5/E (LSISAS1068) controller that exposes the raw disks, so we can use ZFS RAID instead of hardware RAID.
It all works very well, but I have one worry about the time when we need to replace one of the disks for any reason. When I used a PERC, it could turn on the error LED on a disk if something went bad, and it also gave me a way to manually blink the LED should I want to physically locate a disk for any reason.
However, now that I use the plain SAS connection, these capabilities appear to be inaccessible, and the only way to identify a disk is either by guessing which one it is from the device number (which I find very risky), or by shutting down the whole system, pulling the disks one by one, and comparing serial numbers.
Neither option is acceptable, of course. I would like to know if there is any way I could manually operate the LEDs on Solaris. I have searched a lot and found that on Sun servers this can be done with the cfgadm tool, but when I tried to run the same commands on my server, they failed with a message saying the hardware-specific feature is unavailable.
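For reference, the kind of command that reportedly works on supported Sun hardware looks something like this (c0::dsk/c0t1d0 is just an example attachment point; this is what fails on my machine):

cfgadm -x locator=on c0::dsk/c0t1d0
cfgadm -x locator=off c0::dsk/c0t1d0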
I also tried the LSIUtil command, but it doesn't seem to support this functionality either.
Is there any way I could visually identify the disks?
Search for the MegaCli tool for Solaris (you can find it on LSI's web page) and use this syntax:

megacli -PdLocate -stop -physdrv[1:2] -a0

Note: I only have PERC controllers, and it works fine with them. As I understand it, the same tool can be used with non-RAID controllers, but I might be wrong; a comment telling me whether it works or not is welcome. If that does not work, take the whole system down during a maintenance window and label all the caddies with the hard drives' serial numbers.
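For what it's worth, a sketch of the full start/stop pair (the [1:2] enclosure:slot address and the -a0 adapter index are placeholders; substitute the values for your setup):

# start blinking the locate LED on enclosure 1, slot 2, adapter 0
megacli -PdLocate -start -physdrv[1:2] -a0
# stop blinking once you have found the drive
megacli -PdLocate -stop -physdrv[1:2] -a0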
OK, I admit right off the bat that this is a FUD answer, so I'd ask people to just call me stupid rather than taking points off, but...
If you're using ZFS, I believe you can take a disk offline without fear of the entire RAID set going funny (actually, you might not need to take it offline first; I don't really know). So:
Can you not just run

dd

on the physical device, sending the output to /dev/null? That basically does one massive read and keeps the disk's access light on solidly. This does assume that you have a blinking access LED for each physical disk.
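A minimal sketch, assuming the suspect disk shows up as c2t3d0 (a hypothetical device name; substitute your own, and use the raw rdsk path so the reads actually hit the disk):

# long sequential read; the access LED stays lit while it runs (Ctrl-C to stop)
dd if=/dev/rdsk/c2t3d0s2 of=/dev/null bs=1024k

On Solaris, slice 2 conventionally covers the whole disk, so this keeps the drive busy for as long as you need to spot it.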