I am not a sysadmin, but I inherited some Linux servers that were set up with no documentation. Today one of the servers died: it became unresponsive and the VMs running on it went down. After a good few hours the server rebooted itself, so I could SSH in again, but I noticed that what used to show up in /dev as
/dev/drbd0, /dev/drbd1, etc.
is no longer there at all. I am guessing a drive, or a series of drives, went kaput. The command
cli64 vsf info
shows that my Areca disk array is checking three of its volumes, and doing it very slowly:
# Name Raid Name Level Capacity Ch/Id/Lun State
===============================================================================
1 ARC-1883-VOL#000 vm-cache Raid3 300.0GB 00/00/00 Normal
2 ARC-1883-VOL#001 data Raid6 12000.0GB 00/00/01 Checking(50.4%)
3 ARC-1883-VOL#002 apogee Raid6 9000.0GB 00/00/02 Checking(50.6%)
4 ARC-1883-VOL#004 database Raid1+0 3000.0GB 00/00/03 Normal
5 ARC-1883-VOL#005 system Raid1+0 3000.0GB 00/00/04 Normal
6 ARC-1883-VOL#006 archive Raid6 6000.0GB 00/00/05 Checking(74.3%)
7 VM-Cache Backup VM-Cache Backup Raid1+0 2000.0GB 00/00/06 Normal
8 VS Apogee Backup RS Apogee BackupRaid0 3000.0GB 00/00/07 Normal
9 ARC-1883-VOL#008 TPM Raid1+0 1500.0GB 00/01/00 Normal
10 SDSS-BACKUP-VOLU SDSS-BACKUP-RAIDRaid0 1000.0GB 00/01/01 Normal
===============================================================================
GuiErrMsg<0x00>: Success.
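While I wait for those checks, my plan is to first confirm that the OS can see the underlying Areca volumes again, and then work out which of them back the drbd resources. This is only a sketch of what I intend to run; I do not actually know where the previous admin kept the drbd configs, so /etc/drbd.d/ is an assumption on my part:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # do the big volumes show up as block devices at all?
grep -r "disk" /etc/drbd.d/           # which backing device each drbd resource expects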
My hope is that once the checks are done, the /dev/drbd devices will show up again so I can mount them and get my VM image files off of them, though I suspect that is wishful thinking. I am not sure what else to poke around at to get drbd to exist in my /dev directory again.
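In case it matters, this is roughly what I was planning to try once the array checks finish, to get the drbd devices to reappear. I am not a DRBD person, so treat this as a guess pieced together from the man pages rather than a known-good procedure:
lsmod | grep drbd || modprobe drbd    # make sure the drbd kernel module is loaded after the reboot
cat /proc/drbd                        # see what state DRBD thinks it is in
drbdadm up all                        # attach and configure every resource defined in the config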
Normally, the commands to get the VMs set up and ready to use are:
drbdadm primary --force all
mount -o noatime /dev/drbd/by-res/vm-cache /vm-cache
and then, lo and behold, /vm-cache has all the .img files. With /dev/drbd missing, that mount is of course failing.
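Once /dev/drbd0 and friends are back, I assume I should check what DRBD thinks of the backing disks before forcing primary again; this is the check I plan to run (again, guessing from the man page):
drbdadm dstate all    # disk state per resource, e.g. UpToDate / Inconsistent
drbdadm role all      # whether each resource is already Primary or Secondary
Does that approach make sense, or is there something else I should be checking to get /dev/drbd back?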