I am rather confused. One evening a server of mine died. I went to reboot it and found it stuck at the GRUB boot screen, and then noticed a hard drive had failed. I put a new hard drive in, booted into rescue mode and reinstalled GRUB.
The server booted, I then told mdadm to resync the new drive, and everything was good again.
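(For reference, the rebuild was roughly along these lines, with /dev/md0, /dev/sda and /dev/sdb standing in for the actual array and disk names:)

    # see which member had dropped out of the mirror
    cat /proc/mdstat
    # copy the partition table from the surviving disk to the new one
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    # add the new partition back into the array and watch it resync
    mdadm /dev/md0 --add /dev/sdb1
    watch cat /proc/mdstat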
Until I noticed that, for some reason, the drive had data that was 14 days old, from 26th Apr. So I had to restore a more recent backup to get the server up to date. This worries me, though: why did this happen?
Thanks
My guess: it could happen if you have two disks, /dev/sda and /dev/sdb, in the RAID1, with the MBR on /dev/sda, for example. On 26th Apr the system decided /dev/sdb was faulty (by mistake or due to some program failure) and removed it from the array. Two weeks later /dev/sda failed, and you were left with an out-of-sync mirror. As you said above, you need to set up mdadm monitoring, and I would also suggest setting up smartd (from the smartmontools package). Smartd has "rescued my life" a couple of times :)
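For example, a minimal monitoring setup could look something like this (the mail address and device names are just placeholders; file paths vary by distro):

    # /etc/mdadm/mdadm.conf -- mail an alert when a device is marked faulty
    MAILADDR admin@example.com

    # run the mdadm monitor as a daemon (most distros ship an init script for this)
    mdadm --monitor --scan --daemonise

    # /etc/smartd.conf -- watch SMART attributes, run a weekly short self-test, mail on trouble
    /dev/sda -a -o on -S on -s (S/../../7/02) -m admin@example.com
    /dev/sdb -a -o on -S on -s (S/../../7/02) -m admin@example.com

You can confirm the alerts actually reach you by running mdadm --monitor --scan --oneshot --test, which sends a test message for each array it finds.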
P.S. RAID1 is not a backup; I have had a few incidents where both drives failed at the same time, with no chance of recovering any data from them.
Perhaps your /boot is not on RAID1, only / (or your other partitions)?
Some older versions of GRUB (0.9x, I believe) could not boot from an mdadm device.
If you can boot a live CD or similar, you should be able to mount your RAID and save the data.
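Something along these lines from the live environment should work, assuming the RAID superblocks are intact (device and mount point names are just examples):

    # scan the disks and assemble whatever arrays their superblocks describe
    mdadm --assemble --scan
    cat /proc/mdstat
    # mount the assembled array read-only and copy the data somewhere safe
    mount -o ro /dev/md0 /mnt
    rsync -a /mnt/ /path/to/backup/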
You had RAID1 mirroring in place, and one of the drives failed 14 days ago. It failed hard enough for the card to stop writing to it, but not so hard that it wouldn't actually work when you tried to read/write. But since it was marked as failed, your RAID card would no longer touch it. Then, 14 days later, perhaps in response to another issue, you took out the other (more current) drive and replaced it with a blank one.
Since your failed drive hadn't been written to in two weeks, the data was two weeks old. That's what you synced over to the fresh drive, which is why it looks like your server hasn't been used in two weeks.
Presumably your OTHER drive (the one which didn't fail two weeks ago) either
A: is still good and can be used to recover your recent data, or
B: also failed, albeit more recently and perhaps with more severity
A single disk failure in RAID-1 is not catastrophic, and therefore carries no inherent signs of distress. Your computer just keeps chugging along on the remaining good drive. Unless you're actively monitoring your RAID array, you won't know about the failure until the other drive fails as well, at which point the server crashes (no working drives left).
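With mdadm (which is what you're using), checking takes a few seconds, assuming a single array called /dev/md0:

    # [UU] means both mirrors are active; [U_] means one has dropped out
    cat /proc/mdstat
    # full status, including each member's state and the array's last update time
    mdadm --detail /dev/md0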
Some RAID cards will reset the failed/good flag on a drive after a reboot under certain conditions. It's stupid, but it happens.
This sounds a lot like what happened to you.