We have a SQL Server 2000 database. The server crashed due to a RAID array failure. Now when we run DBCC CHECKDB, it reports 27 consistency errors across 9 pages.
When we run DBCC PAGE on these pages, we get this:
Msg 8939, Level 16, State 106, Line 1
Table error: Object ID 1397580017, index ID 2, page (1:8404521). Test (m_freeCnt == freeCnt) failed. Values are 2 and 19.
Msg 8939, Level 16, State 108, Line 1
Table error: Object ID 1397580017, index ID 2, page (1:8404521). Test (emptySlotCnt == 0) failed. Values are 1 and 0.
Since the indicated index is non-clustered and is created by a unique constraint on two columns, we tried dropping and recreating the index. This resulted in the following error:
CREATE UNIQUE INDEX terminated because a duplicate key was found for index ID 2. Most significant primary key is '3280'.
The statement has been terminated.
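For reference, the drop/recreate was along these lines (the constraint name here is a stand-in for our real one):

-- Drop the unique constraint (which owns index ID 2), then try to recreate it
ALTER TABLE tests DROP CONSTRAINT UQ_tests_var_result
ALTER TABLE tests ADD CONSTRAINT UQ_tests_var_result
    UNIQUE NONCLUSTERED (var_id, result_on)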
However running
SELECT var_id, result_on
FROM tests
GROUP BY var_id, result_on
HAVING COUNT(*) > 1
returns 0 rows.
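One thing we are not sure about: that query may itself be answered from the (possibly corrupt) non-clustered index rather than the base data. A variant that forces a scan of the underlying table would be, roughly:

-- INDEX(0) forces a table/clustered-index scan instead of the non-clustered index
SELECT var_id, result_on
FROM tests WITH (INDEX(0))
GROUP BY var_id, result_on
HAVING COUNT(*) > 1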
Here is what we are planning to do (a rough T-SQL sketch of the sequence follows the list):
- Restore a pre-server crash copy of the DB and run DBCC CHECKDB
- If that returns clean, then restore again with no recovery
- Apply all subsequent TLOG backups
- Stop production app, take a tail log backup and apply that too
- Drop prod DB and rename the freshly restored DB to make it prod
- Start prod app
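Roughly, the sequence we have in mind (database, file, and backup names below are placeholders):

-- After the first restore WITH RECOVERY comes back clean from DBCC CHECKDB,
-- restore the full backup again to start the log chain
RESTORE DATABASE Prod_Restore
    FROM DISK = 'D:\Backups\Prod_full.bak'
    WITH MOVE 'Prod_Data' TO 'D:\Data\Prod_Restore.mdf',
         MOVE 'Prod_Log' TO 'D:\Log\Prod_Restore.ldf',
         NORECOVERY

-- Apply each subsequent transaction log backup, staying in NORECOVERY
RESTORE LOG Prod_Restore FROM DISK = 'D:\Backups\Prod_log_01.trn' WITH NORECOVERY
-- ... repeat for each log backup ...

-- With the app stopped, back up the tail of the log on the damaged database
BACKUP LOG Prod TO DISK = 'D:\Backups\Prod_tail.trn' WITH NO_TRUNCATE

-- Apply the tail and bring the restored copy online
RESTORE LOG Prod_Restore FROM DISK = 'D:\Backups\Prod_tail.trn' WITH RECOVERY

-- Swap: drop (or rename away) the damaged database and rename the restored copy
DROP DATABASE Prod
EXEC sp_renamedb 'Prod_Restore', 'Prod'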
Could someone please punch holes in this approach, or suggest a different one? What we need is minimum downtime.
SQL 2000; DB size: 94 GB. The table with the corrupt pages has 460 million+ rows of data.
Thanks for the help.
Raj
Your recovery solution is the textbook way to proceed: assuming you have the appropriate backups, and provided you can back up the transaction log of the corrupt database, that is the strategy to implement.
Before proceeding, however, have you considered recreating just the affected table?
Sometimes you can get away with creating an exact copy of the affected table by doing a SELECT INTO a new table.
Then just drop/swap the damaged table with the new one, remembering to add the appropriate constraints and indexes.
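Something along these lines; the constraint name is a placeholder and the staging table name is up to you:

-- Copy the rows out of the damaged table, reading the base data rather than the corrupt index
SELECT * INTO tests_new FROM tests WITH (INDEX(0))

-- Swap the tables, keeping the damaged one around under another name for now
EXEC sp_rename 'tests', 'tests_damaged'
EXEC sp_rename 'tests_new', 'tests'

-- Re-add the unique constraint (and any other constraints/indexes the original table had)
ALTER TABLE tests ADD CONSTRAINT UQ_tests_var_result
    UNIQUE NONCLUSTERED (var_id, result_on)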
I'd try bulking the data out to a file first and then bulk it back into a new table. SELECT INTO isn't appropriate (IMO) for that number of records...
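A rough sketch of that, assuming the table lives in a database called Prod and that xp_cmdshell is available (server name, paths, and the staging table name are placeholders):

-- Bulk the data out to a native-format file with bcp
EXEC master..xp_cmdshell 'bcp Prod.dbo.tests out D:\work\tests.dat -n -S MYSERVER -T'

-- Create an empty copy of the table (same columns), then bulk the file back in
BULK INSERT Prod.dbo.tests_new
FROM 'D:\work\tests.dat'
WITH (DATAFILETYPE = 'native', TABLOCK)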