One of our teams is developing a database that will be somewhat large (~500GB) and grow from there (I know 500 Gigs may seem small to many of you, but it will be one of the larger databases in our shop). One of the issues they are grappling with is backing up and restoring the database. Basically, the database will have several "data" tables and one table used for storing images / documents. We need to accomplish the following:
- Be able to quickly back up and restore only the data tables (sans images) to our test server for debugging and testing purposes.
- In the event of a catastrophic database failure, restore the data tables only to get most of the application up and running ASAP. Then, restore the images table when possible.
- Back up the database within the allotted nightly time window (a few hours).

My questions are:
Is it possible to accomplish the first two goals while still having the images stored in the same database? If so, would we use filegroups, FILESTREAM, or something else? How do other shops back up their databases in a reasonable time window while maintaining high availability? Do you replicate to a second server and back up from there?
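For the first two goals, filegroups are the usual answer: put the image/document table on its own filegroup so the data tables can be backed up and restored independently, including a piecemeal restore that brings the application online before the images come back. A sketch (the database, filegroup, and path names here are hypothetical, and online piecemeal restore requires Enterprise Edition; other editions restore offline):

```sql
-- Assumption: database MyDB, with the image table placed on a
-- dedicated filegroup IMAGES; the data tables live on PRIMARY.

-- Back up only the data tables (PRIMARY filegroup):
BACKUP DATABASE MyDB
    FILEGROUP = 'PRIMARY'
    TO DISK = 'D:\Backups\MyDB_data.bak';

-- Back up the images filegroup separately, when time allows:
BACKUP DATABASE MyDB
    FILEGROUP = 'IMAGES'
    TO DISK = 'D:\Backups\MyDB_images.bak';

-- Piecemeal restore: bring PRIMARY online first so most of the
-- application can run, then restore IMAGES later.
RESTORE DATABASE MyDB
    FILEGROUP = 'PRIMARY'
    FROM DISK = 'D:\Backups\MyDB_data.bak'
    WITH PARTIAL, NORECOVERY;

RESTORE DATABASE MyDB
    FILEGROUP = 'IMAGES'
    FROM DISK = 'D:\Backups\MyDB_images.bak'
    WITH NORECOVERY;
```

Restoring only the PRIMARY backup to the test server also covers the first goal, since the image filegroup never has to travel.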
Pretty simple: DO NOT PLAN TO RESTORE.
Really? Your definition of a catastrophe is not mine, nor the rest of the world's.
In the case of a catastrophe you want to get back up ASAP, but ASAP may mean rebuilding the data center after a fire. THAT is a catastrophe.
For server failures etc., do not plan to use backups. Use replication or log file shipping to keep a second server (on a separate SAN) hot and ready to take over within a defined, short timeframe. I know companies shipping log files every 10 minutes.
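Stripped of the tooling, log shipping is just two scheduled jobs. A minimal sketch (database name, share, and file names are hypothetical):

```sql
-- On the primary, every 10 minutes: ship a transaction log backup
-- to a share the standby can read.
BACKUP LOG MyDB
    TO DISK = '\\standby\logs\MyDB_1200.trn';

-- On the standby: apply each shipped log WITH NORECOVERY so the
-- database stays in a restoring state and can accept the next log.
RESTORE LOG MyDB
    FROM DISK = '\\standby\logs\MyDB_1200.trn'
    WITH NORECOVERY;

-- At failover time: recover the standby and point the app at it.
RESTORE DATABASE MyDB WITH RECOVERY;
```

Your exposure is then bounded by the shipping interval, not by how long a 500GB restore takes.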
Pretty much your only chance. Reserve "catastrophe" for a REAL disaster, not a RAID / SAN failure: something where your question is not "how fast can I restore?" but "how fast can I get new hardware?".
Restores for dev etc. are less time-critical.