Another question from talking to the SharePoint gurus while teaching MCM yesterday. The SharePoint guidelines are that content databases above 100GB are not supported. Without getting into the reasons behind those guidelines, I'm interested to hear about content databases larger than 100GB and your experiences with them (primarily around performance, disaster recovery, and HA provisioning).
How far have you managed to push your SharePoint installation? I've heard second-hand stories of > 1TB content databases, but I'd like to hear from the SharePoint admins themselves.
Thanks for any info you have.
We have DBs at 111 GB and 102 GB that each get backed up in under 30 minutes over a GigE network. I have heard that larger databases can have problems with long-running stored procedures, but I have seen no demonstration of it.
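As a quick sanity check on those numbers (pure arithmetic, nothing SharePoint-specific):

```python
# Back-of-the-envelope: what sustained throughput does a 111 GB backup
# finishing in 30 minutes imply?
db_size_gb = 111
backup_minutes = 30

throughput_mb_s = db_size_gb * 1024 / (backup_minutes * 60)
print(f"Required sustained throughput: {throughput_mb_s:.0f} MB/s")
# ~63 MB/s, comfortably below GigE's ~125 MB/s theoretical ceiling,
# so a sub-30-minute backup over the wire is entirely plausible.
```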
A nice quote from the "Scaling SharePoint 2007: Storage Architecture" whitepaper:
"...This is commonly referred to as the “100GB content database size limitation”. In fact, this is not a true limitation but rather a recommendation. SQL Server databases have been scaling far beyond 100GB for years now. Practically speaking, the recommendation is based primarily on two significant factors:
Service Level Agreement (SLA) requirements for a given organization may dictate that backup operations for the SharePoint databases must be executable in a limited amount of time. The size of the content databases will have a direct impact on how long it takes to execute that backup.
The storage subsystem must be robust enough to handle the disk I/O requirements of the SharePoint solution that it serves.
As long as a given organization is able to mitigate these two considerations, then the content databases can be allowed to grow. Real world implementations have seen successful SharePoint deployments that have implemented database sizes of 100GB, 150GB, 200GB, 250GB, 300GB, 350GB and 400GB."
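To make the SLA factor concrete, here's a minimal sketch that runs a native full backup and times it, so you can compare the result against your backup window. The server, database name, and backup path are placeholders, and it assumes pyodbc with Windows authentication:

```python
import time
import pyodbc

# Connection string, database name, and backup path are placeholders --
# adjust for your own SQL box.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sqlserver01;DATABASE=master;Trusted_Connection=yes;")

def timed_full_backup(database, backup_file):
    """Run a native full backup and report how long it took."""
    # autocommit is required: BACKUP DATABASE cannot run inside a transaction.
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    cursor = conn.cursor()
    start = time.monotonic()
    # WITH INIT overwrites any existing backup sets in the target file.
    # The path is trusted admin input, hence the plain string formatting.
    cursor.execute(f"BACKUP DATABASE [{database}] "
                   f"TO DISK = N'{backup_file}' WITH INIT")
    while cursor.nextset():  # consume progress messages until the backup ends
        pass
    print(f"{database}: backed up in {(time.monotonic() - start) / 60:.1f} min")
    conn.close()

timed_full_backup("WSS_Content_Main", r"\\backupserver\sql\WSS_Content.bak")
```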
For day-to-day usage, database size isn't that important: most queries return the items in one list, and it doesn't matter what else is in the database. However, operations that work on the whole database become more difficult. Backups are the most obvious example; they take longer with large databases. As long as the database doesn't exceed what can be backed up overnight you'll be OK, since backups are designed to be long-running and are pretty reliable as long as you don't run out of disk space.
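If you want to keep an eye on that, a small script can estimate each content database's backup time against the window. The throughput figure, window length, connection string, and the WSS_Content naming convention are all assumptions to substitute with your own:

```python
import pyodbc

# Sketch: flag content databases whose estimated backup time no longer
# fits the overnight window. All constants below are assumptions.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sqlserver01;DATABASE=master;Trusted_Connection=yes;")
WINDOW_MINUTES = 6 * 60   # length of the overnight backup window
THROUGHPUT_MB_S = 60      # measured from your own backup timings

conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()
cursor.execute("""
    SELECT d.name, SUM(CAST(mf.size AS bigint)) * 8 / 1024 AS size_mb
    FROM sys.master_files AS mf
    JOIN sys.databases AS d ON d.database_id = mf.database_id
    WHERE d.name LIKE 'WSS_Content%'   -- assumed naming convention
    GROUP BY d.name
""")
for name, size_mb in cursor.fetchall():
    est_min = size_mb / THROUGHPUT_MB_S / 60
    verdict = "fits" if est_min <= WINDOW_MINUTES else "does NOT fit"
    print(f"{name}: {size_mb:,} MB, est. backup {est_min:.0f} min ({verdict})")
conn.close()
```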
Where you'll run into real problems is with less frequent operations like moving or upgrading content databases. These can require about 5 times the database size in free space, and they are implemented with queries that can do things like trigger out-of-control autogrow.
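A trivial preflight check before a move or upgrade can save you from that scenario; the 5x factor is just the rule of thumb above, and the size and volume are placeholders:

```python
import shutil

# Preflight for a content database move/upgrade. The ~5x factor comes
# from the answer above; database size and target volume are placeholders.
DB_SIZE_GB = 300
SAFETY_FACTOR = 5
TARGET_VOLUME = "E:\\"

free_gb = shutil.disk_usage(TARGET_VOLUME).free / 1024**3
needed_gb = DB_SIZE_GB * SAFETY_FACTOR
print(f"free: {free_gb:.0f} GB, needed: ~{needed_gb} GB")
if free_gb < needed_gb:
    print("Not enough headroom -- this is where runaway autogrow fills the volume.")
```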
We have a content database that is 300 GB in size. We've had no problems with backups since switching to LiteSpeed; before the switch we would see serious performance degradation on our web sites.
For the record, we did NOT want a content DB this large. We had specific business requirements around content sharing that would have been very difficult to meet if we had put the content in separate site collections.
When we first went live we had major locking issues with the database during peak usage. We traced this back to the use of the CrossListQueryCache object in SharePoint. Moving away from that API fixed a lot of our performance problems.
I wrote up a little blog article with more information here.
We still see locking issues with certain types of updates, such as deleting blobs larger than 20 MB and renaming webs (which can update a large number of records in the AllUserData table). We are working with MS Support on specific cases (e.g. removing large items from the recycle bin). These have been traced back to the way specific SharePoint stored procedures delete data, but we do not have a solution yet.
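For what it's worth, while one of these long-running deletes is in flight you can see who is blocking whom straight from the DMVs. A rough sketch (the connection string is a placeholder; sys.dm_exec_requests exists from SQL Server 2005 on):

```python
import pyodbc

# Quick look at blocking chains while a long delete runs.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sqlserver01;DATABASE=master;Trusted_Connection=yes;")

conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()
cursor.execute("""
    SELECT session_id, blocking_session_id, wait_type, wait_time, command
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0
""")
for r in cursor.fetchall():
    print(f"session {r.session_id} blocked by {r.blocking_session_id} "
          f"({r.wait_type}, {r.wait_time} ms, {r.command})")
conn.close()
```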
Personally, I think problems occur once you get too many records in the AllUserData table, and the easiest way for MS to communicate this to people was to say "stay below 100 GB".
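If you want a feel for how many AllUserData rows you're carrying, you can get an approximate count from the catalog views without scanning the table. Bear in mind that Microsoft does not support querying content databases directly, so treat this as a strictly read-only diagnostic; server and database names are placeholders:

```python
import pyodbc

# Approximate AllUserData row count via sys.partitions (no table scan).
# Keep this strictly read-only: direct content DB queries are unsupported.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sqlserver01;DATABASE=WSS_Content_Main;"
            "Trusted_Connection=yes;")

conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()
cursor.execute("""
    SELECT SUM(rows)
    FROM sys.partitions
    WHERE object_id = OBJECT_ID('dbo.AllUserData')
      AND index_id IN (0, 1)   -- heap or clustered index only
""")
print(f"AllUserData rows (approx.): {cursor.fetchone()[0]:,}")
conn.close()
```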
I suggest pinging the people at MS IT... I have heard off the record that they have a SharePoint Content DB > 800 GB.
Our company has a database currently at 140 GB. We are experiencing slow performance in one particular list that has been allowed to grow to 1.5 GB and contains attachments, including multiple versions of attachments. (I have only been there about two months, by the way.) We are now planning a migration, and based on our tests, migrating to SP 2010 using the Metalogix tool could take days. Ours is a poorly designed database and a poorly designed portal, and those of us who have to manage it now face real problems.
We have a backup site that is an exact copy of our production environment, but it runs on our OLD hardware from the last hardware refresh: old hardware for development, new for production. Some areas of our development environment are unusable due to performance problems, however, forcing some development on large lists to be done in production. Ouch, ouch, ouch...
That's false. There is no hard limit on size. Microsoft recommends against big databases only to make database management easier and to minimize backup/restore times. You could say the practical size limit depends on your SQL infrastructure alone.