One of the (optional) requirements for the Works With SQL Server 2008 test is that row level compression is enabled on all tables and indexes. We have an existing database with a lot of tables and indexes already created. Is there an easy way to enable compression on all these tables and indexes?
Here is the script I ended up making from splattne's recommendation.
SELECT 'ALTER TABLE [' + name + '] REBUILD WITH (DATA_COMPRESSION = ROW);'
FROM sysobjects
WHERE type = 'U' -- all user tables
UNION
SELECT 'ALTER INDEX [' + k.name + '] ON [' + t.name + '] REBUILD WITH (DATA_COMPRESSION = ROW);'
FROM sysobjects k
JOIN sysobjects t ON k.parent_obj = t.id
WHERE k.type = 'K' -- all keys
  AND t.type = 'U' -- all user tables
I've just used the Works With SQL Server tool to test after compressing with the a_hardin/splattne script above. The test failed because several indexes were not compressed.
The "sysobjects" view includes some, but not all, of the indexes; we need "sysindexes" instead. Thanks to the anonymous poster at aspfaq.com for this index insight. We also want to ignore user-defined functions.
You could use a simple SQL script to generate another script that does the job: a SELECT over the system tables that concatenates ALTER TABLE ... REBUILD and ALTER INDEX ... REBUILD statements, along the lines of the script a_hardin posted above. (I didn't test this, but it should work.)
You can find a much more sophisticated script here on the SQLServerBible site (look for "db_compression procs"). Read the author's blog post "Whole Database - Data Compression Procs".
As an aside, be careful with enabling compression on everything. The data is kept compressed in memory and decompressed every time it is accessed. For an OLTP system with lots of changes and memory-resident data, compression is not suitable: you'll burn more CPU for no gain in IOs. For data that is read occasionally, like a data warehouse, it's much more suitable, because the reduction in IOs more than pays for the extra CPU. Compression is a data warehousing feature, not an OLTP feature. Not sure if this applies to you, but it's worth pointing out just in case, and for others reading the thread.
One other point: you may not get a significant gain from compression, in which case it isn't worth turning on. It's best practice to check the expected gain before enabling it, using the sp_estimate_data_compression_savings stored procedure.
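For example (dbo.MyTable is just a placeholder name; passing NULL for the index and partition parameters checks every index and partition on the table):
-- Estimate ROW compression savings for every index/partition of a table
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'MyTable',
    @index_id         = NULL,   -- NULL = all indexes on the table
    @partition_number = NULL,   -- NULL = all partitions
    @data_compression = 'ROW';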
You should probably look to handle new tables as well, so you don't need to run this batch on a regular basis. I detailed a method for automatically compressing new tables in this blog post.
I'd also mention that you should check whether the table is already compressed before rebuilding it.
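For instance, sys.partitions exposes the current setting per partition, so a check like the following sketch (run in the target database) shows what still needs rebuilding:
-- List heaps/indexes that are not yet ROW or PAGE compressed
SELECT s.name AS schema_name,
       t.name AS table_name,
       i.name AS index_name,
       p.partition_number,
       p.data_compression_desc
FROM sys.partitions p
JOIN sys.indexes i ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.tables t ON t.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE p.data_compression = 0; -- 0 = NONE, 1 = ROW, 2 = PAGE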
I'm a little late to the party, but here's a version that uses DMVs rather than the deprecated system tables and allows for arbitrary schema names. It enables or disables row or page compression on all heaps, clustered indexes and nonclustered indexes (including all partitioned tables) in the current database:
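The original script isn't reproduced here, but a minimal sketch of the same idea (it only generates the ALTER statements for ROW compression rather than executing them, and limits itself to heaps, clustered and nonclustered indexes on user tables) would be:
DECLARE @compression nvarchar(10) = N'ROW'; -- or N'PAGE' / N'NONE'

SELECT CASE
         WHEN i.index_id IN (0, 1) -- heap or clustered index: rebuild the table itself
           THEN 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
              + ' REBUILD WITH (DATA_COMPRESSION = ' + @compression + ');'
         ELSE 'ALTER INDEX ' + QUOTENAME(i.name)
              + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
              + ' REBUILD WITH (DATA_COMPRESSION = ' + @compression + ');'
       END AS rebuild_statement
FROM sys.indexes i
JOIN sys.tables t ON t.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.is_ms_shipped = 0 -- user tables only
  AND i.type IN (0, 1, 2); -- heap, clustered, nonclustered (XML/spatial indexes can't be compressed)
Rebuilding without a PARTITION clause rebuilds every partition, so partitioned tables are covered as well.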