Why should you? Under normal circumstances it will just grow back again anyway until the next backup. Plus it fragments the file, which is bad for performance. Best practices say not to use autogrow, which in turn means not to shrink files to the point where they need to grow again.
It all depends on what you mean by the question.
Generally speaking, you shouldn't need to shrink your transaction log on a regular basis.
However, sometimes it is needed in order to defragment the log, or to recover space after a runaway transaction.
Fragmentation
This code will show you the internal structure of your transaction log.
-- get list of VLFs
use AdventureWorks2008R2
go
dbcc loginfo
go
What you want to see is all of your VLFs having the same size. If you have a percentage growth setting, or a very small one, you end up with a fragmented transaction log. You also want neither too few nor too many VLFs. See this article: http://www.sqlskills.com/blogs/kimberly/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
So, do you have hundreds or thousands of VLFs? Are they all different sizes? If so, your transaction log is fragmented.
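If you'd rather get a quick count than eyeball the output, here is a sketch that dumps DBCC LOGINFO into a temp table and summarizes it. The column list matches SQL Server 2008 R2; 2012 and later add a RecoveryUnitId column at the front, so adjust the temp table if needed.

```sql
-- count the VLFs and see how many distinct sizes they have
-- (column list matches SQL Server 2008 R2; newer versions add RecoveryUnitId)
use AdventureWorks2008R2
go
create table #vlf (
    FileId      int,
    FileSize    bigint,
    StartOffset bigint,
    FSeqNo      int,
    Status      int,
    Parity      tinyint,
    CreateLSN   numeric(25, 0)
);
insert into #vlf
exec ('dbcc loginfo');

select count(*)                 as VlfCount,      -- hundreds or thousands is bad
       count(distinct FileSize) as DistinctSizes  -- more than a few suggests fragmentation
from #vlf;

drop table #vlf;
go
```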
Fixing It
Shrinking a data file fragments it. Shrinking a transaction log defragments it.
Run this code again:
-- get list of VLFs
use AdventureWorks2008R2
go
dbcc loginfo
go
The row with a Status of 2 is the active VLF. It's probably somewhere in the middle; we want it at the beginning. You won't be able to shrink the log past the location of the active VLF.
Run your LOG Backup Job several times. Then run the above code again. Keep doing this until the active VLF is at or near the beginning of the transaction log.
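If you don't have a backup job handy, a manual log backup looks like this (the backup path here is hypothetical; point it at your own backup location):

```sql
-- take a log backup so the active VLF can cycle back toward the start of the file
backup log AdventureWorks2008R2
    to disk = 'D:\Backup\AdventureWorks2008R2_log.trn'
go
```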
Shrinking It
Run this code to shrink your log file.
-- shrink the file, reducing the count of VLFs, thereby defragging the transaction log
dbcc shrinkfile('AdventureWorks2008R2_Log', 1)
go
Then go back and run the above code to check your VLFs. You should see a reduction in the number of VLFs.
You may have to repeat the log backup/shrinkfile routine a few times.
Sizing It
After your system has been running for a few business cycles, you should have a good idea of what its natural size tends to be. That is the size to set it to after shrinking/defragmenting it.
First set the growth:
-- manually set the growth
use master
go
alter database AdventureWorks2008R2
modify file (name = 'AdventureWorks2008R2_Log', filegrowth = 512000kb)
go
Then size it:
-- manually set the log size
use master
go
alter database AdventureWorks2008R2
modify file (name = 'AdventureWorks2008R2_Log', size = 4096000kb)
go
Your numbers, of course, will be different. One thing: if your log is going to be large, say 32GB, don't set it to that size in one go. Instead, grow it in steps: 8GB, then 16GB, 24GB, 32GB.
One more thing: avoid exact multiples of 4GB, to avoid the 4GB growth bug. Reference here: http://www.sqlskills.com/BLOGS/PAUL/post/Bug-log-file-growth-broken-for-multiples-of-4GB.aspx
So I use 4000MB or 8000MB, etc.
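As a sketch, stepping a log up to roughly 32GB might look like this (the target sizes are illustrative):

```sql
-- grow the log in steps rather than in one jump
use master
go
alter database AdventureWorks2008R2
modify file (name = 'AdventureWorks2008R2_Log', size = 8000mb)
go
alter database AdventureWorks2008R2
modify file (name = 'AdventureWorks2008R2_Log', size = 16000mb)
go
alter database AdventureWorks2008R2
modify file (name = 'AdventureWorks2008R2_Log', size = 24000mb)
go
alter database AdventureWorks2008R2
modify file (name = 'AdventureWorks2008R2_Log', size = 32000mb)
go
```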
Autogrow and MaxSize
While you do want to manually size your files, leave Autogrow turned on so you don't get caught by a rogue process.
Generally I don't recommend setting the MaxSize unless you have multiple Transaction Logs sharing the same LUN, or if you have a database that regularly runs a crazy query which fills the drive.
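If you do fall into one of those cases, capping the log looks like this (the cap value is illustrative):

```sql
-- cap the log file so a runaway transaction can't fill the drive
use master
go
alter database AdventureWorks2008R2
modify file (name = 'AdventureWorks2008R2_Log', maxsize = 64000mb)
go
```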
If you are using transaction log backups for databases in the FULL recovery model, you will keep the transaction log in check and not need to shrink it. You will find special cases where you may need to do this, but never on a regular basis.
The log shrink is not as evil as a data file shrink. Do it only if the database is in the Simple recovery model and the file has grown too large after a one-off operation, to a size you know it will not reach again. You can shrink it to a given value, so it does not have to allocate the space again at the next transaction, which would slow performance. Don't do it on a regular basis.
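For example, shrinking back to a known working size rather than to the minimum (using the sample database from the answer above; the 2048MB target is illustrative):

```sql
-- shrink the log to ~2GB instead of as small as possible
use AdventureWorks2008R2
go
dbcc shrinkfile('AdventureWorks2008R2_Log', 2048)
go
```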
If the database is in Full, you should back up the log on a regular basis.