I have an SQL Server instance whose memory usage gradually grows until Windows will give it no more. It seems logical that the occasional big query result would cause the instance to grow.
Is there a way I can convince SQL Server to release the memory it doesn't need any more (other than restarting the service)?
Edit:
I'm using SQL Server 2000
SQL Server 8.00.2039 - SP4 (Standard Edition)
I was able to find that out using the following query:
SELECT 'SQL Server '
+ CAST(SERVERPROPERTY('productversion') AS VARCHAR) + ' - '
+ CAST(SERVERPROPERTY('productlevel') AS VARCHAR) + ' ('
+ CAST(SERVERPROPERTY('edition') AS VARCHAR) + ')'
This is exactly how SQL Server is supposed to work.
If you have other services on that machine and don't want SQL to consume all the available memory, you will need to set the maximum server memory. See SQL Server Memory Options on MSDN.
The other posters are correct that this is by design, but you absolutely want to limit the max memory to a bit less than your server's RAM. Think about this sequence of events:
1. SQL Server gradually claims nearly all of the physical RAM for its buffer cache.
2. Something else on the box suddenly needs a big chunk of memory (a backup job, a file copy, an antivirus scan, or someone opening a Remote Desktop session).
3. Windows has to page SQL Server out to satisfy that request, and performance falls off a cliff while it does.
To avoid this, configure your max server memory limit to something around 80-90% of your actual physical memory. Instructions for SQL 2000 at: http://msdn.microsoft.com/en-us/library/ms178067.aspx
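If you would rather script the change than click through the UI, something along these lines should work (the 6144 MB figure below is only an illustration; pick roughly 80-90% of your machine's physical RAM):

    -- 'max server memory (MB)' is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Example only: cap the buffer pool at 6144 MB (e.g. on an 8 GB box)
    EXEC sp_configure 'max server memory', 6144;
    RECONFIGURE;

The setting is dynamic, so it takes effect as soon as RECONFIGURE runs; you don't need to restart the service.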
It will only release memory if the OS signals that it is RAM-starved, or if you stop and restart the service; the thing to do is limit the maximum amount SQL Server will use by configuring the 'max server memory' value. If there's nothing else on the server that needs the RAM (and hopefully there isn't), I wouldn't worry about it.
So, to summarise the answers:
There is no way to prompt MS SQL Server to release memory it doesn't immediately need. SQL Server will release memory automatically when the operating system requires it, but not before then. And if you are having memory issues, you should reduce the value of the "max server memory" option.
SQL Server will consume memory and not give it back unless it is told by the operating system that there is memory pressure. As Portman has indicated, this is by design, and if you want to limit the memory consumption, you need to set the maximum server memory SQL Server will use.
Remember that the behavior you're all describing is from SQL Server 2005 onwards, when the memory manager was rewritten to (among other things) respond to memory pressure requests from the OS.
For SQL Server 2000 and before, once it grabs memory it won't give it back, no matter how much the OS shouts for it.
CodeSlave - are you running on 2000, 2005, or 2008?
Old question, I know, but a way of forcing (newer, at least) SQL Server to release memory is to write an application that allocates as much memory as it can in chunks, waits for (say) 15 seconds (e.g. Sleep(15000)), and then frees the allocated memory and exits. I tried this and SQL Server does release the memory, so the system gets its RAM back. Writing code like that is almost trivial in C/C++: just set up a linked list of structs to hold the memory block chain (pointer and size), progressively reduce the request size whenever a malloc() fails until it drops below some minimum (say 1024 bytes), and then traverse the linked list to free the allocated blocks.
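For what it's worth, here's a minimal sketch of that idea in C. It's only an illustration of the approach described above (allocate until malloc() fails, halve the request size, sleep, then free everything); the 64 MB starting chunk, the 1 KB floor and the 15-second wait are arbitrary choices of mine, not anything SQL Server requires, and be warned that the whole box will crawl while it runs:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <windows.h>

    /* Each allocated block starts with this header so the whole chain
       can be walked and freed again later. */
    struct block {
        struct block *next;
        size_t        size;
    };

    int main(void)
    {
        struct block *head  = NULL;
        size_t        size  = 64 * 1024 * 1024;   /* start with 64 MB requests */
        size_t        total = 0;

        /* Keep allocating, halving the request whenever malloc() fails,
           until even ~1 KB requests are refused. */
        while (size >= 1024) {
            struct block *b = malloc(size);
            if (b == NULL) {
                size /= 2;
                continue;
            }
            memset(b, 0, size);   /* touch every page so the OS must back it with physical RAM */
            b->next = head;
            b->size = size;
            head    = b;
            total  += size;
        }

        printf("Holding about %lu MB; waiting 15 seconds...\n",
               (unsigned long)(total / (1024 * 1024)));
        Sleep(15000);             /* give the OS time to squeeze SQL Server's working set */

        /* Walk the chain and give everything back. */
        while (head != NULL) {
            struct block *next = head->next;
            free(head);
            head = next;
        }
        return 0;
    }

Needless to say, setting 'max server memory' properly is the cleaner fix; this is just a blunt instrument for when you need the RAM back without restarting the service.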