I have quite a few servers deployed around the world. They are running Windows 2003 x64 with SQL Server 2005 x64 and 6 GB of RAM. The boxes do not have the best (or even an acceptable) configuration, because the guy who ordered them years ago didn't really know what he was doing.
The boxes fairly consistently run out of memory; they end up hitting the paging file and everything slows down. Typically the commit charge is 5.8 GB, and when someone needs to do something intensive (e.g. run a report), that number goes through the roof.
I've been trying to get the powers that be to order more memory, but I'm getting massive opposition (e.g. "make the software more performant", "it costs too much for all these servers", "prove that the box doesn't have enough memory", etc.).
Are there guidelines (or a formula) for how much RAM a box needs that I can present to non-techies, so that we can finally order more memory?
An easy way to see if you need more RAM is to chart the Page Life Expectancy perfmon counter. This counter tells you how long SQL Server thinks data will be kept in the buffer pool before it needs to make room for other data. You want this number as high as possible. With 6 gigs of RAM installed (you should have SQL set to max out at probably 4 gigs), you'll probably only keep data in memory for a few minutes at most, and when someone runs a large report you'll see this number tank down to a few seconds. The more RAM you have, the longer data can be kept in memory, and the less reading from the disks needs to be done.
For example, the systems I'm working with at the moment have 256 Gigs of RAM and we keep data in memory for about 12000 seconds or so.
Please don't ask for a target number to hit; you just want the number as high as possible. Without knowing a LOT more about your systems I can't give you a good number to shoot for.
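If you'd rather pull Page Life Expectancy straight from SQL Server instead of charting it in perfmon, something along these lines should work (a rough sketch, assuming SQL Server 2005 or later; the 4096 MB cap is just the example figure from above):

    -- Current Page Life Expectancy, in seconds, from the buffer manager counters
    SELECT [object_name], [counter_name], [cntr_value] AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE [counter_name] = 'Page life expectancy'
      AND [object_name] LIKE '%Buffer Manager%';

    -- Capping SQL Server's memory as mentioned above (4096 MB is just an example value)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;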
There's not really any easy way to tell, because it depends entirely on your usage and the application. You're maxing out a database server... how big is the database? What are your transaction stats?
The real-world limitations are obvious in your scenario. You run for a while on 6 gig without problems, then it's swapping and thrashing. Thus 6 gig isn't enough.
If performance is degraded enough to impact the business, then your higher-ups should be hearing enough complaints to make adding memory the prudent move. Figure out what your time costs, then figure out how much it will cost to "tune" the server and troubleshoot the tuning, when memory added to the server may very well solve the issue for the cost of the memory and less than half an hour of downtime.
You won't know the exact amount of memory you need until you actually deploy under real-life usage and work from there.
That said, you might want to verify whether your application is truly the bottleneck. Run Windows Performance Monitor to see your disk I/O statistics and network throughput. See what your fragmentation level is as well (Google is a good friend here). You could also try auditing the code for obvious issues where a query is massively inefficient (Google again).
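For the fragmentation check, a rough sketch against the index physical stats DMV is below (assuming SQL Server 2005 or later, run in the database you care about; the 30% and 1000-page thresholds are just common rules of thumb, not anything official):

    -- Indexes in the current database that are heavily fragmented and big enough to matter
    SELECT OBJECT_NAME(ips.[object_id]) AS table_name,
           i.[name] AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.[object_id] = ips.[object_id]
     AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
      AND ips.page_count > 1000
    ORDER BY ips.avg_fragmentation_in_percent DESC;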
But again it all depends on how badly this is impacting the business. Is it worth more to invest in the tuning, or is it bad enough to throw hardware at it first and then try tuning it?
Hmmmm. Well, 6 gigs is a decent amount of RAM, even for a big MSSQL install. You might actually want to look and make sure that your code really IS efficient. A 6 gig transaction is a bit unusual... I've worked on state-wide payroll systems that didn't top a gig on year-end 1099 processing... And to have one running often? I don't know. What kind of data are you working with?
That being said, you can stuff as much RAM as you like in a 64-bit box, and RAM is dirt cheap, so you might as well put as much in there as you possibly can... You can't really have too much RAM on a database server.
Edit: This is wildly out of date now. I have MSSQL boxes with 256 gigs of RAM.
Edit: This gets funnier every five years.
Before you jump the gun on buying more memory (or any other component) I would recommend running a performance analysis on the server. You can do this on your own using perfmon or you can look at using third party tools. You should analyze performance of both the OS and SQL server. IMHO, too often are we ready to throw hardware at a problem before a proper analysis has been done. For all you know at this point it could be a problem with a query, stored procedure, execution plan, disk I/O, CPU utilization, etc., etc. Memory pressure can often be a symptom of another bottleneck in the system.
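If you want some numbers to back that analysis, one place to start (a sketch, assuming SQL Server 2005 or later) is the accumulated wait statistics; dominant PAGEIOLATCH_* waits usually point at buffer pool / memory pressure, while other wait types point at different bottlenecks:

    -- Top waits since the last restart, filtering out a few harmless idle waits;
    -- heavy PAGEIOLATCH_* suggests memory pressure, WRITELOG points at the log disk, etc.
    SELECT TOP 10
           wait_type,
           wait_time_ms / 1000.0 AS wait_time_sec,
           waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
                            'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'CHECKPOINT_QUEUE')
    ORDER BY wait_time_ms DESC;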
as "Satanicpuppy" said, there is no such thing as too much RAM, but 6GB should be ok, maybe you should re-think on what your server does, I don't think that you have a "hardware" problem, you should focus on your SQL programming...
When it comes to database servers, there's no such thing as "enough" memory. Sure, it depends on what they actually do and run, but if it's a constantly utilized database containing a lot of data and handling complicated queries, 6 GB could easily be grossly inadequate.
I'd start by upgrading one troublesome server to at least 32 or 64 GB and see if it helps. If not, turn to database tuning, application troubleshooting and debugging - which all, unless an idiot designed the database, cost a lot more than a few sticks of server-grade memory (and even if an idiot designed the thing, getting even obvious design errors fixed with retained support could prove quite a challenge).
That said, as someone else stated, it could be something else holding it back (apart from software design issues), such as a lack of disk or network I/O performance. Hiring a DBA pro to go through basic SQL performance monitoring for even a day could prove useful.
You should look at building more indexes. I think that in general, most people under-index their database.
This is still air-code; I haven't fully tested it yet, but it should get you in the right direction:
http://accessadp.com/2011/08/22/missing-indexes-great-script-for-determining-roi/
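That script is built around the missing-index DMVs; in case the link goes stale, a bare-bones query in the same spirit (again untested air-code on my part, assuming SQL Server 2005 or later, and not the exact script from that post) looks roughly like this:

    -- Missing-index suggestions recorded by the optimizer, with a rough benefit score
    SELECT mid.[statement] AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) AS rough_benefit
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs ON migs.group_handle = mig.index_group_handle
    ORDER BY rough_benefit DESC;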