Recently, our server admin told me that the new servers we'd ordered with 140GB of RAM had "too much" RAM, and that servers start to suffer with more than about 80GB, since that was "the optimal amount". Was he blowing smoke, or is there really a performance problem past a certain amount of RAM? I can see the argument - more for the OS to manage, etc. - but is that legitimate, or does the extra breathing room more than make up for the management overhead?
I'm not asking "Will I use it all" (it's a SQL Server cluster with dozens of instances, so I suspect I will, but that's not relevant to my question), but just whether too much can cause problems. I'd always assumed that more is better, but maybe there's a limit to that.
There are a few thresholds out there for 'too much', though they're special cases.
In 32-bit land, PAE is what lets you address memory above the 4GB line. It extends physical addresses from 32 to 36 bits, so the theoretical max for 32-bit machines is 64GB of RAM - still less than 80GB.
From there we get processor-specific limits. Current 64-bit processors use between 40 and 48 bits internally for physical addressing, which gives a maximum memory limit of between 1TB and 256TB. Both are way more than 80GB.
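As a quick sanity check on those limits (plain arithmetic; the 36-, 40- and 48-bit figures come straight from the discussion above):

```python
def max_addressable_bytes(address_bits: int) -> int:
    """Maximum physical memory reachable with the given number of address bits."""
    return 2 ** address_bits

GB = 2 ** 30
TB = 2 ** 40

# 32-bit + PAE: 32 + 4 extra bits = 36-bit physical addresses
print(max_addressable_bytes(36) // GB)  # 64 (GB)

# Typical 64-bit CPUs: 40 to 48 physical address bits
print(max_addressable_bytes(40) // TB)  # 1 (TB)
print(max_addressable_bytes(48) // TB)  # 256 (TB)
```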
Unless he has some clear reasons for why SQL Server can't handle that much memory, the base OS and hardware can do so without breaking a sweat.
He was blowing smoke. If he'd said 4GB and you were using 32-bit operating systems then he might have had half of an argument, but no, 80GB is just a number he's pulled out of the air.
Obviously there are some problems if memory isn't bought wisely. For instance, larger DIMMs usually cost more than twice the price of the half-size versions (i.e. 16GB DIMMs are more than twice the price of 8GB DIMMs), and you can slow a machine down quite a way by not using the right number/size/layout of memory, though it'll still be very fast. Also, of course, the more memory you have the more there is to break, but I'm sure you'll be happy with that system for what you're asking of it.
I apologize, but most of these answers are incorrect. There is, in fact, a point at which more RAM will run slower. On HP servers with 18 slots, like the G7, filling all 18 slots will cause the memory to run at 800 instead of 1333. See here for some specs:
http://www8.hp.com/h20195/v2/GetHTML.aspx?docname=c04286665
(Click on Memory, of course.)
A typical memory config with 12 slots filled would be 48GB (all 4GB DIMMs), 72GB (a mix of 8s and 4s), 96GB (all 8s), etc. When you say "140G" I assume you really mean 144GB, which would very likely be 8GB DIMMs in all 18 slots. That would in fact slow you down.
Now, from the research I've done, it appears the slower memory speed doesn't affect a lot of applications, but one thing it is known to affect is database apps. Since you say this is for a SQL cluster, then yes, in this case too much RAM could slow you down.
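To sketch the capacity/speed trade-off I'm describing (the channel count and the 1333-to-800 speed step here are assumptions drawn from the QuickSpecs linked above, not measurements):

```python
# Assumed topology for an 18-slot G7-era box: two sockets x three
# DDR3 channels = 6 channels, so 3 slots per channel.
CHANNELS = 6

# Assumed speed table: 1 or 2 DIMMs per channel run at 1333, but
# populating the third slot on each channel drops memory to 800.
SPEED_BY_DIMMS_PER_CHANNEL = {1: 1333, 2: 1333, 3: 800}

def memory_speed(slots_filled: int) -> int:
    dimms_per_channel = -(-slots_filled // CHANNELS)  # ceiling division
    return SPEED_BY_DIMMS_PER_CHANNEL[dimms_per_channel]

def total_gb(slots_filled: int, dimm_gb: int) -> int:
    return slots_filled * dimm_gb

print(total_gb(12, 8), memory_speed(12))  # 96 1333 - "all 8s", 2 DIMMs/channel
print(total_gb(18, 8), memory_speed(18))  # 144 800 - every slot filled
```

So the 144GB box has 50% more RAM than the 96GB layout but runs its memory bus a third slower.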
It's possible the server admin you talked to knew this from practical experience without knowing the exact technical reason.
Hope that helps,
-Jody-
Take it to an extreme and say you have petabytes of memory: the system (CPU) is not going to work harder to manage the memory mappings. The OS should be smart enough to use the spare memory for disk caching while still having plenty for application space (memory and code). Mapping memory in RAM vs. virtual space consumes the same number of CPU cycles.
Ultimately the only thing to suffer with too much memory is wasted energy.
I found at least one scenario where you can have too much RAM. Admittedly this is a software limitation, not a hardware limitation.
Java applications (like Elasticsearch) suffer when given a heap larger than about 32GB of RAM, because at that point the JVM can no longer use compressed object pointers (compressed oops).
Additional Information:
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/heap-sizing.html
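The ~32GB figure falls out of how compressed oops work: the JVM stores 32-bit object references and scales them by the default 8-byte object alignment. A quick back-of-the-envelope check:

```python
# Compressed oops: a 32-bit reference, shifted to exploit the JVM's
# default 8-byte object alignment, can address 2^32 * 8 bytes of heap.
OOP_BITS = 32
OBJECT_ALIGNMENT = 8  # bytes; the HotSpot default

max_compressed_heap = (2 ** OOP_BITS) * OBJECT_ALIGNMENT
print(max_compressed_heap // 2 ** 30)  # 32 (GB)
```

Above that heap size the JVM falls back to plain 64-bit pointers, so every reference doubles in size and you effectively lose memory by adding it.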
Assuming the CPU can actually use the RAM (it's not one of the special thresholds that sysadmin1138 mentioned), more RAM can't possibly hurt performance.
However, since you have a limited budget, there may indeed be some "optimal" amount of RAM -- if you spend more money on RAM, then you have less money for CPU(s) and hard drive(s) and IO. If something other than RAM is the bottleneck, then adding more RAM doesn't help performance (although it doesn't hurt performance, either), and it costs money that could instead be applied to opening up the bottleneck.
(I'm neglecting the cost of electricity to power the servers and the cost of electricity to cool them - those costs can have a big effect on "optimizing" hardware selection in a data center.)
Just throwing this small piece of information into the loop.
Startup/boot time is always affected by extra RAM - it has to be counted, and on one of my servers a full reboot or startup takes 30 minutes even with fast boot on, just to count the extra RAM (384GB on this server), and that's before the OS even starts booting. Hopefully you won't have to reboot your server often, but I figured I would mention this since no one else did.
I agree with all the above in general - in most situations more is better, with the exclusions noted considered.
Final Thought - Always remember the (probably apocryphal) quote about never needing more than 640K of memory that is attributed to Bill Gates.