I am trying to understand what I believe is called a balanced memory configuration. For the newest Intel CPUs at this point in time (January 2018), for example the Xeon 8180 and Xeon 6154, how many DIMMs per CPU socket should be used to get the best memory performance?
For a server board with, say, 4 Intel Xeon 6154 CPUs, there is a choice between RDIMMs and LRDIMMs. I don't know whether that warrants extra discussion, but for now I am using RDIMMs because of pricing and because I do not need more than 768 GB of system RAM.
For the RDIMM choice there is 8 GB, 16 GB, or 32 GB per DIMM. There are 12 memory sockets per CPU socket, so 48 sockets total on the board, allowing for the maximum of 768 GB of RAM for the Xeon 6154 or 8180.
Note: the maximum memory supported by the 8180 or 6154 is 768 GB per the Intel spec sheet; I am not sure if that is per CPU or for the system as a whole. If more than that is needed, there are the Xeon 8180M and 6154M, which support up to 1.5 TB, and again I am not sure if that is 1.5 TB for just that CPU. Anyway, that is above and beyond what I need or am asking.
My question is, for the choice I make on getting server(s), which I want to have N amount of RAM each, where N would be 192 GB, 256 GB, 384 GB, 512 GB, 640 GB, or 768 GB:

1. How would I know the minimum amount of RAM that should be in the system?
2. Choosing either 8 GB or 16 GB DIMMs obviously affects (1), but for argument's sake let's say I want the server to have either 256 GB or 384 GB of system RAM. Why should I choose one RDIMM size over another, and how should I populate the 48 memory slots for the best memory performance? I have come across articles talking about memory performance being reduced by some percentage, and it being really bad if you mess up how you populate the memory slots.
3. For something like the Xeon 6154 or 8180, should there be 6 DIMMs installed per CPU socket for best memory performance, bandwidth, etc.? Or can there be just 2 DIMMs per CPU socket? I ask because the spec sheet lists "Max # of Memory Channels: 6", and this is the question I am really looking for an answer to. How bad is having 1 or 2 DIMMs per CPU versus 6 DIMMs per CPU?
4. What effect is there if 2, 3, 4, 6, or 8 DIMMs per CPU socket are installed, resulting in whatever multiple of 8 or 16 gives a total system RAM between 192 GB and 768 GB?
Basically, the CPU has the fastest memory access with one DIMM per channel.
Adding more DIMMs increases the load on the memory bus and may decrease the RAM clock.
Dual-rank and quad-rank DIMMs put a higher load on the bus and can decrease the clock further than lower-rank modules do. Of modules of the same size, those with fewer ranks are generally better.
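To put rough numbers on these effects: peak theoretical bandwidth per socket is channels × transfer rate × 8 bytes. Here's a back-of-the-envelope sketch (the derating to 2400 MT/s at 2 DIMMs per channel is purely illustrative; the real derating table depends on the CPU, board, and DIMM ranks, so check your board's manual):

```python
# Theoretical peak DDR4 bandwidth for one CPU socket:
# populated_channels * transfer_rate (MT/s) * 8 bytes (64-bit bus width)
def peak_bandwidth_gbs(populated_channels: int, mt_per_s: int) -> float:
    return populated_channels * mt_per_s * 8 / 1000  # MB/s -> GB/s

# Skylake-SP (Xeon 6154/8180) has 6 memory channels and supports DDR4-2666.
print(peak_bandwidth_gbs(6, 2666))  # ~128 GB/s: 1 DIMM in each of 6 channels
print(peak_bandwidth_gbs(2, 2666))  # ~43 GB/s: only 2 channels populated
print(peak_bandwidth_gbs(6, 2400))  # ~115 GB/s: 6 channels, hypothetically derated
```

The point of the last line: a second DIMM per channel never adds peak bandwidth, and if the board derates the clock it can actually lower it.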
Registered DIMMs (RDIMMs) decrease the bus load and may increase the maximum number of modules and ranks, but come with a (slight) latency penalty.
Load-reduced DIMMs (LRDIMMs) decrease the bus load even further (possibly enabling more DIMMs/ranks or a higher clock rate) and have the same latency penalty as RDIMMs.
Unbuffered DIMMs (UDIMMs) are the fastest but only allow very few modules and ranks. Usually they're only found in basic entry-level servers.
Low-voltage DIMMs save energy but the lower voltage swing often decreases the maximum RAM clock that's possible.
Generally, you can't mix UDIMMs, RDIMMs, and LRDIMMs in a system. You can usually mix low-voltage and normal DIMMs, but the LV DIMMs will then run at the higher voltage.
The exact metrics for your system should be in the manual. There's no one rule for all.
1. I'm not sure what you mean by minimum. This would depend on your specific application requirements, and the software vendor would be the one to tell you.
2. The memory slot population will be described in the manual for the server/motherboard. It should have specific instructions on which slots to populate and in which order.
3. Not fully utilizing the available memory channels will reduce performance, but it always depends on what you're running on the server and how dependent it is on memory bandwidth specifically. This might also answer 4: as you increase the number of DIMMs per socket, you'll see increased performance until you reach the number of memory channels per CPU. After that, performance might degrade slightly as multiple DIMMs have to share the same memory channel.
https://www.pcworld.com/article/2982965/components/quad-channel-ram-vs-dual-channel-ram-the-shocking-truth-about-their-performance.html
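To make the "balanced" part concrete for your 4-socket example, here's a small sketch (my own illustration, not from any manual) that only accepts configurations where every socket gets identical DIMMs and all 6 channels are populated evenly, i.e. 6 or 12 DIMMs per socket:

```python
SOCKETS = 4
CHANNELS = 6                               # memory channels per Skylake-SP socket
DIMM_SIZES_GB = (8, 16, 32)                # available RDIMM sizes
TARGETS_GB = (192, 256, 384, 512, 640, 768)

for target in TARGETS_GB:
    options = [
        f"{CHANNELS * dpc} x {size} GB per socket"
        for size in DIMM_SIZES_GB
        for dpc in (1, 2)                  # 1 or 2 DIMMs per channel
        if SOCKETS * CHANNELS * dpc * size == target
    ]
    print(f"{target} GB: {', '.join(options) or 'no balanced configuration'}")
```

Under these assumptions, 192 GB (6 x 8 GB per socket), 384 GB (12 x 8 GB or 6 x 16 GB per socket), and 768 GB (12 x 16 GB or 6 x 32 GB per socket) come out balanced, while 256, 512, and 640 GB would force you to leave channels empty or mix DIMM sizes, which is exactly the situation those "reduced performance" articles warn about.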
I'm basing this on a few assumptions so let me know if you have any questions/corrections.
I had part of the same question as you did. I raised it here and documented an answer.
Does having more than one DIMM per channel impact memory bandwidth? Yes, it does. Having one DIMM per channel seems ideal for leveraging maximum memory bandwidth.
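If you want to double-check how an existing Linux box is actually populated (and what clock the DIMMs negotiated), dmidecode exposes the SMBIOS memory records. A minimal sketch (needs root; the exact field labels vary between dmidecode versions and BIOSes):

```python
import subprocess

# Dump the SMBIOS "Memory Device" records.
out = subprocess.run(
    ["dmidecode", "--type", "memory"],
    capture_output=True, text=True, check=True,
).stdout

# One record per DIMM slot; print locator, installed size, configured speed.
for block in out.split("\n\n"):
    if "Memory Device" not in block:
        continue
    fields = dict(
        line.strip().split(": ", 1)
        for line in block.splitlines()
        if ": " in line
    )
    # "Configured Clock Speed" in older dmidecode, "Configured Memory Speed" in newer.
    speed = fields.get("Configured Clock Speed") or fields.get("Configured Memory Speed")
    print(fields.get("Locator"), "|", fields.get("Size"), "|", speed)
```

Empty slots show up as "No Module Installed", so this also makes it obvious when channels are left unpopulated.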