We rent our servers from a local hosting partner: they manage the setup and settings, and we just use them. We also have admin rights, but management is on their side, so if I change anything important I let them know beforehand. Recently I noticed that at least some servers have their power options set to the Balanced plan. Since this option is the recommended one in Windows Server 2012, I don't understand why it could also be the worst choice. We all want performance over energy savings on a server, I assume, so why is that value still the recommended default?
Also, I don't see exactly what changes when I switch it to High performance. Does anyone have test results from a server run under the same circumstances once on Balanced and once on High performance?
For me it's clear that it should be set to High performance, but I would like to understand the details. To my understanding, the only negative effects are a higher electricity bill and perhaps more wear on the hardware. Is that correct?
If I open the details of the power plan on my local machine, I see options for the CPU under Processor power management; on the server there is only System cooling policy under Processor power management. It seems the CPU is not throttled in any case. These settings appear to be the same under all plans.
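In case it helps: the additional processor settings usually still exist on the server, they are just hidden from the Control Panel UI by default. From an elevated PowerShell prompt, `powercfg` can reveal and inspect them. The `SUB_PROCESSOR`, `PROCTHROTTLEMIN`, and `PROCTHROTTLEMAX` aliases below are the standard built-in ones; treat this as a sketch and try it on a non-production box first:

```shell
# List all power schemes and show which one is active
powercfg /list

# Unhide the "Minimum/Maximum processor state" settings so they
# appear under "Processor power management" in the power plan UI
powercfg /attributes SUB_PROCESSOR PROCTHROTTLEMIN -ATTRIB_HIDE
powercfg /attributes SUB_PROCESSOR PROCTHROTTLEMAX -ATTRIB_HIDE

# Dump the processor-related values of the currently active scheme
powercfg /query SCHEME_CURRENT SUB_PROCESSOR
```

After unhiding, you should see the Minimum/Maximum processor state sliders under all plans, which makes it possible to compare what Balanced and High performance actually configure.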
Short Answer With modern processors that have fast C6 (core/module power gating) capabilities, the difference in power consumption between the two power profiles is negligible. On the other hand, due to how different CPUs behave in power-saving mode, you can lose considerable performance using the Balanced profile. So I advise using the High performance profile, unless you have good reasons to choose a different one.
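Switching is a one-liner with `powercfg` from an elevated prompt. Note the slightly counterintuitive built-in alias: `SCHEME_MIN` means minimum power saving, i.e. High performance. A sketch; coordinate with your hosting partner before changing it:

```shell
# Activate the High performance plan (SCHEME_MIN = minimum power saving)
powercfg /setactive SCHEME_MIN

# Verify: the active scheme is marked with an asterisk
powercfg /list
```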
Long Answer The different power profiles typically tune three key areas: how aggressively the CPU clock and voltage are scaled under load (P-states), how deep the CPU is allowed to sleep when idle (C-states), and the power management of peripheral devices and external links.
How does the information above affect the power governor/profile? Basically, a performance-oriented governor will fire the clocks all the way up, burning more power. But when idling, even a performance governor lets the kernel issue the HALT instruction, which pushes the CPU into the C1 state. After some more idling, the kernel enters the C2 state, and here the magic happens: CPUs from Nehalem (or Bulldozer, for AMD) onward internally remap the C2 state to C6, dropping the voltage to 0. So even if the power governor left the CPU at its maximum clock (say, 3 GHz), the C6 state effectively overrides it, bringing frequency and voltage to 0. Some processors/PCUs are even more aggressive, remapping C1E (which is entered automatically after some time in C1, before the kernel switches to C2) to C6. In a nutshell: a high-performance power governor lets the CPU run at maximum speed, but modern CPUs automatically shut themselves down when possible. This means a performance governor gives you high speed AND reasonable power consumption.
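You can observe this yourself on Windows with the idle-state residency performance counters. A sketch using `typeperf`; the counter names below map to the ACPI C-state numbering (so hardware C6 residency typically shows up under the deepest counter your platform reports), and availability can vary by Windows build:

```shell
# Sample C-state residency once per second for 30 seconds.
# High residency in the deeper states on an "idle" box means the
# cores are power-gating regardless of the chosen power plan.
typeperf "\Processor(_Total)\% C1 Time" "\Processor(_Total)\% C2 Time" "\Processor(_Total)\% C3 Time" -si 1 -sc 30
```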
On the other hand, a Balanced power governor will try to keep the CPU frequency at the lower bound (in order to exploit the P-state savings). While with old CPUs this was very reasonable, with modern CPUs it only saves marginal power. At the same time, you risk losing considerable performance due to how the governor asks for lower frequencies by default. Moreover, external links are generally slow to wake up after being put to sleep, so this is another speed-impairing risk of the balanced and conservative power options.
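If you are forced to keep the Balanced plan (for example by your hosting partner's policy), a common middle ground is raising its minimum processor state so the governor cannot park the clocks too low. A sketch using the built-in `powercfg` aliases, to be verified on a test machine first:

```shell
# Set "Minimum processor state" to 100% for the Balanced scheme (on AC power)
powercfg /setacvalueindex SCHEME_BALANCED SUB_PROCESSOR PROCTHROTTLEMIN 100

# Re-apply the scheme so the new value takes effect
powercfg /setactive SCHEME_BALANCED
```

This keeps the Balanced plan's other settings (disk, USB, link power management) while removing the frequency ramp-up penalty described above.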
For these very reasons, the Windows 2008 R2+ Balanced power profile only very mildly tries to save power; in many cases its behavior is comparable to the High performance one.
Some interesting reads:
On Windows Server 2016, when changing from Balanced to High performance mode I see a 50% increase in web server / ASP.NET performance (as measured in New Relic). That is big.
So I suggest never using Balanced mode on a dedicated server hosting solution.