I am seeing a few of my services failing or crashing with errors along the lines of "Error allocating memory" or "Can't create new process".
I'm slightly confused by this, since the logs show that at the time the system had lots of free memory (around 26 GB in one case) and was not particularly stressed in any other way.
A JVM crash with a similar error, carrying the added question "Out of swap space?", made me dig a little deeper.
It turns out that someone has configured our zone with a 2 GB swap file. Our zone doesn't have capped memory and currently has access to as much of the machine's 128 GB of RAM as it needs. Our SAs are planning to cap this at 32 GB when they get the chance.
My current thinking is that while there is plenty of memory for the OS to allocate, the swap space is grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there is enough swap space in case things have to be swapped out (i.e. it is reserving the swap space up front).
Is this thinking right, or is there some other reason I would get memory allocation errors with this much memory free and such a seemingly undersized swap space?
Unlike some other OSes, which overcommit memory and then implement the obnoxious out-of-memory killer or an equivalent, Solaris doesn't overcommit (unless you use very specific allocation techniques, such as mmap with MAP_NORESERVE). When a regular memory allocation is made, the OS makes sure the memory will be available when it is actually required (i.e. it reserves it). The drawback is that you need enough virtual memory space to back this potentially partially unused memory.
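You can watch the reservation being taken the moment a process asks for memory, long before it touches the pages. A rough sketch of what that looks like (the figures below are invented for illustration, and the 24 GB JVM heap and jar name are just example values):

```
# Snapshot the virtual memory accounting before starting a big process
$ swap -s
total: 4194304k bytes allocated + 1048576k reserved = 5242880k used, 27262976k available

# Start a JVM that asks for a 24 GB heap up front
$ java -Xms24g -Xmx24g -jar myservice.jar &

# The 24 GB is reserved immediately, even though most of it is untouched;
# "available" drops by the full amount requested
$ swap -s
total: 4194304k bytes allocated + 26214400k reserved = 30408704k used, 2097152k available
```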
Free RAM alone is not the relevant figure, although it does count toward the virtual memory size: reservations are made against the pool of free RAM plus free swap, so an undersized swap area shrinks that pool.
Have a look at "swap -s" output when the problem occurs.
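Output like the following, captured while allocations are failing, would confirm the diagnosis (the numbers here are made up):

```
$ swap -s
total: 97296120k bytes allocated + 25432408k reserved = 122728528k used, 534512k available

# "available" is what remains of the reservable pool (free RAM + free swap).
# When it approaches zero, malloc() and fork() start failing even though
# vmstat still shows gigabytes of free physical memory.
```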
Note that you can easily increase the swap area by adding swap files or devices.
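On Solaris this is typically a mkfile plus swap -a, which may need to be done by your SAs; the path and size below are placeholders, and this applies to UFS filesystems (on ZFS you would add a zvol instead, since ZFS does not support swap files):

```
# Create a 16 GB swap file and activate it on the running system
mkfile 16g /export/swapfile1
swap -a /export/swapfile1

# Confirm the new swap area is in use
swap -l

# To survive a reboot, add a matching line to /etc/vfstab:
# /export/swapfile1  -  -  swap  -  no  -
```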