I have a (sparc) Solaris 10 server with 16G of RAM. There are over 4G free.
Memory: 16G phys mem, 4371M free mem, 8193M swap, 8193M free swap
I am running a lot of Java processes (using the 32-bit JVM, since none of them needs much memory) and want to run another one, but it claims to be out of memory:
# /usr/jdk/jdk1.6.0_17/bin/java -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
I tried running with a reduced maximum heap size (-Xmx), then gradually raised the ceiling until it was very high indeed. How much should it be trying to allocate without the -Xmx flag? According to this page, I wouldn't expect it to use more than 1G, and yet I can go to more than three times that without error:
# /usr/jdk/jdk1.6.0_17/bin/java -Xmx3900m -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) Server VM (build 14.3-b01, mixed mode)
If I raise it above that level, then I start to get other errors, but I would expect that since I am approaching the 4G limit of address space for a 32-bit process anyway.
What could possibly be happening here, and how can I diagnose it myself?
Edit: most of the Java processes are running as different users (no more than 10 per user), but note that in the example above I am trying to launch the new process (merely 'java -version') as root.
# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29995
virtual memory (kbytes, -v) unlimited
You are clearly running out of swap space. The fact that you still have free RAM is unrelated: Solaris doesn't overcommit memory, so every reservation must be backed by virtual memory.
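For a sense of scale (illustrative numbers only): twenty 32-bit JVMs each running with a 256m committed heap tie up roughly 5 GB of virtual memory between them, before counting permgen, code caches and thread stacks, and regardless of whether most of those pages ever end up in RAM.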
Have a look at
swap -s
output to get information about virtual memory (a.k.a. swap) usage.

I ran into a similar problem on a Solaris 10 box that had plenty of physical memory, with a project that allowed my application user to open as many processes and use as much memory as it wanted (the plan was to implement resource control at the application configuration level). It turned out that Java was unable to allocate a heap larger than the space available in /tmp, which on my setup was mounted out of "swap" and fluctuated, sometimes allowing larger heaps and at other times restricting them to less than about 100m. I have not resolved this yet, but through trial and error (trying initial heap sizes just above or below the space available in /tmp) I am fairly sure the next step is to bring stability to the /tmp mountpoint.
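Both symptoms come from the same pool: on Solaris, /tmp is by default a swap-backed tmpfs, so heap reservations and /tmp usage compete for the same virtual memory. A quick way to watch both sides (commands only; interpret the numbers on your own system) is that swap -s summarizes reservations, swap -l lists the physical swap devices, and df shows how much of /tmp is left:
swap -s
swap -l
df -h /tmp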
You may have limits on per-process memory usage. Limits set with ulimit are inherited by child processes, so if the JVMs were spawned from a shell that had a restrictive data segment or virtual memory limit, every JVM started from it will carry that same limit (if a limit of this sort is set at all).
Try typing ulimit -a to see what limits you have in force.
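Since the limits in effect can differ per user, it can also help to inspect one of the already-running JVMs directly; plimit prints the resource limits of a live process (the user name below is just a placeholder):
plimit $(pgrep -u someuser java)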
To get some more detail, you can start java under truss:
truss java -version
(the DTrace-based dtruss is another nice option). That will show you all the syscalls that java executes. The most interesting ones will occur towards the end, just before the calls that print your error message. If it's an mmap() failing with ENOMEM, I'd look into your memory situation again -- can Solaris be affected by memory fragmentation?
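If the full trace is too noisy, one way to narrow it down (a sketch; the JDK path and the syscall filter are just guesses at what is relevant here) is to trace only the memory-related calls and look at the tail of the output:
truss -f -t brk,mmap /usr/jdk/jdk1.6.0_17/bin/java -version 2>&1 | tail -40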
My hunch is that it's open files; if you're running several other processes or daemons as root, you might be close to the default limit of 256 shown in your ulimit output.
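If you want to test that hunch, pfiles will list the descriptors a running process currently has open, which you can compare against that limit (the pid below is a placeholder):
pfiles 12345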
Solaris invokes a 32-bit JVM by default; if you pass -d64 as the first JVM argument it will invoke the sparcv9 (64-bit) version of the JVM.
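For example (assuming the 64-bit JVM packages are installed alongside this JDK):
/usr/jdk/jdk1.6.0_17/bin/java -d64 -version
A 64-bit JVM sidesteps the 4G address-space ceiling of a 32-bit process, at the cost of a somewhat larger footprint per process.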