I've had a test CentOS KVM host running for about a week now with 3 CentOS guests. It has 12GB of physical RAM, with about 7.5GB actually allocated to VMs. These VMs aren't even being used yet, as the server is still in the testing stage, but I've noticed that swap usage has been climbing over the last 24-48 hours. Now it looks like it's exhausted.
Here's the output of free:
# free -m
             total       used       free     shared    buffers     cached
Mem:         11905      11749        155          0         81       4632
-/+ buffers/cache:       7035       4869
Swap:         2047       2047          0
So as you can see, the physical memory is all used, but much of it is cache (the -/+ buffers/cache line shows applications actually hold about 7GB), which I believe is generally fine, as the cache will be released if an application needs the memory.
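As a sanity check, the cache can be dropped by hand to confirm it really is reclaimable (safe, though performance dips until the cache warms back up):

sync                                # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
free -m                             # the cached figure should now be much smaller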
I ran the shell script found in this answer, which listed the 3 qemu-kvm processes.
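For reference, a script like that essentially sums the Swap: fields in /proc/<pid>/smaps. A minimal sketch along those lines (my reconstruction, not the exact script from that answer; run it as root so every smaps file is readable):

for dir in /proc/[0-9]*; do
    pid=${dir##*/}
    # sum this process's Swap: lines (values are in kB)
    swap=$(awk '/^Swap:/ { s += $2 } END { print s + 0 }' "$dir/smaps" 2>/dev/null)
    if [ "${swap:-0}" -gt 0 ]; then
        printf '%8s kB  pid %-6s %s\n' "$swap" "$pid" "$(tr '\0' ' ' < "$dir/cmdline")"
    fi
done | sort -rn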
The server was provisioned for me with only a 2GB logical volume allocated for swap; I usually like to match swap to physical memory, up to 8GB.
Is it worth expanding the swap logical volume or adding a separate swap volume?
Is this common with KVM? It's not something I've seen on other KVM hosts, so is there a particular setting I need to adjust?
Any other comments/suggestions?
This is normal. Pages belonging to idle VMs get swapped out and the freed memory is used for cache instead. You could set swappiness to zero, which may prevent swap being used like this, at the cost of some performance (a smaller cache).
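swappiness is just a sysctl; a minimal sketch of checking and changing it on a CentOS host:

cat /proc/sys/vm/swappiness                     # check the current value (default is usually 60)
sysctl vm.swappiness=0                          # change it for the running system
echo "vm.swappiness = 0" >> /etc/sysctl.conf    # make it survive a reboot

Note that even at zero the kernel can still swap under real memory pressure; it just strongly prefers dropping cache first.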
Here is Red Hat's recommendation (with many YMMVs attached). Note that their example does not take into account kernel same-page merging (KSM), which will reduce the amount of memory actually used.
Here RH says 4GB of swap for the host, but here they recommend 12GB * 0.5 = 6GB of swap in your case.
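If you do decide to grow the 2GB swap LV to that 6GB, a minimal sketch, assuming the LV is /dev/vg0/swap (the VG/LV names here are hypothetical) and the volume group has free extents:

swapoff /dev/vg0/swap          # stop swapping to the LV (pages move back to RAM)
lvextend -L 6G /dev/vg0/swap   # grow the LV to 6GB total
mkswap /dev/vg0/swap           # rewrite the swap signature for the new size
swapon /dev/vg0/swap           # start using it again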
If you are new to KVM I recommend reading IBM's Best Practices for KVM document.
In my experience, no, it's not worth assigning extra swap. You'd be using swap (disk) as RAM, and RAM as cache (a shortcut for disk), which sounds mostly counter-productive. Typically on a system with 8GB of RAM I'll assign 1-2GB of swap, usually from the remainder of a RAID partition scheme. Example:
3x 64GB SSD (OS, software RAID 0)
2x 1TB SATA (data, RAID 1)
I'll set aside 1GB on the first SSD for /boot; after I partition the other drives identically, that leaves 2x 1GB that would otherwise be wasted, so those become swap. That effectively puts swap on the RAID 0 set, and it usually doesn't get accessed anyway.
As for KVM, I'd look into bug reports as well as troubleshoot the individual guest machines. Could there be a memory leak in either?
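One way to check the guests from the host is to watch per-domain memory stats over time; a sketch using libvirt's virsh (the fields dommemstat reports vary with your libvirt/QEMU versions, and balloon stats may be missing on older ones):

for dom in $(virsh list --name); do
    echo "== $dom =="
    virsh dommemstat "$dom"
done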