When is swap moved back to physical memory in Linux? Is it only on demand, i.e. when it's needed? Or is swap slowly transferred back to physical memory when the computer is not under high load?
During regular operation, data from swap is loaded into memory on demand, as others have answered, but there is one more case when this happens: when swap space is disabled, provided there is enough physical memory to hold the whole swap content.
Just do:
swapoff -a
…and all your swap data will 'come back' into memory. The side effect is that disk buffers/caches may get flushed to make room for the swapped-in pages.
Sometimes it may be desirable to do swapoff -a ; swapon -a, e.g. after some buggy, memory-leaking process pushed more important processes out to swap before it crashed – to make sure every process running on the system is back in physical memory and won't be left waiting on swap a few minutes later.
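Since this only works when the data in swap actually fits into free RAM, it can be worth checking first. A minimal sketch of such a check in C (it assumes the MemAvailable, SwapTotal and SwapFree fields of /proc/meminfo, which are present on reasonably recent kernels and are reported in kB):

#include <stdio.h>
#include <string.h>

/* Return the value (in kB) of a /proc/meminfo field such as "SwapFree:". */
static long meminfo_kb(const char *key)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long kb = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, key, strlen(key)) == 0) {
            sscanf(line + strlen(key), "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(void)
{
    long avail = meminfo_kb("MemAvailable:");
    long swap_used = meminfo_kb("SwapTotal:") - meminfo_kb("SwapFree:");

    if (avail < 0) {
        fprintf(stderr, "could not read /proc/meminfo\n");
        return 1;
    }
    if (avail > swap_used)
        printf("swapoff -a looks safe: %ld kB available, %ld kB in swap\n",
               avail, swap_used);
    else
        printf("not enough free RAM to absorb swap (%ld kB available < %ld kB in swap)\n",
               avail, swap_used);
    return 0;
}

If the check passes, swapoff -a will pull everything back in; otherwise swapoff can fail partway through once free memory runs out.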
As others have pointed out, pages will only be copied back into RAM when needed (on demand) instead of taking up RAM that might be better left available for cache/buffers.
The fact that the pages are copied back into RAM, not moved, is important and can lead to confusion if you are not aware of it. The page will not be deallocated from swap unless it is no longer needed at all (i.e. the page is deallocated completely), is changed in RAM (so the copy in swap is no longer correct), or swap is running low (and the on-disk blocks are needed to swap some other pages out). This way, if the page needs to be swapped out again in the future, no disk write is needed, as the kernel knows there is already a good copy on disk – this can greatly reduce "thrashing" when available RAM becomes critically low but swap space is not also congested.
You can see how many pages are currently in both RAM and swap from cat /proc/meminfo - the SwapCached line is the amount of data held in pages that are currently both in RAM and on disk. If you think your current swap use is higher than you expect, check the SwapCached value, as this may well explain the discrepancy.
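Both values are plain-text files under /proc, so they are easy to read programmatically as well. A small sketch (it assumes the SwapCached field of /proc/meminfo and the per-process VmSwap field of /proc/self/status, which exist on reasonably recent kernels):

#include <stdio.h>
#include <string.h>

/* Print every line of `path` that starts with `key` (like a tiny grep). */
static void print_field(const char *path, const char *key)
{
    FILE *f = fopen(path, "r");
    char line[256];

    if (!f)
        return;
    while (fgets(line, sizeof line, f))
        if (strncmp(line, key, strlen(key)) == 0)
            fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    print_field("/proc/meminfo", "SwapCached:");  /* pages resident in both RAM and swap */
    print_field("/proc/self/status", "VmSwap:");  /* swap used by this process alone */
    return 0;
}

The per-process VmSwap figure is handy when you want to know which process is responsible for the swap usage you are seeing.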
This is typically bound to the hardware you're using. On most hardware (including Intel) the MMU controls the whole process.
When a program allocates memory, it will request it from the MMU and get back a virtual address. In turn, the MMU will register that page as being "in use" in the global address space map.
When the program actually accesses that memory space, the MMU looks the page up in the address map. If the page is in "live" memory, it hands back a "live" pointer to the OS, which handles the memory read/write on behalf of the program. If the memory isn't currently resident, it triggers a page fault. This processor exception is caught by the OS, which is then responsible for figuring out where the data is in the swap file, loading it into physical memory and handing the page back to the MMU so that the initial process can continue.
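This fault-on-access behaviour is easy to observe from user space. The sketch below is plain Linux/POSIX C and is not specific to swap-in (it counts minor faults rather than major ones), but it shows the same demand-paging mechanism: the anonymous mapping reserves address space immediately, while physical pages are only faulted in as each page is first touched:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

static long minor_faults(void)
{
    struct rusage ru;

    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    size_t len = 64 * 1024 * 1024;   /* 64 MiB of anonymous memory */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (p == MAP_FAILED)
        return 1;

    long before = minor_faults();
    memset(p, 0xAA, len);            /* touch every page for the first time */
    long after = minor_faults();

    printf("minor page faults while touching %zu MiB: %ld\n",
           len >> 20, after - before);
    munmap(p, len);
    return 0;
}

With 4 KiB pages you would expect roughly 16k faults for 64 MiB, one per page touched (fewer if transparent huge pages kick in).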
This means that, unless the memory page is accessed, it will never get back into "live" memory once put into swap. That is why there is usually an OS API that allows programs to specify that a particular memory block is NOT to be swapped to disk and should be kept in memory (I don't know about Linux, but in Windows, it's the VirtualLock function).
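On Linux that API is mlock()/mlockall() from <sys/mman.h>. A small sketch of pinning a buffer so it is never written to swap (unprivileged processes are limited by RLIMIT_MEMLOCK, so the call can fail and should be checked):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;                 /* e.g. a page holding key material */
    unsigned char *secret = malloc(len);

    if (!secret)
        return 1;
    if (mlock(secret, len) != 0) {     /* pin: these pages can no longer be swapped out */
        perror("mlock");
        free(secret);
        return 1;
    }

    /* ... work with the locked memory ... */
    memset(secret, 0, len);

    munlock(secret, len);              /* allow swapping again */
    free(secret);
    return 0;
}

This is commonly used for cryptographic keys and other data that must never end up on disk.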
On demand. In fact, Linux will slowly transfer physical memory to swap when idle (see: "swappiness").