This is an old question that I've seen from time to time. My understanding of it is rather limited (having read about the differences a long time ago, but the factoid(s) involved never really stuck).
As I understand it,
Buffers
Are used by programs with active I/O operations, i.e. data waiting to be written to disk
Cache
Is the result of completed I/O operations, i.e. buffers that have been flushed or data read from disk to satisfy a request.
Can I get a clear explanation for posterity?
The "cached" total will also include some other memory allocations, such as any tmpfs filesystems. To see this in effect, try copying a large file to a RAM-based filesystem and watching free before and after.
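For example — a minimal sketch, assuming /dev/shm is a tmpfs mount (true on most modern distributions); the file name is arbitrary:

```shell
# Pick a tmpfs mount (assumption: /dev/shm is tmpfs on most distributions)
T=/dev/shm; [ -d "$T" ] || T=/tmp

free -m                                  # note the cache figure ("cached" or
                                         # "buff/cache" depending on free version)
dd if=/dev/zero of="$T/cachedemo" bs=1M count=100 2>/dev/null
free -m                                  # the cache figure is roughly 100MB higher
rm -f "$T/cachedemo"                     # delete the file and it drops back
```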
and you will see the "cache" value rise by the 100MB you copied to the RAM-based filesystem (and fall again when that file is deleted), assuming there was enough free RAM; you might find some of it ended up in swap if the machine is already over-committed on memory. Running "sync; echo 3 > /proc/sys/vm/drop_caches" before each call to free writes out anything pending in the write buffers (the sync) and drops all cached/buffered disk blocks from memory, so free will then only be reporting other allocations in the "cached" value.
The RAM used by virtual machines (such as those running under VMWare) may also be counted in free's "cached" value, as will RAM used by currently open memory-mapped files (this will vary depending on the hypervisor/version you are using and possibly between kernel versions too).
So it isn't as simple as "buffers counts pending file/network writes and cached counts recently read/written blocks held in RAM to save future physical reads", though for most purposes this simpler description will do.
Tricky question. When you calculate free memory, you actually need to add up both buffers and cache. This is what I could find:
A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use.
http://visualbasic.ittoolbox.com/documents/difference-between-buffer-and-cache-12135
I was looking for a clearer description of buffers, and I found one in
"Professional Linux® Kernel Architecture 2008"
Explained by RedHat:
Cache Pages:
A cache is the part of memory which transparently stores data so that future requests for that data can be served faster. This memory is utilized by the kernel to cache disk data and improve I/O performance.
The Linux kernel is built to use as much RAM as it can to cache information from your local and remote filesystems and disks. As time passes and various reads and writes are performed, the kernel tries to keep in memory the data used by the processes running on the system, or relevant data likely to be used in the near future. The cache is not reclaimed when a process stops or exits; however, when other processes require more memory than is freely available, the kernel runs heuristics to reclaim memory by evicting cache data and allocating that memory to the new process.
When any kind of file/data is requested, the kernel will look for a copy of the part of the file the user is acting on and, if no such copy exists, allocate one new page of cache memory and fill it with the appropriate contents read from disk.
The data stored within a cache might be values that were computed earlier, or duplicates of original values stored elsewhere on disk. When some data is requested, the cache is first checked to see whether it contains that data, since data can be retrieved more quickly from the cache than from its original source.
SysV shared memory segments are also accounted as cache, though they do not represent any data on disk. One can check the size of the shared memory segments using the ipcs -m command and checking the bytes column.
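To see how much memory those segments account for in total, you can sum that column — a small sketch (ipcs is part of util-linux and should be available on most systems):

```shell
# Sum the "bytes" column of all SysV shared memory segments;
# rows are matched by requiring field 5 to be purely numeric
ipcs -m | awk '$5 ~ /^[0-9]+$/ { sum += $5 }
               END { print sum + 0, "bytes in shared memory segments" }'
```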
Buffers:
Buffers are the disk-block representation of the data stored in the page cache. Buffers contain the metadata of the files/data residing in the page cache. Example: when data that is present in the page cache is requested, the kernel first checks the buffers, which contain the metadata pointing to the actual files/data in the page cache. Once the actual block address of the file is known from the metadata, the kernel picks it up for processing.
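The counters behind free's "buffers" and "cached" columns can be read directly from /proc/meminfo, which is where free gets them:

```shell
# Raw kernel counters; Buffers and Cached are what free(1) reports,
# Dirty/Writeback show data still waiting to be written to disk
grep -E '^(Buffers|Cached|SwapCached|Dirty|Writeback):' /proc/meminfo
```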
Freeing buffer/cache
Warning: this explains an aggressive method that is not recommended on a production server! You've been warned; don't blame me if something goes wrong.
Preamble

To understand this, you can force your system to delegate as much memory as possible to the cache, then drop the cached file. Before doing the test, you can open another window and run a monitoring command to follow the evolution of swap in real time.
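The monitoring command itself did not survive in this copy; one plausible stand-in (my guess, not necessarily what the author ran) polls memory and swap usage once per second:

```shell
# Bounded to 5 samples so it terminates; interactively you would use
# something like: watch -n1 free -m
for i in 1 2 3 4 5; do
    free -m | grep -E '^(Mem|Swap):'
    sleep 1
done
```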
Note: you must have as much free disk space in the current directory as you have mem+swap.
The demo

Note: the host on which I did this is heavily used; the effect will be more pronounced on a really quiet machine.
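The demo itself is missing from this copy; a sketch of what it presumably looked like (assumptions: run as root for drop_caches, enough free disk in the current directory, file name and size arbitrary):

```shell
cd /tmp
free -m                                   # baseline
dd if=/dev/zero of=cachedemo bs=1M count=200 2>/dev/null
free -m                                   # cache figure has grown by ~200MB
sync                                      # flush anything pending to disk
# drop_caches needs root; guarded so the script still runs unprivileged
[ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
free -m                                   # cache figure shrinks back (as root)
rm -f cachedemo
```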