Note: The Linux kernel frees memory caches and buffers as needed, so there is no need to induce a cache flush outside of specific troubleshooting situations. Also note that this procedure should only be done for debugging, diagnostics, and benchmarks--never under normal operating circumstances. In addition, although this procedure should not cause the operating system, kernel, or processes to crash, hardware issues could become apparent as increased load is placed on the storage device(s) and as the system works to rebuild the caches.
Memory utilization will not necessarily drop significantly, even if you completely stop processes on the system. This is because most processes are made up mainly of mapped files, which in turn are backed by the system cache. Stopping an application like Apache will release the small amount of memory that each child process uses for bookkeeping, but the content that Apache serves is still in memory in the cache.
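To see how much memory the caches are holding on a given system, you can read the standard /proc/meminfo interface directly. A quick sketch (field names are the stock kernel ones; values are reported in kB):

```shell
# Show total, free, and cached memory.
# Buffers and Cached together approximate what drop_caches can reclaim.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```

Comparing these figures before and after a cache drop makes the effect visible.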
To release the caches, write to the Linux kernel's drop_caches knob. Before doing so, run the sync command to ensure that all "dirty" pages are written to disk and that the caches contain as few dirty pages as possible, since dirty pages cannot be dropped.
To use the drop_caches facility, write the desired numerical value to /proc/sys/vm/drop_caches via echo, for example:
echo 3 > /proc/sys/vm/drop_caches
1 - Free the page cache. This holds file data and other data cached from various sources; this value typically yields the largest gains.
2 - Free the inode and dentry caches. This is metadata about files and directory entries, and it can grow large on file systems holding a very large number of files.
3 - Combine both 1 and 2 at the same time. This is usually the most effective use of the drop_caches facility.
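Putting the steps together, a minimal sketch of the whole procedure (must be run as root; it uses only the sync command and the /proc path described above):

```shell
# Flush dirty pages to disk first so drop_caches can release
# as much memory as possible...
sync
# ...then drop the page cache plus the dentry and inode caches (value 3).
echo 3 > /proc/sys/vm/drop_caches
```

Running sync first matters: drop_caches only frees clean cache entries, so skipping it leaves dirty pages in place.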
Once you have done this, the caches will begin to repopulate immediately. All the memory-mapped files will "fill in" the caches as soon as the processes start doing work; code segments will be the number one occupant here. Filesystem access will be slow until the inode and dentry caches get a chance to repopulate, and other sluggish behavior will be noticeable during this time as well. Eventually, depending on load, the caches will fill back up and more than likely will reach the same sizes as before.
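One way to watch the repopulation described above is to sample the Cached figure from /proc/meminfo over time. A simple sketch (the one-second interval and three samples are arbitrary choices):

```shell
# Print the Cached value once per second; after a drop, the number
# should climb back up as processes touch files again.
for i in 1 2 3; do
    grep '^Cached:' /proc/meminfo
    sleep 1
done
```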