Split the anonymous and file backed pages out onto their own pageout queues, so that we do not unnecessarily churn through lots of anonymous pages when we do not want to swap them out anyway. This should (with additional tuning) be a great step forward in scalability, allowing Linux to run well on very large systems where scanning through the anonymous memory (on our way to the page cache memory we do want to evict) is slowing systems down significantly.

This patch has been stress tested and seems to work, but has not been fine tuned or benchmarked yet. For now the swappiness parameter can be used to tweak swap aggressiveness up and down as desired, but in the long run we may want to simply measure the IO cost of page cache and anonymous memory and auto-adjust.

We apply pressure to each set of pageout queues based on:
- the size of each queue
- the fraction of recently referenced pages in each queue,
  not counting used-once file pages
- swappiness (file IO is more efficient than swap IO)

Please take this patch for a spin and let me know what goes well and what goes wrong. More info on the patch can be found at:

	http://linux-mm.org/PageReplacementDesign

Signed-off-by: Rik van Riel

Changelog:
- Fix page_anon() to really put all the file pages on the file list.
- Fix get_scan_ratio() to return more stable numbers, by properly
  keeping track of the scanned anon and file pages.