Commit 2ab051e11bfa3cbb7b24177f3d6aaed10a0d743e

Authored by Jerome Marchand
Committed by Linus Torvalds
1 parent 7c0db9e917

memcg, vmscan: Fix forced scan of anonymous pages

When memory cgroups are enabled, the code in get_scan_count() that decides
whether to force a scan of anonymous pages compares global values (free
pages, the high watermark) against a value that is restricted to a single
memory cgroup (its file pages).  This makes the code over-eager to force
an anon scan.
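
For reference, here is the shape of the pre-patch logic, excerpted and
lightly annotated from the diff below (an excerpt, not a standalone,
compilable unit):

	/* this memcg's file pages only */
	file = get_lru_size(lruvec, LRU_ACTIVE_FILE) +
		get_lru_size(lruvec, LRU_INACTIVE_FILE);

	if (global_reclaim(sc)) {
		/* zone-wide free pages and a zone-wide watermark ... */
		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);

		/* ... compared against a per-memcg file count */
		if (unlikely(file + free <= high_wmark_pages(zone))) {
			scan_balance = SCAN_ANON;
			goto out;
		}
	}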

For instance, it will force an anon scan when scanning a memcg that is
mainly populated by anonymous pages, even when there are plenty of file
pages to reclaim in other memcgs, and even when swappiness == 0.  This
breaks the user's expectations about swappiness and hurts performance.

This patch makes sure that a forced anon scan only happens when there are
not enough file pages in the whole zone, not just in one random memcg.
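
A minimal userspace sketch of the difference (all numbers and variable
names here are made up for illustration; only the shape of the comparison
mirrors the kernel code):

	#include <stdio.h>

	int main(void)
	{
		unsigned long zone_free = 1000;       /* zone-wide free pages */
		unsigned long zone_high_wmark = 4000; /* zone-wide watermark */
		unsigned long memcg_file = 50;        /* file pages in this memcg only */
		unsigned long zone_file = 100000;     /* file pages in the whole zone */

		/* Old check: per-memcg file count vs zone-wide watermark. */
		if (memcg_file + zone_free <= zone_high_wmark)
			printf("old check: force anon scan (over-eager)\n");

		/* New check: zone-wide file count vs zone-wide watermark. */
		if (zone_file + zone_free <= zone_high_wmark)
			printf("new check: force anon scan\n");
		else
			printf("new check: no forced anon scan\n");

		return 0;
	}

With the old check, the anon-heavy memcg's small file count makes the
watermark comparison fire even though the zone still has 100,000
reclaimable file pages; the zone-wide check does not.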

[hannes@cmpxchg.org: cleanups]
Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 1 changed file with 13 additions and 7 deletions

@@ -1930,11 +1930,6 @@
 		goto out;
 	}
 
-	anon = get_lru_size(lruvec, LRU_ACTIVE_ANON) +
-		get_lru_size(lruvec, LRU_INACTIVE_ANON);
-	file = get_lru_size(lruvec, LRU_ACTIVE_FILE) +
-		get_lru_size(lruvec, LRU_INACTIVE_FILE);
-
 	/*
 	 * Prevent the reclaimer from falling into the cache trap: as
 	 * cache pages start out inactive, every cache fault will tip
@@ -1945,9 +1940,14 @@
 	 * anon pages. Try to detect this based on file LRU size.
 	 */
 	if (global_reclaim(sc)) {
-		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+		unsigned long zonefile;
+		unsigned long zonefree;
 
-		if (unlikely(file + free <= high_wmark_pages(zone))) {
+		zonefree = zone_page_state(zone, NR_FREE_PAGES);
+		zonefile = zone_page_state(zone, NR_ACTIVE_FILE) +
+			   zone_page_state(zone, NR_INACTIVE_FILE);
+
+		if (unlikely(zonefile + zonefree <= high_wmark_pages(zone))) {
 			scan_balance = SCAN_ANON;
 			goto out;
 		}
@@ -1982,6 +1982,12 @@
 	 *
 	 * anon in [0], file in [1]
 	 */
+
+	anon = get_lru_size(lruvec, LRU_ACTIVE_ANON) +
+		get_lru_size(lruvec, LRU_INACTIVE_ANON);
+	file = get_lru_size(lruvec, LRU_ACTIVE_FILE) +
+		get_lru_size(lruvec, LRU_INACTIVE_FILE);
+
 	spin_lock_irq(&zone->lru_lock);
 	if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) {
 		reclaim_stat->recent_scanned[0] /= 2;