Commit 04bab05a95fece32015d897d4058880bbb5c65eb

Authored by Jerome Marchand
Committed by Jiri Slaby
1 parent 788a2f69f9

memcg, vmscan: Fix forced scan of anonymous pages

commit 2ab051e11bfa3cbb7b24177f3d6aaed10a0d743e upstream.

When memory cgroups are enabled, the code that decides to force a scan of
anonymous pages in get_scan_count() compares global values (free,
high_watermark) to a value that is restricted to a memory cgroup (file).
This makes the code over-eager to force an anon scan.

For instance, it will force an anon scan when scanning a memcg that is
mainly populated by anonymous pages, even when there are plenty of file
pages to get rid of in other memcgs, and even when swappiness == 0.  It
breaks the user's expectations about swappiness and hurts performance.

This patch makes sure that a forced anon scan only happens when there are
not enough file pages for the whole zone, not just in one random memcg.
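
For illustration, here are the old and new checks condensed side by side (a
sketch distilled from the diff below, with the surrounding get_scan_count()
code elided; "file" is the memcg-local LRU size, while "zonefile" and
"zonefree" are zone-wide counters):

    /* Before: memcg-local "file" is compared against zone-wide values */
    free = zone_page_state(zone, NR_FREE_PAGES);
    if (unlikely(file + free <= high_wmark_pages(zone))) {
            scan_balance = SCAN_ANON;
            goto out;
    }

    /* After: both sides of the comparison are zone-wide */
    zonefree = zone_page_state(zone, NR_FREE_PAGES);
    zonefile = zone_page_state(zone, NR_ACTIVE_FILE) +
               zone_page_state(zone, NR_INACTIVE_FILE);
    if (unlikely(zonefile + zonefree <= high_wmark_pages(zone))) {
            scan_balance = SCAN_ANON;
            goto out;
    }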

[hannes@cmpxchg.org: cleanups]
Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>

Showing 1 changed file with 15 additions and 8 deletions

@@ -1847,7 +1847,7 @@
 	struct zone *zone = lruvec_zone(lruvec);
 	unsigned long anon_prio, file_prio;
 	enum scan_balance scan_balance;
-	unsigned long anon, file, free;
+	unsigned long anon, file;
 	bool force_scan = false;
 	unsigned long ap, fp;
 	enum lru_list lru;
@@ -1895,11 +1895,6 @@
 		goto out;
 	}
 
-	anon = get_lru_size(lruvec, LRU_ACTIVE_ANON) +
-		get_lru_size(lruvec, LRU_INACTIVE_ANON);
-	file = get_lru_size(lruvec, LRU_ACTIVE_FILE) +
-		get_lru_size(lruvec, LRU_INACTIVE_FILE);
-
 	/*
 	 * If it's foreseeable that reclaiming the file cache won't be
 	 * enough to get the zone back into a desirable shape, we have
@@ -1907,8 +1902,14 @@
 	 * thrashing - remaining file pages alone.
 	 */
 	if (global_reclaim(sc)) {
-		free = zone_page_state(zone, NR_FREE_PAGES);
-		if (unlikely(file + free <= high_wmark_pages(zone))) {
+		unsigned long zonefile;
+		unsigned long zonefree;
+
+		zonefree = zone_page_state(zone, NR_FREE_PAGES);
+		zonefile = zone_page_state(zone, NR_ACTIVE_FILE) +
+			   zone_page_state(zone, NR_INACTIVE_FILE);
+
+		if (unlikely(zonefile + zonefree <= high_wmark_pages(zone))) {
 			scan_balance = SCAN_ANON;
 			goto out;
 		}
@@ -1943,6 +1944,12 @@
 	 *
 	 * anon in [0], file in [1]
 	 */
+
+	anon = get_lru_size(lruvec, LRU_ACTIVE_ANON) +
+		get_lru_size(lruvec, LRU_INACTIVE_ANON);
+	file = get_lru_size(lruvec, LRU_ACTIVE_FILE) +
+		get_lru_size(lruvec, LRU_INACTIVE_FILE);
+
 	spin_lock_irq(&zone->lru_lock);
 	if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) {
 		reclaim_stat->recent_scanned[0] /= 2;