Commit 5013473152d1ac9d44d787fb02edda845fdf2cb3

Authored by Andrea Arcangeli
Committed by Linus Torvalds
1 parent b16d3d5a52

mm: vmscan: check if we isolated a compound page during lumpy scan

Properly take into account whether we isolated a compound page during the
lumpy scan in reclaim, and skip over its tail pages when one is encountered.
This corrects the values given to the tracepoint for the number of lumpy
pages isolated and will avoid breaking the loop early when compound pages
smaller than the requested allocation size are encountered.

[mgorman@suse.de: Updated changelog]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Andy Isaacson <adi@hexapodia.org>
Cc: Nai Xia <nai.xia@gmail.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 1 changed file with 7 additions and 3 deletions

@@ -1196,13 +1196,17 @@
 				break;
 
 			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
+				unsigned int isolated_pages;
+
 				mem_cgroup_lru_del(cursor_page);
 				list_move(&cursor_page->lru, dst);
-				nr_taken += hpage_nr_pages(cursor_page);
-				nr_lumpy_taken++;
+				isolated_pages = hpage_nr_pages(cursor_page);
+				nr_taken += isolated_pages;
+				nr_lumpy_taken += isolated_pages;
 				if (PageDirty(cursor_page))
-					nr_lumpy_dirty++;
+					nr_lumpy_dirty += isolated_pages;
 				scan++;
+				pfn += isolated_pages - 1;
 			} else {
 				/*
 				 * Check if the page is freed already.
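For illustration only, a minimal userspace sketch (not kernel code) of why the
added "pfn += isolated_pages - 1" skips the tail pages of an isolated compound
page: the lumpy scan walks the block one pfn at a time, so crediting
hpage_nr_pages() worth of pages while still stepping by a single pfn would
revisit every tail page and distort the isolation counters.  The toy_page,
pages_spanned() and zone[] names below are hypothetical stand-ins for the real
struct page, hpage_nr_pages() and pfn_to_page() machinery.

/*
 * Hypothetical userspace model: a head page of order N is followed by
 * (1 << N) - 1 tail pages, mirroring how pfn_to_page() would walk a
 * compound page one base page at a time.
 */
#include <stdio.h>

struct toy_page {
	int order;		/* meaningful for head pages only */
	int is_tail;		/* 1 for tail pages of a compound page */
};

/* Rough analogue of hpage_nr_pages(): base pages spanned by a head page. */
static unsigned int pages_spanned(const struct toy_page *p)
{
	return 1u << p->order;
}

int main(void)
{
	/* pfns 0..7: an order-2 compound page at pfn 1 (tails at 2..4). */
	struct toy_page zone[8] = {
		{0, 0}, {2, 0}, {0, 1}, {0, 1}, {0, 1}, {0, 0}, {0, 0}, {0, 0},
	};
	unsigned long pfn, end_pfn = 8;
	unsigned int nr_taken = 0, nr_lumpy_taken = 0;

	for (pfn = 0; pfn < end_pfn; pfn++) {
		const struct toy_page *cursor_page = &zone[pfn];
		unsigned int isolated_pages;

		/* With the skip below, tail pages are never visited. */
		if (cursor_page->is_tail)
			continue;

		/* "Isolate" the page, crediting every base page it covers. */
		isolated_pages = pages_spanned(cursor_page);
		nr_taken += isolated_pages;
		nr_lumpy_taken += isolated_pages;

		/* The fix: jump over the tail pages; the loop's pfn++ takes the last step. */
		pfn += isolated_pages - 1;
	}

	/* Prints nr_taken=8 nr_lumpy_taken=8: all 8 base pages, counted once. */
	printf("nr_taken=%u nr_lumpy_taken=%u\n", nr_taken, nr_lumpy_taken);
	return 0;
}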