Commit 49e068f0b73dd042c186ffa9b420a9943e90389a

Authored by Vlastimil Babka
Committed by Linus Torvalds
1 parent 0e3b7e5402

mm/compaction: make isolate_freepages start at pageblock boundary

The compaction freepage scanner implementation in isolate_freepages()
starts by taking the current cc->free_pfn value as the first pfn.  In a
for loop, it scans from this first pfn to the end of the pageblock, and
then subtracts pageblock_nr_pages from the first pfn to obtain the first
pfn for the next for loop iteration.

This means that when cc->free_pfn starts at offset X rather than being
aligned on a pageblock boundary, the scanner will start at offset X in all
scanned pageblocks, ignoring potentially many free pages.  Currently this
can happen when

 a) zone's end pfn is not pageblock aligned, or

 b) through zone->compact_cached_free_pfn with CONFIG_HOLES_IN_ZONE
    enabled and a hole spanning the beginning of a pageblock

This patch fixes the problem by aligning the initial pfn in
isolate_freepages() to pageblock boundary.  This also permits replacing
the end-of-pageblock alignment within the for loop with a simple
pageblock_nr_pages increment.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Heesub Shin <heesub.shin@samsung.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Christoph Lameter <cl@linux.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Dongjun Shin <d.j.shin@samsung.com>
Cc: Sunghwan Yun <sunghwan.yun@samsung.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 1 changed file with 12 additions and 10 deletions

--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -671,16 +671,20 @@
 				struct compact_control *cc)
 {
 	struct page *page;
-	unsigned long high_pfn, low_pfn, pfn, z_end_pfn, end_pfn;
+	unsigned long high_pfn, low_pfn, pfn, z_end_pfn;
 	int nr_freepages = cc->nr_freepages;
 	struct list_head *freelist = &cc->freepages;
 
 	/*
 	 * Initialise the free scanner. The starting point is where we last
-	 * scanned from (or the end of the zone if starting). The low point
-	 * is the end of the pageblock the migration scanner is using.
+	 * successfully isolated from, zone-cached value, or the end of the
+	 * zone when isolating for the first time. We need this aligned to
+	 * the pageblock boundary, because we do pfn -= pageblock_nr_pages
+	 * in the for loop.
+	 * The low boundary is the end of the pageblock the migration scanner
+	 * is using.
 	 */
-	pfn = cc->free_pfn;
+	pfn = cc->free_pfn & ~(pageblock_nr_pages-1);
 	low_pfn = ALIGN(cc->migrate_pfn + 1, pageblock_nr_pages);
 
 	/*
@@ -700,6 +704,7 @@
 	for (; pfn >= low_pfn && cc->nr_migratepages > nr_freepages;
 					pfn -= pageblock_nr_pages) {
 		unsigned long isolated;
+		unsigned long end_pfn;
 
 		/*
 		 * This can iterate a massively long zone without finding any
@@ -734,13 +739,10 @@
 		isolated = 0;
 
 		/*
-		 * As pfn may not start aligned, pfn+pageblock_nr_page
-		 * may cross a MAX_ORDER_NR_PAGES boundary and miss
-		 * a pfn_valid check. Ensure isolate_freepages_block()
-		 * only scans within a pageblock
+		 * Take care when isolating in last pageblock of a zone which
+		 * ends in the middle of a pageblock.
 		 */
-		end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
-		end_pfn = min(end_pfn, z_end_pfn);
+		end_pfn = min(pfn + pageblock_nr_pages, z_end_pfn);
 		isolated = isolate_freepages_block(cc, pfn, end_pfn,
 							freelist, false);
 		nr_freepages += isolated;