Commit 6619741f17f541113a02c30f22a9ca22e32c9546

Authored by Johannes Weiner
Committed by Greg Kroah-Hartman
1 parent 7adcd47267

mm: page_alloc: fix zone allocation fairness on UP

commit abe5f972912d086c080be4bde67750630b6fb38b upstream.

The zone allocation batches can easily underflow due to higher-order
allocations or spills to remote nodes.  On SMP that's fine, because
underflows are expected from concurrency and dealt with by returning 0.
But on UP, zone_page_state will just return a wrapped unsigned long,
which will get past the <= 0 check and then consider the zone eligible
until its watermarks are hit.

Commit 3a025760fc15 ("mm: page_alloc: spill to remote nodes before
waking kswapd") already made the counter-resetting use
atomic_long_read() to accommodate underflows from remote spills, but it
didn't go all the way with it.

Make it clear that these batches are expected to go negative regardless
of concurrency, and use atomic_long_read() everywhere.

Fixes: 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Leon Romanovsky <leon@leon.nu>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: <stable@vger.kernel.org>	[3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Showing 1 changed file with 3 additions and 4 deletions

@@ -1957,7 +1957,7 @@
 	if (alloc_flags & ALLOC_FAIR) {
 		if (!zone_local(preferred_zone, zone))
 			continue;
-		if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)
+		if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0)
 			continue;
 	}
 	/*
@@ -5670,9 +5670,8 @@
 	zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);

 	__mod_zone_page_state(zone, NR_ALLOC_BATCH,
-			      high_wmark_pages(zone) -
-			      low_wmark_pages(zone) -
-			      zone_page_state(zone, NR_ALLOC_BATCH));
+		high_wmark_pages(zone) - low_wmark_pages(zone) -
+		atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));

 	setup_zone_migrate_reserve(zone);
 	spin_unlock_irqrestore(&zone->lock, flags);