Commit 11bc82d67d1150767901bca54a24466621d763d7

Authored by Andrea Arcangeli
Committed by Linus Torvalds
1 parent b2eef8c0d0

mm: compaction: Use async migration for __GFP_NO_KSWAPD and enforce no writeback

__GFP_NO_KSWAPD allocations are usually very expensive and not mandatory
to succeed, as they have a graceful fallback.  Waiting for I/O in those
allocations tends to be overkill in terms of latency, so we can reduce
their latency by disabling sync migration.
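The allocator-side decision described above can be sketched as follows.  This is an illustrative userspace model, not kernel code: use_sync_migration() is a hypothetical helper, and the flag value is a stand-in for the real __GFP_NO_KSWAPD bit defined in include/linux/gfp.h.

```c
#include <stdbool.h>

/* Illustrative flag value only; the real __GFP_NO_KSWAPD definition
 * lives in include/linux/gfp.h and varies by kernel version. */
#define GFP_NO_KSWAPD_DEMO 0x400000u

/* After the first async compaction attempt fails, the allocator would
 * normally upgrade to sync migration.  A __GFP_NO_KSWAPD caller has a
 * graceful fallback and would rather fail fast than wait on I/O, so
 * for it the upgrade is suppressed and migration stays async. */
bool use_sync_migration(unsigned int gfp_mask)
{
	return !(gfp_mask & GFP_NO_KSWAPD_DEMO);
}
```

This mirrors the `sync_migration = !(gfp_mask & __GFP_NO_KSWAPD);` assignment in the allocator slowpath below.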

Unfortunately, even with async migration it's still possible for the
process to be blocked waiting for a request slot (e.g.  get_request_wait
in the block layer) when ->writepage is called.  To prevent
__GFP_NO_KSWAPD allocations from blocking, this patch prevents
->writepage from being called on dirty page cache during asynchronous
migration.
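The writeback-avoidance check can be sketched with a simplified userspace model.  struct demo_page, try_move_page() and DEMO_EBUSY are hypothetical stand-ins for struct page, move_to_new_page() and -EBUSY; the one real-kernel fact modelled here is that swapcache/tmpfs set a_ops->migratepage to the nonblocking migrate_page(), so only other filesystems' dirty pages are skipped under async migration.

```c
#include <stdbool.h>

/* Hypothetical stand-in for the bits of struct page / address_space
 * state that the check in move_to_new_page() consults. */
struct demo_page {
	bool dirty;
	/* true when a_ops->migratepage == migrate_page (swapcache/tmpfs),
	 * which never calls ->writepage and so never blocks. */
	bool migratepage_is_nonblocking;
};

#define DEMO_EBUSY 16	/* stand-in for the kernel's EBUSY */

/* For async (!sync) migration, refuse a dirty page whose migratepage
 * callback could end up in ->writepage and block on a request slot;
 * sync migration and nonblocking callbacks proceed as before. */
int try_move_page(const struct demo_page *p, bool sync)
{
	if (p->dirty && !sync && !p->migratepage_is_nonblocking)
		return -DEMO_EBUSY;
	return 0;	/* proceed with migration */
}
```

The -EBUSY return maps to the new `rc = -EBUSY;` branch added to move_to_new_page() in the diff below.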

Addresses https://bugzilla.kernel.org/show_bug.cgi?id=31142

[mel@csn.ul.ie: Avoid writebacks for NFS, retry locked pages, use bool]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: Arthur Marsh <arthur.marsh@internode.on.net>
Cc: Clemens Ladisch <cladisch@googlemail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Reported-by: Alex Villacis Lasso <avillaci@ceibo.fiec.espol.edu.ec>
Tested-by: Alex Villacis Lasso <avillaci@ceibo.fiec.espol.edu.ec>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2 changed files with 34 additions and 16 deletions

@@ -564,7 +564,7 @@
  * == 0 - success
  */
 static int move_to_new_page(struct page *newpage, struct page *page,
-					int remap_swapcache)
+					int remap_swapcache, bool sync)
 {
 	struct address_space *mapping;
 	int rc;
 
@@ -586,18 +586,28 @@
 	mapping = page_mapping(page);
 	if (!mapping)
 		rc = migrate_page(mapping, newpage, page);
-	else if (mapping->a_ops->migratepage)
+	else {
 		/*
-		 * Most pages have a mapping and most filesystems
-		 * should provide a migration function. Anonymous
-		 * pages are part of swap space which also has its
-		 * own migration function. This is the most common
-		 * path for page migration.
+		 * Do not writeback pages if !sync and migratepage is
+		 * not pointing to migrate_page() which is nonblocking
+		 * (swapcache/tmpfs uses migratepage = migrate_page).
 		 */
-		rc = mapping->a_ops->migratepage(mapping,
-						newpage, page);
-	else
-		rc = fallback_migrate_page(mapping, newpage, page);
+		if (PageDirty(page) && !sync &&
+		    mapping->a_ops->migratepage != migrate_page)
+			rc = -EBUSY;
+		else if (mapping->a_ops->migratepage)
+			/*
+			 * Most pages have a mapping and most filesystems
+			 * should provide a migration function. Anonymous
+			 * pages are part of swap space which also has its
+			 * own migration function. This is the most common
+			 * path for page migration.
+			 */
+			rc = mapping->a_ops->migratepage(mapping,
+							newpage, page);
+		else
+			rc = fallback_migrate_page(mapping, newpage, page);
+	}
 
 	if (rc) {
 		newpage->mapping = NULL;
@@ -641,7 +651,7 @@
 	rc = -EAGAIN;
 
 	if (!trylock_page(page)) {
-		if (!force)
+		if (!force || !sync)
 			goto move_newpage;
 
 		/*
@@ -686,8 +696,16 @@
 	BUG_ON(charge);
 
 	if (PageWriteback(page)) {
-		if (!force || !sync)
+		/*
+		 * For !sync, there is no point retrying as the retry loop
+		 * is expected to be too short for PageWriteback to be cleared
+		 */
+		if (!sync) {
+			rc = -EBUSY;
 			goto uncharge;
+		}
+		if (!force)
+			goto uncharge;
 		wait_on_page_writeback(page);
 	}
 	/*
@@ -757,7 +775,7 @@
 
 skip_unmap:
 	if (!page_mapped(page))
-		rc = move_to_new_page(newpage, page, remap_swapcache);
+		rc = move_to_new_page(newpage, page, remap_swapcache, sync);
 
 	if (rc && remap_swapcache)
 		remove_migration_ptes(page, page);
@@ -850,7 +868,7 @@
 	try_to_unmap(hpage, TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
 
 	if (!page_mapped(hpage))
-		rc = move_to_new_page(new_hpage, hpage, 1);
+		rc = move_to_new_page(new_hpage, hpage, 1, sync);
 
 	if (rc)
 		remove_migration_ptes(hpage, hpage);
@@ -2103,7 +2103,7 @@
 					sync_migration);
 	if (page)
 		goto got_pg;
-	sync_migration = true;
+	sync_migration = !(gfp_mask & __GFP_NO_KSWAPD);
 
 	/* Try direct reclaim and then allocating */
 	page = __alloc_pages_direct_reclaim(gfp_mask, order,