Commit af34770e55fd899c96d8d73bdc04dbc956096650

Authored by Johannes Weiner
Committed by Linus Torvalds
1 parent 9b4f98cdac

mm: reduce rmap overhead for ex-KSM page copies created on swap faults

When ex-KSM pages are faulted from swap cache, the fault handler is not
capable of re-establishing anon_vma-spanning KSM pages.  In this case, a
copy of the page is created instead, just like during a COW break.
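
For reference, a minimal sketch of what "a copy is created" amounts to in the KSM copy helper (simplified, not verbatim; copy_ex_ksm_page() is a made-up name and GFP_HIGHUSER_MOVABLE an assumed allocation policy, while the page-flag setup mirrors the code changed in the diff below):

/*
 * Hedged sketch, not verbatim kernel code: build a private replacement
 * page for an ex-KSM page during a swap fault, much like breaking COW.
 * copy_ex_ksm_page() is a hypothetical name and GFP_HIGHUSER_MOVABLE an
 * assumed allocation policy.
 */
static struct page *copy_ex_ksm_page(struct page *page,
                                     struct vm_area_struct *vma,
                                     unsigned long address)
{
        struct page *new_page;

        new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
        if (!new_page)
                return NULL;

        copy_user_highpage(new_page, page, address, vma);
        SetPageDirty(new_page);
        __SetPageUptodate(new_page);
        __set_page_locked(new_page);

        /* The copy has never been mapped anywhere else. */
        return new_page;
}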

These freshly made copies are known to be exclusive to the faulting VMA
and there is no reason to go look for this page in parent and sibling
processes during rmap operations.
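
To see why that matters for rmap: a non-exclusive anonymous page must hang off the oldest (root) anon_vma so that reverse-map walks cover every process that might map it, while an exclusive page can point straight at the faulting VMA's own anon_vma. A simplified sketch of the internal __page_set_anon_rmap() helper from this era of mm/rmap.c (not verbatim, debug checks omitted):

/* Simplified sketch of __page_set_anon_rmap() (mm/rmap.c); not verbatim. */
static void __page_set_anon_rmap(struct page *page,
                                 struct vm_area_struct *vma,
                                 unsigned long address, int exclusive)
{
        struct anon_vma *anon_vma = vma->anon_vma;

        /*
         * A page that parent or sibling processes may also map must use
         * the oldest possible anon_vma (the root), so rmap walks find
         * every mapping.  An exclusive page only needs this VMA's own
         * anon_vma, which keeps later rmap walks confined to it.
         */
        if (!exclusive)
                anon_vma = anon_vma->root;

        page->mapping = (struct address_space *)
                        ((void *)anon_vma + PAGE_MAPPING_ANON);
        page->index = linear_page_index(vma, address);
}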

Use page_add_new_anon_rmap() for these copies.  This also puts them on
the proper LRU lists and marks them SwapBacked, so the ad-hoc handling
in the KSM copy code can be dropped.
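
And roughly what page_add_new_anon_rmap() does in this era of mm/rmap.c (again a simplified sketch, not verbatim; THP accounting and debug checks omitted): it marks the page exclusive and takes over exactly the SwapBacked and LRU handling that the KSM copy code had been doing by hand:

/* Simplified sketch of page_add_new_anon_rmap() (mm/rmap.c); not verbatim. */
void page_add_new_anon_rmap(struct page *page,
                            struct vm_area_struct *vma, unsigned long address)
{
        SetPageSwapBacked(page);                /* anon memory is swap-backed */
        atomic_set(&page->_mapcount, 0);        /* first (and only) mapping */
        __inc_zone_page_state(page, NR_ANON_PAGES);
        __page_set_anon_rmap(page, vma, address, 1);    /* exclusive */

        /* New anonymous pages go straight onto the LRU... */
        if (!mlocked_vma_newpage(vma, page))
                lru_cache_add_lru(page, LRU_ACTIVE_ANON);
        else    /* ...or the unevictable list if the VMA is mlocked. */
                add_page_to_unevictable_list(page);
}

This is why the diff below can both drop the ad-hoc SetPageSwapBacked/LRU code from the KSM copy path and switch the swap-fault rmap call to page_add_new_anon_rmap() when a fresh copy was made.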

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Simon Jeons <simon.jeons@gmail.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Satoru Moriya <satoru.moriya@hds.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 2 changed files with 4 additions and 7 deletions

@@ -1590,13 +1590,7 @@
 
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
-		SetPageSwapBacked(new_page);
 		__set_page_locked(new_page);
-
-		if (!mlocked_vma_newpage(vma, new_page))
-			lru_cache_add_lru(new_page, LRU_ACTIVE_ANON);
-		else
-			add_page_to_unevictable_list(new_page);
 	}
 
 	return new_page;
@@ -3044,7 +3044,10 @@
 	}
 	flush_icache_page(vma, page);
 	set_pte_at(mm, address, page_table, pte);
-	do_page_add_anon_rmap(page, vma, address, exclusive);
+	if (swapcache) /* ksm created a completely new copy */
+		page_add_new_anon_rmap(page, vma, address);
+	else
+		do_page_add_anon_rmap(page, vma, address, exclusive);
 	/* It's better to call commit-charge after rmap is established */
 	mem_cgroup_commit_charge_swapin(page, ptr);
 