Commit e8a03feb54ca7f1768bbdc2b491f9ef654e6d01d

Authored by Rik van Riel
Committed by Linus Torvalds
Parent: 9b030e2006

rmap: add exclusively owned pages to the newest anon_vma

The recent anon_vma fixes cause many anonymous pages to end up
in the parent process's anon_vma, even when those pages are
exclusively owned by the current process.

Adding exclusively owned anonymous pages to the top anon_vma
reduces rmap scanning overhead, especially in workloads with
forking servers.
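
For context, a sketch of the rmap scan this affects, loosely modeled
on the page_referenced() anon walk for this kernel (not the verbatim
code; page_was_referenced_in() is a hypothetical stand-in for the
real per-vma check):

/*
 * Every vma chained to the page's anon_vma is visited, so a page
 * parked in the oldest (parent) anon_vma is also checked against
 * every descendant process's vma, even when only one process
 * actually maps it.  Attaching an exclusively owned page to the
 * newest anon_vma keeps this list short.
 */
static int anon_page_referenced_sketch(struct page *page)
{
	struct anon_vma *anon_vma;
	struct anon_vma_chain *avc;
	int referenced = 0;

	anon_vma = page_lock_anon_vma(page);
	if (!anon_vma)
		return 0;

	list_for_each_entry(avc, &anon_vma->head, same_anon_vma)
		referenced += page_was_referenced_in(page, avc->vma);

	page_unlock_anon_vma(anon_vma);
	return referenced;
}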

This patch adds a parameter to __page_set_anon_rmap that can
be used to indicate whether or not the added page is exclusively
owned by the current process.

Pages added through page_add_new_anon_rmap are exclusively
owned by the current process, and can be added to the top
anon_vma.

Pages added through page_add_anon_rmap can be either shared
or exclusively owned, so we do the conservative thing and
add them to the oldest anon_vma.
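
To make the two cases concrete, here they are condensed from the
fault paths (error handling and locking elided; not the verbatim
kernel code):

/* do_anonymous_page(): a freshly allocated, zeroed page that no
 * other process can know about yet -- necessarily exclusive. */
page = alloc_zeroed_user_highpage_movable(vma, address);
page_add_new_anon_rmap(page, vma, address);	/* exclusive */

/* do_swap_page(): a page found in the swap cache may still be
 * mapped by a fork sibling -- exclusivity cannot be assumed. */
page = lookup_swap_cache(entry);
page_add_anon_rmap(page, vma, address);		/* possibly shared */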

A next step would be to add the exclusive parameter to
page_add_anon_rmap, to be used from functions where we do
know for sure whether a page is exclusively owned.
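
That follow-up could look something like this (hypothetical, not
part of this patch):

/* Hypothetical: thread the flag through page_add_anon_rmap() so
 * callers that know the page is exclusive can say so, instead of
 * always falling back to the conservative oldest-anon_vma choice. */
void page_add_anon_rmap(struct page *page,
	struct vm_area_struct *vma, unsigned long address, int exclusive)
{
	int first = atomic_inc_and_test(&page->_mapcount);

	if (first)
		__page_set_anon_rmap(page, vma, address, exclusive);
	else
		__page_check_anon_rmap(page, vma, address);
}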

Signed-off-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Lightly-tested-by: Borislav Petkov <bp@alien8.de>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
[ Edited to look nicer  - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 1 changed file with 16 additions and 11 deletions

@@ -730,23 +730,28 @@
  * @page: the page to add the mapping to
  * @vma: the vm area in which the mapping is added
  * @address: the user virtual address mapped
+ * @exclusive: the page is exclusively owned by the current process
  */
 static void __page_set_anon_rmap(struct page *page,
-	struct vm_area_struct *vma, unsigned long address)
+	struct vm_area_struct *vma, unsigned long address, int exclusive)
 {
-	struct anon_vma_chain *avc;
-	struct anon_vma *anon_vma;
+	struct anon_vma *anon_vma = vma->anon_vma;
 
-	BUG_ON(!vma->anon_vma);
+	BUG_ON(!anon_vma);
 
 	/*
-	 * We must use the _oldest_ possible anon_vma for the page mapping!
+	 * If the page isn't exclusively mapped into this vma,
+	 * we must use the _oldest_ possible anon_vma for the
+	 * page mapping!
 	 *
-	 * So take the last AVC chain entry in the vma, which is the deepest
-	 * ancestor, and use the anon_vma from that.
+	 * So take the last AVC chain entry in the vma, which is
+	 * the deepest ancestor, and use the anon_vma from that.
 	 */
-	avc = list_entry(vma->anon_vma_chain.prev, struct anon_vma_chain, same_vma);
-	anon_vma = avc->anon_vma;
+	if (!exclusive) {
+		struct anon_vma_chain *avc;
+		avc = list_entry(vma->anon_vma_chain.prev, struct anon_vma_chain, same_vma);
+		anon_vma = avc->anon_vma;
+	}
 
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
 	page->mapping = (struct address_space *) anon_vma;
@@ -802,7 +807,7 @@
 	VM_BUG_ON(!PageLocked(page));
 	VM_BUG_ON(address < vma->vm_start || address >= vma->vm_end);
 	if (first)
-		__page_set_anon_rmap(page, vma, address);
+		__page_set_anon_rmap(page, vma, address, 0);
 	else
 		__page_check_anon_rmap(page, vma, address);
 }
@@ -824,7 +829,7 @@
 	SetPageSwapBacked(page);
 	atomic_set(&page->_mapcount, 0);	/* increment count (starts at -1) */
 	__inc_zone_page_state(page, NR_ANON_PAGES);
-	__page_set_anon_rmap(page, vma, address);
+	__page_set_anon_rmap(page, vma, address, 1);
 	if (page_evictable(page, vma))
 		lru_cache_add_lru(page, LRU_ACTIVE_ANON);
 	else