Commit 73848b4684e84a84cfd1555af78d41158f31e16b

Authored by Hugh Dickins
Committed by Linus Torvalds
1 parent 08beca44df

ksm: fix mlockfreed to munlocked

When KSM merges an mlocked page, it has been forgetting to munlock it:
that's been left to free_page_mlock(), which reports it in /proc/vmstat as
unevictable_pgs_mlockfreed instead of unevictable_pgs_munlocked (and
whinges "Page flag mlocked set for process" in mmotm, whereas mainline is
silently forgiving).  Call munlock_vma_page() to fix that.

Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Izik Eidus <ieidus@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Chris Wright <chrisw@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
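
In essence, the fix adds a single munlock step to KSM's merge path in
try_to_merge_one_page(). A condensed sketch of the patched logic, paraphrasing
the mm/ksm.c hunk below (not a drop-in replacement; surrounding code omitted):

	/* Inside try_to_merge_one_page(): the caller holds the vma's
	 * mmap_sem and the page lock, which is exactly what
	 * munlock_vma_page() (newly shared via mm/internal.h) requires.
	 */
	if (write_protect_page(vma, page, &orig_pte) == 0 &&
	    pages_identical(page, kpage))
		err = replace_page(vma, page, kpage, orig_pte);

	/* The page has just been replaced by kpage in this VM_LOCKED vma:
	 * munlock it now, rather than leaving free_page_mlock() to mop up
	 * at free time (which is what inflated unevictable_pgs_mlockfreed).
	 */
	if ((vma->vm_flags & VM_LOCKED) && !err)
		munlock_vma_page(page);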

3 changed files with 8 additions and 3 deletions

--- a/mm/internal.h
+++ b/mm/internal.h
@@ -105,9 +105,10 @@
 }
 
 /*
- * must be called with vma's mmap_sem held for read, and page locked.
+ * must be called with vma's mmap_sem held for read or write, and page locked.
  */
 extern void mlock_vma_page(struct page *page);
+extern void munlock_vma_page(struct page *page);
 
 /*
  * Clear the page's PageMlocked(). This can be useful in a situation where
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -34,6 +34,7 @@
 #include <linux/ksm.h>
 
 #include <asm/tlbflush.h>
+#include "internal.h"
 
 /*
  * A few notes about the KSM scanning process,
@@ -761,6 +762,9 @@
 	if (write_protect_page(vma, page, &orig_pte) == 0 &&
 	    pages_identical(page, kpage))
 		err = replace_page(vma, page, kpage, orig_pte);
+
+	if ((vma->vm_flags & VM_LOCKED) && !err)
+		munlock_vma_page(page);
 
 	unlock_page(page);
 out:
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -99,14 +99,14 @@
  * not get another chance to clear PageMlocked. If we successfully
  * isolate the page and try_to_munlock() detects other VM_LOCKED vmas
  * mapping the page, it will restore the PageMlocked state, unless the page
- * is mapped in a non-linear vma. So, we go ahead and SetPageMlocked(),
+ * is mapped in a non-linear vma. So, we go ahead and ClearPageMlocked(),
  * perhaps redundantly.
  * If we lose the isolation race, and the page is mapped by other VM_LOCKED
  * vmas, we'll detect this in vmscan--via try_to_munlock() or try_to_unmap()
  * either of which will restore the PageMlocked state by calling
  * mlock_vma_page() above, if it can grab the vma's mmap sem.
  */
-static void munlock_vma_page(struct page *page)
+void munlock_vma_page(struct page *page)
 {
 	BUG_ON(!PageLocked(page));
 
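
Making munlock_vma_page() non-static and declaring it in mm/internal.h means
other mm/ code can now call it, under the same contract as mlock_vma_page():
the vma's mmap_sem held (for read or write) and the page locked. A minimal,
hypothetical caller sketch (drop_vma_mlock() is illustrative only, not part of
this commit):

	/* Hypothetical mm-internal helper, for illustration only: undo
	 * mlock accounting for one page of a VM_LOCKED vma. Assumes the
	 * caller already holds the vma's mmap_sem.
	 */
	static void drop_vma_mlock(struct vm_area_struct *vma,
				   struct page *page)
	{
		if (!(vma->vm_flags & VM_LOCKED))
			return;
		lock_page(page);	/* munlock_vma_page() needs PageLocked */
		munlock_vma_page(page);
		unlock_page(page);
	}

In the KSM path above no extra page lock is taken, because
try_to_merge_one_page() already holds it when it calls munlock_vma_page().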