Commit 1da286ebc5a1d23d0b4b88ba0d64fc141ac4c37d

Authored by Waiman Long
Committed by Greg Kroah-Hartman
1 parent de1fc405fb

mm, thp: move invariant bug check out of loop in __split_huge_page_map

commit f8303c2582b889351e261ff18c4d8eb197a77db2 upstream.

In __split_huge_page_map(), the check for page_mapcount(page) is
invariant within the for loop.  Because the macro is implemented using
atomic_read(), the compiler cannot optimize the redundant check away,
leading to an unnecessary read of the page structure on every iteration.
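
As an illustration, here is a minimal user-space sketch of the same
pattern (not the kernel code; struct fake_page, fake_page_mapcount() and
split_loop() are made-up names).  page_mapcount() reads page->_mapcount
with atomic_read(), i.e. a volatile access, so a check placed inside the
loop forces a fresh load of the counter on every pass; the compiler may
not hoist it even though the value never changes here.

#include <assert.h>

/* Hypothetical stand-ins for struct page and page_mapcount(). */
struct fake_page {
	volatile int _mapcount;		/* plays the role of atomic_t _mapcount */
};

static int fake_page_mapcount(struct fake_page *page)
{
	return page->_mapcount + 1;	/* same shape as page_mapcount() */
}

void split_loop(struct fake_page *page, int nr, int writable)
{
	int i;

	for (i = 0; i < nr; i++) {
		/* The volatile read below is re-issued on every iteration;
		 * hoisting the check out of the loop is up to the programmer. */
		if (writable)
			assert(fake_page_mapcount(page) == 1);
		/* ... per-PTE work ... */
	}
}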

This patch moves the invariant bug check out of the loop so that it is
done only once.  On a 3.16-rc1 based kernel, a microbenchmark that broke
up 1000 transparent huge pages using munmap() ran in 38,245us with the
patch and 38,548us without it, a performance gain of about 1%.
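
For reference, a microbenchmark of the kind described above could look
roughly like the sketch below.  This is a hypothetical reconstruction,
not the author's actual test program; the 2MB huge page size, the
split-by-partial-munmap() pattern, and all names are assumptions.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

#define HPAGE_SIZE	(2UL << 20)	/* assume 2MB transparent huge pages */
#define NR_HPAGES	1000UL

int main(void)
{
	size_t len = NR_HPAGES * HPAGE_SIZE;
	struct timeval t0, t1;
	unsigned long i;
	char *raw, *buf;

	/* Over-map by one huge page so the buffer can be aligned to a
	 * 2MB boundary, then ask for THP and fault everything in. */
	raw = mmap(NULL, len + HPAGE_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED)
		return 1;
	buf = (char *)(((unsigned long)raw + HPAGE_SIZE - 1) & ~(HPAGE_SIZE - 1));
	madvise(buf, len, MADV_HUGEPAGE);
	memset(buf, 0, len);

	gettimeofday(&t0, NULL);
	/* Unmapping all but the first 4KB of each huge page forces the
	 * kernel to split every THP, exercising __split_huge_page_map(). */
	for (i = 0; i < NR_HPAGES; i++)
		munmap(buf + i * HPAGE_SIZE + 4096, HPAGE_SIZE - 4096);
	gettimeofday(&t1, NULL);

	printf("splitting %lu huge pages took %ld us\n", NR_HPAGES,
	       (long)((t1.tv_sec - t0.tv_sec) * 1000000L +
		      (t1.tv_usec - t0.tv_usec)));
	return 0;
}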

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Showing 1 changed file with 2 additions and 2 deletions

mm/huge_memory.c
@@ -1819,6 +1819,8 @@
 	if (pmd) {
 		pgtable = pgtable_trans_huge_withdraw(mm, pmd);
 		pmd_populate(mm, &_pmd, pgtable);
+		if (pmd_write(*pmd))
+			BUG_ON(page_mapcount(page) != 1);
 
 		haddr = address;
 		for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
@@ -1828,8 +1830,6 @@
 			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 			if (!pmd_write(*pmd))
 				entry = pte_wrprotect(entry);
-			else
-				BUG_ON(page_mapcount(page) != 1);
 			if (!pmd_young(*pmd))
 				entry = pte_mkold(entry);
 			if (pmd_numa(*pmd))