Commit 7526674de0c921e7f1e9b6f71a1f9d832557b554

Authored by Adam Litke
Committed by Linus Torvalds
1 parent b5f1f5557f

hugetlb: make unmap_ref_private multi-size-aware

Oops.  Part of the hugetlb private reservation code was not fully
converted to use hstates.

When a huge page must be unmapped from VMAs due to a failed COW,
HPAGE_SIZE is used in the call to unmap_hugepage_range() regardless of
the page size being used.  This works if the VMA is using the default
huge page size.  Otherwise we might unmap too much, too little, or
trigger a BUG_ON.  Rare but serious -- fix it.

Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: Jon Tollefson <kniht@linux.vnet.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 1 changed file with 3 additions and 2 deletions

@@ -1796,6 +1796,7 @@
 static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
 			struct page *page, unsigned long address)
 {
+	struct hstate *h = hstate_vma(vma);
 	struct vm_area_struct *iter_vma;
 	struct address_space *mapping;
 	struct prio_tree_iter iter;
@@ -1805,7 +1806,7 @@
 	 * vm_pgoff is in PAGE_SIZE units, hence the different calculation
 	 * from page cache lookup which is in HPAGE_SIZE units.
 	 */
-	address = address & huge_page_mask(hstate_vma(vma));
+	address = address & huge_page_mask(h);
 	pgoff = ((address - vma->vm_start) >> PAGE_SHIFT)
 		+ (vma->vm_pgoff >> PAGE_SHIFT);
 	mapping = (struct address_space *)page_private(page);
@@ -1824,7 +1825,7 @@
 	 */
 	if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
 		unmap_hugepage_range(iter_vma,
-				address, address + HPAGE_SIZE,
+				address, address + huge_page_size(h),
 				page);
 	}