20 Apr, 2005

14 commits

  • Now that all the MMU architectures define FIRST_USER_ADDRESS, remove the hack
    from mmap.c which derived it from FIRST_USER_PGD_NR (the hack is sketched
    below).

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
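
    A sketch of the hack in question (an assumed form, not the exact code that
    was removed): where an architecture had not yet defined FIRST_USER_ADDRESS,
    mmap.c derived it from the old pgd index.

        #ifndef FIRST_USER_ADDRESS
        #define FIRST_USER_ADDRESS	(FIRST_USER_PGD_NR * PGDIR_SIZE)
        #endif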
     
  • Replace the misleading definition of FIRST_USER_PGD_NR as 0 with a definition
    of FIRST_USER_ADDRESS as 0 in all the MMU architectures other than arm and
    arm26.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
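
    The shape of the change in each affected architecture's pgtable.h is roughly
    the following (an illustrative sketch, not any particular architecture's
    actual header):

        /* before: a pgd index, which is not what callers really want */
        #define FIRST_USER_PGD_NR	0

        /* after: the lowest address at which userspace may be mapped */
        #define FIRST_USER_ADDRESS	0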
     
  • On ARM26, define FIRST_USER_ADDRESS as PAGE_SIZE (just beyond the machine
    vectors when they are mapped low), and use that definition in place of the
    locally defined MIN_MAP_ADDR. Previously, ARM26 permitted user mappings at 0
    if the machine vectors were mapped high; but that is inconsistent with ARM,
    and FIRST_USER_ADDRESS would then have to be determined at runtime. Fix it at
    PAGE_SIZE throughout the architecture instead.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • On ARM, define FIRST_USER_ADDRESS as PAGE_SIZE (just beyond the machine
    vectors when they are mapped low), and use that definition in place of the
    locally defined MIN_MAP_ADDR (the change common to ARM and ARM26 is sketched
    below).

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
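
    A sketch of the pattern on both ARM and ARM26 (assumed form, not the exact
    diff): the architecture's pgtable.h gains the definition, and code which
    previously compared against a local MIN_MAP_ADDR compares against it instead.

        /* in the architecture's pgtable.h: the first user address lies just
         * beyond the machine vectors when they are mapped low */
        #define FIRST_USER_ADDRESS	PAGE_SIZE

        /* callers that used a local MIN_MAP_ADDR now do, illustratively: */
        if (addr < FIRST_USER_ADDRESS)
                return -EINVAL;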
     
  • Remove the use of FIRST_USER_PGD_NR from sys_mincore: it is inconsistent (no
    other syscall refers to it), unnecessary (sys_mincore loops over the vmas
    further down), and incorrect (it misses user addresses within ARM's first
    pgd).

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • The patches making free_pgtables go by the vma list left problems on any
    architecture which leaves some user-address page table entries not covered by
    any vma. Andi has fixed the 32-bit vDSO on x86_64 to use a vma. Now fix arm
    (and arm26), whose first PAGE_SIZE is reserved (perhaps) for machine vectors.

    Our calls to free_pgtables must not touch that area, and exit_mmap's
    BUG_ON(nr_ptes) must allow that arm's get_pgd_slow may (or may not) have
    allocated an extra page table, which its free_pgd_slow would free later (both
    points are sketched below).

    FIRST_USER_PGD_NR has misled me and others: until all the arches define
    FIRST_USER_ADDRESS instead, add a hack in mmap.c to derive one from the other.
    This patch fixes the bugs; the remaining patches just clean it up.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
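
    A hedged sketch of the two points in exit_mmap that this fix touches (assumed
    shapes, not the exact patch):

        /* 1. never free page tables below the first user address, where
         *    ARM and ARM26 may keep the machine vectors mapped */
        free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);

        /* 2. allow for the extra page table that ARM's get_pgd_slow may
         *    have allocated below FIRST_USER_ADDRESS; its free_pgd_slow
         *    frees that one later, so it may still be counted here */
        BUG_ON(mm->nr_ptes >
               (FIRST_USER_ADDRESS + PGDIR_SIZE - 1) >> PGDIR_SHIFT);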
     
  • Once we're strict about clearing away page tables, hugetlb_prefault can assume
    there are no page tables left within its range. Since the other arches
    continue if !pte_none here, let i386 do the same.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
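
    A sketch of the i386 hugetlb_prefault loop as described (pte_none is named in
    the text; huge_pte_alloc is the era's arch helper, assumed here; the rest of
    the surrounding detail is illustrative):

        for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) {
                pte_t *pte = huge_pte_alloc(mm, addr);
                if (!pte) {
                        ret = -ENOMEM;
                        goto out;
                }
                /* with page tables strictly cleared away, a populated pte is
                 * unexpected here; like the other arches, simply skip it */
                if (!pte_none(*pte))
                        continue;
                /* ... allocate and instantiate the huge page as before ... */
        }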
     
  • While dabbling here in mmap.c, clean up mysterious "mpnt"s to "vma"s.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • ia64 and sparc64 hurriedly had to introduce their own variants of
    pgd_addr_end, to leapfrog over the holes in their virtual address spaces which
    the final clear_page_range suddenly presented when converted from pgd_index to
    pgd_addr_end. But now that free_pgtables respects the vma list, those holes
    are never presented, and the arch variants can go.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
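
    For reference, the generic pgd_addr_end clamps the next pgd boundary to the
    end of the range being walked; its common form looks roughly like this:

        /* next pgd boundary after addr, clamped so it never passes end;
         * the "- 1"s let an end of 0 stand for the top of the address space */
        #define pgd_addr_end(addr, end)                                         \
        ({      unsigned long __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;  \
                (__boundary - 1 < (end) - 1) ? __boundary : (end);              \
        })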
     
  • ia64 and ppc64 had hugetlb_free_pgtables functions which were no longer being
    called, and it wasn't obvious what to do about them.

    The ppc64 case turns out to be easy: the associated tables are noted elsewhere
    and freed later, so it is safe either to skip its hugetlb areas or to go
    through the motions of freeing nothing. Since ia64 does need a special case,
    restore to ppc64 the special case of skipping them.

    The ia64 hugetlb case has been broken since pgd_addr_end went in, though it
    probably appeared to work okay if you just had one such area; in fact it's
    been broken much longer if you consider a long munmap spanning from another
    region into the hugetlb region.

    In the ia64 hugetlb region, more virtual address bits are available than in
    the other regions, yet the page tables are structured the same way: the page
    at the bottom level is simply larger. Here we need to scale down each address
    before passing it to the standard free_pgd_range (sketched below). I was about
    to write a hugely_scaled_down macro, but found that htlbpage_to_page already
    exists for just this purpose. Also fixed an off-by-one in ia64's
    is_hugepage_only_range.

    Uninline free_pgd_range to make it available to ia64. Make sure the
    vma-gathering loop in free_pgtables cannot join a hugepage_only_range to any
    other (is it safe to join two huge ranges? probably, but don't bother).

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
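
    A sketch of the scaling described above (htlbpage_to_page and free_pgd_range
    are the functions named in the text; the wrapper shown is illustrative, not
    the patch itself):

        /* ia64 packs the hugetlb region's page tables as if each address
         * were scaled down, so shrink the addresses before handing them to
         * the standard free_pgd_range */
        addr = htlbpage_to_page(addr);
        end  = htlbpage_to_page(end);
        /* (floor and ceiling need the same treatment where they fall
         *  within the hugetlb region) */
        free_pgd_range(tlb, addr, end, floor, ceiling);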
     
  • There's only one usage of MM_VM_SIZE(mm) left, and it's a troublesome macro
    because mm doesn't contain the (32-bit emulation?) info needed. But it too is
    only needed because we ignore the end from the vma list.

    We could make flush_pgtables return that end, or unmap_vmas. Choose the
    latter, since it's a natural fit with unmap_mapping_range_vma needing to know
    its restart addr. This does make more than a minimal change; but if unmap_vmas
    had returned the end before, this is how we'd have done it, rather than
    storing the break_addr in zap_details.

    unmap_vmas used to return the count of vmas scanned, but that is just debug
    information which hasn't been useful in a while; and if we want the "map_count
    is 0 on exit" check back, it can easily come from the final remove_vm_struct
    loop.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
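
    A sketch of how the returned end is then used in exit_mmap, in place of the
    old MM_VM_SIZE(mm) (an assumed shape of the call site, not the exact patch):

        /* unmap_vmas now reports how far it actually got, so the final TLB
         * flush no longer needs MM_VM_SIZE(mm) to guess an upper bound */
        end = unmap_vmas(&tlb, mm, vma, 0, -1, &nr_accounted, NULL);
        vm_unacct_memory(nr_accounted);
        /* ... free the page tables along the vma list ... */
        tlb_finish_mmu(tlb, 0, end);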
     
  • The recent woes of some arches needing their own pgd_addr_end macro; the
    4-level clear_page_range regression since 2.6.10's clear_page_tables; and its
    long-standing, well-known inefficiency in searching throughout the
    higher-level page tables for those few entries to clear and free: all can be
    blamed on ignoring the list of vmas when we free page tables.

    Replace exit_mmap's clear_page_range of the total user address space by
    free_pgtables operating on the mm's vma list; unmap_region uses it in the same
    way, giving a floor and ceiling beyond which it may not free tables (the two
    call sites are sketched below). This brings lmbench fork/exec/sh numbers back
    to 2.6.10 (unless preempt is enabled, in which case latency fixes spoil
    unmap_vmas throughput).

    Beware: the do_mmap_pgoff driver failure case must now use unmap_region
    instead of zap_page_range, since a page table might have been allocated, and
    can only be freed while it is touched by some vma.

    Move free_pgtables from mmap.c to memory.c, where its lower levels are adapted
    from the clear_page_range levels. (Most of free_pgtables' old code was
    actually for a non-existent case, prev not properly set up, dating from before
    hch gave us split_vma.) Pass mmu_gather** in the public interfaces, since we
    might want to add latency lockdrops later; but there is no attempt to do so
    yet, since going by vma should itself reduce latency.

    But what if is_hugepage_only_range? Those ia64 and ppc64 cases need careful
    examination: put that off until a later patch of the series.

    What of x86_64's 32-bit vdso page, which __map_syscall32 maps outside any vma?

    And the range passed to sparc64's flush_tlb_pgtables? It's less clear to me
    now that we need to do more than is done here: every PMD_SIZE area ever
    occupied will be flushed, but do we really have to flush every PGDIR_SIZE area
    ever partially occupied? It would be a shame to complicate it unnecessarily.

    Special thanks to David Miller for time spent repairing my ceilings.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
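
    A hedged sketch of the two call sites as described: exit_mmap frees along the
    whole vma list, while unmap_region takes its floor and ceiling from the
    neighbouring vmas so that no page table still in use by them is freed (assumed
    shapes, using the FIRST_USER_ADDRESS form the later patches in the series
    settle on):

        /* in exit_mmap(): every table the mm's vmas ever needed may go */
        free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);

        /* in unmap_region(): stay within the gap left by the neighbouring
         * vmas; a ceiling of 0 means no upper limit */
        free_pgtables(&tlb, vma,
                      prev ? prev->vm_end : FIRST_USER_ADDRESS,
                      next ? next->vm_start : 0);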
     
  • Merge for 13 driver core, sysfs, and debugfs fixes.

    Linus Torvalds
     
  • Merge for 11 aoe bugfix patches.

    Linus Torvalds
     

19 Apr, 2005

26 commits