16 Apr, 2015

1 commit

  • We converted some of the usages of ACCESS_ONCE to READ_ONCE in the mm/
    tree, since ACCESS_ONCE does not work reliably on non-scalar types.

    This patch removes the remaining usages of ACCESS_ONCE and uses the new
    READ_ONCE API for the read accesses. This makes things cleaner, instead
    of using separate/multiple sets of APIs.
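
    For illustration, a minimal sketch of the kind of conversion this
    performs (the variable names here are illustrative, not taken from the
    patch):

        /* before: may be miscompiled by gcc for non-scalar types */
        pmd_t pmdval = ACCESS_ONCE(*pmd);

        /* after: READ_ONCE handles any size, falling back to memcpy */
        pmd_t pmdval = READ_ONCE(*pmd);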

    Signed-off-by: Jason Low
    Acked-by: Michal Hocko
    Acked-by: Davidlohr Bueso
    Acked-by: Rik van Riel
    Reviewed-by: Christian Borntraeger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jason Low
     

26 Mar, 2015

1 commit

  • I have constantly stumbled upon "kernel BUG at mm/rmap.c:399!" after
    upgrading to 3.19, and had no luck with 4.0-rc1 either.

    So, after looking into the new logic introduced by commit 7a3ef208e662
    ("mm: prevent endless growth of anon_vma hierarchy"), I found that
    unlink_anon_vmas() can be called without dst->anon_vma->degree having
    been incremented in anon_vma_clone(), due to an allocation failure. If
    dst->anon_vma is not NULL in the error path, its degree will be
    incorrectly decremented in unlink_anon_vmas() and will eventually
    underflow when exiting, as a result of another call to
    unlink_anon_vmas(). That is how "kernel BUG at mm/rmap.c:399!" is
    triggered for me.

    This patch fixes the underflow by dropping dst->anon_vma when the
    allocation fails. It is safe to do so regardless of the original value
    of dst->anon_vma, because dst->anon_vma has no valid meaning if
    anon_vma_clone() fails. Besides, callers don't care about dst->anon_vma
    in that case either.

    As also suggested by Michal Hocko, we can clean up vma_adjust() a bit,
    as anon_vma_clone() now does the work.
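
    The shape of the fix, sketched against the 3.19-era mm/rmap.c (the
    label matches the existing error path):

        enomem_failure:
                /*
                 * dst->anon_vma is dropped here; otherwise its degree
                 * can be incorrectly decremented in unlink_anon_vmas().
                 * This is safe because callers don't care about
                 * dst->anon_vma when anon_vma_clone() fails.
                 */
                dst->anon_vma = NULL;
                unlink_anon_vmas(dst);
                return -ENOMEM;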

    [akpm@linux-foundation.org: tweak comment]
    Fixes: 7a3ef208e662 ("mm: prevent endless growth of anon_vma hierarchy")
    Signed-off-by: Leon Yu
    Signed-off-by: Konstantin Khlebnikov
    Reviewed-by: Michal Hocko
    Acked-by: Rik van Riel
    Acked-by: David Rientjes
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Leon Yu
     

12 Feb, 2015

1 commit

  • The complexity of memcg page stat synchronization is currently leaking
    into the callsites, forcing them to keep track of the move_lock state and
    the IRQ flags. Simplify the API by tracking it in the memcg.
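
    After this change a callsite only needs a begin/end pair; a sketch of
    the resulting usage, modeled on page_add_file_rmap() of this era:

        struct mem_cgroup *memcg;

        /* the memcg tracks move_lock state and IRQ flags internally */
        memcg = mem_cgroup_begin_page_stat(page);
        if (atomic_inc_and_test(&page->_mapcount)) {
                __inc_zone_page_state(page, NR_FILE_MAPPED);
                mem_cgroup_inc_page_stat(memcg, MEM_CGROUP_STAT_FILE_MAPPED);
        }
        mem_cgroup_end_page_stat(memcg);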

    Signed-off-by: Johannes Weiner
    Acked-by: Michal Hocko
    Reviewed-by: Vladimir Davydov
    Cc: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

09 Jan, 2015

1 commit

  • A constantly forking task causes unlimited growth of the anon_vma
    chain. Each next child allocates a new level of anon_vmas and links its
    vma to all previous levels, because pages might be inherited from any
    level.

    This patch adds a heuristic which decides to reuse an existing anon_vma
    instead of forking a new one. It adds a counter, anon_vma->degree,
    which counts linked vmas and directly descending anon_vmas, and reuses
    the anon_vma if the counter is lower than two. As a result, each
    anon_vma has either a vma or at least two descending anon_vmas. In such
    trees, half of the nodes are leaves with live vmas, so the count of
    anon_vmas is no more than twice the count of vmas.

    This heuristic reuses anon_vmas as rarely as possible, because each
    reuse adds false aliasing among vmas, and the rmap walker then has to
    scan more ptes when it searches for where a page might be mapped.
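
    The reuse condition amounts to roughly the following check inside
    anon_vma_clone()'s loop (a sketch of the 3.19-era code):

        /*
         * Reuse the anon_vma if it has no vma of its own and fewer
         * than two direct descendants; otherwise keep the new level.
         */
        if (!dst->anon_vma && src->anon_vma &&
            anon_vma != src->anon_vma && anon_vma->degree < 2)
                dst->anon_vma = anon_vma;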

    Link: http://lkml.kernel.org/r/20120816024610.GA5350@evergreen.ssec.wisc.edu
    Fixes: 5beb49305251 ("mm: change anon_vma linking to fix multi-process server scalability issue")
    [akpm@linux-foundation.org: fix typo, per Rik]
    Signed-off-by: Konstantin Khlebnikov
    Reported-by: Daniel Forrest
    Tested-by: Michal Hocko
    Tested-by: Jerome Marchand
    Reviewed-by: Michal Hocko
    Reviewed-by: Rik van Riel
    Cc: [2.6.34+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Konstantin Khlebnikov
     

21 Dec, 2014

1 commit

  • Pull ACCESS_ONCE cleanup preparation from Christian Borntraeger:
    "kernel: Provide READ_ONCE and ASSIGN_ONCE

    As discussed on LKML http://marc.info/?i=54611D86.4040306%40de.ibm.com
    ACCESS_ONCE might fail with specific compilers for non-scalar
    accesses.

    Here is a set of patches to tackle that problem.

    The first patch introduces READ_ONCE and ASSIGN_ONCE. If the data
    structure is larger than the machine word size, memcpy is used and a
    warning is emitted. The next patches fix up several in-tree users of
    ACCESS_ONCE on non-scalar types.

    This does not yet contain a patch that forces ACCESS_ONCE to work only
    on scalar types. This is targeted for the next merge window, as
    linux-next already contains new offenders regarding ACCESS_ONCE vs.
    non-scalar types"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/borntraeger/linux:
    s390/kvm: REPLACE barrier fixup with READ_ONCE
    arm/spinlock: Replace ACCESS_ONCE with READ_ONCE
    arm64/spinlock: Replace ACCESS_ONCE with READ_ONCE
    mips/gup: Replace ACCESS_ONCE with READ_ONCE
    x86/gup: Replace ACCESS_ONCE with READ_ONCE
    x86/spinlock: Replace ACCESS_ONCE with READ_ONCE
    mm: replace ACCESS_ONCE with READ_ONCE or barriers
    kernel: Provide READ_ONCE and ASSIGN_ONCE
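
    The core of the new primitive, abbreviated (a sketch of the 3.19-era
    include/linux/compiler.h):

        static __always_inline void __read_once_size(volatile void *p,
                                                     void *res, int size)
        {
                switch (size) {
                case 1: *(__u8 *)res = *(volatile __u8 *)p; break;
                case 2: *(__u16 *)res = *(volatile __u16 *)p; break;
                case 4: *(__u32 *)res = *(volatile __u32 *)p; break;
                case 8: *(__u64 *)res = *(volatile __u64 *)p; break;
                default:
                        /* non-scalar: fall back to memcpy and warn */
                        barrier();
                        __builtin_memcpy((void *)res, (const void *)p, size);
                        barrier();
                }
        }

        #define READ_ONCE(x) \
                ({ typeof(x) __val; \
                   __read_once_size(&(x), &__val, sizeof(__val)); \
                   __val; })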

    Linus Torvalds
     

18 Dec, 2014

1 commit

  • ACCESS_ONCE does not work reliably on non-scalar types. For
    example, gcc 4.6 and 4.7 might remove the volatile tag for such
    accesses during the SRA (scalar replacement of aggregates) step
    (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).

    Let's change the code to access the page table elements with
    READ_ONCE, which does implicit scalar accesses, for the gup code.

    mm_find_pmd is tricky, because m68k and sparc (32-bit) define pmd_t
    as an array of longs. This code requires only that the pmd_present
    and pmd_trans_huge checks are done on the same value, so a barrier
    is sufficient.

    A similar case is in handle_pte_fault. On ppc44x the word size is
    32 bit, but a pte is 64 bit. A barrier is OK there as well.
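
    The mm_find_pmd() portion, roughly (a sketch of the code as of this
    series):

        pmd_t pmde = *pmd;

        /*
         * Make sure pmd_present() and pmd_trans_huge() below test the
         * same snapshot rather than re-reading *pmd.
         */
        barrier();
        if (!pmd_present(pmde) || pmd_trans_huge(pmde))
                pmd = NULL;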

    Signed-off-by: Christian Borntraeger
    Cc: linux-mm@kvack.org
    Acked-by: Paul E. McKenney

    Christian Borntraeger
     

16 Dec, 2014

1 commit

  • Pull drm updates from Dave Airlie:
    "Highlights:

    - AMD KFD driver merge

    This is the AMD HSA interface for exposing a low-level interface for
    GPGPU use. They have an open source userspace built on top of this
    interface, and the code looks as good as it was going to get out of
    tree.

    - Initial atomic modesetting work

    The need for an atomic modesetting interface to allow userspace to
    try and send a complete set of modesetting state to the driver has
    arisen, and been suffering from neglect this past year. No more,
    the start of the common code and changes for msm driver to use it
    are in this tree. Ongoing work to get the userspace ioctl finished
    and the code clean will probably wait until next kernel.

    - DisplayID 1.3 and tiled monitor exposed to userspace.

    Tiled monitor property is now exposed for userspace to make use of.

    - Rockchip drm driver merged.

    - imx gpu driver moved out of staging

    Other stuff:

    - core:
    panel - MIPI DSI + new panels.
    expose suggested x/y properties for virtual GPUs

    - i915:
    Initial Skylake (SKL) support
    gen3/4 reset work
    start of dri1/ums removal
    infoframe tracking
    fixes for lots of things.

    - nouveau:
    tegra k1 voltage support
    GM204 modesetting support
    GT21x memory reclocking work

    - radeon:
    CI dpm fixes
    GPUVM improvements
    Initial DPM fan control

    - rcar-du:
    HDMI support added
    removed some support for old boards
    slave encoder driver for Analog Devices adv7511

    - exynos:
    Exynos4415 SoC support

    - msm:
    a4xx gpu support
    atomic helper conversion

    - tegra:
    iommu support
    universal plane support
    ganged-mode DSI support

    - sti:
    HDMI i2c improvements

    - vmwgfx:
    some late fixes.

    - qxl:
    use suggested x/y properties"

    * 'drm-next' of git://people.freedesktop.org/~airlied/linux: (969 commits)
    drm: sti: fix module compilation issue
    drm/i915: save/restore GMBUS freq across suspend/resume on gen4
    drm: sti: correctly cleanup CRTC and planes
    drm: sti: add HQVDP plane
    drm: sti: add cursor plane
    drm: sti: enable auxiliary CRTC
    drm: sti: fix delay in VTG programming
    drm: sti: prepare sti_tvout to support auxiliary crtc
    drm: sti: use drm_crtc_vblank_{on/off} instead of drm_vblank_{on/off}
    drm: sti: fix hdmi avi infoframe
    drm: sti: remove event lock while disabling vblank
    drm: sti: simplify gdp code
    drm: sti: clear all mixer control
    drm: sti: remove gpio for HDMI hot plug detection
    drm: sti: allow to change hdmi ddc i2c adapter
    drm/doc: Document drm_add_modes_noedid() usage
    drm/i915: Remove '& 0xffff' from the mask given to WA_REG()
    drm/i915: Invert the mask and val arguments in wa_add() and WA_REG()
    drm: Zero out DRM object memory upon cleanup
    drm/i915/bdw: Fix the write setting up the WIZ hashing mode
    ...

    Linus Torvalds
     

14 Dec, 2014

4 commits

  • Call page_to_pgoff() to get the page offset once we are sure we actually
    need it, and any very obvious initial function checks have passed. A
    trivial micro-optimization that potentially saves some cycles.

    Signed-off-by: Davidlohr Bueso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • Similarly to the anon memory counterpart, we can share the mapping's
    lock ownership, as the interval tree is not modified during the walk,
    only the file page.

    Signed-off-by: Davidlohr Bueso
    Acked-by: Rik van Riel
    Acked-by: "Kirill A. Shutemov"
    Acked-by: Hugh Dickins
    Cc: Oleg Nesterov
    Acked-by: Peter Zijlstra (Intel)
    Cc: Srikar Dronamraju
    Acked-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • The i_mmap_mutex is a close cousin of the anon vma lock, both protecting
    similar data, one for file-backed pages and the other for anon memory.
    To this end, this lock can also be an rwsem. In addition, there are
    some important opportunities to share the lock when there are no tree
    modifications.

    This conversion is straightforward. For now, all users take the write
    lock.

    [sfr@canb.auug.org.au: update fremap.c]
    Signed-off-by: Davidlohr Bueso
    Reviewed-by: Rik van Riel
    Acked-by: "Kirill A. Shutemov"
    Acked-by: Hugh Dickins
    Cc: Oleg Nesterov
    Acked-by: Peter Zijlstra (Intel)
    Cc: Srikar Dronamraju
    Acked-by: Mel Gorman
    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • Convert all open coded mutex_lock/unlock calls to the
    i_mmap_[lock/unlock]_write() helpers.
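
    After the rwsem conversion above, the helpers look roughly like this
    (a sketch):

        static inline void i_mmap_lock_write(struct address_space *mapping)
        {
                down_write(&mapping->i_mmap_rwsem);
        }

        static inline void i_mmap_unlock_write(struct address_space *mapping)
        {
                up_write(&mapping->i_mmap_rwsem);
        }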

    Signed-off-by: Davidlohr Bueso
    Acked-by: Rik van Riel
    Acked-by: "Kirill A. Shutemov"
    Acked-by: Hugh Dickins
    Cc: Oleg Nesterov
    Acked-by: Peter Zijlstra (Intel)
    Cc: Srikar Dronamraju
    Acked-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     

11 Dec, 2014

1 commit

  • Since commit d7365e783edb ("mm: memcontrol: fix missed end-writeback
    page accounting"), mem_cgroup_end_page_stat consumes the locked and
    flags variables directly rather than via pointers, which might trigger
    C undefined behavior, as those variables are initialized only in the
    slow path of mem_cgroup_begin_page_stat.

    Although mem_cgroup_end_page_stat handles the parameters correctly and
    touches them only when they hold a sensible value, it is the caller
    which loads a potentially uninitialized value, which then might allow
    the compiler to do crazy things.

    I haven't seen any warning from gcc, and it seems that the current
    version (4.9) doesn't exploit this type of undefined behavior, but
    Sasha has reported the following:

    UBSan: Undefined behaviour in mm/rmap.c:1084:2
    load of value 255 is not a valid value for type '_Bool'
    CPU: 4 PID: 8304 Comm: rngd Not tainted 3.18.0-rc2-next-20141029-sasha-00039-g77ed13d-dirty #1427
    Call Trace:
    dump_stack (lib/dump_stack.c:52)
    ubsan_epilogue (lib/ubsan.c:159)
    __ubsan_handle_load_invalid_value (lib/ubsan.c:482)
    page_remove_rmap (mm/rmap.c:1084 mm/rmap.c:1096)
    unmap_page_range (./arch/x86/include/asm/atomic.h:27 include/linux/mm.h:463 mm/memory.c:1146 mm/memory.c:1258 mm/memory.c:1279 mm/memory.c:1303)
    unmap_single_vma (mm/memory.c:1348)
    unmap_vmas (mm/memory.c:1377 (discriminator 3))
    exit_mmap (mm/mmap.c:2837)
    mmput (kernel/fork.c:659)
    do_exit (./arch/x86/include/asm/thread_info.h:168 kernel/exit.c:462 kernel/exit.c:747)
    do_group_exit (include/linux/sched.h:775 kernel/exit.c:873)
    SyS_exit_group (kernel/exit.c:901)
    tracesys_phase2 (arch/x86/kernel/entry_64.S:529)

    Fix this by using pointer parameters for both locked and flags, and be
    more robust against future compiler changes, even though the current
    code is implemented correctly.
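
    A sketch of the signature change (details abbreviated):

        /* before: the caller passes values it may never have initialized */
        void mem_cgroup_end_page_stat(struct mem_cgroup *memcg,
                                      bool locked, unsigned long flags);

        /* after: the callee dereferences them only on the locked path */
        void mem_cgroup_end_page_stat(struct mem_cgroup *memcg,
                                      bool *locked, unsigned long *flags);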

    Signed-off-by: Michal Hocko
    Reported-by: Sasha Levin
    Acked-by: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

08 Dec, 2014

1 commit

  • Linux 3.18

    Backmerge Linus tree into -next as we had conflicts in i915/radeon/nouveau,
    and everyone was solving them individually.

    * tag 'v3.18': (57 commits)
    Linux 3.18
    watchdog: s3c2410_wdt: Fix the mask bit offset for Exynos7
    uapi: fix to export linux/vm_sockets.h
    i2c: cadence: Set the hardware time-out register to maximum value
    i2c: davinci: generate STP always when NACK is received
    ahci: disable MSI on SAMSUNG 0xa800 SSD
    context_tracking: Restore previous state in schedule_user
    slab: fix nodeid bounds check for non-contiguous node IDs
    lib/genalloc.c: export devm_gen_pool_create() for modules
    mm: fix anon_vma_clone() error treatment
    mm: fix swapoff hang after page migration and fork
    fat: fix oops on corrupted vfat fs
    ipc/sem.c: fully initialize sem_array before making it visible
    drivers/input/evdev.c: don't kfree() a vmalloc address
    cxgb4: Fill in supported link mode for SFP modules
    xen-netfront: Remove BUGs on paged skb data which crosses a page boundary
    mm/vmpressure.c: fix race in vmpressure_work_fn()
    mm: frontswap: invalidate expired data on a dup-store failure
    mm: do not overwrite reserved pages counter at show_mem()
    drm/radeon: kernel panic in drm_calc_vbltimestamp_from_scanoutpos with 3.18.0-rc6
    ...

    Conflicts:
    drivers/gpu/drm/i915/intel_display.c
    drivers/gpu/drm/nouveau/nouveau_drm.c
    drivers/gpu/drm/radeon/radeon_cs.c

    Dave Airlie
     

04 Dec, 2014

1 commit

  • Andrew Morton noticed that the error return from anon_vma_clone() was
    being dropped and replaced with -ENOMEM (which is not itself a bug
    because the only error return value from anon_vma_clone() is -ENOMEM).

    I did an audit of the callers of anon_vma_clone() and discovered an
    actual bug where the error return was being lost. In __split_vma(),
    between Linux 3.11 and 3.12, the code was changed so that the err
    variable is used before the call to anon_vma_clone(), and its default
    initial value of -ENOMEM is overwritten. So a failure of
    anon_vma_clone() returns success, since err at this point is now zero.

    Below is a patch which fixes this bug and also propagates the error
    return value from anon_vma_clone() in all cases.
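
    The __split_vma() part of the fix, in sketch form (assuming the
    3.18-era mm/mmap.c):

        err = anon_vma_clone(new, vma);
        if (err)
                goto out_free_mpol;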

    Fixes: ef0855d334e1 ("mm: mempolicy: turn vma_set_policy() into vma_dup_policy()")
    Signed-off-by: Daniel Forrest
    Reviewed-by: Michal Hocko
    Cc: Konstantin Khlebnikov
    Cc: Andrea Arcangeli
    Cc: Rik van Riel
    Cc: Tim Hartrick
    Cc: Hugh Dickins
    Cc: Michel Lespinasse
    Cc: Vlastimil Babka
    Cc: [3.12+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Forrest
     

30 Oct, 2014

2 commits

  • page_remove_rmap() has too many branches on PageAnon() and is hard to
    follow. Move the file part into a separate function.

    Signed-off-by: Johannes Weiner
    Reviewed-by: Michal Hocko
    Cc: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Commit 0a31bc97c80c ("mm: memcontrol: rewrite uncharge API") changed
    page migration to uncharge the old page right away. The page is locked,
    unmapped, truncated, and off the LRU, but it could race with writeback
    ending, which then doesn't unaccount the page properly:

    test_clear_page_writeback()           migration
                                            wait_on_page_writeback()
      TestClearPageWriteback()
                                            mem_cgroup_migrate()
                                              clear PCG_USED
      mem_cgroup_update_page_stat()
        if (PageCgroupUsed(pc))
          decrease memcg pages under writeback

      release pc->mem_cgroup->move_lock

    The per-page statistics interface is heavily optimized to avoid a
    function call and a lookup_page_cgroup() in the file unmap fast path,
    which means it doesn't verify whether a page is still charged before
    clearing PageWriteback() and it has to do it in the stat update later.

    Rework it so that it looks up the page's memcg once at the beginning of
    the transaction and then uses it throughout. The charge will be
    verified before clearing PageWriteback() and migration can't uncharge
    the page as long as that is still set. The RCU lock will protect the
    memcg past uncharge.

    As far as losing the optimization goes, the following test results are
    from a microbenchmark that maps, faults, and unmaps a 4GB sparse file
    three times in a nested fashion, so that there are two negative passes
    that don't account but still go through the new transaction overhead.
    There is no actual difference:

    old: 33.195102545 seconds time elapsed ( +- 0.01% )
    new: 33.199231369 seconds time elapsed ( +- 0.03% )

    The time spent in page_remove_rmap()'s callees still adds up to the
    same, but the time spent in the function itself seems reduced:

    # Children Self Command Shared Object Symbol
    old: 0.12% 0.11% filemapstress [kernel.kallsyms] [k] page_remove_rmap
    new: 0.12% 0.08% filemapstress [kernel.kallsyms] [k] page_remove_rmap

    Signed-off-by: Johannes Weiner
    Acked-by: Michal Hocko
    Cc: Vladimir Davydov
    Cc: [3.17.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

10 Oct, 2014

1 commit

  • Trivially convert a few VM_BUG_ON calls to VM_BUG_ON_VMA to extract
    more information when they trigger.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Sasha Levin
    Reviewed-by: Naoya Horiguchi
    Cc: Kirill A. Shutemov
    Cc: Konstantin Khlebnikov
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Hugh Dickins
    Cc: Vlastimil Babka
    Cc: Michel Lespinasse
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sasha Levin
     

24 Sep, 2014

1 commit

  • 1. We were calling clear_flush_young_notify in unmap_one, but we are
    within an mmu notifier invalidate range scope. The spte exists no more
    (due to range_start) and the accessed-bit info has already been
    propagated (due to kvm_pfn_set_accessed). Simply call
    clear_flush_young.

    2. We clear_flush_young on a primary MMU PMD, but this may be mapped
    as a collection of PTEs by the secondary MMU (e.g. during log-dirty).
    This required expanding the interface of the clear_flush_young mmu
    notifier, so a lot of code has been trivially touched.

    3. In the absence of shadow_accessed_mask (e.g. EPT A bit), we emulate
    the accessed bit by blowing away the spte. This requires proper
    synchronization with MMU notifier consumers, like every other removal
    of sptes does.
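
    The interface expansion in point 2 turns the notifier hook into a
    range-based callback; a sketch, assuming the 3.18-era struct
    mmu_notifier_ops:

        /* was: (..., struct mm_struct *mm, unsigned long address) */
        int (*clear_flush_young)(struct mmu_notifier *mn,
                                 struct mm_struct *mm,
                                 unsigned long start,
                                 unsigned long end);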

    Signed-off-by: Andres Lagar-Cavilla
    Acked-by: Rik van Riel
    Signed-off-by: Paolo Bonzini

    Andres Lagar-Cavilla
     

09 Aug, 2014

2 commits

  • The memcg uncharging code that is involved towards the end of a page's
    lifetime - truncation, reclaim, swapout, migration - is impressively
    complicated and fragile.

    Because anonymous and file pages were always charged before they had their
    page->mapping established, uncharges had to happen when the page type
    could still be known from the context; as in unmap for anonymous, page
    cache removal for file and shmem pages, and swap cache truncation for swap
    pages. However, these operations happen well before the page is actually
    freed, and so a lot of synchronization is necessary:

    - Charging, uncharging, page migration, and charge migration all need
    to take a per-page bit spinlock as they could race with uncharging.

    - Swap cache truncation happens during both swap-in and swap-out, and
    possibly repeatedly before the page is actually freed. This means
    that the memcg swapout code is called from many contexts that make
    no sense and it has to figure out the direction from page state to
    make sure memory and memory+swap are always correctly charged.

    - On page migration, the old page might be unmapped but then reused,
    so memcg code has to prevent untimely uncharging in that case.
    Because this code - which should be a simple charge transfer - is so
    special-cased, it is not reusable for replace_page_cache().

    But now that charged pages always have a page->mapping, introduce
    mem_cgroup_uncharge(), which is called after the final put_page(), when we
    know for sure that nobody is looking at the page anymore.

    For page migration, introduce mem_cgroup_migrate(), which is called after
    the migration is successful and the new page is fully rmapped. Because
    the old page is no longer uncharged after migration, prevent double
    charges by decoupling the page's memcg association (PCG_USED and
    pc->mem_cgroup) from the page holding an actual charge. The new bits
    PCG_MEM and PCG_MEMSW represent the respective charges and are transferred
    to the new page during migration.

    mem_cgroup_migrate() is suitable for replace_page_cache() as well,
    which gets rid of mem_cgroup_replace_page_cache(). However, care
    needs to be taken because both the source and the target page can
    already be charged and on the LRU when fuse is splicing: grab the page
    lock on the charge moving side to prevent changing pc->mem_cgroup of a
    page under migration. Also, the lruvecs of both pages change as we
    uncharge the old and charge the new during migration, and putback may
    race with us, so grab the lru lock and isolate the pages iff on LRU to
    prevent races and ensure the pages are on the right lruvec afterward.

    Swap accounting is massively simplified: because the page is no longer
    uncharged as early as swap cache deletion, a new mem_cgroup_swapout() can
    transfer the page's memory+swap charge (PCG_MEMSW) to the swap entry
    before the final put_page() in page reclaim.

    Finally, page_cgroup changes are now protected by whatever protection the
    page itself offers: anonymous pages are charged under the page table lock,
    whereas page cache insertions, swapin, and migration hold the page lock.
    Uncharging happens under full exclusion with no outstanding references.
    Charging and uncharging also ensure that the page is off-LRU, which
    serializes against charge migration. Remove the very costly page_cgroup
    lock and set pc->flags non-atomically.
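
    The two entry points this introduces, in sketch form (signatures as
    in the 3.17-era include/linux/memcontrol.h):

        /* uncharge after the final put_page(), with no references left */
        void mem_cgroup_uncharge(struct page *page);

        /* transfer the charge once the new page is fully rmapped */
        void mem_cgroup_migrate(struct page *oldpage, struct page *newpage,
                                bool lrucare);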

    [mhocko@suse.cz: mem_cgroup_charge_statistics needs preempt_disable]
    [vdavydov@parallels.com: fix flags definition]
    Signed-off-by: Johannes Weiner
    Cc: Hugh Dickins
    Cc: Tejun Heo
    Cc: Vladimir Davydov
    Tested-by: Jet Chen
    Acked-by: Michal Hocko
    Tested-by: Felipe Balbi
    Signed-off-by: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • These patches rework memcg charge lifetime to integrate more naturally
    with the lifetime of user pages. This drastically simplifies the code and
    reduces charging and uncharging overhead. The most expensive part of
    charging and uncharging is the page_cgroup bit spinlock, which is removed
    entirely after this series.

    Here are the top-10 profile entries of a stress test that reads a 128G
    sparse file on a freshly booted box, without even a dedicated cgroup (i.e.
    executing in the root memcg). Before:

    15.36% cat [kernel.kallsyms] [k] copy_user_generic_string
    13.31% cat [kernel.kallsyms] [k] memset
    11.48% cat [kernel.kallsyms] [k] do_mpage_readpage
    4.23% cat [kernel.kallsyms] [k] get_page_from_freelist
    2.38% cat [kernel.kallsyms] [k] put_page
    2.32% cat [kernel.kallsyms] [k] __mem_cgroup_commit_charge
    2.18% kswapd0 [kernel.kallsyms] [k] __mem_cgroup_uncharge_common
    1.92% kswapd0 [kernel.kallsyms] [k] shrink_page_list
    1.86% cat [kernel.kallsyms] [k] __radix_tree_lookup
    1.62% cat [kernel.kallsyms] [k] __pagevec_lru_add_fn

    After:

    15.67% cat [kernel.kallsyms] [k] copy_user_generic_string
    13.48% cat [kernel.kallsyms] [k] memset
    11.42% cat [kernel.kallsyms] [k] do_mpage_readpage
    3.98% cat [kernel.kallsyms] [k] get_page_from_freelist
    2.46% cat [kernel.kallsyms] [k] put_page
    2.13% kswapd0 [kernel.kallsyms] [k] shrink_page_list
    1.88% cat [kernel.kallsyms] [k] __radix_tree_lookup
    1.67% cat [kernel.kallsyms] [k] __pagevec_lru_add_fn
    1.39% kswapd0 [kernel.kallsyms] [k] free_pcppages_bulk
    1.30% cat [kernel.kallsyms] [k] kfree

    As you can see, the memcg footprint has shrunk quite a bit.

    text data bss dec hex filename
    37970 9892 400 48262 bc86 mm/memcontrol.o.old
    35239 9892 400 45531 b1db mm/memcontrol.o

    This patch (of 4):

    The memcg charge API charges pages before they are rmapped - i.e. have an
    actual "type" - and so every callsite needs its own set of charge and
    uncharge functions to know what type is being operated on. Worse,
    uncharge has to happen from a context that is still type-specific, rather
    than at the end of the page's lifetime with exclusive access, and so
    requires a lot of synchronization.

    Rewrite the charge API to provide a generic set of try_charge(),
    commit_charge() and cancel_charge() transaction operations, much like
    what's currently done for swap-in:

    mem_cgroup_try_charge() attempts to reserve a charge, reclaiming
    pages from the memcg if necessary.

    mem_cgroup_commit_charge() commits the page to the charge once it
    has a valid page->mapping and PageAnon() reliably tells the type.

    mem_cgroup_cancel_charge() aborts the transaction.
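
    Put together, a charge follows this transaction pattern (a sketch
    using the signatures from this series; page_mapping_established is a
    hypothetical stand-in for the callsite's own failure check):

        struct mem_cgroup *memcg;
        int error;

        error = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
        if (error)
                return error;

        /* establish page->mapping and rmap the page ... */

        if (page_mapping_established)
                mem_cgroup_commit_charge(page, memcg, false);
        else
                mem_cgroup_cancel_charge(page, memcg);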

    This reduces the charge API and enables subsequent patches to
    drastically simplify uncharging.

    As pages need to be committed after rmap is established but before they
    are added to the LRU, page_add_new_anon_rmap() must stop doing LRU
    additions again. Revive lru_cache_add_active_or_unevictable().

    [hughd@google.com: fix shmem_unuse]
    [hughd@google.com: Add comments on the private use of -EAGAIN]
    Signed-off-by: Johannes Weiner
    Acked-by: Michal Hocko
    Cc: Tejun Heo
    Cc: Vladimir Davydov
    Signed-off-by: Hugh Dickins
    Cc: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

24 Jul, 2014

1 commit

  • I triggered the VM_BUG_ON() in vma_address() when I tried to migrate an
    anonymous hugepage with mbind() on kernel v3.16-rc3. This is because
    the pgoff calculation in rmap_walk_anon() fails to consider
    compound_order(), and so produces an incorrect value.

    This patch introduces page_to_pgoff(), which gets the page's offset in
    units of PAGE_CACHE_SIZE.

    Kirill pointed out that the page cache tree should natively handle
    hugepages, and that in order to make hugetlbfs fit it, page->index of a
    hugetlbfs page should be in units of PAGE_CACHE_SIZE. This is beyond
    this patch, but page_to_pgoff() contains the point to be fixed in a
    single function.

    Signed-off-by: Naoya Horiguchi
    Acked-by: Kirill A. Shutemov
    Cc: Joonsoo Kim
    Cc: Hugh Dickins
    Cc: Rik van Riel
    Cc: Hillf Danton
    Cc: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     

24 Jun, 2014

1 commit

  • Trinity has reported:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
    IP: __lock_acquire (kernel/locking/lockdep.c:3070 (discriminator 1))
    CPU: 6 PID: 16173 Comm: trinity-c364 Tainted: G W
    3.15.0-rc1-next-20140415-sasha-00020-gaa90d09 #398
    lock_acquire (arch/x86/include/asm/current.h:14
    kernel/locking/lockdep.c:3602)
    _raw_spin_lock (include/linux/spinlock_api_smp.h:143
    kernel/locking/spinlock.c:151)
    remove_migration_pte (mm/migrate.c:137)
    rmap_walk (mm/rmap.c:1628 mm/rmap.c:1699)
    remove_migration_ptes (mm/migrate.c:224)
    migrate_pages (mm/migrate.c:922 mm/migrate.c:960 mm/migrate.c:1126)
    migrate_misplaced_page (mm/migrate.c:1733)
    __handle_mm_fault (mm/memory.c:3762 mm/memory.c:3812 mm/memory.c:3925)
    handle_mm_fault (mm/memory.c:3948)
    __get_user_pages (mm/memory.c:1851)
    __mlock_vma_pages_range (mm/mlock.c:255)
    __mm_populate (mm/mlock.c:711)
    SyS_mlockall (include/linux/mm.h:1799 mm/mlock.c:817 mm/mlock.c:791)

    I believe this comes about because, whereas the collapsing and
    splitting THP functions take the anon_vma lock in write mode (which
    excludes concurrent rmap walks), the faulting THP functions (write
    protection and misplaced NUMA) do not - and mostly they do not need
    to.

    But they do use a pmdp_clear_flush(), set_pmd_at() sequence which, for
    an instant (indeed, for a long instant, given the inter-CPU TLB flush
    in there), leaves *pmd neither present nor trans_huge.

    Which can confuse a concurrent rmap walk, as when removing migration
    ptes, seen in the dumped trace. Although that rmap walk has a 4k page
    to insert, anon_vmas containing THPs are in no way segregated from
    4k-page anon_vmas, so the 4k-intent mm_find_pmd() does need to cope with
    that instant when a trans_huge pmd is temporarily absent.

    I don't think we need to strengthen the locking at the THP end: it's
    easily handled with an ACCESS_ONCE() before testing both conditions.

    And since mm_find_pmd() had only one caller who wanted a THP rather than
    a pmd, let's slightly repurpose it to fail when it hits a THP or
    non-present pmd, and open code split_huge_page_address() again.
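
    The repurposed check in mm_find_pmd(), sketched against the 3.16-era
    code:

        pmd_t pmde = ACCESS_ONCE(*pmd);

        /*
         * THP collapse/split may change *pmd under us; judge both
         * conditions on one snapshot, and fail on a THP or a
         * non-present pmd.
         */
        if (!pmd_present(pmde) || pmd_trans_huge(pmde))
                pmd = NULL;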

    Signed-off-by: Hugh Dickins
    Reported-by: Sasha Levin
    Acked-by: Kirill A. Shutemov
    Cc: Konstantin Khlebnikov
    Cc: Mel Gorman
    Cc: Bob Liu
    Cc: Christoph Lameter
    Cc: Dave Jones
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

09 Jun, 2014

1 commit

  • Now that 3.15 is released, this merges the 'next' branch into 'master',
    bringing us to the normal situation where my 'master' branch is the
    merge window.

    * accumulated work in next: (6809 commits)
    ufs: sb mutex merge + mutex_destroy
    powerpc: update comments for generic idle conversion
    cris: update comments for generic idle conversion
    idle: remove cpu_idle() forward declarations
    nbd: zero from and len fields in NBD_CMD_DISCONNECT.
    mm: convert some level-less printks to pr_*
    MAINTAINERS: adi-buildroot-devel is moderated
    MAINTAINERS: add linux-api for review of API/ABI changes
    mm/kmemleak-test.c: use pr_fmt for logging
    fs/dlm/debug_fs.c: replace seq_printf by seq_puts
    fs/dlm/lockspace.c: convert simple_str to kstr
    fs/dlm/config.c: convert simple_str to kstr
    mm: mark remap_file_pages() syscall as deprecated
    mm: memcontrol: remove unnecessary memcg argument from soft limit functions
    mm: memcontrol: clean up memcg zoneinfo lookup
    mm/memblock.c: call kmemleak directly from memblock_(alloc|free)
    mm/mempool.c: update the kmemleak stack trace for mempool allocations
    lib/radix-tree.c: update the kmemleak stack trace for radix tree allocations
    mm: introduce kmemleak_update_trace()
    mm/kmemleak.c: use %u to print ->checksum
    ...

    Linus Torvalds
     

06 Jun, 2014

1 commit

  • While working on the address sanitizer for the kernel, I discovered a
    use-after-free bug in __put_anon_vma.

    For the last anon_vma, anon_vma->root is freed before the child
    anon_vma. Later, in anon_vma_free(anon_vma), we reference the already
    freed anon_vma->root to check its rwsem.

    This fixes it by freeing the child anon_vma before freeing
    anon_vma->root.
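
    A sketch of the reordering, against the 3.15-era mm/rmap.c:

        static void __put_anon_vma(struct anon_vma *anon_vma)
        {
                struct anon_vma *root = anon_vma->root;

                anon_vma_free(anon_vma);        /* child first */
                if (root != anon_vma && atomic_dec_and_test(&root->refcount))
                        anon_vma_free(root);    /* then the root */
        }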

    Signed-off-by: Andrey Ryabinin
    Acked-by: Peter Zijlstra
    Cc: # v3.0+
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

05 Jun, 2014

7 commits

  • Transform the action part of ttu_flags into individual bits. These
    flags aren't part of any user-space visible API or even trace events.

    Signed-off-by: Konstantin Khlebnikov
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Konstantin Khlebnikov
     
  • In its munlock mode, try_to_unmap_one() merely searches for other
    mlocked vmas; it never unmaps pages. There is no reason for
    invalidation, because the ptes are left unchanged.

    Signed-off-by: Konstantin Khlebnikov
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Konstantin Khlebnikov
     
  • In a previous commit ("mm: use the light version __mod_zone_page_state
    in mlocked_vma_newpage()"), an irq-unsafe __mod_zone_page_state was
    used. As suggested by Andrew, to reduce the risk that new call sites
    use mlocked_vma_newpage() incorrectly without knowing they are adding
    a race, this patch folds mlocked_vma_newpage() into its only call
    site, page_add_new_anon_rmap, making it open-coded so that people can
    see what is going on.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Jianyu Zhan
    Suggested-by: Andrew Morton
    Suggested-by: Hugh Dickins
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jianyu Zhan
     
  • mlocked_vma_newpage() is called with the pte lock held (a spinlock),
    which implies that preemption is disabled, and the vm stat counter is
    not modified from interrupt context, so we need not use the irq-safe
    mod_zone_page_state() here; the lightweight __mod_zone_page_state()
    is OK.

    This patch also documents __mod_zone_page_state() and some of its
    callsites. The comment above __mod_zone_page_state() is from Hugh
    Dickins, and was acked by Christoph.

    Most credit goes to Hugh and Christoph for the clarification on the
    usage of __mod_zone_page_state().
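
    The call in question, roughly (a sketch of the callsite after the
    fold described in the entry above):

        /*
         * Held pte lock (a spinlock) means preemption is disabled, and
         * NR_MLOCK is not updated from interrupt context, so the
         * non-irq-safe variant is enough here.
         */
        __mod_zone_page_state(page_zone(page), NR_MLOCK,
                              hpage_nr_pages(page));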

    [akpm@linux-foundation.org: coding-style fixes]
    Suggested-by: Andrew Morton
    Acked-by: Hugh Dickins
    Signed-off-by: Jianyu Zhan
    Reviewed-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jianyu Zhan
     
  • KSM was converted to use rmap_walk(), and now nobody uses these
    functions outside mm/rmap.c.

    Let's convert them back to static.

    Signed-off-by: Kirill A. Shutemov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
     
  • pte_file_mksoft_dirty takes its argument by value and returns the
    modified result, so we need to assign @ptfile here; otherwise it is a
    no-op, which may lead to loss of the soft-dirty bit.
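
    In sketch form (assuming the 3.15-era try_to_unmap_one()):

        /* by-value helper: the result must be assigned back */
        if (pte_soft_dirty(pteval))
                ptfile = pte_file_mksoft_dirty(ptfile);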

    Signed-off-by: Cyrill Gorcunov
    Cc: Pavel Emelyanov
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • Trinity reports BUG:

    sleeping function called from invalid context at kernel/locking/rwsem.c:47
    in_atomic(): 0, irqs_disabled(): 0, pid: 5787, name: trinity-c27

    __might_sleep < down_write < __put_anon_vma < page_get_anon_vma <
    migrate_pages < compact_zone < compact_zone_order < try_to_compact_pages ..

    Right: since the conversion to a mutex and then an rwsem, we should
    not put_anon_vma() from inside an rcu_read_lock()ed section; fix the
    two places that did so. And add might_sleep() to anon_vma_free(), as
    suggested by Peter Zijlstra.

    Fixes: 88c22088bf23 ("mm: optimize page_lock_anon_vma() fast-path")
    Reported-by: Dave Jones
    Signed-off-by: Hugh Dickins
    Cc: Peter Zijlstra
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

08 Apr, 2014

1 commit

  • A BUG_ON(!PageLocked) was triggered in mlock_vma_page() by Sasha Levin
    fuzzing with trinity. The call site try_to_unmap_cluster() does not lock
    the pages other than its check_page parameter (which is already locked).

    The BUG_ON in mlock_vma_page() is not documented and its purpose is
    somewhat unclear, but apparently it serializes against page migration,
    which could otherwise fail to transfer the PG_mlocked flag. This would
    not be fatal, as the page would be eventually encountered again, but
    NR_MLOCK accounting would become distorted nevertheless. This patch adds
    a comment to the BUG_ON in mlock_vma_page() and munlock_vma_page() to that
    effect.

    The call site try_to_unmap_cluster() is fixed so that for page !=
    check_page, trylock_page() is attempted (to avoid possible deadlocks as we
    already have check_page locked) and mlock_vma_page() is performed only
    upon success. If the page lock cannot be obtained, the page is left
    without PG_mlocked, which is again not a problem in the whole unevictable
    memory design.
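
    The fixed call site, in sketch form (assuming the 3.14-era
    try_to_unmap_cluster()):

        if (page == check_page) {
                /* already locked by the caller */
                mlock_vma_page(page);
                ret = SWAP_MLOCK;
        } else if (trylock_page(page)) {
                /* we hold check_page's lock, so only try-lock others */
                mlock_vma_page(page);
                unlock_page(page);
        }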

    Signed-off-by: Vlastimil Babka
    Signed-off-by: Bob Liu
    Reported-by: Sasha Levin
    Cc: Wanpeng Li
    Cc: Michel Lespinasse
    Cc: KOSAKI Motohiro
    Acked-by: Rik van Riel
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: Joonsoo Kim
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vlastimil Babka
     

01 Apr, 2014

1 commit

  • Pull s390 updates from Martin Schwidefsky:
    "There are two memory management related changes, the CMMA support for
    KVM to avoid swap-in of freed pages and the split page table lock for
    the PMD level. These two come with common code changes in mm/.

    A fix for the long standing theoretical TLB flush problem, this one
    comes with a common code change in kernel/sched/.

    Another set of changes is Heikos uaccess work, included is the initial
    set of patches with more to come.

    And fixes and cleanups as usual"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (36 commits)
    s390/con3270: optionally disable auto update
    s390/mm: remove unecessary parameter from pgste_ipte_notify
    s390/mm: remove unnecessary parameter from gmap_do_ipte_notify
    s390/mm: fixing comment so that parameter name match
    s390/smp: limit number of cpus in possible cpu mask
    hypfs: Add clarification for "weight_min" attribute
    s390: update defconfigs
    s390/ptrace: add support for PTRACE_SINGLEBLOCK
    s390/perf: make print_debug_cf() static
    s390/topology: Remove call to update_cpu_masks()
    s390/compat: remove compat exec domain
    s390: select CONFIG_TTY for use of tty in unconditional keyboard driver
    s390/appldata_os: fix cpu array size calculation
    s390/checksum: remove memset() within csum_partial_copy_from_user()
    s390/uaccess: remove copy_from_user_real()
    s390/sclp_early: Return correct HSA block count also for zero
    s390: add some drivers/subsystems to the MAINTAINERS file
    s390: improve debug feature usage
    s390/airq: add support for irq ranges
    s390/mm: enable split page table lock for PMD level
    ...

    Linus Torvalds
     

21 Mar, 2014

1 commit

  • Add remove_linear_migration_ptes_from_nonlinear(), to fix an interesting
    little include/linux/swapops.h:131 BUG_ON(!PageLocked) found by trinity:
    indicating that remove_migration_ptes() failed to find one of the
    migration entries that was temporarily inserted.

    The problem comes from remap_file_pages()'s switch from vma_interval_tree
    (good for inserting the migration entry) to i_mmap_nonlinear list (no good
    for locating it again); but can only be a problem if the remap_file_pages()
    range does not cover the whole of the vma (zap_pte() clears the range).

    remove_migration_ptes() needs a file_nonlinear method to go down the
    i_mmap_nonlinear list, applying linear location to look for migration
    entries in those vmas too, just in case there was this race.

    The file_nonlinear method does need rmap_walk_control.arg to do this;
    but it never needed vma passed in - vma comes from its own iteration.

    Reported-and-tested-by: Dave Jones
    Reported-and-tested-by: Sasha Levin
    Signed-off-by: Hugh Dickins
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

21 Feb, 2014

1 commit

  • In a virtualized environment and given an appropriate interface the guest
    can mark pages as unused while they are free (for the s390 implementation
    see git commit 45e576b1c3d00206 "guest page hinting light"). For the host
    the unused state is a property of the pte.

    This patch adds the primitive 'pte_unused' and code to the host swap out
    handler so that pages marked as unused by all mappers are not swapped out
    but discarded instead, thus saving one IO for swap out and potentially
    another one for swap in.
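
    Architectures that cannot provide the information get a no-op
    fallback; a sketch (assuming the asm-generic/pgtable.h of this
    series):

        #ifndef __HAVE_ARCH_PTE_UNUSED
        /*
         * Without architecture support, a pte is never known to be
         * unused by its mapper.
         */
        static inline int pte_unused(pte_t pte)
        {
                return 0;
        }
        #endif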

    [ Martin Schwidefsky: patch reordering and simplification ]

    Signed-off-by: Konstantin Weitz
    Signed-off-by: Martin Schwidefsky

    Konstantin Weitz
     

24 Jan, 2014

2 commits

  • mm/rmap.c:851:9-10: WARNING: return of 0/1 in function 'invalid_mkclean_vma' with return type bool

    Return statements in functions returning bool should use
    true/false instead of 1/0.

    Generated by: coccinelle/misc/boolreturn.cocci

    Signed-off-by: Fengguang Wu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • Most of the VM_BUG_ON assertions are performed on a page. Usually, when
    one of these assertions fails, we get a BUG_ON with a call stack and
    the registers.

    I've recently noticed, based on the requests to add a small piece of
    code that dumps the page to various VM_BUG_ON sites, that the page
    dump is quite useful to people debugging issues in mm.

    This patch adds a VM_BUG_ON_PAGE(cond, page) which, beyond doing what
    VM_BUG_ON() does, also dumps the page before executing the actual
    BUG_ON.
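
    The macro is essentially this (a sketch, for CONFIG_DEBUG_VM builds):

        #define VM_BUG_ON_PAGE(cond, page)                      \
                do {                                            \
                        if (unlikely(cond)) {                   \
                                dump_page(page);                \
                                BUG();                          \
                        }                                       \
                } while (0)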

    [akpm@linux-foundation.org: fix up includes]
    Signed-off-by: Sasha Levin
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sasha Levin
     

22 Jan, 2014

2 commits

  • Now we have an infrastructure in rmap_walk() to handle the differences
    among the variants of the rmap traversal functions.

    So, just use it in page_mkclean().

    In this patch, I change the following things:

    1. remove some variants of the rmap traversal functions,
    cf. page_mkclean_file
    2. mechanically change page_mkclean() to use rmap_walk().
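
    After the conversion, page_mkclean() drives the walk through a
    control structure; a sketch of the shape (names as in this series):

        int cleaned = 0;
        struct rmap_walk_control rwc = {
                .arg = (void *)&cleaned,
                .rmap_one = page_mkclean_one,
                .invalid_vma = invalid_mkclean_vma,
        };

        rmap_walk(page, &rwc);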

    Signed-off-by: Joonsoo Kim
    Reviewed-by: Naoya Horiguchi
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: Rik van Riel
    Cc: Ingo Molnar
    Cc: Hillf Danton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Now we have an infrastructure in rmap_walk() to handle the differences
    among the variants of the rmap traversal functions.

    So, just use it in page_referenced().

    In this patch, I change the following things:

    1. remove some variants of the rmap traversal functions,
    cf. page_referenced_ksm, page_referenced_anon,
    page_referenced_file

    2. introduce a new struct page_referenced_arg and pass it to
    page_referenced_one(), the main function of the rmap_walk, in order
    to count references, to store vm_flags, and to check the finish
    condition.

    3. mechanically change page_referenced() to use rmap_walk().

    [liwanp@linux.vnet.ibm.com: fix BUG at rmap_walk]
    Signed-off-by: Joonsoo Kim
    Reviewed-by: Naoya Horiguchi
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: Rik van Riel
    Cc: Ingo Molnar
    Cc: Hillf Danton
    Signed-off-by: Wanpeng Li
    Cc: Sasha Levin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim