05 Jul, 2015

1 commit

  • Pull more vfs updates from Al Viro:
    "Assorted VFS fixes and related cleanups (IMO the most interesting in
    that part are f_path-related things and Eric's descriptor-related
    stuff). UFS regression fixes (it got broken last cycle). 9P fixes.
    fs-cache series, DAX patches, Jan's file_remove_suid() work"

    [ I'd say this is much more than "fixes and related cleanups". The
    file_table locking rule change by Eric Dumazet is a rather big and
    fundamental update even if the patch isn't huge. - Linus ]

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (49 commits)
    9p: cope with bogus responses from server in p9_client_{read,write}
    p9_client_write(): avoid double p9_free_req()
    9p: forgetting to cancel request on interrupted zero-copy RPC
    dax: bdev_direct_access() may sleep
    block: Add support for DAX reads/writes to block devices
    dax: Use copy_from_iter_nocache
    dax: Add block size note to documentation
    fs/file.c: __fget() and dup2() atomicity rules
    fs/file.c: don't acquire files->file_lock in fd_install()
    fs:super:get_anon_bdev: fix race condition could cause dev exceed its upper limitation
    vfs: avoid creation of inode number 0 in get_next_ino
    namei: make set_root_rcu() return void
    make simple_positive() public
    ufs: use dir_pages instead of ufs_dir_pages()
    pagemap.h: move dir_pages() over there
    remove the pointless include of lglock.h
    fs: cleanup slight list_entry abuse
    xfs: Correctly lock inode when removing suid and file capabilities
    fs: Call security_ops->inode_killpriv on truncate
    fs: Provide function telling whether file_remove_privs() will do anything
    ...

    Linus Torvalds
     

04 Jul, 2015

1 commit

  • Pull block fixes from Jens Axboe:
    "Mainly sending this off now for the writeback fixes, since they fix a
    real regression introduced with the cgroup writeback changes. The
    NVMe fix could wait for next pull for this series, but it's simple
    enough that we might as well include it.

    This contains:

    - two cgroup writeback fixes from Tejun, fixing a user reported issue
    with luks crypt devices hanging when being closed.

    - NVMe error cleanup fix from Jon Derrick, fixing a case where we'd
    attempt to free an unregistered IRQ"

    * 'for-linus' of git://git.kernel.dk/linux-block:
    NVMe: Fix irq freeing when queue_request_irq fails
    writeback: don't drain bdi_writeback_congested on bdi destruction
    writeback: don't embed root bdi_writeback_congested in bdi_writeback

    Linus Torvalds
     

03 Jul, 2015

1 commit

  • …scm/linux/kernel/git/paulg/linux

    Pull module_init replacement part two from Paul Gortmaker:
    "Replace module_init with appropriate alternate initcall in non
    modules.

    This series converts non-modular code that is using the module_init()
    call to hook itself into the system to instead use one of our
    alternate priority initcalls.

    Unlike the previous series that used device_initcall and hence was a
    runtime no-op, these commits change to one of the alternate initcalls,
    because (a) we have them and (b) it seems like the right thing to do.

    For example, it would seem logical to use arch_initcall for arch
    specific setup code and fs_initcall for filesystem setup code.

    This does mean, however, that changes in the init ordering will be
    taking place, and so there is a small risk that some previously
    hidden implicit init ordering issue may be uncovered. But I think it
    is still better
    to give these ones sensible priorities than to just assign them all to
    device_initcall in order to exactly preserve the old ordering.

    That said, we have already made similar changes in core kernel code in
    commit c96d6660dc65 ("kernel: audit/fix non-modular users of
    module_init in core code") without any regressions reported, so this
    type of change isn't without precedent. It has also had the same
    local testing and linux-next coverage as all the other pull requests
    that I'm sending for this merge window.

    Once again, there is an unused module_exit function removal that shows
    up as an outlier upon casual inspection of the diffstat"

    * tag 'module_init-alternate_initcall-v4.1-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux:
    x86: perf_event_intel_pt.c: use arch_initcall to hook in enabling
    x86: perf_event_intel_bts.c: use arch_initcall to hook in enabling
    mm/page_owner.c: use late_initcall to hook in enabling
    lib/list_sort: use late_initcall to hook in self tests
    arm: use subsys_initcall in non-modular pl320 IPC code
    powerpc: don't use module_init for non-modular core hugetlb code
    powerpc: use subsys_initcall for Freescale Local Bus
    x86: don't use module_init for non-modular core bootflag code
    netfilter: don't use module_init/exit in core IPV4 code
    fs/notify: don't use module_init for non-modular inotify_user code
    mm: replace module_init usages with subsys_initcall in nommu.c

    Linus Torvalds
     

02 Jul, 2015

4 commits

  • 52ebea749aae ("writeback: make backing_dev_info host cgroup-specific
    bdi_writebacks") made bdi (backing_dev_info) host per-cgroup wb's
    (bdi_writeback's). As the congested state needs to be per-wb and
    referenced from blkcg side and multiple wbs, the patch made all
    non-root cong's (bdi_writeback_congested's) reference counted and
    indexed on bdi.

    When a bdi is destroyed, cgwb_bdi_destroy() tries to drain all
    non-root cong's; however, this can hang indefinitely because wb's can
    also be referenced from blkcg_gq's which are destroyed after bdi
    destruction is complete.

    This patch fixes the bug by updating bdi destruction to not wait for
    cong's to drain. A cong is unlinked from bdi->cgwb_congested_tree on
    bdi destruction regardless of its reference count, as the bdi may go
    away at any point after destruction. wb_congested_put() checks whether
    the cong is already unlinked on release.
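
    The resulting release path, as a sketch (assuming a cgwb_lock
    spinlock guarding bdi->cgwb_congested_tree; illustrative, not the
    verbatim patch):

    void wb_congested_put(struct bdi_writeback_congested *congested)
    {
        unsigned long flags;

        local_irq_save(flags);
        if (!atomic_dec_and_lock(&congested->refcnt, &cgwb_lock)) {
            local_irq_restore(flags);
            return;
        }

        /* bdi destruction may already have unlinked @congested */
        if (congested->bdi) {
            rb_erase(&congested->rb_node,
                     &congested->bdi->cgwb_congested_tree);
            congested->bdi = NULL;
        }

        spin_unlock_irqrestore(&cgwb_lock, flags);
        kfree(congested);
    }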

    Signed-off-by: Tejun Heo
    Reported-by: Jon Christopherson
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=100681
    Fixes: 52ebea749aae ("writeback: make backing_dev_info host cgroup-specific bdi_writebacks")
    Tested-by: Jon Christopherson
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • 52ebea749aae ("writeback: make backing_dev_info host cgroup-specific
    bdi_writebacks") made bdi (backing_dev_info) host per-cgroup wb's
    (bdi_writeback's). As the congested state needs to be per-wb and
    referenced from blkcg side and multiple wbs, the patch made all
    non-root cong's (bdi_writeback_congested's) reference counted and
    indexed on bdi.

    When a bdi is destroyed, cgwb_bdi_destroy() tries to drain all
    non-root cong's; however, this can hang indefinitely because wb's can
    also be referenced from blkcg_gq's which are destroyed after bdi
    destruction is complete.

    To fix the bug, bdi destruction will be updated to not wait for cong's
    to drain, which naturally means that cong's may outlive the associated
    bdi. This is fine for non-root cong's but is problematic for the root
    cong's which are embedded in their bdi's as they may end up getting
    dereferenced after the containing bdi's are freed.

    This patch makes root cong's behave the same as non-root cong's. They
    are no longer embedded in their bdi's but allocated separately during
    bdi initialization, indexed and reference counted the same way.

    * As cong handling is the same for all wb's, wb->congested
    initialization is moved into wb_init().

    * When !CONFIG_CGROUP_WRITEBACK, there was no indexing or refcnting.
    bdi->wb_congested is now a pointer pointing to the root cong
    allocated during bdi init and minimal refcnting operations are
    implemented.

    * The above makes root wb init paths diverge depending on
    CONFIG_CGROUP_WRITEBACK. root wb init is moved to cgwb_bdi_init().

    This patch in itself shouldn't cause any consequential behavior
    differences but prepares for the actual fix.

    Signed-off-by: Tejun Heo
    Reported-by: Jon Christopherson
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=100681
    Tested-by: Jon Christopherson

    Added include to backing-dev.h for kfree() definition.

    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Merge third patchbomb from Andrew Morton:

    - the rest of MM

    - scripts/gdb updates

    - ipc/ updates

    - lib/ updates

    - MAINTAINERS updates

    - various other misc things

    * emailed patches from Andrew Morton: (67 commits)
    genalloc: rename of_get_named_gen_pool() to of_gen_pool_get()
    genalloc: rename dev_get_gen_pool() to gen_pool_get()
    x86: opt into HAVE_COPY_THREAD_TLS, for both 32-bit and 64-bit
    MAINTAINERS: add zpool
    MAINTAINERS: BCACHE: Kent Overstreet has changed email address
    MAINTAINERS: move Jens Osterkamp to CREDITS
    MAINTAINERS: remove unused nbd.h pattern
    MAINTAINERS: update brcm gpio filename pattern
    MAINTAINERS: update brcm dts pattern
    MAINTAINERS: update sound soc intel patterns
    MAINTAINERS: remove website for paride
    MAINTAINERS: update Emulex ocrdma email addresses
    bcache: use kvfree() in various places
    libcxgbi: use kvfree() in cxgbi_free_big_mem()
    target: use kvfree() in session alloc and free
    IB/ehca: use kvfree() in ipz_queue_{cd}tor()
    drm/nouveau/gem: use kvfree() in u_free()
    drm: use kvfree() in drm_free_large()
    cxgb4: use kvfree() in t4_free_mem()
    cxgb3: use kvfree() in cxgb_free_mem()
    ...

    Linus Torvalds
     
  • Avoid the warning:

    WARNING: mm/built-in.o(.text.unlikely+0xc22): Section mismatch in reference from the function .new_kmalloc_cache() to the variable .init.rodata:kmalloc_info
    The function .new_kmalloc_cache() references
    the variable __initconst kmalloc_info.

    Signed-off-by: Christoph Lameter
    Reported-by: Stephen Rothwell
    Tested-by: Geert Uytterhoeven
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

01 Jul, 2015

13 commits

  • Waiman Long reported that 24TB machines hit OOM during basic setup when
    struct page initialisation was deferred. One approach is to initialise
    memory on demand but it interferes with page allocator paths. This patch
    creates dedicated threads to initialise memory before basic setup. It
    then blocks on a rw_semaphore until completion as a wait_queue and counter
    is overkill. This may be slower to boot but it's simpler overall and
    also gets rid of a section mangling which existed so kswapd could do the
    initialisation.
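
    A sketch of that blocking scheme (close to, but not verbatim, the
    mm/page_alloc.c change): every spawned thread holds the rw_semaphore
    for read, so taking it for write blocks until the last thread is done.

    static DECLARE_RWSEM(pgdat_init_rwsem);

    static int deferred_init_memmap(void *data)
    {
        pg_data_t *pgdat = data;

        /* ... initialise pgdat's deferred struct pages ... */
        up_read(&pgdat_init_rwsem);
        return 0;
    }

    void __init page_alloc_init_late(void)
    {
        int nid;

        for_each_node_state(nid, N_MEMORY) {
            down_read(&pgdat_init_rwsem);
            kthread_run(deferred_init_memmap, NODE_DATA(nid),
                        "pgdatinit%d", nid);
        }

        /* Taking the write lock blocks until every thread above has
         * finished and released its read lock. */
        down_write(&pgdat_init_rwsem);
        up_write(&pgdat_init_rwsem);
    }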

    [akpm@linux-foundation.org: include rwsem.h, use DECLARE_RWSEM, fix comment, remove unneeded cast]
    Signed-off-by: Mel Gorman
    Cc: Waiman Long
    Cc: Dave Hansen
    Cc: Scott Norton
    Tested-by: Daniel J Blueman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • mminit_verify_page_links() is an extremely paranoid check that was
    introduced when memory initialisation was being heavily reworked.
    Profiles indicated that up to 10% of parallel memory initialisation was
    spent on checking this for every page. The cost could be reduced but in
    practice this check only found problems very early during the
    initialisation rewrite and has found nothing since. This patch removes an
    expensive unnecessary check.

    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    During parallel struct page initialisation, ranges are checked for every
    PFN unnecessarily which increases boot times. This patch alters when the
    ranges are checked.
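
    Roughly, the per-PFN test becomes a per-block test. A condensed
    fragment of the idea (nid_state is the caller-maintained lookup
    cache introduced elsewhere in this series; the sketch assumes
    validity and node ownership are uniform within an aligned block):

    for (pfn = start_pfn; pfn < end_pfn; pfn++) {
        /* Re-check the range only at aligned block boundaries. */
        if (!(pfn & (pageblock_nr_pages - 1)) &&
            (!pfn_valid(pfn) ||
             !meminit_pfn_in_nid(pfn, nid, &nid_state))) {
            /* Skip the rest of this block. */
            pfn = ALIGN(pfn + 1, pageblock_nr_pages) - 1;
            continue;
        }
        /* ... initialise the struct page for pfn ... */
    }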

    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    Parallel struct page initialisation frees pages one at a time. Try to free pages as single
    large pages where possible.
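
    A condensed sketch of the free path (the series' actual helper wraps
    this logic; i and nr_pages are assumed by the fragment):

    /* Release an initialised run in the largest chunk the buddy
     * allocator accepts, falling back to order-0 frees. */
    if (nr_pages == pageblock_nr_pages &&
        IS_ALIGNED(pfn, pageblock_nr_pages)) {
        __free_pages_bootmem(pfn_to_page(pfn), pfn, pageblock_order);
    } else {
        for (i = 0; i < nr_pages; i++, pfn++)
            __free_pages_bootmem(pfn_to_page(pfn), pfn, 0);
    }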

    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Deferred struct page initialisation is using pfn_to_page() on every PFN
    unnecessarily. This patch minimises the number of lookups and scheduler
    checks.
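
    Condensed, the inner loop avoids the per-PFN lookup by walking the
    struct page pointer within a section (a sketch of the idea, not the
    verbatim code; __init_single_page() is the per-page helper factored
    out elsewhere in this series):

    struct page *page = NULL;

    for (pfn = start_pfn; pfn < end_pfn; pfn++) {
        /* struct pages are contiguous within a memory section, so
         * only do the expensive lookup at section boundaries. */
        if (!page || !(pfn & (PAGES_PER_SECTION - 1)))
            page = pfn_to_page(pfn);
        else
            page++;
        __init_single_page(page, pfn, zone, nid);
    }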

    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    Only a subset of struct pages are initialised at the moment. When this
    patch is applied, kswapd initialises the remaining struct pages in
    parallel.

    This should boot faster by spreading the work to multiple CPUs and
    initialising data that is local to the CPU. The user-visible effect on
    large machines is that free memory will appear to rapidly increase early
    in the lifetime of the system until kswapd reports that all memory is
    initialised in the kernel log. Once initialised there should be no other
    user-visible effects.

    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    This patch initialises all low memory struct pages and 2G of the highest
    zone on each node during memory initialisation if
    CONFIG_DEFERRED_STRUCT_PAGE_INIT is set. That config option cannot be set
    but will be available in a later patch. Parallel initialisation of struct
    page depends on some features from memory hotplug and it is necessary to
    alter section annotations.

    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • early_pfn_in_nid() and meminit_pfn_in_nid() are small functions that are
    unnecessarily visible outside memory initialisation. As well as
    unnecessary visibility, it's unnecessary function call overhead when
    initialising pages. This patch moves the helpers inline.

    [akpm@linux-foundation.org: fix build]
    [mhocko@suse.cz: fix build]
    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    __early_pfn_to_nid() uses static variables to cache recent lookups, as
    memblock lookups are very expensive, but it assumes that memory
    initialisation is single-threaded. Parallel initialisation of struct
    pages will break that assumption so this patch makes __early_pfn_to_nid()
    SMP-safe by requiring the caller to cache recent search information.
    early_pfn_to_nid() keeps the same interface but is only safe to use early
    in boot due to the use of a global static variable. meminit_pfn_in_nid()
    is an SMP-safe version that callers must maintain their own state for.
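
    The caller-maintained state is a small cache of the last memblock
    lookup. A sketch, close to the series' struct mminit_pfnnid_cache
    (memblock_search_pfn_nid() is the existing memblock range lookup):

    struct mminit_pfnnid_cache {
        unsigned long last_start;
        unsigned long last_end;
        int last_nid;
    };

    static int __early_pfn_to_nid(unsigned long pfn,
                                  struct mminit_pfnnid_cache *state)
    {
        unsigned long start_pfn, end_pfn;
        int nid;

        /* Fast path: the previously found range still covers pfn. */
        if (state->last_start <= pfn && pfn < state->last_end)
            return state->last_nid;

        nid = memblock_search_pfn_nid(pfn, &start_pfn, &end_pfn);
        if (nid != -1) {
            state->last_start = start_pfn;
            state->last_end = end_pfn;
            state->last_nid = nid;
        }
        return nid;
    }

    Each initialisation thread passes its own on-stack state, so no
    global static is shared across CPUs.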

    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • __free_pages_bootmem prepares a page for release to the buddy allocator
    and assumes that the struct page is initialised. Parallel initialisation
    of struct pages defers initialisation and __free_pages_bootmem can be
    called for struct pages whose PFN cannot yet be derived from the
    struct page. This
    patch passes PFN to __free_pages_bootmem with no other functional change.

    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Currently each page struct is set as reserved upon initialization. This
    patch leaves the reserved bit clear and only sets the reserved bit when it
    is known the memory was allocated by the bootmem allocator. This makes it
    easier to distinguish between uninitialised struct pages and reserved
    struct pages in later patches.

    Signed-off-by: Robin Holt
    Signed-off-by: Nathan Zimmer
    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nathan Zimmer
     
  • Currently, memmap_init_zone() has all the smarts for initializing a single
    page. A subset of this is required for parallel page initialisation and
    so this patch breaks up the monolithic function in preparation.
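
    The factored-out helper looks roughly like this (condensed from the
    series; the full version also handles page flags and verification):

    static void __meminit __init_single_page(struct page *page,
                    unsigned long pfn, unsigned long zone, int nid)
    {
        set_page_links(page, zone, nid, pfn);
        init_page_count(page);
        page_mapcount_reset(page);
        INIT_LIST_HEAD(&page->lru);
    }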

    Signed-off-by: Robin Holt
    Signed-off-by: Nathan Zimmer
    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robin Holt
     
  • Struct page initialisation had been identified as one of the reasons why
    large machines take a long time to boot. Patches were posted a long time ago
    to defer initialisation until they were first used. This was rejected on
    the grounds it should not be necessary to hurt the fast paths. This series
    reuses much of the work from that time but defers the initialisation of
    memory to kswapd so that one thread per node initialises memory local to
    that node.

    After applying the series and setting the appropriate Kconfig variable I
    see this in the boot log on a 64G machine

    [ 7.383764] kswapd 0 initialised deferred memory in 188ms
    [ 7.404253] kswapd 1 initialised deferred memory in 208ms
    [ 7.411044] kswapd 3 initialised deferred memory in 216ms
    [ 7.411551] kswapd 2 initialised deferred memory in 216ms

    On a 1TB machine, I see

    [ 8.406511] kswapd 3 initialised deferred memory in 1116ms
    [ 8.428518] kswapd 1 initialised deferred memory in 1140ms
    [ 8.435977] kswapd 0 initialised deferred memory in 1148ms
    [ 8.437416] kswapd 2 initialised deferred memory in 1148ms

    Once booted the machine appears to work as normal. Boot times were measured
    from the time shutdown was called until ssh was available again. In the
    64G case, the boot time savings are negligible. On the 1TB machine, the
    savings were 16 seconds.

    Nate Zimmer said:

    : On an older 8 TB box with lots and lots of cpus, the boot time, as
    : measured from grub to login prompt, improved from 1484 seconds to
    : exactly 1000 seconds.

    Waiman Long said:

    : I ran a bootup timing test on a 12-TB 16-socket IvyBridge-EX system. From
    : grub menu to ssh login, the bootup time was 453s before the patch and 265s
    : after the patch - a saving of 188s (42%).

    Daniel Blueman said:

    : On a 7TB, 1728-core NumaConnect system with 108 NUMA nodes, we're seeing
    : stock 4.0 boot in 7136s. This drops to 2159s, or a 70% reduction with
    : this patchset. Non-temporal PMD init (https://lkml.org/lkml/2015/4/23/350)
    : drops this to 1045s.

    This patch (of 13):

    As part of initializing struct page's in 2MiB chunks, we noticed that at
    the end of free_all_bootmem(), there was nothing which had forced the
    reserved/allocated 4KiB pages to be initialized.

    This helper function will be used for that expansion.

    Signed-off-by: Robin Holt
    Signed-off-by: Nate Zimmer
    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robin Holt
     

30 Jun, 2015

1 commit

  • This patch restores the slab creation sequence that was broken by commit
    4066c33d0308f8 and also reverts the portions that introduced the
    KMALLOC_LOOP_XXX macros. Those can never really work since the slab creation
    is much more complex than just going from a minimum to a maximum number.

    The latest upstream kernel boots cleanly on my machine with a 64 bit x86
    configuration under KVM using either SLAB or SLUB.

    Fixes: 4066c33d0308f8 ("support the slub_debug boot option")
    Reported-by: Theodore Ts'o
    Signed-off-by: Christoph Lameter
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

27 Jun, 2015

2 commits

  • Pull tracing updates from Steven Rostedt:
    "This patch series contains several clean ups and even a new trace
    clock "monitonic raw". Also some enhancements to make the ring buffer
    even faster. But the biggest and most noticeable change is the
    renaming of the ftrace* files, structures and variables that have to
    deal with trace events.

    Over the years I've had several developers tell me about their
    confusion with what ftrace is compared to events. Technically,
    "ftrace" is the infrastructure to do the function hooks, which include
    tracing and also helps with live kernel patching. But the trace
    events are a separate entity altogether, and the files that affect the
    trace events should not be named "ftrace". These include:

    include/trace/ftrace.h -> include/trace/trace_events.h
    include/linux/ftrace_event.h -> include/linux/trace_events.h

    Also, functions that are specific for trace events have also been renamed:

    ftrace_print_*() -> trace_print_*()
    (un)register_ftrace_event() -> (un)register_trace_event()
    ftrace_event_name() -> trace_event_name()
    ftrace_trigger_soft_disabled() -> trace_trigger_soft_disabled()
    ftrace_define_fields_##call() -> trace_define_fields_##call()
    ftrace_get_offsets_##call() -> trace_get_offsets_##call()

    Structures have been renamed:

    ftrace_event_file -> trace_event_file
    ftrace_event_{call,class} -> trace_event_{call,class}
    ftrace_event_buffer -> trace_event_buffer
    ftrace_subsystem_dir -> trace_subsystem_dir
    ftrace_event_raw_##call -> trace_event_raw_##call
    ftrace_event_data_offset_##call-> trace_event_data_offset_##call
    ftrace_event_type_funcs_##call -> trace_event_type_funcs_##call

    And a few various variables and flags have also been updated.

    This has been sitting in linux-next for some time, and I have not
    heard a single complaint about this rename breaking anything. Mostly
    because these functions, variables and structures are mostly internal
    to the tracing system and are seldom (if ever) used by anything
    external to that"

    * tag 'trace-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (33 commits)
    ring_buffer: Allow to exit the ring buffer benchmark immediately
    ring-buffer-benchmark: Fix the wrong type
    ring-buffer-benchmark: Fix the wrong param in module_param
    ring-buffer: Add enum names for the context levels
    ring-buffer: Remove useless unused tracing_off_permanent()
    ring-buffer: Give NMIs a chance to lock the reader_lock
    ring-buffer: Add trace_recursive checks to ring_buffer_write()
    ring-buffer: Allways do the trace_recursive checks
    ring-buffer: Move recursive check to per_cpu descriptor
    ring-buffer: Add unlikelys to make fast path the default
    tracing: Rename ftrace_get_offsets_##call() to trace_event_get_offsets_##call()
    tracing: Rename ftrace_define_fields_##call() to trace_event_define_fields_##call()
    tracing: Rename ftrace_event_type_funcs_##call to trace_event_type_funcs_##call
    tracing: Rename ftrace_data_offset_##call to trace_event_data_offset_##call
    tracing: Rename ftrace_raw_##call event structures to trace_event_raw_##call
    tracing: Rename ftrace_trigger_soft_disabled() to trace_trigger_soft_disabled()
    tracing: Rename FTRACE_EVENT_FL_* flags to EVENT_FILE_FL_*
    tracing: Rename struct ftrace_subsystem_dir to trace_subsystem_dir
    tracing: Rename ftrace_event_name() to trace_event_name()
    tracing: Rename FTRACE_MAX_EVENT to TRACE_EVENT_TYPE_MAX
    ...

    Linus Torvalds
     
  • Merge second patchbomb from Andrew Morton:

    - most of the rest of MM

    - lots of misc things

    - procfs updates

    - printk feature work

    - updates to get_maintainer, MAINTAINERS, checkpatch

    - lib/ updates

    * emailed patches from Andrew Morton: (96 commits)
    exit,stats: /* obey this comment */
    coredump: add __printf attribute to cn_*printf functions
    coredump: use from_kuid/kgid when formatting corename
    fs/reiserfs: remove unneeded cast
    NILFS2: support NFSv2 export
    fs/befs/btree.c: remove unneeded initializations
    fs/minix: remove unneeded cast
    init/do_mounts.c: add create_dev() failure log
    kasan: remove duplicate definition of the macro KASAN_FREE_PAGE
    fs/efs: femove unneeded cast
    checkpatch: emit "NOTE: " message only once after multiple files
    checkpatch: emit an error when there's a diff in a changelog
    checkpatch: validate MODULE_LICENSE content
    checkpatch: add multi-line handling for PREFER_ETHER_ADDR_COPY
    checkpatch: suggest using eth_zero_addr() and eth_broadcast_addr()
    checkpatch: fix processing of MEMSET issues
    checkpatch: suggest using ether_addr_equal*()
    checkpatch: avoid NOT_UNIFIED_DIFF errors on cover-letter.patch files
    checkpatch: remove local from codespell path
    checkpatch: add --showfile to allow input via pipe to show filenames
    ...

    Linus Torvalds
     

26 Jun, 2015

8 commits

  • Remove duplicate definition of the macro KASAN_FREE_PAGE in
    mm/kasan/kasan.h

    Signed-off-by: Wang Long
    Acked-by: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wang Long
     
  • Remove zpool_evict() helper function. As zbud is currently the only
    zpool implementation that supports eviction, add zpool and zpool_ops
    references to struct zbud_pool and directly call zpool_ops->evict(zpool,
    handle) on eviction.

    Currently zpool provides the zpool_evict helper which locks the zpool
    list lock and searches through all pools to find the specific one
    matching the caller, and call the corresponding zpool_ops->evict
    function. However, this is unnecessary, as the zbud pool can simply
    keep a reference to the zpool that created it, as well as the zpool_ops,
    and directly call the zpool_ops->evict function, when it needs to evict
    a page. This avoids a spinlock and list search in zpool for each
    eviction.
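
    Sketched, the zbud side keeps the back-references and calls evict
    directly (close to the shape of the actual patch):

    struct zbud_pool {
        spinlock_t lock;
        /* ... existing fields ... */
        struct zpool *zpool;
        const struct zpool_ops *zpool_ops;
    };

    static int zbud_zpool_evict(struct zbud_pool *pool,
                                unsigned long handle)
    {
        /* Direct call: no zpool list lock, no pool search. */
        if (pool->zpool && pool->zpool_ops && pool->zpool_ops->evict)
            return pool->zpool_ops->evict(pool->zpool, handle);
        return -ENOENT;
    }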

    Signed-off-by: Dan Streetman
    Cc: Seth Jennings
    Cc: Minchan Kim
    Cc: Nitin Gupta
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Streetman
     
  • Change the pr_info() calls to pr_debug(). There's no need for the extra
    verbosity in the log. Also change the msg formats to be consistent.

    Signed-off-by: Dan Streetman
    Cc: Kees Cook
    Cc: Minchan Kim
    Cc: Ganesh Mahendran
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Streetman
     
  • Change the "enabled" parameter to be configurable at runtime. Remove the
    enabled check from init(), and move it to the frontswap store() function;
    when enabled, pages will be stored, and when disabled, pages won't be
    stored.
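
    A sketch of the mechanism (condensed; parameter and function names
    as in zswap): the parameter becomes writable in sysfs and the check
    moves into the store path.

    static bool zswap_enabled;
    module_param_named(enabled, zswap_enabled, bool, 0644);

    static int zswap_frontswap_store(unsigned type, pgoff_t offset,
                                     struct page *page)
    {
        if (!zswap_enabled)
            return -ENODEV;  /* page falls through to normal swap */
        /* ... compress and store the page ... */
        return 0;
    }

    At runtime the flag can then be flipped via
    /sys/module/zswap/parameters/enabled.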

    This is almost identical to Seth's patch from 2 years ago:
    http://lkml.iu.edu/hypermail/linux/kernel/1307.2/04289.html

    [akpm@linux-foundation.org: tweak documentation]
    Signed-off-by: Dan Streetman
    Suggested-by: Seth Jennings
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Streetman
     
    The DEBUG define in zsmalloc is useless; there is no usage of it at all.

    Signed-off-by: Marcin Jabrzyk
    Acked-by: Sergey Senozhatsky
    Cc: Minchan Kim
    Cc: Nitin Gupta
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marcin Jabrzyk
     
  • With s390 dropping support for emulated hugepages, the last user of
    arch_prepare_hugepage and arch_release_hugepage is gone.

    Signed-off-by: Dominik Dingel
    Acked-by: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: Christian Borntraeger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dominik Dingel
     
  • Pull cgroup writeback support from Jens Axboe:
    "This is the big pull request for adding cgroup writeback support.

    This code has been in development for a long time, and it has been
    simmering in for-next for a good chunk of this cycle too. This is one
    of those problems that has been talked about for at least half a
    decade, finally there's a solution and code to go with it.

    Also see last weeks writeup on LWN:

    http://lwn.net/Articles/648292/"

    * 'for-4.2/writeback' of git://git.kernel.dk/linux-block: (85 commits)
    writeback, blkio: add documentation for cgroup writeback support
    vfs, writeback: replace FS_CGROUP_WRITEBACK with SB_I_CGROUPWB
    writeback: do foreign inode detection iff cgroup writeback is enabled
    v9fs: fix error handling in v9fs_session_init()
    bdi: fix wrong error return value in cgwb_create()
    buffer: remove unusued 'ret' variable
    writeback: disassociate inodes from dying bdi_writebacks
    writeback: implement foreign cgroup inode bdi_writeback switching
    writeback: add lockdep annotation to inode_to_wb()
    writeback: use unlocked_inode_to_wb transaction in inode_congested()
    writeback: implement unlocked_inode_to_wb transaction and use it for stat updates
    writeback: implement [locked_]inode_to_wb_and_lock_list()
    writeback: implement foreign cgroup inode detection
    writeback: make writeback_control track the inode being written back
    writeback: relocate wb[_try]_get(), wb_put(), inode_{attach|detach}_wb()
    mm: vmscan: disable memcg direct reclaim stalling if cgroup writeback support is in use
    writeback: implement memcg writeback domain based throttling
    writeback: reset wb_domain->dirty_limit[_tstmp] when memcg domain size changes
    writeback: implement memcg wb_domain
    writeback: update wb_over_bg_thresh() to use wb_domain aware operations
    ...

    Linus Torvalds
     
  • Pull core block IO update from Jens Axboe:
    "Nothing really major in here, mostly a collection of smaller
    optimizations and cleanups, mixed with various fixes. In more detail,
    this contains:

    - Addition of policy specific data to blkcg for block cgroups. From
    Arianna Avanzini.

    - Various cleanups around command types from Christoph.

    - Cleanup of the suspend block I/O path from Christoph.

    - Plugging updates from Shaohua and Jeff Moyer, for blk-mq.

    - Eliminating atomic inc/dec of both remaining IO count and reference
    count in a bio. From me.

    - Fixes for SG gap and chunk size support for data-less (discards)
    IO, so we can merge these better. From me.

    - Small restructuring of blk-mq shared tag support, freeing drivers
    from iterating hardware queues. From Keith Busch.

    - A few cfq-iosched tweaks, from Tahsin Erdogan and me. Makes the
    IOPS mode the default for non-rotational storage"

    * 'for-4.2/core' of git://git.kernel.dk/linux-block: (35 commits)
    cfq-iosched: fix other locations where blkcg_to_cfqgd() can return NULL
    cfq-iosched: fix sysfs oops when attempting to read unconfigured weights
    cfq-iosched: move group scheduling functions under ifdef
    cfq-iosched: fix the setting of IOPS mode on SSDs
    blktrace: Add blktrace.c to BLOCK LAYER in MAINTAINERS file
    block, cgroup: implement policy-specific per-blkcg data
    block: Make CFQ default to IOPS mode on SSDs
    block: add blk_set_queue_dying() to blkdev.h
    blk-mq: Shared tag enhancements
    block: don't honor chunk sizes for data-less IO
    block: only honor SG gap prevention for merges that contain data
    block: fix returnvar.cocci warnings
    block, dm: don't copy bios for request clones
    block: remove management of bi_remaining when restoring original bi_end_io
    block: replace trylock with mutex_lock in blkdev_reread_part()
    block: export blkdev_reread_part() and __blkdev_reread_part()
    suspend: simplify block I/O handling
    block: collapse bio bit space
    block: remove unused BIO_RW_BLOCK and BIO_EOF flags
    block: remove BIO_EOPNOTSUPP
    ...

    Linus Torvalds
     

25 Jun, 2015

9 commits

  • Merge first patchbomb from Andrew Morton:

    - a few misc things

    - ocfs2 updates

    - kernel/watchdog.c feature work (took ages to get right)

    - most of MM. A few tricky bits are held up and probably won't make 4.2.

    * emailed patches from Andrew Morton: (91 commits)
    mm: kmemleak_alloc_percpu() should follow the gfp from per_alloc()
    mm, thp: respect MPOL_PREFERRED policy with non-local node
    tmpfs: truncate prealloc blocks past i_size
    mm/memory hotplug: print the last vmemmap region at the end of hot add memory
    mm/mmap.c: optimization of do_mmap_pgoff function
    mm: kmemleak: optimise kmemleak_lock acquiring during kmemleak_scan
    mm: kmemleak: avoid deadlock on the kmemleak object insertion error path
    mm: kmemleak: do not acquire scan_mutex in kmemleak_do_cleanup()
    mm: kmemleak: fix delete_object_*() race when called on the same memory block
    mm: kmemleak: allow safe memory scanning during kmemleak disabling
    memcg: convert mem_cgroup->under_oom from atomic_t to int
    memcg: remove unused mem_cgroup->oom_wakeups
    frontswap: allow multiple backends
    x86, mirror: x86 enabling - find mirrored memory ranges
    mm/memblock: allocate boot time data structures from mirrored memory
    mm/memblock: add extra "flags" to memblock to allow selection of memory based on attribute
    mm: do not ignore mapping_gfp_mask in page cache allocation paths
    mm/cma.c: fix typos in comments
    mm/oom_kill.c: print points as unsigned int
    mm/hugetlb: handle races in alloc_huge_page and hugetlb_reserve_pages
    ...

    Linus Torvalds
     
  • Beginning at commit d52d3997f843 ("ipv6: Create percpu rt6_info"), the
    following INFO splat is logged:

    ===============================
    [ INFO: suspicious RCU usage. ]
    4.1.0-rc7-next-20150612 #1 Not tainted
    -------------------------------
    kernel/sched/core.c:7318 Illegal context switch in RCU-bh read-side critical section!
    other info that might help us debug this:
    rcu_scheduler_active = 1, debug_locks = 0
    3 locks held by systemd/1:
    #0: (rtnl_mutex){+.+.+.}, at: [] rtnetlink_rcv+0x1f/0x40
    #1: (rcu_read_lock_bh){......}, at: [] ipv6_add_addr+0x62/0x540
    #2: (addrconf_hash_lock){+...+.}, at: [] ipv6_add_addr+0x184/0x540
    stack backtrace:
    CPU: 0 PID: 1 Comm: systemd Not tainted 4.1.0-rc7-next-20150612 #1
    Hardware name: TOSHIBA TECRA A50-A/TECRA A50-A, BIOS Version 4.20 04/17/2014
    Call Trace:
    dump_stack+0x4c/0x6e
    lockdep_rcu_suspicious+0xe7/0x120
    ___might_sleep+0x1d5/0x1f0
    __might_sleep+0x4d/0x90
    kmem_cache_alloc+0x47/0x250
    create_object+0x39/0x2e0
    kmemleak_alloc_percpu+0x61/0xe0
    pcpu_alloc+0x370/0x630

    Additional backtrace lines are truncated. In addition, the above splat
    is followed by several "BUG: sleeping function called from invalid
    context at mm/slub.c:1268" outputs. As suggested by Martin KaFai Lau,
    these are the clue to the fix. Routine kmemleak_alloc_percpu() always
    uses GFP_KERNEL for its allocations, whereas it should follow the gfp
    from its callers.
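
    The fix, condensed (close to the actual change in mm/kmemleak.c):
    the percpu hook takes the caller's gfp and hands it down to the
    metadata allocation instead of hardcoding GFP_KERNEL.

    void __ref kmemleak_alloc_percpu(const void __percpu *ptr,
                                     size_t size, gfp_t gfp)
    {
        unsigned int cpu;

        /* Metadata allocation now inherits the caller's context, so
         * an atomic percpu allocation no longer sleeps here. */
        if (kmemleak_enabled && ptr && !IS_ERR(ptr))
            for_each_possible_cpu(cpu)
                create_object((unsigned long)per_cpu_ptr(ptr, cpu),
                              size, 0, gfp /* was GFP_KERNEL */);
    }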

    Reviewed-by: Catalin Marinas
    Reviewed-by: Kamalesh Babulal
    Acked-by: Martin KaFai Lau
    Signed-off-by: Larry Finger
    Cc: Martin KaFai Lau
    Cc: Catalin Marinas
    Cc: Tejun Heo
    Cc: Christoph Lameter
    Cc: [3.18+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Larry Finger
     
  • Since commit 077fcf116c8c ("mm/thp: allocate transparent hugepages on
    local node"), we handle THP allocations on page fault in a special way -
    for non-interleave memory policies, the allocation is only attempted on
    the node local to the current CPU, if the policy's nodemask allows the
    node.

    This is motivated by the assumption that THP benefits cannot offset the
    cost of remote accesses, so it's better to fallback to base pages on the
    local node (which might still be available, while huge pages are not due
    to fragmentation) than to allocate huge pages on a remote node.

    The nodemask check prevents us from violating e.g. MPOL_BIND policies
    where the local node is not among the allowed nodes. However, the
    current implementation can still give surprising results for the
    MPOL_PREFERRED policy when the preferred node is different than the
    current CPU's local node.

    In such case we should honor the preferred node and not use the local
    node, which is what this patch does. If hugepage allocation on the
    preferred node fails, we fall back to base pages and don't try other
    nodes, with the same motivation as is done for the local node hugepage
    allocations. The patch also moves the MPOL_INTERLEAVE check around to
    simplify the hugepage specific test.
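
    Condensed, the fault-time node choice becomes the following (a
    sketch of the alloc_pages_vma() hugepage branch, not the verbatim
    diff; pol, gfp, order, nmask and page are assumed by the fragment):

    int hpage_node = numa_node_id();        /* default: local node */

    if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
        hpage_node = pol->v.preferred_node; /* honour MPOL_PREFERRED */

    nmask = policy_nodemask(gfp, pol);
    if (!nmask || node_isset(hpage_node, *nmask)) {
        /* __GFP_THISNODE: on failure fall back to base pages
         * rather than trying huge pages on other nodes. */
        page = alloc_pages_exact_node(hpage_node,
                                      gfp | __GFP_THISNODE, order);
    }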

    The difference can be demonstrated using in-tree transhuge-stress test
    on the following 2-node machine where half memory on one node was
    occupied to show the difference.

    > numactl --hardware
    available: 2 nodes (0-1)
    node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 24 25 26 27 28 29 30 31 32 33 34 35
    node 0 size: 7878 MB
    node 0 free: 3623 MB
    node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23 36 37 38 39 40 41 42 43 44 45 46 47
    node 1 size: 8045 MB
    node 1 free: 7818 MB
    node distances:
    node   0   1
      0:  10  21
      1:  21  10

    Before the patch:
    > numactl -p0 -C0 ./transhuge-stress
    transhuge-stress: 2.197 s/loop, 0.276 ms/page, 7249.168 MiB/s 7962 succeed, 0 failed, 1786 different pages

    > numactl -p0 -C12 ./transhuge-stress
    transhuge-stress: 2.962 s/loop, 0.372 ms/page, 5376.172 MiB/s 7962 succeed, 0 failed, 3873 different pages

    Number of successful THP allocations corresponds to free memory on node 0 in
    the first case and node 1 in the second case, i.e. -p parameter is ignored and
    cpu binding "wins".

    After the patch:
    > numactl -p0 -C0 ./transhuge-stress
    transhuge-stress: 2.183 s/loop, 0.274 ms/page, 7295.516 MiB/s 7962 succeed, 0 failed, 1760 different pages

    > numactl -p0 -C12 ./transhuge-stress
    transhuge-stress: 2.878 s/loop, 0.361 ms/page, 5533.638 MiB/s 7962 succeed, 0 failed, 1750 different pages

    > numactl -p1 -C0 ./transhuge-stress
    transhuge-stress: 4.628 s/loop, 0.581 ms/page, 3440.893 MiB/s 7962 succeed, 0 failed, 3918 different pages

    The -p parameter is respected regardless of cpu binding.

    > numactl -C0 ./transhuge-stress
    transhuge-stress: 2.202 s/loop, 0.277 ms/page, 7230.003 MiB/s 7962 succeed, 0 failed, 1750 different pages

    > numactl -C12 ./transhuge-stress
    transhuge-stress: 3.020 s/loop, 0.379 ms/page, 5273.324 MiB/s 7962 succeed, 0 failed, 3916 different pages

    Without -p parameter, hugepage restriction to CPU-local node works as before.

    Fixes: 077fcf116c8c ("mm/thp: allocate transparent hugepages on local node")
    Signed-off-by: Vlastimil Babka
    Cc: Aneesh Kumar K.V
    Acked-by: David Rientjes
    Cc: Kirill A. Shutemov
    Cc: Andrea Arcangeli
    Cc: Michal Hocko
    Cc: [4.0+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vlastimil Babka
     
  • One of the rocksdb people noticed that when you do something like this

    fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 10M)
    pwrite(fd, buf, 5M, 0)
    ftruncate(5M)

    on tmpfs, the file would still take up 10M, which led to super fun
    issues because we were getting ENOSPC before we thought we should be
    getting ENOSPC. This patch fixes the problem, and mirrors what all the
    other fs'es do (and was agreed to be the correct behaviour at LSF).

    I tested it locally to make sure it worked properly with the following

    xfs_io -f -c "falloc -k 0 10M" -c "pwrite 0 5M" -c "truncate 5M" file

    Without the patch we have "Blocks: 20480", with the patch we have the
    correct value of "Blocks: 10240".

    Signed-off-by: Josef Bacik
    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Josef Bacik
     
    When hot-adding two nodes in succession, we found the vmemmap region
    info is a bit messed up: the last region of node 2 is printed when node
    3 is hot-added, like the following:

    Initmem setup node 2 [mem 0x0000000000000000-0xffffffffffffffff]
    On node 2 totalpages: 0
    Built 2 zonelists in Node order, mobility grouping on. Total pages: 16090539
    Policy zone: Normal
    init_memory_mapping: [mem 0x40000000000-0x407ffffffff]
    [mem 0x40000000000-0x407ffffffff] page 1G
    [ffffea1000000000-ffffea10001fffff] PMD -> [ffff8a077d800000-ffff8a077d9fffff] on node 2
    [ffffea1000200000-ffffea10003fffff] PMD -> [ffff8a077de00000-ffff8a077dffffff] on node 2
    ...
    [ffffea101f600000-ffffea101f9fffff] PMD -> [ffff8a074ac00000-ffff8a074affffff] on node 2
    [ffffea101fa00000-ffffea101fdfffff] PMD -> [ffff8a074a800000-ffff8a074abfffff] on node 2
    Initmem setup node 3 [mem 0x0000000000000000-0xffffffffffffffff]
    On node 3 totalpages: 0
    Built 3 zonelists in Node order, mobility grouping on. Total pages: 16090539
    Policy zone: Normal
    init_memory_mapping: [mem 0x60000000000-0x607ffffffff]
    [mem 0x60000000000-0x607ffffffff] page 1G
    [ffffea101fe00000-ffffea101fffffff] PMD -> [ffff8a074a400000-ffff8a074a5fffff] on node 2 [ffff8a074a600000-ffff8a074a7fffff] on node 3
    [ffffea1800200000-ffffea18005fffff] PMD -> [ffff8a074a000000-ffff8a074a3fffff] on node 3
    [ffffea1800600000-ffffea18009fffff] PMD -> [ffff8a0749c00000-ffff8a0749ffffff] on node 3
    ...

    The cause is that the last region was missed at the end of hot-adding
    memory, and p_start, p_end and node_start were not reset, so when
    memory is hot-added to a new node the regions are treated as
    non-contiguous blocks and the previous one is printed. So we print the
    last vmemmap region at the end of hot add memory to avoid the confusion.

    Signed-off-by: Zhu Guihua
    Reviewed-by: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhu Guihua
     
    The simple check for a zero-length memory mapping may be performed
    earlier, so that in the zero-length case some unnecessary code is not
    executed at all. This does not make the code less readable and saves
    some CPU cycles.
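
    A minimal sketch of the reordering (condensed; not the full
    function):

    unsigned long do_mmap_pgoff(struct file *file, unsigned long addr,
                unsigned long len, unsigned long prot,
                unsigned long flags, unsigned long pgoff,
                unsigned long *populate)
    {
        *populate = 0;

        /* Moved up: nothing below needs to run for a zero length. */
        if (!len)
            return -EINVAL;

        /* ... arch checks, flag handling, limits, the mapping ... */
    }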

    Signed-off-by: Piotr Kwapulinski
    Acked-by: Michal Hocko
    Acked-by: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Piotr Kwapulinski
     
  • The kmemleak memory scanning uses finer grained object->lock spinlocks
    primarily to avoid races with the memory block freeing. However, the
    pointer lookup in the rb tree requires the kmemleak_lock to be held.
    This is currently done in the find_and_get_object() function for each
    pointer-like location read during scanning. While this allows a low
    latency on kmemleak_*() callbacks on other CPUs, the memory scanning is
    slower.

    This patch moves the kmemleak_lock outside the scan_block() loop,
    acquiring/releasing it only once per scanned memory block. The
    allow_resched logic is moved outside scan_block() and a new
    scan_large_block() function is implemented which splits large blocks in
    MAX_SCAN_SIZE chunks with cond_resched() calls in-between. A redundant
    (object->flags & OBJECT_NO_SCAN) check is also removed from
    scan_object().

    With this patch, the kmemleak scanning performance is significantly
    improved: at least 50% with lock debugging disabled and over an order of
    magnitude with lock proving enabled (on an arm64 system).
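
    The chunking helper, per the description above (essentially the
    shape of the patch; scan_block() now takes kmemleak_lock once per
    call):

    static void scan_large_block(void *start, void *end)
    {
        void *next;

        while (start < end) {
            /* MAX_SCAN_SIZE bytes per locked section, with a
             * reschedule point between chunks. */
            next = min(start + MAX_SCAN_SIZE, end);
            scan_block(start, next, NULL);
            start = next;
            cond_resched();
        }
    }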

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • While very unlikely (usually kmemleak or sl*b bug), the create_object()
    function in mm/kmemleak.c may fail to insert a newly allocated object into
    the rb tree. When this happens, kmemleak disables itself and prints
    additional information about the object already found in the rb tree.
    Such printing is done with the parent->lock acquired, however the
    kmemleak_lock is already held. This is a potential race with the scanning
    thread which acquires object->lock and kmemleak_lock in a

    This patch removes the locking around the 'parent' object information
    printing. Such object cannot be freed or removed from object_tree_root
    and object_list since kmemleak_lock is already held. There is a very
    small risk that some of the object data is being modified on another CPU
    but the only downside is inconsistent information printing.

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • The kmemleak_do_cleanup() work thread already waits for the kmemleak_scan
    thread to finish via kthread_stop(). Waiting in kthread_stop() while
    scan_mutex is held may lead to deadlock if kmemleak_scan_thread() also
    waits to acquire scan_mutex.
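
    Condensed, the cleanup path no longer takes scan_mutex at all (a
    sketch; the elided body frees the kmemleak objects):

    static void kmemleak_do_cleanup(struct work_struct *work)
    {
        /* kthread_stop() inside: waits for the scan thread. Holding
         * scan_mutex across this wait could deadlock against a
         * scanner blocked on the same mutex. */
        stop_scan_thread();

        /* ... clean up the kmemleak objects without scan_mutex,
         * which is safe once the scan thread has exited ... */
    }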

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas