12 Nov, 2016

1 commit

  • Limit the number of kmemleak false positives by including
    .data.ro_after_init in memory scanning. To achieve this, we need to add
    symbols for the start and end of the section to the linker scripts.

    The problem was uncovered by commit 56989f6d8568 ("genetlink: mark
    families as __ro_after_init").

    Link: http://lkml.kernel.org/r/1478274173-15218-1-git-send-email-jakub.kicinski@netronome.com
    Reviewed-by: Catalin Marinas
    Signed-off-by: Jakub Kicinski
    Cc: Arnd Bergmann
    Cc: Cong Wang
    Cc: Johannes Berg
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jakub Kicinski
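
    A minimal sketch of the idea, assuming the new linker symbols are named
    __start_ro_after_init/__end_ro_after_init and using kmemleak's existing
    block-scanning helper (illustrative, not the verbatim patch):

    /* In kmemleak_scan(): include .data.ro_after_init in the periodic scan,
     * alongside the .data and .bss sections. */
    scan_large_block(_sdata, _edata);
    scan_large_block(__bss_start, __bss_stop);
    scan_large_block(__start_ro_after_init, __end_ro_after_init);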
     

28 Oct, 2016

1 commit

  • Commit 68f24b08ee89 ("sched/core: Free the stack early if
    CONFIG_THREAD_INFO_IN_TASK") may cause the task->stack to be freed
    during kmemleak_scan() execution, leading to either a NULL pointer fault
    (if task->stack is NULL) or kmemleak accessing already freed memory.

    This patch uses the new try_get_task_stack() API to ensure that the task
    stack is not freed during kmemleak stack scanning.

    Addresses https://bugzilla.kernel.org/show_bug.cgi?id=173901.

    Fixes: 68f24b08ee89 ("sched/core: Free the stack early if CONFIG_THREAD_INFO_IN_TASK")
    Link: http://lkml.kernel.org/r/1476266223-14325-1-git-send-email-catalin.marinas@arm.com
    Signed-off-by: Catalin Marinas
    Reported-by: CAI Qian
    Tested-by: CAI Qian
    Acked-by: Michal Hocko
    Cc: Andy Lutomirski
    Cc: CAI Qian
    Cc: Hillf Danton
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
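
    A rough sketch of the stack scanning loop with the stack pinned, based on
    the description above (simplified, not the verbatim kernel code):

    struct task_struct *g, *task;

    /* Pin each task's stack while scanning it so a concurrent exit cannot
     * free it under us; drop the reference as soon as the scan is done. */
    for_each_process_thread(g, task) {
            void *stack = try_get_task_stack(task);

            if (stack) {
                    scan_block(stack, stack + THREAD_SIZE, NULL);
                    put_task_stack(task);
            }
    }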
     

12 Oct, 2016

1 commit

  • Some of the kmemleak_*() callbacks in memblock, bootmem and CMA convert a
    physical address to a virtual one using __va(). However, such physical
    addresses may sometimes be located in highmem, and using __va() is then
    incorrect, leading to inconsistent object tracking in kmemleak.

    The following functions have been added to the kmemleak API and they take
    a physical address as the object pointer. They only perform the
    corresponding action if the address has a lowmem mapping:

    kmemleak_alloc_phys
    kmemleak_free_part_phys
    kmemleak_not_leak_phys
    kmemleak_ignore_phys

    The affected calling places have been updated to use the new kmemleak
    API.

    Link: http://lkml.kernel.org/r/1471531432-16503-1-git-send-email-catalin.marinas@arm.com
    Signed-off-by: Catalin Marinas
    Reported-by: Vignesh R
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
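
    A minimal sketch of what one of the new physical-address wrappers does,
    assuming the usual lowmem helpers (PHYS_PFN, max_low_pfn, __va); not the
    verbatim kernel implementation:

    void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size,
                                   int min_count, gfp_t gfp)
    {
            /* Only act if the physical address has a lowmem mapping. */
            if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
                    kmemleak_alloc(__va(phys), size, min_count, gfp);
    }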
     

29 Jul, 2016

1 commit

  • If the user tries to disable automatic scanning early in the boot
    process using e.g.:

    echo scan=off > /sys/kernel/debug/kmemleak

    then this command will hang until SECS_FIRST_SCAN (= 60) seconds have
    elapsed, even though the system is fully initialised.

    We can fix this using interruptible sleep and checking if we're supposed
    to stop whenever we wake up (like the rest of the code does).

    Link: http://lkml.kernel.org/r/1468835005-2873-1-git-send-email-vegard.nossum@oracle.com
    Signed-off-by: Vegard Nossum
    Acked-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vegard Nossum
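
    A rough sketch of the fix described above: sleep interruptibly before the
    first scan and re-check the stop condition on every wake-up (simplified,
    not the exact final code):

    static int kmemleak_scan_thread(void *arg)
    {
            signed long timeout = msecs_to_jiffies(SECS_FIRST_SCAN * 1000);

            /* Wait before the first scan, but bail out early if the thread
             * is asked to stop (e.g. via "echo scan=off"). */
            while (timeout && !kthread_should_stop())
                    timeout = schedule_timeout_interruptible(timeout);

            /* ... periodic scanning loop follows ... */
            return 0;
    }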
     

25 Jun, 2016

1 commit

  • When kmemleak dumps the contents of leaked objects it reads whole objects
    regardless of the user-requested size. This upsets KASAN, so disable
    KASAN checks around the object dump.

    Link: http://lkml.kernel.org/r/1466617631-68387-1-git-send-email-dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Acked-by: Catalin Marinas
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
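
    An illustrative sketch of the approach (ptr/len and the hex-dump
    parameters are placeholders, not necessarily kmemleak's internals):

    /* The object is intentionally read in full, so silence KASAN around
     * the raw access. */
    kasan_disable_current();
    print_hex_dump(KERN_DEBUG, "  ", DUMP_PREFIX_NONE, 16, 1, ptr, len, true);
    kasan_enable_current();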
     

18 Mar, 2016

2 commits

  • Kernel style prefers a single string over split strings when the string is
    'user-visible'.

    Miscellanea:

    - Add a missing newline
    - Realign arguments

    Signed-off-by: Joe Perches
    Acked-by: Tejun Heo [percpu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • There are a mixture of pr_warning and pr_warn uses in mm. Use pr_warn
    consistently.

    Miscellanea:

    - Coalesce formats
    - Realign arguments

    Signed-off-by: Joe Perches
    Acked-by: Tejun Heo [percpu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

15 Jan, 2016

1 commit

  • This reverts commit 8f4fc071b192 ("gfp: add __GFP_NOACCOUNT").

    The black-list kmem accounting policy (aka __GFP_NOACCOUNT) turned out to
    be fragile and difficult to maintain, because there seem to be many more
    allocations that should not be accounted than those that should be.
    Besides, falsely accounting an allocation might have much worse
    consequences than not accounting it at all, namely increased memory
    consumption due to pinned dead kmem caches.

    So it was decided to switch to the white-list policy. This patch
    reverts bits introducing the black-list policy. The white-list policy
    will be introduced later in the series.

    Signed-off-by: Vladimir Davydov
    Acked-by: Johannes Weiner
    Cc: Michal Hocko
    Cc: Tejun Heo
    Cc: Greg Thelen
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vladimir Davydov
     

06 Nov, 2015

1 commit


11 Sep, 2015

1 commit

  • Instead of a custom approach, let's use the recently introduced
    seq_hex_dump() helper.

    Signed-off-by: Andy Shevchenko
    Cc: Alexander Viro
    Cc: Joe Perches
    Cc: Tadeusz Struk
    Cc: Helge Deller
    Cc: Ingo Tuchscherer
    Acked-by: Catalin Marinas
    Cc: Vladimir Kondratiev
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andy Shevchenko
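
    For reference, the helper's call pattern looks roughly like this (the row
    and group sizes are illustrative):

    /* Dump 'len' bytes at 'ptr' into the seq_file 'seq', 16 bytes per row. */
    seq_hex_dump(seq, "  ", DUMP_PREFIX_NONE, 16, 1, ptr, len, true);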
     

09 Sep, 2015

1 commit

  • In the log_early() function, crt_early_log should also be incremented
    when 'crt_early_log >= ARRAY_SIZE(early_log)'. Otherwise the count
    reported by kmemleak_init() is one less than the actual number.

    Then, in kmemleak_init(), if the early_log buffer size equals the actual
    number, kmemleak will initialise successfully, so change the warning
    condition to 'crt_early_log > ARRAY_SIZE(early_log)'.

    Signed-off-by: Wang Kai
    Acked-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wang Kai
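
    A simplified sketch of the counting logic after the fix, as described
    above (not the verbatim kernel code):

    static void log_early(int op_type, const void *ptr, size_t size,
                          int min_count)
    {
            if (crt_early_log >= ARRAY_SIZE(early_log)) {
                    /* Count the entry even though it cannot be stored, so
                     * the total reported by kmemleak_init() is accurate. */
                    crt_early_log++;
                    kmemleak_disable();
                    return;
            }

            /* ... record the operation in early_log[crt_early_log++] ... */
    }

    /* And in kmemleak_init(), warn only on a real overflow: */
    if (crt_early_log > ARRAY_SIZE(early_log))
            pr_warn("Early log buffer exceeded (%d)\n", crt_early_log);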
     

25 Jun, 2015

6 commits

  • Beginning at commit d52d3997f843 ("ipv6: Create percpu rt6_info"), the
    following INFO splat is logged:

    ===============================
    [ INFO: suspicious RCU usage. ]
    4.1.0-rc7-next-20150612 #1 Not tainted
    -------------------------------
    kernel/sched/core.c:7318 Illegal context switch in RCU-bh read-side critical section!
    other info that might help us debug this:
    rcu_scheduler_active = 1, debug_locks = 0
    3 locks held by systemd/1:
    #0: (rtnl_mutex){+.+.+.}, at: [] rtnetlink_rcv+0x1f/0x40
    #1: (rcu_read_lock_bh){......}, at: [] ipv6_add_addr+0x62/0x540
    #2: (addrconf_hash_lock){+...+.}, at: [] ipv6_add_addr+0x184/0x540
    stack backtrace:
    CPU: 0 PID: 1 Comm: systemd Not tainted 4.1.0-rc7-next-20150612 #1
    Hardware name: TOSHIBA TECRA A50-A/TECRA A50-A, BIOS Version 4.20 04/17/2014
    Call Trace:
    dump_stack+0x4c/0x6e
    lockdep_rcu_suspicious+0xe7/0x120
    ___might_sleep+0x1d5/0x1f0
    __might_sleep+0x4d/0x90
    kmem_cache_alloc+0x47/0x250
    create_object+0x39/0x2e0
    kmemleak_alloc_percpu+0x61/0xe0
    pcpu_alloc+0x370/0x630

    Additional backtrace lines are truncated. In addition, the above splat
    is followed by several "BUG: sleeping function called from invalid
    context at mm/slub.c:1268" outputs. As suggested by Martin KaFai Lau,
    these are the clue to the fix. Routine kmemleak_alloc_percpu() always
    uses GFP_KERNEL for its allocations, whereas it should follow the gfp
    from its callers.

    Reviewed-by: Catalin Marinas
    Reviewed-by: Kamalesh Babulal
    Acked-by: Martin KaFai Lau
    Signed-off-by: Larry Finger
    Cc: Martin KaFai Lau
    Cc: Catalin Marinas
    Cc: Tejun Heo
    Cc: Christoph Lameter
    Cc: [3.18+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Larry Finger
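
    The essence of the fix is to propagate the caller's gfp flags instead of
    hard-coding GFP_KERNEL; a rough sketch (assuming the surrounding kmemleak
    internals):

    void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
                                     gfp_t gfp)
    {
            unsigned int cpu;

            /* Track one object per CPU, using the caller's gfp rather than
             * an unconditional GFP_KERNEL. */
            if (kmemleak_enabled && ptr && !IS_ERR(ptr))
                    for_each_possible_cpu(cpu)
                            create_object((unsigned long)per_cpu_ptr(ptr, cpu),
                                          size, 0, gfp);
    }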
     
  • The kmemleak memory scanning uses finer grained object->lock spinlocks
    primarily to avoid races with the memory block freeing. However, the
    pointer lookup in the rb tree requires the kmemleak_lock to be held.
    This is currently done in the find_and_get_object() function for each
    pointer-like location read during scanning. While this keeps latency low
    for kmemleak_*() callbacks on other CPUs, it makes the memory scanning
    slower.

    This patch moves the kmemleak_lock outside the scan_block() loop,
    acquiring/releasing it only once per scanned memory block. The
    allow_resched logic is moved outside scan_block() and a new
    scan_large_block() function is implemented which splits large blocks in
    MAX_SCAN_SIZE chunks with cond_resched() calls in-between. A redundant
    (object->flags & OBJECT_NO_SCAN) check is also removed from
    scan_object().

    With this patch, the kmemleak scanning performance is significantly
    improved: at least 50% with lock debugging disabled and over an order of
    magnitude with lock proving enabled (on an arm64 system).

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
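
    A rough sketch of the chunked scanning described above (signatures
    simplified):

    static void scan_large_block(void *start, void *end)
    {
            void *next;

            while (start < end) {
                    /* Scan at most MAX_SCAN_SIZE bytes at a time and give
                     * other tasks a chance to run in between. */
                    next = min(start + MAX_SCAN_SIZE, end);
                    scan_block(start, next, NULL);
                    start = next;
                    cond_resched();
            }
    }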
     
  • While very unlikely (usually a kmemleak or sl*b bug), the create_object()
    function in mm/kmemleak.c may fail to insert a newly allocated object into
    the rb tree. When this happens, kmemleak disables itself and prints
    additional information about the object already found in the rb tree.
    Such printing is done with the parent->lock acquired, however the
    kmemleak_lock is already held. This is a potential race with the scanning
    thread, which acquires object->lock and kmemleak_lock in a different
    order, potentially leading to a deadlock.
    This patch removes the locking around the 'parent' object information
    printing. Such object cannot be freed or removed from object_tree_root
    and object_list since kmemleak_lock is already held. There is a very
    small risk that some of the object data is being modified on another CPU
    but the only downside is inconsistent information printing.

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • The kmemleak_do_cleanup() work thread already waits for the kmemleak_scan
    thread to finish via kthread_stop(). Waiting in kthread_stop() while
    scan_mutex is held may lead to deadlock if kmemleak_scan_thread() also
    waits to acquire scan_mutex.

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • Calling delete_object_*() on the same pointer is not a standard use case
    (unless there is a bug in the code calling kmemleak_free()). However,
    during kmemleak disabling (error or user triggered via /sys), there is a
    potential race between kmemleak_free() calls on a CPU and
    __kmemleak_do_cleanup() on a different CPU.

    The current delete_object_*() implementation first performs a look-up
    holding kmemleak_lock, increments the object->use_count and then
    re-acquires kmemleak_lock to remove the object from object_tree_root and
    object_list.

    This patch simplifies the delete_object_*() mechanism to both look up
    and remove an object from the object_tree_root and object_list
    atomically (guarded by kmemleak_lock). This allows safe concurrent
    calls to delete_object_*() on the same pointer without additional
    locking for synchronising the kmemleak_free_enabled flag.

    A side effect is a slight improvement in the delete_object_*() performance
    by avoiding acquiring kmemleak_lock twice and incrementing/decrementing
    object->use_count.

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
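
    A simplified sketch of the combined look-up-and-remove step, performed
    under a single kmemleak_lock critical section (__remove_object() is a
    hypothetical helper that unlinks the object from the rb tree and list):

    static struct kmemleak_object *find_and_remove_object(unsigned long ptr,
                                                          int alias)
    {
            unsigned long flags;
            struct kmemleak_object *object;

            write_lock_irqsave(&kmemleak_lock, flags);
            object = lookup_object(ptr, alias);
            if (object)
                    __remove_object(object);
            write_unlock_irqrestore(&kmemleak_lock, flags);

            return object;
    }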
     
  • The kmemleak scanning thread can run for minutes. Callbacks like
    kmemleak_free() are allowed during this time, the race being taken care
    of by the object->lock spinlock. Such lock also prevents a memory block
    from being freed or unmapped while it is being scanned by blocking the
    kmemleak_free() -> ... -> __delete_object() function until the lock is
    released in scan_object().

    When a kmemleak error occurs (e.g. it fails to allocate its metadata),
    kmemleak_enabled is cleared and __delete_object() is no longer called on
    freed objects. If kmemleak_scan is running at the same time,
    kmemleak_free() no longer waits for the object scanning to complete,
    allowing the corresponding memory block to be freed or unmapped (in the
    case of vfree()). This leads to kmemleak_scan potentially triggering a
    page fault.

    This patch separates the kmemleak_free() enabling/disabling from the
    overall kmemleak_enabled knob so that the disabling of object freeing
    tracking can be deferred until the scanning thread has completed.
    kmemleak_free_part() is deliberately ignored by this patch since it is
    only called during boot, before the scanning thread is started.

    Signed-off-by: Catalin Marinas
    Reported-by: Vignesh Radhakrishnan
    Tested-by: Vignesh Radhakrishnan
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
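
    A sketch of the separation described above: kmemleak_free() checks its
    own flag, which is only cleared after the scanning thread has finished
    (simplified):

    void __ref kmemleak_free(const void *ptr)
    {
            pr_debug("%s(0x%p)\n", __func__, ptr);

            /* Freed objects keep being tracked until kmemleak_free_enabled
             * is cleared, which happens after kmemleak_scan completes. */
            if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
                    delete_object_full((unsigned long)ptr);
            else if (kmemleak_early_log)
                    log_early(KMEMLEAK_FREE, ptr, 0, 0);
    }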
     

15 May, 2015

1 commit

  • Not all kmem allocations should be accounted to memcg. The following
    patch gives an example where accounting a certain type of allocation to
    memcg can effectively result in a memory leak. This patch adds the
    __GFP_NOACCOUNT flag which, if passed to kmalloc and friends, forces the
    allocation to go through the root cgroup. It will be used by the next
    patch.

    Note that, with kmemleak enabled, each kmalloc implies yet another
    allocation from the kmemleak_object cache, so we add __GFP_NOACCOUNT to
    gfp_kmemleak_mask.

    Alternatively, we could introduce a per kmem cache flag disabling
    accounting for all allocations of a particular kind, but (a) we would not
    be able to bypass accounting for kmalloc then and (b) a kmem cache with
    this flag set could not be merged with a kmem cache without this flag,
    which would increase the number of global caches and therefore
    fragmentation even if the memory cgroup controller is not used.

    Despite its generic name, currently __GFP_NOACCOUNT disables accounting
    only for kmem allocations, while user page allocations are always charged.
    To catch abuse of this flag, a warning is issued on an attempt to pass it
    to mem_cgroup_try_charge.

    Signed-off-by: Vladimir Davydov
    Cc: Tejun Heo
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Greg Thelen
    Cc: Greg Kroah-Hartman
    Cc: [4.0.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vladimir Davydov
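
    Usage is simply a matter of passing the new flag to the allocator; an
    illustrative example:

    /* This allocation bypasses memcg accounting (charged to the root cgroup
     * regardless of the caller's cgroup). */
    obj = kmalloc(size, GFP_KERNEL | __GFP_NOACCOUNT);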
     

14 Feb, 2015

1 commit

  • kmalloc internally rounds up the allocation size, and kmemleak uses the
    rounded-up size as the object's size. This makes KASAN complain while
    kmemleak scans memory or calculates an object's checksum. The simplest
    solution here is to disable KASAN.

    Signed-off-by: Andrey Ryabinin
    Acked-by: Catalin Marinas
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

07 Jun, 2014

2 commits


05 Jun, 2014

1 commit

  • kmem_cache_{create,destroy,shrink} need to get a stable value of
    cpu/node online mask, because they init/destroy/access per-cpu/node
    kmem_cache parts, which can be allocated or destroyed on cpu/mem
    hotplug. To protect against cpu hotplug, these functions use
    {get,put}_online_cpus. However, they do nothing to synchronize with
    memory hotplug - taking the slab_mutex does not eliminate the
    possibility of a race, as described in patch 2.

    What we need there is something like get_online_cpus, but for memory.
    We already have lock_memory_hotplug, which serves the purpose, but it's
    a bit of a hammer right now, because it's backed by a mutex. As a
    result, it imposes some limitations on locking order, which are not
    desirable, and it can't be used just like get_online_cpus. That's why in
    patch 1 I substitute it with get/put_online_mems, which work exactly
    like get/put_online_cpus except they block memory hotplug rather than
    cpu hotplug.

    [ v1 can be found at https://lkml.org/lkml/2014/4/6/68. I NAK'ed it
    myself, because it used an rw semaphore for get/put_online_mems,
    making them deadlock prone. ]

    This patch (of 2):

    {un}lock_memory_hotplug, which is used to synchronize against memory
    hotplug, is currently backed by a mutex, which makes it a bit of a
    hammer - threads that only want to get a stable value of online nodes
    mask won't be able to proceed concurrently. Also, it imposes some
    strong locking ordering rules on it, which narrows down the set of its
    usage scenarios.

    This patch introduces get/put_online_mems, which are the same as
    get/put_online_cpus, but for memory hotplug, i.e. executing code
    inside a get/put_online_mems section will guarantee a stable value of
    online nodes, present pages, etc.

    lock_memory_hotplug()/unlock_memory_hotplug() are removed altogether.

    Signed-off-by: Vladimir Davydov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Tang Chen
    Cc: Zhang Yanfei
    Cc: Toshi Kani
    Cc: Xishi Qiu
    Cc: Jiang Liu
    Cc: Rafael J. Wysocki
    Cc: David Rientjes
    Cc: Wen Congyang
    Cc: Yasuaki Ishimatsu
    Cc: Lai Jiangshan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vladimir Davydov
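
    The intended usage pattern mirrors get/put_online_cpus(); a hypothetical
    example (init_cache_node() is a made-up per-node setup helper):

    int nid;

    get_online_mems();
    /* The set of online nodes cannot change in this section. */
    for_each_online_node(nid)
            init_cache_node(cachep, nid);
    put_online_mems();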
     

11 May, 2014

1 commit

  • Commit 8910ae896c8c ("kmemleak: change some global variables to int"),
    in addition to the atomic -> int conversion, moved the disabling of
    kmemleak_early_log to the beginning of the kmemleak_init() function,
    before the full kmemleak tracing is actually enabled. In this small
    window, kmem_cache_create() is called by kmemleak, which triggers
    additional memory allocations that are not traced. This patch restores
    the original logic, disabling kmemleak_early_log only once kmemleak is
    fully functional.

    Fixes: 8910ae896c8c (kmemleak: change some global variables to int)

    Signed-off-by: Catalin Marinas
    Cc: Sasha Levin
    Cc: Li Zefan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     

04 Apr, 2014

4 commits

  • They don't have to be atomic_t, because they are simple boolean toggles.

    Signed-off-by: Li Zefan
    Acked-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Li Zefan
     
  • Remove kmemleak_padding() and kmemleak_release().

    Signed-off-by: Li Zefan
    Acked-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Li Zefan
     
  • Currently if kmemleak is disabled, the kmemleak objects can never be
    freed, whether it was disabled by a user or due to fatal errors.

    Those objects can be a big waste of memory.

    OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
    1200264 1197433 99% 0.30K 46164 26 369312K kmemleak_object

    With this patch, after kmemleak has been disabled you can reclaim memory
    with:

    # echo clear > /sys/kernel/debug/kmemleak

    Also inform users about this with a printk.

    Signed-off-by: Li Zefan
    Acked-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Li Zefan
     
  • Currently if you stop the kmemleak thread before disabling kmemleak,
    kmemleak objects will be freed and so you won't be able to check
    previously reported leaks.

    With this patch, kmemleak objects won't be freed if there are leaks
    that can be reported.

    Signed-off-by: Li Zefan
    Acked-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Li Zefan
     

13 Nov, 2013

1 commit

  • Commit 248ac0e1943a ("mm/vmalloc: remove guard page from between vmap
    blocks") had the side effect of making the vmap_area.va_end member point
    to the next vmap_area.va_start. This created an artificial reference to
    vmalloc'ed objects, so kmemleak rarely reported vmalloc() leaks.

    This patch marks the vmap_area containing pointers explicitly and
    reduces the min ref_count to 2 as vm_struct still contains a reference
    to the vmalloc'ed object. The kmemleak add_scan_area() function has
    been improved to allow a SIZE_MAX argument covering the rest of the
    object (for simpler calling sites).

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
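
    For illustration, a caller can now pass SIZE_MAX to cover everything from
    the given address to the end of the object (obj and its fields are
    hypothetical):

    /* Restrict scanning to a single pointer-sized field ... */
    kmemleak_scan_area(&obj->ptr_field, sizeof(obj->ptr_field), GFP_KERNEL);
    /* ... or scan from a field to the end of the object. */
    kmemleak_scan_area(&obj->data, SIZE_MAX, GFP_KERNEL);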
     

12 Sep, 2013

1 commit


28 Feb, 2013

1 commit

  • I'm not sure why, but the hlist for each entry iterators were conceived
    differently from the list ones, which look like:

    list_for_each_entry(pos, head, member)

    The hlist ones were greedy and wanted an extra parameter:

    hlist_for_each_entry(tpos, pos, head, member)

    Why did they need an extra pos parameter? I'm not quite sure. Not only
    do they not really need it, it also prevents the iterator from looking
    exactly like the list iterator, which is unfortunate.

    Besides the semantic patch, there was some manual work required:

    - Fix up the actual hlist iterators in linux/list.h
    - Fix up the declaration of other iterators based on the hlist ones.
    - A very small number of places were using the 'node' parameter; these
    were modified to use 'obj->member' instead.
    - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
    properly, so those had to be fixed up manually.

    The semantic patch which is mostly the work of Peter Senna Tschudin is here:

    @@
    iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;

    type T;
    expression a,c,d,e;
    identifier b;
    statement S;
    @@

    -T b;

    [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
    [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
    [akpm@linux-foundation.org: checkpatch fixes]
    [akpm@linux-foundation.org: fix warnings]
    [akpm@linux-foundation.org: redo intrusive kvm changes]
    Tested-by: Peter Senna Tschudin
    Acked-by: Paul E. McKenney
    Signed-off-by: Sasha Levin
    Cc: Wu Fengguang
    Cc: Marcelo Tosatti
    Cc: Gleb Natapov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sasha Levin
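
    A before/after illustration of the conversion (the caller, its list head
    and member names are hypothetical):

    struct kmemleak_object *obj;
    struct hlist_node *node;

    /* Before: the iterator needed a separate hlist_node cursor. */
    hlist_for_each_entry(obj, node, head, node_entry)
            scan_object(obj);

    /* After: the extra parameter is gone, matching list_for_each_entry(). */
    hlist_for_each_entry(obj, head, node_entry)
            scan_object(obj);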
     

24 Feb, 2013

1 commit

  • Add 2 helpers (zone_end_pfn() and zone_spans_pfn()) to reduce code
    duplication.

    This also switches to using them in compaction (where an additional
    variable needed to be renamed), page_alloc, vmstat, memory_hotplug, and
    kmemleak.

    Note that in compaction.c I avoid calling zone_end_pfn() repeatedly
    because I expect that at some point the synchronization issues with
    start_pfn & spanned_pages will need fixing, either by actually using the
    seqlock or by clever memory barrier usage.

    Signed-off-by: Cody P Schafer
    Cc: David Hansen
    Cc: Catalin Marinas
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cody P Schafer
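
    The two helpers are essentially one-liners; a sketch matching the
    description above:

    static inline unsigned long zone_end_pfn(const struct zone *zone)
    {
            return zone->zone_start_pfn + zone->spanned_pages;
    }

    static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
    {
            return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
    }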
     

19 Dec, 2012

1 commit


09 Oct, 2012

1 commit

  • kmemleak uses a tree where each node represents an allocated memory object
    in order to quickly find out what object a given address is part of.
    However, the objects don't overlap, so rbtrees are a better choice than a
    prio tree for this use: they are both faster and have lower memory
    overhead.

    Tested by booting a kernel with kmemleak enabled, loading the
    kmemleak_test module, and looking for the expected messages.

    Signed-off-by: Michel Lespinasse
    Cc: Rik van Riel
    Cc: Hillf Danton
    Cc: Peter Zijlstra
    Cc: Andrea Arcangeli
    Cc: David Woodhouse
    Acked-by: Catalin Marinas
    Tested-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michel Lespinasse
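
    A simplified sketch of an address look-up in such an rb tree of
    non-overlapping objects (close in spirit to kmemleak's lookup_object(),
    but not the verbatim code):

    static struct kmemleak_object *lookup_object(unsigned long ptr)
    {
            struct rb_node *rb = object_tree_root.rb_node;

            while (rb) {
                    struct kmemleak_object *object =
                            rb_entry(rb, struct kmemleak_object, rb_node);

                    if (ptr < object->pointer)
                            rb = rb->rb_left;
                    else if (ptr >= object->pointer + object->size)
                            rb = rb->rb_right;
                    else
                            return object;  /* ptr falls inside this object */
            }
            return NULL;
    }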
     

23 Sep, 2012

1 commit


21 Jan, 2012

2 commits

  • Commit b6693005 (kmemleak: When the early log buffer is exceeded, report
    the actual number) deferred the disabling of the early logging to
    kmemleak_init(). However, when CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y, the
    early logging was no longer disabled, causing __init kmemleak functions
    to be called even after the kernel freed the init memory. This patch
    disables the early logging during kmemleak_init() if kmemleak is left
    disabled.

    Reported-by: Dirk Gouders
    Tested-by: Dirk Gouders
    Tested-by: Josh Boyer
    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • Kmemleak should only track valid scan areas with a non-zero size.
    Otherwise, such an area may reside just at the end of an object, and
    kmemleak would report "Adding scan area to unknown object".

    Signed-off-by: Tiejun Chen
    Signed-off-by: Catalin Marinas

    Tiejun Chen
     

03 Dec, 2011

4 commits