04 Mar, 2010

1 commit


27 Feb, 2010

1 commit

  • This patch allows injecting faults only for specific slabs. To
    preserve the default behavior, the cache filter is off by default
    (all caches are faulty).

    One may define a specific set of slabs like this:
    # mark skbuff_head_cache as faulty
    echo 1 > /sys/kernel/slab/skbuff_head_cache/failslab
    # Turn on cache filter (off by default)
    echo 1 > /sys/kernel/debug/failslab/cache-filter
    # Turn on fault injection
    echo 1 > /sys/kernel/debug/failslab/times
    echo 1 > /sys/kernel/debug/failslab/probability

    Acked-by: David Rientjes
    Acked-by: Akinobu Mita
    Acked-by: Christoph Lameter
    Signed-off-by: Dmitry Monakhov
    Signed-off-by: Pekka Enberg

    Dmitry Monakhov
     

30 Jan, 2010

1 commit

  • When factoring common code into transfer_objects in commit 3ded175 ("slab: add
    transfer_objects() function"), the 'touched' logic got a bit broken. When
    refilling from the shared array (taking objects from the shared array), we are
    making use of the shared array so it should be marked as touched.

    Subsequently pulling an element from the cpu array and allocating it should
    also touch the cpu array, but that is taken care of after the alloc_done label.
    (So yes, the cpu array was getting touched = 1 twice).

    So revert this logic to how it worked in earlier kernels.

    This also affects the behaviour in __drain_alien_cache, which would previously
    'touch' the shared array and now does not. I think it is more logical not to
    touch there, because we are pushing objects into the shared array rather than
    pulling them off. So there is no good reason to postpone reaping them -- if the
    shared array is getting utilized, then it will get 'touched' in the alloc path
    (where this patch now restores the touch).

    Acked-by: Christoph Lameter
    Signed-off-by: Nick Piggin
    Signed-off-by: Pekka Enberg

    Nick Piggin
     

12 Jan, 2010

1 commit


29 Dec, 2009

1 commit

  • Commit ce79ddc8e2376a9a93c7d42daf89bfcbb9187e62 ("SLAB: Fix lockdep annotations
    for CPU hotplug") broke init_node_lock_keys() off-slab logic which causes
    lockdep false positives.

    Fix that up by reverting the logic back to original while keeping CPU hotplug
    fixes intact.

    Reported-and-tested-by: Heiko Carstens
    Reported-and-tested-by: Andi Kleen
    Signed-off-by: Pekka Enberg

    Pekka Enberg
     

18 Dec, 2009

2 commits

  • …/rusty/linux-2.6-for-linus

    * 'cpumask-cleanups' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus:
    cpumask: rename tsk_cpumask to tsk_cpus_allowed
    cpumask: don't recommend set_cpus_allowed hack in Documentation/cpu-hotplug.txt
    cpumask: avoid dereferencing struct cpumask
    cpumask: convert drivers/idle/i7300_idle.c to cpumask_var_t
    cpumask: use modern cpumask style in drivers/scsi/fcoe/fcoe.c
    cpumask: avoid deprecated function in mm/slab.c
    cpumask: use cpu_online in kernel/perf_event.c

    Linus Torvalds
     
  • * 'kmemleak' of git://linux-arm.org/linux-2.6:
    kmemleak: fix kconfig for crc32 build error
    kmemleak: Reduce the false positives by checking for modified objects
    kmemleak: Show the age of an unreferenced object
    kmemleak: Release the object lock before calling put_object()
    kmemleak: Scan the _ftrace_events section in modules
    kmemleak: Simplify the kmemleak_scan_area() function prototype
    kmemleak: Do not use off-slab management with SLAB_NOLEAKTRACE

    Linus Torvalds
     

17 Dec, 2009

1 commit


15 Dec, 2009

2 commits

  • …/git/tip/linux-2.6-tip

    * 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    perf sched: Fix build failure on sparc
    perf bench: Add "all" pseudo subsystem and "all" pseudo suite
    perf tools: Introduce perf_session class
    perf symbols: Ditch dso->find_symbol
    perf symbols: Allow lookups by symbol name too
    perf symbols: Add missing "Variables" entry to map_type__name
    perf symbols: Add support for 'variable' symtabs
    perf symbols: Introduce ELF counterparts to symbol_type__is_a
    perf symbols: Introduce symbol_type__is_a
    perf symbols: Rename kthreads to kmaps, using another abstraction for it
    perf tools: Allow building for ARM
    hw-breakpoints: Handle bad modify_user_hw_breakpoint off-case return value
    perf tools: Allow cross compiling
    tracing, slab: Fix no callsite ifndef CONFIG_KMEMTRACE
    tracing, slab: Define kmem_cache_alloc_notrace ifdef CONFIG_TRACING

    Trivial conflict due to different fixes to modify_user_hw_breakpoint()
    in include/linux/hw_breakpoint.h

    Linus Torvalds
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
    m68k: rename global variable vmalloc_end to m68k_vmalloc_end
    percpu: add missing per_cpu_ptr_to_phys() definition for UP
    percpu: Fix kdump failure if booted with percpu_alloc=page
    percpu: make misc percpu symbols unique
    percpu: make percpu symbols in ia64 unique
    percpu: make percpu symbols in powerpc unique
    percpu: make percpu symbols in x86 unique
    percpu: make percpu symbols in xen unique
    percpu: make percpu symbols in cpufreq unique
    percpu: make percpu symbols in oprofile unique
    percpu: make percpu symbols in tracer unique
    percpu: make percpu symbols under kernel/ and mm/ unique
    percpu: remove some sparse warnings
    percpu: make alloc_percpu() handle array types
    vmalloc: fix use of non-existent percpu variable in put_cpu_var()
    this_cpu: Use this_cpu_xx in trace_functions_graph.c
    this_cpu: Use this_cpu_xx for ftrace
    this_cpu: Use this_cpu_xx in nmi handling
    this_cpu: Use this_cpu operations in RCU
    this_cpu: Use this_cpu ops for VM statistics
    ...

    Fix up trivial (famous last words) global per-cpu naming conflicts in
    arch/x86/kvm/svm.c
    mm/slab.c

    Linus Torvalds
     

12 Dec, 2009

1 commit


11 Dec, 2009

2 commits

  • For slab, if CONFIG_KMEMTRACE and CONFIG_DEBUG_SLAB are not set,
    __do_kmalloc() will not track callers:

    # ./perf record -f -a -R -e kmem:kmalloc
    ^C
    # ./perf trace
    ...
    perf-2204 [000] 147.376774: kmalloc: call_site=c0529d2d ...
    perf-2204 [000] 147.400997: kmalloc: call_site=c0529d2d ...
    Xorg-1461 [001] 147.405413: kmalloc: call_site=0 ...
    Xorg-1461 [001] 147.405609: kmalloc: call_site=0 ...
    konsole-1776 [001] 147.405786: kmalloc: call_site=0 ...

    Signed-off-by: Li Zefan
    Reviewed-by: Pekka Enberg
    Cc: Christoph Lameter
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: linux-mm@kvack.org
    Cc: Eduard - Gabriel Munteanu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan
     
  • Define kmem_trace_alloc_{,node}_notrace() if CONFIG_TRACING is
    enabled, otherwise perf kmem will show wrong stats when
    CONFIG_KMEMTRACE is not set, because a kmalloc() memory allocation may
    be traced by both trace_kmalloc() and trace_kmem_cache_alloc().

    Signed-off-by: Li Zefan
    Reviewed-by: Pekka Enberg
    Cc: Christoph Lameter
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: linux-mm@kvack.org
    Cc: Eduard - Gabriel Munteanu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan
     

06 Dec, 2009

3 commits


01 Dec, 2009

1 commit

  • As reported by Paul McKenney:

    I am seeing some lockdep complaints in rcutorture runs that include
    frequent CPU-hotplug operations. The tests are otherwise successful.
    My first thought was to send a patch that gave each array_cache
    structure's ->lock field its own struct lock_class_key, but you already
    have a init_lock_keys() that seems to be intended to deal with this.

    ------------------------------------------------------------------------

    =============================================
    [ INFO: possible recursive locking detected ]
    2.6.32-rc4-autokern1 #1
    ---------------------------------------------
    syslogd/2908 is trying to acquire lock:
    (&nc->lock){..-...}, at: [] .kmem_cache_free+0x118/0x2d4

    but task is already holding lock:
    (&nc->lock){..-...}, at: [] .kfree+0x1f0/0x324

    other info that might help us debug this:
    3 locks held by syslogd/2908:
    #0: (&u->readlock){+.+.+.}, at: [] .unix_dgram_recvmsg+0x70/0x338
    #1: (&nc->lock){..-...}, at: [] .kfree+0x1f0/0x324
    #2: (&parent->list_lock){-.-...}, at: [] .__drain_alien_cache+0x50/0xb8

    stack backtrace:
    Call Trace:
    [c0000000e8ccafc0] [c0000000000101e4] .show_stack+0x70/0x184 (unreliable)
    [c0000000e8ccb070] [c0000000000afebc] .validate_chain+0x6ec/0xf58
    [c0000000e8ccb180] [c0000000000b0ff0] .__lock_acquire+0x8c8/0x974
    [c0000000e8ccb280] [c0000000000b2290] .lock_acquire+0x140/0x18c
    [c0000000e8ccb350] [c000000000468df0] ._spin_lock+0x48/0x70
    [c0000000e8ccb3e0] [c0000000001407f4] .kmem_cache_free+0x118/0x2d4
    [c0000000e8ccb4a0] [c000000000140b90] .free_block+0x130/0x1a8
    [c0000000e8ccb540] [c000000000140f94] .__drain_alien_cache+0x80/0xb8
    [c0000000e8ccb5e0] [c0000000001411e0] .kfree+0x214/0x324
    [c0000000e8ccb6a0] [c0000000003ca860] .skb_release_data+0xe8/0x104
    [c0000000e8ccb730] [c0000000003ca2ec] .__kfree_skb+0x20/0xd4
    [c0000000e8ccb7b0] [c0000000003cf2c8] .skb_free_datagram+0x1c/0x5c
    [c0000000e8ccb830] [c00000000045597c] .unix_dgram_recvmsg+0x2f4/0x338
    [c0000000e8ccb920] [c0000000003c0f14] .sock_recvmsg+0xf4/0x13c
    [c0000000e8ccbb30] [c0000000003c28ec] .SyS_recvfrom+0xb4/0x130
    [c0000000e8ccbcb0] [c0000000003bfb78] .sys_recv+0x18/0x2c
    [c0000000e8ccbd20] [c0000000003ed388] .compat_sys_recv+0x14/0x28
    [c0000000e8ccbd90] [c0000000003ee1bc] .compat_sys_socketcall+0x178/0x220
    [c0000000e8ccbe30] [c0000000000085d4] syscall_exit+0x0/0x40

    This patch fixes the issue by setting up lockdep annotations during CPU
    hotplug.

    Reported-by: Paul E. McKenney
    Tested-by: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Pekka Enberg
     

29 Oct, 2009

1 commit

  • This patch updates percpu related symbols under kernel/ and mm/ such
    that percpu symbols are unique and don't clash with local symbols.
    This serves two purposes of decreasing the possibility of global
    percpu symbol collision and allowing dropping per_cpu__ prefix from
    percpu symbols.

    * kernel/lockdep.c: s/lock_stats/cpu_lock_stats/

    * kernel/sched.c: s/init_rq_rt/init_rt_rq_var/ (any better idea?)
    s/sched_group_cpus/sched_groups/

    * kernel/softirq.c: s/ksoftirqd/run_ksoftirqd/

    * kernel/softlockup.c: s/(*)_timestamp/softlockup_\1_ts/
    s/watchdog_task/softlockup_watchdog/
    s/timestamp/ts/ for local variables

    * kernel/time/timer_stats: s/lookup_lock/tstats_lookup_lock/

    * mm/slab.c: s/reap_work/slab_reap_work/
    s/reap_node/slab_reap_node/

    * mm/vmstat.c: local variable changed to avoid collision with vmstat_work

    Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
    which cause name clashes" patch.

    Signed-off-by: Tejun Heo
    Acked-by: (slab/vmstat) Christoph Lameter
    Reviewed-by: Christoph Lameter
    Cc: Rusty Russell
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Andrew Morton
    Cc: Nick Piggin

    Tejun Heo
     

28 Oct, 2009

2 commits


22 Sep, 2009

1 commit

  • Sizing of memory allocations shouldn't depend on the number of physical
    pages found in a system, as that generally includes (perhaps a huge amount
    of) non-RAM pages. The amount of what actually is usable as storage
    should instead be used as a basis here.

    Some of the calculations (i.e. those not intending to use high memory)
    should likely even use (totalram_pages - totalhigh_pages).

    Signed-off-by: Jan Beulich
    Acked-by: Rusty Russell
    Acked-by: Ingo Molnar
    Cc: Dave Airlie
    Cc: Kyle McMartin
    Cc: Jeremy Fitzhardinge
    Cc: Pekka Enberg
    Cc: Hugh Dickins
    Cc: "David S. Miller"
    Cc: Patrick McHardy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
     

29 Jun, 2009

1 commit

  • Commit 8429db5... ("slab: setup cpu caches later on when interrupts are
    enabled") broke mm/slab.c lockdep annotations:

    [ 11.554715] =============================================
    [ 11.555249] [ INFO: possible recursive locking detected ]
    [ 11.555560] 2.6.31-rc1 #896
    [ 11.555861] ---------------------------------------------
    [ 11.556127] udevd/1899 is trying to acquire lock:
    [ 11.556436] (&nc->lock){-.-...}, at: [] kmem_cache_free+0xcd/0x25b
    [ 11.557101]
    [ 11.557102] but task is already holding lock:
    [ 11.557706] (&nc->lock){-.-...}, at: [] kfree+0x137/0x292
    [ 11.558109]
    [ 11.558109] other info that might help us debug this:
    [ 11.558720] 2 locks held by udevd/1899:
    [ 11.558983] #0: (&nc->lock){-.-...}, at: [] kfree+0x137/0x292
    [ 11.559734] #1: (&parent->list_lock){-.-...}, at: [] __drain_alien_cache+0x3b/0xbd
    [ 11.560442]
    [ 11.560443] stack backtrace:
    [ 11.561009] Pid: 1899, comm: udevd Not tainted 2.6.31-rc1 #896
    [ 11.561276] Call Trace:
    [ 11.561632] [] __lock_acquire+0x15ec/0x168f
    [ 11.561901] [] ? __lock_acquire+0x1676/0x168f
    [ 11.562171] [] ? trace_hardirqs_on_caller+0x113/0x13e
    [ 11.562490] [] ? trace_hardirqs_on_thunk+0x3a/0x3f
    [ 11.562807] [] lock_acquire+0xc1/0xe5
    [ 11.563073] [] ? kmem_cache_free+0xcd/0x25b
    [ 11.563385] [] _spin_lock+0x31/0x66
    [ 11.563696] [] ? kmem_cache_free+0xcd/0x25b
    [ 11.563964] [] kmem_cache_free+0xcd/0x25b
    [ 11.564235] [] ? __free_pages+0x1b/0x24
    [ 11.564551] [] slab_destroy+0x57/0x5c
    [ 11.564860] [] free_block+0xd8/0x123
    [ 11.565126] [] __drain_alien_cache+0xa2/0xbd
    [ 11.565441] [] kfree+0x14c/0x292
    [ 11.565752] [] skb_release_data+0xc6/0xcb
    [ 11.566020] [] __kfree_skb+0x19/0x86
    [ 11.566286] [] consume_skb+0x2b/0x2d
    [ 11.566631] [] skb_free_datagram+0x14/0x3a
    [ 11.566901] [] netlink_recvmsg+0x164/0x258
    [ 11.567170] [] sock_recvmsg+0xe5/0xfe
    [ 11.567486] [] ? might_fault+0xaf/0xb1
    [ 11.567802] [] ? autoremove_wake_function+0x0/0x38
    [ 11.568073] [] ? core_sys_select+0x3d/0x2b4
    [ 11.568378] [] ? __lock_acquire+0x1676/0x168f
    [ 11.568693] [] ? sockfd_lookup_light+0x1b/0x54
    [ 11.568961] [] sys_recvfrom+0xa3/0xf8
    [ 11.569228] [] ? trace_hardirqs_on+0xd/0xf
    [ 11.569546] [] system_call_fastpath+0x16/0x1b

    Fix that up.

    Closes-bug: http://bugzilla.kernel.org/show_bug.cgi?id=13654
    Tested-by: Venkatesh Pallipadi
    Signed-off-by: Pekka Enberg

    Pekka Enberg
     

26 Jun, 2009

1 commit


19 Jun, 2009

1 commit


17 Jun, 2009

4 commits

  • Pekka Enberg
     
  • * akpm: (182 commits)
    fbdev: bf54x-lq043fb: use kzalloc over kmalloc/memset
    fbdev: *bfin*: fix __dev{init,exit} markings
    fbdev: *bfin*: drop unnecessary calls to memset
    fbdev: bfin-t350mcqb-fb: drop unused local variables
    fbdev: blackfin has __raw I/O accessors, so use them in fb.h
    fbdev: s1d13xxxfb: add accelerated bitblt functions
    tcx: use standard fields for framebuffer physical address and length
    fbdev: add support for handoff from firmware to hw framebuffers
    intelfb: fix a bug when changing video timing
    fbdev: use framebuffer_release() for freeing fb_info structures
    radeon: P2G2CLK_ALWAYS_ONb tested twice, should 2nd be P2G2CLK_DAC_ALWAYS_ONb?
    s3c-fb: CPUFREQ frequency scaling support
    s3c-fb: fix resource releasing on error during probing
    carminefb: fix possible access beyond end of carmine_modedb[]
    acornfb: remove fb_mmap function
    mb862xxfb: use CONFIG_OF instead of CONFIG_PPC_OF
    mb862xxfb: restrict compliation of platform driver to PPC
    Samsung SoC Framebuffer driver: add Alpha Channel support
    atmel-lcdc: fix pixclock upper bound detection
    offb: use framebuffer_alloc() to allocate fb_info struct
    ...

    Manually fix up conflicts due to kmemcheck in mm/slab.c

    Linus Torvalds
     
  • SLAB currently avoids checking a bitmap repeatedly by checking once and
    storing a flag. With the addition of nr_online_nodes as a cheaper version
    of num_online_nodes(), this check can be replaced by nr_online_nodes.

    (This is lifted almost verbatim from a patch by Christoph.)

    Signed-off-by: Christoph Lameter
    Signed-off-by: Mel Gorman
    Cc: KOSAKI Motohiro
    Reviewed-by: Pekka Enberg
    Cc: Peter Zijlstra
    Cc: Nick Piggin
    Cc: Dave Hansen
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Callers of alloc_pages_node() can optionally specify -1 as a node to mean
    "allocate from the current node". However, a number of the callers in
    fast paths know for a fact their node is valid. To avoid a comparison and
    branch, this patch adds alloc_pages_exact_node() that only checks the nid
    with VM_BUG_ON(). Callers that know their node is valid are then
    converted.

    Signed-off-by: Mel Gorman
    Reviewed-by: Christoph Lameter
    Reviewed-by: KOSAKI Motohiro
    Reviewed-by: Pekka Enberg
    Acked-by: Paul Mundt [for the SLOB NUMA bits]
    Cc: Peter Zijlstra
    Cc: Nick Piggin
    Cc: Dave Hansen
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

15 Jun, 2009

3 commits


13 Jun, 2009

1 commit

  • Move the SLAB struct kmem_cache definition to like
    with SLUB so kmemcheck can access ->ctor and ->flags.

    Cc: Ingo Molnar
    Cc: Christoph Lameter
    Cc: Andrew Morton
    Signed-off-by: Pekka Enberg

    [rebased for mainline inclusion]
    Signed-off-by: Vegard Nossum

    Pekka Enberg
     

12 Jun, 2009

6 commits

  • Fixes the following boot-time warning:

    [ 0.000000] ------------[ cut here ]------------
    [ 0.000000] WARNING: at kernel/smp.c:369 smp_call_function_many+0x56/0x1bc()
    [ 0.000000] Hardware name:
    [ 0.000000] Modules linked in:
    [ 0.000000] Pid: 0, comm: swapper Not tainted 2.6.30 #492
    [ 0.000000] Call Trace:
    [ 0.000000] [] ? _spin_unlock+0x4f/0x5c
    [ 0.000000] [] ? smp_call_function_many+0x56/0x1bc
    [ 0.000000] [] warn_slowpath_common+0x7c/0xa9
    [ 0.000000] [] warn_slowpath_null+0x14/0x16
    [ 0.000000] [] smp_call_function_many+0x56/0x1bc
    [ 0.000000] [] ? do_ccupdate_local+0x0/0x54
    [ 0.000000] [] ? do_ccupdate_local+0x0/0x54
    [ 0.000000] [] smp_call_function+0x3d/0x68
    [ 0.000000] [] ? do_ccupdate_local+0x0/0x54
    [ 0.000000] [] on_each_cpu+0x31/0x7c
    [ 0.000000] [] do_tune_cpucache+0x119/0x454
    [ 0.000000] [] ? lockdep_init_map+0x94/0x10b
    [ 0.000000] [] ? kmem_cache_init+0x421/0x593
    [ 0.000000] [] enable_cpucache+0x68/0xad
    [ 0.000000] [] kmem_cache_init+0x434/0x593
    [ 0.000000] [] ? mem_init+0x156/0x161
    [ 0.000000] [] start_kernel+0x1cc/0x3b9
    [ 0.000000] [] x86_64_start_reservations+0xaa/0xae
    [ 0.000000] [] x86_64_start_kernel+0xe1/0xe8
    [ 0.000000] ---[ end trace 4eaa2a86a8e2da22 ]---

    Cc: Christoph Lameter
    Cc: Nick Piggin
    Signed-off-by: Pekka Enberg

    Pekka Enberg
     
  • As explained by Benjamin Herrenschmidt:

    Oh and btw, your patch alone doesn't fix powerpc, because it's missing
    a whole bunch of GFP_KERNEL's in the arch code... You would have to
    grep the entire kernel for things that check slab_is_available() and
    even then you'll be missing some.

    For example, slab_is_available() didn't always exist, and so in the
    early days on powerpc, we used a mem_init_done global that is set from
    mem_init() (not perfect but works in practice). And we still have code
    using that to do the test.

    Therefore, mask out __GFP_WAIT, __GFP_IO, and __GFP_FS in the slab allocators
    in early boot code to avoid enabling interrupts.

    Signed-off-by: Pekka Enberg

    Pekka Enberg
     
  • Fixes the following warning during bootup when compiling with CONFIG_SLAB:

    [ 0.000000] ------------[ cut here ]------------
    [ 0.000000] WARNING: at kernel/lockdep.c:2282 lockdep_trace_alloc+0x91/0xb9()
    [ 0.000000] Hardware name:
    [ 0.000000] Modules linked in:
    [ 0.000000] Pid: 0, comm: swapper Not tainted 2.6.30 #491
    [ 0.000000] Call Trace:
    [ 0.000000] [] ? lockdep_trace_alloc+0x91/0xb9
    [ 0.000000] [] warn_slowpath_common+0x7c/0xa9
    [ 0.000000] [] warn_slowpath_null+0x14/0x16
    [ 0.000000] [] lockdep_trace_alloc+0x91/0xb9
    [ 0.000000] [] kmem_cache_alloc_node_notrace+0x26/0xdf
    [ 0.000000] [] ? setup_cpu_cache+0x7e/0x210
    [ 0.000000] [] setup_cpu_cache+0x113/0x210
    [ 0.000000] [] kmem_cache_create+0x409/0x486
    [ 0.000000] [] kmem_cache_init+0x232/0x593
    [ 0.000000] [] ? mem_init+0x156/0x161
    [ 0.000000] [] start_kernel+0x1cc/0x3b9
    [ 0.000000] [] x86_64_start_reservations+0xaa/0xae
    [ 0.000000] [] x86_64_start_kernel+0xe1/0xe8
    [ 0.000000] ---[ end trace 4eaa2a86a8e2da22 ]---

    Signed-off-by: Pekka Enberg

    Pekka Enberg
     
  • * 'for-linus' of git://linux-arm.org/linux-2.6:
    kmemleak: Add the corresponding MAINTAINERS entry
    kmemleak: Simple testing module for kmemleak
    kmemleak: Enable the building of the memory leak detector
    kmemleak: Remove some of the kmemleak false positives
    kmemleak: Add modules support
    kmemleak: Add kmemleak_alloc callback from alloc_large_system_hash
    kmemleak: Add the vmalloc memory allocation/freeing hooks
    kmemleak: Add the slub memory allocation/freeing hooks
    kmemleak: Add the slob memory allocation/freeing hooks
    kmemleak: Add the slab memory allocation/freeing hooks
    kmemleak: Add documentation on the memory leak detector
    kmemleak: Add the base support

    Manual conflict resolution (with the slab/earlyboot changes) in:
    drivers/char/vt.c
    init/main.c
    mm/slab.c

    Linus Torvalds
     
  • This patch makes kmalloc() available earlier in the boot sequence so we can get
    rid of some bootmem allocations. The bulk of the changes are due to
    kmem_cache_init() being called with interrupts disabled which requires some
    changes to the allocator bootstrap code.

    Note: 32-bit x86 does WP protect test in mem_init() so we must setup traps
    before we call mem_init() during boot as reported by Ingo Molnar:

    We have a hard crash in the WP-protect code:

    [ 0.000000] Checking if this processor honours the WP bit even in supervisor mode...BUG: Int 14: CR2 ffcff000
    [ 0.000000] EDI 00000188 ESI 00000ac7 EBP c17eaf9c ESP c17eaf8c
    [ 0.000000] EBX 000014e0 EDX 0000000e ECX 01856067 EAX 00000001
    [ 0.000000] err 00000003 EIP c10135b1 CS 00000060 flg 00010002
    [ 0.000000] Stack: c17eafa8 c17fd410 c16747bc c17eafc4 c17fd7e5 000011fd f8616000 c18237cc
    [ 0.000000] 00099800 c17bb000 c17eafec c17f1668 000001c5 c17f1322 c166e039 c1822bf0
    [ 0.000000] c166e033 c153a014 c18237cc 00020800 c17eaff8 c17f106a 00020800 01ba5003
    [ 0.000000] Pid: 0, comm: swapper Not tainted 2.6.30-tip-02161-g7a74539-dirty #52203
    [ 0.000000] Call Trace:
    [ 0.000000] [] ? printk+0x14/0x16
    [ 0.000000] [] ? do_test_wp_bit+0x19/0x23
    [ 0.000000] [] ? test_wp_bit+0x26/0x64
    [ 0.000000] [] ? mem_init+0x1ba/0x1d8
    [ 0.000000] [] ? start_kernel+0x164/0x2f7
    [ 0.000000] [] ? unknown_bootoption+0x0/0x19c
    [ 0.000000] [] ? __init_begin+0x6a/0x6f

    Acked-by: Johannes Weiner
    Acked-by: Linus Torvalds
    Cc: Christoph Lameter
    Cc: Ingo Molnar
    Cc: Matt Mackall
    Cc: Nick Piggin
    Cc: Yinghai Lu
    Signed-off-by: Pekka Enberg

    Pekka Enberg
     
  • This patch adds the callbacks to kmemleak_(alloc|free) functions from
    the slab allocator. The patch also adds the SLAB_NOLEAKTRACE flag to
    avoid recursive calls to kmemleak when it allocates its own data
    structures.

    Signed-off-by: Catalin Marinas
    Reviewed-by: Pekka Enberg

    Catalin Marinas
     

22 May, 2009

1 commit

  • A generic page poisoning mechanism was added with commit
    6a11f75b6a17b5d9ac5025f8d048382fd1f47377, which destructively
    poisons full pages with a bit pattern.

    On arches where PAGE_POISONING is used, this conflicts with the slab
    redzone checking enabled by DEBUG_SLAB, scribbling bits all over its
    magic words and making it complain about that quite emphatically.

    On x86 (and I presume at present all the other arches which set
    ARCH_SUPPORTS_DEBUG_PAGEALLOC too), the kernel_map_pages() operation
    is non destructive so it can coexist with the other DEBUG_SLAB
    mechanisms just fine.

    This patch favours the expensive full page destruction test for
    cases where there is a collision and it is explicitly selected.

    Signed-off-by: Ron Lee
    Signed-off-by: Pekka Enberg

    Ron Lee
     

12 Apr, 2009

1 commit

  • Impact: refactor code for future changes

    Current kmemtrace.h is used both as header file of kmemtrace and kmem's
    tracepoints definition.

    Tracepoints' definition file may be used by other code, and should only have
    definition of tracepoint.

    We can separate include/trace/kmemtrace.h into 2 files:

    include/linux/kmemtrace.h: header file for kmemtrace
    include/trace/kmem.h: definition of kmem tracepoints

    Signed-off-by: Zhao Lei
    Acked-by: Eduard - Gabriel Munteanu
    Acked-by: Pekka Enberg
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Tom Zanussi
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Zhaolei