09 Jun, 2010

1 commit

  • For people who otherwise get to write: cpu_clock(smp_processor_id()),
    there is now: local_clock().

    Also, as per suggestion from Andrew, provide some documentation on
    the various clock interfaces, and minimize the unsigned long long vs
    u64 mess.
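
    A minimal sketch of the change from a caller's side (illustrative
    use, not the actual patch):

    #include <linux/sched.h>	/* clock interface declarations */
    #include <linux/smp.h>

    static u64 sample_timestamp(void)
    {
        /* Before: the CPU had to be named explicitly. */
        u64 t_old = cpu_clock(smp_processor_id());

        /* After: same semantics for the current CPU. */
        u64 t_new = local_clock();

        return t_new - t_old;
    }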

    Signed-off-by: Peter Zijlstra
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Jens Axboe
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

22 May, 2010

1 commit

  • The conversion of device->sem to device->mutex resulted in lockdep
    warnings. Create a novalidate class for now until the driver folks
    come up with separate classes. That way we have at least the basic
    mutex debugging coverage.

    Add a checkpatch error so the usage is reserved for device->mutex.
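
    A minimal sketch of how the class is applied at the one blessed
    site (assuming the lockdep_set_novalidate_class() helper this
    change introduces):

    #include <linux/mutex.h>
    #include <linux/lockdep.h>

    static void init_device_mutex(struct mutex *m)
    {
        mutex_init(m);
        /* opt out of lock-order validation, keep basic debugging */
        lockdep_set_novalidate_class(m);
    }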

    [ tglx: checkpatch and compile fix for LOCKDEP=n ]

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Peter Zijlstra
     

18 May, 2010

2 commits

  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (311 commits)
    perf tools: Add mode to build without newt support
    perf symbols: symbol inconsistency message should be done only at verbose=1
    perf tui: Add explicit -lslang option
    perf options: Type check all the remaining OPT_ variants
    perf options: Type check OPT_BOOLEAN and fix the offenders
    perf options: Check v type in OPT_U?INTEGER
    perf options: Introduce OPT_UINTEGER
    perf tui: Add workaround for slang < 2.1.4
    perf record: Fix bug mismatch with -c option definition
    perf options: Introduce OPT_U64
    perf tui: Add help window to show key associations
    perf tui: Make <- exit menus too
    perf newt: Add single key shortcuts for zoom into DSO and threads
    perf newt: Exit browser unconditionally when CTRL+C, q or Q is pressed
    perf newt: Fix the 'A'/'a' shortcut for annotate
    perf newt: Make <- exit the ui_browser
    x86, perf: P4 PMU - fix counters management logic
    perf newt: Make <- zoom out filters
    perf report: Report number of events, not samples
    perf hist: Clarify events_stats fields usage
    ...

    Fix up trivial conflicts in kernel/fork.c and tools/perf/builtin-record.c

    Linus Torvalds
     
  • * 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (24 commits)
    rcu: remove all rcu head initializations, except on_stack initializations
    rcu head introduce rcu head init on stack
    Debugobjects transition check
    rcu: fix build bug in RCU_FAST_NO_HZ builds
    rcu: RCU_FAST_NO_HZ must check RCU dyntick state
    rcu: make SRCU usable in modules
    rcu: improve the RCU CPU-stall warning documentation
    rcu: reduce the number of spurious RCU_SOFTIRQ invocations
    rcu: permit discontiguous cpu_possible_mask CPU numbering
    rcu: improve RCU CPU stall-warning messages
    rcu: print boot-time console messages if RCU configs out of ordinary
    rcu: disable CPU stall warnings upon panic
    rcu: enable CPU_STALL_VERBOSE by default
    rcu: slim down rcutiny by removing rcu_scheduler_active and friends
    rcu: refactor RCU's context-switch handling
    rcu: rename rcutiny rcu_ctrlblk to rcu_sched_ctrlblk
    rcu: shrink rcutiny by making synchronize_rcu_bh() be inline
    rcu: fix now-bogus rcu_scheduler_active comments.
    rcu: Fix bogus CONFIG_PROVE_LOCKING in comments to reflect reality.
    rcu: ignore offline CPUs in last non-dyntick-idle CPU check
    ...

    Linus Torvalds
     

11 May, 2010

1 commit

  • There is no need to disable lockdep after an RCU lockdep splat,
    so remove the debug_lockdeps_off() from lockdep_rcu_dereference().
    To avoid repeated lockdep splats, use a static variable in the inlined
    rcu_dereference_check() and rcu_dereference_protected() macros so that
    a given instance splats only once, but so that multiple instances can
    be detected per boot.

    This is controlled by a new config variable CONFIG_PROVE_RCU_REPEATEDLY,
    which is disabled by default. This provides the normal lockdep behavior
    by default, but permits people who want to find multiple RCU-lockdep
    splats per boot to easily do so.
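
    A sketch of the once-per-callsite pattern (macro name and body
    illustrative, not the exact kernel code):

    #include <linux/rcupdate.h>	/* rcu_dereference_raw() */
    #include <linux/lockdep.h>	/* lockdep_rcu_dereference() */

    #define my_rcu_dereference_check(p, c)				\
    ({									\
        static bool __warned;	/* one flag per expansion site */	\
        if (!(c) && !__warned) {					\
            __warned = true;					\
            lockdep_rcu_dereference(__FILE__, __LINE__);		\
        }								\
        rcu_dereference_raw(p);						\
    })

    The static flag limits each callsite to a single splat while still
    letting other callsites report.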

    Requested-by: Eric Paris
    Signed-off-by: Lai Jiangshan
    Tested-by: Eric Paris
    Signed-off-by: Paul E. McKenney

    Lai Jiangshan
     

07 May, 2010

1 commit

  • When calling check_prevs_add(), if all validations passed
    add_lock_to_list() will add new lock to dependency tree and
    alloc stack_trace for each list_entry.

    But at this time, we are always on the same stack, so the
    stack_trace for each list_entry has the same value. This is
    redundant and eats up lots of memory, which can lead to a warning
    about running low on MAX_STACK_TRACE_ENTRIES.

    Use one copy of stack_trace instead.
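
    A conceptual userspace sketch of that sharing (types hypothetical):

    #include <stdio.h>

    struct stack_trace { unsigned int nr_entries; /* ... */ };
    struct lock_list  { const struct stack_trace *trace; /* ... */ };

    int main(void)
    {
        /* one snapshot captured per validation pass ... */
        static struct stack_trace shared = { .nr_entries = 3 };

        /* ... referenced by every entry added in that pass */
        struct lock_list forw = { .trace = &shared };
        struct lock_list back = { .trace = &shared };

        printf("one copy: %d\n", forw.trace == back.trace);
        return 0;
    }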

    V2: As suggested by Peter Zijlstra, move save_trace() from
    check_prevs_add() to check_prev_add().
    Also add tracking for trylock dependencies, which are redundant in
    the same way.

    Signed-off-by: Yong Zhang
    Cc: David S. Miller
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Yong Zhang
     

04 May, 2010

1 commit

  • We forgot to provide a !CONFIG_DEBUG_LOCKDEP case for the
    redundant_hardirqs_on stat handling.

    Manage that in the headers with a new __debug_atomic_inc() helper.
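
    A sketch of the header-side arrangement (assuming the per-cpu
    lockdep_stats structure from the earlier conversion; not the
    verbatim patch):

    #ifdef CONFIG_DEBUG_LOCKDEP
    # define __debug_atomic_inc(field)		\
        this_cpu_inc(lockdep_stats.field)
    #else
    # define __debug_atomic_inc(field)	do { } while (0)
    #endif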

    Fixes:

    kernel/lockdep.c:2306: error: 'lockdep_stats' undeclared (first use in this function)
    kernel/lockdep.c:2306: error: (Each undeclared identifier is reported only once
    kernel/lockdep.c:2306: error: for each function it appears in.)

    Reported-by: Ingo Molnar
    Signed-off-by: Frederic Weisbecker
    Cc: Peter Zijlstra

    Frederic Weisbecker
     

06 Apr, 2010

2 commits

    Locking statistics are implemented using global atomic
    variables. This is usually fine unless some path writes them very
    often.

    This is the case for the function and function-graph tracers,
    which disable irqs for each entry saved (except if the function
    tracer is in preempt-disabled-only mode). Calls to
    local_irq_save/restore() then increment the hardirqs_on_events
    and hardirqs_off_events stats (or the equivalent stats for the
    redundant versions).

    Incrementing these global vars for each function ends up causing
    too much cache bouncing if lockstats are enabled.

    To solve this, implement the debug_atomic_*() operations using
    per cpu vars.
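
    A sketch of the per-cpu scheme (field names per the description
    above; simplified):

    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    struct lockdep_stats {
        unsigned long hardirqs_on_events;
        unsigned long hardirqs_off_events;
        /* ... */
    };
    DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats);

    /* writers only touch their own CPU's counters */
    #define debug_atomic_inc(field)	this_cpu_inc(lockdep_stats.field)

    /* readers sum the per-cpu copies */
    #define debug_atomic_read(field) ({				\
        unsigned long __total = 0;				\
        int __cpu;						\
        for_each_possible_cpu(__cpu)				\
            __total += per_cpu(lockdep_stats, __cpu).field;	\
        __total;						\
    })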

    -v2: Use per_cpu() instead of get_cpu_var() to fetch the desired
    cpu vars on debug_atomic_read()

    -v3: Store the stats in a structure. No need for local_t as we
    are NMI/irq safe.

    -v4: Fix tons of build errors. I thought I had tested it but I
    probably forgot to select the relevant config.

    Suggested-by: Steven Rostedt
    Signed-off-by: Frederic Weisbecker
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     
  • * 'slabh' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc:
    eeepc-wmi: include slab.h
    staging/otus: include slab.h from usbdrv.h
    percpu: don't implicitly include slab.h from percpu.h
    kmemcheck: Fix build errors due to missing slab.h
    include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
    iwlwifi: don't include iwl-dev.h from iwl-devtrace.h
    x86: don't include slab.h from arch/x86/include/asm/pgtable_32.h

    Fix up trivial conflicts in include/linux/percpu.h due to
    is_kernel_percpu_address() having been introduced since the slab.h
    cleanup with the percpu_up.c splitup.

    Linus Torvalds
     

30 Mar, 2010

1 commit

  • include cleanup: Update gfp.h and slab.h includes to prepare for
    breaking implicit slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    The percpu.h -> slab.h dependency is about to be removed. Prepare
    for this change by updating users of gfp and slab facilities to
    include those headers directly instead of assuming their
    availability. As this conversion needs to touch a large number of
    source files, the following script is used as the basis of the
    conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h (see the sketch after this list).

    * When the script inserts a new include, it looks at the include
    blocks and tries to place the new include so that its order
    conforms to its surroundings. It's put in the include block which
    contains core kernel includes, in the same order that the rest are
    ordered: alphabetical, Christmas tree, reverse Christmas tree, or
    at the end if there doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints
    out an error message indicating which .h file needs to be added to
    the file.
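
    A typical automatic edit has this shape (file contents
    illustrative):

    /* foo.c calls kmalloc()/kfree() but previously got slab.h only
     * indirectly through percpu.h: */
    #include <linux/kernel.h>
    #include <linux/slab.h>		/* added by the script */
    #include <linux/percpu.h>	/* will no longer pull in slab.h */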

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, and for others adding it to an
    implementation .h or the embedding .c file was more appropriate.
    This step added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files, but without automatically
    editing them, as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored, as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and
    failures were fixed. CONFIG_GCOV_KERNEL was turned off for all
    tests (as my distributed build env didn't work with gcov compiles)
    and a few more options had to be turned off depending on the arch
    to make things build (like ipr on powerpc/64, which failed due to
    missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given that I had only a couple of failures from the tests in step
    7, I'm fairly confident about the coverage of this conversion
    patch. If there is a breakage, it's likely to be something in one
    of the arch headers, which should be easily discoverable on most
    builds of the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

29 Mar, 2010

1 commit

    lockdep has custom code to check whether a pointer belongs to a
    static percpu area, and that code is somewhat broken. Implement
    proper is_kernel/module_percpu_address() and replace the custom
    code.

    On UP, percpu variables are regular static variables and can't be
    distinguished from them. Always return %false on UP.
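
    A sketch of the replacement check in lockdep's static_obj()
    (is_kernel_percpu_address() is the new helper; the surrounding
    logic is abridged):

    static int static_obj(void *obj)
    {
        unsigned long addr = (unsigned long)obj;

        /* static percpu variables are valid lock objects */
        if (is_kernel_percpu_address(addr))
            return 1;

        /* ... usual .data/.bss and module checks ... */
        return 0;
    }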

    Signed-off-by: Tejun Heo
    Acked-by: Peter Zijlstra
    Cc: Rusty Russell
    Cc: Ingo Molnar

    Tejun Heo
     

19 Mar, 2010

1 commit

  • Merge branch 'perf-fixes-for-linus' of
    git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

    * 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (35 commits)
    perf: Fix unexported generic perf_arch_fetch_caller_regs
    perf record: Don't try to find buildids in a zero sized file
    perf: export perf_trace_regs and perf_arch_fetch_caller_regs
    perf, x86: Fix hw_perf_enable() event assignment
    perf, ppc: Fix compile error due to new cpu notifiers
    perf: Make the install relative to DESTDIR if specified
    kprobes: Calculate the index correctly when freeing the out-of-line execution slot
    perf tools: Fix sparse CPU numbering related bugs
    perf_event: Fix oops triggered by cpu offline/online
    perf: Drop the obsolete profile naming for trace events
    perf: Take a hot regs snapshot for trace events
    perf: Introduce new perf_fetch_caller_regs() for hot regs snapshot
    perf/x86-64: Use frame pointer to walk on irq and process stacks
    lockdep: Move lock events under lockdep recursion protection
    perf report: Print the map table just after samples for which no map was found
    perf report: Add multiple event support
    perf session: Change perf_session post processing functions to take histogram tree
    perf session: Add storage for seperating event types in report
    perf session: Change add_hist_entry to take the tree root instead of session
    perf record: Add ID and to recorded event data when recording multiple events
    ...

    Linus Torvalds
     

10 Mar, 2010

1 commit

    There are RCU-locked read-side areas in the path where we submit
    a trace event, and these rcu_read_(un)lock() calls trigger lock
    events, which create recursive events.

    One pair in do_perf_sw_event:

    __lock_acquire
    |
    |--96.11%-- lock_acquire
    | |
    | |--27.21%-- do_perf_sw_event
    | | perf_tp_event
    | | |
    | | |--49.62%-- ftrace_profile_lock_release
    | | | lock_release
    | | | |
    | | | |--33.85%-- _raw_spin_unlock

    Another pair in perf_output_begin/end:

    __lock_acquire
    |--23.40%-- perf_output_begin
    | | __perf_event_overflow
    | | perf_swevent_overflow
    | | perf_swevent_add
    | | perf_swevent_ctx_event
    | | do_perf_sw_event
    | | perf_tp_event
    | | |
    | | |--55.37%-- ftrace_profile_lock_acquire
    | | | lock_acquire
    | | | |
    | | | |--37.31%-- _raw_spin_lock

    The problem is not so much the trace recursion itself, as we
    already have recursion protection (though it's always wasteful to
    recurse). But the trace events are outside the lockdep recursion
    protection, so each lockdep event triggers a lock trace, which in
    turn triggers two other lockdep events. Here the recursive lock
    trace event won't be taken because of the trace recursion, so the
    recursion stops there, but lockdep will still analyse these new
    events:

    To sum up, for each lockdep event we have:

    lock_*()
    |
    trace lock_acquire
    |
    ----- rcu_read_lock()
    | |
    | lock_acquire()
    | |
    | trace_lock_acquire() (stopped)
    | |
    | lockdep analyze
    |
    ----- rcu_read_unlock()
    |
    lock_release
    |
    trace_lock_release() (stopped)
    |
    lockdep analyze

    And you can repeat the above twice, as we have two RCU read-side
    sections when we submit an event.

    This is fixed in this patch by moving the lock trace event under
    the lockdep recursion protection.
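
    A simplified sketch of the reordering in lock_acquire() (argument
    lists abridged):

    void lock_acquire(struct lockdep_map *lock /* , ... */)
    {
        unsigned long flags;

        if (unlikely(current->lockdep_recursion))
            return;

        raw_local_irq_save(flags);
        check_flags(flags);

        current->lockdep_recursion = 1;
        /* moved: the tracepoint now fires inside the guard, so its
         * rcu_read_lock()/unlock() can't feed back into lockdep */
        trace_lock_acquire(lock /* , ... */);
        __lock_acquire(lock /* , ... */);
        current->lockdep_recursion = 0;
        raw_local_irq_restore(flags);
    }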

    Signed-off-by: Frederic Weisbecker
    Cc: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: Steven Rostedt
    Cc: Paul Mackerras
    Cc: Hitoshi Mitake
    Cc: Li Zefan
    Cc: Lai Jiangshan
    Cc: Masami Hiramatsu
    Cc: Jens Axboe

    Frederic Weisbecker
     

04 Mar, 2010

1 commit

    Add the values of rcu_scheduler_active and debug_locks to the
    lockdep_rcu_dereference() output to help diagnose RCU lockdep
    splats that occur shortly after the scheduler starts.
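
    The added output is essentially one line (placement inside
    lockdep_rcu_dereference() illustrative):

    printk("rcu_scheduler_active = %d, debug_locks = %d\n",
           rcu_scheduler_active, debug_locks);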

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

26 Feb, 2010

1 commit

  • Change from "unsafe" to "suspicious", given that there will be
    false alarms.

    Suggested-by: Ingo Molnar
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

25 Feb, 2010

1 commit

  • Make rcu_dereference_check() print the list of held locks in
    addition to the stack dump to ease debugging.
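
    A sketch of the resulting diagnostic sequence
    (lockdep_print_held_locks() is the existing lockdep helper
    presumably wired in here):

    /* inside the splat path: */
    lockdep_print_held_locks(current);	/* what we hold */
    dump_stack();				/* where we are */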

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

27 Jan, 2010

1 commit

  • Lockdep has found the real bug, but the output doesn't look right to me:

    > =========================================================
    > [ INFO: possible irq lock inversion dependency detected ]
    > 2.6.33-rc5 #77
    > ---------------------------------------------------------
    > emacs/1609 just changed the state of lock:
    > (&(&tty->ctrl_lock)->rlock){+.....}, at: [] tty_fasync+0xe8/0x190
    > but this lock took another, HARDIRQ-unsafe lock in the past:
    > (&(&sighand->siglock)->rlock){-.....}

    "HARDIRQ-unsafe" and "this lock took another" looks wrong, afaics.

    > ... key at: [] __key.46539+0x0/0x8
    > ... acquired at:
    > [] __lock_acquire+0x1056/0x15a0
    > [] lock_acquire+0x9f/0x120
    > [] _raw_spin_lock_irqsave+0x52/0x90
    > [] __proc_set_tty+0x3e/0x150
    > [] tty_open+0x51d/0x5e0

    The stack-trace shows that this lock (ctrl_lock) was taken under
    ->siglock (which is hopefully irq-safe).

    This is a clear typo in check_usage_backwards(), where we tell the
    fancy print routine we're going forwards.
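
    The fix is essentially flipping one argument; a sketch (the
    'forwards' parameter of print_irq_inversion_bug() per the lockdep
    source):

    /* check_usage_backwards() walks the graph backwards, so it must
     * not tell the reporting routine it went forwards: */
    return print_irq_inversion_bug(curr, &root, target_entry,
                       this, 0 /* was 1 */, irqclass);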

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Oleg Nesterov
     

15 Dec, 2009

4 commits

    Namespace cleanup. No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: David S. Miller
    Acked-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org

    Thomas Gleixner
     
    Further namespace cleanup. No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: David S. Miller
    Acked-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org

    Thomas Gleixner
     
    The raw_spin* namespace was taken by lockdep for the
    architecture-specific implementations. raw_spin_* would be the
    ideal namespace for the spinlocks which are not converted to
    sleeping locks in preempt-rt.

    Linus suggested converting the raw_ locks to arch_ locks and
    cleaning up the namespace instead of using an artificial name like
    core_spin, atomic_spin or whatever.

    No functional change.
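
    A sketch of the post-rename arch-level API:

    #include <linux/spinlock.h>

    static arch_spinlock_t low_level_lock = __ARCH_SPIN_LOCK_UNLOCKED;

    static void touch_low_level_state(void)
    {
        arch_spin_lock(&low_level_lock);	/* was: __raw_spin_lock() */
        /* ... lockdep-free critical section ... */
        arch_spin_unlock(&low_level_lock);
    }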

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: David S. Miller
    Acked-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org

    Thomas Gleixner
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
    m68k: rename global variable vmalloc_end to m68k_vmalloc_end
    percpu: add missing per_cpu_ptr_to_phys() definition for UP
    percpu: Fix kdump failure if booted with percpu_alloc=page
    percpu: make misc percpu symbols unique
    percpu: make percpu symbols in ia64 unique
    percpu: make percpu symbols in powerpc unique
    percpu: make percpu symbols in x86 unique
    percpu: make percpu symbols in xen unique
    percpu: make percpu symbols in cpufreq unique
    percpu: make percpu symbols in oprofile unique
    percpu: make percpu symbols in tracer unique
    percpu: make percpu symbols under kernel/ and mm/ unique
    percpu: remove some sparse warnings
    percpu: make alloc_percpu() handle array types
    vmalloc: fix use of non-existent percpu variable in put_cpu_var()
    this_cpu: Use this_cpu_xx in trace_functions_graph.c
    this_cpu: Use this_cpu_xx for ftrace
    this_cpu: Use this_cpu_xx in nmi handling
    this_cpu: Use this_cpu operations in RCU
    this_cpu: Use this_cpu ops for VM statistics
    ...

    Fix up trivial (famous last words) global per-cpu naming conflicts in
    arch/x86/kvm/svm.c
    mm/slab.c

    Linus Torvalds
     

10 Dec, 2009

1 commit

  • ia64 found this the hard way (because we currently have a stub
    for save_stack_trace() that does nothing). But it would be a
    good idea to be cautious in case a real save_stack_trace()
    bailed out with an error before it set trace->nr_entries.
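
    A sketch of the kind of guard this implies in save_trace()
    (assuming lockdep's trailing-entry trimming; illustrative):

    /* don't index entries[nr_entries - 1] when a stubbed or failing
     * save_stack_trace() left nr_entries at zero: */
    if (trace->nr_entries != 0 &&
        trace->entries[trace->nr_entries - 1] == ULONG_MAX)
        trace->nr_entries--;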

    Signed-off-by: Tony Luck
    Acked-by: Peter Zijlstra
    Cc: luming.yu@intel.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Luck, Tony
     

06 Dec, 2009

1 commit

  • Fix min, max times in /proc/lock_stats

    (1) When collecting lock hold and wait times, if the current minimum
    time is zero, it will be replaced by the next time.

    (2) When aggregating minimum and maximum lock hold and wait times
    across cpus, the values are added instead of selecting the
    minimum and maximum.
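
    A sketch of the corrected aggregation (struct lock_time fields per
    lockstat; simplified):

    static void lock_time_add(struct lock_time *src,
                  struct lock_time *dst)
    {
        if (src->max > dst->max)	/* was: dst->max += src->max */
            dst->max = src->max;

        if (src->min < dst->min || !dst->nr)	/* don't let 0 win */
            dst->min = src->min;

        dst->total += src->total;
        dst->nr += src->nr;
    }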

    Signed-off-by: Frank Rowand
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Frank Rowand
     

13 Nov, 2009

1 commit

    The lockdep events subsystem gathers various locking-related
    events such as a request, release, contention or acquisition of a
    lock.

    The name of this event subsystem is a bit of a misnomer, since
    these events are not quite related to lockdep but more generally
    to locking, i.e. these events do not report lock dependencies or
    possible deadlock scenarios, but pure locking events.

    Hence this rename.
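
    In tracepoint terms the rename boils down to (header path
    illustrative):

    /* include/trace/events/lock.h, formerly lockdep.h */
    #undef TRACE_SYSTEM
    #define TRACE_SYSTEM lock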

    Signed-off-by: Frederic Weisbecker
    Acked-by: Peter Zijlstra
    Acked-by: Hitoshi Mitake
    Cc: Arnaldo Carvalho de Melo
    Cc: Mike Galbraith
    Cc: Paul Mackerras
    Cc: Steven Rostedt
    Cc: Li Zefan
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

29 Oct, 2009

1 commit

    This patch updates percpu-related symbols under kernel/ and mm/
    such that percpu symbols are unique and don't clash with local
    symbols. This serves two purposes: decreasing the possibility of
    global percpu symbol collisions, and allowing the per_cpu__ prefix
    to be dropped from percpu symbols.

    * kernel/lockdep.c: s/lock_stats/cpu_lock_stats/

    * kernel/sched.c: s/init_rq_rt/init_rt_rq_var/ (any better idea?)
    s/sched_group_cpus/sched_groups/

    * kernel/softirq.c: s/ksoftirqd/run_ksoftirqd/

    * kernel/softlockup.c: s/(*)_timestamp/softlockup_\1_ts/
    s/watchdog_task/softlockup_watchdog/
    s/timestamp/ts/ for local variables

    * kernel/time/timer_stats: s/lookup_lock/tstats_lookup_lock/

    * mm/slab.c: s/reap_work/slab_reap_work/
    s/reap_node/slab_reap_node/

    * mm/vmstat.c: local variable changed to avoid collision with vmstat_work

    Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
    which cause name clashes" patch.

    Signed-off-by: Tejun Heo
    Acked-by: (slab/vmstat) Christoph Lameter
    Reviewed-by: Christoph Lameter
    Cc: Rusty Russell
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Andrew Morton
    Cc: Nick Piggin

    Tejun Heo
     

09 Oct, 2009

1 commit

  • Some tracepoint magic (TRACE_EVENT(lock_acquired)) relies on
    the fact that lock hold times are positive and uses div64 on
    that. That triggered a build warning on MIPS, and probably
    causes bad output in certain circumstances as well.

    Make it truly positive.
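
    A sketch of the fix: stamp lock times with a clock that is
    monotonic on the local CPU (helper shape per the description):

    static u64 lockstat_clock(void)
    {
        /* per-cpu clock: hold/wait deltas can't go negative */
        return cpu_clock(smp_processor_id());
    }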

    Reported-by: Andrew Morton
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

23 Sep, 2009

1 commit

  • This allows lockdep to locate symbols that are in arch-specific data
    sections (such as data in Blackfin on-chip SRAM regions).
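
    The hook follows the usual asm-generic pattern: an overridable
    default that archs with special sections replace; a sketch (per
    include/asm-generic/sections.h):

    #ifndef arch_is_kernel_data
    static inline int arch_is_kernel_data(unsigned long addr)
    {
        return 0;	/* archs with on-chip data sections override */
    }
    #endif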

    Signed-off-by: Mike Frysinger
    Cc: Ingo Molnar
    Cc: Robin Getz
    Cc: Sam Ravnborg
    Cc: Peter Zijlstra
    Cc: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Frysinger
     

02 Aug, 2009

6 commits

  • The unit is KB, so sizeof(struct circular_queue) should be
    divided by 1024.
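
    A sketch of the corrected accounting in lockdep's memory printout
    (the other sizeof terms are abridged):

    printk(" memory used by lock dependency info: %lu kB\n",
           (sizeof(struct lock_class) * MAX_LOCKDEP_KEYS +
            sizeof(struct circular_queue)) / 1024);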

    Signed-off-by: Ming Lei
    Cc: akpm@linux-foundation.org
    Cc: torvalds@linux-foundation.org
    Cc: davem@davemloft.net
    Cc: Ming Lei
    Cc: a.p.zijlstra@chello.nl
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
    We can still apply DaveM's generation count optimization to
    BFS, based on the following idea:

    - before doing each BFS, increase the global generation id
    by 1

    - if one node in the graph has been visited, mark it as
    visited by storing the current global generation id into
    the node's dep_gen_id field

    - so we can decide if one node has been visited already, by
    comparing the node's dep_gen_id with the global generation id.
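
    A minimal sketch of that marking scheme (the dep_gen_id field is
    from the description; the helper names are illustrative):

    static unsigned int bfs_generation_id;

    static inline void mark_lock_accessed(struct lock_list *lock)
    {
        lock->dep_gen_id = bfs_generation_id;
    }

    static inline int lock_accessed(struct lock_list *lock)
    {
        return lock->dep_gen_id == bfs_generation_id;
    }

    static inline void bfs_init_run(void)
    {
        bfs_generation_id++;	/* invalidates all old marks at once */
    }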

    By applying DaveM's generation count optimization to current
    implementation of BFS, we gain the following advantages:

    - we save MAX_LOCKDEP_ENTRIES/8 bytes of memory;

    - we remove the bitmap_zero(bfs_accessed, MAX_LOCKDEP_ENTRIES)
    in each BFS, which is very time-consuming since
    MAX_LOCKDEP_ENTRIES may be very large (16384UL).

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    Cc: "David S. Miller"
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
    spin_lock_nest_lock() allows taking many instances of the same
    lock class; this can easily lead to an overflow of MAX_LOCK_DEPTH.

    To avoid this overflow, we'll stop accounting instances but
    start reference counting the class in the held_lock structure.

    [ We could maintain a list of instances, if we'd move the hlock
    stuff into __lock_acquired(), but that would require
    significant modifications to the current code. ]

    We restrict this mode to spin_lock_nest_lock() only, because it
    degrades the lockdep quality due to the loss of instance tracking.

    For lockstat this means we don't track lock statistics for any
    but the first lock in the series.

    Currently nesting is limited to 11 bits because that was the
    spare space available in held_lock. This yields a maximum of
    2048 instances.
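
    A sketch of the held_lock side (the 11-bit field per the message;
    other members elided):

    struct held_lock {
        /* ... existing members ... */
        unsigned int references:11;	/* 0 = plain lock; otherwise the
                         * instance count, max 2^11 = 2048 */
    };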

    Signed-off-by: Peter Zijlstra
    Cc: Marcelo Tosatti
    Cc: Linus Torvalds
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Add a lockdep helper to validate that we indeed are the owner
    of a lock.
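
    A usage sketch, assuming the helper is lockdep_assert_held() (the
    caller and the lock are hypothetical):

    static void update_stats(void)
    {
        lockdep_assert_held(&stats_lock);	/* warn if not held by us */
        /* ... modify state guarded by stats_lock ... */
    }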

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
    Fixes a few comments and some whitespace that annoyed me.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Truncate stupid -1 entries in backtraces.

    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

24 Jul, 2009

2 commits