17 Apr, 2014

1 commit


11 Apr, 2014

1 commit

  • debug_mutex_unlock() would bail when !debug_locks and forgets to
    actually unlock.

    Reported-by: "Michael L. Semon"
    Reported-by: "Kirill A. Shutemov"
    Reported-by: Valdis Kletnieks
    Fixes: 6f008e72cd11 ("locking/mutex: Fix debug checks")
    Tested-by: Dave Jones
    Cc: Jason Low
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/20140410141559.GE13658@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
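
    A sketch of the shape of this fix, based on the description above (not the
    verbatim patch): the actual release of the mutex is moved out of the
    debug_locks-protected section so the lock gets unlocked even after lock
    debugging has been switched off.

    void debug_mutex_unlock(struct mutex *lock)
    {
            if (likely(debug_locks)) {
                    /* ownership and wait-list sanity checks live here */
                    DEBUG_LOCKS_WARN_ON(lock->magic != lock);
                    mutex_clear_owner(lock);
            }

            /*
             * Debug mutexes rely on this function to do the actual unlock,
             * so it must happen unconditionally, even when !debug_locks.
             */
            atomic_set(&lock->count, 1);
    }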
     

08 Apr, 2014

1 commit

  • When the system has only one CPU, lglock is effectively a spinlock; map
    it directly to spinlock to eliminate the indirection and duplicate code.

    In addition to removing overhead, this drops 1.6k of code with a
    defconfig modified to have !CONFIG_SMP, and 1.1k with a minimal config.

    Signed-off-by: Josh Triplett
    Cc: Rusty Russell
    Cc: Michal Marek
    Cc: Thomas Gleixner
    Cc: David Howells
    Cc: "H. Peter Anvin"
    Cc: Nick Piggin
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Josh Triplett
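
    A hedged sketch of the idea (illustrative, not the verbatim patch): on
    !CONFIG_SMP the lglock operations can be defined directly in terms of a
    single spinlock.

    #ifndef CONFIG_SMP
    /* On UP an lglock is just a spinlock; no per-CPU machinery is needed. */
    struct lglock {
            spinlock_t lock;
    };

    #define lg_lock_init(lg, name)  spin_lock_init(&(lg)->lock)
    #define lg_local_lock(lg)       spin_lock(&(lg)->lock)
    #define lg_local_unlock(lg)     spin_unlock(&(lg)->lock)
    #define lg_global_lock(lg)      spin_lock(&(lg)->lock)
    #define lg_global_unlock(lg)    spin_unlock(&(lg)->lock)
    #endif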
     

01 Apr, 2014

3 commits

  • Pull x86 LTO changes from Peter Anvin:
    "More infrastructure work in preparation for link-time optimization
    (LTO). Most of these changes are to make sure symbols accessed from
    assembly code are properly marked as visible so the linker doesn't
    remove them.

    My understanding is that the changes to support LTO are still not
    upstream in binutils, but are on the way there. This patchset should
    conclude the x86-specific changes, and remaining patches to actually
    enable LTO will be fed through the Kbuild tree (other than keeping up
    with changes to the x86 code base, of course), although not
    necessarily in this merge window"

    * 'x86-asmlinkage-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (25 commits)
    Kbuild, lto: Handle basic LTO in modpost
    Kbuild, lto: Disable LTO for asm-offsets.c
    Kbuild, lto: Add a gcc-ld script to let run gcc as ld
    Kbuild, lto: add ld-version and ld-ifversion macros
    Kbuild, lto: Drop .number postfixes in modpost
    Kbuild, lto, workaround: Don't warn for initcall_reference in modpost
    lto: Disable LTO for sys_ni
    lto: Handle LTO common symbols in module loader
    lto, workaround: Add workaround for initcall reordering
    lto: Make asmlinkage __visible
    x86, lto: Disable LTO for the x86 VDSO
    initconst, x86: Fix initconst mistake in ts5500 code
    initconst: Fix initconst mistake in dcdbas
    asmlinkage: Make trace_hardirqs_on/off_caller visible
    asmlinkage, x86: Fix 32bit memcpy for LTO
    asmlinkage Make __stack_chk_failed and memcmp visible
    asmlinkage: Mark rwsem functions that can be called from assembler asmlinkage
    asmlinkage: Make main_extable_sort_needed visible
    asmlinkage, mutex: Mark __visible
    asmlinkage: Make trace_hardirq visible
    ...

    Linus Torvalds
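
    To illustrate the recurring pattern in this series (a sketch, not taken
    from the patches verbatim; the function name is hypothetical): functions
    reached only from assembly have to be marked visible so an LTO-enabled
    link does not discard them.

    /* On gcc, __visible expands to __attribute__((externally_visible)). */
    asmlinkage __visible void handler_called_from_entry_asm(void)
    {
            /*
             * Without __visible, whole-program LTO may decide this function
             * has no C-visible callers and drop or rename it, breaking the
             * assembly call site.
             */
    }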
     
  • Pull scheduler changes from Ingo Molnar:
    "Bigger changes:

    - sched/idle restructuring: they are WIP preparation for deeper
    integration between the scheduler and idle state selection, by
    Nicolas Pitre.

    - add NUMA scheduling pseudo-interleaving, by Rik van Riel.

    - optimize cgroup context switches, by Peter Zijlstra.

    - RT scheduling enhancements, by Thomas Gleixner.

    The rest is smaller changes, non-urgent fixes and cleanups"

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (68 commits)
    sched: Clean up the task_hot() function
    sched: Remove double calculation in fix_small_imbalance()
    sched: Fix broken setscheduler()
    sparc64, sched: Remove unused sparc64_multi_core
    sched: Remove unused mc_capable() and smt_capable()
    sched/numa: Move task_numa_free() to __put_task_struct()
    sched/fair: Fix endless loop in idle_balance()
    sched/core: Fix endless loop in pick_next_task()
    sched/fair: Push down check for high priority class task into idle_balance()
    sched/rt: Fix picking RT and DL tasks from empty queue
    trace: Replace hardcoding of 19 with MAX_NICE
    sched: Guarantee task priority in pick_next_task()
    sched/idle: Remove stale old file
    sched: Put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
    cpuidle/arm64: Remove redundant cpuidle_idle_call()
    cpuidle/powernv: Remove redundant cpuidle_idle_call()
    sched, nohz: Exclude isolated cores from load balancing
    sched: Fix select_task_rq_fair() description comments
    workqueue: Replace hardcoding of -20 and 19 with MIN_NICE and MAX_NICE
    sys: Replace hardcoding of -20 and 19 with MIN_NICE and MAX_NICE
    ...

    Linus Torvalds
     
  • Pull RCU updates from Ingo Molnar:
    "Main changes:

    - Torture-test changes, including refactoring of rcutorture and
    introduction of a vestigial locktorture.

    - Real-time latency fixes.

    - Documentation updates.

    - Miscellaneous fixes"

    * 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (77 commits)
    rcu: Provide grace-period piggybacking API
    rcu: Ensure kernel/rcu/rcu.h can be sourced/used stand-alone
    rcu: Fix sparse warning for rcu_expedited from kernel/ksysfs.c
    notifier: Substitute rcu_access_pointer() for rcu_dereference_raw()
    Documentation/memory-barriers.txt: Clarify release/acquire ordering
    rcutorture: Save kvm.sh output to log
    rcutorture: Add a lock_busted to test the test
    rcutorture: Place kvm-test-1-run.sh output into res directory
    rcutorture: Rename TREE_RCU-Kconfig.txt
    locktorture: Add kvm-recheck.sh plug-in for locktorture
    rcutorture: Gracefully handle NULL cleanup hooks
    locktorture: Add vestigial locktorture configuration
    rcutorture: Introduce "rcu" directory level underneath configs
    rcutorture: Rename kvm-test-1-rcu.sh
    rcutorture: Remove RCU dependencies from ver_functions.sh API
    rcutorture: Create CFcommon file for common Kconfig parameters
    rcutorture: Create config files for scripted test-the-test testing
    rcutorture: Add an rcu_busted to test the test
    locktorture: Add a lock-torture kernel module
    rcutorture: Abstract kvm-recheck.sh
    ...

    Linus Torvalds
     

12 Mar, 2014

1 commit

  • OK, so commit:

    1d8fe7dc8078 ("locking/mutexes: Unlock the mutex without the wait_lock")

    generates this boot warning when CONFIG_DEBUG_MUTEXES=y:

    WARNING: CPU: 0 PID: 139 at /usr/src/linux-2.6/kernel/locking/mutex-debug.c:82 debug_mutex_unlock+0x155/0x180() DEBUG_LOCKS_WARN_ON(lock->owner != current)

    And that makes sense, because as soon as we release the lock a
    new owner can come in...

    One would think that !__mutex_slowpath_needs_to_unlock()
    implementations suffer the same, but for DEBUG we fall back to
    mutex-null.h which has an unconditional 1 for that.

    The mutex debug code requires the mutex to be unlocked after
    doing the debug checks, otherwise it can find inconsistent
    state.

    Reported-by: Ingo Molnar
    Signed-off-by: Peter Zijlstra
    Cc: jason.low2@hp.com
    Link: http://lkml.kernel.org/r/20140312122442.GB27965@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
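
    The direction of the fix, sketched from the description above (not the
    literal diff): with CONFIG_DEBUG_MUTEXES the slowpath no longer releases
    the lock before the debug checks; instead __mutex_slowpath_needs_to_unlock()
    is 0 and debug_mutex_unlock() performs the release once the checks have run.

    /* kernel/locking/mutex-debug.h (sketch) */
    #define __mutex_slowpath_needs_to_unlock()      0

    /* kernel/locking/mutex-debug.c (sketch) */
    void debug_mutex_unlock(struct mutex *lock)
    {
            DEBUG_LOCKS_WARN_ON(lock->owner != current);
            mutex_clear_owner(lock);

            /* release only after the owner/state checks above have run */
            atomic_set(&lock->count, 1);
    }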
     

11 Mar, 2014

6 commits

  • Add in an extra reschedule in an attempt to avoid getting rescheduled
    the moment we've acquired the lock.

    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/n/tip-zah5eyn9gu7qlgwh9r6n2anc@git.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
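
    A sketch of where such a reschedule point fits in the mutex slowpath
    (illustrative, assuming the __mutex_lock_common() structure of that time):

            /* after a failed optimistic spin, before queueing on the wait list */
            if (need_resched())
                    schedule_preempt_disabled();

            spin_lock_mutex(&lock->wait_lock, flags);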
     
  • Since we want a task waiting for a mutex_lock() to go to sleep and
    reschedule on need_resched(), we must be able to abort the
    mcs_spin_lock() around the adaptive spin.

    Therefore, implement a cancelable MCS lock.

    Signed-off-by: Peter Zijlstra
    Cc: chegu_vinod@hp.com
    Cc: paulmck@linux.vnet.ibm.com
    Cc: Waiman.Long@hp.com
    Cc: torvalds@linux-foundation.org
    Cc: tglx@linutronix.de
    Cc: riel@redhat.com
    Cc: akpm@linux-foundation.org
    Cc: davidlohr@hp.com
    Cc: hpa@zytor.com
    Cc: andi@firstfloor.org
    Cc: aswin@hp.com
    Cc: scott.norton@hp.com
    Cc: Jason Low
    Link: http://lkml.kernel.org/n/tip-62hcl5wxydmjzd182zhvk89m@git.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
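
    A simplified, illustrative sketch of a cancelable MCS-style lock; the real
    implementation also has to unlink the node from the queue while racing
    with concurrent unlocks, which is the hard part and is omitted here.

    static bool mcs_spin_lock_cancelable(struct mcs_spinlock **lock,
                                         struct mcs_spinlock *node)
    {
            struct mcs_spinlock *prev;

            node->locked = 0;
            node->next = NULL;

            prev = xchg(lock, node);
            if (likely(prev == NULL))
                    return true;            /* uncontended, we own the lock */

            ACCESS_ONCE(prev->next) = node;

            while (!smp_load_acquire(&node->locked)) {
                    if (need_resched())
                            return false;   /* abort; the real code unlinks the node here */
                    cpu_relax();
            }
            return true;
    }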
     
  • When running workloads that have high contention in mutexes on an 8-socket
    machine, mutex spinners would often spin for a long time with no lock owner.

    The main reason this occurs is that in __mutex_unlock_common_slowpath(),
    if __mutex_slowpath_needs_to_unlock() is true, the owner needs to acquire the
    mutex->wait_lock before releasing the mutex (setting lock->count to 1). When
    the wait_lock is contended, this delays the release of the mutex.
    We should be able to release the mutex without holding the wait_lock.

    Signed-off-by: Jason Low
    Cc: chegu_vinod@hp.com
    Cc: paulmck@linux.vnet.ibm.com
    Cc: Waiman.Long@hp.com
    Cc: torvalds@linux-foundation.org
    Cc: tglx@linutronix.de
    Cc: riel@redhat.com
    Cc: akpm@linux-foundation.org
    Cc: davidlohr@hp.com
    Cc: hpa@zytor.com
    Cc: andi@firstfloor.org
    Cc: aswin@hp.com
    Cc: scott.norton@hp.com
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1390936396-3962-4-git-send-email-jason.low2@hp.com
    Signed-off-by: Ingo Molnar

    Jason Low
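
    The gist of the change, sketched against __mutex_unlock_common_slowpath()
    as described above (not the verbatim patch): release the lock before
    taking the wait_lock, so contention on the wait_lock no longer delays the
    release.

            /*
             * Release the mutex first; waiters and spinners can then grab it
             * while we are still dealing with the wait list below.
             */
            if (__mutex_slowpath_needs_to_unlock())
                    atomic_set(&lock->count, 1);

            spin_lock_mutex(&lock->wait_lock, flags);
            /* wake up one waiter, as before */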
     
  • The mutex->spin_mlock was introduced in order to ensure that only 1 thread
    spins for lock acquisition at a time to reduce cache line contention. When
    lock->owner is NULL and the lock->count is still not 1, the spinner(s) will
    continually release and obtain the lock->spin_mlock. This can generate
    quite a bit of overhead/contention, and also might just delay the spinner
    from getting the lock.

    This patch modifies the way optimistic spinners are queued: they queue once
    before entering the optimistic spinning loop, as opposed to acquiring the
    lock before every call to mutex_spin_on_owner(). So in situations where the
    spinner requires a few extra spins before obtaining the lock, there will be
    only one spinner trying to get the lock, and it will avoid the overhead of
    unnecessarily unlocking and locking the spin_mlock.

    Signed-off-by: Jason Low
    Cc: tglx@linutronix.de
    Cc: riel@redhat.com
    Cc: akpm@linux-foundation.org
    Cc: davidlohr@hp.com
    Cc: hpa@zytor.com
    Cc: andi@firstfloor.org
    Cc: aswin@hp.com
    Cc: scott.norton@hp.com
    Cc: chegu_vinod@hp.com
    Cc: Waiman.Long@hp.com
    Cc: paulmck@linux.vnet.ibm.com
    Cc: torvalds@linux-foundation.org
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1390936396-3962-3-git-send-email-jason.low2@hp.com
    Signed-off-by: Ingo Molnar

    Jason Low
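
    A rough before/after sketch of the spinning phase in __mutex_lock_common()
    (simplified; the helper names here are illustrative, not real functions):

            /* before: the queue lock was taken and released on every attempt */
            for (;;) {
                    mcs_spin_lock(&lock->spin_mlock, &node);
                    if (try_to_take_mutex(lock))
                            break;
                    mcs_spin_unlock(&lock->spin_mlock, &node);
            }

            /* after: queue once for the whole optimistic-spinning phase */
            mcs_spin_lock(&lock->spin_mlock, &node);
            for (;;) {
                    if (try_to_take_mutex(lock))
                            break;
                    if (!owner_still_running_or_worth_spinning(lock))
                            break;
            }
            mcs_spin_unlock(&lock->spin_mlock, &node);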
     
  • The mutex_can_spin_on_owner() function should also return false if the
    task needs to be rescheduled, so that we avoid entering the MCS queue
    when a reschedule is pending.

    Signed-off-by: Jason Low
    Signed-off-by: Peter Zijlstra
    Cc: Waiman.Long@hp.com
    Cc: torvalds@linux-foundation.org
    Cc: tglx@linutronix.de
    Cc: riel@redhat.com
    Cc: akpm@linux-foundation.org
    Cc: davidlohr@hp.com
    Cc: hpa@zytor.com
    Cc: andi@firstfloor.org
    Cc: aswin@hp.com
    Cc: scott.norton@hp.com
    Cc: chegu_vinod@hp.com
    Cc: paulmck@linux.vnet.ibm.com
    Link: http://lkml.kernel.org/r/1390936396-3962-2-git-send-email-jason.low2@hp.com
    Signed-off-by: Ingo Molnar

    Jason Low
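
    A sketch of the resulting check, close to the mutex_can_spin_on_owner()
    of that kernel but not guaranteed verbatim:

    static inline int mutex_can_spin_on_owner(struct mutex *lock)
    {
            struct task_struct *owner;
            int retval = 1;

            /* the addition: don't even join the MCS queue if we must reschedule */
            if (need_resched())
                    return 0;

            rcu_read_lock();
            owner = ACCESS_ONCE(lock->owner);
            if (owner)
                    retval = owner->on_cpu;
            rcu_read_unlock();

            return retval;
    }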
     
  • The mcs_spinlock code is not meant (or suitable) as a generic locking
    primitive, therefore take it away from the normal includes and place
    it in kernel/locking/.

    This way the locking primitives implemented there can use it as part
    of their implementation but we do not risk it getting used
    inappropriately.

    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/n/tip-byirmpamgr7h25m5kyavwpzx@git.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

24 Feb, 2014

2 commits


23 Feb, 2014

1 commit

  • If a PI boosted task policy/priority is modified by a setscheduler()
    call we unconditionally dequeue and requeue the task if it is on the
    runqueue even if the new priority is lower than the current effective
    boosted priority. This can result in undesired reordering of the
    priority bucket list.

    If the new priority is less than or equal to the current effective
    priority, we just store the new parameters in the task struct and leave
    the scheduler class and the runqueue untouched. This is handled when the
    task deboosts itself. Only if the new priority is higher than the
    effective boosted priority do we apply the change immediately.

    Signed-off-by: Thomas Gleixner
    [ Rebase ontop of v3.14-rc1. ]
    Signed-off-by: Sebastian Andrzej Siewior
    Cc: Dario Faggioli
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1391803122-4425-7-git-send-email-bigeasy@linutronix.de
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
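
    A sketch of the resulting logic in __sched_setscheduler(), assuming a
    helper along the lines of rt_mutex_check_prio() that reports whether the
    top pi-waiter still outranks the requested priority (names are
    illustrative, not the exact diff):

            /*
             * If the requested priority does not beat the current boosted
             * (pi-waiter) priority, only record the new parameters; the
             * requeue happens later, when the task deboosts itself.
             */
            if (rt_mutex_check_prio(p, newprio)) {
                    __setscheduler_params(p, attr);
                    task_rq_unlock(rq, p, &flags);
                    return 0;
            }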
     

14 Feb, 2014

4 commits


10 Feb, 2014

3 commits

  • The __lockdep_no_validate check in mark_held_locks() adds the subtle
    and (afaics) unnecessary difference between no-validate and check==0.
    And this looks even more inconsistent because __lock_acquire() skips
    mark_irqflags()->mark_lock() if !check.

    Change mark_held_locks() to check hlock->check instead.

    Signed-off-by: Oleg Nesterov
    Cc: Dave Jones
    Cc: Greg Kroah-Hartman
    Cc: Linus Torvalds
    Cc: Paul McKenney
    Cc: Steven Rostedt
    Cc: Alan Stern
    Cc: Sasha Levin
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/20140120182013.GA26505@redhat.com
    Signed-off-by: Ingo Molnar

    Oleg Nesterov
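
    The shape of the change in mark_held_locks(), per the description above
    (a sketch, not the literal diff):

            for (i = 0; i < curr->lockdep_depth; i++) {
                    hlock = curr->held_locks + i;

                    /* was: a special-case compare against __lockdep_no_validate__ */
                    if (!hlock->check)
                            continue;

                    if (!mark_lock(curr, hlock, usage_bit))
                            return 0;
            }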
     
  • Test-case:

    DEFINE_MUTEX(m1);
    DEFINE_MUTEX(m2);
    DEFINE_MUTEX(mx);

    void lockdep_should_complain(void)
    {
            lockdep_set_novalidate_class(&mx);

            // m1 -> mx -> m2
            mutex_lock(&m1);
            mutex_lock(&mx);
            mutex_lock(&m2);
            mutex_unlock(&m2);
            mutex_unlock(&mx);
            mutex_unlock(&m1);

            // m2 -> m1 ; should trigger the warning
            mutex_lock(&m2);
            mutex_lock(&m1);
            mutex_unlock(&m1);
            mutex_unlock(&m2);
    }

    This doesn't trigger any warning: lockdep can't detect the trivial
    deadlock.

    This is because lock(&mx) correctly avoids the m1 -> mx dependency (it
    skips validate_chain() due to mx->check == 0), but lock(&m2) wrongly
    adds mx -> m2, and thus m1 -> m2 is never created.

    rcu_lock_acquire()->lock_acquire(check => 0) is fine due to read == 2,
    so currently only __lockdep_no_validate__ can trigger this problem.

    Signed-off-by: Oleg Nesterov
    Cc: Dave Jones
    Cc: Greg Kroah-Hartman
    Cc: Linus Torvalds
    Cc: Paul McKenney
    Cc: Steven Rostedt
    Cc: Alan Stern
    Cc: Sasha Levin
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/20140120182010.GA26498@redhat.com
    Signed-off-by: Ingo Molnar

    Oleg Nesterov
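
    One plausible shape of the fix suggested by the analysis above (hedged:
    the helper name is illustrative and the real patch may differ in detail):
    when walking the already-held locks to add dependencies, skip entries
    whose check flag is clear, rather than only skipping validation for the
    lock currently being acquired.

            /* in the loop that links each held lock to the new lock (sketch) */
            if (hlock->read != 2 && hlock->check)
                    add_prev_to_next_dependency(curr, hlock, next);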
     
  • The "int check" argument of lock_acquire() and held_lock->check are
    misleading. This is actually a boolean: 2 means "true", everything
    else is "false".

    And there is no need to pass 1 or 0 to lock_acquire() depending on
    CONFIG_PROVE_LOCKING: __lock_acquire() checks prove_locking at the
    start and clears "check" if !CONFIG_PROVE_LOCKING.

    Note: probably we can simply kill this member/arg. The only explicit
    user of check => 0 is rcu_lock_acquire(), perhaps we can change it to
    use lock_acquire(trylock =>, read => 2). __lockdep_no_validate means
    check => 0 implicitly, but we can change validate_chain() to check
    hlock->instance->key instead. Not to mention it would be nice to get
    rid of lockdep_set_novalidate_class().

    Signed-off-by: Oleg Nesterov
    Cc: Dave Jones
    Cc: Greg Kroah-Hartman
    Cc: Linus Torvalds
    Cc: Paul McKenney
    Cc: Steven Rostedt
    Cc: Alan Stern
    Cc: Sasha Levin
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/20140120182006.GA26495@redhat.com
    Signed-off-by: Ingo Molnar

    Oleg Nesterov
     

28 Jan, 2014

2 commits

  • We will need the MCS lock code for doing optimistic spinning for rwsem
    and queued rwlock. Extracting the MCS code from mutex.c and putting it
    into its own file allows us to reuse this code easily.

    We also inline mcs_spin_lock and mcs_spin_unlock functions
    for better efficiency.

    Note that the smp_load_acquire/smp_store_release pair used in
    mcs_lock and mcs_unlock is not sufficient to form a full memory barrier
    across CPUs on many architectures (except x86). For applications that
    absolutely need a full barrier across multiple CPUs around an mcs_unlock
    and mcs_lock pair, smp_mb__after_unlock_lock() should be used after mcs_lock.

    Reviewed-by: Paul E. McKenney
    Signed-off-by: Tim Chen
    Signed-off-by: Davidlohr Bueso
    Signed-off-by: Peter Zijlstra
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1390347360.3138.63.camel@schen9-DESK
    Signed-off-by: Ingo Molnar

    Tim Chen
     
  • This patch corrects the way memory barriers are used in the MCS lock,
    using the smp_load_acquire and smp_store_release functions. The previous
    barriers could leak critical sections if the MCS lock is used by itself:
    that is not a problem when the MCS lock is embedded in a mutex, but it
    will be an issue when the mcs_lock is used elsewhere.

    The patch removes the incorrect barriers and puts in correct
    barriers with the pair of functions smp_load_acquire and smp_store_release.

    Suggested-by: Michel Lespinasse
    Reviewed-by: Paul E. McKenney
    Signed-off-by: Waiman Long
    Signed-off-by: Jason Low
    Signed-off-by: Tim Chen
    Signed-off-by: Peter Zijlstra
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1390347353.3138.62.camel@schen9-DESK
    Signed-off-by: Ingo Molnar

    Waiman Long
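
    A sketch of the resulting hand-off, close to the mcs_spinlock code of that
    era but slightly simplified: the unlocker publishes its critical section
    with smp_store_release() and the next waiter in the queue picks it up with
    smp_load_acquire().

    static inline void mcs_spin_lock(struct mcs_spinlock **lock,
                                     struct mcs_spinlock *node)
    {
            struct mcs_spinlock *prev;

            node->locked = 0;
            node->next = NULL;

            prev = xchg(lock, node);
            if (likely(prev == NULL))
                    return;                 /* lock was free */
            ACCESS_ONCE(prev->next) = node;

            /* acquire: pairs with the store-release in mcs_spin_unlock() */
            while (!smp_load_acquire(&node->locked))
                    cpu_relax();
    }

    static inline void mcs_spin_unlock(struct mcs_spinlock **lock,
                                       struct mcs_spinlock *node)
    {
            struct mcs_spinlock *next = ACCESS_ONCE(node->next);

            if (likely(!next)) {
                    if (likely(cmpxchg(lock, node, NULL) == node))
                            return;         /* no successor queued */
                    while (!(next = ACCESS_ONCE(node->next)))
                            cpu_relax();
            }
            /* release: makes our critical section visible to the successor */
            smp_store_release(&next->locked, 1);
    }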
     

21 Jan, 2014

1 commit

  • Pull scheduler changes from Ingo Molnar:

    - Add the initial implementation of SCHED_DEADLINE support: a real-time
    scheduling policy where tasks that meet their deadlines and
    periodically execute their instances in less than their runtime quota
    see real-time scheduling and won't miss any of their deadlines.
    Tasks that go over their quota get delayed (Available to privileged
    users for now)

    - Clean up and fix preempt_enable_no_resched() abuse all around the
    tree

    - Do sched_clock() performance optimizations on x86 and elsewhere

    - Fix and improve auto-NUMA balancing

    - Fix and clean up the idle loop

    - Apply various cleanups and fixes

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (60 commits)
    sched: Fix __sched_setscheduler() nice test
    sched: Move SCHED_RESET_ON_FORK into attr::sched_flags
    sched: Fix up attr::sched_priority warning
    sched: Fix up scheduler syscall LTP fails
    sched: Preserve the nice level over sched_setscheduler() and sched_setparam() calls
    sched/core: Fix htmldocs warnings
    sched/deadline: No need to check p if dl_se is valid
    sched/deadline: Remove unused variables
    sched/deadline: Fix sparse static warnings
    m68k: Fix build warning in mac_via.h
    sched, thermal: Clean up preempt_enable_no_resched() abuse
    sched, net: Fixup busy_loop_us_clock()
    sched, net: Clean up preempt_enable_no_resched() abuse
    sched/preempt: Fix up missed PREEMPT_NEED_RESCHED folding
    sched/preempt, locking: Rework local_bh_{dis,en}able()
    sched/clock, x86: Avoid a runtime condition in native_sched_clock()
    sched/clock: Fix up clear_sched_clock_stable()
    sched/clock, x86: Use a static_key for sched_clock_stable
    sched/clock: Remove local_irq_disable() from the clocks
    sched/clock, x86: Rewrite cyc2ns() to avoid the need to disable IRQs
    ...

    Linus Torvalds
     

13 Jan, 2014

2 commits

  • Some method is needed to deal with rt-mutexes and make sched_dl interact
    with the current PI code, raising non-trivial issues that need (according
    to us) to be solved with some restructuring of the pi-code (i.e., going
    toward a proxy-execution-ish implementation).

    This is under development; in the meanwhile, as a temporary solution,
    what this commit does is:

    - ensure a pi-lock owner with waiters is never throttled down. Instead,
    when it runs out of runtime, it immediately gets replenished and its
    deadline is postponed;

    - the scheduling parameters (relative deadline and default runtime)
    used for those replenishments -- during the whole period it holds the
    pi-lock -- are the ones of the waiting task with the earliest deadline.

    Acting this way, we provide some kind of boosting to the lock-owner,
    still by using the existing (actually, slightly modified by the previous
    commit) pi-architecture.

    We would stress the fact that this is only a surely needed, but far from
    clean, solution to the problem. In the end it's only a way to re-start
    discussion within the community. So, as always, comments, ideas, rants,
    etc. are welcome! :-)

    Signed-off-by: Dario Faggioli
    Signed-off-by: Juri Lelli
    [ Added !RT_MUTEXES build fix. ]
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1383831828-15501-11-git-send-email-juri.lelli@gmail.com
    Signed-off-by: Ingo Molnar

    Dario Faggioli
     
  • Turn the pi-chains from plist to rb-tree, in the rt_mutex code,
    and provide a proper comparison function for -deadline and
    -priority tasks.

    This is done mainly because:
    - the classical prio field of the plist is just an int, which might
    not be enough for representing a deadline;
    - manipulating such a list would become O(nr_deadline_tasks),
    which might be too much as the number of -deadline tasks increases.

    Therefore, an rb-tree is used, and tasks are queued in it according
    to the following logic:
    - among two -priority (i.e., SCHED_BATCH/OTHER/RR/FIFO) tasks, the
    one with the higher (lower, actually!) prio wins;
    - among a -priority and a -deadline task, the latter always wins;
    - among two -deadline tasks, the one with the earliest deadline
    wins.

    Queueing and dequeueing functions are changed accordingly, for both
    the list of a task's pi-waiters and the list of tasks blocked on
    a pi-lock.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Dario Faggioli
    Signed-off-by: Juri Lelli
    Signed-off-again-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1383831828-15501-10-git-send-email-juri.lelli@gmail.com
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
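
    A sketch of the comparison function used to order waiters in the rb-tree
    (close to the rt_mutex_waiter_less() introduced here, but treat it as
    illustrative):

    static inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
                                           struct rt_mutex_waiter *right)
    {
            /*
             * Lower numerical prio means higher priority; -deadline tasks
             * have a prio below all -priority tasks, so they win here.
             */
            if (left->prio < right->prio)
                    return 1;

            /* both are -deadline: the earliest deadline wins */
            if (dl_prio(left->prio))
                    return dl_time_before(left->task->dl.deadline,
                                          right->task->dl.deadline);

            return 0;
    }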
     

17 Dec, 2013

1 commit

  • When mutex debugging is enabled and an imbalanced mutex_unlock()
    is called, we get the following, slightly confusing warning:

    [ 364.208284] DEBUG_LOCKS_WARN_ON(lock->owner != current)

    But in that case the warning is due to an imbalanced mutex_unlock() call,
    and the lock->owner is NULL - so the message is misleading.

    So improve the message by testing for this case specifically:

    DEBUG_LOCKS_WARN_ON(!lock->owner)

    Signed-off-by: Liu, Chuansheng
    Signed-off-by: Peter Zijlstra
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Cc: Thomas Gleixner
    Cc: Paul E. McKenney
    Link: http://lkml.kernel.org/r/1386136693.3650.48.camel@cliu38-desktop-build
    [ Improved the changelog, changed the patch to use !lock->owner consistently. ]
    Signed-off-by: Ingo Molnar

    Chuansheng Liu
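
    A sketch of the resulting check in debug_mutex_unlock() (close to the
    actual change, but treat it as illustrative):

            if (!lock->owner)
                    DEBUG_LOCKS_WARN_ON(!lock->owner);      /* imbalanced unlock */
            else
                    DEBUG_LOCKS_WARN_ON(lock->owner != current);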
     

27 Nov, 2013

1 commit

  • Lockdep is an awesome piece of code which detects locking issues
    which are relevant both to userspace and kernelspace. We can
    easily make lockdep work in userspace since there is really no
    kernel-specific magic going on in the code.

    All we need is to wrap two functions which are used by lockdep
    and are very kernel specific.

    Doing that will allow tools located in tools/ to easily utilize
    lockdep's code for their own use.

    Signed-off-by: Sasha Levin
    Signed-off-by: Peter Zijlstra
    Cc: penberg@kernel.org
    Cc: torvalds@linux-foundation.org
    Link: http://lkml.kernel.org/r/1352753446-24109-1-git-send-email-sasha.levin@oracle.com
    Signed-off-by: Ingo Molnar

    Sasha Levin
     

13 Nov, 2013

1 commit

  • There are new Sparse warnings:

    >> kernel/locking/lockdep.c:1235:15: sparse: symbol '__lockdep_count_forward_deps' was not declared. Should it be static?
    >> kernel/locking/lockdep.c:1261:15: sparse: symbol '__lockdep_count_backward_deps' was not declared. Should it be static?

    Please consider folding the attached diff :-)

    Signed-off-by: Fengguang Wu
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/527d1787.ThzXGoUspZWehFDl\%fengguang.wu@intel.com
    Signed-off-by: Ingo Molnar

    Fengguang Wu
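
    The fix is simply to give these helpers internal linkage, e.g.:

    /* internal linkage silences the sparse warnings */
    static unsigned long __lockdep_count_forward_deps(struct lock_class *class);
    static unsigned long __lockdep_count_backward_deps(struct lock_class *class);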
     

11 Nov, 2013

2 commits

  • > kernel/locking/lockdep_proc.c: In function 'seq_lock_time':
    > >> kernel/locking/lockdep_proc.c:424:23: warning: comparison of distinct pointer types lacks a cast [enabled by default]
    >
    > 418 static void seq_lock_time(struct seq_file *m, struct lock_time *lt)
    > 419 {
    > 420 seq_printf(m, "%14lu", lt->nr);
    > 421 seq_time(m, lt->min);
    > 422 seq_time(m, lt->max);
    > 423 seq_time(m, lt->total);
    > > 424 seq_time(m, lt->nr ? do_div(lt->total, lt->nr) : 0);
    > 425 }

    My compiler refuses to actually say that; but it looks wrong in that
    do_div() returns the remainder, not the quotient.

    Reported-by: Fengguang Wu
    Tested-by: Fengguang Wu
    Signed-off-by: Peter Zijlstra
    Cc: Davidlohr Bueso
    Link: http://lkml.kernel.org/r/20131106164230.GE16117@laptop.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
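
    A sketch of the corrected helper: compute the average into a temporary
    with do_div(), which stores the quotient in its first argument and
    returns the remainder, instead of printing do_div()'s return value.

    static void seq_lock_time(struct seq_file *m, struct lock_time *lt)
    {
            u64 avg = lt->total;

            if (lt->nr)
                    do_div(avg, lt->nr);    /* avg now holds total / nr */
            else
                    avg = 0;

            seq_printf(m, "%14lu", lt->nr);
            seq_time(m, lt->min);
            seq_time(m, lt->max);
            seq_time(m, lt->total);
            seq_time(m, avg);
    }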
     
  • Fix this docbook error:

    >> docproc: kernel/mutex.c: No such file or directory

    by updating the stale references to kernel/mutex.c.

    Reported-by: fengguang.wu@intel.com
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/n/tip-34pikw1tlsskj65rrt5iusrq@git.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

06 Nov, 2013

7 commits