22 May, 2010

1 commit

  • The conversion of device->sem to device->mutex resulted in lockdep
    warnings. Create a novalidate class for now until the driver folks
    come up with separate classes. That way we have at least the basic
    mutex debugging coverage.

    Add a checkpatch error so the usage is reserved for device->mutex.

    [ tglx: checkpatch and compile fix for LOCKDEP=n ]

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Peter Zijlstra
     

13 Mar, 2010

1 commit

  • Extern declarations in sysctl.c should be moved to their own header file,
    which is then included in the relevant .c files.

    Move lockdep extern declarations to linux/lockdep.h

    Signed-off-by: Dave Young
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Young
     

25 Feb, 2010

1 commit

  • Make rcu_dereference_check() print the list of held locks in
    addition to the stack dump to ease debugging.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

02 Aug, 2009

3 commits

  • We still can apply DaveM's generation count optimization to
    BFS, based on the following idea:

    - before doing each BFS, increase the global generation id
    by 1

    - if one node in the graph has been visited, mark it as
    visited by storing the current global generation id into
    the node's dep_gen_id field

    - so we can decide if one node has been visited already, by
    comparing the node's dep_gen_id with the global generation id.

    By applying DaveM's generation count optimization to current
    implementation of BFS, we gain the following advantages:

    - we save MAX_LOCKDEP_ENTRIES/8 bytes of memory;

    - we remove the bitmap_zero(bfs_accessed, MAX_LOCKDEP_ENTRIES);
    in each BFS, which is very time-consuming since
    MAX_LOCKDEP_ENTRIES may be very large (16384UL).

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    Cc: "David S. Miller"
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
  • spin_lock_nest_lock() allows taking many instances of the same
    class; this can easily lead to overflow of MAX_LOCK_DEPTH.

    To avoid this overflow, we'll stop accounting instances but
    start reference counting the class in the held_lock structure.

    [ We could maintain a list of instances, if we'd move the hlock
    stuff into __lock_acquired(), but that would require
    significant modifications to the current code. ]

    We restrict this mode to spin_lock_nest_lock() only, because it
    degrades lockdep quality due to the loss of instance information.

    For lockstat this means we don't track lock statistics for any
    but the first lock in the series.

    Currently nesting is limited to 11 bits because that was the
    spare space available in held_lock. This yields a maximum of
    2048 instances.

    Signed-off-by: Peter Zijlstra
    Cc: Marcelo Tosatti
    Cc: Linus Torvalds
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Add a lockdep helper to validate that we indeed are the owner
    of a lock.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

24 Jul, 2009

2 commits

  • Some cleanups of the lockdep code after the BFS series:

    - Remove the last traces of the generation id
    - Fixup comment style
    - Move the bfs routines into lockdep.c
    - Cleanup the bfs routines

    [ tom.leiming@gmail.com: Fix crash ]
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Currently lockdep will print the first circle detected, if one
    exists, when acquiring a new (next) lock.

    This patch prints the shortest path from the next lock to be
    acquired to the previous held lock if a circle is found.

    The patch still uses the current method to check for a circle, and
    once a circle is found, a breadth-first search algorithm is
    used to compute the shortest path from the next lock to the
    previous lock in the forward lock dependency graph.

    Printing the shortest path will shorten the dependency chain,
    and make troubleshooting for possible circular locking easier.

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     

23 Jun, 2009

1 commit

  • Some filesystems need to set the lockdep map for i_mutex differently for
    different directories. For example OCFS2 has system directories (for
    orphan inode tracking and for gathering all system files like journal
    or quota files into a single place) which have different locking
    rules than standard directories. For a filesystem, setting the
    lockdep map is naturally done when the inode is read, but we have to
    modify unlock_new_inode() not to overwrite the lockdep map the filesystem
    has set.

    Acked-by: peterz@infradead.org
    CC: mingo@redhat.com
    Signed-off-by: Jan Kara
    Signed-off-by: Joel Becker

    Jan Kara
     

03 Apr, 2009

1 commit

  • SGI has observed that on large systems, interrupts are not serviced for a
    long period of time when waiting for a rwlock. The following patch series
    re-enables irqs while waiting for the lock, resembling the code which is
    already there for spinlocks.

    I only made the ia64 version, because the patch adds some overhead to the
    fast path. I assume there is currently no demand to have this for other
    architectures, because the systems are not so large. Of course, the
    possibility to implement raw_{read|write}_lock_flags for any architecture
    is still there.

    This patch:

    The new macro LOCK_CONTENDED_FLAGS expands to the correct implementation
    depending on the config options, so that IRQs are re-enabled when
    possible, but remain disabled if CONFIG_LOCKDEP is set.

    Signed-off-by: Petr Tesarik
    Signed-off-by: Robin Holt
    Cc:
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: "Luck, Tony"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robin Holt
     

15 Feb, 2009

4 commits

  • For convenience later.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • s/HELD_OVER/ENABLED/g

    so that it's similar to the hard- and soft-irq names.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • s/\(LOCKF\?_ENABLED_[^ ]*\)S\(_READ\)\?\>/\1\2/g

    So that the USED_IN and ENABLED have the same names.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Here is another version, with the incremental patch rolled up, and
    added reclaim context annotation to kswapd, and allocation tracing
    to slab allocators (which may only ever reach the page allocator
    in rare cases, so it is good to put annotations here too).

    Haven't tested this version as such, but it should be getting closer
    to merge worthy ;)

    --
    After noticing some code in mm/filemap.c accidentally perform a __GFP_FS
    allocation when it should not have been, I thought it might be a good idea to
    try to catch this kind of thing with lockdep.

    I coded up a little idea that seems to work. Unfortunately the system has to
    actually be in __GFP_FS page reclaim, then take the lock, before it will mark
    it. But at least that might still be some orders of magnitude more common
    (and more debuggable) than an actual deadlock condition, so we have some
    improvement I hope (the concept is no less complete than discovery of a lock's
    interrupt contexts).

    I guess we could even do the same thing with __GFP_IO (normal reclaim), and
    even GFP_NOIO locks too... but filesystems will have the most locks and fiddly
    code paths, so let's start there and see how it goes.

    It *seems* to work. I did a quick test.

    =================================
    [ INFO: inconsistent lock state ]
    2.6.28-rc6-00007-ged31348-dirty #26
    ---------------------------------
    inconsistent {in-reclaim-W} -> {ov-reclaim-W} usage.
    modprobe/8526 [HC0[0]:SC0[0]:HE1:SE1] takes:
    (testlock){--..}, at: [] brd_init+0x55/0x216 [brd]
    {in-reclaim-W} state was registered at:
    [] __lock_acquire+0x75b/0x1a60
    [] lock_acquire+0x91/0xc0
    [] mutex_lock_nested+0xb1/0x310
    [] brd_init+0x2b/0x216 [brd]
    [] _stext+0x3b/0x170
    [] sys_init_module+0xaf/0x1e0
    [] system_call_fastpath+0x16/0x1b
    [] 0xffffffffffffffff
    irq event stamp: 3929
    hardirqs last enabled at (3929): [] mutex_lock_nested+0x285/0x310
    hardirqs last disabled at (3928): [] mutex_lock_nested+0x59/0x310
    softirqs last enabled at (3732): [] sk_filter+0x83/0xe0
    softirqs last disabled at (3730): [] sk_filter+0x16/0xe0

    other info that might help us debug this:
    1 lock held by modprobe/8526:
    #0: (testlock){--..}, at: [] brd_init+0x55/0x216 [brd]

    stack backtrace:
    Pid: 8526, comm: modprobe Not tainted 2.6.28-rc6-00007-ged31348-dirty #26
    Call Trace:
    [] print_usage_bug+0x193/0x1d0
    [] mark_lock+0xaf0/0xca0
    [] mark_held_locks+0x55/0xc0
    [] ? brd_init+0x0/0x216 [brd]
    [] trace_reclaim_fs+0x2a/0x60
    [] __alloc_pages_internal+0x475/0x580
    [] ? mutex_lock_nested+0x26e/0x310
    [] ? brd_init+0x0/0x216 [brd]
    [] brd_init+0x6a/0x216 [brd]
    [] ? brd_init+0x0/0x216 [brd]
    [] _stext+0x3b/0x170
    [] ? mutex_unlock+0x9/0x10
    [] ? __mutex_unlock_slowpath+0x10d/0x180
    [] ? trace_hardirqs_on_caller+0x12c/0x190
    [] sys_init_module+0xaf/0x1e0
    [] system_call_fastpath+0x16/0x1b

    Signed-off-by: Nick Piggin
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Nick Piggin
     

31 Dec, 2008

1 commit

  • * 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, sparseirq: clean up Kconfig entry
    x86: turn CONFIG_SPARSE_IRQ off by default
    sparseirq: fix numa_migrate_irq_desc dependency and comments
    sparseirq: add kernel-doc notation for new member in irq_desc, -v2
    locking, irq: enclose irq_desc_lock_class in CONFIG_LOCKDEP
    sparseirq, xen: make sure irq_desc is allocated for interrupts
    sparseirq: fix !SMP building, #2
    x86, sparseirq: move irq_desc according to smp_affinity, v7
    proc: enclose desc variable of show_stat() in CONFIG_SPARSE_IRQ
    sparse irqs: add irqnr.h to the user headers list
    sparse irqs: handle !GENIRQ platforms
    sparseirq: fix !SMP && !PCI_MSI && !HT_IRQ build
    sparseirq: fix Alpha build failure
    sparseirq: fix typo in !CONFIG_IO_APIC case
    x86, MSI: pass irq_cfg and irq_desc
    x86: MSI start irq numbering from nr_irqs_gsi
    x86: use NR_IRQS_LEGACY
    sparse irq_desc[] array: core kernel and x86 changes
    genirq: record IRQ_LEVEL in irq_desc[]
    irq.h: remove padding from irq_desc on 64bits

    Linus Torvalds
     

18 Dec, 2008

1 commit

  • Impact: simplify code

    commit "08678b0: generic: sparse irqs: use irq_desc() [...]" introduced
    the irq_desc_lock_class variable.

    But it is used only if CONFIG_SPARSE_IRQ=Y or CONFIG_TRACE_IRQFLAGS=Y.
    Otherwise, following warnings happen:

    CC kernel/irq/handle.o
    kernel/irq/handle.c:26: warning: 'irq_desc_lock_class' defined but not used

    Actually, the current early_init_irq_lock_class() has a rather strange
    and messy ifdef, and it is not valuable:

    1. The function is guarded by !CONFIG_SPARSE_IRQ, but that is not
    necessary: if CONFIG_SPARSE_IRQ=y, the desc of every irq number is
    initialized to NULL at first, so calling this function is safe.

    2. The function is guarded by CONFIG_TRACE_IRQFLAGS too, but that is not
    necessary either, because lockdep_set_class() has no bad side
    effect even if CONFIG_TRACE_IRQFLAGS=n.

    This patch bloats the kernel size a bit on CONFIG_TRACE_IRQFLAGS=n and
    CONFIG_SPARSE_IRQ=y - but that's OK: early_init_irq_lock_class() is not
    a fastpath at all.

    Avoiding messy ifdefs is more important than a few bytes of diet.

    Signed-off-by: KOSAKI Motohiro
    Signed-off-by: Ingo Molnar

    KOSAKI Motohiro
     

04 Dec, 2008

1 commit

  • Impact: introduce new lockdep API

    Allow changing a held lock's class. This is basically the same as the
    existing code to change a subclass, so reuse all of that.

    The XFS code will be able to use this to annotate their inode locking.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

14 Nov, 2008

1 commit

  • fix this warning:

    net/bluetooth/af_bluetooth.c:60: warning: ‘bt_key_strings’ defined but not used
    net/bluetooth/af_bluetooth.c:71: warning: ‘bt_slock_key_strings’ defined but not used

    this is a lockdep macro problem in the !LOCKDEP case.

    We cannot convert it to an inline because the macro works on multiple types,
    but we can mark the parameter used.

    [ also clean up a misaligned tab in sock_lock_init_class_and_name() ]

    [ also remove #ifdefs from around af_family_clock_key strings - which
    were certainly added to get rid of the ugly build warnings. ]

    Signed-off-by: Ingo Molnar
    Signed-off-by: David S. Miller

    Ingo Molnar
     

12 Nov, 2008

1 commit

  • fix this warning:

    net/bluetooth/af_bluetooth.c:60: warning: ‘bt_key_strings’ defined but not used
    net/bluetooth/af_bluetooth.c:71: warning: ‘bt_slock_key_strings’ defined but not used

    this is a lockdep macro problem in the !LOCKDEP case.

    We cannot convert it to an inline because the macro works on multiple types,
    but we can mark the parameter used.

    [ also clean up a misaligned tab in sock_lock_init_class_and_name() ]

    [ also remove #ifdefs from around af_family_clock_key strings - which
    were certainly added to get rid of the ugly build warnings. ]

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

20 Oct, 2008

1 commit


10 Sep, 2008

1 commit


11 Aug, 2008

8 commits

  • certain configs produce:

    [ 70.076229] BUG: MAX_LOCKDEP_KEYS too low!
    [ 70.080230] turning off the locking correctness validator.

    tune them up.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • There is an off-by-one overflow case in the new shrunken hlock code.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • the names were too generic:

    drivers/uio/uio.c:87: error: expected identifier or '(' before 'do'
    drivers/uio/uio.c:87: error: expected identifier or '(' before 'while'
    drivers/uio/uio.c:113: error: 'map_release' undeclared here (not in a function)

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • Expose the new lock protection lock.

    This can be used to annotate places where we take multiple locks of the
    same class and avoid deadlocks by always taking another (top-level) lock
    first.

    NOTE: we're still bound to the MAX_LOCK_DEPTH (48) limit.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • On Fri, 2008-08-01 at 16:26 -0700, Linus Torvalds wrote:

    > On Fri, 1 Aug 2008, David Miller wrote:
    > >
    > > Taking more than a few locks of the same class at once is bad
    > > news and it's better to find an alternative method.
    >
    > It's not always wrong.
    >
    > If you can guarantee that anybody that takes more than one lock of a
    > particular class will always take a single top-level lock _first_, then
    > that's all good. You can obviously screw up and take the same lock _twice_
    > (which will deadlock), but at least you cannot get into ABBA situations.
    >
    > So maybe the right thing to do is to just teach lockdep about "lock
    > protection locks". That would have solved the multi-queue issues for
    > networking too - all the actual network drivers would still have taken
    > just their single queue lock, but the one case that needs to take all of
    > them would have taken a separate top-level lock first.
    >
    > Never mind that the multi-queue locks were always taken in the same order:
    > it's never wrong to just have some top-level serialization, and anybody
    > who needs to take <n> locks might as well do <n+1>, because they sure as
    > hell aren't going to be on _any_ fastpaths.
    >
    > So the simplest solution really sounds like just teaching lockdep about
    > that one special case. It's not "nesting" exactly, although it's obviously
    > related to it.

    Do as Linus suggested. The lock protection lock is called nest_lock.

    Note that we still have the MAX_LOCK_DEPTH (48) limit to consider, so anything
    that spills that is still up shit creek.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Most of the free-standing lock_acquire() usages look remarkably similar;
    sweep them into a new helper.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • struct held_lock {
        u64                     prev_chain_key;   /*  0     8 */
        struct lock_class *     class;            /*  8     8 */
        long unsigned int       acquire_ip;       /* 16     8 */
        struct lockdep_map *    instance;         /* 24     8 */
        int                     irq_context;      /* 32     4 */
        int                     trylock;          /* 36     4 */
        int                     read;             /* 40     4 */
        int                     check;            /* 44     4 */
        int                     hardirqs_off;     /* 48     4 */

        /* size: 56, cachelines: 1 */
        /* padding: 4 */
        /* last cacheline: 56 bytes */
    };

    struct held_lock {
        u64                     prev_chain_key;   /*  0     8 */
        long unsigned int       acquire_ip;       /*  8     8 */
        struct lockdep_map *    instance;         /* 16     8 */
        unsigned int            class_idx:11;     /* 24:21  4 */
        unsigned int            irq_context:2;    /* 24:19  4 */
        unsigned int            trylock:1;        /* 24:18  4 */
        unsigned int            read:2;           /* 24:16  4 */
        unsigned int            check:2;          /* 24:14  4 */
        unsigned int            hardirqs_off:1;   /* 24:13  4 */

        /* size: 32, cachelines: 1 */
        /* padding: 4 */
        /* bit_padding: 13 bits */
        /* last cacheline: 32 bytes */
    };

    [mingo@elte.hu: shrunk hlock->class too]
    [peterz@infradead.org: fixup bit sizes]
    Signed-off-by: Dave Jones
    Signed-off-by: Ingo Molnar
    Signed-off-by: Peter Zijlstra

    Dave Jones
     
  • This can be used to reset a held lock's subclass, for arbitrary-depth
    iterated data structures such as trees or lists which have per-node
    locks.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

01 Aug, 2008

1 commit

  • When we traverse the graph, either forwards or backwards, we
    are interested in whether a certain property exists somewhere
    in a node reachable in the graph.

    Therefore it is never necessary to traverse through a node more
    than once to get a correct answer to the given query.

    Take advantage of this property using a global ID counter so that we
    need not clear all the markers in all the lock_class entries before
    doing a traversal. A new ID is chosen when we start to traverse, and
    we continue through a lock_class only if its ID hasn't been marked
    with the new value yet.

    This short-circuiting is essential especially for high CPU count
    systems. The scheduler has a runqueue per cpu, and needs to take
    two runqueue locks at a time, which leads to long chains of
    backwards and forwards subgraphs from these runqueue lock nodes.
    Without the short-circuit implemented here, a graph traversal on
    a runqueue lock can take up to (1 << (N - 1)) checks on a system
    with N cpus.

    For anything more than 16 cpus or so, lockdep will eventually bring
    the machine to a complete standstill.

    Signed-off-by: David S. Miller
    Acked-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    David Miller
     

24 Jun, 2008

1 commit


20 Jun, 2008

1 commit


20 Oct, 2007

1 commit

  • In the following scenario:

    code path 1:
    my_function() -> lock(L1); ...; flush_workqueue(); ...

    code path 2:
    run_workqueue() -> my_work() -> ...; lock(L1); ...

    you can get a deadlock when my_work() is queued or running
    but my_function() has acquired L1 already.

    This patch adds a pseudo-lock to each workqueue to make lockdep
    warn about this scenario.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Johannes Berg
    Acked-by: Oleg Nesterov
    Acked-by: Ingo Molnar
    Acked-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Berg
     

12 Oct, 2007

2 commits


20 Jul, 2007

3 commits

  • __acquire
        |
       lock _____
        |        \
        |    __contended
        |         |
        |        wait
        |  _______/
        |/
        |
    __acquired
        |
    __release
        |
      unlock

    We measure acquisition and contention bouncing.

    This is done by recording a cpu stamp in each lock instance.

    Contention bouncing requires the cpu stamp to be set on acquisition. Hence we
    move __acquired into the generic path.

    __acquired is then used to measure acquisition bouncing by comparing the
    current cpu with the old stamp before replacing it.

    __contended is used to measure contention bouncing (only useful for
    preemptible locks).

    [akpm@linux-foundation.org: cleanups]
    Signed-off-by: Peter Zijlstra
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • - update the copyright notices
    - use the default hash function
    - fix a thinko in a BUILD_BUG_ON
    - add a WARN_ON to spot inconsistent naming
    - fix a termination issue in /proc/lock_stat

    [akpm@linux-foundation.org: cleanups]
    Signed-off-by: Peter Zijlstra
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Introduce the core lock statistics code.

    Lock statistics provides lock wait-time and hold-time (as well as the count
    of corresponding contention and acquisitions events). Also, the first few
    call-sites that encounter contention are tracked.

    Lock wait-time is the time spent waiting on the lock. This provides insight
    into the locking scheme, that is, a heavily contended lock is indicative of
    a too coarse locking scheme.

    Lock hold-time is the duration the lock was held; this provides a reference
    for the wait-time numbers, so they can be put into perspective.

    1)
        lock
    2)
        ... do stuff ..
        unlock
    3)

    The time between 1 and 2 is the wait-time. The time between 2 and 3 is the
    hold-time.

    The lockdep held-lock tracking code is reused, because it already collects locks
    into meaningful groups (classes), and because it is an existing infrastructure
    for lock instrumentation.

    Currently lockdep tracks lock acquisition with two hooks:

      lock()
        lock_acquire()
        _lock()

      ... code protected by lock ...

      unlock()
        lock_release()
        _unlock()

    We need to extend this with two more hooks, in order to measure contention.

    lock_contended() - used to measure contention events
    lock_acquired() - completion of the contention

    These are then placed the following way:

      lock()
        lock_acquire()
        if (!_try_lock())
          lock_contended()
        _lock()
        lock_acquired()

      ... do locked stuff ...

      unlock()
        lock_release()
        _unlock()

    (Note: the try_lock() 'trick' is used to avoid instrumenting all platform
    dependent lock primitive implementations.)

    It is also possible to toggle the two lockdep features at runtime using:

    /proc/sys/kernel/prove_locking
    /proc/sys/kernel/lock_stat

    (esp. turning off the O(n^2) prove_locking functionality can help)

    [akpm@linux-foundation.org: build fixes]
    [akpm@linux-foundation.org: nuke unneeded ifdefs]
    Signed-off-by: Peter Zijlstra
    Acked-by: Ingo Molnar
    Acked-by: Jason Baron
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

23 Mar, 2007

1 commit

  • lockdep found a bug during a run of a workqueue function - this could also
    be caused by a bug from other code running simultaneously.

    lockdep really shouldn't be used when debug_locks == 0!

    Reported-by: Folkert van Heusden
    Inspired-by: Oleg Nesterov
    Signed-off-by: Jarek Poplawski
    Cc: Ingo Molnar
    Acked-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jarek Poplawski
     

13 Feb, 2007

1 commit

  • Commit 3117df0453828bd045c16244e6f50e5714667a8a causes this:

    In file included from arch/s390/kernel/early.c:13:
    include/linux/lockdep.h:300: warning: "struct task_struct" declared
    inside parameter list
    include/linux/lockdep.h:300: warning: its scope is only this
    definition or declaration, which is probably not what you want

    Acked-by: Ingo Molnar
    Cc: Martin Schwidefsky
    Signed-off-by: Heiko Carstens
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens