17 Aug, 2009

4 commits

  • - make lc_next() call lc_start()
    - use lock_chains directly instead of storing it in m->private

    Signed-off-by: Li Zefan
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan
     
  • Use seq_list_start_head() and seq_list_next().

    Signed-off-by: Li Zefan
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan
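    A minimal sketch of the seq_list_start_head()/seq_list_next()
    pattern referred to above. The two helpers are real seq_file API;
    the list name, the show routine and the printed fields are purely
    illustrative:

        #include <linux/seq_file.h>
        #include <linux/list.h>

        static LIST_HEAD(foo_list);          /* hypothetical list of entries */

        static void *foo_start(struct seq_file *m, loff_t *pos)
        {
            /* pos == 0 yields the list head itself, i.e. the header row */
            return seq_list_start_head(&foo_list, *pos);
        }

        static void *foo_next(struct seq_file *m, void *v, loff_t *pos)
        {
            return seq_list_next(v, &foo_list, pos);
        }

        static int foo_show(struct seq_file *m, void *v)
        {
            if (v == &foo_list) {            /* the header row */
                seq_puts(m, "name        count\n");
                return 0;
            }
            /* otherwise v is a real list entry; print it here */
            return 0;
        }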
     
  • Two entries are missing in the output of /proc/lock_chains.

    One is chains[1]. When lc_next() is called the 1st time,
    chains[0] is returned. And when it's called the 2nd time,
    chains[2] is returned.

    The other missing one is: when lc_start() is called the 2nd
    time, we should start from chains[@pos-1] rather than
    chains[@pos], because pos == 0 is the header (see the sketch
    after this entry).

    Signed-off-by: Li Zefan
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan
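    A sketch of the corrected iterator shape described above, assuming
    an array lock_chains[] with nr_lock_chains entries and using
    SEQ_START_TOKEN to stand for the header row; the details are
    illustrative, not a verbatim copy of the patch:

        static void *lc_start(struct seq_file *m, loff_t *pos)
        {
            if (*pos == 0)
                return SEQ_START_TOKEN;          /* pos 0 is the header line */

            if (*pos - 1 < nr_lock_chains)
                return lock_chains + (*pos - 1); /* entry i lives at pos i+1 */

            return NULL;
        }

        static void *lc_next(struct seq_file *m, void *v, loff_t *pos)
        {
            (*pos)++;
            return lc_start(m, pos);             /* reuse the same indexing */
        }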
     
  • One entry is missing in the output of /proc/lock_stat.

    The cause is that when ls_start() is called the 2nd time, we
    should start from stats[@pos-1] rather than stats[@pos],
    because pos == 0 is the header.

    Signed-off-by: Li Zefan
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan
     

02 Aug, 2009

7 commits

  • The unit is KB, so sizeof(struct circular_queue) should be
    divided by 1024.

    Signed-off-by: Ming Lei
    Cc: akpm@linux-foundation.org
    Cc: torvalds@linux-foundation.org
    Cc: davem@davemloft.net
    Cc: Ming Lei
    Cc: a.p.zijlstra@chello.nl
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
  • We still can apply DaveM's generation count optimization to
    BFS, based on the following idea:

    - before doing each BFS, increase the global generation id
    by 1

    - if one node in the graph has been visited, mark it as
    visited by storing the current global generation id into
    the node's dep_gen_id field

    - so we can decide if one node has been visited already, by
    comparing the node's dep_gen_id with the global generation id.

    By applying DaveM's generation count optimization to current
    implementation of BFS, we gain the following advantages:

    - we save MAX_LOCKDEP_ENTRIES/8 bytes of memory;

    - we remove the bitmap_zero(bfs_accessed, MAX_LOCKDEP_ENTRIES)
    call from each BFS, which is very time-consuming since
    MAX_LOCKDEP_ENTRIES may be very large (16384UL). A sketch of
    this generation-id scheme follows this entry.

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    Cc: "David S. Miller"
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
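    A standalone sketch of the generation-id scheme described above;
    the structure and variable names are illustrative only:

        #include <stdbool.h>

        #define NR_NODES 16384          /* stand-in for MAX_LOCKDEP_ENTRIES */

        struct node {
            unsigned int dep_gen_id;    /* generation that last visited this node */
            /* ... dependency edges ... */
        };

        static struct node nodes[NR_NODES];
        static unsigned int global_gen_id;

        /* O(1) per BFS, instead of zeroing an NR_NODES-bit bitmap. */
        static void bfs_begin(void)
        {
            global_gen_id++;
        }

        static bool node_visited(struct node *n)
        {
            return n->dep_gen_id == global_gen_id;
        }

        static void node_mark_visited(struct node *n)
        {
            n->dep_gen_id = global_gen_id;
        }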
     
    spin_lock_nest_lock() allows taking many instances of the same
    lock class, which can easily overflow MAX_LOCK_DEPTH.

    To avoid this overflow, we'll stop accounting instances but
    start reference counting the class in the held_lock structure.

    [ We could maintain a list of instances, if we'd move the hlock
    stuff into __lock_acquired(), but that would require
    significant modifications to the current code. ]

    We restrict this mode to spin_lock_nest_lock() only, because it
    degrades lockdep quality due to the loss of per-instance
    information.

    For lockstat this means we don't track lock statistics for any
    but the first lock in the series.

    Currently nesting is limited to 11 bits because that was the
    spare space available in held_lock. This yields a maximum of
    2048 instances (see the sketch after this entry).

    Signed-off-by: Peter Zijlstra
    Cc: Marcelo Tosatti
    Cc: Linus Torvalds
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
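    A standalone model of the reference-counting idea described above;
    the structure layout and helper below are simplified and
    illustrative, not the kernel's actual held_lock handling:

        #include <stdbool.h>
        #include <stddef.h>

        #define MAX_LOCK_DEPTH  48
        #define MAX_REFERENCES  ((1u << 11) - 1)  /* 11 spare bits in held_lock */

        struct held_lock {
            const void *class;       /* the lock class of this entry */
            const void *nest_lock;   /* outer lock serializing the instances */
            unsigned int references; /* 0 = single instance, else instance count */
        };

        static struct held_lock held[MAX_LOCK_DEPTH];
        static int depth;

        static bool acquire(const void *class, const void *nest_lock)
        {
            struct held_lock *top = depth ? &held[depth - 1] : NULL;

            /* Same class taken again under the same nest lock:
             * count it instead of consuming another stack slot. */
            if (top && nest_lock &&
                top->class == class && top->nest_lock == nest_lock) {
                if (top->references >= MAX_REFERENCES)
                    return false;            /* would overflow the field */
                top->references = top->references ? top->references + 1 : 2;
                return true;
            }

            if (depth >= MAX_LOCK_DEPTH)
                return false;                /* the overflow being avoided */

            held[depth].class = class;
            held[depth].nest_lock = nest_lock;
            held[depth].references = 0;
            depth++;
            return true;
        }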
     
  • Add a lockdep helper to validate that we indeed are the owner
    of a lock.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
    Fixes a few comments and whitespace issues that annoyed me.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Truncate stupid -1 entries in backtraces.

    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Fix:

    kernel/built-in.o: In function `lockdep_stats_show':
    lockdep_proc.c:(.text+0x48202): undefined reference to `max_bfs_queue_depth'

    max_bfs_queue_depth is only available under
    CONFIG_PROVE_LOCKING=y, so references to it must be guarded
    accordingly.

    Cc: Ming Lei
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ingo Molnar
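    Schematically, the kind of guard needed looks like the following;
    the exact statement being wrapped is illustrative:

        #ifdef CONFIG_PROVE_LOCKING
            seq_printf(m, " max bfs queue depth:           %11u\n",
                       max_bfs_queue_depth);
        #endif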
     

24 Jul, 2009

10 commits

  • Some cleanups of the lockdep code after the BFS series:

    - Remove the last traces of the generation id
    - Fixup comment style
    - Move the bfs routines into lockdep.c
    - Cleanup the bfs routines

    [ tom.leiming@gmail.com: Fix crash ]
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Add BFS statistics to the existing lockdep stats.

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
  • Also account the BFS memory usage.

    Signed-off-by: Ming Lei
    [ fix build for !PROVE_LOCKING ]
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
  • Implement lockdep_count_{for,back}ward using BFS.

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
    Since the shortest lock dependency path can be obtained by BFS,
    print the shortest one via print_shortest_lock_dependencies().

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
    This patch uses BFS to implement find_usage_*wards(), which
    was originally written using DFS.

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
    This patch uses BFS to implement check_noncircular() and
    prints the shortest circle found, if one exists.

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
     
    1. Introduce match() to BFS in order to make it usable for
    matching different patterns (see the sketch after this entry);

    2. Rename some functions to make their names more suitable.

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
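    A standalone sketch of a BFS walker parameterized by a match()
    callback, as described above; the node layout, queue size and
    names are illustrative, and overflow handling is omitted:

        #include <stdbool.h>
        #include <stddef.h>

        struct node {
            struct node **deps;         /* forward dependencies */
            int nr_deps;
            unsigned int gen_id;        /* visited marker (generation scheme) */
        };

        typedef bool (*match_fn)(struct node *n, void *data);

        static unsigned int bfs_gen;

        /* Returns the first node (in breadth-first order) for which
         * match() is true, or NULL if none is reachable from src. */
        static struct node *bfs(struct node *src, match_fn match, void *data)
        {
            struct node *queue[1024];
            unsigned int head = 0, tail = 0;

            bfs_gen++;                  /* fresh "visited" generation */
            src->gen_id = bfs_gen;
            queue[tail++] = src;

            while (head != tail) {
                struct node *n = queue[head++];

                if (match(n, data))
                    return n;           /* the caller defines what "found" means */

                for (int i = 0; i < n->nr_deps; i++) {
                    struct node *d = n->deps[i];

                    if (d->gen_id == bfs_gen)
                        continue;       /* already queued in this search */
                    d->gen_id = bfs_gen;
                    queue[tail++] = d;
                }
            }
            return NULL;
        }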
     
    1. Replace % MAX_CIRCULAR_QUE_SIZE with & (MAX_CIRCULAR_QUE_SIZE - 1),
    since MAX_CIRCULAR_QUE_SIZE is defined as a power of 2 (see the
    sketch after this entry);

    2. Use a bitmap to mark whether a lock has been accessed during
    BFS, so that it can be cleared quickly, because we may search
    the graph many times.

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
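    A standalone sketch of the power-of-two circular queue trick
    mentioned in point 1; the size and names are illustrative:

        #include <stdbool.h>

        #define CQ_SIZE  4096U              /* must be a power of 2 */
        #define CQ_MASK  (CQ_SIZE - 1)

        struct circular_queue {
            unsigned long element[CQ_SIZE];
            unsigned int front, rear;       /* advanced, then masked */
        };

        static inline bool cq_empty(struct circular_queue *cq)
        {
            return cq->front == cq->rear;
        }

        static inline bool cq_full(struct circular_queue *cq)
        {
            return ((cq->rear + 1) & CQ_MASK) == cq->front;
        }

        static inline int cq_enqueue(struct circular_queue *cq, unsigned long elem)
        {
            if (cq_full(cq))
                return -1;
            cq->element[cq->rear] = elem;
            cq->rear = (cq->rear + 1) & CQ_MASK;   /* wrap with a mask, not % */
            return 0;
        }

        static inline int cq_dequeue(struct circular_queue *cq, unsigned long *elem)
        {
            if (cq_empty(cq))
                return -1;
            *elem = cq->element[cq->front];
            cq->front = (cq->front + 1) & CQ_MASK;
            return 0;
        }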
     
    Currently lockdep prints the 1st circle detected, if one
    exists, when acquiring a new (next) lock.

    This patch prints the shortest path from the next lock to be
    acquired to the previous held lock if a circle is found.

    The patch still uses the current method to check for a circle,
    and once the circle is found, a breadth-first search is used to
    compute the shortest path from the next lock to the previous
    lock in the forward lock dependency graph (see the sketch after
    this entry).

    Printing the shortest path shortens the dependency chain and
    makes troubleshooting of possible circular locking easier.

    Signed-off-by: Ming Lei
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ming Lei
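    A standalone sketch of how BFS recovers a shortest path by
    recording, for each node, the node it was first reached from;
    the adjacency-matrix graph and all names are illustrative:

        #include <stdio.h>

        #define NR 8

        static int adj[NR][NR];         /* adj[a][b] = 1: a depends on b */
        static int parent[NR];

        /* Returns 1 if dst is reachable from src; fills parent[]. */
        static int bfs_shortest(int src, int dst)
        {
            int queue[NR], head = 0, tail = 0;
            int seen[NR] = { 0 };

            queue[tail++] = src;
            seen[src] = 1;
            parent[src] = -1;

            while (head != tail) {
                int n = queue[head++];

                if (n == dst)
                    return 1;
                for (int m = 0; m < NR; m++) {
                    if (!adj[n][m] || seen[m])
                        continue;
                    seen[m] = 1;
                    parent[m] = n;      /* first hop found is the shortest */
                    queue[tail++] = m;
                }
            }
            return 0;
        }

        static void print_path(int dst)
        {
            if (parent[dst] != -1)
                print_path(parent[dst]);
            printf("%d\n", dst);
        }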
     

23 Jul, 2009

19 commits