19 Nov, 2011

1 commit

  • Change the linked lists to rb_trees in the rsb
    hash table to speed up searches. Slow rsb searches
    were having a large impact on gfs2 performance due
    to the large number of dlm locks gfs2 uses. (A lookup
    sketch follows this entry.)

    Signed-off-by: Bob Peterson
    Signed-off-by: David Teigland

    Bob Peterson
     
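    A minimal sketch of the rbtree lookup that replaces the per-bucket
    list walk; the helper name, the memcmp-based ordering, and the
    exact field layout are illustrative rather than the dlm's actual
    code:

        static struct dlm_rsb *rsb_search(struct rb_root *tree,
                                          const char *name, int len)
        {
                struct rb_node *node = tree->rb_node;

                while (node) {
                        struct dlm_rsb *r = rb_entry(node, struct dlm_rsb,
                                                     res_hashnode);
                        int rc = memcmp(name, r->res_name, len);

                        if (rc < 0)
                                node = node->rb_left;
                        else if (rc > 0)
                                node = node->rb_right;
                        else
                                return r;  /* O(log n) vs the old O(n) walk */
                }
                return NULL;
        }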

16 Jul, 2011

1 commit

  • Instead of creating our own kthread (dlm_astd) to deliver
    callbacks for all lockspaces, use a per-lockspace workqueue
    to deliver the callbacks. This eliminates complications and
    slowdowns from many lockspaces sharing the same thread.

    Signed-off-by: David Teigland

    David Teigland
     
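    A hedged sketch of the per-lockspace delivery described above;
    ls_callback_wq and lkb_cb_work follow the dlm's naming, but the
    exact flags and setup shown here are illustrative:

        /* at lockspace creation: one workqueue per lockspace */
        ls->ls_callback_wq = alloc_workqueue("dlm_callback",
                                             WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
        if (!ls->ls_callback_wq)
                return -ENOMEM;

        /* when a cast/bast arrives: queue that lock's callback work */
        queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);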

13 Jul, 2011

1 commit

  • By pre-allocating rsb structs before searching the hash
    table, they can be inserted immediately. This avoids
    always having to repeat the search when adding the struct
    to the hash list; see the sketch after this entry.

    This also adds space to the rsb struct for a max resource
    name, so an rsb allocation can be used by any request.
    The constant size also allows us to finally use a slab
    for the rsb structs.

    Signed-off-by: David Teigland

    David Teigland
     
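    A sketch of the pre-allocation pattern, with hypothetical helper
    names (search_bucket, add_to_bucket) standing in for the real
    search and insert steps:

        /* allocate up front, outside the bucket lock (may sleep) */
        r = dlm_allocate_rsb(ls);
        if (!r)
                return -ENOMEM;

        spin_lock(&ls->ls_rsbtbl[b].lock);
        found = search_bucket(ls, b, name, len);   /* hypothetical */
        if (found) {
                spin_unlock(&ls->ls_rsbtbl[b].lock);
                dlm_free_rsb(r);        /* already present, drop ours */
                return 0;
        }
        add_to_bucket(ls, b, r);        /* insert, no second search */
        spin_unlock(&ls->ls_rsbtbl[b].lock);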

11 Jul, 2011

3 commits

  • This is simpler and quicker than the hash table, and
    avoids needing to search the hash list for every new
    lkid to check if it's used.

    Signed-off-by: David Teigland

    David Teigland
     
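    The entry above refers to replacing the lkid hash list with an idr.
    A sketch using the current idr API (the 2011 code used the older
    idr_get_new() calls; the lock and field names follow the dlm):

        idr_preload(GFP_NOFS);
        spin_lock(&ls->ls_lkbidr_spin);
        rv = idr_alloc(&ls->ls_lkbidr, lkb, 1, 0, GFP_NOWAIT);
        if (rv >= 0)
                lkb->lkb_id = rv;       /* unique by construction */
        spin_unlock(&ls->ls_lkbidr_spin);
        idr_preload_end();
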
  • The gfp and size args were switched.

    Signed-off-by: David Teigland

    David Teigland
     
  • In fs/dlm/lock.c in the dlm_scan_waiters() function there are 3 small
    issues:

    1) There's no need to test the return value of the allocation and do a
    memset if it succeeds. Just use kzalloc() to obtain zeroed memory.

    2) Since kfree() handles NULL pointers gracefully, the test of
    'warned' against NULL before the kfree() after the loop is completely
    pointless. Remove it.

    3) The arguments to kmalloc() (now kzalloc()) were swapped. Thanks to
    Dr. David Alan Gilbert for pointing this out.

    Signed-off-by: Jesper Juhl
    Signed-off-by: David Teigland

    Jesper Juhl
     
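    The three fixes in one before/after snippet ('warned' is the array
    named above; the surrounding shape and size expression are
    illustrative):

        /* before: args swapped, manual zeroing, redundant NULL test */
        warned = kmalloc(GFP_KERNEL, num_nodes * sizeof(int));
        if (warned)
                memset(warned, 0, num_nodes * sizeof(int));
        /* ... */
        if (warned)
                kfree(warned);

        /* after */
        warned = kzalloc(num_nodes * sizeof(int), GFP_KERNEL);
        /* ... */
        kfree(warned);          /* kfree(NULL) is a no-op */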

05 Apr, 2011

1 commit

  • kmalloc a stub message struct during recovery instead of sharing the
    struct in the lockspace. This leaves the lockspace stub_ms only for
    faking downconvert replies, where it is never modified and sharing
    is not a problem.

    Also improve the debug messages in the same recovery function.

    Signed-off-by: David Teigland

    David Teigland
     
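    A minimal sketch, assuming the recovery path now builds a private
    stub reply rather than mutating shared lockspace state:

        struct dlm_message *ms_stub;

        ms_stub = kmalloc(sizeof(struct dlm_message), GFP_KERNEL);
        if (!ms_stub)
                goto out;       /* illustrative error handling */
        /* fill in a fake reply and process it locally; the shared
         * ls->ls_stub_ms is left for the read-only downconvert case */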

11 Mar, 2011

1 commit

  • Change how callbacks are recorded for locks. Previously, information
    about multiple callbacks was combined into a couple of variables that
    indicated what the end result should be. In some situations, we
    could not tell from this combined state what the exact sequence of
    callbacks was, and would end up either delivering the callbacks in
    the wrong order or incorrectly suppressing callbacks as redundant.
    This new approach records all the data for each callback, leaving
    no uncertainty about what needs to be delivered; a sketch of the
    record follows this entry.

    Signed-off-by: David Teigland

    David Teigland
     
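    A sketch of the per-callback record; the real struct carries a few
    more fields (lock status and flags), but the idea is:

        struct dlm_callback {
                uint64_t        seq;    /* order in which it was queued */
                uint32_t        flags;  /* DLM_CB_CAST or DLM_CB_BAST   */
                int             mode;   /* granted or blocking mode     */
        };

        #define DLM_CALLBACKS_SIZE 6

        /* inside struct dlm_lkb: each lock keeps its own slots,
         * so no two callbacks are merged into one state word */
        struct dlm_callback lkb_callbacks[DLM_CALLBACKS_SIZE];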

03 Sep, 2010

1 commit

  • When converting a lock, an lkb is in the granted state and also being used
    to request a new state. In the case that the conversion was a "try 1cb"
    type which has failed, and if the new state was incompatible with the old
    state, a callback was being generated to the requesting node. This is
    incorrect as callbacks should only be sent to all the other nodes holding
    blocking locks. The requesting node should receive the normal (failed)
    response to its "try 1cb" conversion request only.

    This was discovered while debugging a performance problem on GFS2;
    however, this fix speeds up GFS as well. In the GFS2 case the performance gain
    is over 10x for cases of write activity to an inode whose glock is cached
    on another, idle (wrt that glock) node.

    (comment added, dct)

    Signed-off-by: Steven Whitehouse
    Tested-by: Abhijith Das
    Signed-off-by: David Teigland

    Steven Whitehouse
     
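    A hedged sketch of the corrected rule (queue_bast() and
    dlm_modes_compat() are real dlm helpers; the loop shape is
    illustrative): basts go only to the other holders whose locks
    block the request, never back to the requester itself:

        list_for_each_entry(gr, &r->res_grantqueue, lkb_statequeue) {
                if (gr == requesting_lkb)
                        continue;  /* requester gets only the failed reply */
                if (!dlm_modes_compat(gr->lkb_grmode, rqmode))
                        queue_bast(r, gr, rqmode);
        }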

01 May, 2010

1 commit

  • Smatch complains because "lkb" is never NULL. Looking at it, the original
    code actually adds the new element to the end of the list fine, so we can
    just get rid of the if condition. This code is four years old and no one
    has complained so it must work.

    Signed-off-by: Dan Carpenter
    Signed-off-by: David Teigland

    Dan Carpenter
     
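    A hypothetical reduction of the pattern Smatch flagged: after a
    full list_for_each_entry() walk the cursor holds container_of()
    of the list head, so the tail insert works either way and the
    NULL branch could never be taken:

        struct dlm_lkb *iter;

        list_for_each_entry(iter, head, lkb_statequeue) {
                if (belongs_before(new, iter))  /* hypothetical test */
                        break;
        }
        /* old: if (iter) insert before iter; else add at tail */
        list_add_tail(&new->lkb_statequeue, &iter->lkb_statequeue);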

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming availability. As this
    conversion needs to touch a large number of source files, the
    following script is used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h, and if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition while adding it to implementation .h or
    embedding .c file was more appropriate for others. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers which should be easily discoverable on most builds of
    the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     
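    The typical mechanical fix looks like this: a file that uses the
    slab and gfp facilities names its dependencies directly instead of
    inheriting them through percpu.h:

        #include <linux/slab.h>   /* kmalloc, kfree, kmem_cache_* */
        #include <linux/gfp.h>    /* GFP_KERNEL and friends       */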

27 Feb, 2010

2 commits

  • The bast mode that appears in the debugfs output should be
    useful on both master and process nodes. lkb_highbast is
    currently printed, and is only useful on the master node.
    lkb_bastmode is only useful on the process node. This
    patch sets lkb_bastmode on the master node as well, and
    uses that value in the debugfs print.

    Signed-off-by: David Teigland

    David Teigland
     
  • When the lock master processes a successful operation (request,
    convert, cancel, or unlock), it will process the effects of the
    change before sending the reply for the operation. The "effects"
    of the operation are:

    - blocking callbacks (basts) for any newly granted locks
    - waiting or converting locks that can now be granted

    The cast is queued on the local node when the reply from the lock
    master is received. This means that a lock holder can receive a
    bast for a lock mode that it doesn't yet know has been granted.

    Signed-off-by: David Teigland

    David Teigland
     
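    An illustrative sketch of the master-side sequence; the function
    names here are hypothetical, not the dlm's actual internals:

        static void master_process(struct dlm_rsb *r, struct dlm_lkb *lkb)
        {
                apply_operation(r, lkb);     /* request/convert/cancel/unlock */
                send_blocking_basts(r, lkb); /* effect 1: basts to holders    */
                grant_pending(r);            /* effect 2: grant what unblocked */
                send_reply(lkb);             /* reply goes last; the requester
                                              * queues its cast on receipt, so
                                              * a bast can beat the cast there */
        }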

25 Feb, 2010

1 commit

  • When both blocking and completion callbacks are queued for a lock,
    the dlm would always deliver the completion callback (cast) first.
    In some cases the blocking callback (bast) is queued before the
    cast, though, and should be delivered first. This patch keeps
    track of the order in which they were queued and delivers them
    in that order.

    This patch also keeps track of the granted mode in the last cast
    and eliminates the following bast if the bast mode is compatible
    with the preceding cast mode. This happens when a remotely mastered
    lock is demoted, e.g. EX->NL, in which case the local node queues
    a cast immediately after sending the demote message. In this way
    a cast can be queued for a mode, e.g. NL, that makes an in-transit
    bast extraneous.

    Signed-off-by: David Teigland

    David Teigland
     
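    A sketch of the suppression test (dlm_modes_compat() is the real
    helper; the surrounding names are illustrative). A bast whose mode
    is compatible with the mode of the preceding cast carries no new
    information and can be dropped:

        if (prior_cast_seen && dlm_modes_compat(last_cast_mode, bast_mode))
                return;         /* extraneous bast, e.g. after EX->NL */
        queue_callback_in_arrival_order(lkb, DLM_CB_BAST, bast_mode);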

01 Dec, 2009

1 commit

  • Replace all GFP_KERNEL and ls_allocation with GFP_NOFS.
    ls_allocation would be GFP_KERNEL for userland lockspaces
    and GFP_NOFS for file system lockspaces.

    It was discovered that any lockspace on the system could
    affect all others by triggering memory reclaim in the
    file system, which could in turn call back into the dlm
    to acquire locks, deadlocking dlm threads that were
    shared by all lockspaces, like dlm_recv.

    Signed-off-by: David Teigland

    David Teigland
     
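    After this change every dlm allocation has the form below; with
    GFP_NOFS, reclaim triggered by the allocation may not re-enter the
    file system, which is what closed the deadlock:

        p = kmalloc(size, GFP_NOFS);   /* was GFP_KERNEL / ls_allocation */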

12 Mar, 2009

2 commits

  • Return immediately from dlm_unlock(CANCEL) if the lock is
    granted and not being converted; there's nothing to cancel.

    Signed-off-by: David Teigland

    David Teigland
     
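    A hedged sketch of the check (the flag and status names are from
    the dlm; the exact placement and return value are illustrative):

        if ((args->flags & DLM_LKF_CANCEL) &&
            lkb->lkb_status == DLM_LKSTS_GRANTED &&
            !lkb->lkb_wait_type)
                return -EBUSY;  /* granted and not converting:
                                 * nothing to cancel */
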
  • When a conversion completes successfully and finds that a cancel
    of the convert is still in progress (which is now a moot point),
    preemptively clear the state associated with the outstanding cancel.
    That state could cause a subsequent conversion to be ignored.

    Also, improve the consistency and content of error and debug
    messages in this area.

    Signed-off-by: David Teigland

    David Teigland
     

24 Dec, 2008

3 commits

  • Record the time the latest blocking callback was queued for
    a lock. This will be used for debugging in combination with
    lock queue timestamp changes in the previous patch.

    Signed-off-by: David Teigland

    David Teigland
     
  • Use ktime instead of jiffies for timestamping lkb's. Also stamp the
    time on every lkb whenever it's added to a resource queue, instead of
    just stamping locks subject to timeouts. This will allow us to use
    timestamps more widely for debugging all locks.

    Signed-off-by: David Teigland

    David Teigland
     
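    The stamping itself is one line wherever an lkb joins a queue
    (ktime_get() is the real API; the placement shown is illustrative):

        lkb->lkb_timestamp = ktime_get();  /* ns resolution, not jiffies */
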
  • The lkb bastmode value is set in the context of processing the
    lock, and read by the dlm_astd thread. Because it's accessed
    in these two separate contexts, the writing/reading ought to
    be done under a lock. This is simple to do by setting it and
    reading it when the lkb is added to and removed from dlm_astd's
    callback list, which is properly locked.

    Signed-off-by: David Teigland

    David Teigland
     
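    A sketch of the scheme: both accesses happen while holding the
    (illustratively named) ast queue lock, so the field needs no
    locking of its own:

        /* writer, in lock processing */
        spin_lock(&ast_queue_lock);
        lkb->lkb_bastmode = mode;
        list_add_tail(&lkb->lkb_astqueue, &ast_queue);
        spin_unlock(&ast_queue_lock);

        /* reader, in dlm_astd */
        spin_lock(&ast_queue_lock);
        list_del(&lkb->lkb_astqueue);
        bastmode = lkb->lkb_bastmode;
        spin_unlock(&ast_queue_lock);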

15 Jul, 2008

2 commits

  • gcc 4.3.0 correctly emits the following warning:
    search_rsb_list does not set *r_ret if no dlm_rsb is found,
    and _search_rsb may pass the uninitialized value upstream
    on the error path when both calls to search_rsb_list
    return a non-zero error.

    The fix (sketched below) sets *r_ret to NULL on search_rsb_list's
    not-found path.

    Signed-off-by: Benny Halevy
    Signed-off-by: David Teigland

    Benny Halevy
     
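    The fix, in essence (close to the real function, though details
    such as the flags argument are elided here):

        static int search_rsb_list(struct list_head *head, char *name,
                                   int len, struct dlm_rsb **r_ret)
        {
                struct dlm_rsb *r;

                list_for_each_entry(r, head, res_hashchain) {
                        if (len == r->res_length &&
                            !memcmp(name, r->res_name, len))
                                goto found;
                }
                *r_ret = NULL;  /* the added line: nothing stale upstream */
                return -EBADR;
         found:
                *r_ret = r;
                return 0;
        }
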
  • The fix in commit 3650925893469ccb03dbcc6a440c5d363350f591 was addressing
    the case of a granted PR lock with waiting PR and CW locks. It's a
    special case that requires forcing a CW bast. However, that forced CW
    bast was incorrectly applied in a second condition, where the granted
    lock was CW. So the holder of a CW lock could receive an extraneous CW
    bast instead of a PR bast. This fix narrows the original special case to
    what was intended.

    Signed-off-by: David Teigland

    David Teigland
     

31 Jan, 2008

1 commit

  • To prevent the master of an rsb from changing rapidly, an unused rsb is kept
    on the "toss list" for a period of time to be reused. The toss list was
    being cleared completely for each recovery, which is unnecessary. Much of
    the benefit of the toss list can be maintained if nodes keep rsb's in their
    toss list that they are the master of. These rsb's need to be included
    when the resource directory is rebuilt during recovery.

    Signed-off-by: David Teigland

    David Teigland
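     
    A sketch of the recovery-time policy (is_master() and
    dlm_free_rsb() are real dlm helpers; the loop shape is
    illustrative): keep a tossed rsb if this node masters it, so it
    can be reported when the resource directory is rebuilt, and free
    the rest as before:

        list_for_each_entry_safe(r, safe, &ls->ls_rsbtbl[b].toss,
                                 res_hashchain) {
                if (!is_master(r)) {
                        list_del(&r->res_hashchain);
                        dlm_free_rsb(r);
                }
                /* masters stay tossed and join the directory rebuild */
        }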