13 Sep, 2011

1 commit

  • There is no reason to allow the lock protecting rwsems (the
    ownerless variant) to be preemptible on -rt. Convert it to raw.

    In mainline this change documents the low level nature of
    the lock - otherwise there's no functional difference. Lockdep
    and Sparse checking will work as usual.

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

27 Jan, 2011

1 commit

  • Peter Zijlstra pointed out, that the only user of asmregparm (x86) is
    compiling the kernel already with -mregparm=3. So the annotation of
    the rwsem functions is redundant. Remove it.

    Signed-off-by: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: David Howells
    Cc: Benjamin Herrenschmidt
    Cc: Matt Turner
    Cc: Tony Luck
    Cc: Heiko Carstens
    Cc: Paul Mundt
    Cc: David Miller
    Cc: Chris Zankel
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

10 Aug, 2010

5 commits

  • More code can be pushed from rwsem_down_read_failed and
    rwsem_down_write_failed into rwsem_down_failed_common.

    The following change, adding down_read_critical infrastructure support,
    also benefits from having the flags available in a register rather than
    having to fish them out of the struct rwsem_waiter...

    Signed-off-by: Michel Lespinasse
    Acked-by: David Howells
    Cc: Mike Waychison
    Cc: Suleiman Souhlal
    Cc: Ying Han
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michel Lespinasse
     
  • This change addresses the following situation:

    - Thread A acquires the rwsem for read
    - Thread B tries to acquire the rwsem for write, notices there is already
    an active owner for the rwsem.
    - Thread C tries to acquire the rwsem for read, notices that thread B already
    tried to acquire it.
    - Thread C grabs the spinlock and queues itself on the wait queue.
    - Thread B grabs the spinlock and queues itself behind C. At this point A is
    the only remaining active owner on the rwsem.

    In this situation thread B could notice that it was the last active writer
    on the rwsem, and decide to wake C to let it proceed in parallel with A
    since they both only want the rwsem for read.

    Signed-off-by: Michel Lespinasse
    Acked-by: David Howells
    Cc: Mike Waychison
    Cc: Suleiman Souhlal
    Cc: Ying Han
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michel Lespinasse
     
  • Previously each waiting thread added a bias of RWSEM_WAITING_BIAS. With
    this change, the bias is added only once to indicate that the wait list is
    non-empty.

    This has a few nice properties which will be used in following changes:
    - when the spinlock is held and the waiter list is known to be non-empty,
      count < RWSEM_WAITING_BIAS <=> there is an active writer on that sem
    - count == RWSEM_WAITING_BIAS <=> there are waiting threads and no
      active readers/writers on that sem

    Signed-off-by: Michel Lespinasse
    Acked-by: David Howells
    Cc: Mike Waychison
    Cc: Suleiman Souhlal
    Cc: Ying Han
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michel Lespinasse
     
  • In __rwsem_do_wake(), we can skip the active count check unless we come
    there from up_xxxx(). Also when checking the active count, it is not
    actually necessary to increment it; this allows us to get rid of the read
    side undo code and simplify the calculation of the final rwsem count
    adjustment once we've counted the reader threads to wake.

    The basic observation is the following. When there are waiter threads on
    a rwsem and the spinlock is held, other threads can only increment the
    active count by trying to grab the rwsem in down_xxxx(). However
    down_xxxx() will notice there are waiter threads and take the down_failed
    path, blocking to acquire the spinlock on the way there. Therefore, a
    thread observing an active count of zero with waiters queued and the
    spinlock held is protected against other threads acquiring the rwsem
    until it wakes the last waiter or releases the spinlock.

    Signed-off-by: Michel Lespinasse
    Acked-by: David Howells
    Cc: Mike Waychison
    Cc: Suleiman Souhlal
    Cc: Ying Han
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michel Lespinasse
     
  • This is in preparation for later changes in the series.

    In __rwsem_do_wake(), the first queued waiter is checked first in order to
    determine whether it's a writer or a reader. The code paths diverge at
    this point. The code that checks and increments the rwsem active count is
    duplicated on both sides - the point is that later changes in the series
    will be able to independently modify both sides.

    Signed-off-by: Michel Lespinasse
    Acked-by: David Howells
    Cc: Mike Waychison
    Cc: Suleiman Souhlal
    Cc: Ying Han
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michel Lespinasse
     

13 May, 2010

1 commit

    If there are no active threads using a semaphore, it is always correct
    to unqueue blocked threads. This seems to be what was intended in the
    undo code.

    What was done instead, was to look for a sem count of zero - this is an
    impossible situation, given that at least one thread is known to be
    queued on the semaphore. The code might be correct as written, but it's
    hard to reason about and it's not what was intended (otherwise the goto
    out would have been unconditional).

    Go for checking the active count - the alternative is not worth the
    headache.

    Signed-off-by: Michel Lespinasse
    Signed-off-by: David Howells
    Signed-off-by: Linus Torvalds

    Michel Lespinasse
     

30 Jan, 2008

1 commit

    introduce the "asmregparm" calling convention: for functions
    implemented in assembly with a fixed regparm input-parameter
    calling convention.

    mark the semaphore and rwsem slowpath functions with it.

    Signed-off-by: Ingo Molnar
    Signed-off-by: Miklos Szeredi
    Signed-off-by: Thomas Gleixner

    Ingo Molnar
     

18 Dec, 2007

1 commit

  • The following commit

    http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=fdf8cb0909b531f9ae8f9b9d7e4eb35ba3505f07

    un-inlined a low-level rwsem function, but did not mark it as __sched.
    The result is that it now shows up as thread wchan (which also affects
    /proc/profile stats). The following simple patch fixes this by properly
    marking rwsem_down_failed_common() as a __sched function.

    The patch also marks down_read() and down_write() proper as __sched -
    a change which is up for discussion. For profiling, it is pretty much
    useless to know that a semaphore is being held - it is necessary to know
    _which_ one. By going up another frame on the stack, the information
    becomes much more useful.

    In summary, the below change to lib/rwsem.c should be applied; the
    changes to kernel/rwsem.c could be applied if other kernel hackers agree
    with my proposal that down_read()/down_write() in the profile is not
    enough.

    [ akpm@linux-foundation.org: build fix ]

    Signed-off-by: Livio Soares
    Signed-off-by: Andrew Morton
    Signed-off-by: Ingo Molnar

    Livio Soares
     

30 Sep, 2006

1 commit

  • Un-inlining rwsem_down_failed_common() (two callsites) reduced lib/rwsem.o
    on my Athlon (gcc 4.1.2) from 5935 to 5480 bytes (455 bytes saved).

    I thus guess that reduced icache footprint (and better function caching) is
    worth more than any function call overhead.

    Signed-off-by: Andreas Mohr
    Cc: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andreas Mohr
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds