26 Mar, 2011

1 commit


27 Feb, 2010

2 commits

  • The current locking mechanism uses an ll/sc sequence to release a
    spinlock. This is slower than a wmb() followed by a store to unlock.

    The branching forward to .subsection 2 on sc failure slows down the
    contended case. So we get rid of that part too.

    Since we are now working on naturally aligned u16 values, we can get
    rid of a masking operation as the LHU already does the right thing.
    The ANDI instructions are reversed for better scheduling on
    multi-issue CPUs.

    On a 12-CPU 750MHz Octeon cn5750 this patch improves IPv4 UDP packet
    forwarding rates from 3.58*10^6 PPS to 3.99*10^6 PPS, or about 11%.

    Signed-off-by: David Daney
    To: linux-mips@linux-mips.org
    Patchwork: http://patchwork.linux-mips.org/patch/937/
    Signed-off-by: Ralf Baechle

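The unlock change described above can be sketched in portable C11 atomics. This is an illustrative analogue, not the actual MIPS assembly; the struct layout, field names, and use of `stdatomic.h` are assumptions made for the sketch.

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical ticket-lock layout matching the description above:
 * two naturally aligned u16 fields, so 16-bit loads and stores
 * (LHU/SH on MIPS) need no extra masking. */
struct ticket_lock {
    _Atomic unsigned short serving; /* ticket currently being served */
    _Atomic unsigned short next;    /* next ticket to hand out */
};

/* Old style: release with an ll/sc-like read-modify-write loop. */
static void unlock_llsc(struct ticket_lock *l)
{
    unsigned short s = atomic_load(&l->serving);
    while (!atomic_compare_exchange_weak(&l->serving, &s,
                                         (unsigned short)(s + 1)))
        ; /* retry on sc failure */
}

/* New style: write barrier, then a plain 16-bit store. The lock owner
 * is the only writer of 'serving', so no atomic update loop is needed. */
static void unlock_store(struct ticket_lock *l)
{
    unsigned short s =
        atomic_load_explicit(&l->serving, memory_order_relaxed);
    atomic_thread_fence(memory_order_release); /* wmb() analogue */
    atomic_store_explicit(&l->serving, (unsigned short)(s + 1),
                          memory_order_relaxed);
}
```

Both paths advance the serving ticket by one; the second avoids the retry loop and the forward branch on store failure that slowed the contended case.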
  • Replace some instances of smp_llsc_mb() with a new macro
    smp_mb__before_llsc(). It is used before ll/sc sequences that are
    documented as needing write barrier semantics.

    The default implementation of smp_mb__before_llsc() is just smp_llsc_mb(),
    so there are no changes in semantics.

    Also simplify definition of smp_mb(), smp_rmb(), and smp_wmb() to be just
    barrier() in the non-SMP case.

    Signed-off-by: David Daney
    To: linux-mips@linux-mips.org
    Patchwork: http://patchwork.linux-mips.org/patch/851/
    Signed-off-by: Ralf Baechle

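The macro layering described above can be sketched as follows. The barrier bodies here are userspace stand-ins, not the real MIPS definitions; only the layering (the default falling back to smp_llsc_mb(), and the non-SMP case collapsing to barrier()) reflects the commit.

```c
#include <assert.h>

/* Compiler-only barrier, as in the kernel. */
#define barrier() __asm__ __volatile__("" ::: "memory")

#ifdef CONFIG_SMP
# define smp_llsc_mb() __sync_synchronize()
# define smp_mb()      __sync_synchronize()
# define smp_rmb()     __sync_synchronize()
# define smp_wmb()     __sync_synchronize()
#else
/* Non-SMP case simplified to just barrier(). */
# define smp_llsc_mb() barrier()
# define smp_mb()      barrier()
# define smp_rmb()     barrier()
# define smp_wmb()     barrier()
#endif

/* Default: smp_mb__before_llsc() is just smp_llsc_mb(), so there is no
 * change in semantics; an architecture can override it with a cheaper
 * barrier that only orders prior writes against the following ll/sc. */
#ifndef smp_mb__before_llsc
# define smp_mb__before_llsc() smp_llsc_mb()
#endif
```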

15 Dec, 2009

4 commits

  • Name space cleanup for rwlock functions. No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: David S. Miller
    Acked-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org

  • Not strictly necessary for -rt, as -rt does not have non-sleeping
    rwlocks, but it is odd not to have a consistent naming convention.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: David S. Miller
    Acked-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org

  • Name space cleanup. No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: David S. Miller
    Acked-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org

  • The raw_spin* namespace was taken by lockdep for the architecture
    specific implementations. raw_spin_* would be the ideal name space for
    the spinlocks which are not converted to sleeping locks in preempt-rt.

    Linus suggested converting the raw_ locks to arch_ locks and cleaning
    up the name space instead of using an artificial name like core_spin,
    atomic_spin or whatever.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: David S. Miller
    Acked-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org


03 Apr, 2009

1 commit

  • Pass the original flags to rwlock arch-code, so that it can re-enable
    interrupts if implemented for that architecture.

    Initially, make __raw_read_lock_flags and __raw_write_lock_flags stubs
    which just do the same thing as non-flags variants.

    Signed-off-by: Petr Tesarik
    Signed-off-by: Robin Holt
    Acked-by: Peter Zijlstra
    Cc:
    Acked-by: Ingo Molnar
    Cc: "Luck, Tony"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

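The initial stubs can be sketched like this. The counter-based demo lock type and plain lock bodies below are purely illustrative; only the pattern of the _flags variants deferring to the non-flags variants comes from the commit.

```c
#include <assert.h>

/* Illustrative arch lock type and plain (non-flags) variants. */
typedef struct { int readers; } demo_rwlock_t;

static void __raw_read_lock(demo_rwlock_t *lock)  { lock->readers++; }
static void __raw_write_lock(demo_rwlock_t *lock) { lock->readers = -1; }

/* Stub _flags variants: they accept the saved interrupt flags but
 * ignore them, simply doing the same thing as the non-flags variants.
 * An architecture that wants to re-enable interrupts while spinning
 * can later supply real bodies that use 'flags'. */
#define __raw_read_lock_flags(lock, flags)  __raw_read_lock(lock)
#define __raw_write_lock_flags(lock, flags) __raw_write_lock(lock)
```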

30 Mar, 2009

1 commit

  • If the lock is not acquired and has to spin *and* the second attempt
    to acquire the lock fails, the delay time is not masked by the ticket
    range mask. If the ticket number wraps around to zero, the result is
    that the lock sampling delay is essentially infinite (due to casting
    -1 to an unsigned int).

    The fix: Always mask the difference between my_ticket and the current
    ticket value before calculating the delay.

    Signed-off-by: David Daney
    Signed-off-by: Ralf Baechle

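The wrap-around bug and its fix can be illustrated with a small sketch. The 14-bit ticket width is a hypothetical value chosen for the example, not necessarily the width the MIPS code uses.

```c
#include <assert.h>

#define TICKET_BITS 14u                      /* hypothetical width */
#define TICKET_MASK ((1u << TICKET_BITS) - 1u)

/* Buggy: if the serving ticket has wrapped around past my_ticket, the
 * unsigned subtraction underflows to a value near UINT_MAX, making the
 * lock sampling delay essentially infinite. */
static unsigned int delay_buggy(unsigned int my_ticket,
                                unsigned int now_serving)
{
    return my_ticket - now_serving;
}

/* Fixed: always mask the difference into the ticket range first, so
 * the delay stays bounded even when the ticket wraps to zero. */
static unsigned int delay_fixed(unsigned int my_ticket,
                                unsigned int now_serving)
{
    return (my_ticket - now_serving) & TICKET_MASK;
}
```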

10 Feb, 2009

1 commit

  • Architectures other than mips and x86 are not using ticket spinlocks.
    Therefore, the contention on the lock is meaningless, since there is
    nobody known to be waiting on it (arguably /fairly/ unfair locks).

    Dummy it out to return 0 on other architectures.

    Signed-off-by: Kyle McMartin
    Acked-by: Ralf Baechle
    Acked-by: Ingo Molnar
    Signed-off-by: Linus Torvalds

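A sketch of the dummy-out, with hypothetical names for the config guard, the lock fields, and the query macro:

```c
#include <assert.h>

/* Illustrative ticket-lock layout. */
typedef struct { unsigned short head, tail; } demo_spinlock_t;

/* On ticket-lock architectures, contention is visible as a gap between
 * the ticket counters; everywhere else nobody is known to be waiting,
 * so the query is dummied out to return 0. */
#ifdef HAVE_TICKET_SPINLOCKS            /* hypothetical guard */
# define demo_spin_is_contended(l) ((l)->tail != (l)->head)
#else
# define demo_spin_is_contended(l) (((void)(l)), 0)
#endif
```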

15 Oct, 2008

1 commit


11 Oct, 2008

2 commits