01 Oct, 2006

2 commits

  • Powerpc already has a directed yield for CONFIG_PREEMPT="n". To make it
    work with CONFIG_PREEMPT="y" as well the _raw_{spin,read,write}_relax
    primitives need to be defined to call __spin_yield() for spinlocks and
    __rw_yield() for rw-locks.

    Acked-by: Paul Mackerras
    Signed-off-by: Martin Schwidefsky
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Martin Schwidefsky
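
    The wiring described above can be sketched as follows. This is a
    user-space illustration only: the lock types, counters and yield helpers
    are stand-ins for the real powerpc __spin_yield()/__rw_yield() code, and
    the macro names are taken from the commit text.

    ```c
    #include <assert.h>

    /* Stand-in lock types and yield helpers (illustrative, not the
     * actual powerpc implementations). */
    typedef struct { volatile unsigned int slock; } raw_spinlock_t;
    typedef struct { volatile int lock; } raw_rwlock_t;

    static int spin_yields, rw_yields;

    static void __spin_yield(raw_spinlock_t *lock) { (void)lock; spin_yields++; }
    static void __rw_yield(raw_rwlock_t *lock)     { (void)lock; rw_yields++; }

    /* The new hooks: with CONFIG_PREEMPT="y" the generic lock loops call
     * these instead of a plain cpu_relax(), so the architecture can do a
     * directed yield to the lock holder. */
    #define _raw_spin_relax(l)  __spin_yield(l)
    #define _raw_read_relax(l)  __rw_yield(l)
    #define _raw_write_relax(l) __rw_yield(l)

    int main(void)
    {
        raw_spinlock_t sl = { 0 };
        raw_rwlock_t   rw = { 0 };

        _raw_spin_relax(&sl);
        _raw_read_relax(&rw);
        _raw_write_relax(&rw);

        assert(spin_yields == 1);
        assert(rw_yields == 2);
        return 0;
    }
    ```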
     
  • On systems running with virtual cpus there is optimization potential in
    regard to spinlocks and rw-locks. If the virtual cpu that has taken a
    lock is known to the cpu that wants to acquire the same lock, it is
    beneficial to yield the timeslice of the acquiring virtual cpu in favour
    of the cpu that holds the lock (directed yield).

    With CONFIG_PREEMPT="n" this can be implemented by the architecture without
    common code changes. Powerpc already does this.

    With CONFIG_PREEMPT="y" the lock loops are coded with _raw_spin_trylock,
    _raw_read_trylock and _raw_write_trylock in kernel/spinlock.c. If the
    lock cannot be taken, cpu_relax is called. A directed yield is not
    possible because cpu_relax doesn't know anything about the lock. To be
    able to yield the lock in favour of the current lock holder, variants of
    cpu_relax for spinlocks and rw-locks are needed. The new _raw_spin_relax,
    _raw_read_relax and _raw_write_relax primitives differ from cpu_relax in
    that they take an argument: a pointer to the lock structure.

    Signed-off-by: Martin Schwidefsky
    Cc: Ingo Molnar
    Cc: Paul Mackerras
    Cc: Haavard Skinnemoen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Martin Schwidefsky
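
    The shape of the change to the lock loop can be sketched in user space
    with a toy lock built on C11 atomics. All names here are illustrative
    stand-ins for the kernel's; the point is that the relax hook receives
    the lock pointer, which a plain cpu_relax() cannot.

    ```c
    #include <assert.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct { atomic_flag locked; } toy_spinlock_t;

    static int relax_calls;

    static bool toy_trylock(toy_spinlock_t *l)
    {
        return !atomic_flag_test_and_set(&l->locked);
    }

    static void toy_spin_relax(toy_spinlock_t *l)
    {
        /* A real arch could yield the timeslice to l's holder here;
         * this toy just simulates the holder releasing the lock. */
        relax_calls++;
        atomic_flag_clear(&l->locked);
    }

    static void toy_lock(toy_spinlock_t *l)
    {
        while (!toy_trylock(l))
            toy_spin_relax(l);   /* lock-aware replacement for cpu_relax() */
    }

    int main(void)
    {
        toy_spinlock_t l = { ATOMIC_FLAG_INIT };

        toy_lock(&l);            /* uncontended: no relax needed */
        assert(relax_calls == 0);

        toy_lock(&l);            /* contended once, then "holder" releases */
        assert(relax_calls == 1);
        return 0;
    }
    ```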
     

13 Sep, 2006

1 commit

  • This changes the writeX family of functions to have a sync instruction
    before the MMIO store rather than after, because the generally expected
    behaviour is that the device receiving the MMIO store can be guaranteed
    to see the effects of any preceding writes to normal memory.

    To preserve ordering between writeX and readX, and to preserve ordering
    between preceding stores and the readX, the readX family of functions
    have had a sync added before the load.

    Although writeX followed by spin_unlock is not officially guaranteed
    to keep the writeX inside the spin-locked region unless an mmiowb()
    is used, there are currently drivers that depend on the previous
    behaviour on powerpc, which was that the mmiowb wasn't actually required.
    Therefore we have a per-cpu flag that is set by writeX, cleared by
    __raw_spin_lock and mmiowb, and tested by __raw_spin_unlock. If it is
    set, __raw_spin_unlock does a sync and clears it.

    This changes both 32-bit and 64-bit readX/writeX. 32-bit already has a
    sync in __raw_spin_unlock (since lwsync doesn't exist on 32-bit), and thus
    doesn't need the per-cpu flag.

    Tested on G5 (PPC970) and POWER5.

    Signed-off-by: Paul Mackerras

    Paul Mackerras
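
    The per-cpu flag protocol described above can be modelled in user space
    as follows. sync_barrier() stands in for executing the sync instruction,
    and all names are illustrative, not the kernel's exact identifiers.

    ```c
    #include <assert.h>
    #include <stdbool.h>

    static bool io_sync_pending;   /* per-cpu flag in the real kernel */
    static int  sync_count;

    static void sync_barrier(void) { sync_count++; }   /* models "sync" */

    static void toy_writel(unsigned int val, volatile unsigned int *addr)
    {
        sync_barrier();        /* sync BEFORE the store, per this commit */
        *addr = val;
        io_sync_pending = true;
    }

    static void toy_spin_lock(void) { io_sync_pending = false; }
    static void toy_mmiowb(void)    { sync_barrier(); io_sync_pending = false; }

    static void toy_spin_unlock(void)
    {
        if (io_sync_pending) {     /* a writeX happened in this region */
            sync_barrier();        /* order it before the unlock store */
            io_sync_pending = false;
        }
    }

    int main(void)
    {
        volatile unsigned int reg = 0;

        toy_spin_lock();
        toy_spin_unlock();
        assert(sync_count == 0);   /* no MMIO in region: no extra sync */

        toy_spin_lock();
        toy_writel(1, &reg);
        toy_spin_unlock();
        assert(reg == 1);
        assert(sync_count == 2);   /* one before store, one at unlock */
        (void)toy_mmiowb;
        return 0;
    }
    ```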
     

13 Jan, 2006

2 commits

  • eieio provides only store - store ordering. When used to order an unlock
    operation, loads may leak out of the critical region. This is potentially
    buggy; one example is if a user wants to atomically read a couple of
    values.

    We can solve this with an lwsync, which orders everything except
    store - load.

    I removed the (now unused) EIEIO_ON_SMP macros and the C versions
    isync_on_smp and eieio_on_smp now that we don't use them. I also removed
    some old comments that were used to identify inline spinlocks in
    assembly; they don't make sense now that our locks are out of line.

    Another interesting thing was that read_unlock was using an eieio even
    though the rest of the spinlock code had already been converted to
    use lwsync.

    Signed-off-by: Anton Blanchard
    Signed-off-by: Paul Mackerras

    Anton Blanchard
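
    The barrier semantics the commit relies on can be summarized as a small
    table, encoded here as data so the reasoning can be checked. The entries
    follow the commit's characterization (eieio: store - store only; lwsync:
    everything except store - load); consult the Power ISA for the full
    storage-class details, which this sketch deliberately simplifies.

    ```c
    #include <assert.h>

    /* Which reorderings each barrier forbids. Order of fields:
     * load-load, load-store, store-load, store-store (1 = ordered). */
    struct barrier { const char *name; int ll, ls, sl, ss; };

    static const struct barrier eieio  = { "eieio",  0, 0, 0, 1 };
    static const struct barrier lwsync = { "lwsync", 1, 1, 0, 1 };
    static const struct barrier sync   = { "sync",   1, 1, 1, 1 };

    int main(void)
    {
        /* eieio orders only store - store, so loads can leak out of a
         * critical region whose unlock is ordered with it... */
        assert(eieio.ss == 1 && eieio.ll == 0 && eieio.ls == 0);

        /* ...while lwsync orders everything except store - load, which
         * is sufficient for an unlock (the unlock itself is a store). */
        assert(lwsync.ll && lwsync.ls && lwsync.ss && !lwsync.sl);

        (void)sync;   /* full barrier, used by readX/writeX instead */
        return 0;
    }
    ```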
     
  • At present the lppaca - the structure shared with the iSeries
    hypervisor and phyp - is contained within the PACA, our own low-level
    per-cpu structure. This doesn't have to be so; the patch below
    removes it, making a separate array of lppaca structures.

    This saves approximately 500*NR_CPUS bytes of image size and kernel
    memory, because we don't need an aligning gap between the Linux and
    hypervisor portions of every PACA. On the other hand, it means an
    extra level of dereference in many accesses to the lppaca.

    The patch also gets rid of several places where we assign the paca
    address to a local variable for no particular reason.

    Signed-off-by: David Gibson
    Signed-off-by: Paul Mackerras

    David Gibson
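
    The layout trade-off can be illustrated with toy structures. All sizes
    and field names here are invented for illustration (the commit cites
    roughly 500 bytes saved per CPU); only the shape of the change matches.

    ```c
    #include <assert.h>
    #include <stddef.h>

    #define NR_CPUS 4

    /* Hypervisor-shared area with a strict alignment requirement. */
    struct lppaca { char hv_shared[640]; } __attribute__((aligned(128)));

    /* Before: lppaca embedded in the paca, forcing padding in every paca. */
    struct paca_embedded {
        struct lppaca lppaca;
        char linux_private[200];
    } __attribute__((aligned(128)));

    /* After: the paca keeps only a pointer; lppacas live in one array.
     * Accesses pay an extra dereference, but each paca shrinks. */
    struct lppaca lppaca_array[NR_CPUS];
    struct paca_separate {
        struct lppaca *lppaca_ptr;
        char linux_private[200];
    };

    int main(void)
    {
        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
            struct paca_separate p = { .lppaca_ptr = &lppaca_array[cpu] };
            assert(p.lppaca_ptr == &lppaca_array[cpu]);
        }
        /* The per-cpu structure is smaller without the embedded area. */
        assert(sizeof(struct paca_separate) < sizeof(struct paca_embedded));
        return 0;
    }
    ```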
     

09 Jan, 2006

1 commit

  • include/asm-ppc/ had #ifdef __KERNEL__ in all header files that
    are not meant for use by user space; include/asm-powerpc does
    not have this yet.

    This patch gets us a lot closer to that. There are a few cases
    where I was not sure, so I left them out. I have verified
    that no CONFIG_* symbols are used outside of __KERNEL__
    any more and that there are no obvious compile errors when
    including any of the headers in user space libraries.

    Signed-off-by: Arnd Bergmann
    Signed-off-by: Paul Mackerras

    Arnd Bergmann
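
    The guard pattern being added looks like the following. This is an
    illustrative header, not one from the actual patch: the guard macro,
    constant and function names are invented.

    ```c
    #include <assert.h>

    /* Kernel-only definitions are fenced with __KERNEL__ so that user
     * space sees only the exported ABI and never any CONFIG_* symbols. */
    #ifndef _ASM_POWERPC_EXAMPLE_H
    #define _ASM_POWERPC_EXAMPLE_H

    /* Visible to user space: plain constants and types only. */
    #define EXAMPLE_MAGIC 0x45

    #ifdef __KERNEL__
    /* Kernel-internal: may reference CONFIG_* symbols and kernel types. */
    #ifdef CONFIG_SMP
    extern void example_smp_init(void);
    #endif
    #endif /* __KERNEL__ */

    #endif /* _ASM_POWERPC_EXAMPLE_H */

    int main(void)
    {
        /* Built in user space, so only the exported part is visible. */
        assert(EXAMPLE_MAGIC == 0x45);
        return 0;
    }
    ```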
     

19 Nov, 2005

1 commit