21 Nov, 2011

1 commit

  • The Kconfig options for the PL310 errata workarounds do not use a
    consistent naming scheme for either the config option or the bool
    description.

    This patch tidies up the options by ensuring that the bool descriptions
    are prefixed with "PL310 errata:" and the config options are prefixed
    with PL310_ERRATA_, making it much clearer in menuconfig as to what the
    workarounds are for.

    Signed-off-by: Will Deacon
    Signed-off-by: Russell King

    Will Deacon
     

26 Oct, 2011

1 commit

  • * 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits)
    rtmutex: Add missing rcu_read_unlock() in debug_rt_mutex_print_deadlock()
    lockdep: Comment all warnings
    lib: atomic64: Change the type of local lock to raw_spinlock_t
    locking, lib/atomic64: Annotate atomic64_lock::lock as raw
    locking, x86, iommu: Annotate qi->q_lock as raw
    locking, x86, iommu: Annotate irq_2_ir_lock as raw
    locking, x86, iommu: Annotate iommu->register_lock as raw
    locking, dma, ipu: Annotate bank_lock as raw
    locking, ARM: Annotate low level hw locks as raw
    locking, drivers/dca: Annotate dca_lock as raw
    locking, powerpc: Annotate uic->lock as raw
    locking, x86: mce: Annotate cmci_discover_lock as raw
    locking, ACPI: Annotate c3_lock as raw
    locking, oprofile: Annotate oprofilefs lock as raw
    locking, video: Annotate vga console lock as raw
    locking, latencytop: Annotate latency_lock as raw
    locking, timer_stats: Annotate table_lock as raw
    locking, rwsem: Annotate inner lock as raw
    locking, semaphores: Annotate inner lock as raw
    locking, sched: Annotate thread_group_cputimer as raw
    ...

    Fix up conflicts in kernel/posix-cpu-timers.c manually: making
    cputimer->cputime a raw lock conflicted with the ABBA fix in commit
    bcd5cff7216f ("cputimer: Cure lock inversion").

    Linus Torvalds
     

17 Oct, 2011

4 commits


13 Sep, 2011

1 commit

  • Annotate the low level hardware locks which must not be preempted.

    In mainline this change documents the low level nature of
    the lock - otherwise there's no functional difference. Lockdep
    and Sparse checking will work as usual.

    Signed-off-by: Thomas Gleixner
    Cc: Russell King
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

07 Sep, 2011

1 commit

  • Fighting unfixed U-Boots and other beasts that may leave the cache in
    a locked-down state when starting the kernel, we make sure to
    disable all cache lock-down when initializing the l2x0 so that we
    are in a known state.

    Cc: Srinidhi Kasagar
    Cc: Rabin Vincent
    Cc: Adrian Bunk
    Cc: Rob Herring
    Cc: Catalin Marinas
    Cc: Will Deacon
    Reviewed-by: Santosh Shilimkar
    Reported-by: Jan Rinze
    Tested-by: Robert Marklund
    Signed-off-by: Linus Walleij
    Signed-off-by: Russell King

    Linus Walleij
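    The unlock sequence this commit describes can be sketched in plain C
    against a mocked register file. The lock-down D/I offsets below are the
    PL310's; the `writel` mock and the `nr_lock_regs` parameter are
    illustrative assumptions, not the mainline interface:

```c
#include <stdint.h>

#define LOCKDOWN_WAY_D_BASE 0x900  /* data lock-down register, pair 0 */
#define LOCKDOWN_WAY_I_BASE 0x904  /* instruction lock-down register, pair 0 */
#define LOCKDOWN_STRIDE     8      /* distance between register pairs */

/* Mocked register file standing in for the memory-mapped L2X0. */
static uint32_t regs[0x1000 / 4];
static void writel(uint32_t v, uint32_t off) { regs[off / 4] = v; }

/* Clear every lock-down register pair so the cache starts in a known,
 * fully unlocked state regardless of what the boot loader left behind.
 * nr_lock_regs would be 8 on PL310 (one pair per master port) and 1 on
 * older L2x0 parts -- a parameter assumed here for illustration. */
static void l2x0_unlock(int nr_lock_regs)
{
    for (int i = 0; i < nr_lock_regs; i++) {
        writel(0, LOCKDOWN_WAY_D_BASE + i * LOCKDOWN_STRIDE);
        writel(0, LOCKDOWN_WAY_I_BASE + i * LOCKDOWN_STRIDE);
    }
}
```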
     

07 Jul, 2011

1 commit

  • The l2x0_disable function attempts to writel with the l2x0_lock held.
    This results in deadlock when the writel contains an outer_sync call
    for the platform since the l2x0_lock is already held by the disable
    function. A further problem is that disabling the L2 without flushing it
    first can lead to the spin_lock operation becoming visible after the
    spin_unlock, causing any subsequent L2 maintenance to deadlock.

    This patch replaces the writel with a call to writel_relaxed in the
    disabling code and adds a flush before disabling in the control
    register, preventing livelock from occurring.

    Acked-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Russell King

    Will Deacon
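    The resulting disable sequence can be sketched with mocked accessors:
    flush first, then clear the control register with a relaxed write so
    no outer_sync (which would need the already-held l2x0_lock) is
    issued. The mocks and the CTRL offset usage here are illustrative:

```c
#include <stdint.h>

#define L2X0_CTRL 0x100  /* control register: bit 0 enables the cache */

static uint32_t regs[0x1000 / 4];
static int flushed;

/* Mocks standing in for the real maintenance op and MMIO accessor. */
static void l2x0_flush_all(void)                    { flushed = 1; }
static void writel_relaxed(uint32_t v, uint32_t off) { regs[off / 4] = v; }

/* Disable sequence per the commit: flush before disabling, and use a
 * *relaxed* write so no barrier/outer_sync runs under the lock.
 * Locking is elided in this userspace sketch. */
static void l2x0_disable(void)
{
    /* spin_lock_irqsave(&l2x0_lock, flags); */
    l2x0_flush_all();
    writel_relaxed(0, L2X0_CTRL);
    /* spin_unlock_irqrestore(&l2x0_lock, flags); */
}
```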
     

17 Mar, 2011

1 commit


09 Mar, 2011

1 commit

  • PL310 implements the Clean & Invalidate by Way L2 cache maintenance
    operation (offset 0x7FC). This operation runs in background so that
    PL310 can handle normal accesses while it is in progress. Under very
    rare circumstances, due to this erratum, write data can be lost when
    PL310 treats a cacheable write transaction during a Clean & Invalidate
    by Way operation.

    Workaround:
    1. Disable Write-Back and Cache Linefill (Debug Control Register)
    2. Clean & Invalidate by Way (0x7FC)
    3. Re-enable Write-Back and Cache Linefill (Debug Control Register)

    This patch also removes any OMAP dependency on the PL310 errata.

    Signed-off-by: Santosh Shilimkar
    Acked-by: Catalin Marinas
    Signed-off-by: Russell King

    Santosh Shilimkar
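    The three workaround steps can be sketched as a short C sequence over
    a mocked `writel` that records each register write. The register
    offsets are the PL310's; the trace mechanism is purely illustrative,
    and the real code also polls 0x7FC until the way mask clears:

```c
#include <stdint.h>

#define L2X0_CLEAN_INV_WAY 0x7FC  /* Clean & Invalidate by Way */
#define L2X0_DEBUG_CTRL    0xF40  /* Debug Control Register */
#define DBG_WB_LF_DISABLE  0x03   /* disable Write-Back and Cache Linefill */

/* Record every register write so the workaround order can be checked. */
static uint32_t trace_off[8], trace_val[8];
static int trace_n;
static void writel(uint32_t v, uint32_t off)
{
    trace_off[trace_n] = off;
    trace_val[trace_n] = v;
    trace_n++;
}

/* Erratum 727915 workaround: bracket the background Clean & Invalidate
 * by Way operation with Debug Control Register writes. */
static void l2x0_clean_inv_all(uint32_t way_mask)
{
    writel(DBG_WB_LF_DISABLE, L2X0_DEBUG_CTRL);
    writel(way_mask, L2X0_CLEAN_INV_WAY);
    /* real code waits here for the way mask in 0x7FC to clear */
    writel(0, L2X0_DEBUG_CTRL);
}
```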
     

19 Feb, 2011

1 commit

  • The effect of a cache sync operation is to drain the store buffer and
    wait for all internal buffers to be empty. Under normal conditions, the
    store buffer is able to merge normal memory writes within its 32-byte
    data buffers. Due to this erratum, present in r3p0, the effect of a
    cache sync operation on the store buffer still remains when the
    operation completes. This means the store buffer is always asked to
    drain, which prevents it from merging any further writes.

    This can severely affect write performance, especially for Normal
    Non-Cacheable memory.

    The proposed workaround is to replace the normal cache sync operation
    offset (0x730) with an offset targeting an unmapped PL310
    register, 0x740.

    Signed-off-by: Srinidhi Kasagar
    Acked-by: Linus Walleij
    Acked-by: Catalin Marinas
    Signed-off-by: Russell King

    Srinidhi Kasagar
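    The offset substitution amounts to a one-line decision; a minimal
    sketch, assuming a hypothetical helper and flag (the mainline code
    makes this choice differently, via the erratum Kconfig option):

```c
#include <stdint.h>

#define L2X0_CACHE_SYNC 0x730  /* normal cache sync register */
#define L310_SYNC_DUMMY 0x740  /* unmapped PL310 register used as the
                                * workaround sync target per the commit */

/* Pick the register offset used for sync operations on this part.
 * On the affected r3p0 release, targeting the unmapped 0x740 avoids
 * leaving the store buffer in a permanently-draining state. */
static uint32_t sync_reg_offset(int has_erratum_753970)
{
    return has_erratum_753970 ? L310_SYNC_DUMMY : L2X0_CACHE_SYNC;
}
```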
     

26 Oct, 2010

4 commits

  • For big buffers which exceed the cache size, maintenance operations
    by PA (physical address) are very slow. For such buffers, the
    maintenance operations can be sped up by using the way-based method.

    Signed-off-by: Santosh Shilimkar
    Acked-by: Catalin Marinas
    Acked-by: Linus Walleij

    Santosh Shilimkar
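    The size comparison behind this optimisation can be sketched as a
    small predicate; the function name is illustrative, but the test it
    encodes mirrors the commit's rationale:

```c
#include <stdint.h>

/* Decide between per-line maintenance by PA and a whole-cache,
 * way-based operation: a range at least as large as the cache touches
 * every line anyway, so the way-based method is never worse. */
static int use_way_based_flush(uint32_t start, uint32_t end,
                               uint32_t cache_size)
{
    return (end - start) >= cache_size;
}
```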
     
  • The cache size is needed to optimise range-based
    maintenance operations.

    Signed-off-by: Santosh Shilimkar
    Acked-by: Catalin Marinas
    Acked-by: Linus Walleij

    Santosh Shilimkar
     
  • Add flush_all, inv_all and disable functions to the l2x0 code. These
    functions are called from kexec code to prevent random crashes in the
    new kernel.

    Platforms like OMAP which control L2 enable/disable via SMI mode can
    override the outer_cache.disable() function to implement their own.

    Signed-off-by: Thomas Gleixner
    Acked-by: Catalin Marinas
    Acked-by: Linus Walleij

    Thomas Gleixner
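    The override mechanism is a function-pointer table; a simplified
    sketch follows (the real struct outer_cache_fns carries more hooks
    than these three, and the OMAP/SMI hook name here is hypothetical):

```c
#include <stddef.h>

/* Simplified outer-cache operations table. */
struct outer_cache_fns {
    void (*flush_all)(void);
    void (*inv_all)(void);
    void (*disable)(void);
};

static int l2x0_off, smi_off;
static void l2x0_disable(void)     { l2x0_off = 1; }
/* hypothetical platform override, e.g. OMAP going through SMI mode */
static void omap_smi_disable(void) { smi_off = 1; }

static struct outer_cache_fns outer_cache = { .disable = l2x0_disable };

/* kexec path: call whatever disable hook the platform installed. */
static void outer_disable(void)
{
    if (outer_cache.disable)
        outer_cache.disable();
}
```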
     
  • With the PL310 L2 cache controller, cache maintenance by PA and sync
    operations are atomic and do not require a "wait" loop. This patch
    conditionally defines the cache_wait() function.

    Since L2x0 cache controllers do not work with ARMv7 CPUs, the patch
    automatically enables CACHE_PL310 when only CPU_V7 is defined.

    Signed-off-by: Catalin Marinas

    Catalin Marinas
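    The conditional wait can be sketched as follows. In mainline the
    choice is made at compile time with an #ifdef on the CACHE_PL310
    option; a runtime flag is used here purely so both paths can be
    exercised in one program, and the busy-once `readl` mock is an
    assumption of this sketch:

```c
#include <stdint.h>

/* Mock: the maintenance register reads busy exactly once, then idle. */
static int busy = 1, polls;
static uint32_t readl(uint32_t off)
{
    (void)off;
    return busy-- > 0 ? 1 : 0;
}

static int is_pl310;  /* stands in for #ifdef CONFIG_CACHE_PL310 */

/* On L2x0, maintenance by PA must poll until the operation completes;
 * on PL310 the operations are atomic, so the wait compiles away. */
static void cache_wait(uint32_t reg, uint32_t mask)
{
    if (is_pl310)
        return;  /* PL310: nothing to wait for */
    while (readl(reg) & mask)
        polls++;
}
```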
     

29 Jul, 2010

1 commit

  • This patch is in preparation for a subsequent patch which adds barriers
    to the I/O accessors. Since the mandatory barriers may do an L2 cache
    sync, this patch avoids both a recursive call into l2x0_cache_sync()
    via the write*() accessors and wmb(), and a call into
    l2x0_cache_sync() with the l2x0_lock held.

    Signed-off-by: Catalin Marinas
    Signed-off-by: Russell King

    Catalin Marinas
     

09 Jul, 2010

1 commit


18 May, 2010

1 commit


15 May, 2010

1 commit

  • The L310 cache controller's interface is almost identical
    to the L210. One major difference is that the PL310 can
    have up to 16 ways.

    This change uses the cache's part ID and the Associativity
    bits in the AUX_CTRL register to determine the number of ways.

    Also, this version prints out the CACHE_ID and AUX_CTRL registers.

    Acked-by: Will Deacon
    Acked-by: Catalin Marinas
    Signed-off-by: Jason S. McMullan
    Signed-off-by: Russell King

    Jason McMullan
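    The way-count derivation can be sketched from the two register
    values. The part-ID field and the PL310 associativity bit below match
    the published register layouts; the L210 field placement and the
    default case are assumptions of this sketch:

```c
#include <stdint.h>

#define CACHE_ID_PART_MASK (0xf << 6)
#define CACHE_ID_PART_L210 (1 << 6)
#define CACHE_ID_PART_L310 (3 << 6)

/* Number of ways, derived from the CACHE_ID part field and the
 * AUX_CTRL associativity bits, as the commit describes.  Register
 * values are passed in directly; on hardware they are read from the
 * memory-mapped CACHE_ID and AUX_CTRL registers. */
static unsigned int l2x0_ways(uint32_t cache_id, uint32_t aux)
{
    switch (cache_id & CACHE_ID_PART_MASK) {
    case CACHE_ID_PART_L310:
        /* PL310: AUX_CTRL bit 16 selects 16-way; otherwise 8-way */
        return (aux & (1u << 16)) ? 16 : 8;
    case CACHE_ID_PART_L210:
        /* L210: 4-bit associativity field at bits [16:13] */
        return (aux >> 13) & 0xf;
    default:
        return 8;  /* assumed fallback for other L2x0 parts */
    }
}
```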
     

26 Mar, 2010

1 commit


16 Feb, 2010

2 commits


14 Dec, 2009

3 commits


04 Dec, 2009

1 commit


06 Sep, 2008

1 commit


17 Sep, 2007

1 commit


21 Jul, 2007

1 commit


12 Feb, 2007

1 commit