02 May, 2012

1 commit

  • The cacheflush syscall can fail for two reasons:

    (1) The arguments are invalid (nonsensical address range or no VMA)

    (2) The region generates a translation fault on a VIPT or PIPT cache

    This patch allows do_cache_op to return an error code to userspace in
    either of these cases. The various coherent_user_range implementations
    are modified to return 0 for VIVT caches or -EFAULT when an abort
    occurs on v6/v7 cores; the resulting flow is sketched below.

    Reviewed-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Russell King

    Will Deacon
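
    A minimal standalone C sketch of the return-value convention described
    in this commit; the names are illustrative, since the real code is the
    assembly coherent_user_range implementations plus do_cache_op:

        #include <errno.h>

        /* Stand-in for the architecture-specific routine: VIVT variants
         * always return 0, while the v6/v7 versions return -EFAULT if a
         * cache maintenance operation aborts on an unmapped page. */
        static int coherent_user_range_sketch(unsigned long start, unsigned long end)
        {
                return 0;       /* or -EFAULT after an abort on v6/v7 */
        }

        /* do_cache_op-style wrapper: validate the range, then propagate any
         * error code back towards the cacheflush syscall return value. */
        static int do_cache_op_sketch(unsigned long start, unsigned long end)
        {
                if (end < start)
                        return -EINVAL;         /* nonsensical address range */
                /* the real code also fails if no VMA covers the range */
                return coherent_user_range_sketch(start, end);
        }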
     

16 Feb, 2012

1 commit

  • Bootup with lockdep enabled has been broken on v7 since b46c0f74657d
    ("ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR").

    This is because v7_setup (which is called very early during boot) calls
    v7_flush_dcache_all, and the save_and_disable_irqs added by that patch
    ends up attempting to call into lockdep C code (trace_hardirqs_off())
    when we are in no position to execute it (no stack, MMU off).

    Fix this by using a notrace variant of save_and_disable_irqs. The code
    already uses the notrace variant of restore_irqs.

    Reviewed-by: Nicolas Pitre
    Acked-by: Stephen Boyd
    Cc: Catalin Marinas
    Cc: stable@vger.kernel.org
    Signed-off-by: Rabin Vincent
    Signed-off-by: Russell King

    Rabin Vincent
     

10 Feb, 2012

1 commit

  • armv7's flush_cache_all() flushes caches via set/way. To
    determine the cache attributes (line size, number of sets,
    etc.) the assembly first writes the CSSELR register to select a
    cache level and then reads the CCSIDR register. The CSSELR register
    is banked per-cpu and is used to determine which cache level CCSIDR
    reads. If the task is migrated between the CSSELR write and the CCSIDR
    read, the CCSIDR value may be for an unexpected cache level (for
    example, L1 instead of L2) and incorrect cache flushing could occur.

    Disable interrupts across the write and read so that the correct
    cache attributes are read and used for the cache flushing
    routine. We disable interrupts instead of disabling preemption
    because the critical section is only 3 instructions and we want
    to call v7_dcache_flush_all from __v7_setup which doesn't have a
    full kernel stack with a struct thread_info.

    This fixes a problem we see in scm_call() when flush_cache_all()
    is called from preemptible context and sometimes the L2 cache is
    not properly flushed out.

    Signed-off-by: Stephen Boyd
    Acked-by: Catalin Marinas
    Reviewed-by: Nicolas Pitre
    Cc: stable@vger.kernel.org
    Signed-off-by: Russell King

    Stephen Boyd
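
    A rough C illustration of the window being closed (the actual fix is in
    the v7_flush_dcache_all assembly); the helper name is made up:

        #include <linux/irqflags.h>
        #include <linux/types.h>

        /* Select a cache level in CSSELR and read back its CCSIDR.  IRQs
         * are disabled across the pair so the task cannot be migrated (and
         * the per-CPU banked CSSELR cannot change) between the two
         * accesses. */
        static u32 read_ccsidr_sketch(unsigned int level, bool icache)
        {
                unsigned long flags;
                u32 csselr = (level << 1) | (icache ? 1 : 0);
                u32 ccsidr;

                local_irq_save(flags);
                asm volatile("mcr p15, 2, %0, c0, c0, 0" : : "r" (csselr)); /* write CSSELR */
                asm volatile("isb");                                        /* make the selection visible */
                asm volatile("mrc p15, 1, %0, c0, c0, 0" : "=r" (ccsidr));  /* read CCSIDR */
                local_irq_restore(flags);

                return ccsidr;
        }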
     

17 Sep, 2011

1 commit

  • This patch implements a workaround for erratum 764369 affecting
    Cortex-A9 MPCore with two or more processors (all current revisions).
    Under certain timing circumstances, a data cache line maintenance
    operation by MVA targeting an Inner Shareable memory region may fail to
    proceed up to either the Point of Coherency or to the Point of
    Unification of the system. This workaround adds a DSB instruction before
    the relevant cache maintenance functions and sets a specific bit in the
    diagnostic control register of the SCU.

    Cc:
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Tested-by: Mark Rutland
    Signed-off-by: Russell King

    Will Deacon
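
    The per-operation part of the workaround boils down to a DSB issued
    immediately before the affected maintenance-by-MVA operation; a hedged
    C/inline-asm illustration (the real change adds the barrier in the v7
    cache maintenance assembly, and the SCU diagnostic control bit is set
    separately during SCU setup):

        /* Clean one data cache line by MVA to the PoC with the erratum
         * 764369 barrier in front of it.  Illustrative only. */
        static inline void clean_dcache_line_764369(void *addr)
        {
                asm volatile("dsb" ::: "memory");           /* erratum 764369 workaround */
                asm volatile("mcr p15, 0, %0, c7, c10, 1"   /* DCCMVAC: clean by MVA to PoC */
                             : : "r" (addr) : "memory");
        }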
     

07 Jul, 2011

1 commit


26 May, 2011

1 commit

  • The v6 and v7 implementations of flush_kern_dcache_area do not align
    the passed MVA to the size of a cacheline in the data cache. If a
    misaligned address is used, only a subset of the requested area will
    be flushed. This has been observed to cause failures in SMP boot where
    the secondary_data initialised by the primary CPU is not cacheline
    aligned, causing the secondary CPUs to read incorrect values for their
    pgd and stack pointers.

    This patch ensures that the base address is cacheline aligned before
    flushing the d-cache.

    Cc:
    Acked-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Russell King

    Will Deacon
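
    A small C sketch of the fix; CACHE_LINE_SIZE stands in for the line
    size that the real v6/v7 assembly derives from CTR:

        #include <stddef.h>

        #define CACHE_LINE_SIZE 64      /* placeholder; the real code reads CTR */

        static void flush_dcache_area_sketch(void *p, size_t size)
        {
                unsigned long addr = (unsigned long)p;
                unsigned long end  = addr + size;

                addr &= ~(CACHE_LINE_SIZE - 1UL);       /* round down to a line boundary */
                for (; addr < end; addr += CACHE_LINE_SIZE)
                        asm volatile("mcr p15, 0, %0, c7, c14, 1"   /* DCCIMVAC: clean+invalidate by MVA */
                                     : : "r" (addr) : "memory");
                asm volatile("dsb" ::: "memory");       /* complete the maintenance */
        }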
     

31 Mar, 2011

1 commit


13 Dec, 2010

1 commit

  • The current implementation of the v7_coherent_*_range function assumes
    that the D and I cache lines have the same size, which is incorrect
    architecturally. This patch adds the icache_line_size macro which reads
    the CTR register. The main loop in v7_coherent_*_range is split into
    two independent loops for the D and I caches. This also has the
    performance advantage of moving the DSB outside the main loop.

    Reported-by: Kevin Sapp
    Signed-off-by: Catalin Marinas
    Signed-off-by: Russell King

    Catalin Marinas
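
    In C, the derivation performed by the icache_line_size macro (and its
    existing D-cache counterpart) looks roughly like this, where CTR[3:0]
    is IminLine and CTR[19:16] is DminLine, each the log2 of the line size
    in words; the function name is illustrative:

        /* Read CTR once and derive the I- and D-cache line sizes in bytes. */
        static void cache_line_sizes_sketch(unsigned int *dline, unsigned int *iline)
        {
                unsigned int ctr;

                asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr)); /* read CTR */
                *iline = 4 << (ctr & 0xf);          /* IminLine: log2(words) -> bytes */
                *dline = 4 << ((ctr >> 16) & 0xf);  /* DminLine: log2(words) -> bytes */
        }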
     

05 Oct, 2010

2 commits

  • Implement flush_icache_all for ARMv6 and 7 by adding it to cache_fns.
    As flush_icache_all may need to be called from flush_kern_cache_all,
    add it as the first entry in cache_fns.

    Note that the ARM_ERRATA_411920 dependency on !SMP can now be removed,
    so the erratum workaround can be selected on UP ARMv6 processors such
    as omap2.

    Signed-off-by: Tony Lindgren
    Signed-off-by: Anand Gadiyar
    Signed-off-by: Russell King

    Tony Lindgren
     
  • UP systems do not implement all the instructions that SMP systems have,
    so in order to boot an SMP kernel on a UP system, we need to rewrite
    parts of the kernel.

    Do this using an 'alternatives' scheme, where the kernel code and data
    is modified prior to initialization to replace the SMP instructions,
    thereby rendering the problematical code ineffectual. We use the linker
    to generate a list of 32-bit word locations and their replacement values,
    and run through these replacements when we detect a UP system.

    Signed-off-by: Russell King

    Russell King
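
    A C-level sketch of the patching pass described above, with made-up
    names; in the kernel the table is emitted into a dedicated section by
    assembler/linker macros and walked very early during boot when a UP
    system is detected:

        #include <linux/types.h>

        /* One entry per SMP-only word: where it lives and what a UP system
         * should see there instead. */
        struct up_fixup {
                u32     *addr;          /* location of the SMP word */
                u32     up_value;       /* replacement value for UP */
        };

        /* Table bounds provided by the linker script (illustrative names). */
        extern struct up_fixup __up_fixup_begin[], __up_fixup_end[];

        static void apply_up_fixups(void)
        {
                struct up_fixup *f;

                for (f = __up_fixup_begin; f < __up_fixup_end; f++)
                        *f->addr = f->up_value;     /* rewrite the word for UP */
        }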
     

21 May, 2010

1 commit


08 May, 2010

1 commit


15 Feb, 2010

3 commits


14 Dec, 2009

1 commit


07 Oct, 2009

1 commit


24 Jul, 2009

1 commit


06 Nov, 2008

1 commit


01 Sep, 2008

1 commit