02 May, 2012
1 commit
-
The cacheflush syscall can fail for two reasons:
(1) The arguments are invalid (nonsensical address range or no VMA)
(2) The region generates a translation fault on a VIPT or PIPT cache
This patch allows do_cache_op to return an error code to userspace in
the case of the above. The various coherent_user_range implementations
are modified to return 0 in the case of VIVT caches or -EFAULT in the
case of an abort on v6/v7 cores.
Reviewed-by: Catalin Marinas
Signed-off-by: Will Deacon
Signed-off-by: Russell King
16 Feb, 2012
1 commit
-
Bootup with lockdep enabled has been broken on v7 since b46c0f74657d
("ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR").
This is because v7_setup (which is called very early during boot) calls
v7_flush_dcache_all, and the save_and_disable_irqs added by that patch
ends up attempting to call into lockdep C code (trace_hardirqs_off())
when we are in no position to execute it (no stack, MMU off).
Fix this by using a notrace variant of save_and_disable_irqs. The code
already uses the notrace variant of restore_irqs.
Reviewed-by: Nicolas Pitre
Acked-by: Stephen Boyd
Cc: Catalin Marinas
Cc: stable@vger.kernel.org
Signed-off-by: Rabin Vincent
Signed-off-by: Russell King
10 Feb, 2012
1 commit
-
ARMv7's flush_cache_all() flushes caches via set/way. To
determine the cache attributes (line size, number of sets,
etc.) the assembly first writes the CSSELR register to select a
cache level and then reads the CCSIDR register. The CSSELR register
is banked per-cpu and is used to determine which cache level CCSIDR
reads. If the task is migrated between when the CSSELR is written and
the CCSIDR is read the CCSIDR value may be for an unexpected cache
level (for example L1 instead of L2) and incorrect cache flushing
could occur.
Disable interrupts across the write and read so that the correct
cache attributes are read and used for the cache flushing
routine. We disable interrupts instead of disabling preemption
because the critical section is only 3 instructions and we want
to call v7_flush_dcache_all from __v7_setup which doesn't have a
full kernel stack with a struct thread_info.
This fixes a problem we see in scm_call() when flush_cache_all()
is called from preemptible context and sometimes the L2 cache is
not properly flushed out.
Signed-off-by: Stephen Boyd
Acked-by: Catalin Marinas
Reviewed-by: Nicolas Pitre
Cc: stable@vger.kernel.org
Signed-off-by: Russell King
17 Sep, 2011
1 commit
-
This patch implements a workaround for erratum 764369 affecting
Cortex-A9 MPCore with two or more processors (all current revisions).
Under certain timing circumstances, a data cache line maintenance
operation by MVA targeting an Inner Shareable memory region may fail to
proceed up to either the Point of Coherency or to the Point of
Unification of the system. This workaround adds a DSB instruction before
the relevant cache maintenance functions and sets a specific bit in the
diagnostic control register of the SCU.
Cc:
Signed-off-by: Will Deacon
Signed-off-by: Catalin Marinas
Tested-by: Mark Rutland
Signed-off-by: Russell King
07 Jul, 2011
1 commit
-
Signed-off-by: Dave Martin
26 May, 2011
1 commit
-
The v6 and v7 implementations of flush_kern_dcache_area do not align
the passed MVA to the size of a cacheline in the data cache. If a
misaligned address is used, only a subset of the requested area will
be flushed. This has been observed to cause failures in SMP boot where
the secondary_data initialised by the primary CPU is not cacheline
aligned, causing the secondary CPUs to read incorrect values for their
pgd and stack pointers.
This patch ensures that the base address is cacheline aligned before
flushing the d-cache.
Cc:
Acked-by: Catalin Marinas
Signed-off-by: Will Deacon
Signed-off-by: Russell King
31 Mar, 2011
1 commit
-
Fixes generated by 'codespell' and manually reviewed.
Signed-off-by: Lucas De Marchi
13 Dec, 2010
1 commit
-
The current implementation of the v7_coherent_*_range function assumes
that the D and I cache lines have the same size, which is incorrect
architecturally. This patch adds the icache_line_size macro which reads
the CTR register. The main loop in v7_coherent_*_range is split in two
independent loops for the D and I caches. This also has the performance
advantage that the DSB is moved outside the main loop.
Reported-by: Kevin Sapp
Signed-off-by: Catalin Marinas
Signed-off-by: Russell King
05 Oct, 2010
2 commits
-
Do this by adding flush_icache_all to cache_fns for ARMv6 and 7.
As flush_icache_all may need to be called from flush_kern_cache_all,
add it as the first entry in the cache_fns.
Note that now we can remove the ARM_ERRATA_411920 dependency
on !SMP so it can be selected on UP ARMv6 processors, such
as omap2.
Signed-off-by: Tony Lindgren
Signed-off-by: Anand Gadiyar
Signed-off-by: Russell King
-
UP systems do not implement all the instructions that SMP systems have,
so in order to boot an SMP kernel on a UP system, we need to rewrite
parts of the kernel.
Do this using an 'alternatives' scheme, where the kernel code and data
is modified prior to initialization to replace the SMP instructions,
thereby rendering the problematical code ineffectual. We use the linker
to generate a list of 32-bit word locations and their replacement values,
and run through these replacements when we detect a UP system.
Signed-off-by: Russell King
21 May, 2010
1 commit
-
This patch fixes the flush_cache_all for ARMv7 SMP. It was
missing from commit b8349b569aae661dea9d59d7d2ee587ccea3336c.
Signed-off-by: Santosh Shilimkar
Acked-by: Catalin Marinas
Cc:
Signed-off-by: Russell King
08 May, 2010
1 commit
-
The standard I-cache Invalidate All (ICIALLU) and Branch Predictor
Invalidate All (BPIALL) operations are not automatically broadcast to
the other CPUs in an ARMv7 MP system. The patch adds the Inner Shareable
variants, ICIALLUIS and BPIALLIS, if ARMv7 and SMP.
Signed-off-by: Catalin Marinas
Signed-off-by: Russell King
15 Feb, 2010
3 commits
-
ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes
DMA cache coherency handling slightly more interesting. Rather than
being able to rely upon the CPU not accessing the DMA buffer until DMA
has completed, we now must expect that the cache could be loaded with
possibly stale data from the DMA buffer.
Where DMA involves data being transferred to the device, we clean the
cache before handing it over for DMA, otherwise we invalidate the buffer
to get rid of potential writebacks. On DMA completion, if data was
transferred from the device, we invalidate the buffer to get rid of
any stale speculative prefetches.
Signed-off-by: Russell King
Tested-By: Santosh Shilimkar
-
These are now unused, and so can be removed.
Signed-off-by: Russell King
Tested-By: Santosh Shilimkar
-
Signed-off-by: Russell King
Tested-By: Santosh Shilimkar
14 Dec, 2009
1 commit
-
... and rename the function since it no longer operates on just
pages.
Signed-off-by: Russell King
07 Oct, 2009
1 commit
-
This is needed because applications using the sys_cacheflush system call
can pass a memory range which isn't mapped yet even though the
corresponding vma is valid. The patch also adds unwinding annotations
for correct backtraces from the coherent_user_range() functions.
Signed-off-by: Catalin Marinas
Signed-off-by: Russell King
24 Jul, 2009
1 commit
-
This patch adds the ARM/Thumb-2 unified support to the arch/arm/mm/*
files.
Signed-off-by: Catalin Marinas
06 Nov, 2008
1 commit
-
The flush_cache_all function on ARMv7 is implemented as a series of
cache operations by set/way. These are not guaranteed to be ordered with
previous memory accesses, requiring a DMB. This patch also adds barriers
for the TLB operations in compressed/head.S
Signed-off-by: Catalin Marinas
01 Sep, 2008
1 commit
-
This declaration specifies the "function" type and size for various
assembly functions, mainly needed for generating the correct branch
instructions in Thumb-2.
Signed-off-by: Catalin Marinas
Signed-off-by: Russell King