26 Jan, 2019
1 commit
-
[ Upstream commit 33309ecda0070506c49182530abe7728850ebe78 ]
The dcache_by_line_op macro suffers from a couple of small problems:
First, the GAS directives that are currently being used rely on
assembler behavior that is not documented, and probably not guaranteed
to produce the correct behavior going forward. As a result, we end up
with some undefined symbols in cache.o:

  $ nm arch/arm64/mm/cache.o
  ...
           U civac
  ...
           U cvac
           U cvap
           U cvau

This is due to the fact that the comparisons used to select the
operation type in the dcache_by_line_op macro are comparing symbols
not strings, and even though it seems that GAS is doing the right
thing here (undefined symbols by the same name are equal to each
other), it seems unwise to rely on this.

Second, when patching in a DC CVAP instruction on CPUs that support it,
the fallback path consists of a DC CVAU instruction which may be
affected by CPU errata that require ARM64_WORKAROUND_CLEAN_CACHE.

Solve these issues by unrolling the various maintenance routines and
using the conditional directives that are documented as operating on
strings. To avoid the complexity of nested alternatives, we move the
DC CVAP patching to __clean_dcache_area_pop, falling back to a branch
to __clean_dcache_area_poc if DCPOP is not supported by the CPU.

Reported-by: Ard Biesheuvel
Suggested-by: Robin Murphy
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
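
For illustration, the difference between the two directive styles looks
roughly like this (a minimal sketch, not the upstream macro itself):

  /* '==' compares symbols; an undefined 'cvau' symbol can end up
     in the object file, as seen in the nm output above. */
  .macro dcache_op_by_symbol op, addr
  .if (\op == cvau)
  	dc	cvau, \addr
  .endif
  .endm

  /* .ifc is documented to compare its two operands as strings. */
  .macro dcache_op_by_string op, addr
  .ifc \op, cvau
  	dc	cvau, \addr
  .endif
  .endm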
17 Feb, 2018
2 commits
-
Commit 6b88a32c7af6 upstream.
With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the
active ASID to decide whether user access was enabled (non-zero ASID)
when the exception was taken. On return from exception, if user access
was previously disabled, it re-instates TTBR0_EL1 from the per-thread
saved value (updated in switch_mm() or efi_set_pgd()).

Commit 7655abb95386 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes the
TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e75711
("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the
__uaccess_ttbr0_disable() function and asm macro to first write the
reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an
exception occurs between these two, the exception return code will
re-instate a valid TTBR0_EL1. A similar scenario can happen in
cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID
update in cpu_do_switch_mm().

This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and
disables the interrupts around the TTBR0_EL1 and ASID switching code in
__uaccess_ttbr0_disable(). It also ensures that, when returning from the
EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in
TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}.

The accesses to current_thread_info()->ttbr0 are updated to use
READ_ONCE/WRITE_ONCE.

As a safety measure, __uaccess_ttbr0_enable() always masks out any
existing non-zero ASID in TTBR1_EL1 before writing in the new ASID.

Fixes: 27a921e75711 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
Acked-by: Will Deacon
Reported-by: Ard Biesheuvel
Tested-by: Ard Biesheuvel
Reviewed-by: James Morse
Tested-by: James Morse
Co-developed-by: Marc Zyngier
Signed-off-by: Catalin Marinas
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
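
As a rough sketch of the fixed disable path (register use and helper
names follow the upstream style but are not copied verbatim):

  .macro uaccess_ttbr0_disable, tmp1, tmp2
  	save_and_disable_irq \tmp2	// no exception between the two writes
  	mrs	\tmp1, ttbr1_el1
  	bic	\tmp1, \tmp1, #TTBR_ASID_MASK
  	sub	\tmp1, \tmp1, #RESERVED_TTBR0_SIZE // reserved_ttbr0 sits
  	msr	ttbr0_el1, \tmp1	// just before swapper_pg_dir
  	isb
  	add	\tmp1, \tmp1, #RESERVED_TTBR0_SIZE
  	msr	ttbr1_el1, \tmp1	// ASID = 0 in TTBR1_EL1
  	isb
  	restore_irq \tmp2
  .endm
-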
Commit 27a921e75711 upstream.
With the ASID now installed in TTBR1, we can re-enable ARM64_SW_TTBR0_PAN
by ensuring that we switch to a reserved ASID of zero when disabling
user access and restore the active user ASID on the uaccess enable path.

Reviewed-by: Mark Rutland
Tested-by: Laura Abbott
Tested-by: Shanker Donthineni
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
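
Conceptually, the enable path is the mirror image (a sketch; assume x0
holds the saved user TTBR0_EL1 and x1 the TTBR1_EL1 value carrying the
user ASID):

  msr	ttbr1_el1, x1		// restore the active user ASID first
  isb
  msr	ttbr0_el1, x0		// then the user page tables
  isb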
09 Aug, 2017
2 commits
-
Add a clean-to-point-of-persistence cache maintenance helper, and wire
up the basic architectural support for the pmem driver based on it.

Reviewed-by: Will Deacon
Signed-off-by: Robin Murphy
[catalin.marinas@arm.com: move arch_*_pmem() functions to arch/arm64/mm/flush.c]
[catalin.marinas@arm.com: change dmb(sy) to dmb(osh)]
Signed-off-by: Catalin Marinas
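
The resulting helper is essentially a thin wrapper around the common
line-by-line macro (sketch, with DC CVAP selected by the 'cvap'
operation):

  /*
   *	__clean_dcache_area_pop(kaddr, size)
   *
   *	Clean the D-cache lines for [kaddr, kaddr + size) to the
   *	Point of Persistence.
   */
  ENTRY(__clean_dcache_area_pop)
  	dcache_by_line_op cvap, sy, x0, x1, x2, x3
  	ret
  ENDPIPROC(__clean_dcache_area_pop)
-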
__inval_cache_range() is already the odd one out among our data cache
maintenance routines as the only remaining range-based one; as we're
going to want an invalidation routine to call from C code for the pmem
API, let's tweak the prototype and name to bring it in line with the
clean operations, and to make its relationship with __dma_inv_area()
neatly mirror that of __clean_dcache_area_poc() and __dma_clean_area().
The loop clearing the early page tables gets mildly massaged in the
process for the sake of consistency.

Reviewed-by: Will Deacon
Signed-off-by: Robin Murphy
Signed-off-by: Catalin Marinas
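
After the change, the invalidation routine mirrors the clean routines
(a sketch close to the upstream shape; details may differ):

  /*
   *	__inval_dcache_area(kaddr, size)
   */
  ENTRY(__inval_dcache_area)
  	/* FALLTHROUGH */
  __dma_inv_area:
  	add	x1, x1, x0		// size -> end address
  	dcache_line_size x2, x3
  	sub	x3, x2, #1
  	tst	x1, x3			// end cache line aligned?
  	bic	x1, x1, x3
  	b.eq	1f
  	dc	civac, x1		// clean & invalidate D / U line
  1:	tst	x0, x3			// start cache line aligned?
  	bic	x0, x0, x3
  	b.eq	2f
  	dc	civac, x0		// clean & invalidate D / U line
  	b	3f
  2:	dc	ivac, x0		// invalidate D / U line
  3:	add	x0, x0, x2
  	cmp	x0, x1
  	b.lo	2b
  	dsb	sy
  	ret
  ENDPIPROC(__inval_dcache_area)
  ENDPROC(__dma_inv_area)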
27 Dec, 2016
1 commit
-
Split asm-only parts of arm64 uaccess.h into a new header and use that
from *.S.

Signed-off-by: Al Viro
25 Dec, 2016
1 commit
-
This was entirely automated, using the script by Al:
  PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
  sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
        $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.
Requested-by: Al Viro
Signed-off-by: Linus Torvalds
22 Nov, 2016
1 commit
-
When the TTBR0 PAN feature is enabled, the kernel entry points need to
disable access to TTBR0_EL1. The PAN status of the interrupted context
is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
Restoring access to TTBR0_EL1 is done on exception return if returning
to user or returning to a context where PAN was disabled.

Context switching via switch_mm() must defer the update of TTBR0_EL1
until a return to user or an explicit uaccess_enable() call.

Special care needs to be taken for two cases where TTBR0_EL1 is set
outside the normal kernel context switch operation: EFI run-time
services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
Code has been added to avoid deferred TTBR0_EL1 switching as in
switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
special TTBR0_EL1.

User cache maintenance (user_cache_maint_handler and
__flush_cache_user_range) needs the TTBR0_EL1 re-instated since the
operations are performed by user virtual address.

This patch also removes a stale comment on the switch_mm() function.
Cc: Will Deacon
Cc: James Morse
Cc: Kees Cook
Cc: Mark Rutland
Signed-off-by: Catalin Marinas
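
The entry-side check works along these lines (a simplified sketch; the
real code sits inside an alternative and uses the kernel's own entry
macros):

  mrs	x21, ttbr0_el1
  tst	x21, #0xffff << 48	// reserved (zero) ASID?
  orr	x23, x23, #PSR_PAN_BIT	// assume PAN set in the saved pstate
  b.eq	1f			// uaccess was already disabled
  and	x23, x23, #~PSR_PAN_BIT	// it was enabled: clear the bit
  1: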
22 Aug, 2016
1 commit
-
__dma_* routines have been converted to use start and size instead of
start and end addresses. The patch was originally for adding
__clean_dcache_area_poc() which will be used in pmem driver to clean
dcache to the PoC (Point of Coherency) in arch_wb_cache_pmem().

The functionality of __clean_dcache_area_poc() was equivalent to
__dma_clean_range(). The difference was __dma_clean_range() uses the end
address, but __clean_dcache_area_poc() uses the size to clean.

Thus, __clean_dcache_area_poc() has been revised to fall through to
__dma_clean_range() after the change that the __dma_* routines use
start and size instead of start and end.

As a consequence of using start and size, the name of the __dma_* routines
has also been altered following the terminology below:
area: takes a start and size
range: takes a start and end

Reviewed-by: Robin Murphy
Signed-off-by: Kwangwoo Lee
Signed-off-by: Will Deacon
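
With the area convention in place, the shared bodies reduce to wrappers
around the common macro, roughly (sketch; the errata alternative is
omitted for brevity):

  /* area:  x0 = start, x1 = size */
  ENTRY(__clean_dcache_area_poc)
  	/* FALLTHROUGH */
  __dma_clean_area:
  	dcache_by_line_op cvac, sy, x0, x1, x2, x3
  	ret
  ENDPROC(__clean_dcache_area_poc)
  ENDPROC(__dma_clean_area)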
01 Jul, 2016
1 commit
-
The ARM errata 819472, 826319, 827319 and 824069 for affected
Cortex-A53 cores require promoting "dc cvau" instructions to
"dc civac" as well.
Attribute the usage of the instruction in __flush_cache_user_range
to also be covered by our alternative patching efforts.
For that we introduce an assembly macro which deals with alternatives
while still tagging the instructions as USER.

Signed-off-by: Andre Przywara
Signed-off-by: Catalin Marinas
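
The macro pairs an alternative-patched instruction with an exception
table entry, roughly (close to the upstream shape):

  .macro user_alt, label, oldinstr, newinstr, cond
  9999:	alternative_insn "\oldinstr", "\newinstr", \cond
  	_asm_extable 9999b, \label
  .endm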
28 Apr, 2016
1 commit
-
To allow the assembler macros defined in arch/arm64/mm/proc-macros.S to
be used outside the mm code, move the contents of proc-macros.S to
asm/assembler.h. Also, delete proc-macros.S and fix up all references
to proc-macros.S.

Signed-off-by: Geoff Levand
Acked-by: Pavel Machek
[rebased, included dcache_by_line_op]
Signed-off-by: James Morse
Acked-by: Catalin Marinas
Signed-off-by: Will Deacon
17 Dec, 2015
1 commit
-
In systems with three levels of cache (PoU at L1 and PoC at L3),
PoC cache flush instructions flush the L2 and L3 caches, which could
affect performance.
For cache flushes for I and D coherency, PoU should suffice.
So all I and D coherency related cache flushes are changed to PoU.

Introduced a new __clean_dcache_area_pou API for dcache flush to PoU
and provided a common macro for __flush_dcache_area and
__clean_dcache_area_pou.

Also, now in __sync_icache_dcache, icache invalidation for non-aliasing
VIPT icache is done only for that particular page instead of the earlier
__flush_icache_all.

Reviewed-by: Catalin Marinas
Reviewed-by: Mark Rutland
Signed-off-by: Ashok Kumar
Signed-off-by: Will Deacon
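
The new helper is then a thin wrapper around the common macro (sketch):

  /*
   *	__clean_dcache_area_pou(kaddr, size)
   *
   *	Clean the D-cache lines for [kaddr, kaddr + size) to the
   *	Point of Unification.
   */
  ENTRY(__clean_dcache_area_pou)
  	dcache_by_line_op cvau, ish, x0, x1, x2, x3
  	ret
  ENDPROC(__clean_dcache_area_pou)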
12 Oct, 2015
1 commit
-
For more control over which functions are called with the MMU off or
with the UEFI 1:1 mapping active, annotate some assembler routines as
position independent. This is done by introducing ENDPIPROC(), which
replaces the ENDPROC() declaration of those routines.

Signed-off-by: Ard Biesheuvel
Signed-off-by: Catalin Marinas
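
ENDPIPROC() aliases the routine under a __pi_ prefix so that
position-independent callers can reference it directly; the macro looks
along these lines (sketch of the header definition):

  #define ENDPIPROC(x)			\
  	.globl	__pi_##x;		\
  	.type	__pi_##x, %function;	\
  	.set	__pi_##x, x;		\
  	.size	__pi_##x, . - x;	\
  	ENDPROC(x)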
27 Jul, 2015
1 commit
-
Convert the dynamic patching for ARM64_WORKAROUND_CLEAN_CACHE over to
the newly added alternative assembler macros.

Signed-off-by: Daniel Thompson
Signed-off-by: Will Deacon
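
In cache.S this amounts to something like (illustrative operands):

  alternative_insn "dc cvau, x4", "dc civac, x4", ARM64_WORKAROUND_CLEAN_CACHE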
05 Jun, 2015
1 commit
-
asm/alternative-asm.h and asm/alternative.h are extremely similar,
and really deserve to live in the same file (as this makes further
modifications a bit easier).

Fold the content of alternative-asm.h into alternative.h, and
update the few users.

Acked-by: Will Deacon
Signed-off-by: Marc Zyngier
Signed-off-by: Catalin Marinas
19 May, 2015
1 commit
-
The documented semantics of flush_cache_all are not possible to provide
for arm64 (short of flushing the entire physical address space by VA),
and there are currently no users; KVM uses VA maintenance exclusively,
cpu_reset is never called, and the only two users outside of arch code
cannot be built for arm64.

While cpu_soft_reset and related functions (which call flush_cache_all)
were thought to be useful for kexec, their current implementations only
serve to mask bugs. For correctness kexec will need to perform
maintenance by VA anyway to account for system caches, line migration,
and other subtleties of the cache architecture. As the extent of this
cache maintenance will be kexec-specific, it should probably live in the
kexec code.

This patch removes flush_cache_all, and related unused components,
preventing further abuse.

Signed-off-by: Mark Rutland
Cc: AKASHI Takahiro
Cc: Geoff Levand
Acked-by: Ard Biesheuvel
Acked-by: Catalin Marinas
Acked-by: Lorenzo Pieralisi
Acked-by: Marc Zyngier
Acked-by: Will Deacon
Signed-off-by: Catalin Marinas
01 Dec, 2014
1 commit
-
Update handling of cacheflush syscall with changes made in arch/arm
counterpart:
- return error to userspace when flushing syscall fails
- split user cache-flushing into interruptible chunks
- don't bother rounding to nearest vma

Signed-off-by: Vladimir Murzin
[will: changed internal return value from -EINTR to 0 to match arch/arm/]
Signed-off-by: Will Deacon
25 Nov, 2014
1 commit
-
The ARM errata 819472, 826319, 827319 and 824069 define the same
workaround for these hardware issues in certain Cortex-A53 parts.
Use the new alternatives framework and the CPU MIDR detection to
patch "cache clean" into "cache clean and invalidate" instructions if
an affected CPU is detected at runtime.

Signed-off-by: Andre Przywara
[will: add __maybe_unused to squash gcc warning]
Signed-off-by: Will Deacon
10 May, 2014
1 commit
-
In order to ensure ordering and completion of inner-shareable maintenance
instructions (cache and TLB) on AArch64, we can use the -ish suffix to
the dmb and dsb instructions respectively.

This patch updates our low-level cache and tlb maintenance routines to
use the inner-shareable barrier variants where appropriate.

Acked-by: Catalin Marinas
Signed-off-by: Will Deacon
Signed-off-by: Catalin Marinas
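
For VA-based maintenance that only needs to be observed within the
inner-shareable domain, this means, for example:

  dc	cvau, x0		// clean D line to PoU
  dsb	ish			// complete it inner-shareable, not full-system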
08 Apr, 2014
1 commit
-
If the buffer needing cache invalidation for inbound DMA does not start
or end on a cache line aligned address, we need to use the
non-destructive clean&invalidate operation. This issue was introduced by
commit 7363590d2c46 (arm64: Implement coherent DMA API based on swiotlb).

Signed-off-by: Catalin Marinas
Reported-by: Jon Medhurst (Tixy)
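
At an unaligned edge the loop must use "dc civac" instead of "dc ivac"
so that data sharing the cache line is written back rather than
discarded, along these lines (sketch of one edge):

  tst	x1, x3			// end cache line aligned?
  bic	x1, x1, x3
  b.eq	1f
  dc	civac, x1		// clean & invalidate preserves the neighbour
  1: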
05 Apr, 2014
1 commit
-
With system caches for the host OS or architected caches for guest OS we
cannot easily guarantee that there are no dirty or stale cache lines for
the areas of memory written by the kernel during boot with the MMU off
(therefore non-cacheable accesses).

This patch adds the necessary cache maintenance during boot and relaxes
the booting requirements.

Signed-off-by: Catalin Marinas
04 Mar, 2014
1 commit
-
Currently we flush the entire dcache at boot within __cpu_setup, but
this is unnecessary as the booting protocol demands that the dcache is
invalid and off upon entering the kernel. The presence of the cache
flush only serves to hide bugs in bootloaders, and is not safe in the
presence of SMP.

In an SMP boot scenario the CPUs enter coherency outside of the kernel,
and the primary CPU enables its caches before bringing up secondary
CPUs. Therefore if any secondary CPU has an entry in its cache (in
violation of the boot protocol), the primary CPU might snoop it even if
the secondary CPU's cache is disabled. The boot-time cache flush only
serves to hide a firmware bug, and slows down a cpu boot unnecessarily.

This patch removes the unnecessary boot-time cache flush.
Signed-off-by: Mark Rutland
Acked-by: Will Deacon
[catalin.marinas@arm.com: make __flush_dcache_all local only]
Signed-off-by: Catalin Marinas
28 Feb, 2014
1 commit
-
This patch adds support for DMA API cache maintenance on SoCs without
hardware device cache coherency.

Signed-off-by: Catalin Marinas
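
The helpers walk the buffer by cache line; the clean variant looks
roughly like this (sketch; x0 = start, x1 = end):

  __dma_clean_range:
  	dcache_line_size x2, x3
  	sub	x3, x2, #1
  	bic	x0, x0, x3
  1:	dc	cvac, x0		// clean D / U line to PoC
  	add	x0, x0, x2
  	cmp	x0, x1
  	b.lo	1b
  	dsb	sy
  	ret
  ENDPROC(__dma_clean_range)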
23 Jan, 2014
1 commit
-
Fix the function name in the comment of __flush_dcache_area, because
__flush_dcache_area is the correct name. Also, add the missing 'size'
argument to the comment.

Signed-off-by: Jingoo Han
Signed-off-by: Catalin Marinas
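
The corrected comment reads along these lines (sketch):

  /*
   *	__flush_dcache_area(kaddr, size)
   *
   *	Ensure that the data held in the page kaddr is written back
   *	to the page in question.
   */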
14 May, 2013
1 commit
-
The format of the lower 32-bits of the 64-bit operand to 'dc cisw' is
unchanged from ARMv7 architecture and the upper bits are RES0. This
implies that the 'way' field of the operand of 'dc cisw' occupies the
bit-positions [31 .. (32-A)]. Due to the use of 64-bit extended operands
to 'clz', the existing implementation of __flush_dcache_all is incorrectly
placing the 'way' field in the bit-positions [63 .. (64-A)].

Signed-off-by: Sukanto Ghosh
Tested-by: Anup Patel
Signed-off-by: Catalin Marinas
Cc: stable@vger.kernel.org
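
Keeping the computation in 32-bit registers places the way field in
bits [31 .. (32-A)] of the operand, roughly:

  clz	w5, w4			// 32-bit clz: position of the way increment
  lsl	w6, w9, w5		// way number into the top of the low word
  orr	w11, w10, w6		// combine with the cache level
  dc	cisw, x11		// clean & invalidate by set/way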
17 Sep, 2012
1 commit
-
The patch adds functionality required for cache maintenance. The AArch64
architecture mandates non-aliasing VIPT or PIPT D-cache and VIPT (may
have aliases) or ASID-tagged VIVT I-cache. Cache maintenance operations
are automatically broadcast in hardware between CPUs.

Signed-off-by: Will Deacon
Signed-off-by: Catalin Marinas
Acked-by: Tony Lindgren
Acked-by: Nicolas Pitre
Acked-by: Olof Johansson
Acked-by: Santosh Shilimkar
Acked-by: Arnd Bergmann
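
A representative routine from the original file, cleaning the D-cache
and invalidating the I-cache to the PoU for a [start, end) range
(sketch of the shape, not a verbatim copy):

  ENTRY(__flush_cache_user_range)
  	dcache_line_size x2, x3
  	sub	x3, x2, #1
  	bic	x4, x0, x3
  1:	dc	cvau, x4		// clean D line to PoU
  	add	x4, x4, x2
  	cmp	x4, x1
  	b.lo	1b
  	dsb	sy
  	icache_line_size x2, x3
  	sub	x3, x2, #1
  	bic	x4, x0, x3
  2:	ic	ivau, x4		// invalidate I line to PoU
  	add	x4, x4, x2
  	cmp	x4, x1
  	b.lo	2b
  	dsb	sy
  	isb
  	ret
  ENDPROC(__flush_cache_user_range)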