15 Jan, 2021
1 commit
-
__apply_alternatives makes indirect calls to functions whose address is
taken in assembly code using the alternative_cb macro. With CFI enabled
using non-canonical jump tables, the compiler isn't able to replace the
function reference with the jump table reference, which trips CFI.

Bug: 145210207
Change-Id: I2361b601d987cd25f88aa0b9f37b400ff566febc
Signed-off-by: Sami Tolvanen
17 Jul, 2020
1 commit
-
This reverts commit 2e34bc14ef7996dd9f8adc9e49c83748054193b3 as CFI is
being removed from the tree to come back later as a "clean" set of
patches.

Bug: 145210207
Cc: Sami Tolvanen
Signed-off-by: Greg Kroah-Hartman
Change-Id: I209609a6396a0bef2acee70beb1f7f9390c51cbd
13 Jul, 2020
1 commit
-
Linux 5.8-rc5
Signed-off-by: Greg Kroah-Hartman
Change-Id: I0604003a73d4691fe7e3f78cb51d6605f5ac8f4a
09 Jul, 2020
1 commit
-
Commit f7b93d42945c ("arm64/alternatives: use subsections for replacement
sequences") moved the alternatives replacement sequences into subsections,
in order to keep them as close as possible to the code that they replace.

Unfortunately, this broke the logic in branch_insn_requires_update,
which assumed that any branch into kernel executable code was a branch
that required updating, which is no longer the case now that the code
sequences that are patched in are in the same section as the patch site
itself.

So the only way to discriminate branches that require updating from ones
that don't is to check whether the branch targets the replacement sequence
itself, and so we can drop the call to kernel_text_address() entirely.

Fixes: f7b93d42945c ("arm64/alternatives: use subsections for replacement sequences")
Reported-by: Alexandru Elisei
Signed-off-by: Ard Biesheuvel
Tested-by: Alexandru Elisei
Link: https://lore.kernel.org/r/20200709125953.30918-1-ardb@kernel.org
Signed-off-by: Will Deacon
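The fix above boils down to a containment test: a branch inside a replacement sequence needs fixing up only when its target lies within the replacement sequence itself. A minimal userspace sketch of that test follows; the function name and parameters are illustrative stand-ins, not the kernel's actual branch_insn_requires_update helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: does the branch target fall inside the
 * replacement sequence [replptr, replptr + repl_len)? Only then does
 * the relocated branch need its immediate recomputed. */
static bool branch_targets_replacement(uint64_t target,
				       uint64_t replptr, uint64_t repl_len)
{
	return target >= replptr && target < replptr + repl_len;
}
```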
27 Nov, 2019
1 commit
-
__apply_alternatives makes indirect calls to functions whose address is
taken in assembly code using the alternative_cb macro. With CFI enabled
using non-canonical jump tables, the compiler isn't able to replace the
function reference with the jump table reference, which trips CFI.

Bug: 145210207
Change-Id: I6cdd164f9315c0aa16a1427ab1a67cfa8aad3ffd
Signed-off-by: Sami Tolvanen
19 Jun, 2019
1 commit
-
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation this program is
distributed in the hope that it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details you should have received a copy of the gnu general
public license along with this program if not see http www gnu org
licenses

extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 503 file(s).
Signed-off-by: Thomas Gleixner
Reviewed-by: Alexios Zavras
Reviewed-by: Allison Randal
Reviewed-by: Enrico Weigelt
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman
06 Feb, 2019
2 commits
-
Currently alternatives are applied very late in the boot process (and
a long time after we enable scheduling). Some alternative sequences,
such as those that alter the way CPU context is stored, must be applied
much earlier in the boot sequence.

Introduce apply_boot_alternatives() to allow some alternatives to be
applied immediately after we detect the CPU features of the boot CPU.

Signed-off-by: Daniel Thompson
[julien.thierry@arm.com: rename to fit new cpufeature framework better,
apply BOOT_SCOPE feature early in boot]
Signed-off-by: Julien Thierry
Reviewed-by: Suzuki K Poulose
Reviewed-by: Marc Zyngier
Cc: Will Deacon
Cc: Christoffer Dall
Cc: Suzuki K Poulose
Signed-off-by: Catalin Marinas
-
In preparation for the application of alternatives at different points
during the boot process, provide the possibility to check whether
alternatives for a feature of interest were already applied, instead of
having a global boolean for all alternatives.

Make the VHE enablement code check for the VHE feature instead of considering
all alternatives.

Signed-off-by: Julien Thierry
Acked-by: Marc Zyngier
Cc: Will Deacon
Cc: Suzuki K Poulose
Cc: Marc Zyngier
Cc: Christoffer Dall
Signed-off-by: Catalin Marinas
08 Aug, 2018
1 commit
-
Return statements in functions returning bool should use true or false
instead of an integer value. This code was detected with the help of
Coccinelle.

Signed-off-by: Gustavo A. R. Silva
Signed-off-by: Will Deacon
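A tiny userspace illustration of the style issue this commit fixes; the function names are made up for the example:

```c
#include <assert.h>
#include <stdbool.h>

/* Poor style: a bool-returning function returning bare integers.
 * It works (1/0 convert to true/false), but reads like an int. */
static bool is_even_bad(int n)
{
	return (n & 1) ? 0 : 1;
}

/* Preferred: the boolean expression itself, yielding true/false. */
static bool is_even(int n)
{
	return (n & 1) == 0;
}
```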
28 Jun, 2018
1 commit
-
The implementation of flush_icache_range() includes instruction sequences
which are themselves patched at runtime, so it is not safe to call from
the patching framework.

This patch reworks the alternatives cache-flushing code so that it rolls
its own internal D-cache maintenance using DC CIVAC before invalidating
the entire I-cache after all alternatives have been applied at boot.
Modules don't cause any issues, since flush_icache_range() is safe to
call by the time they are loaded.

Acked-by: Mark Rutland
Reported-by: Rohit Khanna
Cc: Alexander Van Brunt
Signed-off-by: Will Deacon
Signed-off-by: Catalin Marinas
19 Mar, 2018
1 commit
-
We've so far relied on a patching infrastructure that only gave us
a single alternative, without any way to provide a range of potential
replacement instructions. For a single feature, this is an all-or-nothing
thing.

It would be interesting to have a more flexible, finer-grained way of patching
the kernel though, where we could dynamically tune the code that gets
injected.

In order to achieve this, let's introduce a new form of dynamic patching,
associating a callback to a patching site. This callback gets source and
target locations of the patching request, as well as the number of
instructions to be patched.

Dynamic patching is declared with the new ALTERNATIVE_CB and alternative_cb
directives:

	asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback)
		     : "r" (v));

or

	alternative_cb callback
		mov x0, #0
	alternative_cb_end

where callback is the C function computing the alternative.
Reviewed-by: Christoffer Dall
Reviewed-by: Catalin Marinas
Signed-off-by: Marc Zyngier
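The callback's job is to generate the replacement instructions itself at patch time. A rough userspace sketch of that shape follows; the struct and signature are simplified stand-ins, not the kernel's real alt_instr or callback types, and the example just writes NOPs:

```c
#include <assert.h>
#include <stdint.h>

/* AArch64 NOP instruction encoding. */
#define AARCH64_NOP 0xd503201fu

/* Simplified stand-in for a patching-site descriptor (hypothetical,
 * not the kernel's alt_instr). */
struct fake_alt {
	int nr_inst;	/* number of instructions at the patch site */
};

/* Sketch of a dynamic-patching callback: given the original location
 * and an update buffer, it computes and writes the replacement
 * instructions. Here we simply NOP out the whole site. */
static void nop_patch_cb(const struct fake_alt *alt,
			 const uint32_t *origptr, uint32_t *updptr)
{
	(void)origptr;	/* a real callback may inspect the original code */
	for (int i = 0; i < alt->nr_inst; i++)
		updptr[i] = AARCH64_NOP;
}
```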
13 Jan, 2018
1 commit
-
Now that KVM uses tpidr_el2 in the same way as Linux's cpu_offset in
tpidr_el1, merge the two. This saves KVM from save/restoring tpidr_el1
on VHE hosts, and allows future code to blindly access per-cpu variables
without triggering world-switch.

Signed-off-by: James Morse
Reviewed-by: Christoffer Dall
Reviewed-by: Catalin Marinas
Signed-off-by: Catalin Marinas
29 Jun, 2017
1 commit
-
get_alt_insn() is used to read and create ARM instructions, which
are always stored in memory in little-endian order. These values
are thus correctly converted to/from native order when processed
but the pointers used to hold the address of these instructions
are declared as for native order values.

Fix this by declaring the pointers as __le32* instead of u32* and
make the few appropriate needed changes like removing the unneeded
cast '(u32*)' in front of __ALT_PTR()'s definition.

Signed-off-by: Luc Van Oostenryck
Signed-off-by: Will Deacon
23 Mar, 2017
1 commit
-
One important rule of thumb when designing a secure software system is
that memory should never be writable and executable at the same time.
We mostly adhere to this rule in the kernel, except at boot time, when
regions may be mapped RWX until after we are done applying alternatives
or making other one-off changes.

For the alternative patching, we can improve the situation by applying
the fixups via the linear mapping, which is never mapped with executable
permissions. So map the linear alias of .text with RW- permissions
initially, and remove the write permissions as soon as alternative
patching has completed.

Reviewed-by: Laura Abbott
Reviewed-by: Mark Rutland
Tested-by: Mark Rutland
Signed-off-by: Ard Biesheuvel
Signed-off-by: Catalin Marinas
09 Sep, 2016
2 commits
-
adrp uses PC-relative address offset to a page (of 4K size) of
a symbol. If it appears in an alternative code patched in, we
should adjust the offset to reflect the address where it will
be run from. This patch adds support for fixing the offset
for adrp instructions.

Cc: Will Deacon
Cc: Marc Zyngier
Cc: Andre Przywara
Cc: Mark Rutland
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
-
The alternative code patching doesn't check if the replaced instruction
uses a pc relative literal. This could cause silent corruption in the
instruction stream as the instruction will be executed from a different
address than what it was compiled for. Catch all such cases.

Cc: Marc Zyngier
Cc: Andre Przywara
Cc: Mark Rutland
Cc: Catalin Marinas
Suggested-by: Will Deacon
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
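The adrp fixup described above comes down to recomputing a page-relative offset: adrp materialises the 4K page of a symbol relative to the page of the instruction itself, so an instruction copied to a new address needs a new immediate. A sketch of just that computation follows (illustrative only; encoding the result back into the instruction's immhi:immlo fields is omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Recompute the adrp page offset (in 4K pages) for an instruction
 * that will now execute at new_pc but must still reach target.
 * The difference of two page-aligned addresses is always a multiple
 * of 4096, so the division is exact. */
static int64_t adrp_new_imm(uint64_t target, uint64_t new_pc)
{
	int64_t target_page = (int64_t)(target & ~0xfffULL);
	int64_t pc_page     = (int64_t)(new_pc & ~0xfffULL);

	return (target_page - pc_page) / 4096;
}
```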
26 Aug, 2016
1 commit
-
Each time new section markers are added, kernel/vmlinux.ld.S is updated,
and new extern char __start_foo[] definitions are scattered through the
tree.

Create asm/sections.h to collect these definitions (and include
the existing asm-generic version).

Signed-off-by: James Morse
Reviewed-by: Mark Rutland
Tested-by: Mark Rutland
Reviewed-by: Catalin Marinas
Signed-off-by: Will Deacon
11 Dec, 2015
1 commit
-
Currently we treat the alternatives separately from other data that's
only used during initialisation, using separate .altinstructions and
.altinstr_replacement linker sections. These are freed for general
allocation separately from .init*. This is problematic as:

* We do not remove execute permissions, as we do for .init, leaving the
  memory executable.
* We pad between them, making the kernel Image binary up to PAGE_SIZE
  bytes larger than necessary.

This patch moves the two sections into the contiguous region used for
.init*. This saves some memory, ensures that we remove execute
permissions, and allows us to remove some code made redundant by this
reorganisation.

Signed-off-by: Mark Rutland
Cc: Andre Przywara
Cc: Catalin Marinas
Cc: Jeremy Linton
Cc: Laura Abbott
Cc: Will Deacon
Signed-off-by: Will Deacon
05 Aug, 2015
1 commit
-
In order to guarantee that the patched instruction stream is visible to
a CPU, that CPU must execute an isb instruction after any related cache
maintenance has completed.

The instruction patching routines in kernel/insn.c get this right for
things like jump labels and ftrace, but the alternatives patching omits
it entirely leaving secondary cores in a potential limbo between the old
and the new code.

This patch adds an isb following the secondary polling loop in the
alternatives patching.

Signed-off-by: Will Deacon
31 Jul, 2015
1 commit
-
When patching the kernel text with alternatives, we may end up patching
parts of the stop_machine state machine (e.g. atomic_dec_and_test in
ack_state) and consequently corrupt the instruction stream of any
secondary CPUs.

This patch passes the cpu_online_mask to stop_machine, forcing all of
the CPUs into our own callback which can place the secondary cores into
a dumb (but safe!) polling loop whilst the patching is carried out.

Signed-off-by: Will Deacon
05 Jun, 2015
1 commit
-
Since all branches are PC-relative on AArch64, these instructions
cannot be used as an alternative with the simplistic approach
we currently have (the immediate has been computed from
the .altinstr_replacement section, and ends up being completely off
if the target is outside of the replacement sequence).

This patch handles the branch instructions in a different way,
using the insn framework to recompute the immediate, and generate
the right displacement in the above case.

Acked-by: Will Deacon
Signed-off-by: Marc Zyngier
Signed-off-by: Catalin Marinas
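Recomputing a branch immediate for its final location, as the commit describes, can be sketched with a toy encoder. An AArch64 unconditional branch packs a signed 26-bit word offset into the low bits of the instruction (opcode 0b000101 in the top six bits). This is illustrative only, not the kernel's insn framework:

```c
#include <assert.h>
#include <stdint.h>

/* Toy encoder for an AArch64 `b` instruction: compute the word offset
 * from pc to target and place it in the imm26 field. Range checking
 * (+/-128MB) is omitted for brevity. */
static uint32_t encode_b(uint64_t pc, uint64_t target)
{
	int64_t off = (int64_t)(target - pc) / 4;	/* word offset */

	return 0x14000000u | ((uint32_t)off & 0x03ffffffu);
}
```

Patching a branch into a different address is then just calling the encoder again with the new pc, which is exactly the "recompute the immediate" step the insn framework performs.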
05 May, 2015
1 commit
-
This reverts most of commit fef7f2b2010381c795ae43743ad31931cc58f5ad.
It turns out that there are a couple of problems with the way we're
fixing up branch instructions used as part of alternative instruction
sequences:

(1) If the branch target is also in the alternative sequence, we'll
    generate a branch into the .altinstructions section which actually
    gets freed.

(2) The calls to aarch64_insn_{read,write} bring an awful lot more
    code into the patching path (e.g. taking locks, poking the fixmap,
    invalidating the TLB) which isn't actually needed for the early
    patching run under stop_machine, but makes the use of alternative
    sequences extremely fragile (as we can't patch code that could be
    used by the patching code).

Given that no code actually requires alternative patching of immediate
branches, let's remove this support for now and revisit it when we've
branches, let's remove this support for now and revisit it when we've
got a user. We leave the updated size check, since we really do require
the sequences to be the same length.Acked-by: Marc Zyngier
Signed-off-by: Will Deacon
30 Mar, 2015
1 commit
-
Since all immediate branches are PC-relative on AArch64, these
instructions cannot be used as an alternative with the simplistic
approach we currently have (the immediate has been computed from
the .altinstr_replacement section, and ends up being completely off
if we insert it directly).

This patch handles the b and bl instructions in a different way,
using the insn framework to recompute the immediate, and generate
the right displacement.

Reviewed-by: Andre Przywara
Acked-by: Will Deacon
Signed-off-by: Marc Zyngier
Signed-off-by: Will Deacon
04 Dec, 2014
1 commit
-
Currently the kernel patches all necessary instructions once at boot
time, so modules are not covered by this.
Change the apply_alternatives() function to take a beginning and an
end pointer and introduce a new variant (apply_alternatives_all()) to
cover the existing use case for the static kernel image section.
Add a module_finalize() function to arm64 to check for an
alternatives section in a module and patch only the instructions from
that specific area.
Since that module code is not touched before the module
initialization has ended, we don't need to halt the machine before
doing the patching in the module's code.

Signed-off-by: Andre Przywara
Signed-off-by: Will Deacon
25 Nov, 2014
1 commit
-
With a blatant copy of some x86 bits we introduce the alternative
runtime patching "framework" to arm64.
This is quite basic for now and we only provide the functions we need
at this time.
This is connected to the newly introduced feature bits.

Signed-off-by: Andre Przywara
Signed-off-by: Will Deacon