19 Jan, 2021
1 commit
-
This reverts commit 6f9aba5a20b84a20848cc444a304f4ada6538b39.
Reason for revert: Breaks CTS
Change-Id: I88ce3506b4881a7d8dae0aaf687dba602a0ca0ff
Signed-off-by: Quentin Perret
16 Jan, 2021
1 commit
-
In the -stable merge of commit e5383432d92c ("scsi: ufs: Clear UAC for
FFU and RPMB LUNs"), the wrong code was added back.
Let's fix it.

Fixes: 7eadb0006a9b ("Merge 5.10.7 into android12-5.10")
Signed-off-by: Jaegeuk Kim
Change-Id: I69c8dee273194279bc7bc23f199f0ecb0e617d03
15 Jan, 2021
38 commits
-
We have to start somewhere, so add initial abi_gki_aarch64.xml file
for the current snapshot with a limited set of symbols.

Note, these symbols have not been reviewed yet, it just gives us a base
to work off of, as now the infrastructure allows for building and
managing the .xml file properly.

Bug: 177417361
Signed-off-by: Greg Kroah-Hartman
Change-Id: Ic9d9aeead1f017409644810f50528be2d165bae6 -
In the Scudo memory allocator [1] we would like to be able to detect
use-after-free vulnerabilities involving large allocations by issuing
mprotect(PROT_NONE) on the memory region used for the allocation when it
is deallocated. Later on, after the memory region has been "quarantined"
for a sufficient period of time we would like to be able to use it for
another allocation by issuing mprotect(PROT_READ|PROT_WRITE).

Before this patch, after removing the write protection, any writes to the
memory region would result in page faults and entering the copy-on-write
code path, even in the usual case where the pages are only referenced by a
single PTE, harming performance unnecessarily. Make it so that any pages
in anonymous mappings that are only referenced by a single PTE are
immediately made writable during the mprotect so that we can avoid the
page faults.

This program shows the critical syscall sequence that we intend to use in
the allocator:

#include <string.h>
#include <sys/mman.h>

enum { kSize = 131072 };
int main(int argc, char **argv) {
char *addr = (char *)mmap(0, kSize, PROT_READ | PROT_WRITE,
MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
for (int i = 0; i != 100000; ++i) {
memset(addr, i, kSize);
mprotect((void *)addr, kSize, PROT_NONE);
mprotect((void *)addr, kSize, PROT_READ | PROT_WRITE);
}
}

The effect of this patch on the above program was measured on a
DragonBoard 845c by taking the median real time execution time of 10 runs.

Before: 3.19s
After: 0.79s

The effect was also measured using one of the microbenchmarks that
we normally use to benchmark the allocator [2], after modifying it
to make the appropriate mprotect calls [3]. With an allocation size
of 131072 bytes to trigger the allocator's "large allocation" code
path the per-iteration time was measured as follows:

Before: 33364ns
After: 6886ns

This patch means that we do more work during the mprotect call itself
in exchange for less work when the pages are accessed. In the worst
case, the pages are not accessed at all. The effect of this patch in
such cases was measured using the following program:

#include <string.h>
#include <sys/mman.h>

enum { kSize = 131072 };
int main(int argc, char **argv) {
char *addr = (char *)mmap(0, kSize, PROT_READ | PROT_WRITE,
MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
memset(addr, 1, kSize);
for (int i = 0; i != 100000; ++i) {
#ifdef PAGE_FAULT
memset(addr + (i * 4096) % kSize, i, 4096);
#endif
mprotect((void *)addr, kSize, PROT_NONE);
mprotect((void *)addr, kSize, PROT_READ | PROT_WRITE);
}
}

With PAGE_FAULT undefined (0 pages touched after removing write
protection) the median real time execution time of 100 runs was measured
as follows:

Before: 0.325928s
After: 0.365493s

With PAGE_FAULT defined (1 page touched) the measurements were
as follows:

Before: 0.441516s
After: 0.380251s

So it seems that even with a single page fault the new approach is faster.
I saw similar results if I adjusted the programs to use a larger mapping
size. With kSize = 1048576 I get these numbers with PAGE_FAULT undefined:

Before: 1.563078s
After: 1.607476s

i.e. around 3%.

And these with PAGE_FAULT defined:

Before: 1.684663s
After: 1.683272s

i.e. about the same.
What I think we may conclude from these results is that for smaller
mappings the advantage of the previous approach, although measurable, is
wiped out by a single page fault. I think we may expect that there should
be at least one access resulting in a page fault (under the previous
approach) after making the pages writable, since the program presumably
made the pages writable for a reason.

For larger mappings we may guesstimate that the new approach wins if the
density of future page faults is > 0.4%. But for the mappings that are
large enough for density to matter (not just the absolute number of page
faults) it doesn't seem like the increase in mprotect latency would be
very large relative to the total mprotect execution time.

Link: https://lkml.kernel.org/r/20201230004134.1185017-1-pcc@google.com
Link: https://linux-review.googlesource.com/id/I98d75ef90e20330c578871c87494d64b1df3f1b8
Link: [1] https://source.android.com/devices/tech/debug/scudo
Link: [2] https://cs.android.com/android/platform/superproject/+/master:bionic/benchmarks/stdlib_benchmark.cpp;l=53;drc=e8693e78711e8f45ccd2b610e4dbe0b94d551cc9
Link: [3] https://github.com/pcc/llvm-project/commit/scudo-mprotect-secondary
Signed-off-by: Peter Collingbourne
Cc: Kostya Kortchinsky
Signed-off-by: Andrew Morton
Signed-off-by: Stephen Rothwell
(cherry picked from commit 2a9e75c907fa2de626d77dd4051fc038f0dbaf52
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git akpm)
Bug: 135772972
Change-Id: I98d75ef90e20330c578871c87494d64b1df3f1b8 -
According to the spec (JESD220E chapter 7.2), while powering off/on the ufs
device, RST_n signal should be between VSS(Ground) and VCCQ/VCCQ2.

Link: https://lore.kernel.org/r/1610103385-45755-3-git-send-email-ziqichen@codeaurora.org
Acked-by: Avri Altman
Signed-off-by: Ziqi Chen
Signed-off-by: Martin K. Petersen

Bug: 177449264
Change-Id: I033301c981d7f85c1b14eacf859335c3b50010e2
(cherry picked from commit b61d0414136853fc38898829cde837ce5d691a9a
git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git 5.12/scsi-staging)
[Can Guo: Resolved minor conflict]
Signed-off-by: Can Guo -
According to the spec (JESD220E chapter 7.2), while powering off/on the ufs
device, REF_CLK signal should be between VSS(Ground) and VCCQ/VCCQ2.

Link: https://lore.kernel.org/r/1610103385-45755-2-git-send-email-ziqichen@codeaurora.org
Reviewed-by: Can Guo
Acked-by: Avri Altman
Signed-off-by: Ziqi Chen
Signed-off-by: Martin K. Petersen

Bug: 177449264
Change-Id: I75c269cbf7602c45b13a3a7023b53daa0ecb838b
(cherry picked from commit 528db9e563d1cb6abf60417ea0bedbd492c68ee4
git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git 5.12/scsi-staging)
[Can Guo: Resolved minor conflict]
Signed-off-by: Can Guo -
Code added for cpu pause feature should be conditional based on
CONFIG_SUSPEND.

Fixes: 5ada76d05637 ("ANDROID: sched/pause: prevent wake up paused cpus")
Bug: 161210528
Reported-by: kernel test robot
Signed-off-by: Todd Kjos
Change-Id: I8dc31064bafb31dd570daae97b7bb547384a771f -
Vendor hooks required explicitly defining macros or inline functions
to handle the non-GKI build case (!CONFIG_ANDROID_VENDOR_HOOKS). Added
support for generating them automatically so the macros are no longer
required.

Both models are now supported so we can transition.
Bug: 177416721
Signed-off-by: Todd Kjos
Change-Id: I01acc389d315a5d509b0c48116854342a42e1058 -
Fixes use of %p format for u64 (should be %llx)
Fixes: e091aa59b956 ("ANDROID: tracing: Add register read and write
tracing support")
Signed-off-by: Todd Kjos
Change-Id: I4cd3a179e08a3fb682db6d9bb8530d504eb71720 -
CFI has additional overhead on indirect branches to modules as the
target is not known at kernel compile-time. This has been demonstrated
to cause problematic performance regressions on benchmarks using GKI
together with modularized scheduler callbacks attached to restricted
vendor hooks.

To restore some of the lost performance, let's disable CFI around the
restricted hook call sites and issue a raw indirect call in fast paths.

We should be able to drop this patch when/if the arm64 static_call
port lands upstream [1] as this would make tracepoints circumvent some
of the CFI checks using text patching, but that still remains to be
proven.

[1] https://lore.kernel.org/linux-arm-kernel/20201028184114.6834-1-ardb@kernel.org/
Bug: 168521642
Change-Id: I7cd59f582b12fed15be64059f08122f96786e650
Signed-off-by: Quentin Perret -
e820__mapped_all() is passed as a callback to is_mmconf_reserved(),
which expects a function of type:

typedef bool (*check_reserved_t)(u64 start, u64 end, unsigned type);

However, e820__mapped_all() accepts enum e820_type as the last argument
and this type mismatch trips indirect call checking with Clang's
Control-Flow Integrity (CFI).

As is_mmconf_reserved() only passes enum e820_type values for the
type argument, change the typedef and the unused type argument in
is_acpi_reserved() to enum e820_type to fix the type mismatch.

Bug: 145210207
Change-Id: Ic7d0f28887e44c40d09e2392c4301547e642a294
(cherry picked from commit 83321c335dccba262a57378361d63da96b8166d6)
Reported-by: Sedat Dilek
Suggested-by: Borislav Petkov
Signed-off-by: Sami Tolvanen
Signed-off-by: Borislav Petkov
Link: https://lkml.kernel.org/r/20201130193900.456726-1-samitolvanen@google.com -
Disable CFI for the stand-alone purgatory.ro.
Bug: 145210207
Change-Id: I957fd1d000ed27ca9fe9adb6c0ec2b6e0f6d73ce
Signed-off-by: Sami Tolvanen -
optprobe_template_func is not marked as a global symbol, which
conflicts with the C declaration and confuses LLVM when CFI is
enabled. However, marking the symbol global results in a CFI jump
table entry being generated for it, which makes objtool unhappy as the
jump table contains a jump to .rodata.

This change solves both issues by removing the C reference to
optprobe_template_func and generating the STACK_FRAME_NON_STANDARD
entry in inline assembly instead.

Bug: 145210207
Change-Id: Ib19b86cf437277036fa218d6e8d7292f10bef940
Signed-off-by: Sami Tolvanen -
Allow CFI enabled entry code to make indirect calls by also mapping
CFI jump tables, and add a check to ensure the jump table section is
not empty.

Bug: 145210207
Change-Id: I4ad3506f7a365cd068009348d45b54e228e42e33
Signed-off-by: Sami Tolvanen -
Also ignore these relocations when loading modules.
Bug: 145210207
Change-Id: I53c8ed4811fee4b770fc5824376fef657ab47bdf
Signed-off-by: Sami Tolvanen -
The __typeid__* symbols aren't actually relocations, so they can be
ignored during relocation generation.

Bug: 145210207
Change-Id: Ib9abe21c3c2aeee2a41491f8358f1a88717fa843
Signed-off-by: Kees Cook
Signed-off-by: Sami Tolvanen -
Instead of using inline asm for the int3 selftest (which confuses
Clang's ThinLTO pass), this restores the C function but disables KASAN
(and tracing for good measure) to keep things simple and avoid
unexpected side-effects. This attempts to keep the fix from commit
ecc606103837 ("x86/alternatives: Fix int3_emulate_call() selftest stack
corruption") without using inline asm.

Bug: 145210207
Change-Id: Ib4cdfde61473febd867c2329f57ec9a8a5eced2f
Signed-off-by: Kees Cook
Signed-off-by: Sami Tolvanen -
The exception table entries are constructed out of a relative offset
and point to the actual function, not the CFI table entry. For now,
just mark the caller as not checking CFI. The failure is most visible
at boot with CONFIG_DEBUG_RODATA_TEST=y.

Bug: 145210207
Change-Id: Idf6efed424fc95ef20ddd69596478dc813754ce4
Signed-off-by: Kees Cook
Signed-off-by: Sami Tolvanen -
Older versions of Clang didn't generate BTI instructions for the
compiler-generated CFI check functions. As CFI provides more
fine-grained control-flow checking than BTI, disable BTI when CFI is
enabled and we're using Clang. -
Disable LTO+CFI for code that runs at EL2 to avoid address space
confusion as the CFI jump tables point to EL1 addresses.

Bug: 145210207
Change-Id: I81359ec648b2616e85dfd3bb399327bac980b3fe
Signed-off-by: Sami Tolvanen -
__apply_alternatives makes indirect calls to functions whose address is
taken in assembly code using the alternative_cb macro. With CFI enabled
using non-canonical jump tables, the compiler isn't able to replace the
function reference with the jump table reference, which trips CFI.

Bug: 145210207
Change-Id: I2361b601d987cd25f88aa0b9f37b400ff566febc
Signed-off-by: Sami Tolvanen -
We use non-canonical CFI jump tables with CONFIG_CFI_CLANG, which
means the compiler replaces function address references with the
address of the function's CFI jump table entry. This results in
__pa_symbol(function), for example, returning the physical address
of the jump table entry, which can lead to address space confusion
since the jump table itself points to a virtual address. The same
issue happens when passing function pointers to hypervisor code
running at EL2.

This change adds __va_function and __pa_function macros, which use
inline assembly to take the actual function address instead, and
changes the relevant code to use these macros.

Bug: 145210207
Change-Id: Ie3079c10427bde705a2244cfb3cb5fb954e5e065
Signed-off-by: Sami Tolvanen -
Disable CFI checking for functions that switch to linear mapping and
make an indirect call to a physical address, since the compiler only
understands virtual addresses.

Bug: 145210207
Change-Id: I2bd39c5891d4f2ce033e5ee515cf86d96eb0447f
Signed-off-by: Sami Tolvanen -
To ensure we take the actual address of a function in kernel text,
use __va_function. Otherwise, with CONFIG_CFI_CLANG, the compiler
may replace the address with a pointer to the CFI jump table, which
can reside inside the module, when compiled with CONFIG_LKDTM=m.

Bug: 145210207
Change-Id: Ie65d3aace55695a5e515436267c048b13ace9002
Signed-off-by: Sami Tolvanen -
Instead of casting callback functions to type iw_handler, which trips
indirect call checking with Clang's Control-Flow Integrity (CFI), add
stub functions with the correct function type for the callbacks.

Bug: 145210207
Change-Id: Ief26496449ec985d600dd06b5e190dd21bf8eb4a
Link: https://lore.kernel.org/lkml/20201117205902.405316-1-samitolvanen@google.com/
Reported-by: Sedat Dilek
Signed-off-by: Sami Tolvanen -
Casting the comparison function to a different type trips indirect call
Control-Flow Integrity (CFI) checking. Remove the additional consts from
cmp_func, and the now unneeded casts.

Bug: 145210207
Change-Id: Iffe0eeec8e7f65a5937513a4bb87e5107faa004e
Link: https://lore.kernel.org/lkml/20200110225602.91663-1-samitolvanen@google.com/
Fixes: 043b3f7b6388 ("lib/list_sort: simplify and remove MAX_LIST_LENGTH_BITS")
Signed-off-by: Sami Tolvanen -
BPF dispatcher functions are patched at runtime to perform direct
instead of indirect calls. Disable CFI for the dispatcher functions
to avoid conflicts.

Bug: 145210207
Change-Id: Iea72f5a9fe09dd5adbb90b0174945707f42594b0
Signed-off-by: Sami Tolvanen -
With ThinLTO and CFI both enabled, LLVM appends a hash to the
names of all static functions. This breaks userspace tools, so
strip out the hash from output.

Bug: 145210207
Change-Id: Icc0173f1d754b378ae81a9f91d84c0814ba26b78
Suggested-by: Jack Pham
Signed-off-by: Sami Tolvanen -
With CFI, a callback function passed to __kthread_queue_delayed_work
from a module can point to a jump table entry defined in the module
instead of the one used in the core kernel, which breaks this test:

WARN_ON_ONCE(timer->function != kthread_delayed_work_timer_fn);

To work around the problem, disable the warning when CFI and modules
are both enabled.

Bug: 145210207
Change-Id: I5b0a60bb69ce8e2bc0d8e4bf6736457b6425b6cf
Signed-off-by: Sami Tolvanen -
With CFI, a callback function passed to __queue_delayed_work from a
module can point to a jump table entry defined in the module instead
of the one used in the core kernel, which breaks this test:

WARN_ON_ONCE(timer->function != delayed_work_timer_fn);

To work around the problem, disable the warning when CFI and modules
are both enabled.

Bug: 145210207
Change-Id: I2a631ea3da9e401af38accf1001082b93b9b3443
Signed-off-by: Sami Tolvanen -
With -ffunction-sections, Clang can generate a jump beyond the end of a
section when the section ends in an unreachable instruction. If the
offset matches the section length, use the last instruction as the jump
destination.

Bug: 145210207
Change-Id: I422b805fe0e857915f0726404d14f62c01629849
Signed-off-by: Sami Tolvanen -
Skip checking for the compiler-generated jump table symbols when Clang's
Control-Flow Integrity (CFI) is enabled.

Bug: 145210207
Change-Id: Icd1fad50214016348289ac5980b062708ab9ecd0
Signed-off-by: Sami Tolvanen -
With CONFIG_CFI_CLANG, LLVM replaces function references with CFI
jump table addresses to allow type checking with indirect calls. This is
unnecessary in ksymtab, so this change uses inline assembly to emit
ksymtab entries that point to the actual function instead when CFI is
enabled.

Bug: 145210207
Change-Id: I894af2c7df476eb00d656c7692a33b25de31e26d
Signed-off-by: Sami Tolvanen -
On modules with no executable code, LLVM generates a __cfi_check stub,
but won't align it to page size as expected. This change ensures the
function is at the beginning of the .text section and correctly aligned
for the CFI shadow.

Also discard the .eh_frame section, which LLD may emit with CFI_CLANG.
Bug: 145210207
Change-Id: I08923febb549aa64454282cc864ac80dadd717b9
Link: https://bugs.llvm.org/show_bug.cgi?id=46293
Signed-off-by: Sami Tolvanen -
We use non-canonical CFI jump tables with CONFIG_CFI_CLANG, which
means the compiler replaces function address references with the
address of the function's CFI jump table entry. This results in
__pa_symbol(function), for example, returning the physical address
of the jump table entry, which can lead to address space confusion
since the jump table itself points to a virtual address.

This change adds generic definitions for __pa/va_function, which
architectures that support CFI can override.

Bug: 145210207
Change-Id: I5b616901d5582478df613a4d28bf2b9c911edb46
Signed-off-by: Sami Tolvanen -
With non-canonical CFI, the compiler rewrites function references to
point to the CFI jump table for indirect call checking. This won't
happen when the address is taken in assembly, and will result in a CFI
failure if we jump to the address later in C code.

This change adds the __cficanonical attribute, which tells the
compiler to switch to a canonical jump table for the function. With
canonical CFI, the compiler appends a .cfi postfix to the function
name, and points the original symbol to the jump table. This allows
addresses taken in assembly code to be used for indirect calls without
tripping CFI checks.

Bug: 145210207
Change-Id: Iaca9d1d95f59d7169168d89bc10bf71420487a67
Signed-off-by: Sami Tolvanen -
This change adds the CONFIG_CFI_CLANG option, CFI error handling,
and a faster look-up table for cross-module CFI checks.

Bug: 145210207
Change-Id: I68d620ca548a911e2f49ba801bc0531406e679a3
Signed-off-by: Sami Tolvanen -
Bug: 177234986
Test: incfs_test passes
Signed-off-by: Paul Lawrence
Change-Id: I79b4273a050b8695b5810abd618fcb4437a05ce5 -
Bug: 177280103
Test: incfs_test passes
Signed-off-by: Paul Lawrence
Change-Id: I24b0d4bf5353834900f868f65e7510529867b615 -
Bug: 177075428
Test: incfs_test passes
atest GtsIncrementalInstallTestCases has only 8 failures
Signed-off-by: Paul Lawrence
Change-Id: I73accfc1982aec1cd7947996c25a23e4a97cfdac