21 Aug, 2020
1 commit
-
... and don't bother zeroing destination on error
Signed-off-by: Al Viro
01 May, 2020
1 commit
-
In order to change the {JMP,CALL}_NOSPEC macros to call out-of-line
versions of the retpoline magic, we need to remove the '%' from the
argument, such that we can paste it onto symbol names.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Josh Poimboeuf
Link: https://lkml.kernel.org/r/20200428191700.151623523@infradead.org
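A simplified sketch of the idea (not the exact kernel macro, which also carries ALTERNATIVE plumbing): once the caller passes a bare register name, GAS macro substitution can paste it onto a per-register thunk symbol, which a '%'-prefixed operand would not allow:

```asm
/* Sketch only: with a bare register name as the argument
 * ("edx" rather than "%edx"), \reg substitution can form the
 * out-of-line thunk's symbol name. */
.macro CALL_NOSPEC reg:req
	call	__x86_indirect_thunk_\reg	/* e.g. __x86_indirect_thunk_edx */
.endm

/* usage: CALL_NOSPEC edx  -- passing "%edx" would break the symbol paste */
```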
18 Oct, 2019
2 commits
-
These are all functions which are invoked from elsewhere, so annotate
them as global using the new SYM_FUNC_START and their ENDPROC's by
SYM_FUNC_END.
Now, ENTRY/ENDPROC can be forced to be undefined on X86, so do so.
Signed-off-by: Jiri Slaby
Signed-off-by: Borislav Petkov
Cc: Allison Randal
Cc: Andrey Ryabinin
Cc: Andy Lutomirski
Cc: Andy Shevchenko
Cc: Ard Biesheuvel
Cc: Bill Metzenthen
Cc: Boris Ostrovsky
Cc: Darren Hart
Cc: "David S. Miller"
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: linux-arch@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Cc: linux-efi
Cc: linux-efi@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: Mark Rutland
Cc: Matt Fleming
Cc: Pavel Machek
Cc: platform-driver-x86@vger.kernel.org
Cc: "Rafael J. Wysocki"
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: x86-ml
Link: https://lkml.kernel.org/r/20191011115108.12392-28-jslaby@suse.cz
-
These are all functions which are invoked from elsewhere, so annotate
them as global using the new SYM_FUNC_START and their ENDPROC's by
SYM_FUNC_END.
Make sure ENTRY/ENDPROC is not defined on X86_64, given these were the
last users.
Signed-off-by: Jiri Slaby
Signed-off-by: Borislav Petkov
Reviewed-by: Rafael J. Wysocki [hibernate]
Reviewed-by: Boris Ostrovsky [xen bits]
Acked-by: Herbert Xu [crypto]
Cc: Allison Randal
Cc: Andrey Ryabinin
Cc: Andy Lutomirski
Cc: Andy Shevchenko
Cc: Ard Biesheuvel
Cc: Armijn Hemel
Cc: Cao jin
Cc: Darren Hart
Cc: Dave Hansen
Cc: "David S. Miller"
Cc: Enrico Weigelt
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Josh Poimboeuf
Cc: Juergen Gross
Cc: Kate Stewart
Cc: "Kirill A. Shutemov"
Cc: kvm ML
Cc: Len Brown
Cc: linux-arch@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Cc: linux-efi
Cc: linux-efi@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: Mark Rutland
Cc: Matt Fleming
Cc: Paolo Bonzini
Cc: Pavel Machek
Cc: Peter Zijlstra
Cc: platform-driver-x86@vger.kernel.org
Cc: "Radim Krčmář"
Cc: Sean Christopherson
Cc: Stefano Stabellini
Cc: "Steven Rostedt (VMware)"
Cc: Thomas Gleixner
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Wei Huang
Cc: x86-ml
Cc: xen-devel@lists.xenproject.org
Cc: Xiaoyao Li
Link: https://lkml.kernel.org/r/20191011115108.12392-25-jslaby@suse.cz
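The mechanical shape of the conversion, as an illustrative sketch (the function name here is made up, not one from checksum_32.S):

```asm
/* Before: old annotation pair */
ENTRY(example_csum_fn)
	ret
ENDPROC(example_csum_fn)

/* After: explicit global-function annotations */
SYM_FUNC_START(example_csum_fn)
	ret
SYM_FUNC_END(example_csum_fn)
```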
31 May, 2019
1 commit
-
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 3029 file(s).
Signed-off-by: Thomas Gleixner
Reviewed-by: Allison Randal
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman
03 Sep, 2018
1 commit
-
Currently, most fixups for attempting to access userspace memory are
handled using _ASM_EXTABLE, which is also used for various other types of
fixups (e.g. safe MSR access, IRET failures, and a bunch of other things).
In order to make it possible to add special safety checks to uaccess fixups
(in particular, checking whether the fault address is actually in
userspace), introduce a new exception table handler ex_handler_uaccess()
and wire it up to all the user access fixups (excluding ones that
already use _ASM_EXTABLE_EX).
Signed-off-by: Jann Horn
Signed-off-by: Thomas Gleixner
Tested-by: Kees Cook
Cc: Andy Lutomirski
Cc: kernel-hardening@lists.openwall.com
Cc: dvyukov@google.com
Cc: Masami Hiramatsu
Cc: "Naveen N. Rao"
Cc: Anil S Keshavamurthy
Cc: "David S. Miller"
Cc: Alexander Viro
Cc: linux-fsdevel@vger.kernel.org
Cc: Borislav Petkov
Link: https://lkml.kernel.org/r/20180828201421.157735-5-jannh@google.com
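A sketch of what a wired-up use site looks like (hypothetical labels and instruction; the same series added a uaccess-specific table macro, _ASM_EXTABLE_UA, alongside the generic _ASM_EXTABLE):

```asm
/* Sketch only: a load through a user pointer whose fault fixup is
 * routed to the uaccess-specific exception handler. */
10:	movl	(%esi), %eax		/* may fault on a user address */
	/* ... */
20:	/* fixup path: e.g. return -EFAULT to the caller */
	_ASM_EXTABLE_UA(10b, 20b)	/* dispatched via ex_handler_uaccess() */
```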
12 Jan, 2018
1 commit
-
Convert all indirect jumps in 32bit checksum assembler code to use
non-speculative sequences when CONFIG_RETPOLINE is enabled.
Signed-off-by: David Woodhouse
Signed-off-by: Thomas Gleixner
Acked-by: Arjan van de Ven
Acked-by: Ingo Molnar
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: Rik van Riel
Cc: Andi Kleen
Cc: Josh Poimboeuf
Cc: thomas.lendacky@amd.com
Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: Jiri Kosina
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Kees Cook
Cc: Tim Chen
Cc: Greg Kroah-Hartman
Cc: Paul Turner
Link: https://lkml.kernel.org/r/1515707194-20531-11-git-send-email-dwmw@amazon.co.uk
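The shape of such a conversion, sketched with a hypothetical register (when CONFIG_RETPOLINE is disabled, the macro degrades to the plain indirect jump):

```asm
/* Before: a bare indirect branch, exploitable via branch-target injection */
	jmp	*%ebx

/* After: the macro emits a retpoline sequence under CONFIG_RETPOLINE */
	JMP_NOSPEC %ebx
```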
08 Aug, 2016
1 commit
-
Signed-off-by: Al Viro
02 Jun, 2015
1 commit
-
So the dwarf2 annotations in low level assembly code have
become an increasing hindrance: unreadable, messy macros
mixed into some of the most security sensitive code paths
of the Linux kernel.
These debug info annotations don't even buy the upstream
kernel anything: dwarf driven stack unwinding has caused
problems in the past so it's out of tree, and the upstream
kernel only uses the much more robust framepointers based
stack unwinding method.
In addition to that there's a steady, slow bitrot going
on with these annotations, requiring frequent fixups.
There's no tooling and no functionality upstream that
keeps it correct.
So burn down the sick forest, allowing new, healthier growth:
27 files changed, 350 insertions(+), 1101 deletions(-)
Someone who has the willingness and time to do this
properly can attempt to reintroduce dwarf debuginfo in x86
assembly code plus dwarf unwinding from first principles,
with the following conditions:
- it should be maximally readable, and maximally low-key to
'ordinary' code reading and maintenance.
- find a build time method to insert dwarf annotations
automatically in the most common cases, for pop/push
instructions that manipulate the stack pointer. This could
be done for example via a preprocessing step that just
looks for common patterns - plus special annotations for
the few cases where we want to depart from the default.
We have hundreds of CFI annotations, so automating most of
that makes sense.
- it should come with build tooling checks that ensure that
CFI annotations are sensible. We've seen such efforts from
the framepointer side, and there's no reason it couldn't be
done on the dwarf side.
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: Frédéric Weisbecker
Cc: Jan Beulich
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar
07 Mar, 2015
1 commit
-
By the nature of the TEST operation, it is often possible to test
a narrower part of the operand:
"testl $3, mem" -> "testb $3, mem",
"testq $3, %rcx" -> "testb $3, %cl"
This results in shorter instructions, because the TEST instruction
has no sign-extending byte-immediate forms unlike other ALU ops.
Note that this change does not create any LCP (Length-Changing Prefix)
stalls, which happen when adding a 0x66 prefix, which happens when
16-bit immediates are used, which changes such TEST instructions:
[test_opcode] [modrm] [imm32]
to:
[0x66] [test_opcode] [modrm] [imm16]
where [imm16] has a *different length* now: 2 bytes instead of 4.
This confuses the decoder and slows down execution.
REX prefixes were carefully designed to almost never hit this case:
adding REX prefix does not change instruction length except MOVABS
and MOV [addr],RAX instruction.
This patch does not add instructions which would use a 0x66 prefix;
code changes in assembly are:
-48 f7 07 01 00 00 00 testq $0x1,(%rdi)
+f6 07 01 testb $0x1,(%rdi)
-48 f7 c1 01 00 00 00 test $0x1,%rcx
+f6 c1 01 test $0x1,%cl
-48 f7 c1 02 00 00 00 test $0x2,%rcx
+f6 c1 02 test $0x2,%cl
-41 f7 c2 01 00 00 00 test $0x1,%r10d
+41 f6 c2 01 test $0x1,%r10b
-48 f7 c1 04 00 00 00 test $0x4,%rcx
+f6 c1 04 test $0x4,%cl
-48 f7 c1 08 00 00 00 test $0x8,%rcx
+f6 c1 08 test $0x8,%cl
Linus further notes:
"There are no stalls from using 8-bit instruction forms.
Now, changing from 64-bit or 32-bit 'test' instructions to 8-bit ones
*could* cause problems if it ends up having forwarding issues, so that
instead of just forwarding the result, you end up having to wait for
it to be stable in the L1 cache (or possibly the register file). The
forwarding from the store buffer is simplest and most reliable if the
read is done at the exact same address and the exact same size as the
write that gets forwarded.
But that's true only if:
(a) the write was very recent and is still in the write queue. I'm
not sure that's the case here anyway.
(b) on at least most Intel microarchitectures, you have to test a
different byte than the lowest one (so forwarding a 64-bit write
to a 8-bit read ends up working fine, as long as the 8-bit read
is of the low 8 bits of the written data).
A very similar issue *might* show up for registers too, not just
memory writes, if you use 'testb' with a high-byte register (where
instead of forwarding the value from the original producer it needs to
go through the register file and then shifted). But it's mainly a
problem for store buffers.
But afaik, the way Denys changed the test instructions, neither of the
above issues should be true.
The real problem for store buffer forwarding tends to be "write 8
bits, read 32 bits". That can be really surprisingly expensive,
because the read ends up having to wait until the write has hit the
cacheline, and we might talk tens of cycles of latency here. But
"write 32 bits, read the low 8 bits" *should* be fast on pretty much
all x86 chips, afaik."
Signed-off-by: Denys Vlasenko
Acked-by: Andy Lutomirski
Acked-by: Linus Torvalds
Cc: Borislav Petkov
Cc: Frederic Weisbecker
Cc: H. Peter Anvin
Cc: H. Peter Anvin
Cc: Kees Cook
Cc: Oleg Nesterov
Cc: Steven Rostedt
Cc: Will Drewry
Link: http://lkml.kernel.org/r/1425675332-31576-1-git-send-email-dvlasenk@redhat.com
Signed-off-by: Ingo Molnar
05 Mar, 2015
1 commit
-
Sequences:
pushl_cfi %reg
CFI_REL_OFFSET reg, 0
and:
popl_cfi %reg
CFI_RESTORE reg
happen quite often. This patch adds macros which generate them.
No assembly changes (verified with objdump -dr vmlinux.o).
Signed-off-by: Denys Vlasenko
Signed-off-by: Andy Lutomirski
Cc: Alexei Starovoitov
Cc: Borislav Petkov
Cc: Frederic Weisbecker
Cc: H. Peter Anvin
Cc: Kees Cook
Cc: Linus Torvalds
Cc: Oleg Nesterov
Cc: Will Drewry
Link: http://lkml.kernel.org/r/1421017655-25561-1-git-send-email-dvlasenk@redhat.com
Link: http://lkml.kernel.org/r/2202eb90f175cf45d1b2d1c64dbb5676a8ad07ad.1424989793.git.luto@amacapital.net
Signed-off-by: Ingo Molnar
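A sketch of what the combined macros plausibly expand to (assumed form, reconstructed from the sequences quoted above; pushl_cfi/popl_cfi already adjust the CFA offset, so the new helpers just fold in the register-location annotation):

```asm
/* Sketch only: combined push/pop + CFI annotation helpers. */
.macro pushl_cfi_reg reg
	pushl	%\reg
	CFI_ADJUST_CFA_OFFSET 4
	CFI_REL_OFFSET \reg, 0
.endm

.macro popl_cfi_reg reg
	popl	%\reg
	CFI_ADJUST_CFA_OFFSET -4
	CFI_RESTORE \reg
.endm
```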
15 Apr, 2013
1 commit
-
As suggested by Peter Anvin.
Signed-off-by: Andy Shevchenko
Cc: H. Peter Anvin
Signed-off-by: Ingo Molnar
21 Apr, 2012
1 commit
-
Remove open-coded exception table entries in arch/x86/lib/checksum_32.S,
and replace them with _ASM_EXTABLE() macros; this will allow us to
change the format and type of the exception table entries.
Signed-off-by: H. Peter Anvin
Cc: David Daney
Link: http://lkml.kernel.org/r/CA%2B55aFyijf43qSu3N9nWHEBwaGbb7T2Oq9A=9EyR=Jtyqfq_cQ@mail.gmail.com
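The pattern being replaced, as a sketch with hypothetical local labels (the macro hides the section name, alignment, and entry format so they can later change in one place):

```asm
/* Before: open-coded exception table entry */
6001:	movl	(%esi), %ebx
.section __ex_table,"a"
	.align	4
	.long	6001b, 6002b		/* fault address, fixup address */
.previous

/* After: the same entry via the macro */
6001:	movl	(%esi), %ebx
	_ASM_EXTABLE(6001b, 6002b)
```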
01 Mar, 2011
1 commit
-
Cleaning up and shortening code...
Signed-off-by: Jan Beulich
Cc: Alexander van Heukelum
LKML-Reference:
Signed-off-by: Ingo Molnar
11 Oct, 2007
1 commit
-
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar