15 Mar, 2018
1 commit
-
commit 8554004a0231dedf44d4d62147fb3d6a6db489aa upstream.
Omitting suffixes from instructions in AT&T mode is bad practice when
operand size cannot be determined by the assembler from register
operands, and is likely going to be warned about by upstream GAS in the
future (mine does already). Add the single missing suffix here.
Signed-off-by: Jan Beulich
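For illustration only (this is not the patched file): in AT&T syntax a memory-only operand gives the assembler nothing to infer the operand size from, so the suffix has to be spelled out, e.g.:
    /* Hypothetical example: "mov $0, %0" would be ambiguous here,
     * since the only operand is memory; "movq" makes the size explicit. */
    static inline void clear_word(unsigned long *p)
    {
            asm volatile("movq $0, %0" : "=m" (*p));
    }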
Acked-by: Thomas Gleixner
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Peter Zijlstra
Link: http://lkml.kernel.org/r/5A8AF5F602000078001A9230@prv-mh.provo.novell.com
Signed-off-by: Ingo Molnar
Signed-off-by: Greg Kroah-Hartman
02 Nov, 2017
1 commit
-
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to be applied to
a file was done in a spreadsheet of side by side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5 lines).
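For reference, the tag itself is a single comment line at the top of the file; for a GPL-2.0 .c file it is typically
    // SPDX-License-Identifier: GPL-2.0
while headers and assembly files carry the block-comment form
    /* SPDX-License-Identifier: GPL-2.0 */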
Reviewed-by: Philippe Ombredanne
Reviewed-by: Thomas Gleixner
Signed-off-by: Greg Kroah-Hartman
18 Jul, 2017
2 commits
-
Add support to check if memory encryption is active in the kernel and that
it has been enabled on the AP. If memory encryption is active in the kernel
but has not been enabled on the AP, then set the memory encryption bit (bit
23) of MSR_K8_SYSCFG to enable memory encryption on that AP and allow the
AP to continue start up.
Signed-off-by: Tom Lendacky
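A minimal C sketch of that check (constant values per the commit text; the helper names come from the kernel's generic MSR API, and the real change sits in the AP startup/trampoline path):
    #include <asm/msr.h>

    #define MSR_K8_SYSCFG                   0xc0010010
    #define MSR_K8_SYSCFG_MEM_ENCRYPT       (1ULL << 23)    /* MemEncryptionModEn */

    static void sme_enable_on_ap(bool sme_active_in_kernel)
    {
            u64 syscfg;

            if (!sme_active_in_kernel)
                    return;

            rdmsrl(MSR_K8_SYSCFG, syscfg);
            if (!(syscfg & MSR_K8_SYSCFG_MEM_ENCRYPT)) {
                    /* SME is on in the kernel but not yet on this AP */
                    syscfg |= MSR_K8_SYSCFG_MEM_ENCRYPT;
                    wrmsrl(MSR_K8_SYSCFG, syscfg);
            }
    }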
Reviewed-by: Thomas Gleixner
Reviewed-by: Borislav Petkov
Cc: Alexander Potapenko
Cc: Andrey Ryabinin
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: Brijesh Singh
Cc: Dave Young
Cc: Dmitry Vyukov
Cc: Jonathan Corbet
Cc: Konrad Rzeszutek Wilk
Cc: Larry Woodman
Cc: Linus Torvalds
Cc: Matt Fleming
Cc: Michael S. Tsirkin
Cc: Paolo Bonzini
Cc: Peter Zijlstra
Cc: Radim Krčmář
Cc: Rik van Riel
Cc: Toshimitsu Kani
Cc: kasan-dev@googlegroups.com
Cc: kvm@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-efi@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/37e29b99c395910f56ca9f8ecf7b0439b28827c8.1500319216.git.thomas.lendacky@amd.com
Signed-off-by: Ingo Molnar
-
When Secure Memory Encryption is enabled, the trampoline area must not
be encrypted. A CPU running in real mode will not be able to decrypt
memory that has been encrypted because it will not be able to use addresses
with the memory encryption mask.
Signed-off-by: Tom Lendacky
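Conceptually the fix amounts to clearing the encryption attribute on the trampoline pages once they are reserved; a hedged sketch, assuming the kernel's sme_active()/set_memory_decrypted() helpers and illustrative variable names:
    /* Sketch: mark the reserved trampoline range decrypted so real-mode
     * code, which cannot apply the SME mask to its addresses, still
     * fetches the correct bytes. real_mode_pa and size are illustrative. */
    if (sme_active())
            set_memory_decrypted((unsigned long)__va(real_mode_pa),
                                 size >> PAGE_SHIFT);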
Reviewed-by: Thomas Gleixner
Reviewed-by: Borislav Petkov
Cc: Alexander Potapenko
Cc: Andrey Ryabinin
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: Brijesh Singh
Cc: Dave Young
Cc: Dmitry Vyukov
Cc: Jonathan Corbet
Cc: Konrad Rzeszutek Wilk
Cc: Larry Woodman
Cc: Linus Torvalds
Cc: Matt Fleming
Cc: Michael S. Tsirkin
Cc: Paolo Bonzini
Cc: Peter Zijlstra
Cc: Radim Krčmář
Cc: Rik van Riel
Cc: Toshimitsu Kani
Cc: kasan-dev@googlegroups.com
Cc: kvm@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-efi@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/c70ffd2614fa77e80df31c9169ca98a9b16ff97c.1500319216.git.thomas.lendacky@amd.com
Signed-off-by: Ingo Molnar
13 Jun, 2017
1 commit
-
With CONFIG_X86_5LEVEL=y, level 4 is no longer top level of page tables.
Let's give these variables more generic names: init_top_pgt and
early_top_pgt.
Signed-off-by: Kirill A. Shutemov
Reviewed-by: Juergen Gross
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Dave Hansen
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20170606113133.22974-9-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar
09 May, 2017
1 commit
-
set_memory_* functions have moved to set_memory.h. Switch to this
explicitly.
Link: http://lkml.kernel.org/r/1488920133-27229-6-git-send-email-labbott@redhat.com
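In other words, callers now pull in the new header directly rather than getting it indirectly; roughly:
    #include <asm/set_memory.h>   /* set_memory_x(), set_memory_nx(), ... */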
Signed-off-by: Laura Abbott
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
28 Nov, 2016
1 commit
-
The make variable KBUILD_CFLAGS contains $(LINUXINCLUDE). But the build
already picks up $(LINUXINCLUDE) from scripts/Makefile.lib. The net effect
is that the (long) list of include directories is used twice.
This is harmless but pointless. So stop using $(LINUXINCLUDE) twice.
Signed-off-by: Paul Bolle
Cc: Linus Torvalds
Cc: Masahiro Yamada
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/1480077514-2586-1-git-send-email-pebolle@tiscali.nl
Signed-off-by: Ingo Molnar
11 Aug, 2016
3 commits
-
If reserve_real_mode() fails, panicking immediately means we're
doomed. Make it safe to try more than once to allocate the
trampoline:
- Degrade a failure from panic() to pr_info(). (If we make it to
setup_real_mode() without reserving the trampoline, we'll panic
then.)
- Factor out helpers so that platform code can supply a specific
address to try.
- Warn if reserve_real_mode() is called after we're done with the
memblock allocator. If that were to happen, we would behave
unpredictably.
Signed-off-by: Andy Lutomirski
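A rough sketch of the shape this gives reserve_real_mode() (the helper names are illustrative, not necessarily the ones the patch introduces):
    #include <linux/memblock.h>
    #include <linux/slab.h>

    void __init reserve_real_mode(void)
    {
            phys_addr_t mem;
            size_t size = real_mode_size_needed();  /* assumed helper */

            if (!size)
                    return;

            WARN_ON(slab_is_available());   /* memblock must still be usable */

            /* Has to be under 1M so we can execute real-mode AP code. */
            mem = memblock_find_in_range(0, 1 << 20, size, PAGE_SIZE);
            if (!mem)
                    pr_info("No sub-1M memory is available for the trampoline\n");
            else
                    set_real_mode_mem(mem); /* assumed helper */
    }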
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Mario Limonciello
Cc: Matt Fleming
Cc: Matthew Garrett
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/876e383038f3e9971aa72fd20a4f5da05f9d193d.1470821230.git.luto@kernel.org
Signed-off-by: Ingo Molnar
-
There's no need to run setup_real_mode() as early as we run it.
Defer it to the same early_initcall that sets up the page
permissions for the real mode code.
This should be a code size reduction. More importantly, it gives us
a longer window in which we can allocate the real mode trampoline.
Signed-off-by: Andy Lutomirski
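The deferral itself is the usual initcall pattern; a sketch (function names illustrative):
    static int __init do_init_real_mode(void)
    {
            setup_real_mode();              /* copy the blob, apply relocations */
            set_real_mode_permissions();    /* assumed name for the page-permission setup */
            return 0;
    }
    early_initcall(do_init_real_mode);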
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Mario Limonciello
Cc: Matt Fleming
Cc: Matthew Garrett
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/fd62f0da4f79357695e9bf3e365623736b05f119.1470821230.git.luto@kernel.org
Signed-off-by: Ingo Molnar
-
The initialization process for trampoline_cr4_features and
mmu_cr4_features was confusing. The intent is for mmu_cr4_features
and *trampoline_cr4_features to stay in sync, but
trampoline_cr4_features is NULL until setup_real_mode() runs. The
old code synchronized *trampoline_cr4_features *twice*, once in
setup_real_mode() and once in setup_arch(). It also initialized
mmu_cr4_features in setup_real_mode(), which causes the actual value
of mmu_cr4_features to potentially depend on when setup_real_mode()
is called.
With this patch, mmu_cr4_features is initialized directly in
setup_arch(), and *trampoline_cr4_features is synchronized to
mmu_cr4_features when the trampoline is set up.
After this patch, it should be safe to defer setup_real_mode().
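In sketch form, the ordering after the patch is (simplified; details assumed):
    /* setup_arch(), early in boot: */
    mmu_cr4_features = __read_cr4();

    /* setup_real_mode(), once the trampoline header exists: */
    trampoline_cr4_features = &trampoline_header->cr4;
    *trampoline_cr4_features = mmu_cr4_features;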
Signed-off-by: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Mario Limonciello
Cc: Matt Fleming
Cc: Matthew Garrett
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/d48a263f9912389b957dd495a7127b009259ffe0.1470821230.git.luto@kernel.org
Signed-off-by: Ingo Molnar
03 Aug, 2016
1 commit
-
Pull kbuild updates from Michal Marek:
- GCC plugin support by Emese Revfy from grsecurity, with a fixup from
Kees Cook. The plugins are meant to be used for static analysis of
the kernel code. Two plugins are provided already.
- reduction of the gcc commandline by Arnd Bergmann.
- IS_ENABLED / IS_REACHABLE macro enhancements by Masahiro Yamada
- bin2c fix by Michael Tautschnig
- setlocalversion fix by Wolfram Sang
* 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild:
gcc-plugins: disable under COMPILE_TEST
kbuild: Abort build on bad stack protector flag
scripts: Fix size mismatch of kexec_purgatory_size
kbuild: make samples depend on headers_install
Kbuild: don't add obj tree in additional includes
Kbuild: arch: look for generated headers in obtree
Kbuild: always prefix objtree in LINUXINCLUDE
Kbuild: avoid duplicate include path
Kbuild: don't add ../../ to include path
vmlinux.lds.h: replace config_enabled() with IS_ENABLED()
kconfig.h: allow to use IS_{ENABLE,REACHABLE} in macro expansion
kconfig.h: use already defined macros for IS_REACHABLE() define
export.h: use __is_defined() to check if __KSYM_* is defined
kconfig.h: use __is_defined() to check if MODULE is defined
kbuild: setlocalversion: print error to STDERR
Add sancov plugin
Add Cyclomatic complexity GCC plugin
GCC plugin infrastructure
Shared library support
19 Jul, 2016
1 commit
-
There are very few files that need to add an -I$(obj) gcc option for the preprocessor
or the assembler. For C files, we always add these for both the objtree and
srctree, but for the other ones we require the Makefile to add them, and
Kbuild then adds it for both trees.
As a preparation for changing the meaning of the -I$(obj) directive to
only refer to the srctree, this changes the two instances in arch/x86 to use
an explicit $(objtree) prefix where needed, otherwise we won't find the
headers any more, as reported by the kbuild 0day builder:
arch/x86/realmode/rm/realmode.lds.S:75:20: fatal error: pasyms.h: No such file or directory
Signed-off-by: Arnd Bergmann
Signed-off-by: Michal Marek
08 Jul, 2016
1 commit
-
Use a separate global variable to define the trampoline PGD used to
start other processors. This change will allow KASLR memory
randomization to change the trampoline PGD to be correctly aligned with
physical memory.
Signed-off-by: Thomas Garnier
Signed-off-by: Kees Cook
Cc: Alexander Kuleshov
Cc: Alexander Popov
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Aneesh Kumar K.V
Cc: Baoquan He
Cc: Boris Ostrovsky
Cc: Borislav Petkov
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Christian Borntraeger
Cc: Dan Williams
Cc: Dave Hansen
Cc: Dave Young
Cc: Denys Vlasenko
Cc: Dmitry Vyukov
Cc: H. Peter Anvin
Cc: Jan Beulich
Cc: Joerg Roedel
Cc: Jonathan Corbet
Cc: Josh Poimboeuf
Cc: Juergen Gross
Cc: Kirill A. Shutemov
Cc: Linus Torvalds
Cc: Lv Zheng
Cc: Mark Salter
Cc: Martin Schwidefsky
Cc: Matt Fleming
Cc: Peter Zijlstra
Cc: Stephen Smalley
Cc: Thomas Gleixner
Cc: Toshi Kani
Cc: Xiao Guangrong
Cc: Yinghai Lu
Cc: kernel-hardening@lists.openwall.com
Cc: linux-doc@vger.kernel.org
Link: http://lkml.kernel.org/r/1466556426-32664-5-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar
20 Apr, 2016
1 commit
-
Since commit 2aedcd098a94 ('kbuild: suppress annoying "... is up to
date." message'), $(call if_changed,...) is evaluated to "@:"
when there is nothing to do.
We no longer need to add "@:" after $(call if_changed,...) to
suppress "... is up to date." message.
Signed-off-by: Masahiro Yamada
Signed-off-by: Michal Marek
23 Mar, 2016
1 commit
-
kcov provides code coverage collection for coverage-guided fuzzing
(randomized testing). Coverage-guided fuzzing is a testing technique
that uses coverage feedback to determine new interesting inputs to a
system. A notable user-space example is AFL
(http://lcamtuf.coredump.cx/afl/). However, this technique is not
widely used for kernel testing due to missing compiler and kernel
support.
kcov does not aim to collect as much coverage as possible. It aims to
collect more or less stable coverage that is a function of syscall inputs.
To achieve this goal it does not collect coverage in soft/hard
interrupts and instrumentation of some inherently non-deterministic or
non-interesting parts of kernel is disabled (e.g. scheduler, locking).
Currently there is a single coverage collection mode (tracing), but the
API anticipates additional collection modes. Initially I also
implemented a second mode which exposes coverage in a fixed-size hash
table of counters (what Quentin used in his original patch). I've
dropped the second mode for simplicity.
This patch adds the necessary support on kernel side. The complementary
compiler support was added in gcc revision 231296.
We've used this support to build syzkaller system call fuzzer, which has
found 90 kernel bugs in just 2 months:
https://github.com/google/syzkaller/wiki/Found-Bugs
We've also found 30+ bugs in our internal systems with syzkaller.
Another (yet unexplored) direction where kcov coverage would greatly
help is more traditional "blob mutation". For example, mounting a
random blob as a filesystem, or receiving a random blob over wire.
Why not gcov. Typical fuzzing loop looks as follows: (1) reset
coverage, (2) execute a bit of code, (3) collect coverage, repeat. A
typical coverage can be just a dozen of basic blocks (e.g. an invalid
input). In such context gcov becomes prohibitively expensive as
reset/collect coverage steps depend on total number of basic
blocks/edges in program (in case of kernel it is about 2M). Cost of
kcov depends only on number of executed basic blocks/edges. On top of
that, kernel requires per-thread coverage because there are always
background threads and unrelated processes that also produce coverage.
With inlined gcov instrumentation per-thread coverage is not possible.
kcov exposes kernel PCs and control flow to user-space which is
insecure. But debugfs should not be mapped as user accessible.
Based on a patch by Quentin Casasnovas.
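The intended user-space workflow (condensed from what became the kcov documentation; error handling omitted): open the debugfs file, mmap the coverage buffer, enable collection around the code of interest, then read back the recorded PCs:
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
    #define KCOV_ENABLE     _IO('c', 100)
    #define KCOV_DISABLE    _IO('c', 101)
    #define COVER_SIZE      (64 << 10)      /* entries, not bytes */

    int main(void)
    {
            int fd = open("/sys/kernel/debug/kcov", O_RDWR);
            unsigned long *cover, n, i;

            ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
            cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);   /* reset counter */
            ioctl(fd, KCOV_ENABLE, 0);

            read(-1, NULL, 0);              /* the syscall being traced */

            n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
            for (i = 0; i < n; i++)
                    printf("0x%lx\n", cover[i + 1]);
            ioctl(fd, KCOV_DISABLE, 0);
            return 0;
    }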
[akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
[akpm@linux-foundation.org: unbreak allmodconfig]
[akpm@linux-foundation.org: follow x86 Makefile layout standards]
Signed-off-by: Dmitry Vyukov
Reviewed-by: Kees Cook
Cc: syzkaller
Cc: Vegard Nossum
Cc: Catalin Marinas
Cc: Tavis Ormandy
Cc: Will Deacon
Cc: Quentin Casasnovas
Cc: Kostya Serebryany
Cc: Eric Dumazet
Cc: Alexander Potapenko
Cc: Kees Cook
Cc: Bjorn Helgaas
Cc: Sasha Levin
Cc: David Drysdale
Cc: Ard Biesheuvel
Cc: Andrey Ryabinin
Cc: Kirill A. Shutemov
Cc: Jiri Slaby
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
29 Feb, 2016
1 commit
-
Code which runs outside the kernel's normal mode of operation often does
unusual things which can cause a static analysis tool like objtool to
emit false positive warnings:
- boot image
- vdso image
- relocation
- realmode
- efi
- head
- purgatory
- modpost
Set OBJECT_FILES_NON_STANDARD for their related files and directories,
which will tell objtool to skip checking them. It's ok to skip them
because they don't affect runtime stack traces.
Also skip the following code which does the right thing with respect to
frame pointers, but is too "special" to be validated by a tool:
- entry
- mcount
Also skip the test_nx module because it modifies its exception handling
table at runtime, which objtool can't understand. Fortunately it's
just a test module so it doesn't matter much.
Currently objtool is the only user of OBJECT_FILES_NON_STANDARD, but it
might eventually be useful for other tools.
Signed-off-by: Josh Poimboeuf
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Arnaldo Carvalho de Melo
Cc: Bernd Petrovitsch
Cc: Borislav Petkov
Cc: Chris J Arges
Cc: Jiri Slaby
Cc: Linus Torvalds
Cc: Michal Marek
Cc: Namhyung Kim
Cc: Pedro Alves
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: live-patching@vger.kernel.org
Link: http://lkml.kernel.org/r/366c080e3844e8a5b6a0327dc7e8c2b90ca3baeb.1456719558.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar
21 Jan, 2016
1 commit
-
UBSAN uses compile-time instrumentation to catch undefined behavior
(UB). The compiler inserts code that performs certain kinds of checks before
operations that could cause UB. If a check fails (i.e. UB is detected), a
__ubsan_handle_* function is called to print an error message.
So most of the work is done by the compiler. This patch just implements
the ubsan handlers that print the errors.
GCC has this capability since 4.9.x [1] (see -fsanitize=undefined
option and its suboptions).
However GCC 5.x has more checkers implemented [2].
Article [3] has a bit more details about UBSAN in the GCC.
[1] - https://gcc.gnu.org/onlinedocs/gcc-4.9.0/gcc/Debugging-Options.html
[2] - https://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html
[3] - http://developerblog.redhat.com/2014/10/16/gcc-undefined-behavior-sanitizer-ubsan/
Issues which UBSAN has found thus far are:
Found bugs:
* out-of-bounds access - 97840cb67ff5 ("netfilter: nfnetlink: fix
insufficient validation in nfnetlink_bind")
undefined shifts:
* d48458d4a768 ("jbd2: use a better hash function for the revoke
table")
* 10632008b9e1 ("clockevents: Prevent shift out of bounds")
* 'x << -1' shift in ext4 -
http://lkml.kernel.org/r/
* undefined rol32(0) -
http://lkml.kernel.org/r/
* undefined dirty_ratelimit calculation -
http://lkml.kernel.org/r/
* undefined rounddown_pow_of_two(0) -
http://lkml.kernel.org/r/
* [WONTFIX] undefined shift in __bpf_prog_run -
http://lkml.kernel.org/r/
WONTFIX here because it should be fixed in bpf program, not in kernel.
signed overflows:
* 32a8df4e0b33f ("sched: Fix odd values in effective_load()
calculations")
* mul overflow in ntp -
http://lkml.kernel.org/r/
* incorrect conversion into rtc_time in rtc_time64_to_tm() -
http://lkml.kernel.org/r/
* unvalidated timespec in io_getevents() -
http://lkml.kernel.org/r/
* [NOTABUG] signed overflow in ktime_add_safe() -
http://lkml.kernel.org/r/
[akpm@linux-foundation.org: fix unused local warning]
[akpm@linux-foundation.org: fix __int128 build woes]
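For a concrete feel of what the handlers report, a trivial user-space program with an over-wide shift, built with -fsanitize=undefined, prints a diagnostic along these lines:
    /* gcc -fsanitize=undefined shift.c && ./a.out
     * -> runtime error: shift exponent 36 is too large for 32-bit type 'int' */
    #include <stdio.h>

    static int shift(int x, int n)
    {
            return x << n;          /* undefined behaviour when n >= 32 or n < 0 */
    }

    int main(void)
    {
            printf("%d\n", shift(1, 36));
            return 0;
    }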
Signed-off-by: Andrey Ryabinin
Cc: Peter Zijlstra
Cc: Sasha Levin
Cc: Randy Dunlap
Cc: Rasmus Villemoes
Cc: Jonathan Corbet
Cc: Michal Marek
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Yury Gribov
Cc: Dmitry Vyukov
Cc: Konstantin Khlebnikov
Cc: Kostya Serebryany
Cc: Johannes Berg
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
17 Feb, 2015
1 commit
-
Pull x86 perf updates from Ingo Molnar:
"This series tightens up RDPMC permissions: currently even highly
sandboxed x86 execution environments (such as seccomp) have permission
to execute RDPMC, which may leak various perf events / PMU state such
as timing information and other CPU execution details.
This 'all is allowed' RDPMC mode is still preserved as the
(non-default) /sys/devices/cpu/rdpmc=2 setting. The new default is
that RDPMC access is only allowed if a perf event is mmap-ed (which is
needed to correctly interpret RDPMC counter values in any case).
As a side effect of these changes CR4 handling is cleaned up in the
x86 code and a shadow copy of the CR4 value is added.
The extra CR4 manipulation adds ~
of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86: Add /sys/devices/cpu/rdpmc=2 to allow rdpmc for all tasks
perf/x86: Only allow rdpmc if a perf_event is mapped
perf: Pass the event to arch_perf_update_userpage()
perf: Add pmu callbacks to track event mapping and unmapping
x86: Add a comment clarifying LDT context switching
x86: Store a per-cpu shadow copy of CR4
x86: Clean up cr4 manipulation
14 Feb, 2015
1 commit
-
This patch adds arch specific code for kernel address sanitizer.
16TB of virtual address space is used for shadow memory. It's located in range
[ffffec0000000000 - fffffc0000000000] between vmemmap and %esp fixup
stacks.
At early stage we map whole shadow region with zero page. Later, after
pages mapped to direct mapping address range we unmap zero pages from
corresponding shadow (see kasan_map_shadow()) and allocate and map a real
shadow memory reusing vmemmap_populate() function.
Also replace __pa with __pa_nodebug before shadow initialized. __pa with
CONFIG_DEBUG_VIRTUAL=y makes an external function call (__phys_addr);
__phys_addr is instrumented, so __asan_load could be called before the shadow
area is initialized.
Signed-off-by: Andrey Ryabinin
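The shadow translation itself is the usual scale-and-offset scheme (this mirrors the generic kasan_mem_to_shadow() helper; KASAN_SHADOW_OFFSET is provided by the architecture configuration described above):
    /* one shadow byte tracks the state of eight bytes of memory */
    #define KASAN_SHADOW_SCALE_SHIFT 3

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
            return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                    + KASAN_SHADOW_OFFSET;
    }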
Cc: Dmitry Vyukov
Cc: Konstantin Serebryany
Cc: Dmitry Chernenkov
Signed-off-by: Andrey Konovalov
Cc: Yuri Gribov
Cc: Konstantin Khlebnikov
Cc: Sasha Levin
Cc: Christoph Lameter
Cc: Joonsoo Kim
Cc: Dave Hansen
Cc: Andi Kleen
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Jim Davis
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
04 Feb, 2015
1 commit
-
Context switches and TLB flushes can change individual bits of CR4.
CR4 reads take several cycles, so store a shadow copy of CR4 in a
per-cpu variable.
To avoid wasting a cache line, I added the CR4 shadow to
cpu_tlbstate, which is already touched in switch_mm. The heaviest
users of the cr4 shadow will be switch_mm and __switch_to_xtra, and
__switch_to_xtra is called shortly after switch_mm during context
switch, so the cacheline is likely to be hot.
Signed-off-by: Andy Lutomirski
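The resulting pattern for CR4 updates is roughly (per-cpu plumbing simplified):
    static inline void cr4_set_bits(unsigned long mask)
    {
            unsigned long cr4 = this_cpu_read(cpu_tlbstate.cr4);

            if ((cr4 | mask) != cr4) {
                    cr4 |= mask;
                    this_cpu_write(cpu_tlbstate.cr4, cr4);
                    __write_cr4(cr4);
            }
    }

    static inline unsigned long cr4_read_shadow(void)
    {
            return this_cpu_read(cpu_tlbstate.cr4); /* no slow CR4 read */
    }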
Reviewed-by: Thomas Gleixner
Signed-off-by: Peter Zijlstra (Intel)
Cc: Kees Cook
Cc: Andrea Arcangeli
Cc: Vince Weaver
Cc: "hillf.zj"
Cc: Valdis Kletnieks
Cc: Paul Mackerras
Cc: Arnaldo Carvalho de Melo
Cc: Linus Torvalds
Link: http://lkml.kernel.org/r/3a54dd3353fffbf84804398e00dfdc5b7c1afd7d.1414190806.git.luto@amacapital.net
Signed-off-by: Ingo Molnar
16 Apr, 2014
1 commit
-
Suppress this unnecessary message during kernel re-build:
make[3]: 'arch/x86/realmode/rm/realmode.bin' is up to date.
Signed-off-by: Peter Foley
Link: http://lkml.kernel.org/r/1397584693-15902-1-git-send-email-pefoley2@pefoley.com
Signed-off-by: Ingo Molnar
30 Jan, 2014
1 commit
-
Bring in upstream merge of x86/kaslr for future patches.
Signed-off-by: H. Peter Anvin
22 Jan, 2014
1 commit
-
Define them once in arch/x86/Makefile instead of twice.
Signed-off-by: David Woodhouse
Link: http://lkml.kernel.org/r/1389180083-23249-1-git-send-email-David.Woodhouse@intel.com
Signed-off-by: H. Peter Anvin
21 Jan, 2014
1 commit
-
Pull x86 cleanups from Ingo Molnar:
"Misc cleanups"* 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86, cpu, amd: Fix a shadowed variable situation
um, x86: Fix vDSO build
x86: Delete non-required instances of include <linux/init.h>
x86, realmode: Pointer walk cleanups, pull out invariant use of __pa()
x86/traps: Clean up error exception handler definitions
07 Jan, 2014
1 commit
-
None of these files are actually using any __init type directives
and hence don't need to include <linux/init.h>. Most are just a
left over from __devinit and __cpuinit removal, or simply due to
code getting copied from one driver to the next.
[ hpa: undid incorrect removal from arch/x86/kernel/head_32.S ]
Signed-off-by: Paul Gortmaker
Link: http://lkml.kernel.org/r/1389054026-12947-1-git-send-email-paul.gortmaker@windriver.com
Signed-off-by: H. Peter Anvin
19 Dec, 2013
1 commit
-
The pointer arithmetic in this function was really bizarre, where in
fact all we really wanted was a simple pointer array walk. Use the
much more idiomatic construction for that (*ptr++).
Factor an invariant use of __pa() out of the relocation loop. At
least on 64 bits it seems gcc isn't capable of doing that
automatically.
Change the scope of a couple of variables to make it extra obvious
that they are extremely local temp variables.
Signed-off-by: H. Peter Anvin
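An illustrative rendering of the idiom described (variable names are made up for the sketch):
    /* a plain pointer-array walk, with the invariant __pa() hoisted out */
    u32 *rel = (u32 *)real_mode_relocs;     /* illustrative name */
    unsigned long delta = __pa(base);       /* computed once, not per entry */
    u32 count = *rel++;

    while (count--) {
            u32 *ptr = (u32 *)(base + *rel++);
            *ptr += delta;
    }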
Link: http://lkml.kernel.org/n/tip-rd908t9c8kvcojdabtmm94mb@git.kernel.org
10 Dec, 2013
1 commit
-
In checkin
5551a34e5aea x86-64, build: Always pass in -mno-sse
we unconditionally added -mno-sse to the main build, to keep newer
compilers from generating SSE instructions from autovectorization.
However, this did not extend to the special environments
(arch/x86/boot, arch/x86/boot/compressed, and arch/x86/realmode/rm).
Add -mno-sse to the compiler command line for these environments, and
add -mno-mmx to all the environments as well, as we don't want a
compiler to generate MMX code either.
This patch also removes a $(cc-option) call for -m32, since we have
long since stopped supporting compilers too old for the -m32 option,
and in fact hardcode it in other places in the Makefiles.
Reported-by: Kevin B. Smith
Cc: Sunil K. Pandey
Signed-off-by: H. Peter Anvin
Cc: H. J. Lu
Link: http://lkml.kernel.org/n/tip-j21wzqv790q834n7yc6g80j1@git.kernel.org
Cc: # build fix only
01 Feb, 2013
1 commit
-
Explicitly merging these two branches due to nontrivial conflicts and
to allow further work.
Resolved Conflicts:
arch/x86/kernel/head32.c
arch/x86/kernel/head64.c
arch/x86/mm/init_64.c
arch/x86/realmode/init.c
Signed-off-by: H. Peter Anvin
30 Jan, 2013
3 commits
-
After we switch to using the #PF handler to help set the page table, init_level4_pgt
will only have entries set after init_mem_mapping().
We need to move copying init_level4_pgt to trampoline_pgd after that.
So split reserve and setup, and move the setup after init_mem_mapping().
Signed-off-by: Yinghai Lu
Link: http://lkml.kernel.org/r/1359058816-7615-11-git-send-email-yinghai@kernel.org
Cc: Jarkko Sakkinen
Acked-by: Jarkko Sakkinen
Signed-off-by: H. Peter Anvin
-
With the #PF handler way to set the early page table, level3_ident will go away
with the 64bit native path.
So just use entries in init_level4_pgt to set them in trampoline_pgd.
Signed-off-by: Yinghai Lu
Link: http://lkml.kernel.org/r/1359058816-7615-10-git-send-email-yinghai@kernel.org
Cc: Jarkko Sakkinen
Acked-by: Jarkko Sakkinen
Signed-off-by: H. Peter Anvin
-
Trampoline code is executed by APs with kernel low mapping on 64bit.
We need to set the trampoline code to EXEC early, before we boot APs.
Found the problem after switching to the #PF handler to set the page table,
and we do not set the initial kernel low mapping with EXEC anymore in
arch/x86/kernel/head_64.S.
Change to use early_initcall instead; that will make sure the trampoline
will have EXEC set.
-v2: Merge two comments according to Borislav Petkov
Signed-off-by: Yinghai Lu
Link: http://lkml.kernel.org/r/1359058816-7615-7-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin
17 Nov, 2012
1 commit
-
When I made an attempt at separating __pa_symbol and __pa I found that there
were a number of cases where __pa was used on an obvious symbol.
I also caught one non-obvious case as _brk_start and _brk_end are based on the
address of __brk_base which is a C visible symbol.
In mark_rodata_ro I was able to reduce the overhead of kernel symbol to
virtual memory translation by using a combination of __va(__pa_symbol())
instead of page_address(virt_to_page()).
Signed-off-by: Alexander Duyck
Link: http://lkml.kernel.org/r/20121116215640.8521.80483.stgit@ahduyck-cp1.jf.intel.com
Signed-off-by: H. Peter Anvin
02 Oct, 2012
1 commit
-
The patch:
73201dbe x86, suspend: On wakeup always initialize cr4 and EFER
... was incorrectly committed in an intermediate (unfinished) form.
- We need to test CF, not ZF, for a bit test with btl.
- We don't actually need to compute the existence of EFLAGS.ID,
since we set a flag at suspend time if CR4 should be restored.
Signed-off-by: H. Peter Anvin
Cc: Rafael J. Wysocki
Link: http://lkml.kernel.org/r/1348529239-17943-1-git-send-email-hpa@linux.intel.com
Signed-off-by: Ingo Molnar
27 Sep, 2012
1 commit
-
We already have a flag word to indicate the existence of MISC_ENABLES,
so use the same flag word to indicate existence of cr4 and EFER, and
always restore them if they exist. That way if something passes a
nonzero value when the value *should* be zero, we will still
initialize it.
Signed-off-by: H. Peter Anvin
Cc: Rafael J. Wysocki
Link: http://lkml.kernel.org/r/1348529239-17943-1-git-send-email-hpa@linux.intel.com
11 Aug, 2012
1 commit
-
GCC built with nonstandard options can enable -fpic by default.
We never want this for 32-bit kernels and it will break the build.
[ hpa: Notably the Android toolchain apparently does this. ]
Change-Id: Iaab7d66e598b1c65ac4a4f0229eca2cd3d0d2898
Signed-off-by: Andrew Boie
Link: http://lkml.kernel.org/r/1344624546-29691-1-git-send-email-andrew.p.boie@intel.com
Signed-off-by: H. Peter Anvin
22 Jun, 2012
1 commit
-
Be a bit more paranoid in the transition back to 16-bit mode. In
particular, in case the kernel is residing above the 4 GiB mark,
switch to the trampoline GDT, and make the jump after turning off
paging a far jump. In theory, none of this should matter, but it is
exactly the kind of things that broken SMM or virtualization software
could trip up on.
Signed-off-by: H. Peter Anvin
Link: http://lkml.kernel.org/r/tip-jopx7y6g6dbcx4tpal8q0jlr@git.kernel.org
18 Jun, 2012
1 commit
-
With the revamped realmode trampoline code, it is trivial to extend
support for reboot=bios to x86-64. Furthermore, while we are at it,
remove the restriction that we can only override the reboot CPU
on 32 bits.
Signed-off-by: H. Peter Anvin
Link: http://lkml.kernel.org/n/tip-jopx7y6g6dbcx4tpal8q0jlr@git.kernel.org
21 May, 2012
1 commit
-
The end signature was defined in wakeup_asm.S as it originally came
from the ACPI wakeup code. However, we rely on the existence of the
.signature section to expand .bss, otherwise we would have to include
code to explicitly zero the .bss depending on the configuration.
Since the expanded .bss is just in .init.data anyway, it's easier to
always have it expanded.
This fixes failures when compiled without CONFIG_ACPI_SLEEP.
Reported-by: Ingo Molnar
Signed-off-by: H. Peter Anvin
Cc: Jarkko Sakkinen
17 May, 2012
2 commits
-
Change EFER to be a single u64 field instead of two u32 fields; change
the order to maintain alignment. Note that on x86-64 cr4 is really
also a 64-bit quantity, although we can only set the low 32 bits from
the trampoline code since it is still executing in 32-bit mode at that
point.
Signed-off-by: H. Peter Anvin
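In outline, the 64-bit header then looks roughly like this (a sketch; the exact field set in asm/realmode.h may differ):
    struct trampoline_header {
            u64 start;
            u64 efer;       /* one u64 field instead of two u32 halves */
            u32 cr4;        /* only the low 32 bits are set from 32-bit code */
    };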
Cc: Jarkko Sakkinen
-
Keep all the realmode code together, including initialization (only
the rm/ subdirectory is actually built as real-mode code, anyway.)
Signed-off-by: H. Peter Anvin
Cc: Jarkko Sakkinen