17 Dec, 2020

2 commits

  • This change adds a --noinstr flag to objtool to allow us to specify
    that we're processing vmlinux.o without also enabling noinstr
    validation. This is needed to avoid false positives with LTO when we
    run objtool on vmlinux.o without CONFIG_DEBUG_ENTRY.

    Bug: 145210207
    Change-Id: I479c72d2733844d2059253035391a0c6e8ad7771
    Link: https://lore.kernel.org/lkml/20201013003203.4168817-11-samitolvanen@google.com/
    Signed-off-by: Sami Tolvanen

    Sami Tolvanen
     
  • Add the --mcount option for generating the __mcount_loc sections
    needed for dynamic ftrace. Using this pass requires the kernel to
    be compiled with -mfentry and with CC_USING_NOP_MCOUNT defined in
    the Makefile.
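
    A minimal sketch of the idea, assuming the usual __start_mcount_loc/
    __stop_mcount_loc linker-script symbols; this is illustrative only,
    not objtool's or ftrace's actual code:

    extern unsigned long __start_mcount_loc[]; /* provided by the linker script */
    extern unsigned long __stop_mcount_loc[];

    /* Each __mcount_loc entry is the address of one __fentry__ call
     * site, which ftrace can later patch (e.g. into a NOP). */
    static void walk_mcount_loc_sketch(void (*patch_site)(unsigned long addr))
    {
            unsigned long *p;

            for (p = __start_mcount_loc; p < __stop_mcount_loc; p++)
                    patch_site(*p);
    }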

    Bug: 145210207
    Change-Id: I34eeeb00c184bf265391549094fc15525536886b
    Link: https://lore.kernel.org/lkml/20200625200235.GQ4781@hirez.programming.kicks-ass.net/
    Signed-off-by: Peter Zijlstra
    [Sami: rebased, dropped config changes, fixed to actually use --mcount,
    and wrote a commit message.]
    Signed-off-by: Sami Tolvanen
    Reviewed-by: Kees Cook

    Peter Zijlstra
     

22 Apr, 2020

1 commit

  • Validate that any call out of .noinstr.text is in between
    instr_begin() and instr_end() annotations.

    This annotation is useful to ensure correct behaviour with respect
    to tracing sensitive code like entry/exit and idle code. When we run
    code in a sensitive context we want a guarantee that no unknown code
    is run.

    Since this validation relies on knowing the section of call
    destination symbols, we must run it on vmlinux.o instead of on
    individual object files.

    Add two options:

    -d/--duplicate "duplicate validation for vmlinux"
    -l/--vmlinux "vmlinux.o validation"

    The latter auto-detects when objname ends with "vmlinux.o" and the
    former forces all validations, including those already done on
    !vmlinux object files.
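
    A hedged sketch of the pattern the validation expects, using the
    instr_begin()/instr_end() names from this description (the in-tree
    annotation names may be spelled differently); the function below is
    illustrative, not taken from the kernel:

    /* A .noinstr.text function must bracket any call into normal,
     * instrumentable code with the annotations, otherwise the
     * vmlinux.o pass warns about the call. */
    static noinstr void enter_from_user_mode_sketch(void)
    {
            /* ... sensitive entry work; no calls allowed here ... */

            instr_begin();
            trace_hardirqs_off();   /* call out of .noinstr.text, annotated */
            instr_end();

            /* ... back to non-instrumentable work ... */
    }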

    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Miroslav Benes
    Reviewed-by: Alexandre Chartre
    Acked-by: Josh Poimboeuf
    Link: https://lkml.kernel.org/r/20200416115119.106268040@infradead.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

26 Mar, 2020

1 commit

  • Have objtool print a few numbers which can be used to size the
    hash tables.

    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Miroslav Benes
    Acked-by: Josh Poimboeuf
    Link: https://lkml.kernel.org/r/20200324160924.321381240@infradead.org

    Peter Zijlstra
     

21 May, 2019

1 commit

  • Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version this program is distributed in the
    hope that it will be useful but without any warranty without even
    the implied warranty of merchantability or fitness for a particular
    purpose see the gnu general public license for more details you
    should have received a copy of the gnu general public license along
    with this program if not see http www gnu org licenses

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version this program is distributed in the
    hope that it will be useful but without any warranty without even
    the implied warranty of merchantability or fitness for a particular
    purpose see the gnu general public license for more details [based]
    [from] [clk] [highbank] [c] you should have received a copy of the
    gnu general public license along with this program if not see http
    www gnu org licenses

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 355 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Kate Stewart
    Reviewed-by: Jilayne Lovejoy
    Reviewed-by: Steve Winslow
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190519154041.837383322@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

03 Apr, 2019

2 commits

  • It is important that UACCESS regions are as small as possible;
    furthermore the UACCESS state is not scheduled, so doing anything that
    might directly call into the scheduler will cause random code to be
    run with UACCESS enabled.

    Teach objtool to track UACCESS state and warn about any CALL made
    while UACCESS is enabled. This very much includes the __fentry__()
    and __preempt_schedule() calls.

    Note that exceptions _do_ save/restore the UACCESS state, and therefore
    they can drive preemption. This also means that all exception handlers
    must have an otherwise redundant UACCESS disable instruction;
    therefore ignore this warning for !STT_FUNC code (exception handlers
    are not normal functions).
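
    A minimal sketch of the pattern this pass enforces, assuming the
    kernel's user_access_begin()/unsafe_put_user()/user_access_end()
    accessors; the function itself is illustrative only:

    #include <linux/uaccess.h>

    /* No function calls may appear between user_access_begin() and
     * user_access_end(); only the unsafe_*_user() accessors belong in
     * that window. */
    static int put_pair_sketch(int __user *p, int a, int b)
    {
            if (!user_access_begin(p, 2 * sizeof(int)))
                    return -EFAULT;

            unsafe_put_user(a, &p[0], efault);
            unsafe_put_user(b, &p[1], efault);

            user_access_end();
            return 0;

    efault:
            user_access_end();
            return -EFAULT;
    }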

    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Josh Poimboeuf
    Cc: Borislav Petkov
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • For when you want to know the path that reached your fail state:

    $ ./objtool check --no-fp --backtrace arch/x86/lib/usercopy_64.o
    arch/x86/lib/usercopy_64.o: warning: objtool: .altinstr_replacement+0x3: UACCESS disable without MEMOPs: __clear_user()
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x3a: (alt)
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x2e: (branch)
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x18: (branch)
    arch/x86/lib/usercopy_64.o: warning: objtool: .altinstr_replacement+0xffffffffffffffff: (branch)
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x5: (alt)
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x0: :
    0: e8 00 00 00 00 callq 5
    1: R_X86_64_PLT32 __fentry__-0x4
    5: 90 nop
    6: 90 nop
    7: 90 nop
    8: 48 89 f0 mov %rsi,%rax
    b: 48 c1 ee 03 shr $0x3,%rsi
    f: 83 e0 07 and $0x7,%eax
    12: 48 89 f1 mov %rsi,%rcx
    15: 48 85 c9 test %rcx,%rcx
    18: 74 0f je 29
    1a: 48 c7 07 00 00 00 00 movq $0x0,(%rdi)
    21: 48 83 c7 08 add $0x8,%rdi
    25: ff c9 dec %ecx
    27: 75 f1 jne 1a
    29: 48 89 c1 mov %rax,%rcx
    2c: 85 c9 test %ecx,%ecx
    2e: 74 0a je 3a
    30: c6 07 00 movb $0x0,(%rdi)
    33: 48 ff c7 inc %rdi
    36: ff c9 dec %ecx
    38: 75 f6 jne 30
    3a: 90 nop
    3b: 90 nop
    3c: 90 nop
    3d: 48 89 c8 mov %rcx,%rax
    40: c3 retq

    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Josh Poimboeuf
    Cc: Borislav Petkov
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

21 Feb, 2018

3 commits

  • David allowed retpolines in .init.text, except for modules; that
    will trip up objtool's retpoline validation, so fix it.

    Requested-by: David Woodhouse
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Thomas Gleixner
    Acked-by: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Arjan van de Ven
    Cc: Borislav Petkov
    Cc: Dan Williams
    Cc: Dave Hansen
    Cc: David Woodhouse
    Cc: Greg Kroah-Hartman
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • David requested an objtool validation pass for CONFIG_RETPOLINE=y
    builds, which validates that no unannotated indirect jumps or calls
    are left.

    Add an additional .discard.retpoline_safe section to allow annotating
    the few indirect sites that are required and safe.
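
    A hedged sketch of the annotation mechanism, modelled on the
    kernel's ANNOTATE_RETPOLINE_SAFE macro (the exact in-tree definition
    may differ):

    /* Record the address of the following indirect branch in the new
     * section; objtool reads .discard.retpoline_safe to whitelist that
     * one site, and the section is thrown away at link time. */
    #define ANNOTATE_RETPOLINE_SAFE_SKETCH                          \
            "999:\n\t"                                              \
            ".pushsection .discard.retpoline_safe\n\t"              \
            ".quad 999b\n\t"                                        \
            ".popsection\n\t"

    /* Usage: placed in (inline) assembly immediately before the one
     * indirect call or jump that is deliberately not retpolined. */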

    Requested-by: David Woodhouse
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: David Woodhouse
    Acked-by: Thomas Gleixner
    Acked-by: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Arjan van de Ven
    Cc: Borislav Petkov
    Cc: Dan Williams
    Cc: Dave Hansen
    Cc: Greg Kroah-Hartman
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Use the existing global variables instead of passing them around and
    creating duplicate global variables.

    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Thomas Gleixner
    Acked-by: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Arjan van de Ven
    Cc: Borislav Petkov
    Cc: Dan Williams
    Cc: Dave Hansen
    Cc: David Woodhouse
    Cc: Greg Kroah-Hartman
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

18 Jul, 2017

1 commit

  • Now that objtool knows the states of all registers on the stack for each
    instruction, it's straightforward to generate debuginfo for an unwinder
    to use.

    Instead of generating DWARF, generate a new format called ORC, which is
    more suitable for an in-kernel unwinder. See
    Documentation/x86/orc-unwinder.txt for a more detailed description of
    this new debuginfo format and why it's preferable to DWARF.
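
    For orientation, a hedged sketch of roughly what one ORC table
    entry carries; the authoritative layout lives in
    arch/x86/include/asm/orc_types.h and may have grown fields since:

    #include <stdint.h>

    /* Each entry, found via a parallel instruction-address lookup
     * table, tells the unwinder how to recover the previous frame's
     * stack pointer and frame pointer. */
    struct orc_entry_sketch {
            int16_t  sp_offset;     /* add to the base reg to get prev SP */
            int16_t  bp_offset;     /* where the previous BP value lives */
            unsigned sp_reg:4;      /* base register for the SP calculation */
            unsigned bp_reg:4;      /* base register for the BP calculation */
            unsigned type:2;        /* frame type: call, regs, iret, ... */
    } __attribute__((packed));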

    Signed-off-by: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Borislav Petkov
    Cc: Brian Gerst
    Cc: Denys Vlasenko
    Cc: H. Peter Anvin
    Cc: Jiri Slaby
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: live-patching@vger.kernel.org
    Link: http://lkml.kernel.org/r/c9b9f01ba6c5ed2bdc9bb0957b78167fdbf9632e.1499786555.git.jpoimboe@redhat.com
    Signed-off-by: Ingo Molnar

    Josh Poimboeuf
     

29 Feb, 2016

1 commit

  • This adds a host tool named objtool with a "check" subcommand that
    analyzes .o files to ensure the validity of stack metadata. It
    enforces a set of rules on asm code and C inline assembly code so
    that stack traces can be reliable.

    For each function, it recursively follows all possible code paths and
    validates the correct frame pointer state at each instruction.
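
    In pseudo-C, the walk amounts to something like the following
    hedged sketch (illustrative names, not objtool's real data
    structures):

    enum insn_type { INSN_OTHER, INSN_FRAME_BEGIN, INSN_CALL,
                     INSN_JUMP_CONDITIONAL, INSN_RETURN };

    struct insn {
            enum insn_type type;
            struct insn *next;          /* next instruction in the function */
            struct insn *jump_dest;     /* branch target, if any */
    };

    /* Follow every path from a function entry, tracking whether a
     * stack frame has been set up before any call is reached. */
    static int validate_branch_sketch(struct insn *insn, int have_frame)
    {
            for (; insn; insn = insn->next) {
                    switch (insn->type) {
                    case INSN_FRAME_BEGIN:
                            have_frame = 1;
                            break;
                    case INSN_CALL:
                            if (!have_frame)
                                    return -1;  /* call without frame setup */
                            break;
                    case INSN_JUMP_CONDITIONAL:
                            /* follow the taken path as well */
                            if (validate_branch_sketch(insn->jump_dest,
                                                       have_frame))
                                    return -1;
                            break;
                    case INSN_RETURN:
                            return 0;
                    default:
                            break;
                    }
            }
            return 0;
    }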

    It also follows code paths involving kernel special sections, like
    .altinstructions, __jump_table, and __ex_table, which can add
    alternative execution paths to a given instruction (or set of
    instructions). Similarly, it knows how to follow switch statements, for
    which gcc sometimes uses jump tables.

    Here are some of the benefits of validating stack metadata:

    a) More reliable stack traces for frame pointer enabled kernels

    Frame pointers are used for debugging purposes. They allow runtime
    code and debug tools to walk the stack and determine the chain of
    function call sites that led to the currently executing code.

    For some architectures, frame pointers are enabled by
    CONFIG_FRAME_POINTER. For some other architectures they may be
    required by the ABI (sometimes referred to as "backchain pointers").

    For C code, gcc automatically generates instructions for setting up
    frame pointers when the -fno-omit-frame-pointer option is used.

    But for asm code, the frame setup instructions have to be written by
    hand, which most people don't do. So the end result is that
    CONFIG_FRAME_POINTER is honored for C code but not for most asm code.

    For stack traces based on frame pointers to be reliable, all
    functions which call other functions must first create a stack frame
    and update the frame pointer. If a first function doesn't properly
    create a stack frame before calling a second function, the *caller*
    of the first function will be skipped on the stack trace.

    For example, consider the following example backtrace with frame
    pointers enabled:

    [] dump_stack+0x4b/0x63
    [] cmdline_proc_show+0x12/0x30
    [] seq_read+0x108/0x3e0
    [] proc_reg_read+0x42/0x70
    [] __vfs_read+0x37/0x100
    [] vfs_read+0x86/0x130
    [] SyS_read+0x58/0xd0
    [] entry_SYSCALL_64_fastpath+0x12/0x76

    It correctly shows that the caller of cmdline_proc_show() is
    seq_read().

    If we remove the frame pointer logic from cmdline_proc_show() by
    replacing the frame pointer related instructions with nops, here's
    what it looks like instead:

    [] dump_stack+0x4b/0x63
    [] cmdline_proc_show+0x12/0x30
    [] proc_reg_read+0x42/0x70
    [] __vfs_read+0x37/0x100
    [] vfs_read+0x86/0x130
    [] SyS_read+0x58/0xd0
    [] entry_SYSCALL_64_fastpath+0x12/0x76

    Notice that cmdline_proc_show()'s caller, seq_read(), has been
    skipped. Instead the stack trace seems to show that
    cmdline_proc_show() was called by proc_reg_read().

    The benefit of "objtool check" here is that because it ensures that
    *all* functions honor CONFIG_FRAME_POINTER, no functions will ever[*]
    be skipped on a stack trace.

    [*] unless an interrupt or exception has occurred at the very
    beginning of a function before the stack frame has been created,
    or at the very end of the function after the stack frame has been
    destroyed. This is an inherent limitation of frame pointers.

    b) 100% reliable stack traces for DWARF enabled kernels

    This is not yet implemented. For more details about what is planned,
    see tools/objtool/Documentation/stack-validation.txt.

    c) Higher live patching compatibility rate

    This is not yet implemented. For more details about what is planned,
    see tools/objtool/Documentation/stack-validation.txt.

    To achieve the validation, "objtool check" enforces the following rules:

    1. Each callable function must be annotated as such with the ELF
    function type. In asm code, this is typically done using the
    ENTRY/ENDPROC macros. If objtool finds a return instruction
    outside of a function, it flags an error since that usually indicates
    callable code which should be annotated accordingly.

    This rule is needed so that objtool can properly identify each
    callable function in order to analyze its stack metadata.

    2. Conversely, each section of code which is *not* callable should *not*
    be annotated as an ELF function. The ENDPROC macro shouldn't be used
    in this case.

    This rule is needed so that objtool can ignore non-callable code.
    Such code doesn't have to follow any of the other rules.

    3. Each callable function which calls another function must have the
    correct frame pointer logic, if required by CONFIG_FRAME_POINTER or
    the architecture's back chain rules. This can be done in asm code
    with the FRAME_BEGIN/FRAME_END macros.

    This rule ensures that frame pointer based stack traces will work as
    designed. If function A doesn't create a stack frame before calling
    function B, the _caller_ of function A will be skipped on the stack
    trace.

    4. Dynamic jumps and jumps to undefined symbols are only allowed if:

    a) the jump is part of a switch statement; or

    b) the jump matches sibling call semantics and the frame pointer has
    the same value it had on function entry.

    This rule is needed so that objtool can reliably analyze all of a
    function's code paths. If a function jumps to code in another file,
    and it's not a sibling call, objtool has no way to follow the jump
    because it only analyzes a single file at a time (see the
    sibling-call sketch after this list).

    5. A callable function may not execute kernel entry/exit instructions.
    The only code which needs such instructions is kernel entry code,
    which shouldn't be in callable functions anyway.

    This rule is just a sanity check to ensure that callable functions
    return normally.
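
    For rule 4b, a minimal sketch of a sibling call (the names are
    illustrative): the compiler may turn the tail call below into a
    plain jump, which objtool accepts because the stack and frame
    pointer are exactly as they were on entry to foo().

    int bar(int x);                     /* defined elsewhere */

    int foo(int x)
    {
            return bar(x + 1);          /* may be emitted as "jmp bar" */
    }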

    It currently only supports x86_64. I tried to make the code generic so
    that support for other architectures can hopefully be plugged in
    relatively easily.

    On my Lenovo laptop with an i7-4810MQ 4-core/8-thread CPU, building the
    kernel with objtool checking every .o file adds about three seconds of
    total build time. It hasn't been optimized for performance yet, so
    there are probably some opportunities for better build performance.

    Signed-off-by: Josh Poimboeuf
    Cc: Andrew Morton
    Cc: Andy Lutomirski
    Cc: Arnaldo Carvalho de Melo
    Cc: Bernd Petrovitsch
    Cc: Borislav Petkov
    Cc: Chris J Arges
    Cc: Jiri Slaby
    Cc: Linus Torvalds
    Cc: Michal Marek
    Cc: Namhyung Kim
    Cc: Pedro Alves
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: live-patching@vger.kernel.org
    Link: http://lkml.kernel.org/r/f3efb173de43bd067b060de73f856567c0fa1174.1456719558.git.jpoimboe@redhat.com
    Signed-off-by: Ingo Molnar

    Josh Poimboeuf