26 Mar, 2020

1 commit

  • For consistency, we have:

    find_symbol_by_offset() / find_symbol_containing()
    find_func_by_offset() / find_containing_func()

    fix that.
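
    The end result is presumably the symmetric set of lookups below in
    tools/objtool/elf.h; the prototypes are sketched for illustration
    rather than copied from the tree:

    struct section;
    struct symbol;

    struct symbol *find_symbol_by_offset(struct section *sec, unsigned long offset);
    struct symbol *find_symbol_containing(struct section *sec, unsigned long offset);
    struct symbol *find_func_by_offset(struct section *sec, unsigned long offset);
    struct symbol *find_func_containing(struct section *sec, unsigned long offset);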

    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Miroslav Benes
    Acked-by: Josh Poimboeuf
    Link: https://lkml.kernel.org/r/20200324160924.558470724@infradead.org

    Peter Zijlstra
     

21 May, 2019

1 commit

  • Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version this program is distributed in the
    hope that it will be useful but without any warranty without even
    the implied warranty of merchantability or fitness for a particular
    purpose see the gnu general public license for more details you
    should have received a copy of the gnu general public license along
    with this program if not see http www gnu org licenses

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version this program is distributed in the
    hope that it will be useful but without any warranty without even
    the implied warranty of merchantability or fitness for a particular
    purpose see the gnu general public license for more details [based]
    [from] [clk] [highbank] [c] you should have received a copy of the
    gnu general public license along with this program if not see http
    www gnu org licenses

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 355 file(s).
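
    In the affected C source files, the removed boilerplate is replaced
    by a single SPDX tag at the top of the file, of the form:

    // SPDX-License-Identifier: GPL-2.0-or-later

    (Headers and other file types use whatever comment style is
    appropriate for them, e.g. /* SPDX-License-Identifier: GPL-2.0-or-later */.)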

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Kate Stewart
    Reviewed-by: Jilayne Lovejoy
    Reviewed-by: Steve Winslow
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190519154041.837383322@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

03 Apr, 2019

1 commit

  • For when you want to know the path that reached your fail state:

    $ ./objtool check --no-fp --backtrace arch/x86/lib/usercopy_64.o
    arch/x86/lib/usercopy_64.o: warning: objtool: .altinstr_replacement+0x3: UACCESS disable without MEMOPs: __clear_user()
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x3a: (alt)
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x2e: (branch)
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x18: (branch)
    arch/x86/lib/usercopy_64.o: warning: objtool: .altinstr_replacement+0xffffffffffffffff: (branch)
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x5: (alt)
    arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x0: :
     0:  e8 00 00 00 00          callq  5
             1: R_X86_64_PLT32   __fentry__-0x4
     5:  90                      nop
     6:  90                      nop
     7:  90                      nop
     8:  48 89 f0                mov    %rsi,%rax
     b:  48 c1 ee 03             shr    $0x3,%rsi
     f:  83 e0 07                and    $0x7,%eax
    12:  48 89 f1                mov    %rsi,%rcx
    15:  48 85 c9                test   %rcx,%rcx
    18:  74 0f                   je     29
    1a:  48 c7 07 00 00 00 00    movq   $0x0,(%rdi)
    21:  48 83 c7 08             add    $0x8,%rdi
    25:  ff c9                   dec    %ecx
    27:  75 f1                   jne    1a
    29:  48 89 c1                mov    %rax,%rcx
    2c:  85 c9                   test   %ecx,%ecx
    2e:  74 0a                   je     3a
    30:  c6 07 00                movb   $0x0,(%rdi)
    33:  48 ff c7                inc    %rdi
    36:  ff c9                   dec    %ecx
    38:  75 f6                   jne    30
    3a:  90                      nop
    3b:  90                      nop
    3c:  90                      nop
    3d:  48 89 c8                mov    %rcx,%rax
    40:  c3                      retq

    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Josh Poimboeuf
    Cc: Borislav Petkov
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

30 Jun, 2017

1 commit

  • This is a major rewrite of objtool. Instead of only tracking frame
    pointer changes, it now tracks all stack-related operations, including
    all register saves/restores.

    In addition to making stack validation more robust, this also paves the
    way for undwarf generation.
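
    As a rough illustration of what tracking all stack operations
    entails, the checker has to carry per-instruction state of roughly
    the following shape; the names and layout here are illustrative
    only, not objtool's actual structures:

    /* Illustrative only -- not objtool's real data structures. */
    struct reg_state {
            int  base;      /* register (or CFA) the value is relative to */
            long offset;    /* displacement from that base */
    };

    struct insn_state {
            struct reg_state cfa;       /* how to compute the canonical frame address */
            struct reg_state regs[16];  /* where each saved register currently lives */
            int stack_size;             /* bytes pushed since function entry */
    };

    Every push, pop and stack-pointer adjustment updates this state,
    which is what makes it possible to derive unwind data (undwarf) from
    the object code later on.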

    Signed-off-by: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Jiri Slaby
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: live-patching@vger.kernel.org
    Link: http://lkml.kernel.org/r/678bd94c0566c6129bcc376cddb259c4c5633004.1498659915.git.jpoimboe@redhat.com
    Signed-off-by: Ingo Molnar

    Josh Poimboeuf
     

29 Feb, 2016

1 commit

  • This adds a host tool named objtool, whose "check" subcommand
    analyzes .o files to ensure the validity of their stack metadata. It
    enforces a set of rules on asm code and C inline assembly so that
    stack traces can be reliable.

    For each function, it recursively follows all possible code paths and
    validates the correct frame pointer state at each instruction.

    It also follows code paths involving kernel special sections, like
    .altinstructions, __jump_table, and __ex_table, which can add
    alternative execution paths to a given instruction (or set of
    instructions). Similarly, it knows how to follow switch statements, for
    which gcc sometimes uses jump tables.
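
    For instance, gcc may lower a dense switch such as the one below
    into an indirect jump through a jump table, which objtool then has
    to locate and follow:

    /* A dense switch that gcc may compile into a jump table. */
    int classify(int c)
    {
            switch (c) {
            case 0:  return 10;
            case 1:  return 11;
            case 2:  return 12;
            case 3:  return 13;
            case 4:  return 14;
            case 5:  return 15;
            default: return -1;
            }
    }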

    Here are some of the benefits of validating stack metadata:

    a) More reliable stack traces for frame pointer enabled kernels

    Frame pointers are used for debugging purposes. They allow runtime
    code and debug tools to be able to walk the stack to determine the
    chain of function call sites that led to the currently executing
    code.

    For some architectures, frame pointers are enabled by
    CONFIG_FRAME_POINTER. For some other architectures they may be
    required by the ABI (sometimes referred to as "backchain pointers").

    For C code, gcc automatically generates instructions for setting up
    frame pointers when the -fno-omit-frame-pointer option is used.

    But for asm code, the frame setup instructions have to be written by
    hand, which most people don't do. So the end result is that
    CONFIG_FRAME_POINTER is honored for C code but not for most asm code.

    For stack traces based on frame pointers to be reliable, all
    functions which call other functions must first create a stack frame
    and update the frame pointer. If a first function doesn't properly
    create a stack frame before calling a second function, the *caller*
    of the first function will be skipped on the stack trace.
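
    To see why, here is a minimal sketch of how a frame-pointer
    ("backchain") walker follows the chain on x86_64; this is only an
    illustration of the idea, not the kernel's actual unwinder:

    #include <linux/printk.h>

    /*
     * A function that honors CONFIG_FRAME_POINTER starts with
     * "push %rbp; mov %rsp,%rbp", so the word at (%rbp) is the caller's
     * saved %rbp and the word above it is the return address into the
     * caller.  A function that skips this prologue leaves its caller's
     * %rbp in place, and the walker steps straight over that caller.
     */
    struct stack_frame {
            struct stack_frame *next_frame; /* caller's saved %rbp */
            unsigned long return_address;   /* return address into the caller */
    };

    static void walk(struct stack_frame *frame)
    {
            while (frame) {
                    printk("%pS\n", (void *)frame->return_address);
                    frame = frame->next_frame;
            }
    }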

    For example, consider the following example backtrace with frame
    pointers enabled:

    [] dump_stack+0x4b/0x63
    [] cmdline_proc_show+0x12/0x30
    [] seq_read+0x108/0x3e0
    [] proc_reg_read+0x42/0x70
    [] __vfs_read+0x37/0x100
    [] vfs_read+0x86/0x130
    [] SyS_read+0x58/0xd0
    [] entry_SYSCALL_64_fastpath+0x12/0x76

    It correctly shows that the caller of cmdline_proc_show() is
    seq_read().

    If we remove the frame pointer logic from cmdline_proc_show() by
    replacing the frame pointer related instructions with nops, here's
    what it looks like instead:

    [] dump_stack+0x4b/0x63
    [] cmdline_proc_show+0x12/0x30
    [] proc_reg_read+0x42/0x70
    [] __vfs_read+0x37/0x100
    [] vfs_read+0x86/0x130
    [] SyS_read+0x58/0xd0
    [] entry_SYSCALL_64_fastpath+0x12/0x76

    Notice that cmdline_proc_show()'s caller, seq_read(), has been
    skipped. Instead the stack trace seems to show that
    cmdline_proc_show() was called by proc_reg_read().

    The benefit of "objtool check" here is that because it ensures that
    *all* functions honor CONFIG_FRAME_POINTER, no functions will ever[*]
    be skipped on a stack trace.

    [*] unless an interrupt or exception has occurred at the very
    beginning of a function before the stack frame has been created,
    or at the very end of the function after the stack frame has been
    destroyed. This is an inherent limitation of frame pointers.

    b) 100% reliable stack traces for DWARF enabled kernels

    This is not yet implemented. For more details about what is planned,
    see tools/objtool/Documentation/stack-validation.txt.

    c) Higher live patching compatibility rate

    This is not yet implemented. For more details about what is planned,
    see tools/objtool/Documentation/stack-validation.txt.

    To achieve the validation, "objtool check" enforces the following rules:

    1. Each callable function must be annotated as such with the ELF
    function type. In asm code, this is typically done using the
    ENTRY/ENDPROC macros. If objtool finds a return instruction
    outside of a function, it flags an error since that usually indicates
    callable code which should be annotated accordingly.

    This rule is needed so that objtool can properly identify each
    callable function in order to analyze its stack metadata.

    2. Conversely, each section of code which is *not* callable should *not*
    be annotated as an ELF function. The ENDPROC macro shouldn't be used
    in this case.

    This rule is needed so that objtool can ignore non-callable code.
    Such code doesn't have to follow any of the other rules.

    3. Each callable function which calls another function must have the
    correct frame pointer logic, if required by CONFIG_FRAME_POINTER or
    the architecture's back chain rules. This can be done in asm code
    with the FRAME_BEGIN/FRAME_END macros.

    This rule ensures that frame pointer based stack traces will work as
    designed. If function A doesn't create a stack frame before calling
    function B, the _caller_ of function A will be skipped on the stack
    trace.

    4. Dynamic jumps and jumps to undefined symbols are only allowed if:

    a) the jump is part of a switch statement; or

    b) the jump matches sibling call semantics and the frame pointer has
    the same value it had on function entry.

    This rule is needed so that objtool can reliably analyze all of a
    function's code paths. If a function jumps to code in another file,
    and it's not a sibling call, objtool has no way to follow the jump
    because it only analyzes a single file at a time. (The sibling-call
    case is sketched after this list.)

    5. A callable function may not execute kernel entry/exit instructions.
    The only code which needs such instructions is kernel entry code,
    which shouldn't be in callable functions anyway.

    This rule is just a sanity check to ensure that callable functions
    return normally.
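
    Rule 4's sibling-call case typically comes from an optimized tail
    call; as a sketch, a compiler may emit a bare jmp rather than
    call+ret for something like:

    /* With optimization, gcc may turn this tail call into a sibling
     * call, i.e. "jmp other_func" instead of call+ret.  objtool accepts
     * the resulting jump to an undefined symbol only because the frame
     * pointer still has its function-entry value at that point. */
    extern int other_func(int x);

    int wrapper(int x)
    {
            return other_func(x + 1);
    }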

    It currently only supports x86_64. I tried to make the code generic so
    that support for other architectures can hopefully be plugged in
    relatively easily.

    On my Lenovo laptop with an i7-4810MQ 4-core/8-thread CPU, building
    the kernel with objtool checking every .o file adds about three
    seconds of total build time. It hasn't been optimized yet, so there
    is probably still room to improve build performance.

    Signed-off-by: Josh Poimboeuf
    Cc: Andrew Morton
    Cc: Andy Lutomirski
    Cc: Arnaldo Carvalho de Melo
    Cc: Bernd Petrovitsch
    Cc: Borislav Petkov
    Cc: Chris J Arges
    Cc: Jiri Slaby
    Cc: Linus Torvalds
    Cc: Michal Marek
    Cc: Namhyung Kim
    Cc: Pedro Alves
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: live-patching@vger.kernel.org
    Link: http://lkml.kernel.org/r/f3efb173de43bd067b060de73f856567c0fa1174.1456719558.git.jpoimboe@redhat.com
    Signed-off-by: Ingo Molnar

    Josh Poimboeuf