10 Sep, 2018

1 commit

  • commit 6e9df95b76cad18f7b217bdad7bb8a26d63b8c47 upstream.

    A livepatch module author can pass a module name or old function name
    that exceeds the defined character limit. With an obj->name longer than
    MODULE_NAME_LEN, the livepatch module gets loaded but waits forever for
    the module specified by obj->name to be loaded. It also populates a /sys
    directory with the untruncated object name.

    If funcs->old_name is longer than KSYM_NAME_LEN, it cannot match any of
    the symbol table entries; the code still loops through the whole symbol
    table, comparing entries against a nonexistent function, which can be
    avoided.

    The same issues apply to misspelled or otherwise incorrect names. At
    least gatekeep modules whose strings exceed the limits by checking the
    lengths during livepatch module registration.
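
    A condensed sketch of the checks described, roughly as they land in the
    livepatch init paths (placement abbreviated):

        /* in klp_init_object(), sketched: */
        if (klp_is_module(obj) && strlen(obj->name) >= MODULE_NAME_LEN)
                return -EINVAL;

        /* in klp_init_func(), sketched: */
        if (strlen(func->old_name) >= KSYM_NAME_LEN)
                return -EINVAL;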

    Cc: stable@vger.kernel.org
    Signed-off-by: Kamalesh Babulal
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina
    Signed-off-by: Greg Kroah-Hartman

    Kamalesh Babulal
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.
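
    For a kernel C source file, the tag is a single comment on the first
    line of the file:

        // SPDX-License-Identifier: GPL-2.0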

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be applied
    to a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files, created by Philippe Ombredanne. Philippe prepared the
    base worksheet and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis, with 60,537 files
    assessed. Kate Stewart did a file-by-file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to each file. She confirmed any determination that was
    not immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source.
    - File already had some variant of a license header in it (even if <5
    lines).
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

11 Oct, 2017

1 commit

  • When an incoming module is considered for livepatching by
    klp_module_coming(), it iterates over multiple patches and multiple
    kernel objects in this order:

        list_for_each_entry(patch, &klp_patches, list) {
                klp_for_each_object(patch, obj) {

    which means that if one of the kernel objects fails to patch,
    klp_module_coming()'s error path needs to unpatch and clean up any kernel
    objects that were already patched by a previous patch.
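
    A condensed sketch of such an error path, with the helper name and
    details assumed from the upstream fix:

        static void klp_cleanup_module_patches_limited(struct module *mod,
                                                       struct klp_patch *limit)
        {
                struct klp_patch *patch;
                struct klp_object *obj;

                list_for_each_entry(patch, &klp_patches, list) {
                        if (patch == limit)     /* stop at the failing patch */
                                break;

                        klp_for_each_object(patch, obj) {
                                if (!klp_is_module(obj) ||
                                    strcmp(obj->name, mod->name))
                                        continue;

                                /* undo what a previous patch already applied */
                                klp_unpatch_object(obj);
                                break;
                        }
                }
        }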

    Reported-by: Miroslav Benes
    Suggested-by: Petr Mladek
    Signed-off-by: Joe Lawrence
    Acked-by: Josh Poimboeuf
    Reviewed-by: Petr Mladek
    Signed-off-by: Jiri Kosina

    Joe Lawrence
     

20 Jun, 2017

1 commit

    rcu_read_(un)lock(), list_*_rcu(), and synchronize_rcu() are used for
    safe access to and manipulation of the list of patches that modify the
    same function. In particular, this protects the variable func_stack,
    which is accessible from the ftrace handler via struct ftrace_ops and
    klp_ops.

    Of course, it also synchronizes some state of the patch on top of the
    stack, e.g. func->transition in klp_ftrace_handler.

    At the same time, this mechanism guards also the manipulation of
    task->patch_state. It is modified according to the state of the transition and
    the state of the process.

    Now, all this works well as long as RCU works well. Sadly, livepatching
    can hit corner cases where this is not true. For example, RCU is not
    watching when rcu_read_lock() is taken in idle threads, because they
    might sleep and prevent the grace period from being reached for too long.

    There are ways to make RCU watch even in idle threads, see
    rcu_irq_enter(). But there is a small window inside the RCU
    infrastructure where even this does not work.

    This problematic window can be detected either before calling
    rcu_irq_enter(), via rcu_irq_enter_disabled(), or later, via
    rcu_is_watching(). Sadly, there is no safe way to handle it. Once we
    detect that RCU was not watching, we might see an inconsistent state of
    the function stack and the related variables in klp_ftrace_handler(). We
    could then make a wrong decision, use an incompatible implementation of
    the function, and break the consistency of the system. We could warn,
    but we could not avoid the damage.

    Fortunately, ftrace has similar problems and they seem to be solved well
    there. It uses a heavyweight implementation of some RCU operations. In
    particular, it replaces:

    + rcu_read_lock() with preempt_disable_notrace()
    + rcu_read_unlock() with preempt_enable_notrace()
    + synchronize_rcu() with schedule_on_each_cpu(sync_work)

    My understanding is that this is an RCU implementation from the stone
    age. It meets the core RCU requirements but is rather inefficient. In
    particular, it does not allow batching or speeding up the synchronize
    calls.

    On the other hand, it is very simple. It allows safe tracing and/or
    livepatching of even the RCU core infrastructure. And the inefficiency
    is not a big issue, because using ftrace or livepatches on production
    systems is a rare operation. Safety is much more important than a
    negligible extra load.

    Note that the alternative implementation still follows the RCU
    principles. Therefore, we can, and in fact must, use the list_*_rcu()
    variants when manipulating the func_stack. These functions allow
    accessing the pointers in the right order and with the right barriers,
    but they do not rely on any other state that would be set only by
    rcu_read_lock().
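
    As an illustration, a heavily condensed sketch of the handler with the
    substitutions applied (transition handling omitted):

        static void notrace klp_ftrace_handler(unsigned long ip,
                                               unsigned long parent_ip,
                                               struct ftrace_ops *fops,
                                               struct pt_regs *regs)
        {
                struct klp_ops *ops = container_of(fops, struct klp_ops, fops);
                struct klp_func *func;

                preempt_disable_notrace();      /* instead of rcu_read_lock() */

                func = list_first_or_null_rcu(&ops->func_stack,
                                              struct klp_func, stack_node);
                if (func)
                        klp_arch_set_pc(regs, (unsigned long)func->new_func);

                preempt_enable_notrace();       /* instead of rcu_read_unlock() */
        }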

    Also note that there are actually two problems solved in ftrace:

    First, it cares about the consistency of RCU read sections. This is
    solved in the way described above and used in this patch.

    Second, ftrace needs to make sure that nobody is inside the dynamic
    trampoline when it is being freed. For this, it also calls
    synchronize_rcu_tasks() in a preemptive kernel in ftrace_shutdown().

    Livepatch has a similar problem, but it is solved by ftrace for free.
    klp_ftrace_handler() is well behaved and never sleeps. In addition, it
    is registered with FTRACE_OPS_FL_DYNAMIC, which causes
    unregister_ftrace_function() to call:

    * schedule_on_each_cpu(ftrace_sync) - always
    * synchronize_rcu_tasks() - in a preemptive kernel

    The effect is that nobody is inside the dynamic trampoline or the ftrace
    handler after unregister_ftrace_function() returns.

    [jkosina@suse.cz: reformat changelog, fix comment]
    Signed-off-by: Petr Mladek
    Acked-by: Josh Poimboeuf
    Acked-by: Miroslav Benes
    Signed-off-by: Jiri Kosina

    Petr Mladek
     

27 May, 2017

1 commit

    If TRIM_UNUSED_KSYMS is enabled, all unneeded exported symbols are made
    unexported. A two-pass build of the kernel is done to find out which
    symbols are needed based on the configuration. This effectively
    complicates things for out-of-tree modules.

    Livepatch exports functions to (un)register and enable/disable a live
    patch. The only in-tree module which uses these functions is a sample in
    samples/livepatch/. If the sample is disabled, the functions are
    trimmed and out-of-tree live patches cannot be built.

    Note that live patches are intended to be built out-of-tree.

    Suggested-by: Michal Marek
    Acked-by: Josh Poimboeuf
    Acked-by: Jessica Yu
    Signed-off-by: Miroslav Benes
    Signed-off-by: Jiri Kosina

    Miroslav Benes
     

02 May, 2017

1 commit


17 Apr, 2017

1 commit


12 Apr, 2017

1 commit

    klp_init_transition() does not set func->transition for immediate
    patches, so klp_ftrace_handler() could use the new code immediately. As
    a result, it is not safe to put the livepatch module in
    klp_cancel_transition().

    This patch reverts most of the last-minute changes to
    klp_cancel_transition(). It keeps the warning about misuse because it
    still makes sense.

    Fixes: 3ec24776bfd0 ("livepatch: allow removal of a disabled patch")
    Signed-off-by: Petr Mladek
    Acked-by: Miroslav Benes
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Petr Mladek
     

30 Mar, 2017

1 commit

    It was reported that the time to insmod a klp.ko for one of our
    out-of-tree modules is too long.

    ~ time sudo insmod klp.ko
    real 0m23.799s
    user 0m0.036s
    sys 0m21.256s

    Then we found the reason: our out-of-tree module uses a lot of static
    local variables, so klp.ko has a lot of relocation records that reference
    the module. For each such entry klp_find_object_symbol() is called to
    resolve it, but this function used the kallsyms_on_each_symbol()
    interface even for module symbols, wasting a lot of time walking through
    the vmlinux kallsyms table many times.

    This patch changes it to use module_kallsyms_on_each_symbol() for module
    symbols. After applying this patch, the sys time is reduced dramatically.

    ~ time sudo insmod klp.ko
    real 0m1.007s
    user 0m0.032s
    sys 0m0.924s
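
    A sketch of the resulting lookup, roughly as it appears in
    klp_find_object_symbol():

        mutex_lock(&module_mutex);
        if (objname)    /* a module symbol: walk only that module's table */
                module_kallsyms_on_each_symbol(klp_find_callback, &args);
        else            /* a vmlinux symbol: walk the core kallsyms table */
                kallsyms_on_each_symbol(klp_find_callback, &args);
        mutex_unlock(&module_mutex);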

    Signed-off-by: Zhou Chengming
    Acked-by: Josh Poimboeuf
    Acked-by: Jessica Yu
    Acked-by: Miroslav Benes
    Signed-off-by: Jiri Kosina

    Zhou Chengming
     

08 Mar, 2017

9 commits

  • klp_mutex is shared between core.c and transition.c, and as such would
    rather be properly located in a header so that we don't have to play
    'extern' games from .c sources.

    This also silences a sparse warning (wrongly) suggesting that klp_mutex
    should be defined static.

    Acked-by: Miroslav Benes
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Jiri Kosina
     
    Currently we do not allow a patch module to unload, since there is no
    method to determine whether a task is still running in the patched code.

    The consistency model gives us a way: when the unpatching finishes, we
    know that all tasks have been marked as safe to call the original
    function. Thus every new call to the function calls the original code,
    and at the same time no task can be somewhere in the patched code,
    because it had to leave that code to be marked as safe.

    We can safely let the patch module go after that.

    Completion is used for synchronization between module removal and sysfs
    infrastructure in a similar way to commit 942e443127e9 ("module: Fix
    mod->mkobj.kobj potentially freed too early").
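
    A condensed sketch of the completion-based synchronization (assuming a
    'finished' completion embedded in struct klp_patch):

        /* sysfs release callback: the kobject is really gone now */
        static void klp_kobj_release_patch(struct kobject *kobj)
        {
                struct klp_patch *patch;

                patch = container_of(kobj, struct klp_patch, kobj);
                complete(&patch->finished);
        }

        /* in the unregister path, after kobject_put(&patch->kobj): */
        wait_for_completion(&patch->finished);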

    Note that we still do not allow removal for the immediate model, that
    is, with no consistency model. The module refcount may increase in this
    case if somebody disables and enables the patch several times. This
    should not cause any harm.

    With this change, the call to try_module_get() is moved from
    klp_register_patch() to __klp_enable_patch() to make the module
    reference counting symmetric (module_put() is in the patch disable path)
    and to allow taking a new reference to a disabled module when it is
    being enabled.

    Finally, we need to be very careful about possible races between
    klp_unregister_patch(), kobject_put() functions and operations
    on the related sysfs files.

    kobject_put(&patch->kobj) must be called without klp_mutex held.
    Otherwise, it might be blocked by enabled_store(), which needs the mutex
    as well. In addition, enabled_store() must check whether the patch was
    unregistered in the meantime.

    There is no need to do the same for the other kobject_put() callsites at
    the moment. Their sysfs operations neither take the lock nor access any
    data that might be freed in the meantime.

    There was an attempt to use kobjects the right way and prevent these
    races by design. But it made the patch definition more complicated
    and opened another can of worms. See
    https://lkml.kernel.org/r/1464018848-4303-1-git-send-email-pmladek@suse.com

    [Thanks to Petr Mladek for improving the commit message.]

    Signed-off-by: Miroslav Benes
    Signed-off-by: Josh Poimboeuf
    Reviewed-by: Petr Mladek
    Acked-by: Miroslav Benes
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     
  • Change livepatch to use a basic per-task consistency model. This is the
    foundation which will eventually enable us to patch those ~10% of
    security patches which change function or data semantics. This is the
    biggest remaining piece needed to make livepatch more generally useful.

    This code stems from the design proposal made by Vojtech [1] in November
    2014. It's a hybrid of kGraft and kpatch: it uses kGraft's per-task
    consistency and syscall barrier switching combined with kpatch's stack
    trace switching. There are also a number of fallback options which make
    it quite flexible.

    Patches are applied on a per-task basis, when the task is deemed safe to
    switch over. When a patch is enabled, livepatch enters into a
    transition state where tasks are converging to the patched state.
    Usually this transition state can complete in a few seconds. The same
    sequence occurs when a patch is disabled, except the tasks converge from
    the patched state to the unpatched state.

    An interrupt handler inherits the patched state of the task it
    interrupts. The same is true for forked tasks: the child inherits the
    patched state of the parent.

    Livepatch uses several complementary approaches to determine when it's
    safe to patch tasks:

    1. The first and most effective approach is stack checking of sleeping
       tasks. If no affected functions are on the stack of a given task,
       the task is patched. In most cases this will patch most or all of
       the tasks on the first try. Otherwise it'll keep trying
       periodically. This option is only available if the architecture has
       reliable stacks (HAVE_RELIABLE_STACKTRACE).

    2. The second approach, if needed, is kernel exit switching. A
       task is switched when it returns to user space from a system call, a
       user space IRQ, or a signal. It's useful in the following cases:

       a) Patching I/O-bound user tasks which are sleeping on an affected
          function. In this case you have to send SIGSTOP and SIGCONT to
          force it to exit the kernel and be patched.
       b) Patching CPU-bound user tasks. If the task is highly CPU-bound
          then it will get patched the next time it gets interrupted by an
          IRQ.
       c) In the future it could be useful for applying patches for
          architectures which don't yet have HAVE_RELIABLE_STACKTRACE. In
          this case you would have to signal most of the tasks on the
          system. However this isn't supported yet because there's
          currently no way to patch kthreads without
          HAVE_RELIABLE_STACKTRACE.

    3. For idle "swapper" tasks, since they don't ever exit the kernel, they
       instead have a klp_update_patch_state() call in the idle loop which
       allows them to be patched before the CPU enters the idle state.

    (Note there's not yet such an approach for kthreads.)
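
    As an illustration of approach 3, a condensed sketch of the per-task
    switch (details assumed from the description above):

        void klp_update_patch_state(struct task_struct *task)
        {
                rcu_read_lock();

                /*
                 * TIF_PATCH_PENDING is set while a transition is in
                 * progress; clearing it moves the task to the target state.
                 */
                if (test_and_clear_tsk_thread_flag(task, TIF_PATCH_PENDING))
                        task->patch_state = READ_ONCE(klp_target_state);

                rcu_read_unlock();
        }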

    All the above approaches may be skipped by setting the 'immediate' flag
    in the 'klp_patch' struct, which will disable per-task consistency and
    patch all tasks immediately. This can be useful if the patch doesn't
    change any function or data semantics. Note that, even with this flag
    set, it's possible that some tasks may still be running with an old
    version of the function, until that function returns.

    There's also an 'immediate' flag in the 'klp_func' struct which allows
    you to specify that certain functions in the patch can be applied
    without per-task consistency. This might be useful if you want to patch
    a common function like schedule(), and the function change doesn't need
    consistency but the rest of the patch does.

    For architectures which don't have HAVE_RELIABLE_STACKTRACE, the user
    must set patch->immediate which causes all tasks to be patched
    immediately. This option should be used with care, only when the patch
    doesn't change any function or data semantics.

    In the future, architectures which don't have HAVE_RELIABLE_STACKTRACE
    may be allowed to use per-task consistency if we can come up with
    another way to patch kthreads.

    The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
    is in transition. Only a single patch (the topmost patch on the stack)
    can be in transition at a given time. A patch can remain in transition
    indefinitely, if any of the tasks are stuck in the initial patch state.

    A transition can be reversed and effectively canceled by writing the
    opposite value to the /sys/kernel/livepatch/<patch>/enabled file while
    the transition is in progress. Then all the tasks will attempt to
    converge back to the original patch state.

    [1] https://lkml.kernel.org/r/20141107140458.GA21774@suse.cz

    Signed-off-by: Josh Poimboeuf
    Acked-by: Miroslav Benes
    Acked-by: Ingo Molnar # for the scheduler changes
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     
  • For the consistency model we'll need to know the sizes of the old and
    new functions to determine if they're on the stacks of any tasks.

    Signed-off-by: Josh Poimboeuf
    Acked-by: Miroslav Benes
    Reviewed-by: Petr Mladek
    Reviewed-by: Kamalesh Babulal
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     
  • The sysfs enabled value is a boolean, so kstrtobool() is a better fit
    for parsing the input string since it does the range checking for us.
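
    A sketch of the store callback after the change (surrounding logic
    abbreviated):

        static ssize_t enabled_store(struct kobject *kobj,
                                     struct kobj_attribute *attr,
                                     const char *buf, size_t count)
        {
                bool enabled;
                int ret;

                /* kstrtobool() rejects anything but boolean spellings */
                ret = kstrtobool(buf, &enabled);
                if (ret)
                        return ret;

                /* ... enable or disable the patch accordingly ... */
                return count;
        }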

    Suggested-by: Petr Mladek
    Signed-off-by: Josh Poimboeuf
    Acked-by: Miroslav Benes
    Reviewed-by: Petr Mladek
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     
  • Move functions related to the actual patching of functions and objects
    into a new patch.c file.

    Signed-off-by: Josh Poimboeuf
    Acked-by: Miroslav Benes
    Reviewed-by: Petr Mladek
    Reviewed-by: Kamalesh Babulal
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     
  • klp_patch_object()'s callers already ensure that the object is loaded,
    so its call to klp_is_object_loaded() is unnecessary.

    This will also make it possible to move the patching code into a
    separate file.

    Signed-off-by: Josh Poimboeuf
    Acked-by: Miroslav Benes
    Reviewed-by: Petr Mladek
    Reviewed-by: Kamalesh Babulal
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     
  • Once we have a consistency model, patches and their objects will be
    enabled and disabled at different times. For example, when a patch is
    disabled, its loaded objects' funcs can remain registered with ftrace
    indefinitely until the unpatching operation is complete and they're no
    longer in use.

    It's less confusing if we give them different names: patches can be
    enabled or disabled; objects (and their funcs) can be patched or
    unpatched:

    - Enabled means that a patch is logically enabled (but not necessarily
    fully applied).

    - Patched means that an object's funcs are registered with ftrace and
    added to the klp_ops func stack.

    Also, since these states are binary, represent them with booleans
    instead of ints.
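
    Abbreviated, the structures then carry (other fields omitted):

        struct klp_patch {
                /* ... */
                bool enabled;   /* logically enabled; may not be fully applied */
        };

        struct klp_object {
                /* ... */
                bool patched;   /* funcs registered with ftrace, on klp_ops stack */
        };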

    Signed-off-by: Josh Poimboeuf
    Acked-by: Miroslav Benes
    Reviewed-by: Petr Mladek
    Reviewed-by: Kamalesh Babulal
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     
  • Create temporary stubs for klp_update_patch_state() so we can add
    TIF_PATCH_PENDING to different architectures in separate patches without
    breaking build bisectability.
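
    The stub pattern is the usual config-guarded one, roughly:

        #ifdef CONFIG_LIVEPATCH
        void klp_update_patch_state(struct task_struct *task);
        #else
        static inline void klp_update_patch_state(struct task_struct *task) {}
        #endif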

    Signed-off-by: Josh Poimboeuf
    Reviewed-by: Petr Mladek
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     

26 Aug, 2016

1 commit

  • There's no reliable way to determine which module tainted the kernel
    with TAINT_LIVEPATCH. For example, /sys/module//taint
    doesn't report it. Neither does the "mod -t" command in the crash tool.

    Make it crystal clear who the guilty party is by associating
    TAINT_LIVEPATCH with any module which sets the "livepatch" modinfo
    attribute. The flag will still get set in the kernel like before, but
    now it also sets the same flag in mod->taint.

    Note that now the taint flag gets set when the module is loaded rather
    than when it's enabled.

    I also renamed find_livepatch_modinfo() to check_modinfo_livepatch() to
    better reflect its purpose: it's basically a livepatch-specific
    sub-function of check_modinfo().
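
    A condensed sketch of the renamed helper (config guards and the
    diagnostic printout omitted):

        static int check_modinfo_livepatch(struct module *mod,
                                           struct load_info *info)
        {
                if (get_modinfo(info, "livepatch")) {
                        mod->klp = true;
                        add_taint_module(mod, TAINT_LIVEPATCH,
                                         LOCKDEP_STILL_OK);
                }

                return 0;
        }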

    Reported-by: Chunyu Hu
    Reviewed-by: Petr Mladek
    Acked-by: Miroslav Benes
    Acked-by: Jessica Yu
    Acked-by: Rusty Russell
    Signed-off-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     

19 Aug, 2016

1 commit


04 Aug, 2016

1 commit

  • Add ro_after_init support for modules by adding a new page-aligned section
    in the module layout (after rodata) for ro_after_init data and enabling RO
    protection for that section after module init runs.
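
    From a module author's point of view, the new section is selected with
    the __ro_after_init attribute; a minimal usage sketch:

        /* writable while module init runs, read-only afterwards */
        static unsigned long param_table[16] __ro_after_init;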

    Signed-off-by: Jessica Yu
    Acked-by: Kees Cook
    Signed-off-by: Rusty Russell

    Jessica Yu
     

17 May, 2016

1 commit


30 Apr, 2016

1 commit

    The current object-walking helper checks the presence of obj->funcs to
    determine the end of the objs array in the klp_object structure. This is
    somewhat fragile, because one can easily forget the funcs definition
    when creating a livepatch. In such a case the livepatch module loads
    successfully, but all objects after the incorrect one are omitted. This
    is very confusing. Let's make the helper more robust and also check the
    other externally defined member, name. Thus the helper correctly stops
    on an empty item of the array. We need a check for obj->funcs in
    klp_init_object() to make this work.

    The same applies to the func-walking helper.

    As a benefit, we'll check for the new_func member definition during
    livepatch initialization. There is no such check anywhere in the code
    now.
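
    The resulting helpers, roughly as they read after the change:

        /* stop only on a completely empty array slot */
        #define klp_for_each_object(patch, obj) \
                for (obj = patch->objs; obj->funcs || obj->name; obj++)

        #define klp_for_each_func(obj, func) \
                for (func = obj->funcs; \
                     func->old_name || func->new_func || func->old_sympos; \
                     func++)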

    [jkosina@suse.cz: fix shortlog]
    Signed-off-by: Miroslav Benes
    Acked-by: Josh Poimboeuf
    Acked-by: Jessica Yu
    Signed-off-by: Jiri Kosina

    Miroslav Benes
     

15 Apr, 2016

1 commit


14 Apr, 2016

1 commit

  • When livepatch tries to patch a function it takes the function address
    and asks ftrace to install the livepatch handler at that location.
    ftrace will look for an mcount call site at that exact address.

    On powerpc the mcount location is not the first instruction of the
    function, and in fact it is not at a constant offset from the start of
    the function. To accommodate this, add a hook which arch code can
    override to customise the behaviour.
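
    The generic fallback might look roughly like this (the override pattern
    is an assumption of this sketch):

        /* by default, the ftrace site is the function address itself */
        #ifndef klp_get_ftrace_location
        static unsigned long klp_get_ftrace_location(unsigned long faddr)
        {
                return faddr;
        }
        #endif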

    Signed-off-by: Torsten Duwe
    Signed-off-by: Balbir Singh
    Signed-off-by: Petr Mladek
    Signed-off-by: Michael Ellerman

    Michael Ellerman
     

08 Apr, 2016

1 commit

    Commit 425595a7fc20 ("livepatch: reuse module loader code to write
    relocations") introduced a possibility of dereferencing pointers
    supplied by the consumer of the livepatch API (patch and patch->mod)
    before sanity (NULL) checking them.

    Spotted by smatch tool.

    Reported-by: Dan Carpenter
    Acked-by: Josh Poimboeuf
    Acked-by: Jessica Yu
    Signed-off-by: Jiri Kosina

    Jiri Kosina
     

01 Apr, 2016

1 commit

  • Reuse module loader code to write relocations, thereby eliminating the need
    for architecture specific relocation code in livepatch. Specifically, reuse
    the apply_relocate_add() function in the module loader to write relocations
    instead of duplicating functionality in livepatch's arch-dependent
    klp_write_module_reloc() function.

    In order to accomplish this, livepatch modules manage their own relocation
    sections (marked with the SHF_RELA_LIVEPATCH section flag) and
    livepatch-specific symbols (marked with SHN_LIVEPATCH symbol section
    index). To apply livepatch relocation sections, livepatch symbols
    referenced by relocs are resolved and then apply_relocate_add() is called
    to apply those relocations.

    In addition, remove the x86 livepatch relocation code and the s390
    klp_write_module_reloc() function stub. They are no longer needed since
    the relocation work has been offloaded to the module loader.

    Lastly, mark the module as a livepatch module so that the module loader
    can appropriately identify and initialize it.
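
    A condensed sketch of the rework in klp_write_object_relocations()
    (field names abbreviated; error handling trimmed):

        for (i = 1; i < pmod->klp_info->hdr.e_shnum; i++) {
                Elf_Shdr *sec = pmod->klp_info->sechdrs + i;

                if (!(sec->sh_flags & SHF_RELA_LIVEPATCH))
                        continue;

                /* resolve SHN_LIVEPATCH symbols referenced by this section */
                ret = klp_resolve_symbols(sec, pmod);
                if (ret)
                        break;

                /* then let the module loader write the relocations */
                ret = apply_relocate_add(pmod->klp_info->sechdrs,
                                         pmod->core_kallsyms.strtab,
                                         pmod->klp_info->symndx, i, pmod);
        }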

    Signed-off-by: Jessica Yu
    Reviewed-by: Miroslav Benes
    Acked-by: Josh Poimboeuf
    Acked-by: Heiko Carstens # for s390 changes
    Signed-off-by: Jiri Kosina

    Jessica Yu
     

17 Mar, 2016

1 commit

  • Remove the livepatch module notifier in favor of directly enabling and
    disabling patches to modules in the module loader. Hard-coding the
    function calls ensures that ftrace_module_enable() is run before
    klp_module_coming() during module load, and that klp_module_going() is
    run before ftrace_release_mod() during module unload. This way, ftrace
    and livepatch code is run in the correct order during the module
    load/unload sequence without dependence on the module notifier call chain.
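
    The resulting ordering, sketched as it is hard-coded in the module
    loader paths:

        /* module load path, sketched: */
        ftrace_module_enable(mod);      /* must run before ...           */
        err = klp_module_coming(mod);   /* ... livepatch patches it in   */

        /* module unload path, sketched: */
        klp_module_going(mod);          /* must run before ...           */
        ftrace_release_mod(mod);        /* ... ftrace drops its records  */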

    Signed-off-by: Jessica Yu
    Reviewed-by: Petr Mladek
    Acked-by: Josh Poimboeuf
    Acked-by: Rusty Russell
    Signed-off-by: Jiri Kosina

    Jessica Yu
     

10 Mar, 2016

1 commit

    klp_find_callback() stops the search when sympos is not defined and a
    second symbol of the same name is found. This means that the current
    error message about the unresolvable ambiguity always prints "(2 matches)".

    Let's remove this information. The total number of occurrences is not
    much help; the author of the patch still has to put non-trivial effort
    into finding the right position in the object file.

    [jkosina@suse.cz: fixed grammar as suggested by Josh]
    Signed-off-by: Petr Mladek
    Acked-by: Josh Poimboeuf
    Acked-by: Chris J Arges
    Signed-off-by: Jiri Kosina

    Petr Mladek
     

05 Dec, 2015

1 commit

  • Calling set_memory_rw() and set_memory_ro() for every iteration of the
    loop in klp_write_object_relocations() is messy, inefficient, and
    error-prone.

    Change all the read-only pages to read-write before the loop and convert
    them back to read-only again afterwards.
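
    A sketch of the resulting shape, assuming module-wide RO toggling
    helpers along the lines of module_disable_ro()/module_enable_ro():

        module_disable_ro(pmod);        /* flip pages writable once */

        for (i = 0; i < relsec_count; i++) {
                /* ... write this section's relocations ... */
        }

        module_enable_ro(pmod);         /* restore protections once */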

    Suggested-by: Miroslav Benes
    Signed-off-by: Josh Poimboeuf
    Reviewed-by: Petr Mladek
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     

04 Dec, 2015

3 commits

    The following directory structure will allow for cases when the same
    function name exists in a single object:
    /sys/kernel/livepatch/<patch>/<object>/<function,sympos>

    The sympos number corresponds to the nth occurrence of the symbol name in
    kallsyms for the patched object.

    An example of patching multiple symbols can be found here:
    https://github.com/dynup/kpatch/issues/493

    Signed-off-by: Chris J Arges
    Reviewed-by: Petr Mladek
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Chris J Arges
     
    In cases of duplicate symbols, sympos will be used to disambiguate
    instead of val. By default sympos will be 0, and patching will only
    succeed if the symbol is unique. Specifying a positive value ensures
    that the given occurrence of the symbol in kallsyms for the patched
    object is used for patching, if it is valid. For external relocations
    sympos is not supported.

    Remove klp_verify_callback, klp_verify_args and klp_verify_vmlinux_symbol
    as they are no longer used.

    From the klp_reloc structure remove val, as it can be refactored as a
    local variable in klp_write_object_relocations.
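
    A condensed sketch of how the callback counts occurrences (object
    matching elided):

        static int klp_find_callback(void *data, const char *name,
                                     struct module *mod, unsigned long addr)
        {
                struct klp_find_arg *args = data;

                if (strcmp(args->name, name))
                        return 0;

                args->addr = addr;
                args->count++;

                /*
                 * Stop at the requested position, or as soon as a second
                 * match proves that an unqualified symbol is ambiguous.
                 */
                if ((args->pos && (args->count == args->pos)) ||
                    (!args->pos && (args->count > 1)))
                        return 1;

                return 0;
        }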

    Signed-off-by: Chris J Arges
    Reviewed-by: Petr Mladek
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Chris J Arges
     
    Currently, patching objects with duplicate symbol names fails because
    the creation of the sysfs function directory collides with the previous
    attempt. Appending old_addr to the function name is problematic, as it
    reveals the address of the function being patched to a normal user.
    Using the symbol's occurrence in kallsyms to postfix the function name
    in the sysfs directory provides consistent unique names while ensuring
    that the address is not exposed to a normal user.

    In addition, using the symbol position as the user's method to
    disambiguate symbols instead of addr allows disambiguating symbols in
    modules as well, for both function addresses and relocs. This also
    simplifies much of the code. Special handling for kASLR is no longer
    needed and can be removed. The klp_find_verify_func_addr function can be
    replaced by klp_find_object_symbol, and klp_verify_vmlinux_symbol and
    its callback can be removed completely.

    In cases of duplicate symbols, old_sympos will be used to disambiguate
    instead of old_addr. By default old_sympos will be 0, and patching will
    only succeed if the symbol is unique. Specifying a positive value
    ensures that the given occurrence of the symbol in kallsyms for the
    patched object is used for patching, if it is valid.

    In addition, make old_addr an internal structure field not to be
    specified by the user. Finally, remove klp_find_verify_func_addr, as it
    can be replaced by klp_find_object_symbol directly.

    Support for symbol position disambiguation for relocations is added in the
    next patch in this series.

    Signed-off-by: Chris J Arges
    Reviewed-by: Petr Mladek
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Chris J Arges
     

12 Nov, 2015

1 commit

    With kASLR enabled, old_addr provided by the patch module is shifted
    accordingly so that the symbol lookups work. To have module relocations
    handled properly as well, the same transformation needs to be performed
    on the relocation address information.

    [jkosina@suse.cz: extended / reworded changelog a bit]
    Reported-by: Cyril B.
    Signed-off-by: Zhou Chengming
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Zhou Chengming
     

15 Jul, 2015

1 commit

    If func->state or func->old_addr do not have the expected values, we
    should bail out immediately from klp_disable_func().

    This can't really happen with the current codebase, but fix it anyway
    for the sake of robustness.

    [jkosina@suse.com: reworded the changelog a bit]
    Signed-off-by: Minfei Huang
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Minfei Huang
     

22 Jun, 2015

1 commit


03 Jun, 2015

1 commit

    The list of loaded modules is walked in module_kallsyms_on_each_symbol
    (called by kallsyms_on_each_symbol). The module_mutex lock should be
    acquired to prevent potential corruption of the list.

    This was uncovered by the new lockdep asserts in module code introduced
    by commit 0be964be0d45 ("module: Sanitize RCU usage and locking") in
    recent linux-next trees.
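
    The fix amounts to wrapping the kallsyms walks, sketched:

        /* in klp_find_object_symbol(), sketched: */
        mutex_lock(&module_mutex);
        kallsyms_on_each_symbol(klp_find_callback, &args);
        mutex_unlock(&module_mutex);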

    Signed-off-by: Miroslav Benes
    Acked-by: Josh Poimboeuf
    Cc: stable@vger.kernel.org
    Signed-off-by: Jiri Kosina

    Miroslav Benes
     

25 May, 2015

1 commit


20 May, 2015

2 commits

    klp_for_each_object and klp_for_each_func are now used all over the
    code. One no longer needs to think about the proper condition to check
    in the for loop.

    Signed-off-by: Jiri Slaby
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Jiri Slaby
     
    Make the kobj variable (of type struct kobject) statically allocated in
    the klp_object structure. This will allow us to move through the
    func-object-patch hierarchy via kobject links.

    The only reason to have it dynamic was to avoid an empty release
    callback in the code. However, we now have empty callbacks for function
    and patch in the code anyway, so that reason is no longer valid and the
    advantage of static allocation is clear.
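
    Abbreviated, the structure then embeds the kobject directly:

        struct klp_object {
                /* ... */
                struct kobject kobj;    /* embedded, statically allocated */
        };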

    Signed-off-by: Miroslav Benes
    Signed-off-by: Jiri Slaby
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Miroslav Benes