22 Jul, 2015

1 commit

  • commit 9a1bd63cdae4b623494c4ebaf723a91c35ec49fb upstream.

    The list of loaded modules is walked through in
    module_kallsyms_on_each_symbol (called by kallsyms_on_each_symbol). The
    module_mutex lock should be acquired to prevent potential corruption of
    the list.

    This was uncovered with new lockdep asserts in module code introduced by
    the commit 0be964be0d45 ("module: Sanitize RCU usage and locking") in
    recent linux-next trees.

    Signed-off-by: Miroslav Benes
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina
    Signed-off-by: Greg Kroah-Hartman

    Miroslav Benes
     

17 Mar, 2015

1 commit

  • There is a notifier that handles live patches for coming and going modules.
    It takes the klp_mutex lock to avoid races with coming and going patches,
    but it does not hold the lock the whole time. Therefore the following
    races are possible:

    1. The notifier is called sometime in MODULE_STATE_COMING. The module
    is visible via find_module() during this entire state, which means that
    a new patch can be registered and enabled even before the notifier is
    called. This can create the wrong order of stacked patches; see below
    for an example.

    2. A new patch could still see the module in the GOING state even after
    the notifier has been called. It will try to initialize the related
    object structures, but the module could disappear at any time, leaving
    the structures in a mess and possibly even causing an invalid memory
    access.

    This patch solves the problem by adding a boolean variable to struct module.
    The value is true after the coming and before the going handler is called.
    New patches need to be applied when the value is true and they need to ignore
    the module when the value is false.
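
    The effect of the new boolean can be sketched in a short userspace
    simulation; the field name klp_alive and the handler names below are
    illustrative stand-ins, not necessarily the exact kernel identifiers:

```python
# Toy simulation of the boolean guard added to struct module.

class Module:
    def __init__(self, name):
        self.name = name
        self.klp_alive = False  # the boolean added to struct module

def notify_coming(mod):
    mod.klp_alive = True        # set once the coming handler has run

def notify_going(mod):
    mod.klp_alive = False       # cleared before the going handler runs

def patch_applies_to(mod):
    # A new patch patches the module only while the flag is true.
    return mod.klp_alive

m = Module("M")
assert not patch_applies_to(m)  # COMING, notifier not called yet: ignore
notify_coming(m)
assert patch_applies_to(m)      # normal operation: apply the patch
notify_going(m)
assert not patch_applies_to(m)  # GOING: the module may disappear, ignore
```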

    Note that we need to know the state of all modules on the system. The
    races involve new patches, so we cannot know in advance which modules
    will get patched.

    Also note that we cannot simply ignore going modules. Code from the
    module can still be called in the GOING state, until mod->exit()
    finishes. If we start supporting patches with semantic changes between
    function calls, we need to apply new patches to any code that is still
    usable. See below for an example.

    Finally note that the patch solves only the situation when a new patch is
    registered. There are no such problems when the patch is being removed.
    It does not matter who disables the patch first, whether the normal
    disable_patch() or the module notifier. There is nothing left to do
    once the patch is disabled.

    Alternative solutions:
    ======================

    + reject new patches when a patched module is coming or going; this is ugly

    + wait with adding the new patch until the module leaves the COMING and
    GOING states; this might be dangerous and complicated; we would need to
    release klp_mutex in the middle of the patch registration to avoid a
    deadlock with the coming and going handlers; also we might need a
    waitqueue for each module, which seems to be even bigger overhead than
    the boolean

    + stop modules from entering COMING and GOING states; wait until modules
    leave these states when they are already there; looks complicated; we would
    need to ignore the module that asked to stop the others to avoid a deadlock;
    also it is unclear what to do when two modules ask to stop the others
    and both are in the COMING state (the situation when two new patches
    are applied)

    + always register/enable new patches and fix up the potential mess (registered
    patches order) in klp_module_init(); this is nasty and prone to regressions
    in the future development

    + add another MODULE_STATE where the kallsyms are visible but the module
    is not used yet; this looks too complex; the module states are checked
    in "many" locations

    Example of patch stacking breakage:
    ===================================

    The notifier could _not_ simply ignore already-initialized module objects.
    For example, let's have three patches (P1, P2, P3) for functions a() and b(),
    where a() is from vmcore and b() is from a module M. Something like:

             a()     b()
        P1   a1()    b1()
        P2   a2()    b2()
        P3   a3()    b3()

    If you load the module M after all patches are registered and enabled,
    the ftrace ops for functions a() and b() have the functions listed in
    this order:

    ops_a->func_stack -> list(a3,a2,a1)
    ops_b->func_stack -> list(b3,b2,b1)

    so the pointer to b3() is the first and will be used.

    Then you might have the following scenario. Start from the state where
    patches P1 and P2 are registered and enabled but the module M is not
    loaded, so the ftrace ops for b() does not exist. Then we get into the
    following race:

    CPU0                                        CPU1

    load_module(M)

      complete_formation()

      mod->state = MODULE_STATE_COMING;
      mutex_unlock(&module_mutex);

                                                klp_register_patch(P3);
                                                klp_enable_patch(P3);

                                                # STATE 1

      klp_module_notify(M)
        klp_module_notify_coming(P1);
        klp_module_notify_coming(P2);
        klp_module_notify_coming(P3);

      # STATE 2

    The ftrace ops for a() and b() then look like this:

    STATE1:

    ops_a->func_stack -> list(a3,a2,a1);
    ops_b->func_stack -> list(b3);

    STATE2:
    ops_a->func_stack -> list(a3,a2,a1);
    ops_b->func_stack -> list(b2,b1,b3);

    Therefore b2() is used for the module but a3() is used for vmcore,
    because each was the last one added to its stack.
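
    The ordering mismatch can be reproduced with a toy model of the
    func_stack lists (plain Python lists standing in for the kernel's
    list_head stacks; the front entry wins):

```python
# Toy model of the race outcome described above.

ops_a = ["a3", "a2", "a1"]   # P1, P2 enabled earlier; P3 enabled last

# STATE 1: P3 was enabled while M was COMING, so ops_b starts with b3 only.
ops_b = ["b3"]

# The notifier then runs the coming handlers for P1, P2, P3 in
# registration order; b3 is already on the stack, so only b1 and b2
# are pushed on top of it.
for func in ["b1", "b2"]:
    ops_b.insert(0, func)

# STATE 2: the two stacks now disagree about which patch wins.
assert ops_a == ["a3", "a2", "a1"]
assert ops_b == ["b2", "b1", "b3"]
assert (ops_a[0], ops_b[0]) == ("a3", "b2")  # P3 wins for a(), P2 for b()
```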

    Example of the race with going modules:
    =======================================

    CPU0                                        CPU1

    delete_module()  #SYSCALL

      try_stop_module()
        mod->state = MODULE_STATE_GOING;

      mutex_unlock(&module_mutex);

                                                klp_register_patch()
                                                klp_enable_patch()

                                                #save place to switch universe

                                                b()    # from module that is going
                                                a()    # from core (patched)

      mod->exit();
    Note that the function b() can be called until we call mod->exit().

    If we do not apply the patch to b() because it is in MODULE_STATE_GOING,
    it will call the patched a() with modified semantics and things might
    go wrong.

    [jpoimboe@redhat.com: use one boolean instead of two]
    Signed-off-by: Petr Mladek
    Acked-by: Josh Poimboeuf
    Acked-by: Rusty Russell
    Signed-off-by: Jiri Kosina

    Petr Mladek
     

03 Mar, 2015

1 commit

  • While one must hold RCU-sched (aka. preempt_disable) for find_symbol(),
    one must equally hold it across any use of the object it returns.

    The moment you release the RCU-sched read lock, the object can be dead
    and gone.
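
    The rule can be illustrated with a toy model; the context manager below
    stands in for the RCU-sched read-side section, and none of this is the
    real kernel API:

```python
# Toy model: the object returned by find_symbol() is only guaranteed
# to be alive while the read-side critical section is open.

from contextlib import contextmanager

class Symbol:
    def __init__(self, value):
        self.value = value
        self.freed = False

symtab = {"foo": Symbol(42)}
readers = 0

@contextmanager
def rcu_read_lock_sched():      # stands in for preempt_disable()
    global readers
    readers += 1
    try:
        yield
    finally:
        readers -= 1
        if readers == 0:        # grace period: module unload can proceed
            for sym in symtab.values():
                sym.freed = True

with rcu_read_lock_sched():
    sym = symtab["foo"]         # find_symbol()
    assert not sym.freed        # safe: still inside the critical section
    assert sym.value == 42

assert sym.freed                # after unlock the object can be dead and gone
```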

    [jkosina@suse.cz: change subject line to be aligned with other patches]
    Cc: Seth Jennings
    Cc: Josh Poimboeuf
    Cc: Masami Hiramatsu
    Cc: Miroslav Benes
    Cc: Petr Mladek
    Cc: Jiri Kosina
    Cc: "Paul E. McKenney"
    Cc: Rusty Russell
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Masami Hiramatsu
    Acked-by: Paul E. McKenney
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Peter Zijlstra
     

19 Feb, 2015

1 commit

  • If registering the function with ftrace has previously succeeded,
    unregistering will almost never fail. Even if it does, it's not a fatal
    error. We can still carry on and disable the klp_func from being used
    by removing it from the klp_ops func stack.
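
    The carry-on behaviour can be sketched like this (toy names, not the
    actual kernel functions):

```python
# Toy sketch: even if the ftrace unregister step fails, the func is
# still removed from the func_stack, so it can no longer be used.

func_stack = ["new_func"]          # klp_ops func stack for one function

def unregister_ftrace_function():
    return -1                      # pretend the (very rare) failure happens

def klp_disable_func():
    ret = unregister_ftrace_function()
    if ret:
        pass                       # warn and carry on: the error is not fatal
    func_stack.remove("new_func")  # disable the klp_func regardless
    return ret

assert klp_disable_func() == -1
assert func_stack == []            # the function is disabled either way
```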

    Signed-off-by: Josh Poimboeuf
    Reviewed-by: Miroslav Benes
    Reviewed-by: Petr Mladek
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     

21 Jan, 2015

3 commits

  • Fix a potentially uninitialized return value in klp_enable_func().

    Signed-off-by: Josh Poimboeuf
    Reviewed-by: Miroslav Benes
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     
  • Add support for patching a function multiple times. If multiple patches
    affect a function, the function in the most recently enabled patch
    "wins". This enables a cumulative patch upgrade path, where each patch
    is a superset of previous patches.

    This requires restructuring the data a little bit. With the current
    design, where each klp_func struct has its own ftrace_ops, we'd have to
    unregister the old ops and then register the new ops, because
    FTRACE_OPS_FL_IPMODIFY prevents us from having two ops registered for
    the same function at the same time. That would leave a regression
    window where the function isn't patched at all (not good for a patch
    upgrade path).

    This patch replaces the per-klp_func ftrace_ops with a global klp_ops
    list, with one ftrace_ops per original function. A single ftrace_ops is
    shared between all klp_funcs which have the same old_addr. This allows
    the switch between function versions to happen instantaneously by
    updating the klp_ops struct's func_stack list. The winner is the
    klp_func at the top of the func_stack (front of the list).
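
    The restructured data can be sketched as a map from old_addr to a shared
    stack; a dict and Python lists stand in here for the klp_ops list and
    its func_stack:

```python
# Sketch of the new layout: one shared ops per original function
# address; the klp_func at the front of func_stack wins.

klp_ops = {}                     # old_addr -> func_stack (front wins)

def klp_enable_func(old_addr, new_func):
    stack = klp_ops.setdefault(old_addr, [])
    stack.insert(0, new_func)    # switch versions by pushing to the front

def klp_disable_func(old_addr, new_func):
    stack = klp_ops[old_addr]
    stack.remove(new_func)
    if not stack:
        del klp_ops[old_addr]    # last patch gone: drop the shared ops

def ftrace_handler(old_addr):
    return klp_ops[old_addr][0]  # the winner; no unregister/register gap

klp_enable_func(0x1000, "a1")    # first patch
klp_enable_func(0x1000, "a2")    # second patch shares the same ops
assert ftrace_handler(0x1000) == "a2"   # most recently enabled wins
klp_disable_func(0x1000, "a2")
assert ftrace_handler(0x1000) == "a1"   # instant fallback, never unpatched
```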

    [ jkosina@suse.cz: turn WARN_ON() into WARN_ON_ONCE() in ftrace handler to
    avoid storm in pathological cases ]

    Signed-off-by: Josh Poimboeuf
    Reviewed-by: Jiri Slaby
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     
  • Only allow the topmost patch on the stack to be enabled or disabled, so
    that patches can't be removed or added in an arbitrary order.

    Suggested-by: Jiri Kosina
    Signed-off-by: Josh Poimboeuf
    Reviewed-by: Jiri Slaby
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     

10 Jan, 2015

1 commit

  • When applying multiple patches to a module, if the module is loaded
    after the patches are loaded, the patches are applied in reverse order:

    $ insmod patch1.ko
    [ 43.172992] livepatch: enabling patch 'patch1'

    $ insmod patch2.ko
    [ 46.571563] livepatch: enabling patch 'patch2'

    $ modprobe nfsd
    [ 52.888922] livepatch: applying patch 'patch2' to loading module 'nfsd'
    [ 52.899847] livepatch: applying patch 'patch1' to loading module 'nfsd'

    Fix the loading order by storing the klp_patches list in queue order.
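
    The fix can be modelled in a few lines, with append standing in for
    list_add_tail() so that iteration visits patches oldest-first (toy
    names, not the kernel API):

```python
# Toy model of the ordering fix: keeping klp_patches in queue order
# makes the module notifier apply patches oldest-first.

klp_patches = []

def klp_register_patch(patch):
    klp_patches.append(patch)    # queue order, i.e. list_add_tail()

def klp_module_notify(mod):
    # Walk the list front to back, as the notifier does.
    return [(patch, mod) for patch in klp_patches]

klp_register_patch("patch1")
klp_register_patch("patch2")
assert klp_module_notify("nfsd") == [("patch1", "nfsd"), ("patch2", "nfsd")]
```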

    Signed-off-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Josh Poimboeuf
     

09 Jan, 2015

1 commit

  • We are currently aborting the build when gcc doesn't support fentry on
    x86_64 (regs->ip modification can't really work reliably with mcount).

    This however breaks allmodconfig for people with older gccs that don't
    support -mfentry.

    Turn the build-time failure into a runtime failure, resulting in the
    whole infrastructure not being initialized if CC_USING_FENTRY is unset.

    Reported-by: Andrew Morton
    Signed-off-by: Jiri Kosina
    Signed-off-by: Andrew Morton
    Acked-by: Josh Poimboeuf

    Jiri Kosina
     

07 Jan, 2015

1 commit

  • The keyword 'boolean' for type definition attributes is considered
    deprecated and should not be used anymore. No functional changes.

    Reference: http://lkml.kernel.org/r/cover.1418003065.git.cj@linux.com
    Reference: http://lkml.kernel.org/r/1419108071-11607-1-git-send-email-cj@linux.com

    Signed-off-by: Christoph Jaeger
    Reviewed-by: Petr Mladek
    Reviewed-by: Jingoo Han
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Christoph Jaeger
     

22 Dec, 2014

2 commits

  • The implementation of execution-flow redirection in the livepatch
    ftrace handler depends on the specific architecture. This patch
    introduces a klp_arch_set_pc() interface (like kgdb_arch_set_pc) to
    change the pt_regs.

    Signed-off-by: Li Bin
    Acked-by: Josh Poimboeuf
    Signed-off-by: Jiri Kosina

    Li Bin
     
  • This commit introduces code for the live patching core. It implements
    an ftrace-based mechanism and kernel interface for doing live patching
    of kernel and kernel module functions.

    It represents the greatest common functionality set between kpatch and
    kgraft and can accept patches built using either method.

    This first version does not implement any consistency mechanism that
    ensures that old and new code do not run together. In practice, ~90% of
    CVEs are safe to apply in this way, since they simply add a conditional
    check. However, any function change that can not execute safely with
    the old version of the function can _not_ be safely applied in this
    version.

    [ jkosina@suse.cz: due to the number of contributions that got folded into
    this original patch from Seth Jennings, add SUSE's copyright as well, as
    discussed via e-mail ]

    Signed-off-by: Seth Jennings
    Signed-off-by: Josh Poimboeuf
    Reviewed-by: Miroslav Benes
    Reviewed-by: Petr Mladek
    Reviewed-by: Masami Hiramatsu
    Signed-off-by: Miroslav Benes
    Signed-off-by: Petr Mladek
    Signed-off-by: Jiri Kosina

    Seth Jennings