23 Oct, 2020

1 commit


07 Oct, 2020

2 commits

  • [ Upstream commit 795d6379a47bcbb88bd95a69920e4acc52849f88 ]

    For 64-bit CONFIG_BASE_SMALL=0 systems, PID_MAX_LIMIT is set by default to
    4194304. During boot the kernel sets a new value based on the number of
    CPUs, but no lower than 32768. It is 1024 per CPU, so with 128 CPUs the
    default becomes 131072, which needs six digits.
    This value can be increased during run time but must not exceed the
    initial upper limit.

    Systemd sometime after v241 sets it to the upper limit during boot. The
    result is that when the pid exceeds five digits, the trace output is a
    little hard to read because it is no longer properly padded (the same as
    on big iron with 98+ CPUs).

    Increase the pid padding to seven digits.
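
    The change amounts to widening the pid field in the trace output format
    strings. A minimal sketch (the exact call sites live in kernel/trace/ and
    are not reproduced in this changelog):

        /* was: a five-character pid field */
        trace_seq_printf(s, "%16s-%-5d ", comm, entry->pid);
        /* now: seven characters, enough for PID_MAX_LIMIT = 4194304 */
        trace_seq_printf(s, "%16s-%-7d ", comm, entry->pid);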

    Link: https://lkml.kernel.org/r/20200904082331.dcdkrr3bkn3e4qlg@linutronix.de

    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Sasha Levin

    Sebastian Andrzej Siewior
     
  • commit b40341fad6cc2daa195f8090fd3348f18fff640a upstream.

    The first thing that the ftrace function callback helper functions should do
    is to check for recursion. Peter Zijlstra found that when
    "rcu_is_watching()" had its notrace removed, it caused perf function tracing
    to crash. This is because the call to rcu_is_watching() is made before
    function recursion is checked, and if it is traced, it will cause an
    infinite recursion loop.

    rcu_is_watching() should still stay notrace, but even without it this
    should never have crashed in the first place: the recursion prevention
    must be the first thing done in callback functions.
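
    A hedged sketch of the resulting ordering in an ftrace callback helper
    (identifiers approximate the ftrace internals of this era; treat it as
    illustrative rather than the exact upstream diff):

        static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
                                           struct ftrace_ops *op, struct pt_regs *regs)
        {
                int bit;

                /* Recursion protection first, before anything that might
                 * itself be traced (such as rcu_is_watching()). */
                bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
                if (bit < 0)
                        return;

                preempt_disable_notrace();

                /* Only now is it safe to consult RCU state. */
                if (!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching())
                        op->func(ip, parent_ip, op, regs);

                preempt_enable_notrace();
                trace_clear_recursion(bit);
        }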

    Link: https://lore.kernel.org/r/20200929112541.GM2628@hirez.programming.kicks-ass.net

    Cc: stable@vger.kernel.org
    Cc: Paul McKenney
    Fixes: c68c0fa293417 ("ftrace: Have ftrace_ops_get_func() handle RCU and PER_CPU flags too")
    Acked-by: Peter Zijlstra (Intel)
    Reported-by: Peter Zijlstra (Intel)
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Greg Kroah-Hartman

    Steven Rostedt (VMware)
     

01 Oct, 2020

25 commits

  • commit 10de795a5addd1962406796a6e13ba6cc0fc6bee upstream.

    Fix the compiler warning (shown below) for !CONFIG_KPROBES_ON_FTRACE.

    kernel/kprobes.c: In function 'kill_kprobe':
    kernel/kprobes.c:1116:33: warning: statement with no effect
    [-Wunused-value]
    1116 | #define disarm_kprobe_ftrace(p) (-ENODEV)
    | ^
    kernel/kprobes.c:2154:3: note: in expansion of macro
    'disarm_kprobe_ftrace'
    2154 | disarm_kprobe_ftrace(p);

    Link: https://lore.kernel.org/r/20200805142136.0331f7ea@canb.auug.org.au
    Link: https://lkml.kernel.org/r/20200805172046.19066-1-songmuchun@bytedance.com

    Reported-by: Stephen Rothwell
    Fixes: 0cb2f1372baa ("kprobes: Fix NULL pointer dereference at kprobe_ftrace_handler")
    Acked-by: Masami Hiramatsu
    Acked-by: John Fastabend
    Signed-off-by: Muchun Song
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Greg Kroah-Hartman

    Muchun Song
     
  • commit 82d083ab60c3693201c6f5c7a5f23a6ed422098d upstream.

    Since the kprobe_event= cmdline option allows users to put kprobes on
    functions in initmem, kprobes has to remove such probes after boot.
    Currently the probes on init functions in modules are handled by the
    module callback, but the kernel init text isn't handled.
    Without this, kprobes may access a non-existent text area when disabling
    or removing such a probe.

    Link: https://lkml.kernel.org/r/159972810544.428528.1839307531600646955.stgit@devnote2

    Fixes: 970988e19eb0 ("tracing/kprobe: Add kprobe_event= boot parameter")
    Cc: Jonathan Corbet
    Cc: Shuah Khan
    Cc: Randy Dunlap
    Cc: Ingo Molnar
    Cc: stable@vger.kernel.org
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Greg Kroah-Hartman

    Masami Hiramatsu
     
  • commit 3031313eb3d549b7ad6f9fbcc52ba04412e3eb9e upstream.

    Commit 0cb2f1372baa ("kprobes: Fix NULL pointer dereference at
    kprobe_ftrace_handler") fixed one bug but not completely fixed yet.
    If we run a kprobe_module.tc of ftracetest, kernel showed a warning
    as below.

    # ./ftracetest test.d/kprobe/kprobe_module.tc
    === Ftrace unit tests ===
    [1] Kprobe dynamic event - probing module
    ...
    [ 22.400215] ------------[ cut here ]------------
    [ 22.400962] Failed to disarm kprobe-ftrace at trace_printk_irq_work+0x0/0x7e [trace_printk] (-2)
    [ 22.402139] WARNING: CPU: 7 PID: 200 at kernel/kprobes.c:1091 __disarm_kprobe_ftrace.isra.0+0x7e/0xa0
    [ 22.403358] Modules linked in: trace_printk(-)
    [ 22.404028] CPU: 7 PID: 200 Comm: rmmod Not tainted 5.9.0-rc2+ #66
    [ 22.404870] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1 04/01/2014
    [ 22.406139] RIP: 0010:__disarm_kprobe_ftrace.isra.0+0x7e/0xa0
    [ 22.406947] Code: 30 8b 03 eb c9 80 3d e5 09 1f 01 00 75 dc 49 8b 34 24 89 c2 48 c7 c7 a0 c2 05 82 89 45 e4 c6 05 cc 09 1f 01 01 e8 a9 c7 f0 ff 0b 8b 45 e4 eb b9 89 c6 48 c7 c7 70 c2 05 82 89 45 e4 e8 91 c7
    [ 22.409544] RSP: 0018:ffffc90000237df0 EFLAGS: 00010286
    [ 22.410385] RAX: 0000000000000000 RBX: ffffffff83066024 RCX: 0000000000000000
    [ 22.411434] RDX: 0000000000000001 RSI: ffffffff810de8d3 RDI: ffffffff810de8d3
    [ 22.412687] RBP: ffffc90000237e10 R08: 0000000000000001 R09: 0000000000000001
    [ 22.413762] R10: 0000000000000000 R11: 0000000000000001 R12: ffff88807c478640
    [ 22.414852] R13: ffffffff8235ebc0 R14: ffffffffa00060c0 R15: 0000000000000000
    [ 22.415941] FS: 00000000019d48c0(0000) GS:ffff88807d7c0000(0000) knlGS:0000000000000000
    [ 22.417264] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 22.418176] CR2: 00000000005bb7e3 CR3: 0000000078f7a000 CR4: 00000000000006a0
    [ 22.419309] Call Trace:
    [ 22.419990] kill_kprobe+0x94/0x160
    [ 22.420652] kprobes_module_callback+0x64/0x230
    [ 22.421470] notifier_call_chain+0x4f/0x70
    [ 22.422184] blocking_notifier_call_chain+0x49/0x70
    [ 22.422979] __x64_sys_delete_module+0x1ac/0x240
    [ 22.423733] do_syscall_64+0x38/0x50
    [ 22.424366] entry_SYSCALL_64_after_hwframe+0x44/0xa9
    [ 22.425176] RIP: 0033:0x4bb81d
    [ 22.425741] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e0 ff ff ff f7 d8 64 89 01 48
    [ 22.428726] RSP: 002b:00007ffc70fef008 EFLAGS: 00000246 ORIG_RAX: 00000000000000b0
    [ 22.430169] RAX: ffffffffffffffda RBX: 00000000019d48a0 RCX: 00000000004bb81d
    [ 22.431375] RDX: 0000000000000000 RSI: 0000000000000880 RDI: 00007ffc70fef028
    [ 22.432543] RBP: 0000000000000880 R08: 00000000ffffffff R09: 00007ffc70fef320
    [ 22.433692] R10: 0000000000656300 R11: 0000000000000246 R12: 00007ffc70fef028
    [ 22.434635] R13: 0000000000000000 R14: 0000000000000002 R15: 0000000000000000
    [ 22.435682] irq event stamp: 1169
    [ 22.436240] hardirqs last enabled at (1179): [] console_unlock+0x422/0x580
    [ 22.437466] hardirqs last disabled at (1188): [] console_unlock+0x7b/0x580
    [ 22.438608] softirqs last enabled at (866): [] __do_softirq+0x38e/0x490
    [ 22.439637] softirqs last disabled at (859): [] asm_call_on_stack+0x12/0x20
    [ 22.440690] ---[ end trace 1e7ce7e1e4567276 ]---
    [ 22.472832] trace_kprobe: This probe might be able to register after target module is loaded. Continue.

    This is because kill_kprobe() calls disarm_kprobe_ftrace() even
    if the given probe is not enabled. In that case, ftrace_set_filter_ip()
    fails because the given probe point is not registered to ftrace.

    Fix this by checking that the given (going) probe is enabled before
    invoking disarm_kprobe_ftrace().
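
    A sketch of the shape of that check in kill_kprobe() (predicate names are
    taken from the kprobes code; the exact guard may differ slightly from the
    upstream diff):

        static void kill_kprobe(struct kprobe *p)
        {
                ...
                /* Only disarm an ftrace-based probe that was actually
                 * armed; a disabled probe was never registered to ftrace. */
                if (kprobe_ftrace(p) && !kprobe_disabled(p) && !kprobes_all_disarmed)
                        disarm_kprobe_ftrace(p);
        }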

    Link: https://lkml.kernel.org/r/159888672694.1411785.5987998076694782591.stgit@devnote2

    Fixes: 0cb2f1372baa ("kprobes: Fix NULL pointer dereference at kprobe_ftrace_handler")
    Cc: Ingo Molnar
    Cc: "Naveen N . Rao"
    Cc: Anil S Keshavamurthy
    Cc: David Miller
    Cc: Muchun Song
    Cc: Chengming Zhou
    Cc: stable@vger.kernel.org
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Greg Kroah-Hartman

    Masami Hiramatsu
     
  • commit 46bbe5c671e06f070428b9be142cc4ee5cedebac upstream.

    clang static analyzer reports this problem

    trace_events_hist.c:3824:3: warning: Attempt to free
    released memory
    kfree(hist_data->attrs->var_defs.name[i]);

    In parse_var_defs(), if there is a problem allocating
    var_defs.expr, the earlier var_defs.name is freed.
    This free is duplicated by free_var_defs(), which frees
    the rest of the list.

    Because free_var_defs() has to run anyway, remove the
    second free from parse_var_defs().
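
    The pattern being removed is the classic partial-cleanup double free. A
    hedged sketch (identifiers follow the commit message, not the exact
    source):

        if (!hist_data->attrs->var_defs.expr[i]) {
                /* Do NOT kfree(hist_data->attrs->var_defs.name[i]) here:
                 * free_var_defs() runs on this error path anyway and frees
                 * the whole list, so freeing here is a double free. */
                ret = -ENOMEM;
                goto free;
        }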

    Link: https://lkml.kernel.org/r/20200907135845.15804-1-trix@redhat.com

    Cc: stable@vger.kernel.org
    Fixes: 30350d65ac56 ("tracing: Add variable support to hist triggers")
    Reviewed-by: Tom Zanussi
    Signed-off-by: Tom Rix
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Greg Kroah-Hartman

    Tom Rix
     
  • [ Upstream commit ce880cb825fcc22d4e39046a6c3a3a7f6603883d ]

    Running the selftest
    ./btf_btf -p
    produced the following kernel warning:
    [ 51.528185] WARNING: CPU: 3 PID: 1756 at kernel/bpf/hashtab.c:717 htab_map_get_next_key+0x2eb/0x300
    [ 51.529217] Modules linked in:
    [ 51.529583] CPU: 3 PID: 1756 Comm: test_btf Not tainted 5.9.0-rc1+ #878
    [ 51.530346] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.3-1.el7.centos 04/01/2014
    [ 51.531410] RIP: 0010:htab_map_get_next_key+0x2eb/0x300
    ...
    [ 51.542826] Call Trace:
    [ 51.543119] map_seq_next+0x53/0x80
    [ 51.543528] seq_read+0x263/0x400
    [ 51.543932] vfs_read+0xad/0x1c0
    [ 51.544311] ksys_read+0x5f/0xe0
    [ 51.544689] do_syscall_64+0x33/0x40
    [ 51.545116] entry_SYSCALL_64_after_hwframe+0x44/0xa9

    The related source code in kernel/bpf/hashtab.c:

    709 static int htab_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
    710 {
    711         struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
    712         struct hlist_nulls_head *head;
    713         struct htab_elem *l, *next_l;
    714         u32 hash, key_size;
    715         int i = 0;
    716
    717         WARN_ON_ONCE(!rcu_read_lock_held());

    In kernel/bpf/inode.c, the bpffs map pretty print calls map->ops->map_get_next_key()
    without holding rcu_read_lock(), hence causing the above warning.
    To fix the issue, surround the map->ops->map_get_next_key() call with an RCU read lock.
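
    A sketch of the fix shape in the bpffs seq-file path (hedged; the callback
    lives in kernel/bpf/inode.c):

        int err;

        rcu_read_lock();
        err = map->ops->map_get_next_key(map, prev_key, key);
        rcu_read_unlock();
        if (err)
                return NULL;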

    Fixes: a26ca7c982cb ("bpf: btf: Add pretty print support to the basic arraymap")
    Reported-by: Alexei Starovoitov
    Signed-off-by: Yonghong Song
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Cc: Martin KaFai Lau
    Link: https://lore.kernel.org/bpf/20200916004401.146277-1-yhs@fb.com
    Signed-off-by: Sasha Levin

    Yonghong Song
     
  • [ Upstream commit 73ac74c7d489756d2313219a108809921dbfaea1 ]

    Switch order so that locking state is consistent even
    if the IRQ tracer calls into lockdep again.

    Acked-by: Peter Zijlstra
    Signed-off-by: Sven Schnelle
    Signed-off-by: Vasily Gorbik
    Signed-off-by: Sasha Levin

    Sven Schnelle
     
  • [ Upstream commit 48021f98130880dd74286459a1ef48b5e9bc374f ]

    If U-Boot passes a blank string to console_setup, it results in
    trashed memory. Ultimately, the kernel crashes while freeing up
    that memory.

    This fix checks whether a blank parameter is being
    passed to console_setup from U-Boot. If it detects that
    the console parameter is blank, it doesn't set up the serial
    device and exits gracefully.
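
    A hedged sketch of the early exit in console_setup() (kernel/printk/printk.c;
    the exact placement follows the upstream diff):

        static int __init console_setup(char *str)
        {
                /* A blank console= parameter (e.g. passed through by
                 * U-Boot) would otherwise be parsed and later corrupt
                 * memory; bail out gracefully instead. */
                if (str[0] == 0)
                        return 1;
                ...
        }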

    Link: https://lore.kernel.org/r/20200522065306.83-1-shreyas.joshi@biamp.com
    Signed-off-by: Shreyas Joshi
    Acked-by: Sergey Senozhatsky
    [pmladek@suse.com: Better format the commit message and code, remove unnecessary brackets.]
    Signed-off-by: Petr Mladek
    Signed-off-by: Sasha Levin

    Shreyas Joshi
     
  • [ Upstream commit e98fa02c4f2ea4991dae422ac7e34d102d2f0599 ]

    There is a race window in which an entity begins throttling before quota
    is added to the pool, but does not finish throttling until after we have
    finished with distribute_cfs_runtime(). This entity is not observed by
    distribute_cfs_runtime() because it was not on the throttled list at the
    time that distribution was running. This race manifests as rare
    period-length stalls for such entities.

    Rather than heavyweight synchronization with the progress of
    distribution, we can fix this by aborting throttling if bandwidth has
    become available. Otherwise, we immediately add the entity to the
    throttled list so that it can be observed by a subsequent distribution.

    Additionally, we can remove the case of adding the throttled entity to
    the head of the throttled list, and simply always add to the tail.
    Thanks to 26a8b12747c97, distribute_cfs_runtime() no longer holds onto
    its own pool of runtime. This means that if we do hit the !assign and
    distribute_running case, we know that distribution is about to end.

    Signed-off-by: Paul Turner
    Signed-off-by: Ben Segall
    Signed-off-by: Josh Don
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Phil Auld
    Link: https://lkml.kernel.org/r/20200410225208.109717-2-joshdon@google.com
    Signed-off-by: Sasha Levin

    Paul Turner
     
  • [ Upstream commit 62849a9612924a655c67cf6962920544aa5c20db ]

    The kernel test robot triggered a warning with the following race:
    task-ctx A                              interrupt-ctx B
    worker
      -> process_one_work()
        -> work_item()
          -> schedule();
            -> sched_submit_work()
              -> wq_worker_sleeping()
                -> ->sleeping = 1
                  atomic_dec_and_test(nr_running)
            __schedule();                   *interrupt*
                                            async_page_fault()
                                            -> local_irq_enable();
                                            -> schedule();
                                              -> sched_submit_work()
                                                -> wq_worker_sleeping()
                                                  -> if (WARN_ON(->sleeping)) return
                                              -> __schedule()
                                                -> sched_update_worker()
                                                  -> wq_worker_running()
                                                    -> atomic_inc(nr_running);
                                                    -> ->sleeping = 0;

            -> sched_update_worker()
              -> wq_worker_running()
                   if (!->sleeping) return

    In this context the warning is pointless; everything is fine.
    An interrupt before wq_worker_sleeping() will perform the ->sleeping
    assignment (0 -> 1 -> 0) twice.
    An interrupt after wq_worker_sleeping() will trigger the warning and
    nr_running will be decremented (by A) and incremented once (only by B;
    A will skip it). This is the case until ->sleeping is zeroed again in
    wq_worker_running().

    Remove the WARN statement because this condition may happen. Document
    that preemption around wq_worker_sleeping() needs to be disabled to
    protect ->sleeping and not just as an optimisation.
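
    A sketch of the resulting check in wq_worker_sleeping() (hedged; the WARN
    is dropped because, as the trace above shows, re-entry via an interrupt is
    legitimate):

        void wq_worker_sleeping(struct task_struct *task)
        {
                struct worker *worker = kthread_data(task);

                /* was: if (WARN_ON_ONCE(worker->sleeping)) return;
                 * An interrupt may legitimately re-enter here, so the
                 * condition can happen and is harmless. */
                if (worker->sleeping)
                        return;

                worker->sleeping = 1;
                ...
        }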

    Fixes: 6d25be5782e48 ("sched/core, workqueues: Distangle worker accounting from rq lock")
    Reported-by: kernel test robot
    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Ingo Molnar
    Cc: Tejun Heo
    Link: https://lkml.kernel.org/r/20200327074308.GY11705@shao2-debian
    Signed-off-by: Sasha Levin

    Sebastian Andrzej Siewior
     
  • [ Upstream commit 6914303824bb572278568330d72fc1f8f9814e67 ]

    This changes perf_event_set_clock to use the new exec_update_mutex
    instead of cred_guard_mutex.

    This should be safe, as the credentials are only used for reading.

    Signed-off-by: Bernd Edlinger
    Signed-off-by: Eric W. Biederman
    Signed-off-by: Sasha Levin

    Bernd Edlinger
     
  • [ Upstream commit 454e3126cb842388e22df6b3ac3da44062c00765 ]

    This changes kcmp_epoll_target to use the new exec_update_mutex
    instead of cred_guard_mutex.

    This should be safe, as the credentials are only used for reading,
    and furthermore ->mm and ->sighand are updated on execve,
    but only under the new exec_update_mutex.

    Signed-off-by: Bernd Edlinger
    Signed-off-by: Eric W. Biederman
    Signed-off-by: Sasha Levin

    Bernd Edlinger
     
  • [ Upstream commit 3e74fabd39710ee29fa25618d2c2b40cfa7d76c7 ]

    This fixes a deadlock in the tracer when tracing a multi-threaded
    application that calls execve while more than one thread is running.

    I observed that when running strace on the gcc test suite, it always
    blocks after a while, when expect calls execve, because other threads
    have to be terminated. They send ptrace events, but strace is no
    longer able to respond, since it is blocked in vm_access.

    The deadlock always happens when strace needs to access the
    tracee's process mm, while another thread in the tracee starts to
    execve a child process, but that cannot continue until the
    PTRACE_EVENT_EXIT is handled and the WIFEXITED event is received:

    strace D 0 30614 30584 0x00000000
    Call Trace:
    __schedule+0x3ce/0x6e0
    schedule+0x5c/0xd0
    schedule_preempt_disabled+0x15/0x20
    __mutex_lock.isra.13+0x1ec/0x520
    __mutex_lock_killable_slowpath+0x13/0x20
    mutex_lock_killable+0x28/0x30
    mm_access+0x27/0xa0
    process_vm_rw_core.isra.3+0xff/0x550
    process_vm_rw+0xdd/0xf0
    __x64_sys_process_vm_readv+0x31/0x40
    do_syscall_64+0x64/0x220
    entry_SYSCALL_64_after_hwframe+0x44/0xa9

    expect D 0 31933 30876 0x80004003
    Call Trace:
    __schedule+0x3ce/0x6e0
    schedule+0x5c/0xd0
    flush_old_exec+0xc4/0x770
    load_elf_binary+0x35a/0x16c0
    search_binary_handler+0x97/0x1d0
    __do_execve_file.isra.40+0x5d4/0x8a0
    __x64_sys_execve+0x49/0x60
    do_syscall_64+0x64/0x220
    entry_SYSCALL_64_after_hwframe+0x44/0xa9

    This changes mm_access to use the new exec_update_mutex
    instead of cred_guard_mutex.
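
    A hedged sketch of the change in mm_access() (kernel/fork.c; details may
    differ slightly from the upstream diff):

        static struct mm_struct *mm_access(struct task_struct *task, unsigned int mode)
        {
                struct mm_struct *mm;
                int err;

                /* was: &task->signal->cred_guard_mutex */
                err = mutex_lock_killable(&task->signal->exec_update_mutex);
                if (err)
                        return ERR_PTR(err);

                mm = get_task_mm(task);
                if (mm && mm != current->mm && !ptrace_may_access(task, mode)) {
                        mmput(mm);
                        mm = ERR_PTR(-EACCES);
                }
                mutex_unlock(&task->signal->exec_update_mutex);

                return mm;
        }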

    This patch is based on the following patch by Eric W. Biederman:
    "[PATCH 0/5] Infrastructure to allow fixing exec deadlocks"
    Link: https://lore.kernel.org/lkml/87v9ne5y4y.fsf_-_@x220.int.ebiederm.org/

    Signed-off-by: Bernd Edlinger
    Reviewed-by: Kees Cook
    Signed-off-by: Eric W. Biederman
    Signed-off-by: Sasha Levin

    Bernd Edlinger
     
  • [ Upstream commit eea9673250db4e854e9998ef9da6d4584857f0ea ]

    The cred_guard_mutex is problematic as it is held over possibly
    indefinite waits for userspace. The possible indefinite waits for
    userspace that I have identified are: The cred_guard_mutex is held in
    PTRACE_EVENT_EXIT waiting for the tracer. The cred_guard_mutex is
    held over "put_user(0, tsk->clear_child_tid)" in exit_mm(). The
    cred_guard_mutex is held over "get_user(futex_offset, ...") in
    exit_robust_list. The cred_guard_mutex is held over copy_strings.

    The functions get_user and put_user can trigger a page fault which can
    potentially wait indefinitely in the case of userfaultfd or if
    userspace implements part of the page fault path.

    In any of those cases the userspace process that the kernel is waiting
    for might make a different system call that winds up taking the
    cred_guard_mutex and result in deadlock.

    Holding a mutex over any of those possibly indefinite waits for
    userspace does not appear necessary. Add exec_update_mutex that will
    just cover updating the process during exec where the permissions and
    the objects pointed to by the task struct may be out of sync.

    The plan is to switch the users of cred_guard_mutex to
    exec_update_mutex one by one. This lets us move forward while still
    being careful and not introducing any regressions.

    Link: https://lore.kernel.org/lkml/20160921152946.GA24210@dhcp22.suse.cz/
    Link: https://lore.kernel.org/lkml/AM6PR03MB5170B06F3A2B75EFB98D071AE4E60@AM6PR03MB5170.eurprd03.prod.outlook.com/
    Link: https://lore.kernel.org/linux-fsdevel/20161102181806.GB1112@redhat.com/
    Link: https://lore.kernel.org/lkml/20160923095031.GA14923@redhat.com/
    Link: https://lore.kernel.org/lkml/20170213141452.GA30203@redhat.com/
    Ref: 45c1a159b85b ("Add PTRACE_O_TRACEVFORKDONE and PTRACE_O_TRACEEXIT facilities.")
    Ref: 456f17cd1a28 ("[PATCH] user-vm-unlock-2.5.31-A2")
    Reviewed-by: Kirill Tkhai
    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: Bernd Edlinger
    Signed-off-by: Eric W. Biederman
    Signed-off-by: Sasha Levin

    Eric W. Biederman
     
  • [ Upstream commit bf2cbe044da275021b2de5917240411a19e5c50d ]

    Clang warns:

    ../kernel/trace/trace.c:9335:33: warning: array comparison always
    evaluates to true [-Wtautological-compare]
    if (__stop___trace_bprintk_fmt != __start___trace_bprintk_fmt)
    ^
    1 warning generated.

    These are not true arrays, they are linker defined symbols, which are
    just addresses. Using the address of operator silences the warning and
    does not change the runtime result of the check (tested with some print
    statements compiled in with clang + ld.lld and gcc + ld.bfd in QEMU).
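
    A sketch of the silenced comparison (the symbols are linker-defined and
    declared as extern arrays; taking their addresses makes the intent
    explicit to clang):

        extern const char __start___trace_bprintk_fmt[];
        extern const char __stop___trace_bprintk_fmt[];

        /* was: if (__stop___trace_bprintk_fmt != __start___trace_bprintk_fmt) */
        if (&__stop___trace_bprintk_fmt != &__start___trace_bprintk_fmt) {
                /* the section is non-empty ... */
        }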

    Link: http://lkml.kernel.org/r/20200220051011.26113-1-natechancellor@gmail.com

    Link: https://github.com/ClangBuiltLinux/linux/issues/893
    Suggested-by: Nick Desaulniers
    Signed-off-by: Nathan Chancellor
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Sasha Levin

    Nathan Chancellor
     
  • [ Upstream commit 4cbbc3a0eeed675449b1a4d080008927121f3da3 ]

    While unlikely, the divisor in scale64_check_overflow() could be wider
    than 32 bits. do_div() truncates the divisor to 32 bits, at least on
    32-bit platforms.

    Use div64_u64() instead to avoid truncating the divisor to 32 bits.
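
    For reference, the difference between the two helpers (a hedged
    illustration; scale64_check_overflow() itself lives in
    kernel/time/timekeeping.c):

        u64 n = dividend;

        /* do_div() takes a 32-bit divisor: on 32-bit platforms a 64-bit
         * divisor is silently truncated, giving a wrong quotient. It
         * divides in place and evaluates to the remainder. */
        do_div(n, (u32)divisor);

        /* div64_u64() keeps the full 64-bit divisor. */
        n = div64_u64(dividend, divisor);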

    [ tglx: Massaged changelog ]

    Signed-off-by: Wen Yang
    Signed-off-by: Thomas Gleixner
    Link: https://lkml.kernel.org/r/20200120100523.45656-1-wenyang@linux.alibaba.com
    Signed-off-by: Sasha Levin

    Wen Yang
     
  • [ Upstream commit 8a37963c7ac9ecb7f86f8ebda020e3f8d6d7b8a0 ]

    If an element is freed via RCU then recursion into BPF instrumentation
    functions is not a concern. The element is already detached from the map
    and the RCU callback does not hold any locks on which a kprobe, perf event
    or tracepoint attached BPF program could deadlock.
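
    A hedged sketch of what this permits in the hashtab RCU free path (the
    exact lines removed follow the upstream diff, not reproduced here):

        static void htab_elem_free_rcu(struct rcu_head *head)
        {
                struct htab_elem *l = container_of(head, struct htab_elem, rcu);
                struct bpf_htab *htab = l->htab;

                /* No bpf_prog_active / preemption dance needed: the element
                 * is already detached from the map, and this callback holds
                 * no locks a BPF program could deadlock on. */
                htab_elem_free(htab, l);
        }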

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20200224145643.259118710@linutronix.de
    Signed-off-by: Sasha Levin

    Thomas Gleixner
     
  • [ Upstream commit b3b9c187dc2544923a601733a85352b9ddaba9b3 ]

    There are currently three counters to track the IRQ context of a lock
    chain - nr_hardirq_chains, nr_softirq_chains and nr_process_chains.
    They are incremented when a new lock chain is added, but they are
    not decremented when a lock chain is removed. That causes some of the
    statistic counts reported by /proc/lockdep_stats to be incorrect.
    Fix that by decrementing the right counter when a lock chain is removed.

    Since inc_chains() no longer accesses hardirq_context and softirq_context
    directly, it is moved out from the CONFIG_TRACE_IRQFLAGS conditional
    compilation block.
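
    A sketch of the decrement counterpart, mirroring inc_chains() (names are
    approximate; treat as illustrative):

        static void dec_chains(int irq_context)
        {
                if (irq_context & LOCK_CHAIN_HARDIRQ_CONTEXT)
                        nr_hardirq_chains--;
                else if (irq_context & LOCK_CHAIN_SOFTIRQ_CONTEXT)
                        nr_softirq_chains--;
                else
                        nr_process_chains--;
        }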

    Fixes: a0b0fd53e1e6 ("locking/lockdep: Free lock classes that are no longer in use")
    Signed-off-by: Waiman Long
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Ingo Molnar
    Link: https://lkml.kernel.org/r/20200206152408.24165-2-longman@redhat.com
    Signed-off-by: Sasha Levin

    Waiman Long
     
  • [ Upstream commit 70b3eeed49e8190d97139806f6fbaf8964306cdb ]

    Common Criteria calls out for any action that modifies the audit trail to
    be recorded. That usually is interpreted to mean insertion or removal of
    rules. It is not required to log modification of the inode information
    since the watch is still in effect. Additionally, if the rule is a never
    rule and the underlying file is one they do not want events for, they
    get an event for this bookkeeping update against their wishes.

    Since no device/inode info is logged at insertion and no device/inode
    information is logged on update, there is nothing meaningful being
    communicated to the admin by the CONFIG_CHANGE updated_rules event. One
    can assume that the rule was not "modified" because it is still watching
    the intended target. If the device or inode cannot be resolved, then
    audit_panic is called which is sufficient.

    The correct resolution is to drop logging config_update events since
    the watch is still in effect but just on another unknown inode.

    Signed-off-by: Steve Grubb
    Signed-off-by: Paul Moore
    Signed-off-by: Sasha Levin

    Steve Grubb
     
  • [ Upstream commit cbc3b92ce037f5e7536f6db157d185cd8b8f615c ]

    I noticed when trying to use the trace-cmd python interface that reading the raw
    buffer wasn't working for kernel_stack events. This is because it uses a
    stubbed version of __dynamic_array that doesn't do the __data_loc trick and
    encode the length of the array into the field. Instead it just shows up with a
    size of 0. So change this to __array and set the length to FTRACE_STACK_ENTRIES,
    since this is what we actually do in practice and it matches how user_stack_trace
    works.
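
    In kernel/trace/trace_entries.h terms, the change is roughly (a hedged
    sketch of the event format, not the verbatim diff):

        F_STRUCT(
                __field(        int,            size    )
        -       __dynamic_array(unsigned long,  caller  )
        +       __array(        unsigned long,  caller, FTRACE_STACK_ENTRIES )
        ),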

    Link: http://lkml.kernel.org/r/1411589652-1318-1-git-send-email-jbacik@fb.com

    Signed-off-by: Josef Bacik
    [ Pulled from the archeological digging of my INBOX ]
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Sasha Levin

    Josef Bacik
     
  • [ Upstream commit af74262337faa65d5ac2944553437d3f5fb29123 ]

    When pulling in Divya Indi's patch, I made a minor fix to remove unneeded
    braces. I committed my fix up via "git commit -a --amend". Unfortunately, I
    didn't realize I had some changes I was testing in the module code, and
    those changes were applied to Divya's patch as well.

    This reverts the accidental updates to the module code.

    Cc: Jessica Yu
    Cc: Divya Indi
    Reported-by: Peter Zijlstra
    Fixes: e585e6469d6f ("tracing: Verify if trace array exists before destroying it.")
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Sasha Levin

    Steven Rostedt (VMware)
     
  • [ Upstream commit 5e1aada08cd19ea652b2d32a250501d09b02ff2e ]

    Initialization is not guaranteed to zero padding bytes so use an
    explicit memset instead to avoid leaking any kernel content in any
    possible padding bytes.
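
    The underlying C subtlety, as a small illustration (any struct with
    internal padding will do):

        struct s { char c; long l; };      /* padding between c and l */

        struct s a = { .c = 1, .l = 2 };   /* members are initialized, but the
                                              padding bytes are not guaranteed
                                              to be zero */

        struct s b;
        memset(&b, 0, sizeof(b));          /* zeroes members AND padding */
        b.c = 1;
        b.l = 2;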

    Link: http://lkml.kernel.org/r/dfa331c00881d61c8ee51577a082d8bebd61805c.camel@perches.com
    Signed-off-by: Joe Perches
    Cc: Dan Carpenter
    Cc: Julia Lawall
    Cc: Thomas Gleixner
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Joe Perches
     
  • [ Upstream commit 1a50cb80f219c44adb6265f5071b81fc3c1deced ]

    Registering the same notifier to a hook repeatedly can cause the hook
    list to form a ring or lose other members of the list.

    case1: An infinite loop in notifier_chain_register() can cause soft lockup
    atomic_notifier_chain_register(&test_notifier_list, &test1);
    atomic_notifier_chain_register(&test_notifier_list, &test1);
    atomic_notifier_chain_register(&test_notifier_list, &test2);

    case2: An infinite loop in notifier_call_chain() can cause soft lockup
    atomic_notifier_chain_register(&test_notifier_list, &test1);
    atomic_notifier_chain_register(&test_notifier_list, &test1);
    atomic_notifier_call_chain(&test_notifier_list, 0, NULL);

    case3: lose other hook test2
    atomic_notifier_chain_register(&test_notifier_list, &test1);
    atomic_notifier_chain_register(&test_notifier_list, &test2);
    atomic_notifier_chain_register(&test_notifier_list, &test1);

    case4: Unregister returns 0, but the hook is still in the linked list,
    and it is not really registered. If you call
    notifier_call_chain after the module (ko) is unloaded, it will trigger an oops.

    If the system is configured with softlockup_panic and the same hook is
    repeatedly registered on the panic_notifier_list, it will cause a loop
    panic.

    Add a check in notifier_chain_register(), intercepting duplicate
    registrations to avoid infinite loops.
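
    A hedged sketch of the added guard in notifier_chain_register()
    (kernel/notifier.c; details may differ from the upstream diff):

        static int notifier_chain_register(struct notifier_block **nl,
                                           struct notifier_block *n)
        {
                while ((*nl) != NULL) {
                        if (unlikely((*nl) == n)) {
                                /* a duplicate would create a cycle or drop
                                 * later entries; refuse it */
                                WARN(1, "double register detected");
                                return 0;
                        }
                        if (n->priority > (*nl)->priority)
                                break;
                        nl = &((*nl)->next);
                }
                n->next = *nl;
                rcu_assign_pointer(*nl, n);
                return 0;
        }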

    Link: http://lkml.kernel.org/r/1568861888-34045-2-git-send-email-nixiaoming@huawei.com
    Signed-off-by: Xiaoming Ni
    Reviewed-by: Vasily Averin
    Reviewed-by: Andrew Morton
    Cc: Alexey Dobriyan
    Cc: Anna Schumaker
    Cc: Arjan van de Ven
    Cc: J. Bruce Fields
    Cc: Chuck Lever
    Cc: David S. Miller
    Cc: Jeff Layton
    Cc: Andy Lutomirski
    Cc: Ingo Molnar
    Cc: Nadia Derbey
    Cc: "Paul E. McKenney"
    Cc: Sam Protsenko
    Cc: Alan Stern
    Cc: Thomas Gleixner
    Cc: Trond Myklebust
    Cc: Viresh Kumar
    Cc: Xiaoming Ni
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Xiaoming Ni
     
  • [ Upstream commit 953ae45a0c25e09428d4a03d7654f97ab8a36647 ]

    As part of commit f45d1225adb0 ("tracing: Kernel access to Ftrace
    instances") we exported certain functions. Here, we are adding some additional
    NULL checks to ensure safe usage by users of these APIs.

    Link: http://lkml.kernel.org/r/1565805327-579-4-git-send-email-divya.indi@oracle.com

    Signed-off-by: Divya Indi
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Sasha Levin

    Divya Indi
     
  • [ Upstream commit e585e6469d6f476b82aa148dc44aaf7ae269a4e2 ]

    A trace array can be destroyed from userspace or kernel. Verify if the
    trace array exists before proceeding to destroy/remove it.

    Link: http://lkml.kernel.org/r/1565805327-579-3-git-send-email-divya.indi@oracle.com

    Reviewed-by: Aruna Ramakrishna
    Signed-off-by: Divya Indi
    [ Removed unneeded braces ]
    Signed-off-by: Steven Rostedt (VMware)
    Signed-off-by: Sasha Levin

    Divya Indi
     
  • [ Upstream commit 2cb80dbbbaba4f2f86f686c34cb79ea5cbfb0edb ]

    Add KUnit tests for the initialized-data behavior of proc_dointvec that is
    explicitly checked in the code. Includes basic parsing tests, including
    int min/max overflow.
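
    For flavor, the general shape of a KUnit case (a hedged, minimal sketch,
    not the actual kernel/sysctl-test.c contents):

        #include <kunit/test.h>

        static void sysctl_test_example(struct kunit *test)
        {
                int value = -1;

                /* a real test drives proc_dointvec() through a struct
                 * ctl_table and checks the parsed result */
                KUNIT_EXPECT_EQ(test, value, -1);
        }

        static struct kunit_case sysctl_test_cases[] = {
                KUNIT_CASE(sysctl_test_example),
                {}
        };

        static struct kunit_suite sysctl_test_suite = {
                .name = "sysctl_test",
                .test_cases = sysctl_test_cases,
        };
        kunit_test_suite(sysctl_test_suite);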

    Signed-off-by: Iurii Zaikin
    Signed-off-by: Brendan Higgins
    Reviewed-by: Greg Kroah-Hartman
    Reviewed-by: Logan Gunthorpe
    Acked-by: Luis Chamberlain
    Reviewed-by: Stephen Boyd
    Signed-off-by: Shuah Khan
    Signed-off-by: Sasha Levin

    Iurii Zaikin
     

27 Sep, 2020

1 commit

  • [ Upstream commit b0399092ccebd9feef68d4ceb8d6219a8c0caa05 ]

    If a kprobe is marked as gone, we should not kill it again. Otherwise, we
    can disarm the kprobe more than once. In that case, the statistics of
    kprobe_ftrace_enabled can become unbalanced, which can lead to the kprobe
    not working.
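
    A hedged sketch of the guard implied by the commit message (a
    kprobe_gone() check before killing; the exact placement follows the
    upstream diff):

        /* in the module-notifier path that kills probes: */
        if (kprobe_gone(p))
                continue;       /* already killed; disarming again would
                                   unbalance kprobe_ftrace_enabled */
        kill_kprobe(p);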

    Fixes: e8386a0cb22f ("kprobes: support probing module __exit function")
    Co-developed-by: Chengming Zhou
    Signed-off-by: Muchun Song
    Signed-off-by: Chengming Zhou
    Signed-off-by: Andrew Morton
    Acked-by: Masami Hiramatsu
    Cc: "Naveen N . Rao"
    Cc: Anil S Keshavamurthy
    Cc: David S. Miller
    Cc: Song Liu
    Cc: Steven Rostedt
    Cc:
    Link: https://lkml.kernel.org/r/20200822030055.32383-1-songmuchun@bytedance.com
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Muchun Song
     

17 Sep, 2020

2 commits

  • [ Upstream commit 40249c6962075c040fd071339acae524f18bfac9 ]

    Using gcov to collect coverage data for kernels compiled with GCC 10.1
    causes random malfunctions and kernel crashes. This is the result of a
    changed GCOV_COUNTERS value in GCC 10.1 that causes a mismatch between
    the layout of the gcov_info structure created by GCC profiling code and
    the related structure used by the kernel.

    Fix this by updating the in-kernel GCOV_COUNTERS value. Also re-enable
    config GCOV_KERNEL for use with GCC 10.
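
    The constant is compiler-version dependent, roughly as below (a hedged
    sketch of kernel/gcov/gcc_4_7.c; the values for older GCC versions are
    from memory and may differ):

        #if (__GNUC__ >= 10)
        #define GCOV_COUNTERS   8
        #elif (__GNUC__ >= 7)
        #define GCOV_COUNTERS   9
        #elif (__GNUC__ > 5) || (__GNUC__ == 5 && __GNUC_MINOR__ >= 1)
        #define GCOV_COUNTERS   10
        #else
        #define GCOV_COUNTERS   9
        #endif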

    Reported-by: Colin Ian King
    Reported-by: Leon Romanovsky
    Signed-off-by: Peter Oberparleiter
    Tested-by: Leon Romanovsky
    Tested-and-Acked-by: Colin Ian King
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Peter Oberparleiter
     
  • [ Upstream commit cfc905f158eaa099d6258031614d11869e7ef71c ]

    GCOV built with GCC 10 doesn't initialize the n_function variable. This
    produces different kernel panics, as was seen by Colin on Ubuntu and by
    me on FC 32.

    As a workaround, let's disable the GCOV build for the broken GCC 10
    version.

    Link: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1891288
    Link: https://lore.kernel.org/lkml/20200827133932.3338519-1-leon@kernel.org
    Link: https://lore.kernel.org/lkml/CAHk-=whbijeSdSvx-Xcr0DPMj0BiwhJ+uiNnDSVZcr_h_kg7UA@mail.gmail.com/
    Cc: Colin Ian King
    Signed-off-by: Leon Romanovsky
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Leon Romanovsky
     

03 Sep, 2020

9 commits

  • commit 8dfb61dcbaceb19a5ded5e9c9dcf8d05acc32294 upstream.

    Allow the user to use alternative implementations of compression tools,
    such as pigz, pbzip2, and pxz. For example, multi-threaded tools can
    speed up the build:
    $ make GZIP=pigz BZIP2=pbzip2

    Variables _GZIP, _BZIP2, _LZOP are used internally because the original
    env vars are reserved by the tools. The use of the GZIP env var by the
    gzip tool has been obsolete since 2015. However, alternative
    implementations (e.g., pigz) still rely on it. The BZIP2, BZIP, and LZOP
    vars are not obsolescent.

    The credit goes to @grsecurity.

    As a sidenote, for multi-threaded lzma, xz compression one can use:
    $ export XZ_OPT="--threads=0"

    Signed-off-by: Denis Efremov
    Signed-off-by: Masahiro Yamada
    Signed-off-by: Matthias Maennich
    Signed-off-by: Greg Kroah-Hartman

    Denis Efremov
     
  • commit f276031b4e2f4c961ed6d8a42f0f0124ccac2e09 upstream.

    This comment block explains why include/generated/compile.h is omitted,
    but nothing about include/generated/autoconf.h, which might be more
    difficult to understand. Add more comments.

    Signed-off-by: Masahiro Yamada
    Signed-off-by: Matthias Maennich
    Signed-off-by: Greg Kroah-Hartman

    Masahiro Yamada
     
  • commit 1463f74f492eea7191f0178e01f3d38371a48210 upstream.

    'pushd' ... 'popd' is the last bash-specific code in this script.
    One way to avoid it is to run the code in a sub-shell.

    With that addressed, you can run this script with sh.

    I replaced $(BASH) with $(CONFIG_SHELL), and I changed the hashbang
    to #!/bin/sh.

    Signed-off-by: Masahiro Yamada
    Signed-off-by: Matthias Maennich
    Signed-off-by: Greg Kroah-Hartman

    Masahiro Yamada
     
  • commit ea79e5168be644fdaf7d4e6a73eceaf07b3da76a upstream.

    This script copies headers with the cpio command twice; first from
    srctree, and then from objtree. However, when we are building in-tree,
    we know the srctree and the objtree are the same. That is, all the
    headers copied by the first cpio are overwritten by the second one.

    Skip the first cpio when we are building in-tree.

    Signed-off-by: Masahiro Yamada
    Signed-off-by: Matthias Maennich
    Signed-off-by: Greg Kroah-Hartman

    Masahiro Yamada
     
  • commit 0e11773e76098729552b750ccff79374d1e62002 upstream.

    This script computes md5sum of headers in srctree and in objtree.
    However, when we are building in-tree, we know the srctree and the
    objtree are the same. That is, we end up with the same computation
    twice. In fact, the first two lines of kernel/kheaders.md5 are always
    the same for in-tree builds.

    Unify the two md5sum calculations.

    For in-tree builds ($building_out_of_srctree is empty), we check
    only two directories, "include", and "arch/$SRCARCH/include".

    For out-of-tree builds ($building_out_of_srctree is 1), we check
    4 directories, "$srctree/include", "$srctree/arch/$SRCARCH/include",
    "include", and "arch/$SRCARCH/include" since we know they are all
    different.

    Signed-off-by: Masahiro Yamada
    Signed-off-by: Matthias Maennich
    Signed-off-by: Greg Kroah-Hartman

    Masahiro Yamada
     
  • commit 9a066357184485784f782719093ff804d05b85db upstream.

    The 'head' and 'tail' commands can take a file path directly.
    So, you do not need to run 'cat'.

    cat kernel/kheaders.md5 | head -1

    ... is equivalent to:

    head -1 kernel/kheaders.md5

    and the latter saves forking one process.

    While I was here, I replaced 'head -1' with 'head -n 1'.

    I also replaced '==' with '=' since we do not have a good reason to
    use the bashism.

    Signed-off-by: Masahiro Yamada
    Signed-off-by: Matthias Maennich
    Signed-off-by: Greg Kroah-Hartman

    Masahiro Yamada
     
  • commit 784a0830377d0761834e385975bc46861fea9fa0 upstream.

    Most of the CPU mask operations behave the same way, but for_each_cpu() and
    its variants ignore the cpumask argument and claim that CPU0 is always in
    the mask. This is historical, inconsistent and annoying behaviour.

    The matrix allocator uses for_each_cpu() and can be called on UP with an
    empty cpumask. The calling code does not expect that this succeeds but
    until commit e027fffff799 ("x86/irq: Unbreak interrupt affinity setting")
    this went unnoticed. That commit added a WARN_ON() to catch cases which
    move an interrupt from one vector to another on the same CPU. The warning
    triggers on UP.

    Add a check for the cpumask being empty to prevent this.
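
    A hedged sketch of the guard (the allocator lives in kernel/irq/matrix.c;
    the body is abbreviated):

        int irq_matrix_alloc(struct irq_matrix *m, const struct cpumask *msk,
                             bool reserved, unsigned int *mapped_cpu)
        {
                /* for_each_cpu() pretends CPU0 is set even in an empty mask
                 * on UP, so reject an empty mask explicitly. */
                if (cpumask_empty(msk))
                        return -EINVAL;
                ...
        }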

    Fixes: 2f75d9e1c905 ("genirq: Implement bitmap matrix allocator")
    Reported-by: kernel test robot
    Signed-off-by: Thomas Gleixner
    Cc: stable@vger.kernel.org
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     
  • [ Upstream commit e65855a52b479f98674998cb23b21ef5a8144b04 ]

    The following splat was caught when setting uclamp value of a task:

    BUG: sleeping function called from invalid context at ./include/linux/percpu-rwsem.h:49

    cpus_read_lock+0x68/0x130
    static_key_enable+0x1c/0x38
    __sched_setscheduler+0x900/0xad8

    Fix by ensuring we enable the key outside of the critical section in
    __sched_setscheduler().
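
    Conceptually (a hedged sketch): static_key_enable() may sleep on
    cpus_read_lock(), so it has to run before the rq lock is taken:

        /* in the sched_setattr() path, before task_rq_lock(): */
        if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
                static_branch_enable(&sched_uclamp_used);   /* may block */

        rq = task_rq_lock(p, &rf);      /* atomic context starts here */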

    Fixes: 46609ce22703 ("sched/uclamp: Protect uclamp fast path code with static key")
    Signed-off-by: Qais Yousef
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200716110347.19553-4-qais.yousef@arm.com
    Signed-off-by: Qais Yousef
    Signed-off-by: Sasha Levin

    Qais Yousef
     
  • [ Upstream commit 46609ce227039fd192e0ecc7d940bed587fd2c78 ]

    There is a report that when uclamp is enabled, a netperf UDP test
    regresses compared to a kernel compiled without uclamp.

    https://lore.kernel.org/lkml/20200529100806.GA3070@suse.de/

    While investigating the root cause, there was no sign that the uclamp
    code is doing anything particularly expensive, but it could suffer from
    bad cache behavior under certain circumstances that are yet to be
    understood.

    https://lore.kernel.org/lkml/20200616110824.dgkkbyapn3io6wik@e107158-lin/

    To reduce the pressure on the fast path anyway, add a static key that
    by default will skip executing the uclamp logic in the
    enqueue/dequeue_task() fast path until it's needed.

    As soon as the user starts using util clamp by:

    1. Changing uclamp value of a task with sched_setattr()
    2. Modifying the default sysctl_sched_util_clamp_{min, max}
    3. Modifying the default cpu.uclamp.{min, max} value in cgroup

    We flip the static key now that the user has opted to use util clamp.
    Effectively re-introducing uclamp logic in the enqueue/dequeue_task()
    fast path. It stays on from that point forward until the next reboot.

    This should help minimize the effect of util clamp on workloads that
    don't need it but still allow distros to ship their kernels with uclamp
    compiled in by default.

    The SCHED_WARN_ON() in uclamp_rq_dec_id() was removed since now we can end
    up with an unbalanced call to uclamp_rq_dec_id() if we flip the key while
    a task is running in the rq. Since we know it is harmless, we just
    quietly return if we attempt a uclamp_rq_dec_id() when
    rq->uclamp[].bucket[].tasks is 0.

    In schedutil, we introduce a new uclamp_is_enabled() helper which takes
    the static key into account to ensure RT boosting behavior is retained.
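
    The mechanism, in sketch form (names follow the commit message; treat as
    illustrative rather than the exact diff):

        DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);

        static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
        {
                /* nearly free when uclamp is unused: a static branch that is
                 * patched to a jump only once a user opts in */
                if (!static_branch_unlikely(&sched_uclamp_used))
                        return;
                /* ... per-bucket refcounting ... */
        }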

    The following results demonstrates how this helps on 2 Sockets Xeon E5
    2x10-Cores system.

                          nouclamp                uclamp     uclamp-static-key
    Hmean send-64       162.43 (  0.00%)     157.84 * -2.82%*     163.39 *  0.59%*
    Hmean send-128      324.71 (  0.00%)     314.78 * -3.06%*     326.18 *  0.45%*
    Hmean send-256      641.55 (  0.00%)     628.67 * -2.01%*     648.12 *  1.02%*
    Hmean send-1024    2525.28 (  0.00%)    2448.26 * -3.05%*    2543.73 *  0.73%*
    Hmean send-2048    4836.14 (  0.00%)    4712.08 * -2.57%*    4867.69 *  0.65%*
    Hmean send-3312    7540.83 (  0.00%)    7425.45 * -1.53%*    7621.06 *  1.06%*
    Hmean send-4096    9124.53 (  0.00%)    8948.82 * -1.93%*    9276.25 *  1.66%*
    Hmean send-8192   15589.67 (  0.00%)   15486.35 * -0.66%*   15819.98 *  1.48%*
    Hmean send-16384  26386.47 (  0.00%)   25752.25 * -2.40%*   26773.74 *  1.47%*

    The perf diff between nouclamp and uclamp-static-key when uclamp is
    disabled in the fast path:

    8.73% -1.55% [kernel.kallsyms] [k] try_to_wake_up
    0.07% +0.04% [kernel.kallsyms] [k] deactivate_task
    0.13% -0.02% [kernel.kallsyms] [k] activate_task

    The diff between nouclamp and uclamp-static-key when uclamp is enabled
    in the fast path:

    8.73% -0.72% [kernel.kallsyms] [k] try_to_wake_up
    0.13% +0.39% [kernel.kallsyms] [k] activate_task
    0.07% +0.38% [kernel.kallsyms] [k] deactivate_task

    Fixes: 69842cba9ace ("sched/uclamp: Add CPU's clamp buckets refcounting")
    Reported-by: Mel Gorman
    Signed-off-by: Qais Yousef
    Signed-off-by: Peter Zijlstra (Intel)
    Tested-by: Lukasz Luba
    Link: https://lkml.kernel.org/r/20200630112123.12076-3-qais.yousef@arm.com
    [ Fix minor conflict with kernel/sched.h because of function renamed
    later ]
    Signed-off-by: Qais Yousef
    Signed-off-by: Sasha Levin

    Qais Yousef