01 Sep, 2020

1 commit

  • The current notifiers have the following error handling pattern all
    over the place:

    int err, nr;

    err = __foo_notifier_call_chain(&chain, val_up, v, -1, &nr);
    if (err & NOTIFY_STOP_MASK)
            __foo_notifier_call_chain(&chain, val_down, v, nr - 1, NULL);

    And aside from the endless repetition thereof, it is broken: consider
    blocking notifiers; both calls take and drop the rwsem, which means
    that the notifier list can change between the two calls, making @nr
    meaningless.

    Fix this by replacing all the __foo_notifier_call_chain() functions
    with foo_notifier_call_chain_robust() that embeds the above pattern,
    but ensures it is inside a single lock region.

    Note: I switched atomic_notifier_call_chain_robust() to use
    the spinlock, since RCU cannot provide the guarantee
    required for the recovery.

    Note: software_resume() error handling was broken afaict.
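    A minimal userspace sketch of the single-lock pattern (a model, not
    the kernel implementation; the pthread mutex stands in for the
    chain's rwsem/spinlock, and all names are illustrative):

```c
#include <pthread.h>
#include <stddef.h>

#define NOTIFY_DONE      0x0000
#define NOTIFY_STOP_MASK 0x8000

struct notifier_block {
	int (*call)(struct notifier_block *nb, unsigned long action, void *data);
	struct notifier_block *next;
};

struct notifier_head {
	pthread_mutex_t lock;	/* one lock covers both passes */
	struct notifier_block *head;
};

/*
 * Call every notifier with val_up; if one refuses (NOTIFY_STOP_MASK),
 * unwind the ones already called with val_down.  Both passes happen
 * under a single lock acquisition, so the chain cannot change between
 * them -- the flaw in the old two-call pattern.
 */
static int notifier_call_chain_robust(struct notifier_head *nh,
				      unsigned long val_up,
				      unsigned long val_down, void *v)
{
	struct notifier_block *nb, *stop;
	int ret = NOTIFY_DONE;

	pthread_mutex_lock(&nh->lock);
	for (nb = nh->head; nb; nb = nb->next) {
		ret = nb->call(nb, val_up, v);
		if (ret & NOTIFY_STOP_MASK)
			break;
	}
	if (nb) {		/* unwind the notifiers already called */
		stop = nb;
		for (nb = nh->head; nb != stop; nb = nb->next)
			nb->call(nb, val_down, v);
	}
	pthread_mutex_unlock(&nh->lock);
	return ret;
}
```

    Because the lock is held across both passes, the set of notifiers
    already called stays valid for the unwind, unlike @nr in the old
    pattern.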

    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Ingo Molnar
    Acked-by: Rafael J. Wysocki
    Link: https://lore.kernel.org/r/20200818135804.325626653@infradead.org

    Peter Zijlstra
     

03 Jun, 2020

1 commit

  • These functions are not needed anymore because the vmalloc and ioremap
    mappings are now synchronized when they are created or torn down.

    Remove all callers and function definitions.

    Signed-off-by: Joerg Roedel
    Signed-off-by: Andrew Morton
    Tested-by: Steven Rostedt (VMware)
    Acked-by: Andy Lutomirski
    Acked-by: Peter Zijlstra (Intel)
    Cc: Arnd Bergmann
    Cc: Christoph Hellwig
    Cc: Dave Hansen
    Cc: "H . Peter Anvin"
    Cc: Ingo Molnar
    Cc: Matthew Wilcox (Oracle)
    Cc: Michal Hocko
    Cc: "Rafael J. Wysocki"
    Cc: Thomas Gleixner
    Cc: Vlastimil Babka
    Link: http://lkml.kernel.org/r/20200515140023.25469-7-joro@8bytes.org
    Signed-off-by: Linus Torvalds

    Joerg Roedel
     

22 Mar, 2020

1 commit

  • Commit 3f8fd02b1bf1 ("mm/vmalloc: Sync unmappings in
    __purge_vmap_area_lazy()") introduced a call to vmalloc_sync_all() in
    the vunmap() code-path. While this change was necessary to maintain
    correctness on x86-32-pae kernels, it also adds additional cycles for
    architectures that don't need it.

    Specifically on x86-64 with CONFIG_VMAP_STACK=y some people reported
    severe performance regressions in micro-benchmarks because it now also
    calls the x86-64 implementation of vmalloc_sync_all() on vunmap(). But
    the vmalloc_sync_all() implementation on x86-64 is only needed for newly
    created mappings.

    To avoid the unnecessary work on x86-64 and to gain the performance
    back, split up vmalloc_sync_all() into two functions:

    * vmalloc_sync_mappings(), and
    * vmalloc_sync_unmappings()

    Most call-sites to vmalloc_sync_all() only care about new mappings being
    synchronized. The only exception is the new call-site added in the
    above mentioned commit.

    Shile Zhang directed us to a report of an 80% regression in reaim
    throughput.

    Fixes: 3f8fd02b1bf1 ("mm/vmalloc: Sync unmappings in __purge_vmap_area_lazy()")
    Reported-by: kernel test robot
    Reported-by: Shile Zhang
    Signed-off-by: Joerg Roedel
    Signed-off-by: Andrew Morton
    Tested-by: Borislav Petkov
    Acked-by: Rafael J. Wysocki [GHES]
    Cc: Dave Hansen
    Cc: Andy Lutomirski
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc:
    Link: http://lkml.kernel.org/r/20191009124418.8286-1-joro@8bytes.org
    Link: https://lists.01.org/hyperkitty/list/lkp@lists.01.org/thread/4D3JPPHBNOSPFK2KEPC6KGKS6J25AIDB/
    Link: http://lkml.kernel.org/r/20191113095530.228959-1-shile.zhang@linux.alibaba.com
    Signed-off-by: Linus Torvalds

    Joerg Roedel
     

05 Dec, 2019

3 commits

  • blocking_notifier_chain_cond_register() does not consider the
    system_booting state, which is the only difference between this
    function and blocking_notifier_chain_register(). This can be a bug,
    and it is a piece of duplicate code.

    Delete blocking_notifier_chain_cond_register()

    Link: http://lkml.kernel.org/r/1568861888-34045-4-git-send-email-nixiaoming@huawei.com
    Signed-off-by: Xiaoming Ni
    Reviewed-by: Andrew Morton
    Cc: Alan Stern
    Cc: Alexey Dobriyan
    Cc: Andy Lutomirski
    Cc: Anna Schumaker
    Cc: Arjan van de Ven
    Cc: Chuck Lever
    Cc: David S. Miller
    Cc: Ingo Molnar
    Cc: J. Bruce Fields
    Cc: Jeff Layton
    Cc: Nadia Derbey
    Cc: "Paul E. McKenney"
    Cc: Sam Protsenko
    Cc: Thomas Gleixner
    Cc: Trond Myklebust
    Cc: Vasily Averin
    Cc: Viresh Kumar
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Xiaoming Ni
     
  • The only difference between notifier_chain_cond_register() and
    notifier_chain_register() is the lack of a warning on duplicate
    registrations. Use notifier_chain_register() instead of
    notifier_chain_cond_register() to avoid duplicate code.

    Link: http://lkml.kernel.org/r/1568861888-34045-3-git-send-email-nixiaoming@huawei.com
    Signed-off-by: Xiaoming Ni
    Reviewed-by: Andrew Morton
    Cc: Alan Stern
    Cc: Alexey Dobriyan
    Cc: Andy Lutomirski
    Cc: Anna Schumaker
    Cc: Arjan van de Ven
    Cc: Chuck Lever
    Cc: David S. Miller
    Cc: Ingo Molnar
    Cc: J. Bruce Fields
    Cc: Jeff Layton
    Cc: Nadia Derbey
    Cc: "Paul E. McKenney"
    Cc: Sam Protsenko
    Cc: Thomas Gleixner
    Cc: Trond Myklebust
    Cc: Vasily Averin
    Cc: Viresh Kumar
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Xiaoming Ni
     
  • Registering the same notifier to a hook repeatedly can cause the hook
    list to form a ring or lose other members of the list.

    case1: An infinite loop in notifier_chain_register() can cause soft lockup

        atomic_notifier_chain_register(&test_notifier_list, &test1);
        atomic_notifier_chain_register(&test_notifier_list, &test1);
        atomic_notifier_chain_register(&test_notifier_list, &test2);

    case2: An infinite loop in notifier_call_chain() can cause soft lockup

        atomic_notifier_chain_register(&test_notifier_list, &test1);
        atomic_notifier_chain_register(&test_notifier_list, &test1);
        atomic_notifier_call_chain(&test_notifier_list, 0, NULL);

    case3: Another hook (test2) is lost from the list

        atomic_notifier_chain_register(&test_notifier_list, &test1);
        atomic_notifier_chain_register(&test_notifier_list, &test2);
        atomic_notifier_chain_register(&test_notifier_list, &test1);

    case4: Unregister returns 0, but the hook is still in the linked list,
    and it is not really registered. If notifier_call_chain() is called
    after the module (.ko) is unloaded, it will trigger an oops.

    If the system is configured with softlockup_panic and the same hook is
    repeatedly registered on the panic_notifier_list, it will cause a loop
    panic.

    Add a check in notifier_chain_register(), intercepting duplicate
    registrations to avoid infinite loops.
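    The fix can be modeled in standalone C. This is a sketch of the idea
    (the list layout mirrors the kernel's singly linked chain, but the
    code below is illustrative, not the kernel source):

```c
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct notifier_block {
	int priority;
	struct notifier_block *next;
};

/*
 * Walk the chain before inserting; refuse a block that is already
 * linked instead of creating a self-loop (case1/case2) or dropping
 * members (case3).
 */
static int notifier_chain_register(struct notifier_block **nl,
				   struct notifier_block *n)
{
	while (*nl) {
		if (*nl == n) {
			fprintf(stderr, "double register detected\n");
			return -EEXIST;
		}
		if (n->priority > (*nl)->priority)
			break;
		nl = &(*nl)->next;
	}
	n->next = *nl;
	*nl = n;
	return 0;
}
```

    With the check in place, a second registration of the same block
    fails with -EEXIST instead of corrupting the list.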

    Link: http://lkml.kernel.org/r/1568861888-34045-2-git-send-email-nixiaoming@huawei.com
    Signed-off-by: Xiaoming Ni
    Reviewed-by: Vasily Averin
    Reviewed-by: Andrew Morton
    Cc: Alexey Dobriyan
    Cc: Anna Schumaker
    Cc: Arjan van de Ven
    Cc: J. Bruce Fields
    Cc: Chuck Lever
    Cc: David S. Miller
    Cc: Jeff Layton
    Cc: Andy Lutomirski
    Cc: Ingo Molnar
    Cc: Nadia Derbey
    Cc: "Paul E. McKenney"
    Cc: Sam Protsenko
    Cc: Alan Stern
    Cc: Thomas Gleixner
    Cc: Trond Myklebust
    Cc: Viresh Kumar
    Cc: Xiaoming Ni
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Xiaoming Ni
     

21 May, 2019

1 commit

  • Add SPDX license identifiers to all files which:

    - Have no license information of any form

    - Have EXPORT_.*_SYMBOL_GPL inside which was used in the
    initial scan/conversion to ignore the file

    These files fall under the project license, GPL v2 only. The resulting SPDX
    license identifier is:

    GPL-2.0-only

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

15 May, 2019

1 commit

  • By design, notifiers can be registered once only; a second register
    attempt, made by mistake, silently corrupts the notifiers list.

    A few years ago I investigated the described problem: the host was
    power cycled because of notifier list corruption. I prepared this
    patch, applied it to the OpenVZ kernel, and sent it out, but nobody
    commented on it. Later it helped us to detect a similar problem in
    the OpenVZ kernel.

    Mistakes with notifier registration can happen for example during
    subsystem initialization from different namespaces, or because of a lost
    unregister in the roll-back path on initialization failures.

    The proposed check cannot prevent the described problem; however, it
    allows us to detect its cause quickly without coredump analysis.

    Link: http://lkml.kernel.org/r/04127e71-4782-9bbb-fe5a-7c01e93a99b0@virtuozzo.com
    Signed-off-by: Vasily Averin
    Reviewed-by: Andrew Morton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vasily Averin
     

25 Feb, 2017

1 commit

  • NOTIFY_STOP_MASK (0x8000) has only one bit set, so there is no need
    to compare the output of "ret & NOTIFY_STOP_MASK" to
    NOTIFY_STOP_MASK. We just need to make sure the output is non-zero,
    that's it.
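    A tiny illustration of the equivalence (the helper names are made up
    for the example; the NOTIFY_* values are the kernel's):

```c
/* NOTIFY_STOP_MASK is the lone high bit. */
#define NOTIFY_STOP_MASK 0x8000
#define NOTIFY_OK        0x0001
#define NOTIFY_BAD       (NOTIFY_STOP_MASK | 0x0002)

/* The two tests agree for any single-bit mask, because
 * ret & NOTIFY_STOP_MASK can only be 0 or NOTIFY_STOP_MASK itself. */
static int stopped_old(int ret)
{
	return (ret & NOTIFY_STOP_MASK) == NOTIFY_STOP_MASK;
}

static int stopped_new(int ret)
{
	return (ret & NOTIFY_STOP_MASK) != 0;
}
```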

    Link: http://lkml.kernel.org/r/88ee58264a2bfab1c97ffc8ac753e25f55f57c10.1483593065.git.viresh.kumar@linaro.org
    Signed-off-by: Viresh Kumar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Viresh Kumar
     

01 Sep, 2015

1 commit

  • Pull x86 asm changes from Ingo Molnar:
    "The biggest changes in this cycle were:

    - Revamp, simplify (and in some cases fix) Time Stamp Counter (TSC)
    primitives. (Andy Lutomirski)

    - Add new, comprehensible entry and exit handlers written in C.
    (Andy Lutomirski)

    - vm86 mode cleanups and fixes. (Brian Gerst)

    - 32-bit compat code cleanups. (Brian Gerst)

    The amount of simplification in low level assembly code is already
    palpable:

    arch/x86/entry/entry_32.S | 130 +----
    arch/x86/entry/entry_64.S | 197 ++-----

    but more simplifications are planned.

    There's also the usual laundry mix of low level changes - see the
    changelog for details"

    * 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (83 commits)
    x86/asm: Drop repeated macro of X86_EFLAGS_AC definition
    x86/asm/msr: Make wrmsrl() a function
    x86/asm/delay: Introduce an MWAITX-based delay with a configurable timer
    x86/asm: Add MONITORX/MWAITX instruction support
    x86/traps: Weaken context tracking entry assertions
    x86/asm/tsc: Add rdtscll() merge helper
    selftests/x86: Add syscall_nt selftest
    selftests/x86: Disable sigreturn_64
    x86/vdso: Emit a GNU hash
    x86/entry: Remove do_notify_resume(), syscall_trace_leave(), and their TIF masks
    x86/entry/32: Migrate to C exit path
    x86/entry/32: Remove 32-bit syscall audit optimizations
    x86/vm86: Rename vm86->v86flags and v86mask
    x86/vm86: Rename vm86->vm86_info to user_vm86
    x86/vm86: Clean up vm86.h includes
    x86/vm86: Move the vm86 IRQ definitions to vm86.h
    x86/vm86: Use the normal pt_regs area for vm86
    x86/vm86: Eliminate 'struct kernel_vm86_struct'
    x86/vm86: Move fields from 'struct kernel_vm86_struct' to 'struct vm86'
    x86/vm86: Move vm86 fields out of 'thread_struct'
    ...

    Linus Torvalds
     

07 Jul, 2015

1 commit

  • Low-level arch entries often call notify_die(), and it's easy for
    arch code to fail to exit an RCU quiescent state first. Assert
    that we're not quiescent in notify_die().

    Signed-off-by: Andy Lutomirski
    Cc: Andy Lutomirski
    Cc: Borislav Petkov
    Cc: Brian Gerst
    Cc: Denys Vlasenko
    Cc: Denys Vlasenko
    Cc: Frederic Weisbecker
    Cc: H. Peter Anvin
    Cc: Paul E. McKenney
    Cc: Kees Cook
    Cc: Linus Torvalds
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Rik van Riel
    Cc: Thomas Gleixner
    Cc: paulmck@linux.vnet.ibm.com
    Link: http://lkml.kernel.org/r/1f5fe6c23d5b432a23267102f2d72b787d80fdd8.1435952415.git.luto@kernel.org
    Signed-off-by: Ingo Molnar

    Andy Lutomirski
     

07 Jan, 2015

1 commit

  • SRCU does not need to be compiled by default in all cases. For
    tinification efforts, not compiling SRCU unless necessary is
    desirable.

    The current patch tries to make compiling SRCU optional by introducing a new
    Kconfig option CONFIG_SRCU which is selected when any of the components making
    use of SRCU are selected.

    If we do not select CONFIG_SRCU, srcu.o will not be compiled at all.

       text    data     bss     dec    hex  filename
       2007       0       0    2007    7d7  kernel/rcu/srcu.o

    Size of arch/powerpc/boot/zImage changes from

       text    data     bss     dec    hex  filename
     831552   64180   23944  919676  e087c  arch/powerpc/boot/zImage : before
     829504   64180   23952  917636  e0084  arch/powerpc/boot/zImage : after

    so the savings are about ~2000 bytes.
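    Under this scheme, any option whose code uses SRCU must pull it in
    via Kconfig; a hypothetical user (the MY_DRIVER symbol is
    illustrative, not from the patch) would look like:

```kconfig
config MY_DRIVER
	tristate "Example driver using call_srcu()"
	select SRCU
	help
	  Code using SRCU primitives must select CONFIG_SRCU, since
	  kernel/rcu/srcu.o is no longer built unconditionally.
```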

    Signed-off-by: Pranith Kumar
    CC: Paul E. McKenney
    CC: Josh Triplett
    CC: Lai Jiangshan
    Signed-off-by: Paul E. McKenney
    [ paulmck: resolve conflict due to removal of arch/ia64/kvm/Kconfig. ]

    Pranith Kumar
     

24 Apr, 2014

1 commit

  • Use NOKPROBE_SYMBOL macro to protect functions from
    kprobes instead of __kprobes annotation in notifier.

    Signed-off-by: Masami Hiramatsu
    Reviewed-by: Steven Rostedt
    Reviewed-by: Josh Triplett
    Cc: Paul E. McKenney
    Link: http://lkml.kernel.org/r/20140417081835.26341.56128.stgit@ltc230.yrl.intra.hitachi.co.jp
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

26 Feb, 2014

1 commit

  • (Trivial patch.)

    If the code is looking at the RCU-protected pointer itself, but not
    dereferencing it, the rcu_dereference() functions can be downgraded
    to rcu_access_pointer(). This commit makes this downgrade in
    __blocking_notifier_call_chain() which simply compares the RCU-protected
    pointer against NULL with no dereferencing.
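    The distinction can be modeled in userspace with C11 atomics; this is
    an analogue for illustration, not the kernel API (the memory orders
    approximate the semantics, and chain_head/chain_is_empty are invented
    names):

```c
#include <stdatomic.h>
#include <stddef.h>

/*
 * rcu_access_pointer() merely fetches the pointer value, which is all a
 * NULL check needs; rcu_dereference() must additionally order later
 * loads made through the pointer, hence the stronger memory order.
 */
#define rcu_access_pointer(p) atomic_load_explicit(&(p), memory_order_relaxed)
#define rcu_dereference(p)    atomic_load_explicit(&(p), memory_order_acquire)

static _Atomic(int *) chain_head;

/* The __blocking_notifier_call_chain() situation: compare against NULL,
 * never dereference, so the cheaper accessor suffices. */
static int chain_is_empty(void)
{
	return rcu_access_pointer(chain_head) == NULL;
}
```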

    Signed-off-by: Paul E. McKenney
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     

31 Oct, 2011

1 commit

  • The changed files were only including linux/module.h for the
    EXPORT_SYMBOL infrastructure, and nothing else. Revector them
    onto the isolated export header for faster compile times.

    Nothing to see here but a whole lot of instances of:

    -#include <linux/module.h>
    +#include <linux/export.h>

    This commit is only changing the kernel dir; next targets
    will probably be mm, fs, the arch dirs, etc.

    Signed-off-by: Paul Gortmaker

    Paul Gortmaker
     

26 Jul, 2011

1 commit

  • It is not necessary to share the same notifier.h.

    This patch moves register_reboot_notifier() and
    unregister_reboot_notifier() from kernel/notifier.c to kernel/sys.c.

    [amwang@redhat.com: make allyesconfig succeed on ppc64]
    Signed-off-by: WANG Cong
    Cc: David Miller
    Cc: "Rafael J. Wysocki"
    Cc: Greg KH
    Signed-off-by: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Amerigo Wang
     

25 Feb, 2010

1 commit

  • Update the rcu_dereference() usages to take advantage of the new
    lockdep-based checking.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    [ -v2: fix allmodconfig missing symbol export build failure on x86 ]
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

30 Aug, 2009

1 commit

  • Add __kprobes to notify_die() because do_int3() calls notify_die()
    instead of atomic_notifier_call_chain(), which is already marked as
    __kprobes.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Ingo Molnar
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Masami Hiramatsu
     

25 Nov, 2008

1 commit


14 Oct, 2008

1 commit


10 Sep, 2008

2 commits

  • - unbreak ia64 (and powerpc) where function pointers don't
    point at code but at data (reported by Tony Luck)

    [ mingo@elte.hu: various cleanups ]

    Signed-off-by: Arjan van de Ven
    Signed-off-by: Ingo Molnar

    Arjan van de Ven
     
  • during some development we suspected a case where we left something
    in a notifier chain that was from a module that was unloaded already...
    and that sort of thing is rather hard to track down.

    This patch adds a very simple sanity check (which isn't all that
    expensive) to make sure the notifier we're about to call is
    actually from either the kernel itself or from a still-loaded
    module, avoiding a hard-to-chase-down crash.

    Signed-off-by: Arjan van de Ven
    Signed-off-by: Ingo Molnar

    Arjan van de Ven
     

29 Apr, 2008

1 commit

  • The enhancement as asked for by Yasunori: if msgmni is set to a negative
    value, register it back into the ipcns notifier chain.

    A new interface has been added to the notification mechanism:
    notifier_chain_cond_register() registers a notifier block only if it
    is not already registered. With that new interface we avoid having
    to track the state changes in procfs.

    Signed-off-by: Nadia Derbey
    Cc: Yasunori Goto
    Cc: Matt Helsley
    Cc: Mingming Cao
    Cc: Pierre Peiffer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nadia Derbey
     

07 Feb, 2008

1 commit


20 Oct, 2007

1 commit

  • There is a separate notifier header, but no separate notifier .c
    file.

    Extract the notifier code out of kernel/sys.c, which will remain for
    misc syscalls, I hope. Merge kernel/die_notifier.c into
    kernel/notifier.c.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan