13 Jan, 2021

1 commit

  • [ Upstream commit 87dbc209ea04645fd2351981f09eff5d23f8e2e9 ]

    Make <asm/local64.h> mandatory in include/asm-generic/Kbuild and
    remove all arch/*/include/asm/local64.h arch-specific files since they
    only #include <asm-generic/local64.h>.

    This fixes build errors on arch/c6x/ and arch/nios2/ for
    block/blk-iocost.c.

    Build-tested on 21 of 25 arches (tools problems on the others).

    Yes, we could even rename <asm-generic/local64.h> to <linux/local64.h>
    and change all #includes to use <linux/local64.h> instead.

    Link: https://lkml.kernel.org/r/20201227024446.17018-1-rdunlap@infradead.org
    Signed-off-by: Randy Dunlap
    Suggested-by: Christoph Hellwig
    Reviewed-by: Masahiro Yamada
    Cc: Jens Axboe
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Aurelien Jacquiot
    Cc: Peter Zijlstra
    Cc: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Randy Dunlap
     

06 Jan, 2021

1 commit

  • commit dc2da7b45ffe954a0090f5d0310ed7b0b37d2bd2 upstream.

    VMware observed a performance regression during memmap init on their
    platform, and bisected to commit 73a6e474cb376 ("mm: memmap_init:
    iterate over memblock regions rather that check each PFN") causing it.

    Before the commit:

    [0.033176] Normal zone: 1445888 pages used for memmap
    [0.033176] Normal zone: 89391104 pages, LIFO batch:63
    [0.035851] ACPI: PM-Timer IO Port: 0x448

    With the commit:

    [0.026874] Normal zone: 1445888 pages used for memmap
    [0.026875] Normal zone: 89391104 pages, LIFO batch:63
    [2.028450] ACPI: PM-Timer IO Port: 0x448

    The root cause is that the current memmap deferred init doesn't work as expected.

    Before, memmap_init_zone() was used to do the memmap init of one whole
    zone: it initialized all low zones of a NUMA node fully, but deferred
    the memmap init of the last zone in that node. However, since commit
    73a6e474cb376, memmap_init() iterates over the memblock regions inside
    one zone and calls memmap_init_zone() to do the memmap init for each
    region.

    E.g., on VMware's system, the memory layout is as below; there are two
    memory regions in node 2. The current code mistakenly initializes the
    whole 1st region [mem 0xab00000000-0xfcffffffff] eagerly, and only
    applies the deferred init to the 2nd region [mem
    0x10000000000-0x1033fffffff], initializing just one memory section of
    it. In fact, we expect only one memory section's memmap of the whole
    zone to be initialized eagerly. That is why so much more time is spent
    here.

    [ 0.008842] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
    [ 0.008842] ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
    [ 0.008843] ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x55ffffffff]
    [ 0.008844] ACPI: SRAT: Node 1 PXM 1 [mem 0x5600000000-0xaaffffffff]
    [ 0.008844] ACPI: SRAT: Node 2 PXM 2 [mem 0xab00000000-0xfcffffffff]
    [ 0.008845] ACPI: SRAT: Node 2 PXM 2 [mem 0x10000000000-0x1033fffffff]

    Now, let's add a parameter 'zone_end_pfn' to memmap_init_zone() to pass
    down the real zone end pfn, so that defer_init() can use it to judge
    whether deferred initialization should be applied zone-wide.
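
    A hedged sketch of the resulting check (simplified from mm/page_alloc.c,
    not the literal upstream diff; assumes CONFIG_DEFERRED_STRUCT_PAGE_INIT):

        /*
         * defer_init() now receives the end of the whole zone rather than
         * the end of the memblock region currently being initialized, so
         * the "init one section eagerly, defer the rest" decision is made
         * zone-wide.
         */
        static bool defer_init(int nid, unsigned long pfn,
                               unsigned long zone_end_pfn)
        {
                static unsigned long prev_end_pfn, nr_initialised;

                /* Reset the per-zone counter when we move to a new zone. */
                if (prev_end_pfn != zone_end_pfn) {
                        prev_end_pfn = zone_end_pfn;
                        nr_initialised = 0;
                }

                /* Always fully initialize the low zones of the node. */
                if (zone_end_pfn < pgdat_end_pfn(NODE_DATA(nid)))
                        return false;

                /* Once deferral has started for this node, keep deferring. */
                if (NODE_DATA(nid)->first_deferred_pfn != ULONG_MAX)
                        return true;

                /* Defer everything past the first initialized section. */
                if (++nr_initialised > PAGES_PER_SECTION &&
                    (pfn & (PAGES_PER_SECTION - 1)) == 0) {
                        NODE_DATA(nid)->first_deferred_pfn = pfn;
                        return true;
                }
                return false;
        }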

    Link: https://lkml.kernel.org/r/20201223080811.16211-1-bhe@redhat.com
    Link: https://lkml.kernel.org/r/20201223080811.16211-2-bhe@redhat.com
    Fixes: commit 73a6e474cb376 ("mm: memmap_init: iterate over memblock regions rather that check each PFN")
    Signed-off-by: Baoquan He
    Reported-by: Rahul Gopakumar
    Reviewed-by: Mike Rapoport
    Cc: David Hildenbrand
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Baoquan He
     

30 Nov, 2020

1 commit


24 Nov, 2020

1 commit

  • We call arch_cpu_idle() with RCU disabled, but then use
    local_irq_{en,dis}able(), which invokes tracing, which relies on RCU.

    Switch all arch_cpu_idle() implementations to use
    raw_local_irq_{en,dis}able() and carefully manage the lockdep, RCU, and
    tracing state like we do in the entry code.

    (XXX: we really should change arch_cpu_idle() to not return with
    interrupts enabled)
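
    A hedged before/after sketch of the conversion for one architecture
    (modelled loosely on the arm64 version; cpu_do_idle() stands in for the
    arch-specific idle entry point):

        /* before: local_irq_enable() ends up calling into tracing, which
         * relies on RCU - but RCU is still disabled at this point */
        void arch_cpu_idle(void)
        {
                cpu_do_idle();
                local_irq_enable();
        }

        /* after: the raw variant only touches the hardware interrupt flag;
         * the lockdep/RCU/tracing bookkeeping is handled by the generic
         * idle code, as in the entry code */
        void arch_cpu_idle(void)
        {
                cpu_do_idle();
                raw_local_irq_enable();
        }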

    Reported-by: Sven Schnelle
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Mark Rutland
    Tested-by: Mark Rutland
    Link: https://lkml.kernel.org/r/20201120114925.594122626@infradead.org

    Peter Zijlstra
     

23 Nov, 2020

1 commit

  • The core-mm has a default __weak implementation of phys_to_target_node()
    to mirror the weak definition of memory_add_physaddr_to_nid(). That
    symbol is exported for modules. However, while the export in
    mm/memory_hotplug.c exported the symbol in the configuration cases of:

    CONFIG_NUMA_KEEP_MEMINFO=y
    CONFIG_MEMORY_HOTPLUG=y

    ...and:

    CONFIG_NUMA_KEEP_MEMINFO=n
    CONFIG_MEMORY_HOTPLUG=y

    ...it failed to export the symbol in the case of:

    CONFIG_NUMA_KEEP_MEMINFO=y
    CONFIG_MEMORY_HOTPLUG=n

    Not only is that broken, but Christoph points out that the kernel should
    not be exporting any __weak symbol, which means that
    memory_add_physaddr_to_nid() example that phys_to_target_node() copied
    is broken too.

    Rework the definition of phys_to_target_node() and
    memory_add_physaddr_to_nid() to not require weak symbols. Move to the
    common arch override design-pattern of an asm header defining a symbol
    to replace the default implementation.

    The only common header that all memory_add_physaddr_to_nid() producing
    architectures implement is asm/sparsemem.h. In fact, powerpc already
    defines its memory_add_physaddr_to_nid() helper in sparsemem.h.
    Double-down on that observation and define phys_to_target_node() where
    necessary in asm/sparsemem.h. An alternate consideration that was
    discarded was to put this override in asm/numa.h, but that entangles
    with the definition of MAX_NUMNODES relative to the inclusion of
    linux/nodemask.h, and requires powerpc to grow a new header.
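
    A hedged sketch of the override pattern (modelled on the x86 variant;
    prototypes and the exact header hosting the fallback may differ):

        /* arch/x86/include/asm/sparsemem.h (sketch) */
        #ifdef CONFIG_NUMA_KEEP_MEMINFO
        extern int phys_to_target_node(phys_addr_t start);
        #define phys_to_target_node phys_to_target_node
        extern int memory_add_physaddr_to_nid(u64 start);
        #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
        #endif

        /* generic header (sketch): the fallback is only defined when the
         * arch did not provide one, so no __weak symbol or export is
         * needed */
        #ifndef phys_to_target_node
        static inline int phys_to_target_node(u64 start)
        {
                return 0;       /* illustrative default */
        }
        #endif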

    The dependency on NUMA_KEEP_MEMINFO for DEV_DAX_HMEM_DEVICES is invalid
    now that the symbol is properly exported / stubbed in all combinations
    of CONFIG_NUMA_KEEP_MEMINFO and CONFIG_MEMORY_HOTPLUG.

    [dan.j.williams@intel.com: v4]
    Link: https://lkml.kernel.org/r/160461461867.1505359.5301571728749534585.stgit@dwillia2-desk3.amr.corp.intel.com
    [dan.j.williams@intel.com: powerpc: fix create_section_mapping compile warning]
    Link: https://lkml.kernel.org/r/160558386174.2948926.2740149041249041764.stgit@dwillia2-desk3.amr.corp.intel.com

    Fixes: a035b6bf863e ("mm/memory_hotplug: introduce default phys_to_target_node() implementation")
    Reported-by: Randy Dunlap
    Reported-by: Thomas Gleixner
    Reported-by: kernel test robot
    Reported-by: Christoph Hellwig
    Signed-off-by: Dan Williams
    Signed-off-by: Andrew Morton
    Tested-by: Randy Dunlap
    Tested-by: Thomas Gleixner
    Reviewed-by: Thomas Gleixner
    Reviewed-by: Christoph Hellwig
    Cc: Joao Martins
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: Michael Ellerman
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Vishal Verma
    Cc: Stephen Rothwell
    Link: https://lkml.kernel.org/r/160447639846.1133764.7044090803980177548.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Linus Torvalds

    Dan Williams
     

26 Oct, 2020

1 commit

  • Use a more generic form for __section that requires quotes to avoid
    complications with clang and gcc differences.

    Remove the quote operator # from compiler_attributes.h __section macro.

    Convert all unquoted __section(foo) uses to quoted __section("foo").
    Also convert __attribute__((section("foo"))) uses to __section("foo")
    even if the __attribute__ has multiple list entry forms.
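
    A hedged before/after illustration (hypothetical variable, real macro
    names):

        /* before: the macro stringified its argument with the # operator */
        static int boot_flag __section(.init.data);

        /* after: callers pass a string literal, which both gcc and clang
         * handle consistently */
        static int boot_flag __section(".init.data");

        /* compiler_attributes.h now reduces to roughly: */
        #define __section(section) __attribute__((__section__(section)))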

    Conversion done using the script at:

    https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl

    Signed-off-by: Joe Perches
    Reviewed-by: Nick Desaulniers
    Reviewed-by: Miguel Ojeda
    Signed-off-by: Linus Torvalds

    Joe Perches
     

24 Oct, 2020

1 commit

  • Pull arch task_work cleanups from Jens Axboe:
    "Two cleanups that don't fit other categories:

    - Finally get the task_work_add() cleanup done properly, so we don't
    have random 0/1/false/true/TWA_SIGNAL confusing use cases. Updates
    all callers, and also fixes up the documentation for
    task_work_add().

    - While working on some TIF related changes for 5.11, this
    TIF_NOTIFY_RESUME cleanup fell out of that. Remove some arch
    duplication for how that is handled"
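
    For reference, a hedged sketch of the notification modes that replace
    the old 0/1/false/true values (simplified from include/linux/task_work.h
    after the series):

        enum task_work_notify_mode {
                TWA_NONE,       /* queue the work, no notification */
                TWA_RESUME,     /* notify via TIF_NOTIFY_RESUME */
                TWA_SIGNAL,     /* notify like signal delivery */
        };

        int task_work_add(struct task_struct *task, struct callback_head *work,
                          enum task_work_notify_mode notify);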

    * tag 'arch-cleanup-2020-10-22' of git://git.kernel.dk/linux-block:
    task_work: cleanup notification modes
    tracehook: clear TIF_NOTIFY_RESUME in tracehook_notify_resume()

    Linus Torvalds
     

23 Oct, 2020

2 commits

  • Pull Kbuild updates from Masahiro Yamada:

    - Support 'make compile_commands.json' to generate the compilation
    database more easily, avoiding stale entries

    - Support 'make clang-analyzer' and 'make clang-tidy' for static checks
    using clang-tidy

    - Preprocess scripts/modules.lds.S to allow CONFIG options in the
    module linker script

    - Drop cc-option tests from compiler flags supported by our minimal
    GCC/Clang versions

    - Always use a 12-digit commit hash for CONFIG_LOCALVERSION_AUTO=y

    - Use sha1 build id for both BFD linker and LLD

    - Improve deb-pkg for reproducible builds and rootless builds

    - Remove stale, useless scripts/namespace.pl

    - Turn -Wreturn-type warning into error

    - Fix build error of deb-pkg when CONFIG_MODULES=n

    - Replace 'hostname' command with more portable 'uname -n'

    - Various Makefile cleanups

    * tag 'kbuild-v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (34 commits)
    kbuild: Use uname for LINUX_COMPILE_HOST detection
    kbuild: Only add -fno-var-tracking-assignments for old GCC versions
    kbuild: remove leftover comment for filechk utility
    treewide: remove DISABLE_LTO
    kbuild: deb-pkg: clean up package name variables
    kbuild: deb-pkg: do not build linux-headers package if CONFIG_MODULES=n
    kbuild: enforce -Werror=return-type
    scripts: remove namespace.pl
    builddeb: Add support for all required debian/rules targets
    builddeb: Enable rootless builds
    builddeb: Pass -n to gzip for reproducible packages
    kbuild: split the build log of kallsyms
    kbuild: explicitly specify the build id style
    scripts/setlocalversion: make git describe output more reliable
    kbuild: remove cc-option test of -Werror=date-time
    kbuild: remove cc-option test of -fno-stack-check
    kbuild: remove cc-option test of -fno-strict-overflow
    kbuild: move CFLAGS_{KASAN,UBSAN,KCSAN} exports to relevant Makefiles
    kbuild: remove redundant CONFIG_KASAN check from scripts/Makefile.kasan
    kbuild: do not create built-in objects for external module builds
    ...

    Linus Torvalds
     
  • Pull initial set_fs() removal from Al Viro:
    "Christoph's set_fs base series + fixups"

    * 'work.set_fs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    fs: Allow a NULL pos pointer to __kernel_read
    fs: Allow a NULL pos pointer to __kernel_write
    powerpc: remove address space overrides using set_fs()
    powerpc: use non-set_fs based maccess routines
    x86: remove address space overrides using set_fs()
    x86: make TASK_SIZE_MAX usable from assembly code
    x86: move PAGE_OFFSET, TASK_SIZE & friends to page_{32,64}_types.h
    lkdtm: remove set_fs-based tests
    test_bitmap: remove user bitmap tests
    uaccess: add infrastructure for kernel builds with set_fs()
    fs: don't allow splice read/write without explicit ops
    fs: don't allow kernel reads and writes without iter ops
    sysctl: Convert to iter interfaces
    proc: add a read_iter method to proc proc_ops
    proc: cleanup the compat vs no compat file ops
    proc: remove a level of indentation in proc_get_inode

    Linus Torvalds
     

19 Oct, 2020

2 commits

    There is a use case where System Management Software (SMS) wants to
    give a memory hint like MADV_[COLD|PAGEOUT] to other processes; in the
    case of Android, that software is the ActivityManagerService.

    The information required to make the reclaim decision is not known to the
    app. Instead, it is known to the centralized userspace
    daemon (ActivityManagerService), and that daemon must be able to initiate
    reclaim on its own without any app involvement.

    To solve the issue, this patch introduces a new syscall,
    process_madvise(2). It uses the pidfd of an external process to give
    the hint. It also supports a vector of address ranges, because an
    Android app has thousands of VMAs due to zygote, so it is a waste of
    CPU and power to call the syscall once per VMA. (Testing a 2000-VMA
    syscall vs a 1-vector syscall showed a 15% performance improvement; I
    think it would be bigger in real practice because the test ran in a
    very cache-friendly environment.)

    Another potential use case for the vector range is to amortize the cost
    of TLB shootdowns for multiple ranges when using MADV_DONTNEED; this
    could benefit users like TCP receive zerocopy and malloc
    implementations. In the future, we may find more use cases for other
    advice values, so let's expose this as an API while we are introducing
    a new syscall anyway. With that, an existing madvise(2) user can switch
    to process_madvise(2) with its own pid if it wants batched
    address-range support.

    Since it could affect another process's address range, only a
    privileged process (PTRACE_MODE_ATTACH_FSCREDS) or one that otherwise
    has the right to ptrace the target process (e.g., the same UID) can use
    it successfully. The flags argument is reserved for future use if we
    need to extend the API.

    I think supporting every hint that madvise(2) has (or will have) in
    process_madvise is rather risky, because we are not sure all hints make
    sense coming from an external process, and the implementation of a hint
    may rely on the caller being in the current context, so it could be
    error-prone. Thus, I limited the hints to MADV_[COLD|PAGEOUT] in this
    patch.

    If someone wants to add other hints, we can hear the use case and
    review each hint individually. That is safer for maintenance than
    introducing a syscall that turns out buggy but is hard to fix later.

    So finally, the API is as follows,

    ssize_t process_madvise(int pidfd, const struct iovec *iovec,
                            unsigned long vlen, int advice, unsigned int flags);

    DESCRIPTION
    The process_madvise() system call is used to give advice or directions
    to the kernel about the address ranges from external process as well as
    local process. It provides the advice to address ranges of process
    described by iovec and vlen. The goal of such advice is to improve
    system or application performance.

    The pidfd selects the process referred to by the PID file descriptor
    specified in pidfd. (See pidfd_open(2) for further information.)

    The pointer iovec points to an array of iovec structures, defined in
    <sys/uio.h> as:

    struct iovec {
        void   *iov_base;   /* starting address */
        size_t  iov_len;    /* number of bytes to be advised */
    };

    Each iovec element describes an address range beginning at iov_base
    and extending for iov_len bytes.

    The vlen represents the number of elements in iovec.

    The advice is indicated in the advice argument. If the target process
    specified by pidfd is external, it is currently one of the following:

    MADV_COLD
    MADV_PAGEOUT

    Permission to provide a hint to an external process is governed by a
    PTRACE_MODE_ATTACH_FSCREDS ptrace access mode check; see ptrace(2).

    process_madvise() supports every advice madvise(2) has if the target
    process is in the same thread group as the calling process, so a user
    can use process_madvise(2) as an extension of madvise(2) that supports
    vectors of address ranges.

    RETURN VALUE
    On success, process_madvise() returns the number of bytes advised.
    This return value may be less than the total number of requested
    bytes, if an error occurred. The caller should check the return value
    to determine whether partial advice occurred.
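
    A minimal userspace usage sketch (assumes headers new enough to define
    SYS_pidfd_open and SYS_process_madvise, i.e. a Linux 5.10+ environment;
    MADV_PAGEOUT is passed numerically in case <sys/mman.h> is older):

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/syscall.h>
        #include <sys/types.h>
        #include <sys/uio.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
                if (argc != 4) {
                        fprintf(stderr, "usage: %s <pid> <addr> <len>\n", argv[0]);
                        return 1;
                }

                pid_t target = (pid_t)atoi(argv[1]);
                /* an address range inside the *target* process, e.g. taken
                 * from its /proc/<pid>/maps */
                struct iovec vec = {
                        .iov_base = (void *)strtoul(argv[2], NULL, 0),
                        .iov_len  = strtoul(argv[3], NULL, 0),
                };

                int pidfd = syscall(SYS_pidfd_open, target, 0);
                if (pidfd < 0) {
                        perror("pidfd_open");
                        return 1;
                }

                /* 21 == MADV_PAGEOUT */
                long ret = syscall(SYS_process_madvise, pidfd, &vec, 1, 21, 0);
                if (ret < 0) {
                        perror("process_madvise");
                        return 1;
                }
                printf("advised %ld bytes\n", ret);
                return 0;
        }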

    FAQ:

    Q.1 - Why does any external entity have better knowledge?

    Quote from Sandeep

    "For Android, every application (including the special SystemServer)
    are forked from Zygote. The reason of course is to share as many
    libraries and classes between the two as possible to benefit from the
    preloading during boot.

    After applications start, (almost) all of the APIs end up calling into
    this SystemServer process over IPC (binder) and back to the
    application.

    In a fully running system, the SystemServer monitors every single
    process periodically to calculate their PSS / RSS and also decides
    which process is "important" to the user for interactivity.

    So, because of how these processes start _and_ the fact that the
    SystemServer is looping to monitor each process, it does tend to *know*
    which address range of the application is not used / useful.

    Besides, we can never rely on applications to clean things up
    themselves. We've had the "hey app1, the system is low on memory,
    please trim your memory usage down" notifications for a long time[1].
    They rely on applications honoring the broadcasts and very few do.

    So, if we want to avoid the inevitable killing of the application and
    restarting it, some way to be able to tell the OS about unimportant
    memory in these applications will be useful.

    - ssp

    Q.2 - How do we guard against the race (i.e., object validation)
    between the time the external process gives a hint and the time the
    target process is affected by it?

    process_madvise operates on the target process's address space as it
    exists at the instant that process_madvise is called. If the target
    process can run between the time the calling process inspects the
    target's address space and the time that process_madvise is actually
    called, process_madvise may operate on
    memory regions that the calling process does not expect. It's the
    responsibility of the process calling process_madvise to close this
    race condition. For example, the calling process can suspend the
    target process with ptrace, SIGSTOP, or the freezer cgroup so that it
    doesn't have an opportunity to change its own address space before
    process_madvise is called. Another option is to operate on memory
    regions that the caller knows a priori will be unchanged in the target
    process. Yet another option is to accept the race for certain
    process_madvise calls after reasoning that mistargeting will do no
    harm. The suggested API itself does not provide synchronization; the
    same is true of other APIs like move_pages and process_vm_write.

    The race isn't really a problem though. Why is it so wrong to require
    that callers do their own synchronization in some manner? Nobody
    objects to write(2) merely because it's possible for two processes to
    open the same file and clobber each other's writes --- instead, we tell
    people to use flock or something. Think about mmap. It never
    guarantees newly allocated address space is still valid when the user
    tries to access it because other threads could unmap the memory right
    before. That is where synchronization has to come from another API or
    from the userspace design; it shouldn't be part of this API itself. If
    someone needs finer-grained synchronization than the process level, two
    ideas were suggested - a cookie[2] and an anon fd[3]. Both could be
    supported via the reserved last argument of the API, but I don't think
    that's necessary right now, since we already have ways to prevent the
    race and I don't want to add the complexity of a more fine-grained
    model.

    To keep the API extensible, the last argument is reserved, so we can
    support such a mechanism in the future if someone really needs it.

    Q.3 - Why doesn't ptrace work?

    Injecting an madvise in the target process using ptrace would not work
    for us because such an injected madvise would have to be executed by
    the target process, which means that process would have to be runnable,
    and that creates the risk of the above-mentioned race and of hinting a
    wrong VMA. Furthermore, we want to apply the hint in the caller's
    context, not the callee's, because the callee is usually limited by
    cpuset/cgroups or is even in a frozen state, so it cannot act quickly
    enough by itself, which causes more thrashing/kills. It also doesn't
    work if the target process is already being ptraced (e.g., by strace, a
    debugger, or minidump) because a process can have at most one ptracer.

    [1] https://developer.android.com/topic/performance/memory"

    [2] process_getinfo for getting the cookie which is updated whenever
    vma of process address layout are changed - Daniel Colascione -
    https://lore.kernel.org/lkml/20190520035254.57579-1-minchan@kernel.org/T/#m7694416fd179b2066a2c62b5b139b14e3894e224

    [3] anonymous fd which is used for the object(i.e., address range)
    validation - Michal Hocko -
    https://lore.kernel.org/lkml/20200120112722.GY18451@dhcp22.suse.cz/

    [minchan@kernel.org: fix process_madvise build break for arm64]
    Link: http://lkml.kernel.org/r/20200303145756.GA219683@google.com
    [minchan@kernel.org: fix build error for mips of process_madvise]
    Link: http://lkml.kernel.org/r/20200508052517.GA197378@google.com
    [akpm@linux-foundation.org: fix patch ordering issue]
    [akpm@linux-foundation.org: fix arm64 whoops]
    [minchan@kernel.org: make process_madvise() vlen arg have type size_t, per Florian]
    [akpm@linux-foundation.org: fix i386 build]
    [sfr@canb.auug.org.au: fix syscall numbering]
    Link: https://lkml.kernel.org/r/20200905142639.49fc3f1a@canb.auug.org.au
    [sfr@canb.auug.org.au: madvise.c needs compat.h]
    Link: https://lkml.kernel.org/r/20200908204547.285646b4@canb.auug.org.au
    [minchan@kernel.org: fix mips build]
    Link: https://lkml.kernel.org/r/20200909173655.GC2435453@google.com
    [yuehaibing@huawei.com: remove duplicate header which is included twice]
    Link: https://lkml.kernel.org/r/20200915121550.30584-1-yuehaibing@huawei.com
    [minchan@kernel.org: do not use helper functions for process_madvise]
    Link: https://lkml.kernel.org/r/20200921175539.GB387368@google.com
    [akpm@linux-foundation.org: pidfd_get_pid() gained an argument]
    [sfr@canb.auug.org.au: fix up for "iov_iter: transparently handle compat iovecs in import_iovec"]
    Link: https://lkml.kernel.org/r/20200928212542.468e1fef@canb.auug.org.au

    Signed-off-by: Minchan Kim
    Signed-off-by: YueHaibing
    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Reviewed-by: Suren Baghdasaryan
    Reviewed-by: Vlastimil Babka
    Acked-by: David Rientjes
    Cc: Alexander Duyck
    Cc: Brian Geffon
    Cc: Christian Brauner
    Cc: Daniel Colascione
    Cc: Jann Horn
    Cc: Jens Axboe
    Cc: Joel Fernandes
    Cc: Johannes Weiner
    Cc: John Dias
    Cc: Kirill Tkhai
    Cc: Michal Hocko
    Cc: Oleksandr Natalenko
    Cc: Sandeep Patil
    Cc: SeongJae Park
    Cc: SeongJae Park
    Cc: Shakeel Butt
    Cc: Sonny Rao
    Cc: Tim Murray
    Cc: Christian Brauner
    Cc: Florian Weimer
    Cc:
    Link: http://lkml.kernel.org/r/20200302193630.68771-3-minchan@kernel.org
    Link: http://lkml.kernel.org/r/20200508183320.GA125527@google.com
    Link: http://lkml.kernel.org/r/20200622192900.22757-4-minchan@kernel.org
    Link: https://lkml.kernel.org/r/20200901000633.1920247-4-minchan@kernel.org
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
  • Fix linkage error when CONFIG_BINFMT_ELF is selected but CONFIG_COREDUMP
    is not:

    ia64-linux-ld: arch/ia64/kernel/elfcore.o: in function `elf_core_write_extra_phdrs':
    elfcore.c:(.text+0x172): undefined reference to `dump_emit'
    ia64-linux-ld: arch/ia64/kernel/elfcore.o: in function `elf_core_write_extra_data':
    elfcore.c:(.text+0x2b2): undefined reference to `dump_emit'

    Fixes: 1fcccbac89f5 ("elf coredump: replace ELF_CORE_EXTRA_* macros by functions")
    Reported-by: kernel test robot
    Signed-off-by: Krzysztof Kozlowski
    Signed-off-by: Andrew Morton
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc:
    Link: https://lkml.kernel.org/r/20200819064146.12529-1-krzk@kernel.org
    Signed-off-by: Linus Torvalds

    Krzysztof Kozlowski
     

18 Oct, 2020

1 commit


17 Oct, 2020

1 commit

  • On the memory onlining path, we want to start with MIGRATE_ISOLATE, to
    un-isolate the pages after memory onlining is complete. Let's allow
    passing in the migratetype.
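
    A hedged sketch of the resulting interface and its use on the onlining
    path (signatures simplified; see mm/memory_hotplug.c for the real
    code):

        /* callers now say which migratetype the new pages start with */
        void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
                                    unsigned long nr_pages,
                                    struct vmem_altmap *altmap, int migratetype);

        /* onlining path (sketch): start isolated, un-isolate at the end */
        move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_ISOLATE);
        /* ... online and account the pages ... */
        undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);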

    Signed-off-by: David Hildenbrand
    Signed-off-by: Andrew Morton
    Reviewed-by: Oscar Salvador
    Acked-by: Michal Hocko
    Cc: Wei Yang
    Cc: Baoquan He
    Cc: Pankaj Gupta
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: Logan Gunthorpe
    Cc: Dan Williams
    Cc: Mike Rapoport
    Cc: "Matthew Wilcox (Oracle)"
    Cc: Michel Lespinasse
    Cc: Charan Teja Reddy
    Cc: Mel Gorman
    Link: https://lkml.kernel.org/r/20200819175957.28465-10-david@redhat.com
    Signed-off-by: Linus Torvalds

    David Hildenbrand
     

16 Oct, 2020

1 commit

  • Pull dma-mapping updates from Christoph Hellwig:

    - rework the non-coherent DMA allocator

    - move private definitions out of <linux/dma-mapping.h>

    - lower CMA_ALIGNMENT (Paul Cercueil)

    - remove the omap1 dma address translation in favor of the common code

    - make dma-direct aware of multiple dma offset ranges (Jim Quinlan)

    - support per-node DMA CMA areas (Barry Song)

    - increase the default seg boundary limit (Nicolin Chen)

    - misc fixes (Robin Murphy, Thomas Tai, Xu Wang)

    - various cleanups

    * tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
    ARM/ixp4xx: add a missing include of dma-map-ops.h
    dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
    dma-direct: factor out a dma_direct_alloc_from_pool helper
    dma-direct check for highmem pages in dma_direct_alloc_pages
    dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>
    dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma
    dma-mapping: move dma-debug.h to kernel/dma/
    dma-mapping: remove <asm/dma-contiguous.h>
    dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>
    dma-contiguous: remove dma_contiguous_set_default
    dma-contiguous: remove dev_set_cma_area
    dma-contiguous: remove dma_declare_contiguous
    dma-mapping: split <linux/dma-mapping.h>
    cma: decrease CMA_ALIGNMENT lower limit to 2
    firewire-ohci: use dma_alloc_pages
    dma-iommu: implement ->alloc_noncoherent
    dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
    dma-mapping: add a new dma_alloc_pages API
    dma-mapping: remove dma_cache_sync
    53c700: convert to dma_alloc_noncoherent
    ...

    Linus Torvalds
     

15 Oct, 2020

1 commit

  • Pull kernel_clone() updates from Christian Brauner:
    "During the v5.9 merge window we reworked the process creation
    codepaths across multiple architectures. After this work we were only
    left with the _do_fork() helper based on the struct kernel_clone_args
    calling convention. As was pointed out _do_fork() isn't valid
    kernelese especially for a helper that isn't just static.

    This series removes the _do_fork() helper and introduces the new
    kernel_clone() helper. The process creation cleanup didn't change the
    name to something more reasonable mainly because _do_fork() was used
    in quite a few places. So sending this as a separate series seemed the
    better strategy.

    I originally intended to send this early in the v5.9 development cycle
    after the merge window had closed but given that this was touching
    quite a few places I decided to defer this until the v5.10 merge
    window"

    * tag 'kernel-clone-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
    sched: remove _do_fork()
    tracing: switch to kernel_clone()
    kgdbts: switch to kernel_clone()
    kprobes: switch to kernel_clone()
    x86: switch to kernel_clone()
    sparc: switch to kernel_clone()
    nios2: switch to kernel_clone()
    m68k: switch to kernel_clone()
    ia64: switch to kernel_clone()
    h8300: switch to kernel_clone()
    fork: introduce kernel_clone()

    Linus Torvalds
     

13 Oct, 2020

5 commits

  • Pull copy_and_csum cleanups from Al Viro:
    "Saner calling conventions for csum_and_copy_..._user() and friends"

    [ Removing 800+ lines of code and cleaning stuff up is good - Linus ]

    * 'work.csum_and_copy' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    ppc: propagate the calling conventions change down to csum_partial_copy_generic()
    amd64: switch csum_partial_copy_generic() to new calling conventions
    sparc64: propagate the calling convention changes down to __csum_partial_copy_...()
    xtensa: propagate the calling conventions change down into csum_partial_copy_generic()
    mips: propagate the calling convention change down into __csum_partial_copy_..._user()
    mips: __csum_partial_copy_kernel() has no users left
    mips: csum_and_copy_{to,from}_user() are never called under KERNEL_DS
    sparc32: propagate the calling conventions change down to __csum_partial_copy_sparc_generic()
    i386: propagate the calling conventions change down to csum_partial_copy_generic()
    sh: propage the calling conventions change down to csum_partial_copy_generic()
    m68k: get rid of zeroing destination on error in csum_and_copy_from_user()
    arm: propagate the calling convention changes down to csum_partial_copy_from_user()
    alpha: propagate the calling convention changes down to csum_partial_copy.c helpers
    saner calling conventions for csum_and_copy_..._user()
    csum_and_copy_..._user(): pass 0xffffffff instead of 0 as initial sum
    csum_partial_copy_nocheck(): drop the last argument
    unify generic instances of csum_partial_copy_nocheck()
    icmp_push_reply(): reorder adding the checksum up
    skb_copy_and_csum_bits(): don't bother with the last argument

    Linus Torvalds
     
  • Pull ia64 updates from Tony Luck:
    "Cleanups by Christoph:

    - Switch to libata instead of legacy ide driver

    - Drop ia64 perfmon (it's been broken for a while)"

    * tag 'ia64_for_5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux:
    ia64: Use libata instead of the legacy ide driver in defconfigs
    ia64: Remove perfmon

    Linus Torvalds
     
  • Pull perf/kprobes updates from Ingo Molnar:
    "This prepares to unify the kretprobe trampoline handler and make
    kretprobe lockless (those patches are still work in progress)"

    * tag 'perf-kprobes-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    kprobes: Fix to check probe enabled before disarm_kprobe_ftrace()
    kprobes: Make local functions static
    kprobes: Free kretprobe_instance with RCU callback
    kprobes: Remove NMI context check
    sparc: kprobes: Use generic kretprobe trampoline handler
    sh: kprobes: Use generic kretprobe trampoline handler
    s390: kprobes: Use generic kretprobe trampoline handler
    powerpc: kprobes: Use generic kretprobe trampoline handler
    parisc: kprobes: Use generic kretprobe trampoline handler
    mips: kprobes: Use generic kretprobe trampoline handler
    ia64: kprobes: Use generic kretprobe trampoline handler
    csky: kprobes: Use generic kretprobe trampoline handler
    arc: kprobes: Use generic kretprobe trampoline handler
    arm64: kprobes: Use generic kretprobe trampoline handler
    arm: kprobes: Use generic kretprobe trampoline handler
    x86/kprobes: Use generic kretprobe trampoline handler
    kprobes: Add generic kretprobe trampoline handler

    Linus Torvalds
     
  • Pull orphan section checking from Ingo Molnar:
    "Orphan link sections were a long-standing source of obscure bugs,
    because the heuristics that various linkers & compilers use to handle
    them (include these bits into the output image vs discarding them
    silently) are both highly idiosyncratic and also version dependent.

    Instead of this historically problematic mess, this tree by Kees Cook
    (et al) adds build time asserts and build time warnings if there's any
    orphan section in the kernel or if a section is not sized as expected.

    And because we relied on so many silent assumptions in this area, fix
    a metric ton of dependencies and some outright bugs related to this,
    before we can finally enable the checks on the x86, ARM and ARM64
    platforms"

    * tag 'core-build-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits)
    x86/boot/compressed: Warn on orphan section placement
    x86/build: Warn on orphan section placement
    arm/boot: Warn on orphan section placement
    arm/build: Warn on orphan section placement
    arm64/build: Warn on orphan section placement
    x86/boot/compressed: Add missing debugging sections to output
    x86/boot/compressed: Remove, discard, or assert for unwanted sections
    x86/boot/compressed: Reorganize zero-size section asserts
    x86/build: Add asserts for unwanted sections
    x86/build: Enforce an empty .got.plt section
    x86/asm: Avoid generating unused kprobe sections
    arm/boot: Handle all sections explicitly
    arm/build: Assert for unwanted sections
    arm/build: Add missing sections
    arm/build: Explicitly keep .ARM.attributes sections
    arm/build: Refactor linker script headers
    arm64/build: Assert for unwanted sections
    arm64/build: Add missing DWARF sections
    arm64/build: Use common DISCARDS in linker script
    arm64/build: Remove .eh_frame* sections due to unwind tables
    ...

    Linus Torvalds
     
  • Pull x86 irq updates from Thomas Gleixner:
    "Surgery of the MSI interrupt handling to prepare the support of
    upcoming devices which require non-PCI based MSI handling:

    - Cleanup historical leftovers all over the place

    - Rework the code to utilize more core functionality

    - Wrap XEN PCI/MSI interrupts into an irqdomain to make irqdomain
    assignment to PCI devices possible.

    - Assign irqdomains to PCI devices at initialization time which
    allows to utilize the full functionality of hierarchical
    irqdomains.

    - Remove arch_.*_msi_irq() functions from X86 and utilize the
    irqdomain which is assigned to the device for interrupt management.

    - Make the arch_.*_msi_irq() support conditional on a config switch
    and let the last few users select it"

    * tag 'x86-irq-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (40 commits)
    PCI: MSI: Fix Kconfig dependencies for PCI_MSI_ARCH_FALLBACKS
    x86/apic/msi: Unbreak DMAR and HPET MSI
    iommu/amd: Remove domain search for PCI/MSI
    iommu/vt-d: Remove domain search for PCI/MSI[X]
    x86/irq: Make most MSI ops XEN private
    x86/irq: Cleanup the arch_*_msi_irqs() leftovers
    PCI/MSI: Make arch_.*_msi_irq[s] fallbacks selectable
    x86/pci: Set default irq domain in pcibios_add_device()
    iommm/amd: Store irq domain in struct device
    iommm/vt-d: Store irq domain in struct device
    x86/xen: Wrap XEN MSI management into irqdomain
    irqdomain/msi: Allow to override msi_domain_alloc/free_irqs()
    x86/xen: Consolidate XEN-MSI init
    x86/xen: Rework MSI teardown
    x86/xen: Make xen_msi_init() static and rename it to xen_hvm_msi_init()
    PCI/MSI: Provide pci_dev_has_special_msi_domain() helper
    PCI_vmd_Mark_VMD_irqdomain_with_DOMAIN_BUS_VMD_MSI
    irqdomain/msi: Provide DOMAIN_BUS_VMD_MSI
    x86/irq: Initialize PCI/MSI domain at PCI init time
    x86/pci: Reducde #ifdeffery in PCI init code
    ...

    Linus Torvalds
     

06 Oct, 2020

2 commits


28 Sep, 2020

1 commit

  • The unconditional selection of PCI_MSI_ARCH_FALLBACKS has an unmet
    dependency because PCI_MSI_ARCH_FALLBACKS is defined inside an 'if PCI' clause.

    As it is only relevant when PCI_MSI is enabled, update the affected
    architecture Kconfigs to make the selection of PCI_MSI_ARCH_FALLBACKS
    depend on 'if PCI_MSI'.

    Fixes: 077ee78e3928 ("PCI/MSI: Make arch_.*_msi_irq[s] fallbacks selectable")
    Reported-by: Qian Cai
    Signed-off-by: Thomas Gleixner
    Cc: Vasily Gorbik
    Links: https://lore.kernel.org/r/cdfd63305caa57785b0925dd24c0711ea02c8527.camel@redhat.com

    Thomas Gleixner
     

27 Sep, 2020

1 commit

  • Patch series "mm: fix memory to node bad links in sysfs", v3.

    Sometimes, firmware may expose interleaved memory layout like this:

    Early memory node ranges
    node 1: [mem 0x0000000000000000-0x000000011fffffff]
    node 2: [mem 0x0000000120000000-0x000000014fffffff]
    node 1: [mem 0x0000000150000000-0x00000001ffffffff]
    node 0: [mem 0x0000000200000000-0x000000048fffffff]
    node 2: [mem 0x0000000490000000-0x00000007ffffffff]

    In that case, we can see memory blocks assigned to multiple nodes in
    sysfs:

    $ ls -l /sys/devices/system/memory/memory21
    total 0
    lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
    lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
    -rw-r--r-- 1 root root 65536 Aug 24 05:27 online
    -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
    -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
    drwxr-xr-x 2 root root 0 Aug 24 05:27 power
    -r--r--r-- 1 root root 65536 Aug 24 05:27 removable
    -rw-r--r-- 1 root root 65536 Aug 24 05:27 state
    lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
    -rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
    -r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones

    The same applies in the node's directory with a memory21 link in both
    the node1 and node2's directory.

    This is wrong, but it doesn't prevent the system from running. However,
    when one of these memory blocks is later hot-unplugged and then
    hot-plugged again, the system detects an inconsistency in the sysfs
    layout and a BUG_ON() is raised:

    kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
    LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
    Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
    CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
    Call Trace:
    add_memory_resource+0x23c/0x340 (unreliable)
    __add_memory+0x5c/0xf0
    dlpar_add_lmb+0x1b4/0x500
    dlpar_memory+0x1f8/0xb80
    handle_dlpar_errorlog+0xc0/0x190
    dlpar_store+0x198/0x4a0
    kobj_attr_store+0x30/0x50
    sysfs_kf_write+0x64/0x90
    kernfs_fop_write+0x1b0/0x290
    vfs_write+0xe8/0x290
    ksys_write+0xdc/0x130
    system_call_exception+0x160/0x270
    system_call_common+0xf0/0x27c

    This has been seen on PowerPC LPAR.

    The root cause of this issue is that when a node's memory is registered,
    the range used can overlap another node's range; thus the memory block
    is registered to multiple nodes in sysfs.

    There are two issues here:

    (a) The sysfs memory and node's layouts are broken due to these
    multiple links

    (b) The link errors in link_mem_sections() should not lead to a system
    panic.

    To address (a), register_mem_sect_under_node() should not rely on the
    system state to detect whether the link operation is triggered by a
    hot-plug operation or not. This is addressed by patches 1 and 2 of this
    series.

    Issue (b) will be addressed separately.

    This patch (of 2):

    The memmap_context enum is used to detect whether a memory operation is
    due to a hot-add operation or happening at boot time.

    Make it general to the hotplug operation and rename it to
    meminit_context.

    There is no functional change introduced by this patch.
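
    A hedged sketch of the rename (enum from include/linux/mmzone.h,
    simplified):

        /* before */
        enum memmap_context {
                MEMMAP_EARLY,
                MEMMAP_HOTPLUG,
        };

        /* after: same two cases, named for memory init in general */
        enum meminit_context {
                MEMINIT_EARLY,
                MEMINIT_HOTPLUG,
        };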

    Suggested-by: David Hildenbrand
    Signed-off-by: Laurent Dufour
    Signed-off-by: Andrew Morton
    Reviewed-by: David Hildenbrand
    Reviewed-by: Oscar Salvador
    Acked-by: Michal Hocko
    Cc: Greg Kroah-Hartman
    Cc: "Rafael J . Wysocki"
    Cc: Nathan Lynch
    Cc: Scott Cheloha
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc:
    Link: https://lkml.kernel.org/r/20200915094143.79181-1-ldufour@linux.ibm.com
    Link: https://lkml.kernel.org/r/20200915132624.9723-1-ldufour@linux.ibm.com
    Signed-off-by: Linus Torvalds

    Laurent Dufour
     

25 Sep, 2020

2 commits


24 Sep, 2020

1 commit

  • There was a request to preprocess the module linker script like we
    do for the vmlinux one. (https://lkml.org/lkml/2020/8/21/512)

    The difference between vmlinux.lds and module.lds is that the latter
    is needed for external module builds, thus must be cleaned up by
    'make mrproper' instead of 'make clean'. Also, it must be created
    by 'make modules_prepare'.

    You cannot put it in arch/$(SRCARCH)/kernel/, which is cleaned up by
    'make clean'. I moved arch/$(SRCARCH)/kernel/module.lds to
    arch/$(SRCARCH)/include/asm/module.lds.h, which is included from
    scripts/module.lds.S.

    scripts/module.lds is fine because 'make clean' keeps all the
    build artifacts under scripts/.

    You can add arch-specific sections in <asm/module.lds.h>.

    Signed-off-by: Masahiro Yamada
    Tested-by: Jessica Yu
    Acked-by: Will Deacon
    Acked-by: Geert Uytterhoeven
    Acked-by: Palmer Dabbelt
    Reviewed-by: Kees Cook
    Acked-by: Jessica Yu

    Masahiro Yamada
     

17 Sep, 2020

1 commit


16 Sep, 2020

1 commit

  • The arch_.*_msi_irq[s] fallbacks are compiled in whether an architecture
    requires them or not. Architectures which are fully utilizing hierarchical
    irq domains should never call into that code.

    It's not only architectures which depend on that by implementing one or
    more of the weak functions; there is also a bunch of drivers which rely
    on the weak functions that invoke msi_controller::setup_irq[s] and
    msi_controller::teardown_irq.

    Make the architectures and drivers which rely on them select them in Kconfig
    and if not selected replace them by stub functions which emit a warning and
    fail the PCI/MSI interrupt allocation.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20200826112333.992429909@linutronix.de

    Thomas Gleixner
     

12 Sep, 2020

2 commits


11 Sep, 2020

1 commit


09 Sep, 2020

1 commit

    Add a CONFIG_SET_FS option that is selected by architectures that
    implement set_fs, which is all of them initially. If the option is not
    set stubs for routines related to overriding the address space are
    provided so that architectures can start to opt out of providing set_fs.

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Kees Cook
    Signed-off-by: Al Viro

    Christoph Hellwig
     

08 Sep, 2020

1 commit


04 Sep, 2020

1 commit

    We found that callers of dma_get_seg_boundary mostly do an ALIGN
    with the page mask and then a page shift to get the number of pages:
    ALIGN(boundary + 1, 1 << shift) >> shift

    However, the boundary might be as large as ULONG_MAX, which means
    that a device has no specific boundary limit. So either "+ 1" or
    passing it to ALIGN() would potentially overflow.

    According to kernel defines:
    #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
    #define ALIGN(x, a) ALIGN_MASK(x, (typeof(x))(a) - 1)

    We can simplify the logic here into a helper function doing:
    ALIGN(boundary + 1, 1 << shift) >> shift
    = ALIGN_MASK(b + 1, (1 << s) - 1) >> s
    = {[b + 1 + (1 << s) - 1] & ~[(1 << s) - 1]} >> s
    = [b + 1 + (1 << s) - 1] >> s
    = [b + (1 << s)] >> s
    = (b >> s) + 1

    This patch introduces and applies dma_get_seg_boundary_nr_pages()
    as an overflow-free helper for the dma_get_seg_boundary() callers
    to get numbers of pages. It also takes care of the NULL dev case
    for non-DMA API callers.
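
    A hedged sketch of the helper (close to the include/linux/dma-mapping.h
    version, reformatted here):

        /*
         * Overflow-free: shift the boundary mask first, then add 1, instead
         * of ALIGN(boundary + 1, ...), which can wrap when the boundary is
         * ULONG_MAX. A NULL dev (non-DMA API callers) falls back to a
         * 32-bit boundary.
         */
        static inline unsigned long dma_get_seg_boundary_nr_pages(struct device *dev,
                        unsigned int page_shift)
        {
                if (!dev)
                        return (U32_MAX >> page_shift) + 1;
                return (dma_get_seg_boundary(dev) >> page_shift) + 1;
        }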

    Suggested-by: Christoph Hellwig
    Signed-off-by: Nicolin Chen
    Acked-by: Niklas Schnelle
    Acked-by: Michael Ellerman (powerpc)
    Signed-off-by: Christoph Hellwig

    Nicolin Chen
     

02 Sep, 2020

1 commit

  • Fix min_low_pfn/max_low_pfn build errors for arch/ia64/: (e.g.)

    ERROR: "max_low_pfn" [drivers/rpmsg/virtio_rpmsg_bus.ko] undefined!
    ERROR: "min_low_pfn" [drivers/rpmsg/virtio_rpmsg_bus.ko] undefined!
    ERROR: "min_low_pfn" [drivers/hwtracing/intel_th/intel_th_msu.ko] undefined!
    ERROR: "max_low_pfn" [drivers/hwtracing/intel_th/intel_th_msu.ko] undefined!
    ERROR: "min_low_pfn" [drivers/crypto/cavium/nitrox/n5pf.ko] undefined!
    ERROR: "max_low_pfn" [drivers/crypto/cavium/nitrox/n5pf.ko] undefined!
    ERROR: "max_low_pfn" [drivers/md/dm-integrity.ko] undefined!
    ERROR: "min_low_pfn" [drivers/md/dm-integrity.ko] undefined!
    ERROR: "max_low_pfn" [crypto/tcrypt.ko] undefined!
    ERROR: "min_low_pfn" [crypto/tcrypt.ko] undefined!
    ERROR: "min_low_pfn" [security/keys/encrypted-keys/encrypted-keys.ko] undefined!
    ERROR: "max_low_pfn" [security/keys/encrypted-keys/encrypted-keys.ko] undefined!
    ERROR: "min_low_pfn" [arch/ia64/kernel/mca_recovery.ko] undefined!
    ERROR: "max_low_pfn" [arch/ia64/kernel/mca_recovery.ko] undefined!

    David suggested just exporting min_low_pfn & max_low_pfn in
    mm/memblock.c:
    https://lore.kernel.org/lkml/alpine.DEB.2.22.394.2006291911220.1118534@chino.kir.corp.google.com/
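
    A hedged sketch of that suggestion (the variables are already defined in
    mm/memblock.c; the fix essentially just exports them):

        /* mm/memblock.c (sketch) */
        EXPORT_SYMBOL(min_low_pfn);
        EXPORT_SYMBOL(max_low_pfn);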

    Reported-by: kernel test robot
    Signed-off-by: Randy Dunlap
    Acked-by: David Rientjes
    Acked-by: Tony Luck
    Cc: linux-mm@kvack.org
    Cc: Andrew Morton
    Cc: David Rientjes
    Cc: Mike Rapoport
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: linux-ia64@vger.kernel.org
    Signed-off-by: Mike Rapoport

    Randy Dunlap
     

01 Sep, 2020

1 commit

  • The .comment section doesn't belong in STABS_DEBUG. Split it out into a
    new macro named ELF_DETAILS. This will gain other non-debug sections
    that need to be accounted for when linking with --orphan-handling=warn.

    Signed-off-by: Kees Cook
    Signed-off-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org
    Link: https://lore.kernel.org/r/20200821194310.3089815-5-keescook@chromium.org

    Kees Cook
     

24 Aug, 2020

1 commit

    Replace the existing /* fall through */ comments and their variants with
    the new pseudo-keyword macro fallthrough[1]. Also, remove fall-through
    markings where they are unnecessary.

    [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
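
    A hedged before/after example of the conversion (hypothetical switch
    statement; fallthrough is the real pseudo-keyword from
    include/linux/compiler_attributes.h):

        /* before */
        switch (state) {
        case STATE_INIT:
                setup();
                /* fall through */
        case STATE_RUN:
                run();
                break;
        }

        /* after */
        switch (state) {
        case STATE_INIT:
                setup();
                fallthrough;
        case STATE_RUN:
                run();
                break;
        }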

    Signed-off-by: Gustavo A. R. Silva

    Gustavo A. R. Silva
     

21 Aug, 2020

1 commit

    Quite a few architectures have the same csum_partial_copy_nocheck() -
    simply memcpy() the data and then return the csum of the copy.

    hexagon, parisc, ia64, s390, um: explicitly spelled out that way.

    arc, arm64, csky, h8300, m68k/nommu, microblaze, mips/GENERIC_CSUM, nds32,
    nios2, openrisc, riscv, unicore32: end up picking the same thing spelled
    out in lib/checksum.h (with varying amounts of perversions along the way).

    everybody else (alpha, arm, c6x, m68k/mmu, mips/!GENERIC_CSUM, powerpc,
    sh, sparc, x86, xtensa) have non-generic variants. For all except c6x
    the declaration is in their asm/checksum.h. c6x uses the wrapper
    from asm-generic/checksum.h that would normally lead to the lib/checksum.h
    instance, but in case of c6x we end up using an asm function from arch/c6x
    instead.

    Screw that mess - have architectures with private instances define
    _HAVE_ARCH_CSUM_AND_COPY in their asm/checksum.h and have the default
    one right in net/checksum.h conditional on _HAVE_ARCH_CSUM_AND_COPY
    *not* defined.
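
    The resulting default roughly looks like this (hedged paraphrase of the
    net/checksum.h fallback):

        #ifndef _HAVE_ARCH_CSUM_AND_COPY
        static __always_inline
        __wsum csum_partial_copy_nocheck(const void *src, void *dst, int len)
        {
                memcpy(dst, src, len);
                return csum_partial(dst, len, 0);
        }
        #endif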

    Signed-off-by: Al Viro

    Al Viro
     

20 Aug, 2020

1 commit

  • The old _do_fork() helper is removed in favor of the new kernel_clone() helper.
    The latter adheres to naming conventions for kernel internal syscall helpers.

    Signed-off-by: Christian Brauner
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: linux-ia64@vger.kernel.org
    Link: https://lore.kernel.org/r/20200819104655.436656-4-christian.brauner@ubuntu.com

    Christian Brauner