13 Jan, 2021

1 commit

  • [ Upstream commit 87dbc209ea04645fd2351981f09eff5d23f8e2e9 ]

    Make <asm/local64.h> mandatory in include/asm-generic/Kbuild and
    remove all arch/*/include/asm/local64.h arch-specific files since they
    only #include <asm-generic/local64.h>.

    This fixes build errors on arch/c6x/ and arch/nios2/ for
    block/blk-iocost.c.

    Build-tested on 21 of 25 arches (tools problems on the others).

    Yes, we could even rename <asm-generic/local64.h> to <linux/local64.h>
    and change all #includes to use <linux/local64.h> instead.
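
    A minimal sketch of the mechanism, assuming the usual Kbuild wrapper
    generation: listing the header under mandatory-y makes Kbuild generate the
    same one-line wrapper that the deleted per-arch headers contained.

    # include/asm-generic/Kbuild
    mandatory-y += local64.h

    /* arch/<arch>/include/asm/local64.h -- the pattern being deleted */
    #include <asm-generic/local64.h>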

    Link: https://lkml.kernel.org/r/20201227024446.17018-1-rdunlap@infradead.org
    Signed-off-by: Randy Dunlap
    Suggested-by: Christoph Hellwig
    Reviewed-by: Masahiro Yamada
    Cc: Jens Axboe
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Aurelien Jacquiot
    Cc: Peter Zijlstra
    Cc: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Randy Dunlap
     

24 Nov, 2020

1 commit

  • We call arch_cpu_idle() with RCU disabled, but then use
    local_irq_{en,dis}able(), which invokes tracing, which relies on RCU.

    Switch all arch_cpu_idle() implementations to use
    raw_local_irq_{en,dis}able() and carefully manage the
    lockdep, RCU, and tracing state like we do in the entry code.

    (XXX: we really should change arch_cpu_idle() to not return with
    interrupts enabled)
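
    A hedged sketch of the per-arch pattern this switches to (roughly the
    hexagon flavor; the actual wait-for-interrupt primitive and surrounding
    state management differ per architecture):

    void arch_cpu_idle(void)
    {
            __vmwait();             /* arch-specific wait-for-interrupt */
            /* irqs are still masked here; use the raw_ variant because the
               traced local_irq_enable() would call into tracing without RCU */
            raw_local_irq_enable(); /* was: local_irq_enable() */
    }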

    Reported-by: Sven Schnelle
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Mark Rutland
    Tested-by: Mark Rutland
    Link: https://lkml.kernel.org/r/20201120114925.594122626@infradead.org

    Peter Zijlstra
     

24 Oct, 2020

1 commit

  • Pull arch task_work cleanups from Jens Axboe:
    "Two cleanups that don't fit other categories:

    - Finally get the task_work_add() cleanup done properly, so we don't
    have random 0/1/false/true/TWA_SIGNAL confusing use cases. Updates
    all callers, and also fixes up the documentation for
    task_work_add().

    - While working on some TIF related changes for 5.11, this
    TIF_NOTIFY_RESUME cleanup fell out of that. Remove some arch
    duplication for how that is handled"
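
    For the first cleanup above, a sketch of the resulting interface as I read
    it: callers now pass an explicit notification mode instead of the old
    0/1/false/true values.

    enum task_work_notify_mode {
            TWA_NONE,
            TWA_RESUME,
            TWA_SIGNAL,
    };

    int task_work_add(struct task_struct *task, struct callback_head *work,
                      enum task_work_notify_mode notify);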

    * tag 'arch-cleanup-2020-10-22' of git://git.kernel.dk/linux-block:
    task_work: cleanup notification modes
    tracehook: clear TIF_NOTIFY_RESUME in tracehook_notify_resume()

    Linus Torvalds
     

23 Oct, 2020

1 commit

  • Pull initial set_fs() removal from Al Viro:
    "Christoph's set_fs base series + fixups"

    * 'work.set_fs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    fs: Allow a NULL pos pointer to __kernel_read
    fs: Allow a NULL pos pointer to __kernel_write
    powerpc: remove address space overrides using set_fs()
    powerpc: use non-set_fs based maccess routines
    x86: remove address space overrides using set_fs()
    x86: make TASK_SIZE_MAX usable from assembly code
    x86: move PAGE_OFFSET, TASK_SIZE & friends to page_{32,64}_types.h
    lkdtm: remove set_fs-based tests
    test_bitmap: remove user bitmap tests
    uaccess: add infrastructure for kernel builds with set_fs()
    fs: don't allow splice read/write without explicit ops
    fs: don't allow kernel reads and writes without iter ops
    sysctl: Convert to iter interfaces
    proc: add a read_iter method to proc proc_ops
    proc: cleanup the compat vs no compat file ops
    proc: remove a level of indentation in proc_get_inode

    Linus Torvalds
     

18 Oct, 2020

1 commit


16 Oct, 2020

1 commit

  • Pull dma-mapping updates from Christoph Hellwig:

    - rework the non-coherent DMA allocator

    - move private definitions out of <linux/dma-mapping.h>

    - lower CMA_ALIGNMENT (Paul Cercueil)

    - remove the omap1 dma address translation in favor of the common code

    - make dma-direct aware of multiple dma offset ranges (Jim Quinlan)

    - support per-node DMA CMA areas (Barry Song)

    - increase the default seg boundary limit (Nicolin Chen)

    - misc fixes (Robin Murphy, Thomas Tai, Xu Wang)

    - various cleanups
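
    One user-visible piece of the non-coherent allocator rework mentioned
    above is the new dma_alloc_pages() API; a hedged sketch of its use
    (error handling abbreviated, dev/size assumed to be set up by the caller):

    struct page *page;
    dma_addr_t dma_handle;

    page = dma_alloc_pages(dev, size, &dma_handle, DMA_BIDIRECTIONAL,
                           GFP_KERNEL);
    if (page) {
            /* use page_address(page) for the CPU and dma_handle for the device */
            dma_free_pages(dev, size, page, dma_handle, DMA_BIDIRECTIONAL);
    }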

    * tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
    ARM/ixp4xx: add a missing include of dma-map-ops.h
    dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
    dma-direct: factor out a dma_direct_alloc_from_pool helper
    dma-direct: check for highmem pages in dma_direct_alloc_pages
    dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>
    dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma
    dma-mapping: move dma-debug.h to kernel/dma/
    dma-mapping: remove <asm/dma-contiguous.h>
    dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>
    dma-contiguous: remove dma_contiguous_set_default
    dma-contiguous: remove dev_set_cma_area
    dma-contiguous: remove dma_declare_contiguous
    dma-mapping: split <linux/dma-mapping.h>
    cma: decrease CMA_ALIGNMENT lower limit to 2
    firewire-ohci: use dma_alloc_pages
    dma-iommu: implement ->alloc_noncoherent
    dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
    dma-mapping: add a new dma_alloc_pages API
    dma-mapping: remove dma_cache_sync
    53c700: convert to dma_alloc_noncoherent
    ...

    Linus Torvalds
     

13 Oct, 2020

2 commits

  • Pull copy_and_csum cleanups from Al Viro:
    "Saner calling conventions for csum_and_copy_..._user() and friends"

    [ Removing 800+ lines of code and cleaning stuff up is good - Linus ]

    * 'work.csum_and_copy' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    ppc: propagate the calling conventions change down to csum_partial_copy_generic()
    amd64: switch csum_partial_copy_generic() to new calling conventions
    sparc64: propagate the calling convention changes down to __csum_partial_copy_...()
    xtensa: propagate the calling conventions change down into csum_partial_copy_generic()
    mips: propagate the calling convention change down into __csum_partial_copy_..._user()
    mips: __csum_partial_copy_kernel() has no users left
    mips: csum_and_copy_{to,from}_user() are never called under KERNEL_DS
    sparc32: propagate the calling conventions change down to __csum_partial_copy_sparc_generic()
    i386: propagate the calling conventions change down to csum_partial_copy_generic()
    sh: propage the calling conventions change down to csum_partial_copy_generic()
    m68k: get rid of zeroing destination on error in csum_and_copy_from_user()
    arm: propagate the calling convention changes down to csum_partial_copy_from_user()
    alpha: propagate the calling convention changes down to csum_partial_copy.c helpers
    saner calling conventions for csum_and_copy_..._user()
    csum_and_copy_..._user(): pass 0xffffffff instead of 0 as initial sum
    csum_partial_copy_nocheck(): drop the last argument
    unify generic instances of csum_partial_copy_nocheck()
    icmp_push_reply(): reorder adding the checksum up
    skb_copy_and_csum_bits(): don't bother with the last argument

    Linus Torvalds
     
  • Pull orphan section checking from Ingo Molnar:
    "Orphan link sections were a long-standing source of obscure bugs,
    because the heuristics that various linkers & compilers use to handle
    them (include these bits into the output image vs discarding them
    silently) are both highly idiosyncratic and also version dependent.

    Instead of this historically problematic mess, this tree by Kees Cook
    (et al) adds build time asserts and build time warnings if there's any
    orphan section in the kernel or if a section is not sized as expected.

    And because we relied on so many silent assumptions in this area, fix
    a metric ton of dependencies and some outright bugs related to this,
    before we can finally enable the checks on the x86, ARM and ARM64
    platforms"

    * tag 'core-build-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits)
    x86/boot/compressed: Warn on orphan section placement
    x86/build: Warn on orphan section placement
    arm/boot: Warn on orphan section placement
    arm/build: Warn on orphan section placement
    arm64/build: Warn on orphan section placement
    x86/boot/compressed: Add missing debugging sections to output
    x86/boot/compressed: Remove, discard, or assert for unwanted sections
    x86/boot/compressed: Reorganize zero-size section asserts
    x86/build: Add asserts for unwanted sections
    x86/build: Enforce an empty .got.plt section
    x86/asm: Avoid generating unused kprobe sections
    arm/boot: Handle all sections explicitly
    arm/build: Assert for unwanted sections
    arm/build: Add missing sections
    arm/build: Explicitly keep .ARM.attributes sections
    arm/build: Refactor linker script headers
    arm64/build: Assert for unwanted sections
    arm64/build: Add missing DWARF sections
    arm64/build: Use common DISCARDS in linker script
    arm64/build: Remove .eh_frame* sections due to unwind tables
    ...

    Linus Torvalds
     

06 Oct, 2020

1 commit


09 Sep, 2020

1 commit

    Add a CONFIG_SET_FS option that is selected by architectures that
    implement set_fs(), which is all of them initially. If the option is not
    set, stubs for the routines related to overriding the address space are
    provided so that architectures can start to opt out of providing set_fs().
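
    A hedged sketch of the opt-out shape described here (the exact stub set is
    per the series; this only illustrates the direction):

    # arch/Kconfig
    config SET_FS
            bool

    /* include/linux/uaccess.h -- when an arch does not select SET_FS,
       dummy replacements stand in for the address-space override machinery */
    #ifndef CONFIG_SET_FS
    typedef struct {
            /* empty dummy */
    } mm_segment_t;

    #define uaccess_kernel()        (false)
    #endif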

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Kees Cook
    Signed-off-by: Al Viro

    Christoph Hellwig
     

01 Sep, 2020

1 commit

  • The .comment section doesn't belong in STABS_DEBUG. Split it out into a
    new macro named ELF_DETAILS. This will gain other non-debug sections
    that need to be accounted for when linking with --orphan-handling=warn.
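
    A sketch of the split in include/asm-generic/vmlinux.lds.h (the macro
    grows more non-debug sections in later patches of the series):

    #define ELF_DETAILS                                                     \
                    .comment 0 : { *(.comment) }

    /* arch linker scripts then emit it alongside the debug macros: */
    STABS_DEBUG
    DWARF_DEBUG
    ELF_DETAILS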

    Signed-off-by: Kees Cook
    Signed-off-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org
    Link: https://lore.kernel.org/r/20200821194310.3089815-5-keescook@chromium.org

    Kees Cook
     

24 Aug, 2020

1 commit

    Replace the existing /* fall through */ comments and their variants with
    the new pseudo-keyword macro fallthrough [1]. Also, remove fall-through
    markings where they are unnecessary.

    [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
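
    The conversion itself is mechanical; a small sketch of the result:

    switch (cond) {
    case A:
            do_a();
            fallthrough;    /* previously a "fall through" comment */
    case B:
            do_b();
            break;
    }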

    Signed-off-by: Gustavo A. R. Silva

    Gustavo A. R. Silva
     

21 Aug, 2020

1 commit

  • quite a few architectures have the same csum_partial_copy_nocheck() -
    simply memcpy() the data and then return the csum of the copy.

    hexagon, parisc, ia64, s390, um: explicitly spelled out that way.

    arc, arm64, csky, h8300, m68k/nommu, microblaze, mips/GENERIC_CSUM, nds32,
    nios2, openrisc, riscv, unicore32: end up picking the same thing spelled
    out in lib/checksum.h (with varying amounts of perversions along the way).

    everybody else (alpha, arm, c6x, m68k/mmu, mips/!GENERIC_CSUM, powerpc,
    sh, sparc, x86, xtensa) have non-generic variants. For all except c6x
    the declaration is in their asm/checksum.h. c6x uses the wrapper
    from asm-generic/checksum.h that would normally lead to the lib/checksum.h
    instance, but in case of c6x we end up using an asm function from arch/c6x
    instead.

    Screw that mess - have architectures with private instances define
    _HAVE_ARCH_CSUM_AND_COPY in their asm/checksum.h and have the default
    one right in net/checksum.h conditional on _HAVE_ARCH_CSUM_AND_COPY
    *not* defined.
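
    A sketch of the resulting generic default (per the description above,
    guarded so arch-private instances take precedence):

    /* include/net/checksum.h */
    #ifndef _HAVE_ARCH_CSUM_AND_COPY
    static inline __wsum
    csum_partial_copy_nocheck(const void *src, void *dst, int len)
    {
            memcpy(dst, src, len);
            return csum_partial(dst, len, 0);
    }
    #endif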

    Signed-off-by: Al Viro

    Al Viro
     

13 Aug, 2020

2 commits

    Use the general page fault accounting by passing regs into
    handle_mm_fault(). It naturally solves the issue of a page fault being
    accounted multiple times when page fault retries happen.

    Add the missing PERF_COUNT_SW_PAGE_FAULTS perf events too. Note, the
    other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) were done in
    handle_mm_fault().
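
    In arch fault-handler terms the change looks roughly like this (a hedged
    sketch; the concrete handlers differ per architecture):

    /* count the fault once, at handler entry */
    perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);

    /* pass regs so the core code does the per-task and MAJ/MIN perf
       accounting only for the attempt that completes the fault */
    fault = handle_mm_fault(vma, address, flags, regs);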

    Signed-off-by: Peter Xu
    Signed-off-by: Andrew Morton
    Acked-by: Brian Cain
    Link: http://lkml.kernel.org/r/20200707225021.200906-8-peterx@redhat.com
    Signed-off-by: Linus Torvalds

    Peter Xu
     
  • Patch series "mm: Page fault accounting cleanups", v5.

    This is v5 of the pf accounting cleanup series. It originates from Gerald
    Schaefer's report, a week ago, of an issue regarding incorrect page fault
    accounting for retried page faults after commit 4064b9827063 ("mm: allow
    VM_FAULT_RETRY for multiple times"):

    https://lore.kernel.org/lkml/20200610174811.44b94525@thinkpad/

    What this series did:

    - Correct page fault accounting: we do accounting for a page fault
    (no matter whether it's from #PF handling, or gup, or anything else)
    only with the attempt that completed the fault. For example, page fault
    retries should not be counted in page fault counters. The same applies
    to the perf events.

    - Unify definition of PERF_COUNT_SW_PAGE_FAULTS: currently this perf
    event is used in an ad-hoc way across different archs.

    Case (1): for many archs it's done at the entry of a page fault
    handler, so that it will also cover e.g. erroneous faults.

    Case (2): for some other archs, it is only accounted when the page
    fault is resolved successfully.

    Case (3): there are still quite a few archs that have not enabled
    this perf event.

    Since this series touches nearly all the archs, we unify this
    perf event to always follow case (1), which is the one that makes the
    most sense. And since we moved the accounting into handle_mm_fault(), the
    other two MAJ/MIN perf events are well taken care of naturally.

    - Unify definition of "major faults": the definition of "major
    fault" is slightly changed when used in accounting (not
    VM_FAULT_MAJOR). More information in patch 1.

    - Always account the page fault onto the task that triggered the page
    fault. This does not matter much for #PF handling, but mostly for
    gup. More information on this in patch 25.

    Patchset layout:

    Patch 1: Introduced the accounting in handle_mm_fault(), not enabled.
    Patch 2-23: Enable the new accounting for arch #PF handlers one by one.
    Patch 24: Enable the new accounting for the rest outliers (gup, iommu, etc.)
    Patch 25: Cleanup GUP task_struct pointer since it's not needed any more

    This patch (of 25):

    This is a preparation patch to move page fault accounting into the
    general code in handle_mm_fault(). This includes both the per-task
    flt_maj/flt_min counters and the major/minor page fault perf events. To
    do this, the pt_regs pointer is passed into handle_mm_fault().

    PERF_COUNT_SW_PAGE_FAULTS should still be kept in per-arch page fault
    handlers.

    So far, all the pt_regs pointers passed into handle_mm_fault() are
    NULL, which means this patch should have no intended functional change.
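
    The interface change made by this preparation patch, sketched:

    vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
                               unsigned int flags, struct pt_regs *regs);

    /* every existing caller passes NULL for now, so no functional change */
    fault = handle_mm_fault(vma, address, flags, NULL);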

    Suggested-by: Linus Torvalds
    Signed-off-by: Peter Xu
    Signed-off-by: Andrew Morton
    Cc: Albert Ou
    Cc: Alexander Gordeev
    Cc: Andy Lutomirski
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Brian Cain
    Cc: Catalin Marinas
    Cc: Christian Borntraeger
    Cc: Chris Zankel
    Cc: Dave Hansen
    Cc: David S. Miller
    Cc: Geert Uytterhoeven
    Cc: Gerald Schaefer
    Cc: Greentime Hu
    Cc: Guo Ren
    Cc: Heiko Carstens
    Cc: Helge Deller
    Cc: H. Peter Anvin
    Cc: Ingo Molnar
    Cc: Ivan Kokshaysky
    Cc: James E.J. Bottomley
    Cc: John Hubbard
    Cc: Jonas Bonn
    Cc: Ley Foon Tan
    Cc: "Luck, Tony"
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Nick Hu
    Cc: Palmer Dabbelt
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Pekka Enberg
    Cc: Peter Zijlstra
    Cc: Richard Henderson
    Cc: Rich Felker
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Stefan Kristiansson
    Cc: Thomas Bogendoerfer
    Cc: Thomas Gleixner
    Cc: Vasily Gorbik
    Cc: Vincent Chen
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Link: http://lkml.kernel.org/r/20200707225021.200906-1-peterx@redhat.com
    Link: http://lkml.kernel.org/r/20200707225021.200906-2-peterx@redhat.com
    Signed-off-by: Linus Torvalds

    Peter Xu
     

10 Aug, 2020

1 commit

  • Pull regset conversion fix from Al Viro:
    "Fix a regression from an unnoticed bisect hazard in the regset series.

    A bunch of old (aout, originally) primitives used by coredumps became
    dead code after fdpic conversion to regsets. Removal of that dead code
    had been the first commit in the followups to regset series;
    unfortunately, it happened to hide the bisect hazard on sh (extern for
    fpregs_get() had not been updated in the main series when it should
    have been; followup simply made fpregs_get() static). And without that
    followup commit this bisect hazard became breakage in the mainline"

    Tested-by: John Paul Adrian Glaubitz

    * 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    kill unused dump_fpu() instances

    Linus Torvalds
     

08 Aug, 2020

3 commits

  • Merge misc updates from Andrew Morton:

    - a few MM hotfixes

    - kthread, tools, scripts, ntfs and ocfs2

    - some of MM

    Subsystems affected by this patch series: kthread, tools, scripts, ntfs,
    ocfs2 and mm (hotfixes, pagealloc, slab-generic, slab, slub, kcsan,
    debug, pagecache, gup, swap, shmem, memcg, pagemap, mremap, mincore,
    sparsemem, vmalloc, kasan, pagealloc, hugetlb and vmscan).

    * emailed patches from Andrew Morton: (162 commits)
    mm: vmscan: consistent update to pgrefill
    mm/vmscan.c: fix typo
    khugepaged: khugepaged_test_exit() check mmget_still_valid()
    khugepaged: retract_page_tables() remember to test exit
    khugepaged: collapse_pte_mapped_thp() protect the pmd lock
    khugepaged: collapse_pte_mapped_thp() flush the right range
    mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible
    mm: thp: replace HTTP links with HTTPS ones
    mm/page_alloc: fix memalloc_nocma_{save/restore} APIs
    mm/page_alloc.c: skip setting nodemask when we are in interrupt
    mm/page_alloc: fallbacks at most has 3 elements
    mm/page_alloc: silence a KASAN false positive
    mm/page_alloc.c: remove unnecessary end_bitidx for [set|get]_pfnblock_flags_mask()
    mm/page_alloc.c: simplify pageblock bitmap access
    mm/page_alloc.c: extract the common part in pfn_to_bitidx()
    mm/page_alloc.c: replace the definition of NR_MIGRATETYPE_BITS with PB_migratetype_bits
    mm/shuffle: remove dynamic reconfiguration
    mm/memory_hotplug: document why shuffle_zone() is relevant
    mm/page_alloc: remove nr_free_pagecache_pages()
    mm: remove vm_total_pages
    ...

    Linus Torvalds
     
  • Most architectures define pgd_free() as a wrapper for free_page().

    Provide a generic version in asm-generic/pgalloc.h and enable its use for
    most architectures.
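
    A sketch of the generic helper, assuming the usual __HAVE_ARCH_* override
    guard used elsewhere in this header:

    /* include/asm-generic/pgalloc.h */
    #ifndef __HAVE_ARCH_PGD_FREE
    static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
    {
            free_page((unsigned long)pgd);
    }
    #endif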

    Signed-off-by: Mike Rapoport
    Signed-off-by: Andrew Morton
    Reviewed-by: Pekka Enberg
    Acked-by: Geert Uytterhoeven [m68k]
    Cc: Abdul Haleem
    Cc: Andy Lutomirski
    Cc: Arnd Bergmann
    Cc: Christophe Leroy
    Cc: Joerg Roedel
    Cc: Joerg Roedel
    Cc: Max Filippov
    Cc: Peter Zijlstra (Intel)
    Cc: Satheesh Rajendran
    Cc: Stafford Horne
    Cc: Stephen Rothwell
    Cc: Steven Rostedt
    Cc: Matthew Wilcox
    Link: http://lkml.kernel.org/r/20200627143453.31835-7-rppt@kernel.org
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     
  • Pull ptrace regset updates from Al Viro:
    "Internal regset API changes:

    - regularize copy_regset_{to,from}_user() callers

    - switch to saner calling conventions for ->get()

    - kill user_regset_copyout()

    The ->put() side of things will have to wait for the next cycle,
    unfortunately.

    The balance is about -1KLoC and replacements for ->get() instances are
    a lot saner"

    * 'work.regset' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (41 commits)
    regset: kill user_regset_copyout{,_zero}()
    regset(): kill ->get_size()
    regset: kill ->get()
    csky: switch to ->regset_get()
    xtensa: switch to ->regset_get()
    parisc: switch to ->regset_get()
    nds32: switch to ->regset_get()
    nios2: switch to ->regset_get()
    hexagon: switch to ->regset_get()
    h8300: switch to ->regset_get()
    openrisc: switch to ->regset_get()
    riscv: switch to ->regset_get()
    c6x: switch to ->regset_get()
    ia64: switch to ->regset_get()
    arc: switch to ->regset_get()
    arm: switch to ->regset_get()
    sh: convert to ->regset_get()
    arm64: switch to ->regset_get()
    mips: switch to ->regset_get()
    sparc: switch to ->regset_get()
    ...

    Linus Torvalds
     

05 Aug, 2020

1 commit

  • Pull fork cleanups from Christian Brauner:
    "This is cleanup series from when we reworked a chunk of the process
    creation paths in the kernel and switched to struct
    {kernel_}clone_args.

    High-level this does two main things:

    - Remove the double export of both do_fork() and _do_fork() where
    do_fork() used the inconsistent legacy clone calling convention.

    Now we only export _do_fork() which is based on struct
    kernel_clone_args.

    - Remove the copy_thread_tls()/copy_thread() split, making the
    architecture-specific HAVE_COPY_THREAD_TLS config option obsolete.

    This switches all remaining architectures to select
    HAVE_COPY_THREAD_TLS and thus to the copy_thread_tls() calling
    convention. The current split makes the process creation codepaths
    more convoluted than they need to be. Each architecture has its own
    copy_thread() function unless it selects HAVE_COPY_THREAD_TLS, in which
    case it has a copy_thread_tls() function.

    The split is not needed anymore nowadays, all architectures support
    CLONE_SETTLS but quite a few of them never bothered to select
    HAVE_COPY_THREAD_TLS and instead simply continued to use copy_thread()
    and use the old calling convention. Removing this split cleans up the
    process creation codepaths and paves the way for implementing clone3()
    on such architectures since it requires the copy_thread_tls() calling
    convention.

    After having made each architectures support copy_thread_tls() this
    series simply renames that function back to copy_thread(). It also
    switches all architectures that call do_fork() directly over to
    _do_fork() and the struct kernel_clone_args calling convention. This
    is a corollary of switching the architectures that did not yet support
    it over to copy_thread_tls() since do_fork() is conditional on not
    supporting copy_thread_tls() (mostly because it lacks a separate
    argument for tls, which is trivial to fix, but there's no need for this
    function to exist).

    The do_fork() removal is in itself already useful as it allows us to
    remove the export of both do_fork() and _do_fork() we currently have
    in favor of only _do_fork(). This has already been discussed back when
    we added clone3(). The legacy clone() calling convention is - as is
    probably well-known - somewhat odd:

    #
    # ABI hall of shame
    #
    config CLONE_BACKWARDS
    config CLONE_BACKWARDS2
    config CLONE_BACKWARDS3

    that is aggravated by the fact that some architectures such as sparc
    follow the CLONE_BACKWARDSx calling convention but don't really select
    the corresponding config option since they call do_fork() directly.

    So do_fork() enforces a somewhat arbitrary calling convention in the
    first place that doesn't really help the individual architectures that
    deviate from it. They can thus simply be switched to _do_fork()
    enforcing a single calling convention. (I really hope that any new
    architectures will __not__ try to implement their own calling
    conventions...)

    Most architectures already have made a similar switch (m68k comes to
    mind).

    Overall this removes more code than it adds even with a good portion
    of added comments. It simplifies a chunk of arch specific assembly
    either by moving the code into C or by simply rewriting the assembly.

    Architectures that have been touched in non-trivial ways have all been
    actually boot and stress tested: sparc and ia64 have been tested with
    Debian 9 images. They are the two architectures which have been
    touched the most. All non-trivial changes to architectures have seen
    acks from the relevant maintainers. nios2 was tested with a custom-built
    buildroot image. For h8300 I couldn't get anything bootable to test on,
    but the changes have been fairly automatic and I'm sure we'll hear
    people yell if I broke something there.

    All other architectures that have been touched in trivial ways have
    been compile tested for each single patch of the series via git rebase
    -x "make ..." v5.8-rc2. arm{64} and x86{_64} have been boot tested
    even though they have just been trivially touched (removal of the
    HAVE_COPY_THREAD_TLS macro from their Kconfig) because well they are
    basically "core architectures" and since it is trivial to get your
    hands on a useable image"
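
    The single calling convention everything converges on, sketched (the
    field selection here is illustrative; clone_flags/newsp are assumed
    caller-provided):

    struct kernel_clone_args args = {
            .flags       = clone_flags,
            .exit_signal = SIGCHLD,
            .stack       = newsp,
    };

    pid = _do_fork(&args);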

    * tag 'fork-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
    arch: rename copy_thread_tls() back to copy_thread()
    arch: remove HAVE_COPY_THREAD_TLS
    unicore: switch to copy_thread_tls()
    sh: switch to copy_thread_tls()
    nds32: switch to copy_thread_tls()
    microblaze: switch to copy_thread_tls()
    hexagon: switch to copy_thread_tls()
    c6x: switch to copy_thread_tls()
    alpha: switch to copy_thread_tls()
    fork: remove do_fork()
    h8300: select HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
    nios2: enable HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
    ia64: enable HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
    sparc: unconditionally enable HAVE_COPY_THREAD_TLS
    sparc: share process creation helpers between sparc and sparc64
    sparc64: enable HAVE_COPY_THREAD_TLS
    fork: fold legacy_clone_args_valid() into _do_fork()

    Linus Torvalds
     

29 Jul, 2020

1 commit

  • This patch moves ATOMIC_INIT from asm/atomic.h into linux/types.h.
    This allows users of atomic_t to use ATOMIC_INIT without having to
    include atomic.h, as that may lead to header loops.
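
    The practical effect, sketched (foo_count is a hypothetical counter):

    #include <linux/types.h>    /* now enough for both atomic_t and ATOMIC_INIT */

    static atomic_t foo_count = ATOMIC_INIT(0);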

    Signed-off-by: Herbert Xu
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Waiman Long
    Link: https://lkml.kernel.org/r/20200729123105.GB7047@gondor.apana.org.au

    Herbert Xu
     

28 Jul, 2020

2 commits


05 Jul, 2020

3 commits

    Now that HAVE_COPY_THREAD_TLS has been removed, rename copy_thread_tls()
    back to simply copy_thread(). It's a simpler name, and doesn't imply that
    only tls is copied here. This finishes an outstanding chunk of internal
    process creation work since we added clone3().
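
    After the rename every architecture implements this single hook; the
    signature, to the best of my reading of the series:

    int copy_thread(unsigned long clone_flags, unsigned long usp,
                    unsigned long kthread_arg, struct task_struct *p,
                    unsigned long tls);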

    Cc: linux-arch@vger.kernel.org
    Acked-by: Thomas Bogendoerfer
    Acked-by: Stafford Horne
    Acked-by: Greentime Hu
    Acked-by: Geert Uytterhoeven
    Reviewed-by: Kees Cook
    Signed-off-by: Christian Brauner

    Christian Brauner
     
  • All architectures support copy_thread_tls() now, so remove the legacy
    copy_thread() function and the HAVE_COPY_THREAD_TLS config option. Everyone
    uses the same process creation calling convention based on
    copy_thread_tls() and struct kernel_clone_args. This will make it easier to
    maintain the core process creation code under kernel/, simplifies the
    callpaths, and makes them identical for all architectures.

    Cc: linux-arch@vger.kernel.org
    Acked-by: Thomas Bogendoerfer
    Acked-by: Greentime Hu
    Acked-by: Geert Uytterhoeven
    Reviewed-by: Kees Cook
    Signed-off-by: Christian Brauner

    Christian Brauner
     
  • Use the copy_thread_tls() calling convention which passes tls through a
    register. This is required so we can remove the copy_thread{_tls}() split
    and remove the HAVE_COPY_THREAD_TLS macro.

    Cc: linux-hexagon@vger.kernel.org
    Acked-by: Brian Cain
    Signed-off-by: Christian Brauner

    Christian Brauner
     

14 Jun, 2020

1 commit

  • Since commit 84af7a6194e4 ("checkpatch: kconfig: prefer 'help' over
    '---help---'"), the number of '---help---' has been gradually
    decreasing, but there are still more than 2400 instances.

    This commit finishes the conversion. While I touched the lines,
    I also fixed the indentation.

    There are a variety of indentation styles found.

    a) 4 spaces + '---help---'
    b) 7 spaces + '---help---'
    c) 8 spaces + '---help---'
    d) 1 space + 1 tab + '---help---'
    e) 1 tab + '---help---' (correct indentation)
    f) 1 tab + 1 space + '---help---'
    g) 1 tab + 2 spaces + '---help---'

    In order to convert all of them to 1 tab + 'help', I ran the
    following command:

    $ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'
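
    The net effect on each entry is just this (FOO is a made-up example;
    indentation normalized to one tab plus two spaces for the help text):

    # before (one of the many variants)
    config FOO
            bool "enable foo"
              ---help---
              Say Y here to enable foo.

    # after
    config FOO
            bool "enable foo"
            help
              Say Y here to enable foo.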

    Signed-off-by: Masahiro Yamada

    Masahiro Yamada
     

10 Jun, 2020

7 commits

  • This change converts the existing mmap_sem rwsem calls to use the new mmap
    locking API instead.

    The change is generated using coccinelle with the following rule:

    // spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir .

    @@
    expression mm;
    @@
    (
    -init_rwsem
    +mmap_init_lock
    |
    -down_write
    +mmap_write_lock
    |
    -down_write_killable
    +mmap_write_lock_killable
    |
    -down_write_trylock
    +mmap_write_trylock
    |
    -up_write
    +mmap_write_unlock
    |
    -downgrade_write
    +mmap_write_downgrade
    |
    -down_read
    +mmap_read_lock
    |
    -down_read_killable
    +mmap_read_lock_killable
    |
    -down_read_trylock
    +mmap_read_trylock
    |
    -up_read
    +mmap_read_unlock
    )
    -(&mm->mmap_sem)
    +(mm)
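
    In C terms the conversion is simply the following, for one representative
    case (do_something() is a stand-in):

    /* before */
    down_read(&mm->mmap_sem);
    do_something(mm);
    up_read(&mm->mmap_sem);

    /* after */
    mmap_read_lock(mm);
    do_something(mm);
    mmap_read_unlock(mm);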

    Signed-off-by: Michel Lespinasse
    Signed-off-by: Andrew Morton
    Reviewed-by: Daniel Jordan
    Reviewed-by: Laurent Dufour
    Reviewed-by: Vlastimil Babka
    Cc: Davidlohr Bueso
    Cc: David Rientjes
    Cc: Hugh Dickins
    Cc: Jason Gunthorpe
    Cc: Jerome Glisse
    Cc: John Hubbard
    Cc: Liam Howlett
    Cc: Matthew Wilcox
    Cc: Peter Zijlstra
    Cc: Ying Han
    Link: http://lkml.kernel.org/r/20200520052908.204642-5-walken@google.com
    Signed-off-by: Linus Torvalds

    Michel Lespinasse
     
  • All architectures define pte_index() as

    (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)

    and all architectures define pte_offset_kernel() as an entry in the array
    of PTEs indexed by the pte_index().

    For the most architectures the pte_offset_kernel() implementation relies
    on the availability of pmd_page_vaddr() that converts a PMD entry value to
    the virtual address of the page containing PTEs array.

    Let's move x86 definitions of the PTE accessors to the generic place in
    <linux/pgtable.h> and then simply drop the respective definitions from the
    other architectures.

    The architectures that didn't provide pmd_page_vaddr() are updated to have
    that defined.

    The generic implementation of pte_offset_kernel() can be overridden by an
    architecture and alpha makes use of this because it has special ordering
    requirements for its version of pte_offset_kernel().
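
    A sketch of the shared definitions this consolidates (override guards as
    described above; alpha keeps its own pte_offset_kernel()):

    static inline unsigned long pte_index(unsigned long address)
    {
            return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
    }

    #ifndef pte_offset_kernel
    static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
    {
            return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
    }
    #define pte_offset_kernel pte_offset_kernel
    #endif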

    [rppt@linux.ibm.com: v2]
    Link: http://lkml.kernel.org/r/20200514170327.31389-11-rppt@kernel.org
    [rppt@linux.ibm.com: update]
    Link: http://lkml.kernel.org/r/20200514170327.31389-12-rppt@kernel.org
    [rppt@linux.ibm.com: update]
    Link: http://lkml.kernel.org/r/20200514170327.31389-13-rppt@kernel.org
    [akpm@linux-foundation.org: fix x86 warning]
    [sfr@canb.auug.org.au: fix powerpc build]
    Link: http://lkml.kernel.org/r/20200607153443.GB738695@linux.ibm.com

    Signed-off-by: Mike Rapoport
    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Cc: Arnd Bergmann
    Cc: Borislav Petkov
    Cc: Brian Cain
    Cc: Catalin Marinas
    Cc: Chris Zankel
    Cc: "David S. Miller"
    Cc: Geert Uytterhoeven
    Cc: Greentime Hu
    Cc: Greg Ungerer
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Heiko Carstens
    Cc: Helge Deller
    Cc: Ingo Molnar
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Matthew Wilcox
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Nick Hu
    Cc: Paul Walmsley
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Thomas Bogendoerfer
    Cc: Thomas Gleixner
    Cc: Tony Luck
    Cc: Vincent Chen
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Link: http://lkml.kernel.org/r/20200514170327.31389-10-rppt@kernel.org
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     
  • The powerpc 32-bit implementation of pgtable has nice shortcuts for
    accessing kernel PMD and PTE for a given virtual address. Make these
    helpers available for all architectures.
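
    The helpers in question, roughly, as they end up in the generic code
    (built on the pXd_offset() accessors; see also the pmd_off_k rename noted
    below):

    static inline pmd_t *pmd_off_k(unsigned long va)
    {
            return pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va);
    }

    static inline pte_t *virt_to_kpte(unsigned long vaddr)
    {
            pmd_t *pmd = pmd_off_k(vaddr);

            return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
    }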

    [rppt@linux.ibm.com: microblaze: fix page table traversal in setup_rt_frame()]
    Link: http://lkml.kernel.org/r/20200518191511.GD1118872@kernel.org
    [akpm@linux-foundation.org: s/pmd_ptr_k/pmd_off_k/ in various powerpc places]

    Signed-off-by: Mike Rapoport
    Signed-off-by: Andrew Morton
    Cc: Arnd Bergmann
    Cc: Borislav Petkov
    Cc: Brian Cain
    Cc: Catalin Marinas
    Cc: Chris Zankel
    Cc: "David S. Miller"
    Cc: Geert Uytterhoeven
    Cc: Greentime Hu
    Cc: Greg Ungerer
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Heiko Carstens
    Cc: Helge Deller
    Cc: Ingo Molnar
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Matthew Wilcox
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Nick Hu
    Cc: Paul Walmsley
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Thomas Bogendoerfer
    Cc: Thomas Gleixner
    Cc: Tony Luck
    Cc: Vincent Chen
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Link: http://lkml.kernel.org/r/20200514170327.31389-9-rppt@kernel.org
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     
  • The include/linux/pgtable.h is going to be the home of generic page table
    manipulation functions.

    Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
    make the latter include asm/pgtable.h.

    Signed-off-by: Mike Rapoport
    Signed-off-by: Andrew Morton
    Cc: Arnd Bergmann
    Cc: Borislav Petkov
    Cc: Brian Cain
    Cc: Catalin Marinas
    Cc: Chris Zankel
    Cc: "David S. Miller"
    Cc: Geert Uytterhoeven
    Cc: Greentime Hu
    Cc: Greg Ungerer
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Heiko Carstens
    Cc: Helge Deller
    Cc: Ingo Molnar
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Matthew Wilcox
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Nick Hu
    Cc: Paul Walmsley
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Thomas Bogendoerfer
    Cc: Thomas Gleixner
    Cc: Tony Luck
    Cc: Vincent Chen
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     
  • Patch series "mm: consolidate definitions of page table accessors", v2.

    The low level page table accessors (pXY_index(), pXY_offset()) are
    duplicated across all architectures and sometimes more than once. For
    instance, we have 31 definitions of pgd_offset() for 25 supported
    architectures.

    Most of these definitions are actually identical and typically it boils
    down to, e.g.

    static inline unsigned long pmd_index(unsigned long address)
    {
            return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
    }

    static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
    {
            return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
    }

    These definitions can be shared among 90% of the arches provided
    XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.

    For architectures that really need a custom version there is always the
    possibility to override the generic version with the usual ifdef magic.

    These patches introduce include/linux/pgtable.h that replaces
    include/asm-generic/pgtable.h and add the definitions of the page table
    accessors to the new header.

    This patch (of 12):

    The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the
    functions involving page table manipulations, e.g. pte_alloc() and
    pmd_alloc(). So, there is no point to explicitly include <asm/pgtable.h>
    in the files that include <linux/mm.h>.

    The include statements in such cases are removed with a simple loop:

    for f in $(git grep -l "include <asm/pgtable.h>") ; do
    sed -i -e '/include <asm\/pgtable.h>/ d' $f
    done

    Signed-off-by: Mike Rapoport
    Signed-off-by: Andrew Morton
    Cc: Arnd Bergmann
    Cc: Borislav Petkov
    Cc: Brian Cain
    Cc: Catalin Marinas
    Cc: Chris Zankel
    Cc: "David S. Miller"
    Cc: Geert Uytterhoeven
    Cc: Greentime Hu
    Cc: Greg Ungerer
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Heiko Carstens
    Cc: Helge Deller
    Cc: Ingo Molnar
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Matthew Wilcox
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Mike Rapoport
    Cc: Nick Hu
    Cc: Paul Walmsley
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Thomas Bogendoerfer
    Cc: Thomas Gleixner
    Cc: Tony Luck
    Cc: Vincent Chen
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
    Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     
    Now that the last users of show_stack() have been converted to use an
    explicit log level, show_stack_loglvl() can drop its redundant suffix and
    once again become the well-known show_stack().

    Signed-off-by: Dmitry Safonov
    Signed-off-by: Andrew Morton
    Link: http://lkml.kernel.org/r/20200418201944.482088-51-dima@arista.com
    Signed-off-by: Linus Torvalds

    Dmitry Safonov
     
    Currently, the log level of show_stack() depends on the platform
    implementation. This creates situations where the headers are printed with
    a lower or higher log level than the stacktrace itself (depending on the
    platform or user).

    Furthermore, it pushes the log-level decision from the user down to the
    architecture side. As a result, some users such as sysrq/kdb/etc. resort
    to tricks like temporarily raising console_loglevel while printing their
    messages. That not only may print unwanted messages from other CPUs, but
    may also omit the printing entirely in the unlucky case where the printk()
    was deferred.

    Introducing a log-level parameter and KERN_UNSUPPRESSED [1] seems an easier
    approach than introducing more printk buffers. Also, it will consolidate
    the printing with the headers.

    Introduce show_stack_loglvl(), which will eventually replace
    show_stack().
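
    The interface being introduced, sketched:

    void show_stack_loglvl(struct task_struct *task, unsigned long *sp,
                           const char *loglvl);

    /* e.g. from die(), as noted below */
    show_stack_loglvl(current, NULL, KERN_EMERG);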

    As a good side effect, die() now prints the stacktrace with KERN_EMERG,
    aligned with other messages.

    [1]: https://lore.kernel.org/lkml/20190528002412.1625-1-dima@arista.com/T/#u

    Signed-off-by: Dmitry Safonov
    Signed-off-by: Andrew Morton
    Acked-by: Brian Cain
    Link: http://lkml.kernel.org/r/20200418201944.482088-15-dima@arista.com
    Signed-off-by: Linus Torvalds

    Dmitry Safonov
     

09 Jun, 2020

1 commit

  • Hexagon needs almost no cache flushing routines of its own. Rely on
    asm-generic/cacheflush.h for the defaults.
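
    The shape of the cleanup, as a hedged sketch: keep only the overrides
    hexagon actually needs, define the corresponding guard macro, and let
    asm-generic/cacheflush.h provide no-op defaults for everything else.

    /* arch/hexagon/include/asm/cacheflush.h (abridged) */
    extern void flush_icache_range(unsigned long start, unsigned long end);
    #define flush_icache_range flush_icache_range

    #include <asm-generic/cacheflush.h>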

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Acked-by: Brian Cain
    Link: http://lkml.kernel.org/r/20200515143646.3857579-12-hch@lst.de
    Signed-off-by: Linus Torvalds

    Christoph Hellwig
     

07 Jun, 2020

1 commit

  • Pull Kbuild updates from Masahiro Yamada:

    - fix warnings in 'make clean' for ARCH=um, hexagon, h8300, unicore32

    - ensure to rebuild all objects when the compiler is upgraded

    - exclude system headers from dependency tracking and fixdep processing

    - fix potential bit-size mismatch between the kernel and BPF user-mode
    helper

    - add the new syntax 'userprogs' to build user-space programs for the
    target architecture (the same arch as the kernel)

    - compile user-space sample code under samples/ for the target arch
    instead of the host arch

    - make headers_install fail if a CONFIG option is leaked to user-space

    - sanitize the output format of scripts/checkstack.pl

    - handle ARM 'push' instruction in scripts/checkstack.pl

    - error out before modpost if a module name conflict is found

    - error out when multiple directories are passed to M= because this
    feature has been broken for a long time

    - add CONFIG_DEBUG_INFO_COMPRESSED to support compressed debug info

    - a lot of cleanups of modpost

    - dump vmlinux symbols out into vmlinux.symvers, and reuse it in the
    second pass of modpost

    - do not run the second pass of modpost if nothing in modules is
    updated

    - install modules.builtin(.modinfo) by 'make install' as well as by
    'make modules_install' because it is useful even when
    CONFIG_MODULES=n

    - add new command line variables, GZIP, BZIP2, LZOP, LZMA, LZ4, and XZ
    to allow users to use alternatives such as pigz, pbzip2, etc.
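
    For instance, the new tool variables can be overridden on the command
    line:

    $ make GZIP=pigz BZIP2=pbzip2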

    * tag 'kbuild-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (96 commits)
    kbuild: add variables for compression tools
    Makefile: install modules.builtin even if CONFIG_MODULES=n
    mksysmap: Fix the mismatch of '.L' symbols in System.map
    kbuild: doc: rename LDFLAGS to KBUILD_LDFLAGS
    modpost: change elf_info->size to size_t
    modpost: remove is_vmlinux() helper
    modpost: strip .o from modname before calling new_module()
    modpost: set have_vmlinux in new_module()
    modpost: remove mod->skip struct member
    modpost: add mod->is_vmlinux struct member
    modpost: remove is_vmlinux() call in check_for_{gpl_usage,unused}()
    modpost: remove mod->is_dot_o struct member
    modpost: move -d option in scripts/Makefile.modpost
    modpost: remove -s option
    modpost: remove get_next_text() and make {grab,release_}file static
    modpost: use read_text_file() and get_line() for reading text files
    modpost: avoid false-positive file open error
    modpost: fix potential mmap'ed file overrun in get_src_version()
    modpost: add read_text_file() and get_line() helpers
    modpost: do not call get_modinfo() for vmlinux(.o)
    ...

    Linus Torvalds
     

05 Jun, 2020

1 commit

  • The hexagon architecture has 2 level page tables and as such most of the
    page table folding is already implemented in asm-generic/pgtable-nopmd.h.

    Fixup the only place in arch/hexagon to unfold the p4d level and remove
    __ARCH_USE_5LEVEL_HACK.
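
    Unfolding the extra levels follows the usual pattern (a sketch, not the
    exact hexagon hunk):

    pgd_t *pgd = pgd_offset(mm, addr);
    p4d_t *p4d = p4d_offset(pgd, addr);     /* folded level, compiles away */
    pud_t *pud = pud_offset(p4d, addr);
    pmd_t *pmd = pmd_offset(pud, addr);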

    Signed-off-by: Mike Rapoport
    Signed-off-by: Andrew Morton
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Brian Cain
    Cc: Catalin Marinas
    Cc: Christophe Leroy
    Cc: Fenghua Yu
    Cc: Geert Uytterhoeven
    Cc: Guan Xuetao
    Cc: James Morse
    Cc: Jonas Bonn
    Cc: Julien Thierry
    Cc: Ley Foon Tan
    Cc: Marc Zyngier
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Cc: Rich Felker
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Stefan Kristiansson
    Cc: Suzuki K Poulose
    Cc: Tony Luck
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Link: http://lkml.kernel.org/r/20200414153455.21744-5-rppt@kernel.org
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     

04 Jun, 2020

1 commit

    Currently, architectures that use free_area_init() to initialize the
    memory map and the node and zone structures need to calculate zone and
    hole sizes. We can use free_area_init_nodes() instead and let it detect
    the zone boundaries, while the architectures only have to supply the
    possible limits for the zones.
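
    On the arch side this reduces to supplying the zone limits; a hedged
    sketch of a hexagon-style call (max_low_pfn assumed to be computed by the
    arch as before):

    unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

    max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
    free_area_init_nodes(max_zone_pfn);     /* zone boundaries detected for us */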

    Signed-off-by: Mike Rapoport
    Signed-off-by: Andrew Morton
    Tested-by: Hoan Tran [arm64]
    Reviewed-by: Baoquan He
    Cc: Brian Cain
    Cc: Catalin Marinas
    Cc: "David S. Miller"
    Cc: Geert Uytterhoeven
    Cc: Greentime Hu
    Cc: Greg Ungerer
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Heiko Carstens
    Cc: Helge Deller
    Cc: "James E.J. Bottomley"
    Cc: Jonathan Corbet
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Hocko
    Cc: Michal Simek
    Cc: Nick Hu
    Cc: Paul Walmsley
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Thomas Bogendoerfer
    Cc: Tony Luck
    Cc: Vineet Gupta
    Cc: Yoshinori Sato
    Link: http://lkml.kernel.org/r/20200412194859.12663-5-rppt@kernel.org
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     

12 May, 2020

1 commit

  • 'make ARCH=hexagon clean' emits an error message as follows:

    $ make ARCH=hexagon clean
    gcc: error: unrecognized command line option '-G0'

    You can suppress it by setting the correct CROSS_COMPILE=,
    but we should not require any compiler for cleaning.

    Signed-off-by: Masahiro Yamada
    Acked-by: Brian Cain

    Masahiro Yamada
     

23 Apr, 2020

1 commit

    As the bug report [1] pointed out, <linux/vermagic.h> must be included
    after <linux/module.h>.

    I believe we should not impose any include order restriction. We often
    sort include directives alphabetically, but it is just coding style
    convention. Technically, we can include header files in any order by
    making every header self-contained.

    Currently, arch-specific MODULE_ARCH_VERMAGIC is defined in
    <asm/module.h>, which is not included from <linux/vermagic.h>.

    Hence, the straight-forward fix-up would be as follows:

    |--- a/include/linux/vermagic.h
    |+++ b/include/linux/vermagic.h
    |@@ -1,5 +1,6 @@
    | /* SPDX-License-Identifier: GPL-2.0 */
    | #include <generated/utsrelease.h>
    |+#include <asm/module.h>
    |
    | /* Simply sanity version stamp for modules. */
    | #ifdef CONFIG_SMP

    This works enough, but for further cleanups, I split MODULE_ARCH_VERMAGIC
    definitions into <asm/vermagic.h>.

    With this, <linux/module.h> and <linux/vermagic.h> will be orthogonal,
    and the location of MODULE_ARCH_VERMAGIC definitions will be consistent.

    For arc and ia64, MODULE_PROC_FAMILY is only used for defining
    MODULE_ARCH_VERMAGIC. I squashed it.

    For hexagon, nds32, and xtensa, I removed <asm/module.h> entirely
    because they contained nothing but the MODULE_ARCH_VERMAGIC definition.
    Kbuild will automatically generate <asm/module.h> at build-time,
    wrapping <asm-generic/module.h>.
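
    The new generic fallback is tiny; a sketch:

    /* include/asm-generic/vermagic.h */
    /* SPDX-License-Identifier: GPL-2.0-only */
    #ifndef _ASM_GENERIC_VERMAGIC_H
    #define _ASM_GENERIC_VERMAGIC_H

    #define MODULE_ARCH_VERMAGIC ""

    #endif /* _ASM_GENERIC_VERMAGIC_H */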

    [1] https://lore.kernel.org/lkml/20200411155623.GA22175@zn.tnic

    Reported-by: Borislav Petkov
    Signed-off-by: Masahiro Yamada
    Acked-by: Jessica Yu

    Masahiro Yamada