01 May, 2020

1 commit

  • When opening user access to only perform reads, only open read access.
    When opening user access to only perform writes, only open write
    access.

    Signed-off-by: Christophe Leroy
    Reviewed-by: Kees Cook
    Signed-off-by: Michael Ellerman
    Link: https://lore.kernel.org/r/2e73bc57125c2c6ab12a587586a4eed3a47105fc.1585898438.git.christophe.leroy@c-s.fr

    Christophe Leroy
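
    A minimal sketch of the resulting pattern, assuming the
    user_read_access_begin()/user_read_access_end() helpers present in current
    kernels; read_user_val() itself is a hypothetical helper for illustration:

        static int read_user_val(const u32 __user *uptr, u32 *val)
        {
                /* Open a read-only user access window, not a read/write one. */
                if (!user_read_access_begin(uptr, sizeof(*val)))
                        return -EFAULT;
                unsafe_get_user(*val, uptr, read_fault);
                user_read_access_end();
                return 0;

        read_fault:
                user_read_access_end();
                return -EFAULT;
        }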
     

25 Jan, 2020

1 commit

  • The range passed to user_access_begin() by strncpy_from_user() and
    strnlen_user() starts at 'src' and goes up to the limit of userspace,
    even though reads will be limited by the 'count' parameter.

    On 32-bit powerpc (book3s/32), access has to be granted for each 256 MB
    segment, and the cost increases with the number of segments to unlock.

    Limit the range using the 'count' parameter.

    Fixes: 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'")
    Signed-off-by: Christophe Leroy
    Signed-off-by: Linus Torvalds

    Christophe Leroy
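
    A minimal sketch of the idea, loosely following lib/strncpy_from_user.c
    (do_strncpy_from_user() is its internal helper; this is not the literal
    diff): clamp the opened range to 'count' before calling
    user_access_begin().

        unsigned long max_addr = user_addr_max();
        unsigned long src_addr = (unsigned long)src;

        if (likely(src_addr < max_addr)) {
                unsigned long max = max_addr - src_addr;
                long retval;

                /* Open the user access window only for what may be read. */
                if (max > count)
                        max = count;
                if (user_access_begin(src, max)) {
                        retval = do_strncpy_from_user(dst, src, count, max);
                        user_access_end();
                        return retval;
                }
        }
        return -EFAULT;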
     

01 Oct, 2019

1 commit

  • A common pattern for syscall extensions is increasing the size of a
    struct passed from userspace, such that the zero-value of the new fields
    results in the old kernel behaviour (allowing for a mix of userspace and
    kernel vintages to operate on one another in most cases).

    While this interface exists for communication in both directions, only
    one direction (userspace passing a struct to the kernel) has
    straightforward semantics. For kernel returns to userspace, the correct
    semantics (whether there should be an error if userspace is unaware of a
    new extension) are very syscall-dependent and thus probably cannot be
    unified across syscalls (a good example of this problem is [1]).

    Previously there was no common lib/ function that implemented the
    necessary extension-checking semantics (and different syscalls
    implemented them slightly differently or incompletely [2]);
    copy_struct_from_user() now provides them in one place. Future patches
    replace the open-coded uses of this pattern with calls to
    copy_struct_from_user().

    In-kernel selftests are included to ensure that alignment and various
    byte patterns are all handled identically to memchr_inv() usage. (A
    usage sketch for the new helper follows this entry.)

    [1]: commit 1251201c0d34 ("sched/core: Fix uclamp ABI bug, clean up and
    robustify sched_read_attr() ABI logic and code")

    [2]: For instance {sched_setattr,perf_event_open,clone3}(2) all do
    similar checks to copy_struct_from_user() while rt_sigprocmask(2)
    always rejects differently-sized struct arguments.

    Suggested-by: Rasmus Villemoes
    Signed-off-by: Aleksa Sarai
    Reviewed-by: Kees Cook
    Reviewed-by: Christian Brauner
    Link: https://lore.kernel.org/r/20191001011055.19283-2-cyphar@cyphar.com
    Signed-off-by: Christian Brauner

    Aleksa Sarai
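
    A usage sketch of the new helper from a syscall that takes an extensible
    struct (struct foo_args and the foo syscall are hypothetical; the helper's
    signature is copy_struct_from_user(dst, ksize, src, usize)):

        struct foo_args {               /* kernel's (newer, larger) layout */
                u64 flags;
                u64 new_field;          /* added by a later extension */
        };

        SYSCALL_DEFINE2(foo, struct foo_args __user *, uargs, size_t, usize)
        {
                struct foo_args args;
                int err;

                /* Copies min(usize, sizeof(args)), zero-fills any kernel-side
                 * tail, and returns -E2BIG if the userspace tail is non-zero. */
                err = copy_struct_from_user(&args, sizeof(args), uargs, usize);
                if (err)
                        return err;
                /* ... act on args; zeroed new fields give old behaviour ... */
                return 0;
        }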
     

26 Sep, 2019

1 commit

  • Patch series "arm64: untag user pointers passed to the kernel", v19.

    === Overview

    arm64 has a feature called Top Byte Ignore, which allows embedding pointer
    tags into the top byte of each pointer. Userspace programs (such as
    HWASan, a memory debugging tool [1]) might use this feature and pass
    tagged user pointers to the kernel through syscalls or other interfaces.

    Right now the kernel is already able to handle user faults with tagged
    pointers, due to these patches:

    1. 81cddd65 ("arm64: traps: fix userspace cache maintenance emulation on a
    tagged pointer")
    2. 7dcd9dd8 ("arm64: hw_breakpoint: fix watchpoint matching for tagged
    pointers")
    3. 276e9327 ("arm64: entry: improve data abort handling of tagged
    pointers")

    This patchset extends tagged pointer support to syscall arguments.

    As per the proposed ABI change [3], tagged pointers are only allowed to be
    passed to syscalls when they point to memory ranges obtained by anonymous
    mmap() or sbrk() (see the patchset [3] for more details).

    For non-memory syscalls this is done by untagging user pointers when the
    kernel performs pointer checking to find out whether the pointer comes
    from userspace (most notably in access_ok). The untagging is done only
    while the pointer is being checked; the tag is preserved as the pointer
    makes its way through the kernel and stays tagged when the kernel
    dereferences the pointer while performing user memory accesses.

    The mmap and mremap (only new_addr) syscalls do not currently accept
    tagged addresses. Architectures may interpret the tag as a background
    colour for the corresponding vma.

    Other memory syscalls (mprotect, etc.) don't do user memory accesses but
    rather deal with memory ranges, and untagged pointers are better suited to
    describe memory ranges internally. Thus for memory syscalls we untag
    pointers completely when they enter the kernel.

    === Other approaches

    One of the alternative approaches to untagging that was considered is to
    completely strip the pointer tag as the pointer enters the kernel with
    some kind of a syscall wrapper, but that won't work with the countless
    different ioctl calls. With this approach we would need a
    custom wrapper for each ioctl variation, which doesn't seem practical.

    An alternative approach to untagging pointers in memory syscalls prologues
    is instead to allow tagged pointers to be passed to find_vma() (and other
    vma related functions) and untag them there. Unfortunately, a lot of
    find_vma() callers then compare or subtract the returned vma start and end
    fields against the pointer that was being searched. Thus this approach
    would still require changing all find_vma() callers.

    === Testing

    The following testing approaches have been taken to find potential issues
    with user pointer untagging:

    1. Static testing (with sparse [2] and separately with a custom static
    analyzer based on Clang) to track casts of __user pointers to integer
    types to find places where untagging needs to be done.

    2. Static testing with grep to find parts of the kernel that call
    find_vma() (and other similar functions) or directly compare against
    vm_start/vm_end fields of vma.

    3. Static testing with grep to find parts of the kernel that compare
    user pointers with TASK_SIZE or other similar consts and macros.

    4. Dynamic testing: adding BUG_ON(has_tag(addr)) to find_vma() and running
    a modified syzkaller version that passes tagged pointers to the kernel.

    Based on the results of this testing, the required patches have been added
    to the patchset.

    === Notes

    This patchset is meant to be merged together with "arm64 relaxed ABI" [3].

    This patchset is a prerequisite for ARM's memory tagging hardware feature
    support [4].

    This patchset has been merged into the Pixel 2 & 3 kernel trees and is
    now being used to enable testing of Pixel phones with HWASan.

    Thanks!

    [1] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html

    [2] https://github.com/lucvoo/sparse-dev/commit/5f960cb10f56ec2017c128ef9d16060e0145f292

    [3] https://lkml.org/lkml/2019/6/12/745

    [4] https://community.arm.com/processors/b/blog/posts/arm-a-profile-architecture-2018-developments-armv85a

    This patch (of 11)

    This patch is part of a series that extends the kernel ABI to allow
    passing tagged user pointers (with the top byte set to something other
    than 0x00) as syscall arguments.

    strncpy_from_user and strnlen_user accept user addresses as arguments, and
    do not go through the same path as copy_from_user and others, so here we
    need to handle the case of tagged user addresses separately.

    Untag user pointers passed to these functions.

    Note that this patch only temporarily untags the pointers to perform
    validity checks, but then uses them as-is to perform user memory
    accesses. (A sketch of this pattern follows this entry.)

    [andreyknvl@google.com: fix sparc4 build]
    Link: http://lkml.kernel.org/r/CAAeHK+yx4a-P0sDrXTUxMvO2H0CJZUFPffBrg_cU7oJOZyC7ew@mail.gmail.com
    Link: http://lkml.kernel.org/r/c5a78bcad3e94d6cda71fcaa60a423231ae71e4c.1563904656.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Vincenzo Frascino
    Reviewed-by: Khalid Aziz
    Acked-by: Kees Cook
    Reviewed-by: Catalin Marinas
    Cc: Al Viro
    Cc: Dave Hansen
    Cc: Eric Auger
    Cc: Felix Kuehling
    Cc: Jens Wiklander
    Cc: Mauro Carvalho Chehab
    Cc: Mike Rapoport
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
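
    A sketch of the resulting pattern in strncpy_from_user()/strnlen_user()
    (close to, though not necessarily identical to, the patched code;
    do_strncpy_from_user() is the file's internal helper): untag only for the
    range check, keep the tagged pointer for the accesses themselves.

        unsigned long max_addr = user_addr_max();
        /* Untag only to validate that this is a user address ... */
        unsigned long src_addr = (unsigned long)untagged_addr(src);

        if (likely(src_addr < max_addr)) {
                unsigned long max = max_addr - src_addr;

                /* ... but pass the original, still-tagged 'src' onwards. */
                return do_strncpy_from_user(dst, src, count, max);
        }
        return -EFAULT;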
     

24 Apr, 2019

1 commit

  • Randy reported objtool triggered on his (GCC-7.4) build:

    lib/strncpy_from_user.o: warning: objtool: strncpy_from_user()+0x315: call to __ubsan_handle_add_overflow() with UACCESS enabled
    lib/strnlen_user.o: warning: objtool: strnlen_user()+0x337: call to __ubsan_handle_sub_overflow() with UACCESS enabled

    This is due to UBSAN generating signed-overflow-UB warnings where it
    should not. Prior to GCC-8 UBSAN ignored -fwrapv (which the kernel
    uses through -fno-strict-overflow).

    Make the functions use 'unsigned long' throughout.

    Reported-by: Randy Dunlap
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Randy Dunlap # build-tested
    Acked-by: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: luto@kernel.org
    Link: http://lkml.kernel.org/r/20190424072208.754094071@infradead.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
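
    An illustrative, userspace-style sketch (not kernel code) of the
    distinction behind the fix: signed overflow is undefined behaviour in
    standard C, so UBSAN may instrument it (emitting calls such as
    __ubsan_handle_add_overflow()), whereas unsigned arithmetic wraps and
    needs no instrumentation.

        #include <limits.h>

        unsigned long wraps_fine(unsigned long x)
        {
                return x + 1;   /* well defined: ULONG_MAX + 1 wraps to 0 */
        }

        long may_be_instrumented(long x)
        {
                return x + 1;   /* UB when x == LONG_MAX (unless -fwrapv is honoured) */
        }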
     

05 Jan, 2019

1 commit

  • Originally, the rule used to be that you'd have to do access_ok()
    separately, and then user_access_begin() before actually doing the
    direct (optimized) user access.

    But experience has shown that people then decide not to do access_ok()
    at all, and instead rely on it being implied by other operations or
    similar. Which makes it very hard to verify that the access has
    actually been range-checked.

    If you use the unsafe direct user accesses, hardware features (either
    SMAP - Supervisor Mode Access Protection - on x86, or PAN - Privileged
    Access Never - on ARM) do force you to use user_access_begin(). But
    nothing really forces the range check.

    By putting the range check into user_access_begin(), we actually force
    people to do the right thing (tm), and the range check will be visible
    near the actual accesses. We have way too long a history of people
    trying to avoid them.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
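
    A minimal sketch of the calling convention after this change
    (read_user_u32() is a hypothetical helper; user_access_begin() now takes
    the pointer and size and performs the access_ok() range check itself):

        static int read_user_u32(const u32 __user *uptr, u32 *out)
        {
                if (!user_access_begin(uptr, sizeof(*out)))
                        return -EFAULT;         /* range check failed */
                unsafe_get_user(*out, uptr, efault);
                user_access_end();
                return 0;

        efault:
                user_access_end();
                return -EFAULT;
        }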
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boiler plate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be applied
    to a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files, created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source.
    - File already had some variant of a license header in it (even if <5
    lines).
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
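
    For C source files, the tag added by this class of patches is a single
    comment on the first line of the file, e.g. (the exact comment style
    varies between .c and .h files):

        // SPDX-License-Identifier: GPL-2.0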
     

16 May, 2017

1 commit


09 Aug, 2016

1 commit

  • When I initially added the unsafe_[get|put]_user() helpers in commit
    5b24a7a2aa20 ("Add 'unsafe' user access functions for batched
    accesses"), I made the mistake of modeling the interface on our
    traditional __[get|put]_user() functions, which return zero on success,
    or -EFAULT on failure.

    That interface is fairly easy to use, but it's actually fairly nasty for
    good code generation, since it essentially forces the caller to check
    the error value for each access.

    In particular, since the error handling is already internally
    implemented with an exception handler, and we already use "asm goto" for
    various other things, we could fairly easily make the error cases just
    jump directly to an error label instead, and avoid the need for explicit
    checking after each operation.

    So switch the interface to pass in an error label, rather than checking
    the error value in the caller. Best do it now before we start growing
    more users (the signal handling code in particular would be a good place
    to use the new interface).

    So rather than

            if (unsafe_get_user(x, ptr))
                    ... handle error ..

    the interface is now

            unsafe_get_user(x, ptr, label);

    where an error during the user mode fetch will now just cause a jump to
    'label' in the caller.

    Right now the actual _implementation_ of this all still ends up being a
    "if (err) goto label", and does not take advantage of any exception
    label tricks, but for "unsafe_put_user()" in particular it should be
    fairly straightforward to convert to using the exception table model.

    Note that "unsafe_get_user()" is much harder to convert to a clever
    exception table model, because current versions of gcc do not allow the
    use of "asm goto" (for the exception) with output values (for the actual
    value to be fetched). But that is hopefully not a limitation in the
    long term.

    [ Also note that it might be a good idea to switch unsafe_get_user() to
    actually _return_ the value it fetches from user space, but this
    commit only changes the error handling semantics ]

    Signed-off-by: Linus Torvalds

    Linus Torvalds
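
    A hypothetical sketch of the label-based interface used in a loop
    (fetch_words() is illustrative; the unsafe_*() calls must be bracketed by
    user_access_begin()/user_access_end(), which in later kernels also takes
    the pointer and size, as in the 2019 entry above):

        static int fetch_words(u32 *dst, const u32 __user *src, int n)
        {
                int i;

                if (!user_access_begin(src, n * sizeof(u32)))
                        return -EFAULT;
                for (i = 0; i < n; i++)
                        unsafe_get_user(dst[i], &src[i], efault);
                user_access_end();
                return 0;

        efault:
                user_access_end();
                return -EFAULT;
        }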
     

18 Dec, 2015

1 commit

  • This converts the generic user string functions to use the batched user
    access functions.

    It makes a big difference on Skylake, which is the first x86
    microarchitecture to implement SMAP. The STAC/CLAC instructions are not
    very fast, and doing them for each access inside the loop that copies
    strings from user space (which is what the pathname handling does for
    every pathname the kernel uses, for example) is very inefficient.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

23 Jun, 2015

1 commit

  • Pull scheduler updates from Ingo Molnar:
    "The main changes are:

    - lockless wakeup support for futexes and IPC message queues
    (Davidlohr Bueso, Peter Zijlstra)

    - Replace spinlocks with atomics in thread_group_cputimer(), to
    improve scalability (Jason Low)

    - NUMA balancing improvements (Rik van Riel)

    - SCHED_DEADLINE improvements (Wanpeng Li)

    - clean up and reorganize preemption helpers (Frederic Weisbecker)

    - decouple page fault disabling machinery from the preemption
    counter, to improve debuggability and robustness (David
    Hildenbrand)

    - SCHED_DEADLINE documentation updates (Luca Abeni)

    - topology CPU masks cleanups (Bartosz Golaszewski)

    - /proc/sched_debug improvements (Srikar Dronamraju)"

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (79 commits)
    sched/deadline: Remove needless parameter in dl_runtime_exceeded()
    sched: Remove superfluous resetting of the p->dl_throttled flag
    sched/deadline: Drop duplicate init_sched_dl_class() declaration
    sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target
    sched/deadline: Make init_sched_dl_class() __init
    sched/deadline: Optimize pull_dl_task()
    sched/preempt: Add static_key() to preempt_notifiers
    sched/preempt: Fix preempt notifiers documentation about hlist_del() within unsafe iteration
    sched/stop_machine: Fix deadlock between multiple stop_two_cpus()
    sched/debug: Add sum_sleep_runtime to /proc/<pid>/sched
    sched/debug: Replace vruntime with wait_sum in /proc/sched_debug
    sched/debug: Properly format runnable tasks in /proc/sched_debug
    sched/numa: Only consider less busy nodes as numa balancing destinations
    Revert 095bebf61a46 ("sched/numa: Do not move past the balance point if unbalanced")
    sched/fair: Prevent throttling in early pick_next_task_fair()
    preempt: Reorganize the notrace definitions a bit
    preempt: Use preempt_schedule_context() as the official tracing preemption point
    sched: Make preempt_schedule_context() function-tracing safe
    x86: Remove cpu_sibling_mask() and cpu_core_mask()
    x86: Replace cpu_**_mask() with topology_**_cpumask()
    ...

    Linus Torvalds
     

03 Jun, 2015

2 commits

  • strnlen_user() can return a number in the range 0 to count +
    sizeof(unsigned long) - 1. Clarify the comment at the top of the
    function so that users don't think the function returns at most count+1.

    Signed-off-by: Jan Kara
    [ Also added commentary about preferably not using this function ]
    Signed-off-by: Linus Torvalds

    Jan Kara
     
  • If the specified maximum length of the string is a multiple of
    sizeof(unsigned long), we would load one word past the specified
    maximum. If that word happens to be in the next page, we can hit a page
    fault although we were not expected to.

    Fix the off-by-one bug in the test whether we are at the end of the
    specified range.

    Signed-off-by: Jan Kara
    Cc: stable@vger.kernel.org
    Signed-off-by: Linus Torvalds

    Jan Kara
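
    Regarding the return-range clarification in the first entry above, a
    minimal sketch of how callers are expected to consume the result (loosely
    modelled on strndup_user(); check_user_string() and the error codes are
    illustrative):

        static long check_user_string(const char __user *ustr, long count)
        {
                long len = strnlen_user(ustr, count);

                if (!len)               /* 0 means a fault while reading userspace */
                        return -EFAULT;
                if (len > count)        /* no NUL terminator within 'count' bytes */
                        return -EINVAL;
                return len;             /* length including the terminating NUL */
        }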
     

19 May, 2015

1 commit

  • In general, non-atomic variants of user access functions must not sleep
    if pagefaults are disabled.

    Let's update all relevant comments in uaccess code. This also reflects
    the might_sleep() checks in might_fault().

    Reviewed-and-tested-by: Thomas Gleixner
    Signed-off-by: David Hildenbrand
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: David.Laight@ACULAB.COM
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: airlied@linux.ie
    Cc: akpm@linux-foundation.org
    Cc: benh@kernel.crashing.org
    Cc: bigeasy@linutronix.de
    Cc: borntraeger@de.ibm.com
    Cc: daniel.vetter@intel.com
    Cc: heiko.carstens@de.ibm.com
    Cc: herbert@gondor.apana.org.au
    Cc: hocko@suse.cz
    Cc: hughd@google.com
    Cc: mst@redhat.com
    Cc: paulus@samba.org
    Cc: ralf@linux-mips.org
    Cc: schwidefsky@de.ibm.com
    Cc: yang.shi@windriver.com
    Link: http://lkml.kernel.org/r/1431359540-32227-4-git-send-email-dahi@linux.vnet.ibm.com
    Signed-off-by: Ingo Molnar

    David Hildenbrand
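
    A minimal sketch of the rule these comments document (read_user_nosleep()
    is hypothetical): with pagefaults disabled, the non-atomic user access
    helpers do not sleep; they fail early and the caller needs a sleeping
    fallback.

        static long read_user_nosleep(void *dst, const void __user *usrc,
                                      unsigned long len)
        {
                unsigned long left;

                /* With pagefaults disabled the copy never sleeps; an
                 * unresolvable fault makes it return early instead. */
                pagefault_disable();
                left = __copy_from_user_inatomic(dst, usrc, len);
                pagefault_enable();

                if (left)       /* fall back to a path that may sleep */
                        left = copy_from_user(dst, usrc, len);

                return left ? -EFAULT : 0;
        }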
     

28 May, 2012

1 commit


27 May, 2012

1 commit

  • This adds a new generic optimized strnlen_user() function that uses the
    <asm/word-at-a-time.h> infrastructure to portably do efficient string
    handling.

    In many ways, strnlen is much simpler than strncpy, and in particular we
    can always pre-align the words we load from memory. That means that all
    the worries about alignment etc are a non-issue, so this one can easily
    be used on any architecture. You obviously do have to provide the
    appropriate <asm/word-at-a-time.h> macros.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
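
    A minimal sketch of the <asm/word-at-a-time.h> idea, applied here to a
    word-aligned, NUL-terminated kernel string (the kernel's actual
    do_strnlen_user() additionally handles the user access window, the
    initial alignment, and the 'max' bound):

        #include <asm/word-at-a-time.h>

        static long wordwise_strlen(const unsigned long *p)
        {
                const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
                long res = 0;

                for (;;) {
                        unsigned long c = *p++;
                        unsigned long data;

                        /* Check a whole word at a time for a zero byte. */
                        if (has_zero(c, &data, &constants)) {
                                data = prep_zero_mask(c, data, &constants);
                                data = create_zero_mask(data);
                                return res + find_zero(data);
                        }
                        res += sizeof(unsigned long);
                }
        }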