09 Aug, 2016

1 commit

  • When I initially added the unsafe_[get|put]_user() helpers in commit
    5b24a7a2aa20 ("Add 'unsafe' user access functions for batched
    accesses"), I made the mistake of modeling the interface on our
    traditional __[get|put]_user() functions, which return zero on success,
    or -EFAULT on failure.

    That interface is fairly easy to use, but it's actually fairly nasty for
    good code generation, since it essentially forces the caller to check
    the error value for each access.

    In particular, since the error handling is already internally
    implemented with an exception handler, and we already use "asm goto" for
    various other things, we could fairly easily make the error cases just
    jump directly to an error label instead, and avoid the need for explicit
    checking after each operation.

    So switch the interface to pass in an error label, rather than checking
    the error value in the caller. Best do it now before we start growing
    more users (the signal handling code in particular would be a good place
    to use the new interface).

    So rather than

        if (unsafe_get_user(x, ptr))
                ... handle error ...

    the interface is now

        unsafe_get_user(x, ptr, label);

    where an error during the user mode fetch will now just cause a jump to
    'label' in the caller (a fuller usage sketch follows this entry).

    Right now the actual _implementation_ of this all still ends up being a
    "if (err) goto label", and does not take advantage of any exception
    label tricks, but for "unsafe_put_user()" in particular it should be
    fairly straightforward to convert to using the exception table model.

    Note that "unsafe_get_user()" is much harder to convert to a clever
    exception table model, because current versions of gcc do not allow the
    use of "asm goto" (for the exception) with output values (for the actual
    value to be fetched). But that is hopefully not a limitation in the
    long term.

    [ Also note that it might be a good idea to switch unsafe_get_user() to
    actually _return_ the value it fetches from user space, but this
    commit only changes the error handling semantics ]

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
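    For context, a minimal sketch of how the label-based interface is meant
    to be used in a batched sequence. The function, structure and field
    names below are made up for illustration; user_access_begin() /
    user_access_end() come from the original commit and, at this point in
    time, took no arguments and still required a separate access_ok() check:

        #include <linux/uaccess.h>

        struct two_words {
                unsigned long a, b;
        };

        /* Hypothetical caller: fetch two fields with a single STAC/CLAC pair. */
        static int fetch_two(const struct two_words __user *up,
                             unsigned long *a, unsigned long *b)
        {
                if (!access_ok(VERIFY_READ, up, sizeof(*up)))
                        return -EFAULT;

                user_access_begin();                    /* one STAC for the whole run */
                unsafe_get_user(*a, &up->a, Efault);    /* a fault jumps straight to Efault */
                unsafe_get_user(*b, &up->b, Efault);
                user_access_end();                      /* one CLAC */
                return 0;

        Efault:
                user_access_end();
                return -EFAULT;
        }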

18 Dec, 2015

1 commit

  • This converts the generic user string functions to use the batched user
    access functions.

    It makes a big difference on Broadwell, which is the first x86
    microarchitecture to implement SMAP. The STAC/CLAC instructions are not
    very fast, and doing them for each access inside the loop that copies
    strings from user space (which is what the pathname handling does for
    every pathname the kernel uses, for example) is very inefficient. (An
    illustrative loop follows this entry.)

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
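    The shape of that conversion, reduced to an illustrative loop (the real
    changes live in lib/strncpy_from_user.c and lib/strnlen_user.c; the
    function below is made up). It uses the original error-return form of
    unsafe_get_user() that this commit worked with; the Aug 2016 entry above
    later changed it to take a jump label. The point is that STAC/CLAC
    happen once around the whole loop instead of once per word:

        #include <linux/uaccess.h>

        /* Hypothetical example: sum 'count' words from user space. */
        static long sum_user_words(const unsigned long __user *src,
                                   unsigned long count, unsigned long *sum)
        {
                unsigned long i, v, total = 0;

                if (!access_ok(VERIFY_READ, src, count * sizeof(*src)))
                        return -EFAULT;

                user_access_begin();                    /* one STAC */
                for (i = 0; i < count; i++) {
                        if (unlikely(unsafe_get_user(v, &src[i])))
                                goto efault;            /* pre-2016 interface: returns -EFAULT */
                        total += v;
                }
                user_access_end();                      /* one CLAC */
                *sum = total;
                return 0;

        efault:
                user_access_end();
                return -EFAULT;
        }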

23 Jun, 2015

1 commit

  • Pull scheduler updates from Ingo Molnar:
    "The main changes are:

    - lockless wakeup support for futexes and IPC message queues
      (Davidlohr Bueso, Peter Zijlstra)

    - Replace spinlocks with atomics in thread_group_cputimer(), to
      improve scalability (Jason Low)

    - NUMA balancing improvements (Rik van Riel)

    - SCHED_DEADLINE improvements (Wanpeng Li)

    - clean up and reorganize preemption helpers (Frederic Weisbecker)

    - decouple page fault disabling machinery from the preemption
      counter, to improve debuggability and robustness (David Hildenbrand)

    - SCHED_DEADLINE documentation updates (Luca Abeni)

    - topology CPU masks cleanups (Bartosz Golaszewski)

    - /proc/sched_debug improvements (Srikar Dronamraju)"

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (79 commits)
    sched/deadline: Remove needless parameter in dl_runtime_exceeded()
    sched: Remove superfluous resetting of the p->dl_throttled flag
    sched/deadline: Drop duplicate init_sched_dl_class() declaration
    sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target
    sched/deadline: Make init_sched_dl_class() __init
    sched/deadline: Optimize pull_dl_task()
    sched/preempt: Add static_key() to preempt_notifiers
    sched/preempt: Fix preempt notifiers documentation about hlist_del() within unsafe iteration
    sched/stop_machine: Fix deadlock between multiple stop_two_cpus()
    sched/debug: Add sum_sleep_runtime to /proc/<pid>/sched
    sched/debug: Replace vruntime with wait_sum in /proc/sched_debug
    sched/debug: Properly format runnable tasks in /proc/sched_debug
    sched/numa: Only consider less busy nodes as numa balancing destinations
    Revert 095bebf61a46 ("sched/numa: Do not move past the balance point if unbalanced")
    sched/fair: Prevent throttling in early pick_next_task_fair()
    preempt: Reorganize the notrace definitions a bit
    preempt: Use preempt_schedule_context() as the official tracing preemption point
    sched: Make preempt_schedule_context() function-tracing safe
    x86: Remove cpu_sibling_mask() and cpu_core_mask()
    x86: Replace cpu_**_mask() with topology_**_cpumask()
    ...

    Linus Torvalds
     

03 Jun, 2015

2 commits

  • strnlen_user() can return a number in the range from 0 to count +
    sizeof(unsigned long) - 1. Clarify the comment at the top of the
    function so that users don't think the function returns at most count+1.
    (A usage sketch follows these two entries.)

    Signed-off-by: Jan Kara
    [ Also added commentary about preferably not using this function ]
    Signed-off-by: Linus Torvalds

    Jan Kara
     
  • If the specified maximum length of the string is a multiple of
    sizeof(unsigned long), we would load one word beyond the specified
    maximum. If that word happens to lie in the next page, we can hit a
    page fault even though we were not expected to touch that memory.

    Fix the off-by-one bug in the test for whether we are at the end of the
    specified range. (An illustration of this boundary condition follows
    the usage sketch below.)

    Signed-off-by: Jan Kara
    Cc: stable@vger.kernel.org
    Signed-off-by: Linus Torvalds

    Jan Kara
     
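    A usage sketch of what the clarified return-value range means for
    callers (the function name and error codes below are illustrative, not
    taken from any particular kernel user):

        #include <linux/uaccess.h>

        /*
         * strnlen_user() returns 0 if it faulted, the string length
         * *including* the terminating NUL if the string fits, and a value
         * larger than 'max' (possibly as large as max + sizeof(unsigned
         * long) - 1) if no NUL was found within 'max' bytes.
         */
        static long checked_name_len(const char __user *uname, long max)
        {
                long len = strnlen_user(uname, max);

                if (len == 0)
                        return -EFAULT;         /* fault while reading user memory */
                if (len > max)
                        return -ENAMETOOLONG;   /* no NUL within 'max' bytes */
                return len;                     /* length including the trailing NUL */
        }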
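
    And a userspace illustration of the boundary condition in the fix above
    (this is not the kernel code; the function name is made up, and 'max' is
    assumed to be a multiple of sizeof(unsigned long), which is exactly the
    case the fix is about). With max == 16 and 8-byte words the scan may
    load at offsets 0 and 8, but must never load at offset 16; a "<=" style
    end test would issue that extra load, which can cross into the next page:

        #include <stddef.h>
        #include <string.h>

        static size_t scan_words(const char *buf, size_t max)
        {
                size_t off = 0;

                while (off < max) {             /* strict test: no load at offset 'max' */
                        unsigned long w;
                        size_t i;

                        memcpy(&w, buf + off, sizeof(w));       /* one word-sized load */
                        for (i = 0; i < sizeof(w); i++)
                                if (!((unsigned char *)&w)[i])
                                        return off + i;         /* offset of the first NUL */
                        off += sizeof(w);
                }
                return max;                     /* no NUL within 'max' bytes */
        }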

19 May, 2015

1 commit

  • In general, non-atomic variants of user access functions must not sleep
    if pagefaults are disabled.

    Let's update all relevant comments in uaccess code. This also reflects
    the might_sleep() checks in might_fault(). (A sketch of the
    disable/enable pattern these comments describe follows this entry.)

    Reviewed-and-tested-by: Thomas Gleixner
    Signed-off-by: David Hildenbrand
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: David.Laight@ACULAB.COM
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: airlied@linux.ie
    Cc: akpm@linux-foundation.org
    Cc: benh@kernel.crashing.org
    Cc: bigeasy@linutronix.de
    Cc: borntraeger@de.ibm.com
    Cc: daniel.vetter@intel.com
    Cc: heiko.carstens@de.ibm.com
    Cc: herbert@gondor.apana.org.au
    Cc: hocko@suse.cz
    Cc: hughd@google.com
    Cc: mst@redhat.com
    Cc: paulus@samba.org
    Cc: ralf@linux-mips.org
    Cc: schwidefsky@de.ibm.com
    Cc: yang.shi@windriver.com
    Link: http://lkml.kernel.org/r/1431359540-32227-4-git-send-email-dahi@linux.vnet.ibm.com
    Signed-off-by: Ingo Molnar

    David Hildenbrand
     
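    A sketch of the pattern those comments describe (the helper below is
    illustrative; only pagefault_disable()/pagefault_enable(),
    __copy_from_user_inatomic() and copy_from_user() are real interfaces):

        #include <linux/uaccess.h>

        /* Hypothetical helper: try a non-sleeping read first, then fall
         * back to the sleeping variant from a context that may block. */
        static int read_user_value(const unsigned int __user *uaddr,
                                   unsigned int *val)
        {
                unsigned long left;

                pagefault_disable();
                /* Must not sleep: on a missing page this simply fails. */
                left = __copy_from_user_inatomic(val, uaddr, sizeof(*val));
                pagefault_enable();
                if (!left)
                        return 0;

                /* Slow path: may sleep and fault the page in. */
                if (copy_from_user(val, uaddr, sizeof(*val)))
                        return -EFAULT;
                return 0;
        }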

27 May, 2012

1 commit

  • This adds a new generic optimized strnlen_user() function that uses the
    <asm/word-at-a-time.h> infrastructure to portably do efficient string
    handling.

    In many ways, strnlen is much simpler than strncpy, and in particular we
    can always pre-align the words we load from memory. That means that all
    the worries about alignment etc are a non-issue, so this one can easily
    be used on any architecture. You obviously do have to provide the
    appropriate word-at-a-time.h macros for your architecture. (A userspace
    sketch of the zero-byte trick those macros implement follows this
    entry.)

    Signed-off-by: Linus Torvalds

    Linus Torvalds
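
    A userspace sketch of the zero-byte test that <asm/word-at-a-time.h>
    supplies to this function (the classic bit trick, shown here only as a
    boolean check; the constants assume a 64-bit unsigned long, and this is
    not the kernel header itself):

        #include <stdio.h>

        #define ONE_BYTES  0x0101010101010101UL
        #define HIGH_BITS  0x8080808080808080UL

        /* Non-zero iff at least one byte of 'v' is 0x00. */
        static unsigned long has_zero(unsigned long v)
        {
                return (v - ONE_BYTES) & ~v & HIGH_BITS;
        }

        int main(void)
        {
                printf("%d\n", has_zero(0x6162636465666768UL) != 0);   /* "abcdefgh" -> 0 */
                printf("%d\n", has_zero(0x6162630065666768UL) != 0);   /* embedded NUL -> 1 */
                return 0;
        }

    This is why the generic strnlen_user() can examine a whole word per
    iteration instead of one byte at a time, with the word boundaries fixed
    by pre-aligning the loads as the commit message describes.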