04 Dec, 2015

12 commits

  • In an overcommitted guest where some vCPUs have to be halted to make
    forward progress in other areas, it is highly likely that a vCPU later
    in the spinlock queue will be spinning while the ones earlier in the
    queue would have been halted. The spinning in the later vCPUs is then
    just a waste of precious CPU cycles because they are not going to
    get the lock anytime soon, as the earlier ones have to be woken up and
    take their turn to get the lock.

    This patch implements an adaptive spinning mechanism where the vCPU
    will call pv_wait() if the previous vCPU is not running.
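
    A rough sketch of the idea (hypothetical and heavily simplified; the
    real patch tracks the previous node's PV state, and vcpu_is_running(),
    node->state and prev->cpu below are illustrative only):

        /*
         * Sketch only: spin waiting for our MCS node to be handed the lock,
         * but stop spinning and halt via pv_wait() once the previous vCPU
         * in the queue is seen not to be running -- it must run before we
         * can ever get the lock, so further spinning is wasted work.
         */
        for (;;) {
                int loop;

                for (loop = SPIN_THRESHOLD; loop; loop--) {
                        if (READ_ONCE(node->locked))
                                return;                 /* lock handed to us */
                        if (!vcpu_is_running(prev->cpu))
                                break;                  /* adaptive: stop spinning early */
                        cpu_relax();
                }

                pv_wait(&node->state, vcpu_halted);     /* halt until kicked */
        }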

    Linux kernel builds were run in KVM guest on an 8-socket, 4
    cores/socket Westmere-EX system and a 4-socket, 8 cores/socket
    Haswell-EX system. Both systems are configured to have 32 physical
    CPUs. The kernel build times before and after the patch were:

                      Westmere               Haswell
    Patch          32 vCPUs  48 vCPUs   32 vCPUs  48 vCPUs
    -----          --------  --------   --------  --------
    Before patch    3m02.3s   5m00.2s    1m43.7s   3m03.5s
    After patch     3m03.0s   4m37.5s    1m43.0s   2m47.2s

    For 32 vCPUs, this patch doesn't cause any noticeable change in
    performance. For 48 vCPUs (over-committed), there is about 8%
    performance improvement.

    Signed-off-by: Waiman Long
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Douglas Hatch
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Scott J Norton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1447114167-47185-8-git-send-email-Waiman.Long@hpe.com
    Signed-off-by: Ingo Molnar

    Waiman Long
     
  • This patch allows one attempt for the lock waiter to steal the lock
    when entering the PV slowpath. To prevent lock starvation, the pending
    bit will be set by the queue head vCPU when it is in the active lock
    spinning loop to disable any lock stealing attempt. This helps to
    reduce the performance penalty caused by lock waiter preemption while
    not having much of the downsides of a real unfair lock.

    The pv_wait_head() function was renamed to pv_wait_head_or_lock(),
    as it was modified to acquire the lock before returning. This is
    necessary because of possible lock stealing attempts from other tasks.
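
    A minimal sketch of such a stealing attempt, assuming qspinlock-style
    field names (illustrative, not the literal patch):

        /*
         * One-shot lock stealing attempt on PV slowpath entry. Stealing is
         * only allowed while neither the locked byte nor the pending bit is
         * set; the queue head sets the pending bit while actively spinning,
         * which shuts this path off and prevents lock starvation.
         */
        static inline bool pv_steal_lock_sketch(struct qspinlock *lock)
        {
                int val = atomic_read(&lock->val);

                return !(val & _Q_LOCKED_PENDING_MASK) &&
                       atomic_cmpxchg_acquire(&lock->val, val,
                                              val | _Q_LOCKED_VAL) == val;
        }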

    Linux kernel builds were run in KVM guest on an 8-socket, 4
    cores/socket Westmere-EX system and a 4-socket, 8 cores/socket
    Haswell-EX system. Both systems are configured to have 32 physical
    CPUs. The kernel build times before and after the patch were:

                      Westmere               Haswell
    Patch          32 vCPUs  48 vCPUs   32 vCPUs  48 vCPUs
    -----          --------  --------   --------  --------
    Before patch    3m15.6s  10m56.1s    1m44.1s   5m29.1s
    After patch     3m02.3s   5m00.2s    1m43.7s   3m03.5s

    For the overcommitted case (48 vCPUs), this patch is able to reduce
    the kernel build time by more than 54% for Westmere and 44% for Haswell.

    Signed-off-by: Waiman Long
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Douglas Hatch
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Scott J Norton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1447190336-53317-1-git-send-email-Waiman.Long@hpe.com
    Signed-off-by: Ingo Molnar

    Waiman Long
     
  • This patch enables the accumulation of kicking and waiting related
    PV qspinlock statistics when the new QUEUED_LOCK_STAT configuration
    option is selected. It also enables the collection of data which
    enables us to calculate the kicking and wakeup latencies, which have
    a heavy dependency on the CPUs being used.

    The statistical counters are per-cpu variables to minimize the
    performance overhead in their updates. These counters are exported
    via the debugfs filesystem under the qlockstat directory. When the
    corresponding debugfs files are read, summation and computing of the
    required data are then performed.
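
    Conceptually the counters look roughly like this (a hypothetical
    sketch; enum pv_qlock_stat, pvstat_num and the function names are
    illustrative):

        /* Per-cpu counters: updates stay cheap, summation happens on read. */
        static DEFINE_PER_CPU(unsigned long, pv_stats[pvstat_num]);

        static inline void pvstat_inc(enum pv_qlock_stat stat)
        {
                this_cpu_inc(pv_stats[stat]);   /* no atomics, no cache bouncing */
        }

        /* Called when the corresponding debugfs file under qlockstat is read. */
        static u64 pvstat_sum(enum pv_qlock_stat stat)
        {
                u64 sum = 0;
                int cpu;

                for_each_possible_cpu(cpu)
                        sum += per_cpu(pv_stats[stat], cpu);
                return sum;
        }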

    The measured latencies for different CPUs are:

    CPU            Wakeup    Kicking
    ---            ------    -------
    Haswell-EX     63.6us     7.4us
    Westmere-EX    67.6us     9.3us

    The measured latencies varied a bit from run-to-run. The wakeup
    latency is much higher than the kicking latency.

    A sample of statistical counters after system bootup (with vCPU
    overcommit) was:

    pv_hash_hops=1.00
    pv_kick_unlock=1148
    pv_kick_wake=1146
    pv_latency_kick=11040
    pv_latency_wake=194840
    pv_spurious_wakeup=7
    pv_wait_again=4
    pv_wait_head=23
    pv_wait_node=1129

    Signed-off-by: Waiman Long
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Douglas Hatch
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Scott J Norton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1447114167-47185-6-git-send-email-Waiman.Long@hpe.com
    Signed-off-by: Ingo Molnar

    Waiman Long
     
  • These are some notes on the scheduler locking and how it provides
    program order guarantees on SMP systems.

    ( This commit is in the locking tree, because the new documentation
    refers to a newly introduced locking primitive. )

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Boqun Feng
    Cc: David Howells
    Cc: Jonathan Corbet
    Cc: Linus Torvalds
    Cc: Michal Hocko
    Cc: Mike Galbraith
    Cc: Oleg Nesterov
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Will Deacon
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Introduce smp_cond_acquire() which combines a control dependency and a
    read barrier to form acquire semantics.

    This primitive has two benefits:

    - it documents control dependencies,
    - it's typically cheaper than using smp_load_acquire() in a loop.
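
    The definition amounts to a spin on the condition followed by a read
    barrier, roughly:

        #define smp_cond_acquire(cond) do {             \
                while (!(cond))                         \
                        cpu_relax();                    \
                smp_rmb(); /* ctrl + rmb := acquire */  \
        } while (0)

    A typical use is waiting for a flag written by another CPU, for
    example smp_cond_acquire(!READ_ONCE(p->on_cpu));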

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • So we want to change a locking API, but the scheduler uses it, and a conflict
    is generated by a recent scheduler fix.

    Pick up the pending scheduler fixes to make life easier.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • Oleg noticed that it's possible to falsely observe p->on_cpu == 0 such
    that we'll prematurely continue with the wakeup and effectively run p on
    two CPUs at the same time.

    Even though the overlap is very limited (the task is in the middle of
    being scheduled out), it could still result in corruption of the
    scheduler data structures.

    CPU0                                CPU1

                                        set_current_state(...)

                                        context_switch(X, Y)
                                          prepare_lock_switch(Y)
                                            Y->on_cpu = 1;
                                          finish_lock_switch(X)
                                            store_release(X->on_cpu, 0);

    try_to_wake_up(X)
      LOCK(p->pi_lock);

      t = X->on_cpu; // 0

                                        context_switch(Y, X)
                                          prepare_lock_switch(X)
                                            X->on_cpu = 1;
                                          finish_lock_switch(Y)
                                            store_release(Y->on_cpu, 0);

                                        schedule();
                                          deactivate_task(X);
                                          X->on_rq = 0;

      if (X->on_rq) // false

      if (t) while (X->on_cpu)
               cpu_relax();

                                        context_switch(X, ..)
                                          finish_lock_switch(X)
                                            store_release(X->on_cpu, 0);

    Avoid the load of X->on_cpu being hoisted over the X->on_rq load.
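
    Sketched (simplified), the fix is an smp_rmb() between the two loads
    in try_to_wake_up():

        /* in try_to_wake_up(), with p->pi_lock held */
        if (p->on_rq && ttwu_remote(p, wake_flags))
                goto stat;

        /*
         * Ensure we load p->on_cpu _after_ p->on_rq; otherwise the
         * p->on_cpu load could be hoisted and we could falsely observe
         * p->on_cpu == 0 while the task is still being scheduled out.
         */
        smp_rmb();

        /* Wait until the owning CPU is done referencing the task. */
        while (p->on_cpu)
                cpu_relax();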

    Reported-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Explain how the control dependency and smp_rmb() end up providing
    ACQUIRE semantics and pair with smp_store_release() in
    finish_lock_switch().

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Oleg Nesterov
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • /proc/stats shows an invalid gtime when the thread is running in a
    guest. When vtime accounting is not enabled, we cannot get a valid
    delta. The delta is calculated with now - tsk->vtime_snap, but
    tsk->vtime_snap is only updated when vtime accounting is enabled at
    runtime.

    This patch makes task_gtime() just return gtime, without computing the
    bogus, non-existent tickless delta, when vtime accounting is not
    enabled.

    Use context_tracking_is_enabled() to check whether vtime accounting is
    enabled on some CPU; only in that case do we need to check the tickless
    delta. This fixes the gtime value regression on machines not running
    nohz full.
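
    A simplified sketch of the resulting logic (ignoring the seqlock that
    protects the vtime fields; vtime_delta() stands for the internal
    helper that computes the delta since vtime_snap):

        cputime_t task_gtime(struct task_struct *t)
        {
                cputime_t gtime = t->gtime;

                /*
                 * Without vtime accounting anywhere, vtime_snap is never
                 * updated, so there is no tickless delta to add.
                 */
                if (!context_tracking_is_enabled())
                        return gtime;

                /* Only add the in-flight delta while running guest code. */
                if (t->vtime_snap_whence == VTIME_SYS && (t->flags & PF_VCPU))
                        gtime += vtime_delta(t);

                return gtime;
        }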

    The kernel config contains CONFIG_VIRT_CPU_ACCOUNTING_GEN=y and
    CONFIG_NO_HZ_FULL_ALL=n, and the machine was booted without nohz_full.

    I ran and then stopped a busy loop in a VM and watched the gtime on the
    host, dumping the 43rd field (which shows the gtime) once per second:

    # while :; do awk '{print $3" "$43}' /proc/3955/task/4014/stat; sleep 1; done
    S 4348
    R 7064566
    R 7064766
    R 7064967
    R 7065168
    S 4759
    S 4759

    While the busy loop is running, it returns a huge value.

    After applying this patch, we can see the right gtime:

    # while :; do awk '{print $3" "$43}' /proc/10913/task/10956/stat; sleep 1; done
    S 5338
    R 5365
    R 5465
    R 5566
    R 5666
    S 5726
    S 5726

    Signed-off-by: Hiroshi Shimamoto
    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Chris Metcalf
    Cc: Christoph Lameter
    Cc: Linus Torvalds
    Cc: Luiz Capitulino
    Cc: Mike Galbraith
    Cc: Paul E . McKenney
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Rik van Riel
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1447948054-28668-2-git-send-email-fweisbec@gmail.com
    Signed-off-by: Ingo Molnar

    Hiroshi Shimamoto
     
  • root_domain::rto_mask allocated through alloc_cpumask_var()
    contains garbage data, and this may cause problems. For instance,
    when doing pull_rt_task(), it may do useless iterations if rto_mask
    retains some extra garbage bits. Worse still, this violates the
    isolated-domain rule for clustered scheduling using cpuset, because
    tasks (with all CPUs allowed) belonging to one root domain can be
    pulled away into another root domain.

    The patch cleans up the garbage by using zalloc_cpumask_var() instead
    of alloc_cpumask_var() for the root_domain::rto_mask allocation,
    thereby addressing the issue.

    Do the same thing for root_domain's other cpumask members: dlo_mask,
    span, and online.
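
    Illustratively, the root domain cpumask allocations end up along these
    lines (a sketch of the change, not the full init_rootdomain()):

        /* Zeroed allocations: no stale bits are left behind in the masks. */
        if (!zalloc_cpumask_var(&rd->span, GFP_KERNEL))
                goto out;
        if (!zalloc_cpumask_var(&rd->online, GFP_KERNEL))
                goto free_span;
        if (!zalloc_cpumask_var(&rd->dlo_mask, GFP_KERNEL))
                goto free_online;
        if (!zalloc_cpumask_var(&rd->rto_mask, GFP_KERNEL))
                goto free_dlo_mask;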

    Signed-off-by: Xunlei Pang
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1449057179-29321-1-git-send-email-xlpang@redhat.com
    Signed-off-by: Ingo Molnar

    Xunlei Pang
     
  • Because wakeups can (fundamentally) be late, a task might not be in
    the expected state. Therefore testing against a task's state is racy,
    and can yield false positives.

    Signed-off-by: Sasha Levin
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: oleg@redhat.com
    Fixes: 9067ac85d533 ("wake_up_process() should be never used to wakeup a TASK_STOPPED/TRACED task")
    Link: http://lkml.kernel.org/r/1448933660-23082-1-git-send-email-sasha.levin@oracle.com
    Signed-off-by: Ingo Molnar

    Sasha Levin
     
  • Vladimir reported getting RCU stall warnings and bisected it back to
    commit:

    743162013d40 ("sched: Remove proliferation of wait_on_bit() action functions")

    That commit inadvertently reversed the calls to schedule() and
    signal_pending(), thereby not handling the case where a signal is
    received while we sleep.
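
    The fix restores the intended order: sleep first, then check whether a
    signal arrived while we slept. A simplified sketch of one of the bit
    wait helpers (signature approximate):

        __sched int bit_wait(struct wait_bit_key *word)
        {
                schedule();                     /* sleep until woken */
                if (signal_pending(current))    /* woken by a signal? */
                        return -EINTR;
                return 0;
        }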

    Reported-by: Vladimir Murzin
    Tested-by: Vladimir Murzin
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: mark.rutland@arm.com
    Cc: neilb@suse.de
    Cc: oleg@redhat.com
    Fixes: 743162013d40 ("sched: Remove proliferation of wait_on_bit() action functions")
    Fixes: cbbce8220949 ("SCHED: add some "wait..on_bit...timeout()" interfaces.")
    Link: http://lkml.kernel.org/r/20151201130404.GL3816@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

23 Nov, 2015

20 commits

  • The unlock function in queued spinlocks was optimized for better
    performance on bare metal systems at the expense of virtualized guests.

    For x86-64 systems, the unlock call needs to go through a
    PV_CALLEE_SAVE_REGS_THUNK() which saves and restores 8 64-bit
    registers before calling the real __pv_queued_spin_unlock()
    function. The thunk code may also be in a separate cacheline from
    __pv_queued_spin_unlock().

    This patch optimizes the PV unlock code path by:

    1) Moving the unlock slowpath code from the fastpath into a separate
    __pv_queued_spin_unlock_slowpath() function to make the fastpath as
    simple as possible (a simplified sketch of the split follows below).

    2) For x86-64, hand-coding an assembly function to combine the register
    saving thunk code with the fastpath code. Only registers that are used
    in the fastpath will be saved and restored. If the fastpath fails, the
    slowpath function will be called via another PV_CALLEE_SAVE_REGS_THUNK().
    For 32-bit, it falls back to the C __pv_queued_spin_unlock() code as the
    thunk saves and restores only one 32-bit register.
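
    A simplified C sketch of the split (the actual x86-64 version open-codes
    this fastpath in assembly together with the minimal register saving;
    names follow the description above):

        __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
        {
                struct __qspinlock *l = (void *)lock;   /* byte-addressable view */
                u8 locked;

                /*
                 * Fastpath: try to clear the locked byte. If it still held
                 * the plain locked value, no waiter was halted and we are done.
                 */
                locked = cmpxchg_release(&l->locked, _Q_LOCKED_VAL, 0);
                if (likely(locked == _Q_LOCKED_VAL))
                        return;

                /* A waiter marked itself halted: go find and kick its vCPU. */
                __pv_queued_spin_unlock_slowpath(lock, locked);
        }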

    With a 5M lock-unlock loop microbenchmark, the table below shows
    the execution times before and after the patch with different numbers
    of threads in a VM running on a 32-core Westmere-EX box with x86-64
    4.2-rc1 based kernels:

    Threads    Before patch    After patch    % Change
    -------    ------------    -----------    --------
       1          134.1 ms       119.3 ms       -11%
       2           1286 ms         953 ms       -26%
       3           3715 ms        3480 ms       -6.3%
       4           4092 ms        3764 ms       -8.0%

    Signed-off-by: Waiman Long
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Douglas Hatch
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Scott J Norton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1447114167-47185-5-git-send-email-Waiman.Long@hpe.com
    Signed-off-by: Ingo Molnar

    Waiman Long
     
  • With optimistic prefetch of the next node cacheline, the next pointer
    may already have been properly initialized. As a result, the read of
    node->next in the contended path may be redundant. This patch
    eliminates the redundant read if the next pointer value is not NULL.
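
    The change amounts to skipping the re-read when a value is already at
    hand, roughly (a sketch of the contended handoff path):

        /*
         * 'next' may already be known from the earlier prefetch of the next
         * node's cacheline; only spin re-reading node->next while it is NULL.
         */
        if (!next)
                while (!(next = READ_ONCE(node->next)))
                        cpu_relax();

        arch_mcs_spin_unlock_contended(&next->locked);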

    Signed-off-by: Waiman Long
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Douglas Hatch
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Scott J Norton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1447114167-47185-4-git-send-email-Waiman.Long@hpe.com
    Signed-off-by: Ingo Molnar

    Waiman Long
     
  • A queue head CPU, after acquiring the lock, will have to notify
    the next CPU in the wait queue that it has become the new queue
    head. This involves loading a new cacheline from the MCS node of the
    next CPU. That operation can be expensive and adds to the latency of
    the locking operation.

    This patch adds code to optimistically prefetch the next MCS node
    cacheline if the next pointer is defined and the CPU has been spinning
    on the MCS lock for a while. This reduces the locking latency and
    improves the system throughput.
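
    In essence the addition is a guarded prefetch once we reach the head of
    the queue, something like:

        /*
         * We are now the queue head, about to spin for the lock. If a
         * successor is already queued, prefetch its MCS node cacheline so
         * that the later handoff does not stall on a cache miss.
         */
        next = READ_ONCE(node->next);
        if (next)
                prefetchw(next);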

    The performance change will depend on whether the prefetch overhead
    can be hidden within the latency of the lock spin loop. For really
    short critical sections, there may not be any performance gain at all.
    With longer critical sections, however, it was found to give a
    performance boost of 5-10% over a range of different queue depths with
    a spinlock loop microbenchmark.

    Signed-off-by: Waiman Long
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Douglas Hatch
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Scott J Norton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1447114167-47185-3-git-send-email-Waiman.Long@hpe.com
    Signed-off-by: Ingo Molnar

    Waiman Long
     
  • This patch replaces the cmpxchg() and xchg() calls in the native
    qspinlock code with the more relaxed _acquire or _release versions of
    those calls to enable other architectures to adopt queued spinlocks
    with less memory barrier performance overhead.
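
    For example, this is the kind of substitution made (illustrative):

        /* Before: fully ordered read-modify-write */
        old = atomic_cmpxchg(&lock->val, val, new);

        /* After: ACQUIRE ordering is all that taking the lock requires,
         * so weakly ordered architectures can drop the trailing barrier. */
        old = atomic_cmpxchg_acquire(&lock->val, val, new);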

    Signed-off-by: Waiman Long
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Douglas Hatch
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Scott J Norton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1447114167-47185-2-git-send-email-Waiman.Long@hpe.com
    Signed-off-by: Ingo Molnar

    Waiman Long
     
  • Some atomic operations now have _relaxed/acquire/release variants. This
    patch adds some trivial tests for two purposes:

    1. test the behavior of these new operations in a single-CPU
    environment;

    2. get their code generated before we actually use them anywhere, so
    that we can examine their assembly code.
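
    A trivial example of the kind of single-CPU check intended (hypothetical;
    the real tests are macro-generated):

        static __init void test_atomic_ordering_variants(void)
        {
                atomic_t v = ATOMIC_INIT(10);

                /* Each variant must still behave as a plain atomic op. */
                BUG_ON(atomic_xchg_relaxed(&v, 20) != 10);
                BUG_ON(atomic_cmpxchg_acquire(&v, 20, 30) != 20);
                BUG_ON(atomic_cmpxchg_release(&v, 30, 40) != 30);
                BUG_ON(atomic_read(&v) != 40);
        }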

    Signed-off-by: Boqun Feng
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Waiman Long
    Cc: Will Deacon
    Link: http://lkml.kernel.org/r/1446634365-25176-1-git-send-email-boqun.feng@gmail.com
    Signed-off-by: Ingo Molnar

    Boqun Feng
     
  • The push_irq_work_func() function is conditionally defined only
    when both CONFIG_SMP and HAVE_RT_PUSH_IPI are defined, but the
    forward declaration remains visible without HAVE_RT_PUSH_IPI,
    causing a gcc warning in ARM64 allnoconfig:

    kernel/sched/rt.c:68:13: warning: 'push_irq_work_func' declared 'static' but never defined [-Wunused-function]

    This changes the code to use the same condition for both the
    declaration and the function definition, which gets rid of the
    warning.
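
    The guard ends up having roughly this shape:

        /* Declare the IPI callback only when it can actually be defined. */
        #if defined(CONFIG_SMP) && defined(HAVE_RT_PUSH_IPI)
        static void push_irq_work_func(struct irq_work *work);
        #endif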

    As Peter Zijlstra suggested, we can possibly get rid of the whole
    HAVE_RT_PUSH_IPI thing after:

    8053871d0f7f ("smp: Fix smp_call_function_single_async() locking")

    Until that is done, this patch can be used to avoid the warning.

    Signed-off-by: Arnd Bergmann
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Steven Rostedt
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Fixes: b6366f048e0c ("sched/rt: Use IPI to trigger RT task push migration instead of pulling")
    Link: http://lkml.kernel.org/r/3828565.oKfGk7yNIT@wuerfel
    Signed-off-by: Ingo Molnar

    Arnd Bergmann
     
  • Linus Torvalds
     
  • Merge slub bulk allocator updates from Andrew Morton:
    "This missed the merge window because I was waiting for some repairs to
    come in. Nothing actually uses the bulk allocator yet and the changes
    to other code paths are pretty small. And the net guys are waiting
    for this so they can start merging the client code"

    More comments from Jesper Dangaard Brouer:
    "The kmem_cache_alloc_bulk() call, in mm/slub.c, was included in a
    previous kernel. The present version contains a bug: Vladimir
    Davydov noticed that it breaks when the kernel is compiled with
    CONFIG_MEMCG_KMEM (see commit 03ec0ed57ffc: "slub: fix kmem cgroup
    bug in kmem_cache_alloc_bulk"). Plus, the mem cgroup counterpart in
    kmem_cache_free_bulk() was missing (see commit 033745189b1b "slub:
    add missing kmem cgroup support to kmem_cache_free_bulk").

    I don't consider the fix stable-material because there are no in-tree
    users of the API.

    But with known bugs (for memcg) I cannot start using the API in the
    net-tree"

    * emailed patches from Andrew Morton :
    slab/slub: adjust kmem_cache_alloc_bulk API
    slub: add missing kmem cgroup support to kmem_cache_free_bulk
    slub: fix kmem cgroup bug in kmem_cache_alloc_bulk
    slub: optimize bulk slowpath free by detached freelist
    slub: support for bulk free with SLUB freelists

    Linus Torvalds
     
  • Pull tty/serial fixes from Greg KH:
    "Here are a few small tty/serial driver fixes for 4.4-rc2 that resolve
    some reported problems.

    All have been in linux-next, full details are in the shortlog below"

    * tag 'tty-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
    serial: export fsl8250_handle_irq
    serial: 8250_mid: Add missing dependency
    tty: audit: Fix audit source
    serial: etraxfs-uart: Fix crash
    serial: fsl_lpuart: Fix earlycon support
    bcm63xx_uart: Use the device name when registering an interrupt
    tty: Fix direct use of tty buffer work
    tty: Fix tty_send_xchar() lock order inversion

    Linus Torvalds
     
  • Pull staging/IIO fixes from Greg KH:
    "Here are some staging and iio driver fixes for 4.4-rc2. All of these
    are in response to issues that have been reported and have been in
    linux-next for a while"

    * tag 'staging-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging:
    Revert "Staging: wilc1000: coreconfigurator: Drop unneeded wrapper functions"
    iio: adc: xilinx: Fix VREFN scale
    iio: si7020: Swap data byte order
    iio: adc: vf610_adc: Fix division by zero error
    iio:ad7793: Fix ad7785 product ID
    iio: ad5064: Fix ad5629/ad5669 shift
    iio:ad5064: Make sure ad5064_i2c_write() returns 0 on success
    iio: lpc32xx_adc: fix warnings caused by enabling unprepared clock
    staging: iio: select IRQ_WORK for IIO_DUMMY_EVGEN
    vf610_adc: Fix internal temperature calculation

    Linus Torvalds
     
  • Pull USB fixes from Greg KH:
    "Here are a number of USB fixes and new device ids for 4.4-rc2. All
    have been in linux-next and the details are in the shortlog"

    * tag 'usb-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (28 commits)
    usblp: do not set TASK_INTERRUPTIBLE before lock
    USB: MAINTAINERS: cxacru
    usb: kconfig: fix warning of select USB_OTG
    USB: option: add XS Stick W100-2 from 4G Systems
    xhci: Fix a race in usb2 LPM resume, blocking U3 for usb2 devices
    usb: xhci: fix checking ep busy for CFC
    xhci: Workaround to get Intel xHCI reset working more reliably
    usb: chipidea: imx: fix a possible NULL dereference
    usb: chipidea: usbmisc_imx: fix a possible NULL dereference
    usb: chipidea: otg: gadget module load and unload support
    usb: chipidea: debug: disable usb irq while role switch
    ARM: dts: imx27.dtsi: change the clock information for usb
    usb: chipidea: imx: refine clock operations to adapt for all platforms
    usb: gadget: atmel_usba_udc: Expose correct device speed
    usb: musb: enable usb_dma parameter
    usb: phy: phy-mxs-usb: fix a possible NULL dereference
    usb: dwc3: gadget: let us set lower max_speed
    usb: musb: fix tx fifo flush handling
    usb: gadget: f_loopback: fix the warning during the enumeration
    usb: dwc2: host: Fix remote wakeup when not in DWC2_L2
    ...

    Linus Torvalds
     
  • Pull MIPS fixes from Ralf Baechle:

    - Fix a flood of annoying build warnings

    - A number of fixes for Atheros 79xx platforms

    * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
    MIPS: ath79: Add a machine entry for booting OF machines
    MIPS: ath79: Fix the size of the MISC INTC registers in ar9132.dtsi
    MIPS: ath79: Fix the DDR control initialization on ar71xx and ar934x
    MIPS: Fix flood of warnings about comparsion being always true.

    Linus Torvalds
     
  • Pull parisc update from Helge Deller:
    "This patchset adds Huge Page and HUGETLBFS support for parisc"

    Honestly, the hugepage support should have gone through in the merge
    window, and is not really an rc-time fix. But it only touches
    arch/parisc, and I cannot find it in myself to care. If one of the
    three parisc users notices a breakage, I will point at Helge and make
    rude farting noises.

    * 'parisc-4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
    parisc: Map kernel text and data on huge pages
    parisc: Add Huge Page and HUGETLBFS support
    parisc: Use long branch to do_syscall_trace_exit
    parisc: Increase initial kernel mapping to 32MB on 64bit kernel
    parisc: Initialize the fault vector earlier in the boot process.
    parisc: Add defines for Huge page support
    parisc: Drop unused MADV_xxxK_PAGES flags from asm/mman.h
    parisc: Drop definition of start_thread_som for HP-UX SOM binaries
    parisc: Fix wrong comment regarding first pmd entry flags

    Linus Torvalds
     
  • Pull perf tool fixes from Thomas Gleixner:
    "A couple of fixes for perf tools:

    - Build system updates

    - Plug a memory leak in an error path of perf probe

    - Tear down probes correctly when adding fails

    - Fixes to the perf symbol handling

    - Fix ordering of event processing in buildid-list

    - Fix per DSO filtering in the histogram browser"

    * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    perf probe: Clear probe_trace_event when add_probe_trace_event() fails
    perf probe: Fix memory leaking on failure by clearing all probe_trace_events
    perf inject: Also re-pipe lost_samples event
    perf buildid-list: Requires ordered events
    perf symbols: Fix dso lookup by long name and missing buildids
    perf symbols: Allow forcing reading of non-root owned files by root
    perf hists browser: The dso can be obtained from popup_action->ms.map->dso
    perf hists browser: Fix 'd' hotkey action to filter by DSO
    perf symbols: Rebuild rbtree when adjusting symbols for kcore
    tools: Add a "make all" rule
    tools: Actually install tmon in the install rule

    Linus Torvalds
     
  • Pull x86 fixes from Thomas Gleixner:
    "This update contains:

    - MPX updates for handling 32bit processes

    - A fix for a long standing bug in 32bit signal frame handling
    related to FPU/XSAVE state

    - Handle get_xsave_addr() correctly in KVM

    - Fix SMAP check under paravirtualization

    - Add a comment to the static function trace entry to avoid further
    confusion about the difference to dynamic tracing"

    * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/cpu: Fix SMAP check in PVOPS environments
    x86/ftrace: Add comment on static function tracing
    x86/fpu: Fix get_xsave_addr() behavior under virtualization
    x86/fpu: Fix 32-bit signal frame handling
    x86/mpx: Fix 32-bit address space calculation
    x86/mpx: Do proper get_user() when running 32-bit binaries on 64-bit kernels

    Linus Torvalds
     
  • Adjust kmem_cache_alloc_bulk API before we have any real users.

    Adjust the API to return type 'int' instead of the previous type
    'bool'. This is done to allow future extension of the bulk alloc API.

    A future extension could be to allow SLUB to stop at a page boundary,
    when specified by a flag, and then return the number of objects.

    The advantage of this approach is that it would make it easier to have
    bulk alloc run without local IRQs disabled, with a cmpxchg "stealing"
    the entire c->freelist or page->freelist. To avoid overshooting we
    would stop processing at a slab-page boundary; else we always end up
    returning some objects at the cost of another cmpxchg.

    To stay compatible with future users of this API linking against an
    older kernel when using the new flag, we need to return the number of
    allocated objects with this API change.
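
    The visible change is just the prototype (sketch):

        /* Previously returned bool; an int return leaves room to later
         * report the number of objects actually allocated. */
        int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
                                  size_t size, void **p);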

    Signed-off-by: Jesper Dangaard Brouer
    Cc: Vladimir Davydov
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jesper Dangaard Brouer
     
  • The initial implementation missed kmem cgroup support in the
    kmem_cache_free_bulk() call; add this.

    If CONFIG_MEMCG_KMEM is not enabled, the compiler should be smart
    enough not to add any asm code.

    Incoming bulk free objects can belong to different kmem cgroups, and
    the object free call can happen at a later point outside memcg context.
    Thus, we need to keep the original kmem_cache, to correctly verify that
    a memcg object matches against its "root_cache"
    (s->memcg_params.root_cache).

    Signed-off-by: Jesper Dangaard Brouer
    Reviewed-by: Vladimir Davydov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jesper Dangaard Brouer
     
  • The call slab_pre_alloc_hook() interacts with kmem cgroups (memcg) and
    is not allowed to be called several times inside the bulk alloc for
    loop, due to the call to memcg_kmem_get_cache().

    This would result in hitting the VM_BUG_ON in __memcg_kmem_get_cache.

    As suggested by Vladimir Davydov, change slab_post_alloc_hook() to be
    able to handle an array of objects.

    A subtle detail: the loop iterator "i" in slab_post_alloc_hook() must
    have the same type (size_t) as the size argument. This helps the
    compiler more easily realize that it can remove the loop when all the
    debug statements inside the loop evaluate to nothing. Note, this is
    only an issue because the kernel is compiled with the GCC option
    -fno-strict-overflow.

    In slab_alloc_node() the compiler inlines and optimizes the invocation
    of slab_post_alloc_hook(s, flags, 1, &object) by removing the loop and
    accessing the object directly.
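
    A sketch of the array-taking hook under the assumptions above
    (simplified; only the per-object debug hooks are shown):

        static inline void slab_post_alloc_hook(struct kmem_cache *s,
                                                gfp_t flags, size_t size,
                                                void **p)
        {
                size_t i;       /* same type as 'size' so the loop can be elided */

                for (i = 0; i < size; i++) {
                        void *object = p[i];

                        kmemleak_alloc_recursive(object, s->object_size, 1,
                                                 s->flags, flags);
                        kasan_slab_alloc(s, object);
                }
                memcg_kmem_put_cache(s);        /* undo memcg_kmem_get_cache() */
        }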

    Signed-off-by: Jesper Dangaard Brouer
    Reported-by: Vladimir Davydov
    Suggested-by: Vladimir Davydov
    Reviewed-by: Vladimir Davydov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jesper Dangaard Brouer
     
  • This change focuses on improving the speed of object freeing in the
    "slowpath" of kmem_cache_free_bulk.

    The calls slab_free (fastpath) and __slab_free (slowpath) have been
    extended with support for bulk free, which amortizes the overhead of
    the (locked) cmpxchg_double.

    To use the new bulking feature, we build what I call a detached
    freelist. The detached freelist takes advantage of three properties:

    1) the free function call owns the object that is about to be freed,
    thus writing into this memory is synchronization-free.

    2) many freelists can co-exist side-by-side in the same slab-page,
    each with a separate head pointer.

    3) it is the visibility of the head pointer that needs synchronization.

    Given these properties, the brilliant part is that the detached
    freelist can be constructed without any need for synchronization. The
    freelist is constructed directly in the page objects, without any
    synchronization needed. The detached freelist is allocated on the
    stack of the function call kmem_cache_free_bulk. Thus, the freelist
    head pointer is not visible to other CPUs.

    All objects in a SLUB freelist must belong to the same slab-page.
    Thus, constructing the detached freelist is about matching objects
    that belong to the same slab-page. The bulk free array is scanned in
    a progressive manner with a limited look-ahead facility.
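
    The descriptor built on the stack looks roughly like this (a sketch,
    with names per the description above):

        struct detached_freelist {
                struct page *page;      /* slab-page all linked objects share */
                void *tail;             /* last object added, for the splice */
                void *freelist;         /* head of the freelist being built */
                int cnt;                /* number of objects on this freelist */
        };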

    Kmem debug support is handled in the call to slab_free().

    Notice that kmem_cache_free_bulk no longer needs to disable IRQs; this
    only slowed down single-object bulk free by approximately 3 cycles.

    Performance data:
    Benchmarked[1] obj size 256 bytes on CPU i7-4790K @ 4.00GHz

    SLUB fastpath single object quick reuse: 47 cycles(tsc) 11.931 ns

    To get stable and comparable numbers, the kernel has been booted with
    "slab_merge" (this also improves performance for larger bulk sizes).

    Performance data, compared against fallback bulking:

    bulk - fallback bulk - improvement with this patch
    1 - 62 cycles(tsc) 15.662 ns - 49 cycles(tsc) 12.407 ns- improved 21.0%
    2 - 55 cycles(tsc) 13.935 ns - 30 cycles(tsc) 7.506 ns - improved 45.5%
    3 - 53 cycles(tsc) 13.341 ns - 23 cycles(tsc) 5.865 ns - improved 56.6%
    4 - 52 cycles(tsc) 13.081 ns - 20 cycles(tsc) 5.048 ns - improved 61.5%
    8 - 50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.659 ns - improved 64.0%
    16 - 49 cycles(tsc) 12.412 ns - 17 cycles(tsc) 4.495 ns - improved 65.3%
    30 - 49 cycles(tsc) 12.484 ns - 18 cycles(tsc) 4.533 ns - improved 63.3%
    32 - 50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.707 ns - improved 64.0%
    34 - 96 cycles(tsc) 24.243 ns - 23 cycles(tsc) 5.976 ns - improved 76.0%
    48 - 83 cycles(tsc) 20.818 ns - 21 cycles(tsc) 5.329 ns - improved 74.7%
    64 - 74 cycles(tsc) 18.700 ns - 20 cycles(tsc) 5.127 ns - improved 73.0%
    128 - 90 cycles(tsc) 22.734 ns - 27 cycles(tsc) 6.833 ns - improved 70.0%
    158 - 99 cycles(tsc) 24.776 ns - 30 cycles(tsc) 7.583 ns - improved 69.7%
    250 - 104 cycles(tsc) 26.089 ns - 37 cycles(tsc) 9.280 ns - improved 64.4%

    Performance data, compared current in-kernel bulking:

    bulk - curr in-kernel - improvement with this patch
    1 - 46 cycles(tsc) - 49 cycles(tsc) - improved (cycles:-3) -6.5%
    2 - 27 cycles(tsc) - 30 cycles(tsc) - improved (cycles:-3) -11.1%
    3 - 21 cycles(tsc) - 23 cycles(tsc) - improved (cycles:-2) -9.5%
    4 - 18 cycles(tsc) - 20 cycles(tsc) - improved (cycles:-2) -11.1%
    8 - 17 cycles(tsc) - 18 cycles(tsc) - improved (cycles:-1) -5.9%
    16 - 18 cycles(tsc) - 17 cycles(tsc) - improved (cycles: 1) 5.6%
    30 - 18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0) 0.0%
    32 - 18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0) 0.0%
    34 - 78 cycles(tsc) - 23 cycles(tsc) - improved (cycles:55) 70.5%
    48 - 60 cycles(tsc) - 21 cycles(tsc) - improved (cycles:39) 65.0%
    64 - 49 cycles(tsc) - 20 cycles(tsc) - improved (cycles:29) 59.2%
    128 - 69 cycles(tsc) - 27 cycles(tsc) - improved (cycles:42) 60.9%
    158 - 79 cycles(tsc) - 30 cycles(tsc) - improved (cycles:49) 62.0%
    250 - 86 cycles(tsc) - 37 cycles(tsc) - improved (cycles:49) 57.0%

    Performance with normal SLUB merging is significantly slower for
    larger bulking. This is believed to (primarily) be an effect of not
    having to share the per-CPU data-structures, as tuning per-CPU size
    can achieve similar performance.

    bulk - slab_nomerge - normal SLUB merge
    1 - 49 cycles(tsc) - 49 cycles(tsc) - merge slower with cycles:0
    2 - 30 cycles(tsc) - 30 cycles(tsc) - merge slower with cycles:0
    3 - 23 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:0
    4 - 20 cycles(tsc) - 20 cycles(tsc) - merge slower with cycles:0
    8 - 18 cycles(tsc) - 18 cycles(tsc) - merge slower with cycles:0
    16 - 17 cycles(tsc) - 17 cycles(tsc) - merge slower with cycles:0
    30 - 18 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:5
    32 - 18 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:4
    34 - 23 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:-1
    48 - 21 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:1
    64 - 20 cycles(tsc) - 48 cycles(tsc) - merge slower with cycles:28
    128 - 27 cycles(tsc) - 57 cycles(tsc) - merge slower with cycles:30
    158 - 30 cycles(tsc) - 59 cycles(tsc) - merge slower with cycles:29
    250 - 37 cycles(tsc) - 56 cycles(tsc) - merge slower with cycles:19

    Joint work with Alexander Duyck.

    [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/slab_bulk_test01.c

    [akpm@linux-foundation.org: BUG_ON -> WARN_ON;return]
    Signed-off-by: Jesper Dangaard Brouer
    Signed-off-by: Alexander Duyck
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jesper Dangaard Brouer
     
  • Make it possible to free a freelist with several objects by adjusting
    the API of slab_free() and __slab_free() to take a head, a tail and an
    objects counter (cnt).

    A NULL tail indicates a single-object free of the head object. This
    allows compiler inline constant propagation in slab_free() and
    slab_free_freelist_hook() to avoid adding any overhead in the
    single-object free case.
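
    In other words, the internal API grows to roughly (sketch):

        static __always_inline void slab_free(struct kmem_cache *s,
                                              struct page *page,
                                              void *head, void *tail,
                                              int cnt, unsigned long addr);

        /* Single-object free: the constant NULL tail and cnt == 1 let the
         * compiler constant-propagate away the multi-object handling. */
        slab_free(s, page, object, NULL, 1, _RET_IP_);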

    This allows a freelist with several objects (all within the same
    slab-page) to be freed using a single locked cmpxchg_double in
    __slab_free() and with an unlocked cmpxchg_double in slab_free().

    Object debugging on the free path is also extended to handle these
    freelists. When CONFIG_SLUB_DEBUG is enabled it will also detect if
    objects don't belong to the same slab-page.

    These changes are needed for the next patch to bulk free the detached
    freelists it introduces and constructs.

    Micro benchmarking showed no performance reduction due to this change,
    when debugging is turned off (compiled with CONFIG_SLUB_DEBUG).

    Signed-off-by: Jesper Dangaard Brouer
    Signed-off-by: Alexander Duyck
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jesper Dangaard Brouer
     

22 Nov, 2015

8 commits

  • Adjust the linker script and map_pages() to map kernel text and data on
    physical 1MB huge/large pages.

    Signed-off-by: Helge Deller

    Helge Deller
     
  • This patch adds huge page support to allow userspace to allocate huge
    pages and to use hugetlbfs filesystem on 32- and 64-bit Linux kernels.
    A later patch will add kernel support to map kernel text and data on
    huge pages.

    The only requirement is that the kernel needs to be compiled for a
    PA8X00 CPU (PA2.0 architecture). Older PA1.X CPUs do not support
    variable page sizes. 64-bit kernels are compiled for PA2.0 by default.

    Technically on parisc multiple physical huge pages may be needed to
    emulate standard 2MB huge pages.

    Signed-off-by: Helge Deller

    Helge Deller
     
  • Use the 22bit instead of the 17bit branch instruction on a 64bit kernel
    to reach the do_syscall_trace_exit function from the gateway page.
    A huge page enabled kernel may need the additional branch distance bits.

    Signed-off-by: Helge Deller

    Helge Deller
     
  • For the 64bit kernel the initial 16 MB of kernel memory might become
    too small if you build a kernel with many modules built-in and with
    kernel text and data areas mapped on huge pages.

    This patch increases the initial mapping to 32MB for 64bit kernels and
    keeps 16MB for 32bit kernels.

    Signed-off-by: Helge Deller

    Helge Deller
     
  • A fault vector on parisc needs to be 2K aligned. Furthermore the
    checksum of the fault vector needs to sum up to 0 which is being
    calculated and written at runtime.

    Up to now we aligned both PA20 and PA11 fault vectors on the same 4K
    page in order to easily write the checksum after having mapped the
    kernel read-only (by mapping this page only as read-write).
    But when we want to map the kernel text and data on huge pages this
    makes things harder.
    So, simplify it by aligning both fault vectors on 2K boundaries and
    write the checksum before we map the page read-only.

    Signed-off-by: Helge Deller

    Helge Deller
     
  • Huge pages on parisc will have the same size as one pmd table, which
    is 2MB on a 64-bit kernel with 4K kernel pages, and 4MB on a 32-bit
    kernel with 4K kernel pages.

    Since parisc does not physically support the 2MB huge page size,
    emulate it with two consecutive 1MB pages instead. Keeping the same
    huge page size as one pmd will allow us to add transparent huge page
    support later on.

    Bit 21 in the pte flags was unused and will now be used to mark a page
    as huge page (_PAGE_HPAGE_BIT).

    Signed-off-by: Helge Deller

    Helge Deller
     
  • Drop the MADV_xxK_PAGES flags, which were never used and were from a proposed
    API which was never integrated into the generic Linux kernel code.

    Cc: stable@vger.kernel.org
    Signed-off-by: Helge Deller

    Helge Deller
     
  • Merge misc fixes from Andrew Morton:
    "A bunch of fixes"

    * emailed patches from Andrew Morton :
    slub: mark the dangling ifdef #else of CONFIG_SLUB_DEBUG
    slub: avoid irqoff/on in bulk allocation
    slub: create new ___slab_alloc function that can be called with irqs disabled
    mm: fix up sparse warning in gfpflags_allow_blocking
    ocfs2: fix umask ignored issue
    PM/OPP: add entry in MAINTAINERS
    kernel/panic.c: turn off locks debug before releasing console lock
    kernel/signal.c: unexport sigsuspend()
    kasan: fix kmemleak false-positive in kasan_module_alloc()
    fat: fix fake_offset handling on error path
    mm/hugetlbfs: fix bugs in fallocate hole punch of areas with holes
    mm/page-writeback.c: initialize m_dirty to avoid compile warning
    various: fix pci_set_dma_mask return value checking
    mm: loosen MADV_NOHUGEPAGE to enable Qemu postcopy on s390
    mm: vmalloc: don't remove inexistent guard hole in remove_vm_area()
    tools/vm/page-types.c: support KPF_IDLE
    ncpfs: don't allow negative timeouts
    configfs: allow dynamic group creation
    MAINTAINERS: add Moritz as reviewer for FPGA Manager Framework
    slab.h: sprinkle __assume_aligned attributes

    Linus Torvalds