08 Jun, 2016

1 commit

  • Use __jhash_mix() to mix the class_idx into the class_key. This
    function provides better mixing than the previously used home-grown
    mix function.

    Leave hashing to the professionals :-)

    Suggested-by: George Spelvin
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

03 Jun, 2016

7 commits

  • lockless_dereference() is planned to grow a sanity check to ensure
    that the input parameter is a pointer. __ref_is_percpu() passes in an
    unsigned long value which is a combination of a pointer and a flag.
    While it can be cast to a pointer lvalue, the casting looks messy
    and it's a special case anyway. Let's revert to open-coding
    READ_ONCE() and an explicit barrier.

    This doesn't cause any functional changes.

    Signed-off-by: Tejun Heo
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Paul McKenney
    Cc: Peter Zijlstra
    Cc: Pranith Kumar
    Cc: Thomas Gleixner
    Cc: kernel-team@fb.com
    Link: http://lkml.kernel.org/g/20160522185040.GA23664@p183.telecom.by
    Signed-off-by: Ingo Molnar

    Tejun Heo
     
  • The mutex owner can be read and written locklessly.
    Use WRITE_ONCE() when setting and clearing the owner field
    in order to avoid optimizations such as store tearing. This
    avoids situations where the owner field gets written with
    multiple stores and another thread concurrently reads
    and uses a partially written owner value.

    Signed-off-by: Jason Low
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Davidlohr Bueso
    Acked-by: Waiman Long
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Scott J Norton
    Cc: Terry Rudd
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1463782776.2479.9.camel@j-VirtualBox
    Signed-off-by: Ingo Molnar

    Jason Low
     
  • When acquiring the rwsem write lock in the slowpath, we first try
    to set count to RWSEM_WAITING_BIAS. When that is successful,
    we then atomically add the RWSEM_WAITING_BIAS in cases where
    there are other tasks on the wait list. This causes write lock
    operations to often issue multiple atomic operations.

    We can instead make the list_is_singular() check first, and then
    set the count accordingly, so that we issue at most 1 atomic
    operation when acquiring the write lock and reduce unnecessary
    cacheline contention.

    Signed-off-by: Jason Low
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Waiman Long
    Acked-by: Davidlohr Bueso
    Cc: Andrew Morton
    Cc: Arnd Bergmann
    Cc: Christoph Lameter
    Cc: Fenghua Yu
    Cc: Heiko Carstens
    Cc: Ivan Kokshaysky
    Cc: Jason Low
    Cc: Linus Torvalds
    Cc: Martin Schwidefsky
    Cc: Matt Turner
    Cc: Paul E. McKenney
    Cc: Peter Hurley
    Cc: Peter Zijlstra
    Cc: Richard Henderson
    Cc: Terry Rudd
    Cc: Thomas Gleixner
    Cc: Tim Chen
    Cc: Tony Luck
    Link: http://lkml.kernel.org/r/1463445486-16078-2-git-send-email-jason.low2@hpe.com
    Signed-off-by: Ingo Molnar

    Jason Low
     
  • Readers that are awoken will expect a nil ->task indicating
    that a wakeup has occurred. Because of the way readers are
    implemented, there's a small chance that the waiter will never
    block in the slowpath (rwsem_down_read_failed), and therefore
    requires some form of reference counting to avoid the following
    scenario:

    rwsem_down_read_failed()                rwsem_wake()
      get_task_struct();
      spin_lock_irq(&wait_lock);
      list_add_tail(&waiter.list)
      spin_unlock_irq(&wait_lock);
                                              raw_spin_lock_irqsave(&wait_lock)
                                              __rwsem_do_wake()
      while (1) {
        set_task_state(TASK_UNINTERRUPTIBLE);
                                                waiter->task = NULL
        if (!waiter.task) // true
          break;
        schedule() // never reached
      }
      __set_task_state(TASK_RUNNING);
      do_exit();
                                              wake_up_process(tsk); // boom

    ... and therefore race with do_exit() when the caller returns.

    There is also a mismatch between the smp_mb() and its documentation,
    in that the serialization is done between reading the task and the
    nil store. Furthermore, in addition to having the overlapping of
    loads and stores to waiter->task guaranteed to be ordered within
    that CPU, both wake_up_process() originally and now wake_q_add()
    already imply barriers upon successful calls, which serves the
    comment.

    Now, as an alternative to perhaps inverting the checks in the blocker
    side (which has its own penalty in that schedule is unavoidable),
    with lockless wakeups this situation is naturally addressed and we
    can just use the refcount held by wake_q_add(), instead of doing so
    explicitly. Of course, we must guarantee that the nil store is done
    as the _last_ operation in that the task must already be marked for
    deletion to not fall into the race above. Spurious wakeups are also
    handled transparently in that the task's reference is only removed
    when wake_up_q() is actually called _after_ the nil store.

    Signed-off-by: Davidlohr Bueso
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Waiman.Long@hpe.com
    Cc: dave@stgolabs.net
    Cc: jason.low2@hp.com
    Cc: peter@hurleysoftware.com
    Link: http://lkml.kernel.org/r/1463165787-25937-3-git-send-email-dave@stgolabs.net
    Signed-off-by: Ingo Molnar

    Davidlohr Bueso
     
  • As wake_qs gain users, we can teach rwsems about them such that
    waiters can be awoken without the wait_lock. This applies to both
    readers and writers, the former being the more ideal candidate as
    we can batch the wakeups, shortening the critical region that much
    more -- e.g. a writer task blocking a bunch of tasks waiting to
    service page-faults (mmap_sem readers).

    In general applying wake_qs to rwsem (xadd) is not difficult as
    the wait_lock is intended to be released soon _anyways_, with
    the exception of when a writer slowpath will proactively wake up
    any queued readers if it sees that the lock is owned by a reader,
    in which case we simply do the wakeups with the lock held (see the
    comment in __rwsem_down_write_failed_common()).

    Similar to other locking primitives, delaying the waiter being
    awoken does allow, at least in theory, the lock to be stolen in
    the case of writers, however no harm was seen in this (in fact
    lock stealing tends to be a _good_ thing in most workloads), and
    this is a tiny window anyways.

    Some page-fault (pft) and mmap_sem intensive benchmarks show a
    pretty constant reduction in systime (by up to ~8 and ~10%) on a
    2-socket, 12-core AMD box. In addition, on an 8-core Westmere doing
    page allocations (page_test):

    aim9:
                                 4.6-rc6             4.6-rc6
                                                     rwsemv2
    Min     page_test   378167.89 (  0.00%)   382613.33 (  1.18%)
    Min     exec_test      499.00 (  0.00%)      502.67 (  0.74%)
    Min     fork_test     3395.47 (  0.00%)     3537.64 (  4.19%)
    Hmean   page_test   395433.06 (  0.00%)   414693.68 (  4.87%)
    Hmean   exec_test      499.67 (  0.00%)      505.30 (  1.13%)
    Hmean   fork_test     3504.22 (  0.00%)     3594.95 (  2.59%)
    Stddev  page_test    17426.57 (  0.00%)    26649.92 ( -52.93%)
    Stddev  exec_test        0.47 (  0.00%)        1.41 (-199.05%)
    Stddev  fork_test       63.74 (  0.00%)       32.59 (  48.86%)
    Max     page_test   429873.33 (  0.00%)   456960.00 (  6.30%)
    Max     exec_test      500.33 (  0.00%)      507.66 (  1.47%)
    Max     fork_test     3653.33 (  0.00%)     3650.90 ( -0.07%)

                 4.6-rc6    4.6-rc6
                            rwsemv2
    User          1.12       0.04
    System        0.23       0.04
    Elapsed     727.27     721.98

    Signed-off-by: Davidlohr Bueso
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Waiman.Long@hpe.com
    Cc: dave@stgolabs.net
    Cc: jason.low2@hp.com
    Cc: peter@hurleysoftware.com
    Link: http://lkml.kernel.org/r/1463165787-25937-2-git-send-email-dave@stgolabs.net
    Signed-off-by: Ingo Molnar

    Davidlohr Bueso
     
  • Recursive locking for ww_mutexes was originally conceived as an
    exception. However, it is heavily used by the DRM atomic modesetting
    code. Currently, the recursive deadlock is checked after we have queued
    up for a busy-spin and as we never release the lock, we spin until
    kicked, whereupon the deadlock is discovered and reported.

    A simple solution for the now common problem is to move the recursive
    deadlock discovery to the first action when taking the ww_mutex.

    Suggested-by: Maarten Lankhorst
    Signed-off-by: Chris Wilson
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Maarten Lankhorst
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/1464293297-19777-1-git-send-email-chris@chris-wilson.co.uk
    Signed-off-by: Ingo Molnar

    Chris Wilson
     
  • Commit 50755bc1c305 ("seqlock: fix raw_read_seqcount_latch()") broke
    raw_read_seqcount_latch().

    If you look at the comment that was modified, the thing that changes
    is the seq count, not the latch pointer.

     * void latch_modify(struct latch_struct *latch, ...)
     * {
     *         smp_wmb();      seq++;
     *         smp_wmb();      data[0], ...);
     *
     *         smp_wmb();      seq++;
     *         smp_wmb();      data[1], ...);
     * }
     *
     * The query will have a form like:
     *
     * struct entry *latch_query(struct latch_struct *latch, ...)
     * {
     *         struct entry *entry;
     *         unsigned seq, idx;
     *
     *         do {
     *                 seq = lockless_dereference(latch->seq);

    So here we have:

    seq = READ_ONCE(latch->seq);
    smp_read_barrier_depends();

    Which is exactly what we want; the new code:

    seq = ({ p = READ_ONCE(latch);
    smp_read_barrier_depends(); p })->seq;

    is just wrong, because it loses the volatile read on seq, which can
    now be torn or worse, 'optimized'. And the read_depend barrier is
    also placed wrongly: we want it after the load of seq, to match the
    data[] up-to-date wmb()s above.

    Such that when we dereference latch->data[] below, we're guaranteed to
    observe the right data.

     *
     *                 idx = seq & 0x01;
     *                 entry = data_query(latch->data[idx], ...);
     *
     *                 smp_rmb();
     *         } while (seq != latch->seq);
     *
     *         return entry;
     * }

    So yes, not passing a pointer is not pretty, but the code was
    correct, and now it isn't.

    Change to explicit READ_ONCE()+smp_read_barrier_depends() to avoid
    confusion and allow strict lockless_dereference() checking.

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Paul McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Fixes: 50755bc1c305 ("seqlock: fix raw_read_seqcount_latch()")
    Link: http://lkml.kernel.org/r/20160527111117.GL3192@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

02 Jun, 2016

2 commits


01 Jun, 2016

23 commits

  • Pull networking fixes from David Miller:

    1) Fix negative error code usage in ATM layer, from Stefan Hajnoczi.

    2) If CONFIG_SYSCTL is disabled, the default TTL is not initialized
    properly. From Ezequiel Garcia.

    3) Missing spinlock init in mvneta driver, from Gregory CLEMENT.

    4) Missing unlocks in hwmb error paths, also from Gregory CLEMENT.

    5) Fix deadlock on team->lock when propagating features, from Ivan
    Vecera.

    6) Work around buffer offset hw bug in alx chips, from Feng Tang.

    7) Fix double listing of SCTP entries in sctp_diag dumps, from Xin
    Long.

    8) Various statistics bug fixes in mlx4 from Eric Dumazet.

    9) Fix some randconfig build errors wrt fou ipv6 from Arnd Bergmann.

    10) All of l2tp was namespace aware, but the ipv6 support code was not
    doing so. From Shmulik Ladkani.

    11) Handle on-stack hrtimers properly in pktgen, from Guenter Roeck.

    12) Propagate MAC changes properly through VLAN devices, from Mike
    Manning.

    13) Fix memory leak in bnx2x_init_one(), from Vitaly Kuznetsov.

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (62 commits)
    sfc: Track RPS flow IDs per channel instead of per function
    usbnet: smsc95xx: fix link detection for disabled autonegotiation
    virtio_net: fix virtnet_open and virtnet_probe competing for try_fill_recv
    bnx2x: avoid leaking memory on bnx2x_init_one() failures
    fou: fix IPv6 Kconfig options
    openvswitch: update checksum in {push,pop}_mpls
    sctp: sctp_diag should dump sctp socket type
    net: fec: update dirty_tx even if no skb
    vlan: Propagate MAC address to VLANs
    atm: iphase: off by one in rx_pkt()
    atm: firestream: add more reserved strings
    vxlan: Accept user specified MTU value when create new vxlan link
    net: pktgen: Call destroy_hrtimer_on_stack()
    timer: Export destroy_hrtimer_on_stack()
    net: l2tp: Make l2tp_ip6 namespace aware
    Documentation: ip-sysctl.txt: clarify secure_redirects
    sfc: use flow dissector helpers for aRFS
    ieee802154: fix logic error in ieee802154_llsec_parse_dev_addr
    net: nps_enet: Disable interrupts before napi reschedule
    net/lapb: tuse %*ph to dump buffers
    ...

    Linus Torvalds
     
  • Pull sparc fixes from David Miller:
    "sparc64 mmu context allocation and trap return bug fixes"

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
    sparc64: Fix return from trap window fill crashes.
    sparc: Harden signal return frame checks.
    sparc64: Take ctx_alloc_lock properly in hugetlb_setup().

    Linus Torvalds
     
  • Track RPS flow IDs per channel instead of per function; otherwise we
    get confused when two flows on different channels get the same flow
    ID.

    Signed-off-by: Edward Cree
    Signed-off-by: David S. Miller

    Jon Cooper
     
  • For connections where autonegotiation is explicitly disabled we get
    no irq on link status changes, so we need to poll the status register
    for link up/down detection.
    This patch adds a workqueue to poll the link status.

    Signed-off-by: Christoph Fritz
    Signed-off-by: David S. Miller

    Christoph Fritz
     
  • In virtnet_open() and virtnet_probe(), try_fill_recv() may be
    executed at the same time. The VQ in virtqueue_add() is not protected
    properly, and a BUG_ON is triggered when virtio_net.ko is being
    removed.

    Signed-off-by: Yunjian Wang
    Acked-by: Jason Wang
    Acked-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    wangyunjian
     
  • bnx2x_init_bp() allocates memory with bnx2x_alloc_mem_bp() so if we
    fail later in bnx2x_init_one() we need to free this memory
    with bnx2x_free_mem_bp() to avoid leakages. E.g. I'm observing memory
    leaks reported by kmemleak when a failure (unrelated) happens in
    bnx2x_vfpf_acquire().

    Signed-off-by: Vitaly Kuznetsov
    Acked-by: Yuval Mintz
    Signed-off-by: David S. Miller

    Vitaly Kuznetsov
     
  • The Kconfig options I added to work around broken compilation ended
    up screwing things up further, as I used the wrong symbol to control
    compilation of the file, resulting in IPv6 fou support never being
    built into the kernel.

    Changing CONFIG_NET_FOU_IPV6_TUNNELS to CONFIG_IPV6_FOU fixes that
    problem, I had renamed the symbol in one location but not the other,
    and as the file is never being used by other kernel code, this did not
    lead to a build failure that I would have caught.

    After that fix, another issue with the same patch becomes obvious, as we
    'select INET6_TUNNEL', which is related to IPV6_TUNNEL, but not the same,
    and this can still cause the original build failure when IPV6_TUNNEL is
    not built-in but IPV6_FOU is. The fix is equally trivial, we just need
    to select the right symbol.

    I have successfully build 350 randconfig kernels with this patch
    and verified that the driver is now being built.

    Signed-off-by: Arnd Bergmann
    Reported-by: Valentin Rothberg
    Fixes: fabb13db448e ("fou: add Kconfig options for IPv6 support")
    Signed-off-by: David S. Miller

    Arnd Bergmann
     
  • In the case of CHECKSUM_COMPLETE the skb checksum should be updated
    in {push,pop}_mpls() as they modify the type in the ethernet header.

    As suggested by Pravin Shelar.

    Cc: Pravin Shelar
    Fixes: 25cd9ba0abc0 ("openvswitch: Add basic MPLS support to kernel")
    Signed-off-by: Simon Horman
    Acked-by: Pravin B Shelar
    Signed-off-by: David S. Miller

    Simon Horman
     
  • Now we cannot distinguish whether one sk is a udp or tcp style when
    we use ss to dump sctp_info. It's necessary to dump this as well.

    For sctp_diag, ss support is not officially available, thus there
    are no official users of this yet, so we can add this field in the
    middle of sctp_info without breaking user API.

    v1->v2:
    - move 'sctpi_s_type' field to the end of struct sctp_info, so
    that it won't cause incompatibility with applications already
    built.
    - add __reserved3 in sctp_info to make sure sctp_info is 8-byte
    alignment.

    Signed-off-by: Xin Long
    Acked-by: Marcelo Ricardo Leitner
    Acked-by: Neil Horman
    Signed-off-by: David S. Miller

    Xin Long
     
  • If dirty_tx isn't updated, then dma_unmap_single
    can be called twice.

    This fixes a
    [ 58.420980] ------------[ cut here ]------------
    [ 58.425667] WARNING: CPU: 0 PID: 377 at /home/schurig/d/mkarm/linux-4.5/lib/dma-debug.c:1096 check_unmap+0x9d0/0xab8()
    [ 58.436405] fec 2188000.ethernet: DMA-API: device driver tries to free DMA memory it has not allocated [device address=0x0000000000000000] [size=66 bytes]

    encountered by Holger

    Signed-off-by: Troy Kisky
    Tested-by:
    Acked-by: Fugang Duan
    Signed-off-by: David S. Miller

    Troy Kisky
     
  • The MAC address of the physical interface is only copied to the VLAN
    when it is first created, resulting in an inconsistency after MAC
    address changes of only newly created VLANs having an up-to-date MAC.

    The VLANs should continue inheriting the MAC address of the physical
    interface until the VLAN MAC address is explicitly set to any value.
    This allows IPv6 EUI64 addresses for the VLAN to reflect any changes
    to the MAC of the physical interface and thus for DAD to behave as
    expected.

    Signed-off-by: Mike Manning
    Signed-off-by: David S. Miller

    Mike Manning
     
  • The iadev->rx_open[] array holds "iadev->num_vc" pointers (this code
    assumes that pointers are 32 bits). So the > here should be >= or else
    we could end up reading a garbage pointer from one element beyond the
    end of the array.

    Signed-off-by: Dan Carpenter
    Signed-off-by: David S. Miller

    Dan Carpenter
     
  • This bug was there when the driver was first added in back in year 2000.
    It causes a Smatch warning:

    drivers/atm/firestream.c:849 process_incoming()
    error: buffer overflow 'res_strings' 60

    Signed-off-by: David S. Miller

    Dan Carpenter
     
  • When creating a new vxlan link, for example:
    ip link add vtap mtu 1440 type vxlan vni 1 dev eth0

    the argument "mtu" has no effect, because it is not set to conf->mtu.
    The default value is used in the vxlan_dev_configure() function.

    This problem was introduced by commit 0dfbdf4102b9 (vxlan: Factor out device
    configuration).

    Fixes: 0dfbdf4102b9 (vxlan: Factor out device configuration)
    Signed-off-by: Chen Haiquan
    Acked-by: Cong Wang
    Signed-off-by: David S. Miller

    Chen Haiquan
     
  • If CONFIG_DEBUG_OBJECTS_TIMERS=y, hrtimer_init_on_stack() requires
    a matching call to destroy_hrtimer_on_stack() to clean up timer
    debug objects.

    Signed-off-by: Guenter Roeck
    Signed-off-by: David S. Miller

    Guenter Roeck
     
  • hrtimer_init_on_stack() needs a matching call to
    destroy_hrtimer_on_stack(), so both need to be exported.

    Signed-off-by: Guenter Roeck
    Signed-off-by: David S. Miller

    Guenter Roeck
     
  • Replace explicit computation of vma page count by a call to
    vma_pages().
    Also, include

    Signed-off-by: Muhammad Falak R Wani
    Reviewed-by: Eric Engestrom
    Signed-off-by: Sumit Semwal

    Muhammad Falak R Wani
     
  • The members child_list and active_list were added to the fence struct
    without descriptions for the Documentation. Adding these.

    Fixes: b55b54b5db33 ("staging/android: remove struct sync_pt")
    Signed-off-by: Luis de Bethencourt
    Reviewed-by: Javier Martinez Canillas
    Reviewed-by: Gustavo Padovan
    Signed-off-by: Sumit Semwal

    Luis de Bethencourt
     
  • Split out dma-buf related parts into their own section, add missing
    files, and write a bit of overview about how it all fits together.

    Signed-off-by: Rob Clark
    Signed-off-by: Sumit Semwal

    Rob Clark
     
  • Pull s390 fixes from Martin Schwidefsky:
    "Three bugs fixes and an update for the default configuration"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
    s390: fix info leak in do_sigsegv
    s390/config: update default configuration
    s390/bpf: fix recache skb->data/hlen for skb_vlan_push/pop
    s390/bpf: reduce maximum program size to 64 KB

    Linus Torvalds
     
  • Signed-off-by: Rob Clark
    Signed-off-by: Sumit Semwal

    Rob Clark
     
  • Apparently nobody noticed that dma-buf.h wasn't actually pulled into
    the docbook build. As a result the headerdoc comments bitrotted a
    bit. Add the missing params/fields.

    Signed-off-by: Rob Clark
    Signed-off-by: Sumit Semwal

    Rob Clark
     
  • Pull GPIO fixes from Linus Walleij:
    "A bunch of GPIO fixes for the v4.7 series:

    - Drop the lock before reading out the GPIO direction setting in
    drivers supporting the .get_direction() callback: some of them may
    be slowpath.

    - Flush GPIO direction setting before locking a GPIO as an IRQ: some
    electronics or other poking around in the registers behind our back
    may have happened, so flush the direction status before trying to
    lock the line for use by IRQs.

    - Bail out silently when asked to perform operations on NULL GPIO
    descriptors. That is what all the get_*_optional() is about: we
    get optional GPIO handles, if they are not there, we get NULL.

    - Handle compatible ioctl() correctly: we need to convert the ioctl()
    pointer using compat_ptr() here like everyone else.

    - Disable the broken .to_irq() on the LPC32xx platform. The whole
    irqchip infrastructure was replaced in the last merge window, and a
    new implementation will be needed"

    * tag 'gpio-v4.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
    gpio: drop lock before reading GPIO direction
    gpio: bail out silently on NULL descriptors
    gpio: handle compatible ioctl() pointers
    gpio: flush direction status in gpiochip_lock_as_irq()
    gpio: lpc32xx: disable broken to_irq support

    Linus Torvalds
     

31 May, 2016

5 commits


30 May, 2016

2 commits

  • When adding the gpiochip, the GPIO HW drivers' callback get_direction()
    could get called in atomic context. Some of the GPIO HW drivers may
    sleep when accessing the register.

    Move the lock before initializing the descriptors.

    Reported-by: Laxman Dewangan
    Signed-off-by: Linus Walleij

    Linus Walleij
     
  • Commit fdeb8e1547cb9dd39d5d7223b33f3565cf86c28e
    ("gpio: reflect base and ngpio into gpio_device")
    assumed that GPIO descriptors are either valid or error
    pointers, but gpiod_get_[index_]optional() actually returns
    NULL descriptors, and then all subsequent calls should just
    bail out.

    Cc: stable@vger.kernel.org
    Cc: Sergei Shtylyov
    Cc: Florian Fainelli
    Cc: Andrew Lunn
    Fixes: fdeb8e1547cb ("gpio: reflect base and ngpio into gpio_device")
    Reported-by: Uwe Kleine-König
    Signed-off-by: Linus Walleij

    Linus Walleij