14 Jan, 2020

1 commit

  • Commit 0bf7800f1799 ("ptr_ring: try vmalloc() when kmalloc() fails")
    started to use kvmalloc_array and kvfree, which are defined in mm.h,
    whereas the previous functions kcalloc and kfree are defined in slab.h.

    Add the missing include of linux/mm.h. This went unnoticed as other
    include files happened to include mm.h.

    Fixes: 0bf7800f1799 ("ptr_ring: try vmalloc() when kmalloc() fails")
    Signed-off-by: Jesper Dangaard Brouer
    Acked-by: Michael S. Tsirkin
    Signed-off-by: Jakub Kicinski

    Jesper Dangaard Brouer
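
    A minimal sketch of the fix described in the entry above (not the
    verbatim diff): ptr_ring.h should include linux/mm.h directly rather
    than rely on another header pulling it in.

        #include <linux/slab.h>  /* kcalloc()/kfree(), sufficient before the change */
        #include <linux/mm.h>    /* kvmalloc_array()/kvfree(), used now */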
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
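
    For reference, the boilerplate paragraph quoted above is replaced in
    each affected file by a single SPDX tag of the form:

        /* SPDX-License-Identifier: GPL-2.0-or-later */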
     

02 Jan, 2019

1 commit

  • __ptr_ring_swap_queue() tries to move pointers from the old
    ring to the new one, but it forgets to check if ->producer
    is beyond the new size at the end of the operation. This leads
    to an out-of-bounds access in __ptr_ring_produce(), as reported
    by syzbot.

    Reported-by: syzbot+8993c0fa96d57c399735@syzkaller.appspotmail.com
    Fixes: 5d49de532002 ("ptr_ring: resize support")
    Cc: "Michael S. Tsirkin"
    Cc: John Fastabend
    Cc: Jason Wang
    Signed-off-by: Cong Wang
    Acked-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    Cong Wang
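
    Roughly, the fixed helper ends the swap with a wrap-back of the producer
    index (a simplified sketch of __ptr_ring_swap_queue(), not a verbatim
    copy of the upstream code):

        static inline void **__ptr_ring_swap_queue(struct ptr_ring *r, void **queue,
                                                   int size, gfp_t gfp,
                                                   void (*destroy)(void *))
        {
                int producer = 0;
                void **old;
                void *ptr;

                /* Move entries to the new ring; destroy any that no longer fit. */
                while ((ptr = __ptr_ring_consume(r)))
                        if (producer < size)
                                queue[producer++] = ptr;
                        else if (destroy)
                                destroy(ptr);

                /* The missing check: never leave ->producer pointing past the
                 * new ring, or the next __ptr_ring_produce() writes out of bounds. */
                if (producer >= size)
                        producer = 0;

                __ptr_ring_set_size(r, size);
                r->producer = producer;
                r->consumer_head = 0;
                r->consumer_tail = 0;
                old = r->queue;
                r->queue = queue;
                return old;
        }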
     

12 Feb, 2018

1 commit

  • Switch to using division to prevent integer overflow when the size is
    too big for the allocation size to be computed properly.

    Reported-by: Eric Biggers
    Fixes: 6e6e41c31122 ("ptr_ring: fail early if queue occupies more than KMALLOC_MAX_SIZE")
    Signed-off-by: Jason Wang
    Acked-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    Jason Wang
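
    The change, sketched under the assumption that the check lives in the
    queue-allocation helper: multiplying size by sizeof(void *) can wrap
    around for huge sizes and let the check pass, whereas dividing the
    limit cannot overflow.

        /* Before (sketch): the multiplication can overflow for huge sizes. */
        if (size * sizeof(void *) > KMALLOC_MAX_SIZE)
                return NULL;

        /* After (sketch): division keeps the comparison overflow-free. */
        if (size > KMALLOC_MAX_SIZE / sizeof(void *))
                return NULL;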
     

10 Feb, 2018

2 commits

  • This patch switches to kvmalloc_array(), which falls back to vmalloc()
    when kmalloc() fails (see the allocation sketch after this list).

    Reported-by: syzbot+e4d4f9ddd4295539735d@syzkaller.appspotmail.com
    Fixes: 2e0ab8ca83c12 ("ptr_ring: array based FIFO for pointers")
    Signed-off-by: Jason Wang
    Acked-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    Jason Wang
     
  • To avoid a slab warning about an oversized allocation, fail early if the
    queue would occupy more than KMALLOC_MAX_SIZE.

    Reported-by: syzbot+e4d4f9ddd4295539735d@syzkaller.appspotmail.com
    Fixes: 2e0ab8ca83c12 ("ptr_ring: array based FIFO for pointers")
    Signed-off-by: Jason Wang
    Acked-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    Jason Wang
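
    Taken together with the size cap above, the allocation helper ends up
    looking roughly like this (a sketch of include/linux/ptr_ring.h, not a
    verbatim copy); the matching frees correspondingly become kvfree():

        static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp)
        {
                /* Fail early instead of tripping a slab warning on huge sizes. */
                if (size > KMALLOC_MAX_SIZE / sizeof(void *))
                        return NULL;
                /* kmalloc the array if possible, fall back to vmalloc otherwise. */
                return kvmalloc_array(size, sizeof(void *), gfp | __GFP_ZERO);
        }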
     

30 Jan, 2018

6 commits

  • In theory the compiler could tear queue loads or stores in two. This does
    not seem to happen in practice, but it is easier to convert the cases
    where it would be a problem to READ_ONCE/WRITE_ONCE than to worry about
    it (see the sketch after this list).

    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
     
  • This reverts commit bcecb4bbf88aa03171c30652bca761cf27755a6b.

    If we try to allocate an extra entry as the above commit did, then when
    the requested size is UINT_MAX the addition overflows and a zero size is
    passed to kmalloc().

    kmalloc() then returns ZERO_SIZE_PTR, with a subsequent crash (see the
    overflow illustration after this list).

    Reported-by: syzbot+87678bcf753b44c39b67@syzkaller.appspotmail.com
    Cc: John Fastabend
    Signed-off-by: Michael S. Tsirkin
    Acked-by: John Fastabend
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
     
  • Similar to bcecb4bbf88a ("net: ptr_ring: otherwise safe empty checks can
    overrun array bounds") a lockless use of __ptr_ring_full might
    cause an out of bounds access.

    We can fix this, but it's easier to just disallow lockless
    __ptr_ring_full for now.

    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
     
  • Lockless __ptr_ring_empty requires that consumer head is read and
    written at once, atomically. Annotate accordingly to make sure compiler
    does it correctly. Switch locked callers to __ptr_ring_peek which does
    not support the lockless operation.

    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
     
  • The only function safe to call without locks
    is __ptr_ring_empty. Move documentation about
    lockless use there to make sure people do not
    try to use __ptr_ring_peek outside locks.

    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
     
  • The comment near __ptr_ring_peek says:

    * If ring is never resized, and if the pointer is merely
    * tested, there's no need to take the lock - see e.g. __ptr_ring_empty.

    but this was in fact never possible, since consumer_head would sometimes
    point outside the ring. Refactor the code so that it always points
    within the ring.

    Fixes: c5ad119fb6c09 ("net: sched: pfifo_fast use skb_array")
    Signed-off-by: Michael S. Tsirkin
    Acked-by: John Fastabend
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
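
    A sketch of the pattern several of the commits above describe
    (simplified from include/linux/ptr_ring.h; treat details as
    approximate): the only operation documented as safe without locks is
    __ptr_ring_empty, and it relies on consumer_head being read and written
    without tearing.

        /* Lockless empty test: a single non-torn read of the consumer head. */
        static inline bool __ptr_ring_empty(struct ptr_ring *r)
        {
                if (likely(r->size))
                        return !r->queue[READ_ONCE(r->consumer_head)];
                return true;
        }

        /* On the consumer side, the new head is published with a matching
         * non-torn store: WRITE_ONCE(r->consumer_head, consumer_head); */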
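
    The arithmetic behind the revert, as a small standalone illustration
    (userspace C, hypothetical values): with an unsigned 32-bit size,
    size + 1 wraps to 0 when size == UINT_MAX, and in the kernel a
    zero-sized allocation returns ZERO_SIZE_PTR rather than failing.

        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
                unsigned int size = UINT_MAX;
                unsigned int slots = size + 1;  /* wraps around to 0 */

                printf("requested %u entries, allocating %u\n", size, slots);
                /* In the kernel, kcalloc(0, sizeof(void *), gfp) yields
                 * ZERO_SIZE_PTR, and the first access to the "ring" crashes. */
                return 0;
        }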
     

03 Jan, 2018

1 commit

  • When running consumer and/or producer operations and empty checks in
    parallel, it's possible for the empty check to run past the end of the
    array. The scenario occurs when an empty check runs while
    __ptr_ring_discard_one() is in progress: specifically, after
    consumer_head is incremented but before the (consumer_head >= ring_size)
    check is made and consumer_head is zeroed.

    To resolve this without having to rework how consumer/producer ops
    work on the array, simply add an extra dummy slot to the end of the
    array. Even if we reworked the code to avoid the extra slot, it looks
    like the normal-case checks would suffer somewhat, so it is best to just
    allocate an extra pointer.

    Reported-by: Jakub Kicinski
    Fixes: c5ad119fb6c09 ("net: sched: pfifo_fast use skb_array")
    Signed-off-by: John Fastabend
    Signed-off-by: David S. Miller

    John Fastabend
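
    A sketch of the race window described above (simplified consumer-side
    logic, hypothetical local names): between the increment of the head and
    the wrap-around reset, a parallel lockless empty check can index one
    slot past the ring unless an extra dummy slot is allocated.

        /* Consumer, discarding one entry (simplified): */
        r->queue[consumer_head++] = NULL;
        /* <-- window: consumer_head may now equal the ring size */
        if (consumer_head >= ring_size)
                consumer_head = 0;
        r->consumer_head = consumer_head;

        /* Meanwhile, an empty check on another CPU reads
         * r->queue[r->consumer_head] and can land one slot past the array;
         * the extra dummy slot keeps that access in bounds. */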
     

11 Dec, 2017

1 commit

  • Users of ptr_ring expect that it's safe to give the
    data structure a pointer and have it be available
    to consumers, but that actually requires an smp_wmb()
    or a stronger barrier.

    In the absence of such barriers, and on architectures that reorder
    writes, the consumer might read an uninitialized value from an skb
    pointer stored in the skb array. This was observed causing crashes.

    To fix, add memory barriers. The barrier we use is a wmb, the
    assumption being that producers do not need to read the value so we do
    not need to order these reads.

    Reported-by: George Cherian
    Suggested-by: Jason Wang
    Signed-off-by: Michael S. Tsirkin
    Acked-by: Jason Wang
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
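
    A sketch of the producer-side barrier the commit adds (simplified from
    __ptr_ring_produce(); comments paraphrased): the pointed-to data must be
    visible before the pointer itself is published in the ring.

        static inline int __ptr_ring_produce(struct ptr_ring *r, void *ptr)
        {
                if (unlikely(!r->size) || r->queue[r->producer])
                        return -ENOSPC;

                /* Make sure the data the pointer refers to is written out
                 * before a consumer can observe the pointer itself. */
                smp_wmb();

                r->queue[r->producer++] = ptr;
                if (unlikely(r->producer >= r->size))
                        r->producer = 0;
                return 0;
        }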
     

17 Aug, 2017

1 commit

  • As found by syzkaller, malicious users can set whatever tx_queue_len
    on a tun device and eventually crash the kernel.

    Let's remove the ALIGN(XXX, SMP_CACHE_BYTES) thing, since a small
    ring buffer is not fast anyway.

    Fixes: 2e0ab8ca83c1 ("ptr_ring: array based FIFO for pointers")
    Signed-off-by: Eric Dumazet
    Reported-by: Dmitry Vyukov
    Cc: Michael S. Tsirkin
    Cc: Jason Wang
    Signed-off-by: David S. Miller

    Eric Dumazet
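
    A hedged sketch of the change: drop the hand-rolled, ALIGN()ed byte
    count and let kcalloc() do the multiplication, which already rejects
    requests whose size would overflow.

        /* Before (sketch): explicit byte count, rounded up to a cache line. */
        r->queue = kzalloc(ALIGN(size * sizeof(void *), SMP_CACHE_BYTES), gfp);

        /* After (sketch): kcalloc() multiplies safely and zeroes the array. */
        r->queue = kcalloc(size, sizeof(void *), gfp);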
     

18 May, 2017

2 commits

  • This patch introduces a batched version of consuming: the consumer can
    dequeue more than one pointer from the ring at a time. We don't care
    about the reordering of reads here, so no compiler barrier is needed
    (see the usage sketch after this list).

    Signed-off-by: Jason Wang
    Signed-off-by: David S. Miller

    Jason Wang
     
  • Applications that consume a batch of entries in one go
    can benefit from ability to return some of them back
    into the ring.

    Add an API for that, assuming there's space. If there's no space we
    naturally can't do this and have to drop entries, but that implies the
    ring is full, so we'd likely drop some anyway.

    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: Jason Wang
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
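
    A hedged usage sketch of the two additions above, assuming an already
    initialized struct ptr_ring named ring (kernel-style fragment;
    can_process(), process() and destroy_cb() are hypothetical helpers):

        void *batch[16];
        int i, n;

        /* Dequeue up to 16 pointers under the consumer lock in one go. */
        n = ptr_ring_consume_batched(&ring, batch, 16);

        for (i = 0; i < n; i++) {
                if (!can_process(batch[i]))
                        break;
                process(batch[i]);
        }

        /* Give the unprocessed tail of the batch back to the ring; entries
         * are only dropped (via destroy_cb) if the ring has no room, which
         * implies it is full anyway. */
        ptr_ring_unconsume(&ring, batch + i, n - i, destroy_cb);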
     

09 May, 2017

1 commit

  • A known weakness in the ptr_ring design is that it does not handle the
    almost-full case well: as entries are consumed they are immediately
    reused by the producer, so consumer and producer end up writing to a
    shared cache line.

    To fix this, add batching to consume calls: as entries are
    consumed do not write NULL into the ring until we get
    a multiple (in current implementation 2x) of cache lines
    away from the producer. At that point, write them all out.

    We do the write out in the reverse order to keep
    producer from sharing cache with consumer for as long
    as possible.

    Writeout also triggers when ring wraps around - there's
    no special reason to do this but it helps keep the code
    a bit simpler.

    What should we do if getting 2 cache lines away from the producer
    would mean keeping the ring more than half empty? Perhaps the batching
    should be scaled down in that case; the current patch simply reduces
    the batching.

    Notes:
    - it is no longer true that a call to consume guarantees
    that the following call to produce will succeed.
    No users seem to assume that.
    - batching can also in theory reduce the signalling rate:
    users that would previously send interrupts to the producer
    to wake it up after consuming each entry would now only
    need to do this once per batch.
    Doing this would be easy by returning a flag to the caller.
    No users seem to do signalling on consume yet, so this has not been
    implemented.

    Signed-off-by: Michael S. Tsirkin
    Reviewed-by: Jesper Dangaard Brouer
    Acked-by: Jason Wang

    Michael S. Tsirkin
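
    A simplified sketch of the batched zeroing described above (modelled on
    __ptr_ring_discard_one(); details approximate): consumed slots are left
    untouched until a whole batch is ready, then NULLed out in reverse order.

        static inline void __ptr_ring_discard_one(struct ptr_ring *r)
        {
                int consumer_head = r->consumer_head;
                int head = consumer_head++;

                /* Only invalidate entries once a batch has been consumed, or
                 * when the ring wraps around (just to keep the code simple). */
                if (unlikely(consumer_head - r->consumer_tail >= r->batch ||
                             consumer_head >= r->size)) {
                        /* Reverse order: touch the cache line the producer may
                         * be reading last, so producer and consumer share a
                         * cache line for as short a time as possible. */
                        while (likely(head >= r->consumer_tail))
                                r->queue[head--] = NULL;
                        r->consumer_tail = consumer_head;
                }
                if (unlikely(consumer_head >= r->size)) {
                        consumer_head = 0;
                        r->consumer_tail = 0;
                }
                r->consumer_head = consumer_head;
        }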
     

20 Feb, 2017

1 commit

  • Resizing currently drops the consumer lock. This can cause entries to be
    reordered, which isn't good in itself. More importantly, the consumer can
    detect a false ring-empty condition and block forever.

    Further, nesting the consumer lock within the producer lock is
    problematic for tun, since it produces entries in a BH, which causes a
    lock-order reversal:

    CPU0                                  CPU1
    ----                                  ----
    consume:
    lock(&(&r->consumer_lock)->rlock);
                                          resize:
                                          local_irq_disable();
                                          lock(&(&r->producer_lock)->rlock);
                                          lock(&(&r->consumer_lock)->rlock);

    produce:
    lock(&(&r->producer_lock)->rlock);

    To fix, nest the producer lock within the consumer lock during resize,
    and keep the consumer lock held for the whole swap operation (a sketch
    of the resulting lock nesting follows this entry).

    Reported-by: Dmitry Vyukov
    Cc: stable@vger.kernel.org
    Cc: "David S. Miller"
    Acked-by: Jason Wang
    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
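
    A sketch of the body of the resize path with the resulting lock nesting
    (simplified from ptr_ring_resize(); helper names as they appear
    elsewhere in this log, details approximate):

        unsigned long flags;
        void **queue = __ptr_ring_init_queue_alloc(size, gfp);
        void **old;

        if (!queue)
                return -ENOMEM;

        spin_lock_irqsave(&r->consumer_lock, flags);  /* outer: consumer */
        spin_lock(&r->producer_lock);                 /* inner: producer */

        old = __ptr_ring_swap_queue(r, queue, size, gfp, destroy);

        spin_unlock(&r->producer_lock);
        spin_unlock_irqrestore(&r->consumer_lock, flags);

        kvfree(old);
        return 0;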
     

16 Jun, 2016

2 commits

  • This adds ring resize support. This seems to be necessary, as users such
    as tun allow userspace control over the queue size.

    If resize is used, this costs us the ability to peek at the queue
    without the consumer lock - this should not be a big deal, as peek and
    consume usually run on the same CPU.

    If the ring is made bigger, the ring contents are preserved. If the ring
    is made smaller, the extra pointers are passed to an optional destructor
    callback (a usage sketch follows this list).

    The cleanup function also gains a destructor callback, so that all
    pointers left in the queue can be cleaned up.

    This changes some APIs but we don't have any users yet,
    so it won't break bisect.

    Signed-off-by: Michael S. Tsirkin
    Acked-by: Jesper Dangaard Brouer
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
     
  • A simple array-based FIFO of pointers, intended for the net stack, which
    commonly has a single consumer and a single producer.

    Signed-off-by: Michael S. Tsirkin
    Acked-by: Jesper Dangaard Brouer
    Signed-off-by: David S. Miller

    Michael S. Tsirkin
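
    A hedged usage sketch of the core API introduced above, plus resize and
    cleanup with a destructor (kernel-style fragment; get_entry(), drop()
    and destroy_cb() are hypothetical, and error handling is trimmed):

        struct ptr_ring ring;
        void *entry;

        if (ptr_ring_init(&ring, 256, GFP_KERNEL))
                return -ENOMEM;

        /* Producer side: returns -ENOSPC when the ring is full. */
        entry = get_entry();
        if (ptr_ring_produce(&ring, entry))
                drop(entry);

        /* Consumer side: returns NULL when the ring is empty. */
        entry = ptr_ring_consume(&ring);

        /* Growing preserves the contents; shrinking hands the surplus
         * pointers to the destructor callback. */
        ptr_ring_resize(&ring, 512, GFP_KERNEL, destroy_cb);

        /* Cleanup drains whatever is left through the same callback. */
        ptr_ring_cleanup(&ring, destroy_cb);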