15 Apr, 2017

1 commit

  • The new sbitmap_get_shallow() operation supports the use case of
    limiting the number of bits that can be allocated for a given operation.
    Rather than setting aside some bits at the end of the bitmap, we can set
    aside bits in each word of the bitmap. This means we can keep the
    allocation hints spread out and support sbitmap_resize() nicely at the
    cost of lower granularity for the allowed depth.

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
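
    A rough sketch of the per-word limiting idea (the helper below is
    illustrative, not the kernel's exact sbitmap_get_shallow()
    implementation): only the first shallow_depth bits of each word are
    allocation candidates, so free bits stay spread across every word and
    sbitmap_resize() only has to adjust the word count.

        /* Sketch only: helper name and layout are illustrative. */
        static int sketch_get_shallow(unsigned long *words,
                                      unsigned int nr_words,
                                      unsigned int shallow_depth,
                                      unsigned int hint_word)
        {
                unsigned int i, w, bit;

                for (i = 0; i < nr_words; i++) {
                        /* Start at the hint word to keep CPUs spread out. */
                        w = (hint_word + i) % nr_words;
                        for (bit = 0; bit < shallow_depth; bit++) {
                                if (!test_and_set_bit(bit, &words[w]))
                                        return w * BITS_PER_LONG + bit;
                        }
                }
                return -1;      /* nothing free within the shallow depth */
        }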
     

02 Mar, 2017

1 commit

  • <linux/kasan.h> is a low level header that is included early in
    affected kernel headers. But it includes <linux/sched.h>, which
    complicates the cleanup of sched.h dependencies.

    But kasan.h has almost no need for sched.h: its only use of
    scheduler functionality is in two inline functions which are
    not used very frequently - so uninline kasan_enable_current()
    and kasan_disable_current().

    Also add a <linux/sched.h> dependency to a .c file that depended on
    kasan.h including it.

    This paves the way to remove the <linux/sched.h> include from kasan.h.

    Acked-by: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Ingo Molnar
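
    The shape of the change: both helpers touch current->kasan_depth, which
    is why the inline versions dragged <linux/sched.h> in. Before and
    after, sketched:

        /* Before, in <linux/kasan.h> (needs task_struct, hence sched.h): */
        static inline void kasan_enable_current(void)
        {
                current->kasan_depth++;
        }

        static inline void kasan_disable_current(void)
        {
                current->kasan_depth--;
        }

        /*
         * After: the header keeps only the declarations, and the bodies
         * move into a kasan .c file that includes <linux/sched.h> itself.
         */
        void kasan_enable_current(void);
        void kasan_disable_current(void);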
     

27 Jan, 2017

1 commit


19 Jan, 2017

2 commits

  • When we resize a struct sbitmap_queue, we update the wakeup batch size,
    but we don't update the wait count in the struct sbq_wait_states. If we
    resized down from a size which could use a bigger batch size, these
    counts could be too large and cause us to miss necessary wakeups. To fix
    this, update the wait counts when we resize (ensuring some careful
    memory ordering so that it's safe w.r.t. concurrent clears).

    This also fixes a theoretical issue where two threads could end up
    bumping the wait count up by the batch size, which could also
    potentially lead to hangs.

    Reported-by: Martin Raiber
    Fixes: e3a2b3f931f5 ("blk-mq: allow changing of queue depth through sysfs")
    Fixes: 2971c35f3588 ("blk-mq: bitmap tag: fix race on blk_mq_bitmap_tags::wake_cnt")
    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
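
    The core of the fix looks roughly like this, assuming the
    sbq_calc_wake_batch() helper and SBQ_WAIT_QUEUES constant from
    lib/sbitmap.c (callers elided):

        static void sbq_update_wake_batch(struct sbitmap_queue *sbq,
                                          unsigned int depth)
        {
                unsigned int wake_batch = sbq_calc_wake_batch(depth);
                int i;

                if (sbq->wake_batch == wake_batch)
                        return;

                WRITE_ONCE(sbq->wake_batch, wake_batch);
                /*
                 * Pairs with the barrier in sbq_wake_up() so that a
                 * concurrent clear sees the new batch size before the
                 * reset wait counts.
                 */
                smp_mb__before_atomic();
                for (i = 0; i < SBQ_WAIT_QUEUES; i++)
                        atomic_set(&sbq->ws[i].wait_cnt, 1);
        }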
     
  • We always do an atomic clear_bit() right before we call sbq_wake_up(),
    so we can use smp_mb__after_atomic(). While we're here, comment the
    memory barriers in here a little more.

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
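
    The pattern, sketched: clear_bit() is an atomic RMW, so the cheaper
    smp_mb__after_atomic() is enough to order the bit clear against the
    wait queue check in the waker:

        clear_bit(nr, word);            /* free the bit */

        /*
         * Pairs with the barrier in set_current_state() on the waiter
         * side: the cleared bit must be visible before we inspect the
         * wait queue, and the waiter must observe the bit only after
         * queueing itself.
         */
        smp_mb__after_atomic();

        if (waitqueue_active(&ws->wait))
                wake_up(&ws->wait);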
     

19 Sep, 2016

1 commit

  • Variable weight is not being initialized to zero before it is
    used to compute the weight sum. Ensure it is initialized to zero.

    Found with static analysis with cppcheck:
    [lib/sbitmap.c:177]: (error) Uninitialized variable: weight

    Signed-off-by: Colin Ian King
    Signed-off-by: Jens Axboe

    Colin Ian King
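
    The fixed function, approximately:

        unsigned int sbitmap_weight(const struct sbitmap *sb)
        {
                unsigned int i, weight = 0;     /* the fix: start at zero */

                for (i = 0; i < sb->map_nr; i++) {
                        const struct sbitmap_word *word = &sb->map[i];

                        weight += bitmap_weight(&word->word, word->depth);
                }
                return weight;
        }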
     

18 Sep, 2016

1 commit

  • If we have a bunch of high-numbered bits allocated and then we resize
    the struct sbitmap_queue, when those bits get cleared, we'll update the
    hint and then have to re-randomize it repeatedly. Avoid that by checking
    that the cleared bit is still a valid hint. No measurable performance
    difference in the common case.

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
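
    Sketch of the check in the clear path, assuming the
    sbitmap_queue_clear() shape of that era:

        void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
                                 unsigned int cpu)
        {
                sbitmap_clear_bit(&sbq->sb, nr);
                sbq_wake_up(sbq);
                /* Only feed the bit back as a hint while it is in range. */
                if (likely(!sbq->round_robin && nr < sbq->sb.depth))
                        *per_cpu_ptr(sbq->alloc_hint, cpu) = nr;
        }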
     

17 Sep, 2016

6 commits

  • After a struct sbitmap_queue is resized smaller, the allocation hints
    may still be set to bits beyond the new depth of the bitmap. This means
    that, for example, if the number of blk-mq tags is reduced through
    sysfs, more requests than the nominal queue depth may be in flight.

    It's tempting to fix this at resize time by doing a one-time
    reinitialization of the hints, but this can race with
    __sbitmap_queue_get() updating the hint. Instead, check the hint before
    we use it. This caused no measurable performance difference in my
    synthetic benchmarks.

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
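
    Checking the hint at allocation time, roughly (the post-allocation
    hint update is elided):

        int __sbitmap_queue_get(struct sbitmap_queue *sbq)
        {
                unsigned int hint, depth;

                hint = this_cpu_read(*sbq->alloc_hint);
                depth = READ_ONCE(sbq->sb.depth);
                if (unlikely(hint >= depth)) {
                        /* Stale hint from before a resize; re-seed it. */
                        hint = depth ? prandom_u32() % depth : 0;
                        this_cpu_write(*sbq->alloc_hint, hint);
                }
                return sbitmap_get(&sbq->sb, hint, sbq->round_robin);
        }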
     
  • In order to get good cache behavior from an sbitmap, we want each CPU to
    stick to its own cacheline(s) as much as possible. This might happen
    naturally as the bitmap gets filled up and the alloc_hint values spread
    out, but we really want this behavior from the start. blk-mq apparently
    intended to do this, but the code to do this was never wired up. Get rid
    of the dead code and make it part of the sbitmap library.

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
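
    Seeding the per-CPU hints at init time so each CPU starts off in its
    own part of the bitmap (sketch; assumes depth > 0 and no round-robin):

        for_each_possible_cpu(i)
                *per_cpu_ptr(sbq->alloc_hint, i) = prandom_u32() % depth;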
     
  • Again, there's no point in passing the round-robin allocation policy in
    every time. Make it part of struct sbitmap_queue and clean up the API.

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
     
  • Allocating your own per-cpu allocation hint separately makes for an
    awkward API. Instead, allocate the per-cpu hint as part of the struct
    sbitmap_queue. There's no point in a struct sbitmap_queue without the
    cache, but you can still use a bare struct sbitmap.

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
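
    Together with the round_robin flag from the previous entry, the struct
    ends up looking roughly like this, with the hint allocated inside
    sbitmap_queue_init_node():

        struct sbitmap_queue {
                struct sbitmap sb;
                unsigned int __percpu *alloc_hint;  /* owned by the queue */
                unsigned int wake_batch;
                atomic_t wake_index;
                struct sbq_wait_state *ws;
                bool round_robin;
        };

        /* In sbitmap_queue_init_node() (error unwinding elided): */
        sbq->alloc_hint = alloc_percpu(unsigned int);
        if (!sbq->alloc_hint)
                return -ENOMEM;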
     
  • The original bt_alloc() we converted from was using kzalloc(), not
    kzalloc_node(), to allocate the wait queues. This was probably an
    oversight, so fix it for sbitmap_queue_init_node().

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
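
    The shape of the fix, a NUMA-aware allocation of the wait queues
    (sketch):

        sbq->ws = kzalloc_node(SBQ_WAIT_QUEUES * sizeof(*sbq->ws),
                               GFP_KERNEL, node);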
     
  • This is a generally useful data structure, so make it available to
    anyone else who might want to use it. It's also a nice cleanup
    separating the allocation logic from the rest of the tag handling logic.

    The code is behind a new Kconfig option, CONFIG_SBITMAP, which is only
    selected by CONFIG_BLOCK for now.

    This should be a complete noop functionality-wise.

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

    Omar Sandoval
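
    The Kconfig wiring described above, approximately:

        # lib/Kconfig
        config SBITMAP
                bool

        # block/Kconfig: the block layer selects the library
        menuconfig BLOCK
                bool "Enable the block layer"
                select SBITMAP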