09 Sep, 2017

1 commit

  • Patch series "rbtree: Cache leftmost node internally", v4.

    A series extending rbtrees to internally cache the leftmost node, so
    that we can have a fast overlap-check optimization for all interval
    tree users[1]. The benefits of this series are:

    (i) Unify users that do internal leftmost node caching.
    (ii) Optimize all interval tree users.
    (iii) Convert at least two new users (epoll and procfs) to the new interface.

    This patch (of 16):

    Red-black tree semantics imply that nodes with smaller keys always go
    to the left and nodes with greater (or equal, for duplicates) keys to
    the right. For the kernel this is most evident in our rb_first()
    semantics. Enabling O(1) lookup of the smallest node in the tree saves
    a good chunk of cycles by not having to walk down the tree each time.
    A few core users, such as the scheduler and rtmutexes, already do this
    explicitly. Interval trees would also benefit from this optimization,
    as it allows faster overlap checking.

    This patch introduces a new 'struct rb_root_cached', which is just the
    root with a cached pointer to the leftmost node. A new structure was
    added rather than extending the regular rb_root so that users can
    choose between memory footprint and actual tree performance. The new
    wrappers on top of the regular rb_root calls are:

    - rb_first_cached(cached_root) -- a fast replacement for rb_first().

    - rb_insert_color_cached(node, cached_root, new)

    - rb_erase_cached(node, cached_root)

    In addition, augmented cached interfaces are also added for basic
    insertion and deletion operations; which becomes important for the
    interval tree changes.

    With the exception of the inserts, which add a bool for updating the
    new leftmost, the interfaces are kept the same. Porting rbtree users
    to the cached version is therefore trivial, and keeping current rbtree
    semantics for users that don't care about the optimization incurs zero
    overhead.
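    The effect of the cached leftmost pointer can be sketched in user
    space with a plain (unbalanced) binary search tree standing in for the
    rbtree; struct root_cached, insert_cached() and first_cached() below
    are illustrative analogues, not the kernel API:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    /* Toy stand-in for the rbtree node and for 'struct rb_root_cached':
     * the root plus a cached pointer to the leftmost (smallest) node. */
    struct node {
            long key;
            struct node *left, *right;
    };

    struct root_cached {
            struct node *root;
            struct node *leftmost;  /* smallest key, updated on insert */
    };

    /* Analogue of the cached insert: the insert path tracks whether the
     * new node landed in the leftmost slot, mirroring the bool that the
     * cached kernel interface adds. */
    static void insert_cached(struct root_cached *tree, long key)
    {
            struct node *n = malloc(sizeof(*n));
            struct node **link = &tree->root;
            bool leftmost = true;

            n->key = key;
            n->left = n->right = NULL;

            while (*link) {
                    if (key < (*link)->key) {
                            link = &(*link)->left;
                    } else {
                            link = &(*link)->right;
                            leftmost = false;  /* took a right turn */
                    }
            }
            *link = n;
            if (leftmost)
                    tree->leftmost = n;  /* O(1) cache update, no re-walk */
    }

    /* Analogue of rb_first_cached(): O(1), no walk down the tree. */
    static struct node *first_cached(struct root_cached *tree)
    {
            return tree->leftmost;
    }

    int main(void)
    {
            struct root_cached tree = { NULL, NULL };

            insert_cached(&tree, 30);
            insert_cached(&tree, 10);
            insert_cached(&tree, 20);
            printf("%ld\n", first_cached(&tree)->key);  /* prints 10 */
            return 0;
    }
    ```

    Without the cache, every lookup of the minimum would have to descend
    the left spine of the tree; here the insert path pays for the update.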

    Link: http://lkml.kernel.org/r/20170719014603.19029-2-dave@stgolabs.net
    Signed-off-by: Davidlohr Bueso
    Reviewed-by: Jan Kara
    Acked-by: Peter Zijlstra (Intel)
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


15 Jul, 2017

1 commit

  • Each text file under Documentation follows a different
    format. Some don't even have titles!

    Change its representation to follow the adopted standard,
    using ReST markups for it to be parseable by Sphinx:

    - Mark the document title;
    - use :Author: for authorship;
    - mark sub-section titles as such;
    - mark literal blocks;
    - adjust indentation where needed.
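
    A minimal sketch of what those markups look like (the title, author
    and section names below are made up for illustration):

    ```rst
    Example Document Title
    ======================

    :Author: Some Author

    Sub-section title
    -----------------

    A literal block is introduced with a double colon::

        some literal, preformatted content
    ```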

    Signed-off-by: Mauro Carvalho Chehab
    Signed-off-by: Jonathan Corbet


09 Oct, 2012

2 commits

  • Provide rb_insert_augmented() and rb_erase_augmented() through a new
    rbtree_augmented.h include file. rb_erase_augmented() is defined there as
    an __always_inline function, in order to allow inlining of augmented
    rbtree callbacks into it. Since this generates a relatively large
    function, each augmented rbtree user should make sure to have a single
    call site.

    Signed-off-by: Michel Lespinasse
    Cc: Rik van Riel
    Cc: Hillf Danton
    Cc: Peter Zijlstra
    Cc: Catalin Marinas
    Cc: Andrea Arcangeli
    Cc: David Woodhouse
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Introduce new augmented rbtree APIs that allow minimal recalculation of
    augmented node information.

    A new callback is added to the rbtree insertion and erase rebalancing
    functions, to be called on each tree rotation. Such rotations preserve
    the subtree root's augmented value, but require recalculation of the
    one child that was previously located at the subtree root.

    In the insertion case, the handcoded search phase must be updated to
    maintain the augmented information on insertion, and then the rbtree
    coloring/rebalancing algorithms keep it up to date.

    In the erase case, things are more complicated, since it is library
    code that manipulates the rbtree in order to remove internal nodes.
    This requires a couple of additional callbacks: one to copy a
    subtree's augmented value when a new root is stitched in, and one to
    recompute augmented values down the ancestry path when a node is
    removed from the tree.

    In order to preserve maximum speed for the non-augmented case,
    we provide two versions of each tree manipulation function.
    rb_insert_augmented() is the augmented equivalent of rb_insert_color(),
    and rb_erase_augmented() is the augmented equivalent of rb_erase().
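
    The rotation invariant described above can be sketched in user space
    with a plain binary tree whose augmented value is the subtree maximum;
    tnode, recompute() and rotate_left() are illustrative names, not the
    kernel interface:

    ```c
    #include <stdio.h>

    /* Toy augmented node: subtree_max caches the largest val in the
     * subtree rooted at this node. */
    struct tnode {
            int val;
            int subtree_max;  /* augmented data */
            struct tnode *left, *right;
    };

    static int max3(int a, int b, int c)
    {
            int m = a > b ? a : b;
            return m > c ? m : c;
    }

    static void recompute(struct tnode *n)
    {
            n->subtree_max = max3(n->val,
                                  n->left  ? n->left->subtree_max  : n->val,
                                  n->right ? n->right->subtree_max : n->val);
    }

    /* Left-rotate the subtree at *root. The rotation preserves the
     * subtree root's augmented value (same set of nodes below it), so
     * only the node demoted from the root position must be recomputed --
     * the job of the rotation callback in the augmented interface. */
    static void rotate_left(struct tnode **root)
    {
            struct tnode *old = *root, *new = old->right;

            old->right = new->left;
            new->left = old;
            *root = new;
            new->subtree_max = old->subtree_max;  /* unchanged by rotation */
            recompute(old);                       /* demoted child only */
    }

    int main(void)
    {
            struct tnode y = { 7, 0, NULL, NULL };
            struct tnode x = { 9, 0, &y, NULL };
            struct tnode r = { 5, 0, NULL, &x };
            struct tnode *root = &r;

            recompute(&y);
            recompute(&x);
            recompute(&r);
            rotate_left(&root);
            /* root is now x(9); only the demoted r(5) was recomputed */
            printf("%d %d\n", root->subtree_max, root->left->subtree_max);
            return 0;
    }
    ```

    Only one recompute runs per rotation, which is why inlining the
    callbacks into the rebalancing code keeps the augmented path cheap.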

    Signed-off-by: Michel Lespinasse
    Acked-by: Rik van Riel
    Cc: Peter Zijlstra
    Cc: Andrea Arcangeli
    Cc: David Woodhouse
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


25 Jul, 2011

1 commit

  • Current documentation refers to the old method of handling augmented
    trees. Update it to correspond with the changes done in commit
    b945d6b2554d ("rbtree: Undo augmented trees performance damage
    and regression").

    Cc: Pekka Enberg
    Cc: David Woodhouse
    Cc: Andrew Morton
    Acked-by: Ingo Molnar
    Acked-by: Peter Zijlstra
    Signed-off-by: Sasha Levin
    Signed-off-by: Linus Torvalds


19 Feb, 2010

1 commit

  • Add support for augmented rbtrees in core rbtree code.

    This will be used in subsequent patches, in x86 PAT code, which needs
    interval trees to efficiently keep track of PAT ranges.

    Signed-off-by: Venkatesh Pallipadi
    LKML-Reference:
    Signed-off-by: Suresh Siddha
    Signed-off-by: H. Peter Anvin

