09 Jan, 2014

1 commit

  • We need a reserve for allocating buckets for new btree nodes - and now that
    we've got multiple btrees, it really needs to be per-btree.

    This reworks the reserves so that each reserve gets its own freelist
    instead of a watermark, which makes things a bit cleaner, and it adds code
    so that btree_split() can make sure the reserve is available before it
    starts (a sketch of the pattern follows this entry).

    Signed-off-by: Kent Overstreet

    Kent Overstreet
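
    A minimal userspace sketch of the check-before-split pattern described
    above; the names, sizes, and worst case here are hypothetical, not
    bcache's actual API:

        #include <stdbool.h>
        #include <stddef.h>

        #define RESERVE_SIZE     8
        #define SPLIT_WORST_CASE 3  /* two new halves plus a possible new root */

        /* Hypothetical per-btree reserve: a small freelist of bucket numbers. */
        struct btree_reserve {
            size_t nr_free;
            long buckets[RESERVE_SIZE];
        };

        static bool reserve_available(const struct btree_reserve *r, size_t need)
        {
            return r->nr_free >= need;
        }

        static long reserve_pop(struct btree_reserve *r)
        {
            return r->nr_free ? r->buckets[--r->nr_free] : -1;
        }

        static int btree_split(struct btree_reserve *r)
        {
            /* Check the whole worst-case allocation up front, so a split
             * can never fail halfway through. */
            if (!reserve_available(r, SPLIT_WORST_CASE))
                return -1;  /* caller refills the reserve and retries */

            long left = reserve_pop(r);
            long right = reserve_pop(r);
            long root = reserve_pop(r);
            /* ... populate the two halves and, if needed, the new root ... */
            (void)left; (void)right; (void)root;
            return 0;
        }

        int main(void)
        {
            struct btree_reserve r = { .nr_free = RESERVE_SIZE };
            return btree_split(&r) ? 1 : 0;
        }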
     

01 Jan, 2014

1 commit

  • Needed to bring blk-mq up to date, since changes have continued to go in
    since for-3.14/core was established.

    Fixup merge issues related to the immutable biovec changes.

    Signed-off-by: Jens Axboe

    Conflicts:
    block/blk-flush.c
    fs/btrfs/check-integrity.c
    fs/btrfs/extent_io.c
    fs/btrfs/scrub.c
    fs/logfs/dev_bdev.c

    Jens Axboe
     

24 Nov, 2013

1 commit

  • Immutable biovecs are going to require an explicit iterator. To
    implement immutable bvecs, a later patch is going to add a bi_bvec_done
    member to this struct; for now, this patch effectively just renames
    things (see the sketch of the resulting iterator after this entry).

    Signed-off-by: Kent Overstreet
    Cc: Jens Axboe
    Cc: Geert Uytterhoeven
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: "Ed L. Cashin"
    Cc: Nick Piggin
    Cc: Lars Ellenberg
    Cc: Jiri Kosina
    Cc: Matthew Wilcox
    Cc: Geoff Levand
    Cc: Yehuda Sadeh
    Cc: Sage Weil
    Cc: Alex Elder
    Cc: ceph-devel@vger.kernel.org
    Cc: Joshua Morris
    Cc: Philip Kelleher
    Cc: Rusty Russell
    Cc: "Michael S. Tsirkin"
    Cc: Konrad Rzeszutek Wilk
    Cc: Jeremy Fitzhardinge
    Cc: Neil Brown
    Cc: Alasdair Kergon
    Cc: Mike Snitzer
    Cc: dm-devel@redhat.com
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: linux390@de.ibm.com
    Cc: Boaz Harrosh
    Cc: Benny Halevy
    Cc: "James E.J. Bottomley"
    Cc: Greg Kroah-Hartman
    Cc: "Nicholas A. Bellinger"
    Cc: Alexander Viro
    Cc: Chris Mason
    Cc: "Theodore Ts'o"
    Cc: Andreas Dilger
    Cc: Jaegeuk Kim
    Cc: Steven Whitehouse
    Cc: Dave Kleikamp
    Cc: Joern Engel
    Cc: Prasad Joshi
    Cc: Trond Myklebust
    Cc: KONISHI Ryusuke
    Cc: Mark Fasheh
    Cc: Joel Becker
    Cc: Ben Myers
    Cc: xfs@oss.sgi.com
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Ingo Molnar
    Cc: Len Brown
    Cc: Pavel Machek
    Cc: "Rafael J. Wysocki"
    Cc: Herton Ronaldo Krzesinski
    Cc: Ben Hutchings
    Cc: Andrew Morton
    Cc: Guo Chao
    Cc: Tejun Heo
    Cc: Asai Thambi S P
    Cc: Selvan Mani
    Cc: Sam Bradshaw
    Cc: Wei Yongjun
    Cc: "Roger Pau Monné"
    Cc: Jan Beulich
    Cc: Stefano Stabellini
    Cc: Ian Campbell
    Cc: Sebastian Ott
    Cc: Christian Borntraeger
    Cc: Minchan Kim
    Cc: Jiang Liu
    Cc: Nitin Gupta
    Cc: Jerome Marchand
    Cc: Joe Perches
    Cc: Peng Tao
    Cc: Andy Adamson
    Cc: fanchaoting
    Cc: Jie Liu
    Cc: Sunil Mushran
    Cc: "Martin K. Petersen"
    Cc: Namjae Jeon
    Cc: Pankaj Kumar
    Cc: Dan Magenheimer
    Cc: Mel Gorman

    Kent Overstreet
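
    For reference, the iterator this work leads to ended up looking roughly
    like the following in later mainline kernels (field comments paraphrased;
    check the kernel sources for the authoritative definition):

        #include <linux/types.h>  /* sector_t */

        struct bvec_iter {
            sector_t bi_sector;        /* device address, in 512-byte sectors */
            unsigned int bi_size;      /* residual I/O count, in bytes */
            unsigned int bi_idx;       /* current index into the bio_vec array */
            unsigned int bi_bvec_done; /* bytes completed in the current
                                        * bio_vec - the member the later
                                        * patch adds */
        };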
     

11 Nov, 2013

8 commits

  • With all the recent refactoring around struct btree_op, struct search has
    gotten rather large.

    But we can now easily break it up in a different way: we break out
    struct btree_insert_op, which is for inserting data into the cache and is
    now what the copying gc code uses; struct search is now specific to
    request.c (see the sketch after this entry).

    Signed-off-by: Kent Overstreet

    Kent Overstreet
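
    A hypothetical sketch of the shape of the split - the field names here
    are illustrative, not bcache's actual ones:

        struct keylist;  /* forward declarations, for a self-contained sketch */
        struct bio;

        /* Everything needed to insert data into the cache; usable on its
         * own, which is what the copying gc code now does. */
        struct btree_insert_op {
            struct keylist *keys;
            unsigned flags;
        };

        /* Now specific to request.c: the request-path state embeds an
         * insert op instead of carrying all of its fields itself. */
        struct search {
            struct bio *orig_bio;
            struct btree_insert_op iop;
        };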
     
  • When we convert bch_btree_insert() to bch_btree_map_leaf_nodes(), we
    won't be passing struct btree_op to bch_btree_insert() anymore - so we
    need a different way of returning whether there was a collision (really,
    a replace collision); see the sketch after this entry.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
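
    A hypothetical sketch of reporting the collision through the return
    value instead of through a flag in struct btree_op; names are
    illustrative, not bcache's exact ones:

        /* Report the replace collision as a status in the return value,
         * distinct from real errors. */
        enum insert_status {
            INSERT_OK,
            INSERT_COLLISION,  /* the key being replaced didn't match */
        };

        static enum insert_status btree_insert(int replace_key_matches)
        {
            if (!replace_key_matches)
                return INSERT_COLLISION;
            /* ... perform the insert ... */
            return INSERT_OK;
        }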
     
  • This is prep work for converting bch_btree_insert to
    bch_btree_map_leaf_nodes(): we have to convert all of its inputs into
    explicit function arguments. It's a bunch of churn, but it should be
    straightforward (see the before/after sketch following this entry).

    Signed-off-by: Kent Overstreet

    Kent Overstreet
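
    A hypothetical sketch of the churn: state that used to ride along inside
    struct btree_op becomes explicit parameters. Names and parameter lists
    are illustrative:

        struct cache_set;  /* forward declarations, for a self-contained sketch */
        struct keylist;
        struct bkey;

        /* Before: inputs implicit in the op. */
        struct btree_op_old {
            struct keylist *keys;
            struct bkey *replace_key;
            /* ... */
        };
        int bch_btree_insert_before(struct btree_op_old *op, struct cache_set *c);

        /* After: every input is a real argument, so the function no longer
         * needs a btree_op at all - which is what lets it be converted to
         * bch_btree_map_leaf_nodes(). */
        int bch_btree_insert_after(struct cache_set *c, struct keylist *keys,
                                   struct bkey *replace_key);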
     
  • This isn't used for waiting asynchronously anymore - so this is a fairly
    trivial refactoring.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
     
  • Eventual goal is for struct btree_op to contain only what is necessary
    for traversing the btree.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
     
  • We needed a dedicated rescuer workqueue for gc anyway - and gc was
    conceptually a dedicated thread already, just one that wasn't running
    all the time. Switch it to a dedicated thread to make the code a bit
    more straightforward (see the kthread sketch after this entry).

    Signed-off-by: Kent Overstreet

    Kent Overstreet
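
    A sketch of the dedicated-thread shape this moves to, using the standard
    kthread API; the loop body is illustrative, not bcache's actual gc code:

        #include <linux/kthread.h>
        #include <linux/sched.h>

        static int gc_thread_fn(void *arg)
        {
            while (!kthread_should_stop()) {
                /* Sleep until someone wakes us up to run a gc pass. */
                set_current_state(TASK_INTERRUPTIBLE);
                if (kthread_should_stop()) {
                    __set_current_state(TASK_RUNNING);
                    break;
                }
                schedule();

                /* ... run one garbage collection pass ... */
            }
            return 0;
        }

        /* Started once at registration time rather than queued as work:
         *
         *     task = kthread_run(gc_thread_fn, cache_set, "bcache_gc");
         *
         * and stopped with kthread_stop(task) at teardown. */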
     
  • Make things synchronous where they don't need to be asynchronous:
    bch_journal() only has to block when the journal or journal entry is
    full, which is emphatically not a fast path. So make it a normal function
    that just returns when it finishes, to make the code and control flow
    easier to follow (a sketch of the synchronous shape follows this entry).

    Signed-off-by: Kent Overstreet

    Kent Overstreet
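
    A sketch of the synchronous shape, assuming a hypothetical journal with
    a simple free-entry count; bcache's real journal is more involved:

        #include <linux/spinlock.h>
        #include <linux/wait.h>

        struct journal {
            spinlock_t lock;
            wait_queue_head_t wait;
            unsigned free;  /* journal entries of space left */
        };

        static void journal_write_sync(struct journal *j)
        {
            spin_lock(&j->lock);
            while (!j->free) {
                /* The journal is full - the one case where we block.
                 * Emphatically not a fast path. */
                spin_unlock(&j->lock);
                wait_event(j->wait, j->free > 0);
                spin_lock(&j->lock);
            }
            j->free--;
            /* ... add the keys to the current journal entry ... */
            spin_unlock(&j->lock);

            /* ... wait for the entry to hit disk, then just return ... */
        }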
     
  • bch_keybuf_del() takes a spinlock that can't be taken in interrupt
    context - whoops. Fortunately, this code isn't enabled by default (you
    have to toggle a sysfs attribute); the general fix pattern is sketched
    after this entry.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
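
    The general fix pattern for this class of bug (illustrative - not
    necessarily the exact fix this patch applies): a lock that may be taken
    from interrupt context must disable interrupts in every acquirer, or the
    work must be deferred to process context:

        #include <linux/spinlock.h>

        static DEFINE_SPINLOCK(keybuf_lock);

        static void keybuf_del_safe(void)
        {
            unsigned long flags;

            /* spin_lock_irqsave() is safe from any context: it saves and
             * disables local interrupts, so an interrupt handler can't
             * deadlock against a holder of the same lock. */
            spin_lock_irqsave(&keybuf_lock, flags);
            /* ... unlink the key from the buffer ... */
            spin_unlock_irqrestore(&keybuf_lock, flags);
        }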
     

02 Jul, 2013

1 commit

  • Some of bcache's utility code has made it into the rest of the kernel,
    so drop the bcache versions.

    Bcache used to have a workaround for allocating from a bio set under
    generic_make_request(): if you allocated more than once, the bios you had
    already allocated would get stuck on current->bio_list when you
    submitted, and you'd risk deadlock. So bcache would mask out __GFP_WAIT
    when allocating bios under generic_make_request(), letting the allocation
    fail so it could retry from a workqueue. But bio_alloc_bioset() has its
    own workaround now, so we can drop this hack and the associated error
    handling (a sketch of the workaround being removed follows this entry).

    Signed-off-by: Kent Overstreet

    Kent Overstreet
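
    Roughly the shape of the workaround being deleted (illustrative, not
    bcache's exact code; __GFP_WAIT is the gfp flag of that era):

        #include <linux/bio.h>
        #include <linux/sched.h>
        #include <linux/workqueue.h>

        static struct bio *alloc_bio_careful(struct bio_set *bs,
                                             struct work_struct *retry_work)
        {
            gfp_t gfp = GFP_NOIO;
            struct bio *bio;

            /* Under generic_make_request(), sleeping in the allocator can
             * deadlock on bios already stuck on current->bio_list. */
            if (current->bio_list)
                gfp &= ~__GFP_WAIT;

            bio = bio_alloc_bioset(gfp, 1, bs);
            if (!bio)
                schedule_work(retry_work);  /* retry from process context */

            return bio;
        }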
     

27 Jun, 2013

2 commits

  • Now that we're tracking dirty data per stripe, we can add two
    optimizations for raid5/6 (sketched after this entry):

    * If a stripe is already dirty, force writes to that stripe into
      writeback mode - to help build up full stripes of dirty data.

    * When flushing dirty data, preferentially write out full stripes
      first, if there are any.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
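
    A userspace sketch of the two heuristics; the structures and the dirty
    accounting here are hypothetical, not bcache's actual ones:

        #include <stdbool.h>

        struct stripe {
            unsigned dirty_sectors;
            unsigned stripe_sectors;  /* sectors in a full stripe */
        };

        /* 1) A write touching an already-dirty stripe goes to writeback,
         *    to help build up full stripes of dirty data. */
        static bool should_writeback(const struct stripe *s)
        {
            return s->dirty_sectors > 0;
        }

        /* 2) When flushing, prefer full stripes: raid5/6 can write those
         *    without a read-modify-write of the parity. */
        static int pick_stripe_to_flush(const struct stripe *s, int n)
        {
            for (int i = 0; i < n; i++)
                if (s[i].dirty_sectors == s[i].stripe_sectors)
                    return i;  /* full stripe first */
            for (int i = 0; i < n; i++)
                if (s[i].dirty_sectors)
                    return i;  /* otherwise any dirty stripe */
            return -1;  /* nothing dirty */
        }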
     
  • The tracepoints were reworked to be more sensible, and a null pointer
    deref in one of the tracepoints was fixed.

    Converted some of the pr_debug()s to tracepoints - this is partly a
    performance optimization. It used to be that, without DEBUG or
    CONFIG_DYNAMIC_DEBUG, pr_debug() was an empty macro; but at some point
    it was changed to an empty inline function.

    Some of the pr_debug() statements had rather expensive function calls as
    part of their arguments, so that code was getting run unnecessarily even
    on non-debug kernels - in some fast paths, too. (A demonstration of the
    difference follows this entry.)

    Signed-off-by: Kent Overstreet

    Kent Overstreet
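
    A small userspace demonstration of the cost difference: with an empty
    macro the arguments vanish at compile time, but an empty inline function
    still evaluates them:

        #include <stdio.h>

        static int expensive_call(void)
        {
            puts("expensive work ran");
            return 0;
        }

        /* Old behaviour: the arguments disappear entirely. */
        #define pr_debug_macro(fmt, ...) do { } while (0)

        /* Newer behaviour: an empty inline function still evaluates its
         * arguments, so expensive_call() runs even on non-debug builds. */
        static inline void pr_debug_inline(const char *fmt, int arg)
        {
            (void)fmt;
            (void)arg;
        }

        int main(void)
        {
            pr_debug_macro("%d", expensive_call());   /* prints nothing */
            pr_debug_inline("%d", expensive_call());  /* "expensive work ran" */
            return 0;
        }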
     


24 Mar, 2013

1 commit

  • Bcache does writethrough and writeback caching, handles unclean shutdown,
    and has a bunch of other nifty features motivated by real-world usage.

    See the wiki at http://bcache.evilpiepirate.org for more.

    Signed-off-by: Kent Overstreet

    Kent Overstreet