24 Nov, 2013

2 commits

  • More prep work for immutable biovecs - with immutable bvecs, drivers
    won't be able to use the biovec directly; they'll need to use helpers
    that take bio->bi_iter.bi_bvec_done into account (a simplified sketch
    of such an iterator follows this section's entries).

    This updates callers for the new usage without changing the
    implementation yet.

    Signed-off-by: Kent Overstreet
    Cc: Jens Axboe
    Cc: Geert Uytterhoeven
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: "Ed L. Cashin"
    Cc: Nick Piggin
    Cc: Lars Ellenberg
    Cc: Jiri Kosina
    Cc: Paul Clements
    Cc: Jim Paris
    Cc: Geoff Levand
    Cc: Yehuda Sadeh
    Cc: Sage Weil
    Cc: Alex Elder
    Cc: ceph-devel@vger.kernel.org
    Cc: Joshua Morris
    Cc: Philip Kelleher
    Cc: Konrad Rzeszutek Wilk
    Cc: Jeremy Fitzhardinge
    Cc: Neil Brown
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: linux390@de.ibm.com
    Cc: Nagalakshmi Nandigama
    Cc: Sreekanth Reddy
    Cc: support@lsi.com
    Cc: "James E.J. Bottomley"
    Cc: Greg Kroah-Hartman
    Cc: Alexander Viro
    Cc: Steven Whitehouse
    Cc: Herton Ronaldo Krzesinski
    Cc: Tejun Heo
    Cc: Andrew Morton
    Cc: Guo Chao
    Cc: Asai Thambi S P
    Cc: Selvan Mani
    Cc: Sam Bradshaw
    Cc: Matthew Wilcox
    Cc: Keith Busch
    Cc: Stephen Hemminger
    Cc: Quoc-Son Anh
    Cc: Sebastian Ott
    Cc: Nitin Gupta
    Cc: Minchan Kim
    Cc: Jerome Marchand
    Cc: Seth Jennings
    Cc: "Martin K. Petersen"
    Cc: Mike Snitzer
    Cc: Vivek Goyal
    Cc: "Darrick J. Wong"
    Cc: Chris Metcalf
    Cc: Jan Kara
    Cc: linux-m68k@lists.linux-m68k.org
    Cc: linuxppc-dev@lists.ozlabs.org
    Cc: drbd-user@lists.linbit.com
    Cc: nbd-general@lists.sourceforge.net
    Cc: cbe-oss-dev@lists.ozlabs.org
    Cc: xen-devel@lists.xensource.com
    Cc: virtualization@lists.linux-foundation.org
    Cc: linux-raid@vger.kernel.org
    Cc: linux-s390@vger.kernel.org
    Cc: DL-MPTFusionLinux@lsi.com
    Cc: linux-scsi@vger.kernel.org
    Cc: devel@driverdev.osuosl.org
    Cc: linux-fsdevel@vger.kernel.org
    Cc: cluster-devel@redhat.com
    Cc: linux-mm@kvack.org
    Acked-by: Geoff Levand

    Kent Overstreet
     
  • Immutable biovecs are going to require an explicit iterator. To
    implement immutable bvecs, a later patch is going to add a bi_bvec_done
    member to this struct; for now, this patch effectively just renames
    things.

    Signed-off-by: Kent Overstreet
    Cc: Jens Axboe
    Cc: Geert Uytterhoeven
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: "Ed L. Cashin"
    Cc: Nick Piggin
    Cc: Lars Ellenberg
    Cc: Jiri Kosina
    Cc: Matthew Wilcox
    Cc: Geoff Levand
    Cc: Yehuda Sadeh
    Cc: Sage Weil
    Cc: Alex Elder
    Cc: ceph-devel@vger.kernel.org
    Cc: Joshua Morris
    Cc: Philip Kelleher
    Cc: Rusty Russell
    Cc: "Michael S. Tsirkin"
    Cc: Konrad Rzeszutek Wilk
    Cc: Jeremy Fitzhardinge
    Cc: Neil Brown
    Cc: Alasdair Kergon
    Cc: Mike Snitzer
    Cc: dm-devel@redhat.com
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: linux390@de.ibm.com
    Cc: Boaz Harrosh
    Cc: Benny Halevy
    Cc: "James E.J. Bottomley"
    Cc: Greg Kroah-Hartman
    Cc: "Nicholas A. Bellinger"
    Cc: Alexander Viro
    Cc: Chris Mason
    Cc: "Theodore Ts'o"
    Cc: Andreas Dilger
    Cc: Jaegeuk Kim
    Cc: Steven Whitehouse
    Cc: Dave Kleikamp
    Cc: Joern Engel
    Cc: Prasad Joshi
    Cc: Trond Myklebust
    Cc: KONISHI Ryusuke
    Cc: Mark Fasheh
    Cc: Joel Becker
    Cc: Ben Myers
    Cc: xfs@oss.sgi.com
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Ingo Molnar
    Cc: Len Brown
    Cc: Pavel Machek
    Cc: "Rafael J. Wysocki"
    Cc: Herton Ronaldo Krzesinski
    Cc: Ben Hutchings
    Cc: Andrew Morton
    Cc: Guo Chao
    Cc: Tejun Heo
    Cc: Asai Thambi S P
    Cc: Selvan Mani
    Cc: Sam Bradshaw
    Cc: Wei Yongjun
    Cc: "Roger Pau Monné"
    Cc: Jan Beulich
    Cc: Stefano Stabellini
    Cc: Ian Campbell
    Cc: Sebastian Ott
    Cc: Christian Borntraeger
    Cc: Minchan Kim
    Cc: Jiang Liu
    Cc: Nitin Gupta
    Cc: Jerome Marchand
    Cc: Joe Perches
    Cc: Peng Tao
    Cc: Andy Adamson
    Cc: fanchaoting
    Cc: Jie Liu
    Cc: Sunil Mushran
    Cc: "Martin K. Petersen"
    Cc: Namjae Jeon
    Cc: Pankaj Kumar
    Cc: Dan Magenheimer
    Cc: Mel Gorman

    Kent Overstreet
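
The explicit-iterator idea behind both entries above can be modelled in a
few lines of plain C. This is an illustrative userspace sketch, not kernel
code: struct seg, struct seg_iter, iter_segment() and iter_advance() are
made-up names standing in for struct bio_vec, struct bvec_iter and the
helpers the commits describe, and the iterator's "done" field plays the
role of bi_bvec_done, recording partial progress through the current
segment without modifying the segment array itself.

    #include <stdio.h>

    struct seg {                     /* stands in for struct bio_vec */
        const char  *data;           /* the kernel uses a page + offset */
        unsigned int len;
    };

    struct seg_iter {                /* stands in for struct bvec_iter */
        unsigned int idx;            /* which segment we are on */
        unsigned int done;           /* bytes consumed of that segment */
        unsigned int remaining;      /* total bytes left in the "bio" */
    };

    /* Current position, with partial progress ("done") folded in.  Once
     * the biovec is immutable, drivers use accessors like this instead of
     * indexing the array and adjusting bv_offset/bv_len in place. */
    static struct seg iter_segment(const struct seg *segs, struct seg_iter it)
    {
        struct seg cur = segs[it.idx];

        cur.data += it.done;
        cur.len  -= it.done;
        if (cur.len > it.remaining)
            cur.len = it.remaining;
        return cur;
    }

    /* Advance the iterator by `bytes`, never touching the segment array. */
    static void iter_advance(const struct seg *segs, struct seg_iter *it,
                             unsigned int bytes)
    {
        while (bytes && it->remaining) {
            unsigned int step = segs[it->idx].len - it->done;

            if (step > bytes)
                step = bytes;
            it->done      += step;
            it->remaining -= step;
            bytes         -= step;
            if (it->done == segs[it->idx].len) {   /* segment finished */
                it->idx++;
                it->done = 0;
            }
        }
    }

    int main(void)
    {
        struct seg segs[] = { { "hello ", 6 }, { "world", 5 } };
        struct seg_iter it = { .idx = 0, .done = 0, .remaining = 11 };

        while (it.remaining) {
            struct seg cur = iter_segment(segs, it);

            printf("%.*s", (int)cur.len, cur.data);
            iter_advance(segs, &it, cur.len);
        }
        putchar('\n');
        return 0;
    }

Because all traversal state lives in the iterator, the segment array can
stay read-only, and two users can walk the same bio with independent
iterators.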
     

11 Nov, 2013

3 commits

  • More testing ftw! Also, now verify mode doesn't break if you read dirty
    data.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
     
  • Couple changes:
    * Consolidate bch_check_keys() and bch_check_key_order(), and move the
    checks that only check_key_order() could do to bch_btree_iter_next().

    * Get rid of CONFIG_BCACHE_EDEBUG - now, all that code is compiled in
    when CONFIG_BCACHE_DEBUG is enabled, and there's now a sysfs file to
    flip on the EDEBUG checks at runtime.

    * Dropped an old, not terribly useful check in rw_unlock(), and
    refactored/improved some of the other debug code.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
     
  • With all the recent refactoring around struct btree_op, struct search
    has gotten rather large.

    But we can now easily break it up in a different way: we break out
    struct btree_insert_op, which is for inserting data into the cache and
    is now what the copying gc code uses; struct search is now specific to
    request.c (a simplified sketch of this split follows this section's
    entries).

    Signed-off-by: Kent Overstreet

    Kent Overstreet
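
A minimal sketch of the split described in the last entry above, in plain
userspace C with hypothetical names: struct insert_op and struct
request_ctx stand in for struct btree_insert_op and struct search. The
point is only that the insert-related fields move into their own struct,
which a second code path (the copying gc here) can use without carrying
the whole request context around.

    #include <stdio.h>

    struct insert_op {            /* analogous to the broken-out insert op */
        unsigned long long key;
        unsigned int       nbytes;
        int                flush;
    };

    struct request_ctx {          /* analogous to struct search: request path only */
        int              request_id;
        int              bypass;
        struct insert_op iop;     /* insert state embedded, not duplicated */
    };

    static void do_insert(const struct insert_op *op)
    {
        printf("insert key=%llu len=%u flush=%d\n",
               op->key, op->nbytes, op->flush);
    }

    int main(void)
    {
        /* The request path carries its insert op inside the larger context... */
        struct request_ctx rq = { .request_id = 1,
                                  .iop = { .key = 42, .nbytes = 4096 } };
        do_insert(&rq.iop);

        /* ...while a copying-gc-style path builds one on its own, with no
         * request context at all. */
        struct insert_op gc_op = { .key = 7, .nbytes = 512, .flush = 1 };
        do_insert(&gc_op);
        return 0;
    }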
     

02 Jul, 2013

2 commits

  • Some of bcache's utility code has made it into the rest of the kernel,
    so drop the bcache versions.

    Bcache used to have a workaround for allocating from a bio set under
    generic_make_request() (if you allocated more than once, the bios you
    had already allocated would get stuck on current->bio_list when you
    submitted, and you'd risk deadlock): bcache would mask out __GFP_WAIT
    when allocating bios under generic_make_request(), so that allocation
    could fail and could be retried from a workqueue (the old pattern is
    sketched after this section's entries). But bio_alloc_bioset() has a
    workaround of its own now, so we can drop this hack and the associated
    error handling.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
     
  • This code has rotted, and it hasn't been used in ages anyway.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
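
The workaround being removed in the first entry above follows a pattern
worth spelling out. Below is a simplified userspace sketch of it, with
made-up names (try_alloc_nowait(), punt_to_worker()); the real code masked
__GFP_WAIT out of the allocation flags and retried the submission from a
workqueue when the non-blocking allocation failed.

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for an allocation with __GFP_WAIT masked out: it is
     * allowed to fail instead of sleeping. */
    static void *try_alloc_nowait(size_t n)
    {
        return malloc(n);
    }

    /* Stand-in for queueing work onto a workqueue, where blocking is
     * safe.  Here we just run the function directly. */
    static void punt_to_worker(void (*fn)(long), long arg)
    {
        fn(arg);
    }

    static void retry_from_worker(long id)
    {
        printf("request %ld: retried allocation + submit from worker\n", id);
    }

    static void submit_request(long id)
    {
        void *bio = try_alloc_nowait(128);

        if (!bio) {
            /* Can't block here: bios we already allocated may be parked
             * on current->bio_list, so sleeping could deadlock.  Retry
             * later from a context that is allowed to sleep. */
            punt_to_worker(retry_from_worker, id);
            return;
        }
        printf("request %ld: submitted directly\n", id);
        free(bio);
    }

    int main(void)
    {
        submit_request(1);
        return 0;
    }

With bio_alloc_bioset() now handling the bio_list problem itself, the
caller-side failure path above is the kind of error handling the commit
drops.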
     

27 Jun, 2013

3 commits

  • Now that we're tracking dirty data per stripe, we can add two
    optimizations for raid5/6:

    * If a stripe is already dirty, force writes to that stripe to
    writeback mode - to help build up full stripes of dirty data

    * When flushing dirty data, preferentially write out full stripes
    first if there are any (both heuristics are sketched after this
    section's entries).

    Signed-off-by: Kent Overstreet

    Kent Overstreet
     
  • Old gcc doesn't like the struct hack, and it is kind of ugly. So
    finish off the work to convert pr_debug() statements to tracepoints,
    and delete pkey()/pbtree().

    Signed-off-by: Kent Overstreet

    Kent Overstreet
     
  • The most significant change is that btree reads are now done
    synchronously, instead of asynchronously with the post-read work done
    from a workqueue.

    This was originally done because we can't block on IO under
    generic_make_request(). But - we already have a mechanism to punt cache
    lookups to workqueue if needed, so if we just use that we don't have to
    deal with the complexity of doing things asynchronously.

    The main benefit is this makes the locking situation saner; we can hold
    our write lock on the btree node until we're finished reading it, and we
    don't need that btree_node_read_done() flag anymore.

    Also, for writes, btree_write() was broken out into btree_node_write()
    and btree_leaf_dirty() - the old code with the boolean argument was dumb
    and confusing.

    The prio_blocked mechanism was improved a bit too: now the only
    counter is in struct btree_write, and we no longer mess with
    transferring a count from struct btree.

    This required changing garbage collection to block prios at the start
    and unblock when it finishes, which is cleaner than what it was doing
    anyway (the old code had mostly the same effect, but did it in a
    convoluted way).

    And the btree iter btree_node_read_done() uses was converted to a real
    mempool.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
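
A minimal sketch of the two stripe heuristics from the first entry in this
section, written as plain C over a toy per-stripe dirty-sector count. The
names and fixed sizes are illustrative only; the real code tracks dirty
data per stripe on the cached device.

    #include <stdbool.h>
    #include <stdio.h>

    #define NR_STRIPES  4
    #define STRIPE_SIZE 8                  /* sectors per stripe (toy value) */

    static unsigned int dirty[NR_STRIPES]; /* dirty sectors per stripe */

    /* Heuristic 1: if the stripe a write lands in is already partly
     * dirty, prefer writeback so full stripes of dirty data build up
     * before anything hits the backing raid5/6 device. */
    static bool should_writeback(unsigned int stripe)
    {
        return dirty[stripe] > 0;
    }

    /* Heuristic 2: when flushing, pick a completely dirty stripe first if
     * one exists (no read-modify-write on the raid), else any dirty one. */
    static int pick_stripe_to_flush(void)
    {
        int fallback = -1;

        for (int i = 0; i < NR_STRIPES; i++) {
            if (dirty[i] == STRIPE_SIZE)
                return i;
            if (dirty[i] && fallback < 0)
                fallback = i;
        }
        return fallback;
    }

    int main(void)
    {
        dirty[1] = 3;
        dirty[2] = STRIPE_SIZE;

        printf("write to stripe 0 -> writeback? %d\n", should_writeback(0));
        printf("write to stripe 1 -> writeback? %d\n", should_writeback(1));
        printf("flush picks stripe %d\n", pick_stripe_to_flush());
        return 0;
    }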
     


24 Mar, 2013

1 commit

  • Does writethrough and writeback caching, handles unclean shutdown, and
    has a bunch of other nifty features motivated by real-world usage (the
    two caching modes are sketched after this entry).

    See the wiki at http://bcache.evilpiepirate.org for more.

    Signed-off-by: Kent Overstreet

    Kent Overstreet
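
As a closing illustration, a very small sketch of the two caching modes
named above, with made-up write_cache()/write_backing() helpers: a
writethrough write goes to both the cache and the backing device before it
completes, while a writeback write only goes to the cache and is marked
dirty so it can be written back later.

    #include <stdbool.h>
    #include <stdio.h>

    static bool dirty_bitmap[16];         /* toy dirty tracking, per sector */

    static void write_cache(long sector)
    {
        printf("cache   <- sector %ld\n", sector);
    }

    static void write_backing(long sector)
    {
        printf("backing <- sector %ld\n", sector);
    }

    static void handle_write(long sector, bool writeback_mode)
    {
        write_cache(sector);
        if (writeback_mode)
            dirty_bitmap[sector] = true;  /* written back to backing later */
        else
            write_backing(sector);        /* writethrough: both, up front */
    }

    int main(void)
    {
        handle_write(3, false);           /* writethrough */
        handle_write(5, true);            /* writeback */
        return 0;
    }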