17 Jan, 2021

1 commit

  • commit 5342fd4255021ef0c4ce7be52eea1c4ebda11c63 upstream.

    If BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET is set in the incompat
    feature set, it means the cache device was created with the obsoleted
    layout that used obso_bucket_size_hi. Bcache no longer supports this
    feature bit; a new BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE incompat
    feature bit was added, with a better layout, to support large bucket
    sizes.

    For legacy compatibility, if a cache device was created with the
    obsoleted BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET feature bit, all
    bcache devices attached to this cache set should be set to read-only.
    The dirty data can then be written back to the backing device before
    re-creating the cache device with the
    BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE feature bit using the
    latest bcache-tools.

    This patch checks the BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET feature
    bit when running a cache set and when attaching a bcache device to the
    cache set. If this bit is set,
    - When running a cache set, print an error kernel message to indicate
    that all subsequently attached bcache devices will be read-only.
    - When attaching a bcache device, print an error kernel message to
    indicate that the attached bcache device will be read-only, and ask
    users to update to the latest bcache-tools.

    This change only affects cache devices whose bucket size >= 32MB; such
    buckets are meant for zoned SSDs, and almost nobody uses such a large
    bucket size at this moment. If you don't explicitly set a large bucket
    size for a zoned SSD, this change is totally transparent to your
    bcache device.
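
    A minimal sketch of the check (the helper and message text here are
    simplified stand-ins, not the verbatim kernel code):

    /* the obsoleted layout is detected via its incompat feature bit */
    static bool has_obso_large_bucket(const struct cache_sb *sb)
    {
            return (sb->feature_incompat &
                    BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET) != 0;
    }

    if (has_obso_large_bucket(&c->cache->sb))
            pr_err("obsoleted large bucket layout detected, "
                   "all attached bcache devices will be read-only\n");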

    Fixes: ffa470327572 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
    Signed-off-by: Coly Li
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Coly Li
     

13 Jan, 2021

3 commits

  • commit b16671e8f493e3df40b1fb0dff4078f391c5099a upstream.

    When the large bucket feature was added,
    BCH_FEATURE_INCOMPAT_LARGE_BUCKET was introduced into the incompat
    feature set. It used bucket_size_hi (added at the tail of struct
    cache_sb_disk) to extend the existing 16bit bucket size to 32bit,
    together with the existing bucket_size in struct cache_sb_disk.

    This was not a good idea; there are two obvious problems,
    - Bucket size is always a power-of-2 value. If log2(bucket size) is
    stored in the existing bucket_size of struct cache_sb_disk, adding
    bucket_size_hi is unnecessary.
    - Macro csum_set() assumes d[SB_JOURNAL_BUCKETS] is the last member in
    struct cache_sb_disk; bucket_size_hi was added after d[], which makes
    csum_set() calculate an unexpected super block checksum.

    To fix the above problems, this patch introduces a new incompat
    feature bit BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE. When this bit
    is set, bucket_size in struct cache_sb_disk stores the order of the
    power-of-2 bucket size value. When the user specifies a bucket size
    larger than 32768 sectors, BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE
    is set in the incompat feature set, and bucket_size stores log2(bucket
    size) rather than the real bucket size value.

    The obsoleted BCH_FEATURE_INCOMPAT_LARGE_BUCKET won't be used anymore;
    it is renamed to BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET and is still
    recognized by the kernel driver only for legacy compatibility
    purposes. The previous bucket_size_hi is renamed to
    obso_bucket_size_hi in struct cache_sb_disk and is not used in
    bcache-tools anymore.

    For cache devices created with the BCH_FEATURE_INCOMPAT_LARGE_BUCKET
    feature, bcache-tools and the kernel driver still recognize the
    feature string and display it as "obso_large_bucket".

    With this change, the unnecessary extra space extension of the bcache
    on-disk super block is avoided, and csum_set() generates the expected
    checksum as well.
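
    A simplified decode sketch of the new layout (an approximation, not
    the verbatim kernel helper):

    static inline unsigned int get_bucket_size(struct cache_sb *sb,
                                               struct cache_sb_disk *s)
    {
            unsigned int bucket_size = le16_to_cpu(s->bucket_size);

            /* with the new feature bit, bucket_size holds log2(size) */
            if (bch_has_feature_log_large_bucket(sb))
                    bucket_size = 1 << bucket_size;

            return bucket_size;
    }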

    Fixes: ffa470327572 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
    Signed-off-by: Coly Li
    Cc: stable@vger.kernel.org # 5.9+
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Coly Li
     
  • commit 1dfc0686c29a9bbd3a446a29f9ccde3dec3bc75a upstream.

    This patch adds checks for features which are incompatible with the
    currently supported feature sets.

    Now if a bcache device created by bcache-tools has features that the
    current kernel doesn't support, read_super() will fail with an error
    message. E.g. if an unsupported incompatible feature is detected,
    bcache registration will fail with the dmesg "bcache: register_bcache()
    error : Unsupported incompatible feature found".
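
    The check boils down to masking the on-disk bits against the supported
    set, in the style of this sketch (the _SUPP mask name matches the
    typo-fix commit below):

    /* an unknown incompat bit means the kernel must not use this device */
    static inline bool bch_has_unknown_incompat_features(struct cache_sb *sb)
    {
            return (sb->feature_incompat & ~BCH_FEATURE_INCOMPAT_SUPP) != 0;
    }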

    Fixes: d721a43ff69c ("bcache: increase super block version for cache device and backing device")
    Fixes: ffa470327572 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
    Signed-off-by: Coly Li
    Cc: stable@vger.kernel.org # 5.9+
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Coly Li
     
  • commit f7b4943dea48a572ad751ce1f18a245d43debe7e upstream.

    This patch fixes the following typos,
    from BCH_FEATURE_COMPAT_SUUP to BCH_FEATURE_COMPAT_SUPP
    from BCH_FEATURE_INCOMPAT_SUUP to BCH_FEATURE_INCOMPAT_SUPP
    from BCH_FEATURE_INCOMPAT_SUUP to BCH_FEATURE_RO_COMPAT_SUPP

    Fixes: d721a43ff69c ("bcache: increase super block version for cache device and backing device")
    Fixes: ffa470327572 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
    Signed-off-by: Coly Li
    Cc: stable@vger.kernel.org # 5.9+
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Coly Li
     

03 Oct, 2020

15 commits

    Since the bcache code was merged into the mainline kernel, each cache
    set has only had one single cache in it. The multiple caches framework
    is here, but the code is far from complete. Considering that multiple
    copies of cached data can also be stored on e.g. md raid1 devices, it
    is indeed unnecessary to support multiple caches in one cache set.

    The previous preparation patches fixed the dependencies for explicitly
    making a cache set have only a single cache. Now we don't have to
    maintain an embedded partial super block in struct cache_set; the
    in-memory super block can be directly referenced from struct cache.

    This patch removes the embedded struct cache_sb from struct cache_set,
    and fixes all locations where the super block was referenced from this
    removed super block, by referencing the in-memory super block of
    struct cache.
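
    A sketch of the structural change (members abbreviated):

    struct cache;                   /* owns the in-memory struct cache_sb */

    struct cache_set {
            struct cache *cache;    /* the single cache of this set */
            /* struct cache_sb sb;     removed: embedded partial super block */
    };

    /* accesses change from c->sb.block_size to c->cache->sb.block_size */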

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Currently the cache's sync status is checked and set on the cache
    set's in-memory partial super block. After removing the embedded
    struct cache_sb from struct cache_set and referencing the cache's
    in-memory super block from struct cache_set, the sync status can be
    set and checked directly on the cache's super block.

    This patch checks and sets the cache sync status directly on the
    cache's in-memory super block. This is a preparation for the later
    removal of the embedded struct cache_sb from struct cache_set.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    After removing the embedded struct cache_sb from struct cache_set, the
    cache set will directly reference the in-memory super block of struct
    cache. It is unnecessary to compare block_size, bucket_size and
    nr_in_set from the identical in-memory super block in
    can_attach_cache().

    This is a preparation patch for the later removal of cache_set->sb
    from struct cache_set.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    In order to update the partial super block of the cache set, the seq
    numbers of the cache and the cache set are compared in
    register_cache_set(). If the cache's seq number is larger than the
    cache set's seq number, the cache set must update its partial super
    block from the cache's super block. This is unnecessary once the
    embedded struct cache_sb is removed from struct cache_set.

    This patch removes the seq number check from register_cache_set(),
    because later there will be no such partial super block in struct
    cache_set; the cache set will directly reference the in-memory super
    block of struct cache. This is a preparation patch for removing the
    embedded struct cache_sb from struct cache_set.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Because struct cache_set and struct cache both have a struct cache_sb,
    macro bucket_bytes() is currently used on both of them. When the
    embedded struct cache_sb is removed from struct cache_set, this macro
    won't be used on struct cache_set anymore.

    This patch unifies all bucket_bytes() usage on struct cache only; this
    is one of the preparations for removing the embedded struct cache_sb
    from struct cache_set.
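
    For reference, the macro itself is just a sector-to-byte conversion on
    the super block (a sketch of its shape after the change):

    /* bucket_size is stored in 512-byte sectors; <<9 converts to bytes */
    #define bucket_bytes(ca)        ((ca)->sb.bucket_size << 9)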

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    It seems alloc_bucket_pages() was the only user of bucket_pages().
    Since alloc_bucket_pages() has been removed from the bcache code, it
    is safe to remove the now-unused macro bucket_pages().

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
  • Now no one uses alloc_bucket_pages() anymore, remove it from bcache.h.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Because struct cache_set and struct cache both have a struct cache_sb,
    macro block_bytes() can be used on both of them. When the embedded
    struct cache_sb is removed from struct cache_set, this macro won't be
    used on struct cache_set anymore.

    This patch unifies all block_bytes() usage on struct cache only; this
    is one of the preparations for removing the embedded struct cache_sb
    from struct cache_set.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    This patch adds a separate set_uuid[16] in struct cache_set to store
    the uuid of the cache set. This is a preparation for removing the
    embedded struct cache_sb from struct cache_set.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Since each cache_set now explicitly has a single cache,
    for_each_cache() is unnecessary. This patch removes the macro, updates
    all locations where it was used, and makes sure the code logic remains
    consistent.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Although the bcache code has a framework for multiple caches in a
    cache set, the multiple-cache support was never completed, and users
    use md raid1 for multiple copies of the cached data.

    This patch makes the following changes in struct cache_set, to
    explicitly make a cache_set have only a single cache (see the sketch
    below),
    - Change the pointer array "*cache[MAX_CACHES_PER_SET]" to a single
    pointer "*cache".
    - Remove the pointer array "*cache_by_alloc[MAX_CACHES_PER_SET]".
    - Remove "caches_loaded".

    Now the code looks exactly like what it does in practice: only one
    cache is used in the cache set.
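
    A sketch of the resulting struct (other members omitted):

    struct cache_set {
            struct cache *cache;    /* was: *cache[MAX_CACHES_PER_SET] */
            /* removed: struct cache *cache_by_alloc[MAX_CACHES_PER_SET]; */
            /* removed: unsigned int caches_loaded; */
    };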

    Signed-off-by: Coly Li
    Signed-off-by: Jens Axboe

    Coly Li
     
    The parameter 'int n' of bch_bucket_alloc_set() is not clearly
    defined. From the code comments, n is the number of buckets to
    allocate, but from the code itself, 'n' is the maximum number of
    caches to iterate. Indeed, in all the locations where
    bch_bucket_alloc_set() is called, 'n' is always 1.

    This patch removes the confusing and unnecessary 'int n' from the
    parameter list of bch_bucket_alloc_set(), and explicitly allocates
    only 1 bucket for its callers.
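
    The interface change, roughly (parameter names approximated):

    /* before: 'n' bounded the cache iteration, not the bucket count */
    int bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
                             struct bkey *k, int n, bool wait);

    /* after: always allocates exactly one bucket */
    int bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
                             struct bkey *k, bool wait);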

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Use the DEFINE_SHOW_ATTRIBUTE macro to simplify the code.

    Since inode->i_private equals the third parameter of
    debugfs_create_file(), which is NULL here, this is equivalent to the
    original code logic.
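
    For context, DEFINE_SHOW_ATTRIBUTE(name) generates name_open() and a
    name_fops wired to single_open(file, name_show, inode->i_private), so
    a conversion looks like this sketch (closure_debug is an illustrative
    name, not necessarily the one in the patch):

    static int closure_debug_show(struct seq_file *s, void *unused);

    DEFINE_SHOW_ATTRIBUTE(closure_debug);   /* emits closure_debug_fops */

    /* the NULL here becomes inode->i_private, i.e. the show argument */
    debugfs_create_file("closures", 0400, parent, NULL, &closure_debug_fops);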

    Signed-off-by: Qinglang Miao
    Signed-off-by: Coly Li
    Signed-off-by: Jens Axboe

    Qinglang Miao
     
    In the mca_reserve(c) macro we check whether root is NULL. But that's
    not enough: when we read the root node in run_cache_set(), if we get
    an error in bch_btree_node_read_done(), ERR_PTR(-EIO) is assigned to
    c->root.

    We then continue on to unregister, but before
    unregister_shrinker(&c->shrink) is called there is a window in which
    bch_mca_count() may be invoked, and we get a crash with a call trace
    like this:

    [ 2149.876008] Unable to handle kernel NULL pointer dereference at virtual address 00000000000000b5
    ... ...
    [ 2150.598931] Call trace:
    [ 2150.606439] bch_mca_count+0x58/0x98 [escache]
    [ 2150.615866] do_shrink_slab+0x54/0x310
    [ 2150.624429] shrink_slab+0x248/0x2d0
    [ 2150.632633] drop_slab_node+0x54/0x88
    [ 2150.640746] drop_slab+0x50/0x88
    [ 2150.648228] drop_caches_sysctl_handler+0xf0/0x118
    [ 2150.657219] proc_sys_call_handler.isra.18+0xb8/0x110
    [ 2150.666342] proc_sys_write+0x40/0x50
    [ 2150.673889] __vfs_write+0x48/0x90
    [ 2150.681095] vfs_write+0xac/0x1b8
    [ 2150.688145] ksys_write+0x6c/0xd0
    [ 2150.695127] __arm64_sys_write+0x24/0x30
    [ 2150.702749] el0_svc_handler+0xa0/0x128
    [ 2150.710296] el0_svc+0x8/0xc
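
    The fix is to treat an ERR_PTR-valued root the same as NULL, e.g. in
    the macro itself (a sketch; an approximation of the actual change):

    #define mca_reserve(c)  (((!IS_ERR_OR_NULL((c)->root) && (c)->root->level) \
                              ? (c)->root->level : 1) * 8 + 16)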

    Signed-off-by: Dongsheng Yang
    Signed-off-by: Coly Li
    Signed-off-by: Jens Axboe

    Dongsheng Yang
     
    Previously the experimental async registration used a separate sysfs
    file, register_async. The async registration code has now worked well
    for a while, so we can do further testing with it.

    This patch changes the async bcache registration to share the same
    sysfs file /sys/fs/bcache/register (and register_quiet). Async
    registration becomes the default behavior if BCACHE_ASYNC_REGISTRATION
    is set in the kernel configuration. By default,
    BCACHE_ASYNC_REGISTRATION is not enabled.

    Signed-off-by: Coly Li
    Signed-off-by: Jens Axboe

    Coly Li
     

25 Sep, 2020

2 commits

    Drivers shouldn't really mess with the readahead size, as that is a VM
    concept. Instead, set it based on the optimal I/O size by lifting the
    algorithm from the md driver when registering the disk. Also set
    bdi->io_pages there by applying the same scheme based on max_sectors.
    To ensure the limits work well for stacking drivers, a new helper is
    added to update the readahead limits from the block limits, which is
    also called from disk_stack_limits.
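
    A sketch of such a helper (both the name and the body are
    approximations; the commit text does not spell them out):

    void blk_queue_update_readahead(struct request_queue *q)
    {
            /* readahead window: twice the optimal I/O size,
             * never below the VM default */
            q->backing_dev_info->ra_pages =
                    max(queue_io_opt(q) * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);

            /* allow single readahead I/Os up to max_sectors */
            q->backing_dev_info->io_pages =
                    queue_max_sectors(q) >> (PAGE_SHIFT - 9);
    }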

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Johannes Thumshirn
    Reviewed-by: Jan Kara
    Reviewed-by: Mike Snitzer
    Reviewed-by: Martin K. Petersen
    Acked-by: Coly Li
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     
  • Inherit the optimal I/O size setting just like the readahead window,
    as any reason to do larger I/O does not apply to just readahead.

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Jan Kara
    Reviewed-by: Martin K. Petersen
    Acked-by: Coly Li
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     

24 Aug, 2020

1 commit

    Replace the existing /* fall through */ comments and their variants
    with the new pseudo-keyword macro fallthrough [1]. Also, remove
    unnecessary fall-through markings where they no longer apply.

    [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
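
    In practice a conversion looks like this generic sketch (not a hunk
    from the patch itself):

    switch (state) {
    case STATE_PREP:
            prepare();
            fallthrough;    /* previously a "fall through" comment */
    case STATE_RUN:
            run();
            break;
    }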

    Signed-off-by: Gustavo A. R. Silva

    Gustavo A. R. Silva
     

06 Aug, 2020

1 commit

  • Pull block driver updates from Jens Axboe:

    - NVMe:
    - ZNS support (Aravind, Keith, Matias, Niklas)
    - Misc cleanups, optimizations, fixes (Baolin, Chaitanya, David,
    Dongli, Max, Sagi)

    - null_blk zone capacity support (Aravind)

    - MD:
    - raid5/6 fixes (ChangSyun)
    - Warning fixes (Damien)
    - raid5 stripe fixes (Guoqing, Song, Yufen)
    - sysfs deadlock fix (Junxiao)
    - raid10 deadlock fix (Vitaly)

    - struct_size conversions (Gustavo)

    - Set of bcache updates/fixes (Coly)

    * tag 'for-5.9/drivers-20200803' of git://git.kernel.dk/linux-block: (117 commits)
    md/raid5: Allow degraded raid6 to do rmw
    md/raid5: Fix Force reconstruct-write io stuck in degraded raid5
    raid5: don't duplicate code for different paths in handle_stripe
    raid5-cache: hold spinlock instead of mutex in r5c_journal_mode_show
    md: print errno in super_written
    md/raid5: remove the redundant setting of STRIPE_HANDLE
    md: register new md sysfs file 'uuid' read-only
    md: fix max sectors calculation for super 1.0
    nvme-loop: remove extra variable in create ctrl
    nvme-loop: set ctrl state connecting after init
    nvme-multipath: do not fall back to __nvme_find_path() for non-optimized paths
    nvme-multipath: fix logic for non-optimized paths
    nvme-rdma: fix controller reset hang during traffic
    nvme-tcp: fix controller reset hang during traffic
    nvmet: introduce the passthru Kconfig option
    nvmet: introduce the passthru configfs interface
    nvmet: Add passthru enable/disable helpers
    nvmet: add passthru code to process commands
    nvme: export nvme_find_get_ns() and nvme_put_ns()
    nvme: introduce nvme_ctrl_get_by_path()
    ...

    Linus Torvalds
     

28 Jul, 2020

1 commit

    This patch is a fix to the patch "bcache: fix bio_{start,end}_io_acct
    with proper device". The previous patch used a hack to temporarily set
    bi_disk to the bcache device, which was also a mistake.

    As Christoph suggests, this patch uses disk_{start,end}_io_acct() to
    account I/O for the bcache device in the correct way.
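
    A usage sketch of the disk-based helpers (d->disk stands in for the
    bcache device's gendisk):

    unsigned long start_time;

    start_time = disk_start_io_acct(d->disk, bio_sectors(bio), bio_op(bio));
    /* ... the request is submitted and completes ... */
    disk_end_io_acct(d->disk, bio_op(bio), start_time);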

    Fixes: 85750aeb748f ("bcache: use bio_{start,end}_io_acct")
    Signed-off-by: Coly Li
    Cc: Christoph Hellwig
    Cc: stable@vger.kernel.org
    Signed-off-by: Jens Axboe

    Coly Li
     

25 Jul, 2020

15 commits

    Commit 85750aeb748f ("bcache: use bio_{start,end}_io_acct") moved the
    I/O accounting code to the location after bio_set_dev(bio, dc->bdev)
    in cached_dev_make_request(). The accounting is therefore performed
    incorrectly on the backing device; the I/O should be counted to the
    bcache device, like /dev/bcache0.

    With the mistaken I/O accounting, iostat does not display I/O counts
    for the bcache device and all the numbers go to the backing device. In
    writeback mode, the hard drive may show 340K+ IOPS, which is
    impossible and wrong for a spinning disk.

    This patch introduces bch_bio_start_io_acct() and
    bch_bio_end_io_acct(), which switch bio->bi_disk to the bcache device
    before calling bio_start_io_acct() or bio_end_io_acct(). Now the I/Os
    are counted to the bcache device, and the bcache device, cache device
    and backing device all have their correct I/O count information back.
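
    A sketch of the wrapper idea (an approximation of the actual helper):

    static unsigned long bch_bio_start_io_acct(struct gendisk *acct_disk,
                                               struct bio *bio)
    {
            struct gendisk *orig_disk = bio->bi_disk;
            unsigned long start_time;

            /* account against the bcache device, then restore the bio */
            bio->bi_disk = acct_disk;
            start_time = bio_start_io_acct(bio);
            bio->bi_disk = orig_disk;

            return start_time;
    }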

    Fixes: 85750aeb748f ("bcache: use bio_{start,end}_io_acct")
    Signed-off-by: Coly Li
    Cc: Christoph Hellwig
    Cc: stable@vger.kernel.org
    Signed-off-by: Jens Axboe

    Coly Li
     
    Bcache uses struct bbio to do I/Os for meta data pages like uuids,
    disk_buckets, prio_buckets, and btree nodes.

    For example, when writing a btree node onto the cache device, the
    process is,
    - Allocate a struct bbio from mempool c->bio_meta.
    - Inside struct bbio a struct bio is embedded; initialize
    bi_inline_vecs for this embedded bio.
    - Call bch_bio_map() to map each meta data page to each bv from the
    inlined bi_io_vec table.
    - Call bch_submit_bbio() to submit the bio into the underlying block
    layer.
    - When the I/O completes, only release the struct bbio; don't touch
    the reference counters of the meta data pages.

    The struct bbio is defined as,
    738 struct bbio {
    739 unsigned int submit_time_us;
    [snipped]
    748 struct bio bio;
    749 };

    Because struct bio is embedded at the end of struct bbio, therefore the
    actual size of struct bbio is sizeof(struct bio) + size of the embedded
    bio->bi_inline_vecs.

    Now all the meta data buckets are limited by meta_bucket_pages(); if
    the bucket size is larger than meta_bucket_pages()*PAGE_SECTORS, the
    rest of the space in the bucket is unused. Therefore the most used
    space in a meta bucket is meta_bucket_pages() pages, but the mempool
    c->bio_meta is still initialized in bch_cache_set_alloc() as,
    mempool_init_kmalloc_pool(&c->bio_meta, 2,
            sizeof(struct bbio) +
            sizeof(struct bio_vec) * bucket_pages(c))
    This is too large: the Linux buddy allocator cannot allocate so many
    continuous pages, and the extra allocated pages would be wasted.

    This patch replaces bucket_pages() with meta_bucket_pages() in two
    places,
    - In bch_cache_set_alloc(), when initializing mempool c->bio_meta, use
    sizeof(struct bbio) + sizeof(struct bio_vec) * meta_bucket_pages(c) as
    the allocation object size.
    - In bch_bbio_alloc(), when calling bio_init() to set the inline bvec
    table bi_inline_vecs, use meta_bucket_pages() as the number of inline
    bio vecs.

    Now the maximum size of the embedded bio inside struct bbio exactly
    matches the limit of meta_bucket_pages(), and no extra pages are
    wasted.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Mempool c->fill_iter is used to allocate memory for struct btree_iter
    in bch_btree_node_read_done(), to iterate all keys of a read-in btree
    node.

    The allocation size is defined in bch_cache_set_alloc() by,
    mempool_init_kmalloc_pool(&c->fill_iter, 1, iter_size)
    where iter_size is defined by the calculation,
    (sb->bucket_size / sb->block_size + 1) * sizeof(struct btree_iter_set)

    For a 16bit-width bucket_size the calculation is OK, but now the
    bucket size is extended to 32bit and can be as large as 2GB. By the
    above calculation, iter_size can be 2048 pages (order 11 is still
    accepted by the buddy allocator).

    But the actual size that holds the bkeys in a meta data bucket is
    already limited to meta_bucket_pages(), which is 16MB at most. By the
    above calculation, replacing sb->bucket_size with meta_bucket_pages()
    * PAGE_SECTORS gives a result of 16 pages. This size is large enough
    for the mempool allocation of struct btree_iter.
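
    To make the arithmetic concrete (assuming a 4KB block size, i.e. 8
    sectors, and a 16-byte struct btree_iter_set):

    2GB bucket:   (4194304 / 8 + 1) * 16 bytes ~= 8MB  = 2048 pages
    16MB limit:   (32768   / 8 + 1) * 16 bytes ~= 64KB = 16 pages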

    Therefore, in the worst case, every time mempool c->fill_iter
    allocates, up to 4080 pages are wasted and won't be used. This patch
    uses meta_bucket_pages() * PAGE_SECTORS to calculate the iter size in
    bch_cache_set_alloc(), to avoid the extra memory allocation from
    mempool c->fill_iter.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    This patch adds the following three sysfs files to display the feature
    set information of the cache set:
    /sys/fs/bcache/<cache set UUID>/internal/feature_compat
    /sys/fs/bcache/<cache set UUID>/internal/feature_ro_compat
    /sys/fs/bcache/<cache set UUID>/internal/feature_incompat

    Now bcache has only one incompat feature, 'large_bucket', so the sysfs
    file content is:
    [large_bucket]
    The string "large_bucket" means the running bcache supports the
    incompat feature 'large_bucket', and the wrapping [] means the
    'large_bucket' feature is currently enabled on this cache set.

    This patch is ready to display compat and ro_compat features too; in
    the future, once bcache implements such feature sets, the
    corresponding feature strings will be displayed in their sysfs files
    as well.

    Signed-off-by: Coly Li
    Signed-off-by: Jens Axboe

    Coly Li
     
    The large bucket feature extends bucket_size from 16bit to 32bit.

    When creating a cache device on a zoned device (e.g. a zoned NVMe
    SSD), making a single bucket cover one or more zones of the zoned
    device is the simplest way for bcache to support zoned devices as
    cache.

    But the current maximum bucket size is 16MB, while a typical zone size
    of a zoned device is 256MB; this is the major motivation for extending
    the bucket size to a larger bit width.

    This patch is the basic and first change to support large bucket
    sizes. The major changes it makes are,
    - Add BCH_FEATURE_INCOMPAT_LARGE_BUCKET for the large bucket feature;
    INCOMPAT means it introduces an incompatible on-disk format change.
    - Add BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LARGE_BUCKET) routines.
    - Add __le16 bucket_size_hi into struct cache_sb_disk at offset 0x8d0
    for the on-disk super block format.
    - For the in-memory super block struct cache_sb, extend member
    bucket_size from __u16 to __u32.
    - Add get_bucket_size() to combine bucket_size and bucket_size_hi from
    struct cache_sb_disk into an unsigned int value (see the sketch
    below).

    Since we already have the large-bucket helpers meta_bucket_pages(),
    meta_bucket_bytes() and alloc_meta_bucket_pages(), which make sure
    that when bucket size > 8MB the memory allocation for bcache meta data
    buckets won't fail no matter how large the bucket size grows, these
    meta data buckets are handled properly when the bucket size width
    increases from 16bit to 32bit; we don't need to worry about them.
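
    A sketch of the combining helper (an approximation of
    get_bucket_size(), not the verbatim kernel code):

    static inline unsigned int get_bucket_size(struct cache_sb *sb,
                                               struct cache_sb_disk *s)
    {
            unsigned int bucket_size = le16_to_cpu(s->bucket_size);

            if (bch_has_feature_large_bucket(sb))
                    /* bucket_size_hi supplies the upper 16 bits */
                    bucket_size |=
                            (unsigned int)le16_to_cpu(s->bucket_size_hi) << 16;

            return bucket_size;
    }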

    Signed-off-by: Coly Li
    Signed-off-by: Jens Axboe

    Coly Li
     
    Currently a bcache internal btree node occupies a whole bucket. When
    loading a btree node from the cache device into memory,
    mca_data_alloc() calls bch_btree_keys_alloc() to allocate memory for
    the whole bucket size; ilog2(b->c->btree_pages) is passed to
    bch_btree_keys_alloc() as the parameter 'page_order'.

    c->btree_pages is set to bucket_pages() in bch_cache_set_alloc(). For
    bucket size > 8MB, ilog2(b->c->btree_pages) is 12 for a 4KB page size.
    By default the maximum page order __get_free_pages() accepts is
    MAX_ORDER (11), so in this condition bch_btree_keys_alloc() will
    always fail.

    Because other over-page-order allocation failures fail the cache
    device registration first, this btree node allocation failure wasn't
    observed during runtime. Once the other blocking page allocation
    failures for bucket size > 8MB are fixed, this btree node allocation
    issue may trigger potential risks, e.g. an infinite dead-loop retrying
    btree node allocation after failure.

    This patch fixes the potential problem by setting c->btree_pages to
    meta_bucket_pages() in bch_cache_set_alloc(). When bucket size > 8MB,
    meta_bucket_pages() will always return a number that won't exceed the
    maximum page order of the buddy allocator.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    In bch_btree_cache_alloc(), when CONFIG_BCACHE_DEBUG is configured,
    the memory allocation for c->verify_ondisk may fail if the bucket size
    > 8MB, which requires __get_free_pages() to allocate continuous pages
    with order > 11 (the default MAX_ORDER of the Linux buddy allocator).
    Such an oversized allocation will fail, and cause 2 problems,
    - When CONFIG_BCACHE_DEBUG is configured, bch_btree_verify() does not
    work, because c->verify_ondisk is NULL and bch_btree_verify() returns
    immediately.
    - bch_btree_cache_alloc() will fail due to the failed c->verify_ondisk
    allocation, so the whole cache device registration fails. And because
    of this failure, the first problem of bch_btree_verify() has no chance
    to be triggered.

    This patch fixes the above problems by two means,
    1) If the page allocation of c->verify_ondisk fails, set it to NULL
    and return -ENOMEM from bch_btree_cache_alloc().
    2) When calling __get_free_pages() to allocate c->verify_ondisk pages,
    use ilog2(meta_bucket_pages(&c->sb)) to make sure ilog2() always
    generates a page order no larger than MAX_ORDER, so the buddy page
    allocator won't return NULL in most conditions.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Similar to c->uuids, struct cache's prio_buckets and disk_buckets also
    have potential memory allocation failures during cache registration if
    the bucket size > 8MB.

    ca->prio_buckets can be stored on the cache device in multiple
    buckets. Its in-memory space is allocated by the kzalloc() interface,
    which normally allocates via alloc_pages() because the size >
    KMALLOC_MAX_CACHE_SIZE.

    So the allocation of ca->prio_buckets has the MAX_ORDER restriction
    too. If the bucket size > 8MB, by default the page allocator will fail
    because the page order > 11 (the default MAX_ORDER value).
    ca->prio_buckets should also use meta_bucket_bytes() and
    meta_bucket_pages() to decide its memory size, and use
    alloc_meta_bucket_pages() to allocate pages, to avoid the allocation
    failure during cache set registration when bucket size > 8MB.

    ca->disk_buckets is a single-bucket-sized memory buffer. It is used to
    iterate each bucket of ca->prio_buckets, composing the bio from the
    memory of ca->disk_buckets and writing it to the cache disk bucket by
    bucket for each bucket of ca->prio_buckets. ca->disk_buckets should
    have an in-memory size of exactly meta_bucket_pages(); this is the
    size with which ca->prio_buckets is stored into each on-disk bucket.

    This patch fixes the above issues and handles the cache's prio_buckets
    and disk_buckets properly for bucket sizes larger than 8MB.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Bcache allocates a whole bucket to store c->uuids on the cache device,
    and allocates continuous pages to store it in memory. When the bucket
    size exceeds the maximum allocatable continuous pages,
    bch_cache_set_alloc() will fail and cache device registration will
    fail.

    This patch allocates c->uuids with alloc_meta_bucket_pages(), and uses
    ilog2(meta_bucket_pages(c)) to indicate the order of the c->uuids
    pages when freeing them. When writing c->uuids to the cache device,
    its size is decided by meta_bucket_pages(c) * PAGE_SECTORS. Now
    c->uuids is properly handled for bucket size > 8MB.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Currently the in-memory meta data like c->uuids or c->disk_buckets
    is allocated by alloc_bucket_pages(). The macro alloc_bucket_pages()
    calls __get_free_pages() to allocate continuous pages with the order
    indicated by ilog2(bucket_pages(c)),
    #define alloc_bucket_pages(gfp, c) \
    ((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))

    The maximum order is defined as MAX_ORDER; the default value is 11
    (and can be overwritten by CONFIG_FORCE_MAX_ZONEORDER). In the bcache
    code the maximum bucket size width is 16bits, restricted both by the
    KEY_SIZE size and the bucket_size size from struct cache_sb_disk. The
    maximum 16bit power-of-2 value is (1<<15) in unit of sectors, i.e. a
    16MB bucket. Once the bucket size is extended to 32bits, allocating a
    whole bucket for meta data this way may exceed what the buddy
    allocator can provide and fail. Therefore this patch introduces the
    helpers meta_bucket_pages(), meta_bucket_bytes() and
    alloc_meta_bucket_pages(), which limit meta data bucket allocations to
    a size the buddy allocator can always satisfy.
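
    A sketch of the capping idea behind meta_bucket_pages() (an
    approximation, not the verbatim helper):

    /* never ask the buddy allocator for more pages than it can provide */
    static inline unsigned int meta_bucket_pages(struct cache_sb *sb)
    {
            unsigned int pages = bucket_pages(sb);
            unsigned int max_pages = 1U << (MAX_ORDER - 1);

            return pages < max_pages ? pages : max_pages;
    }
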
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    Setting sb->first_bucket and checking sb->keys are indeed only
    meaningful for a cache device; it does not make sense to do them in
    read_super() for a backing device too.

    This patch moves the related code piece into read_super_common(),
    explicitly for cache devices, to avoid the confusion.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    The newly added super block versions BCACHE_SB_VERSION_BDEV_WITH_FEATURES
    (value 5) and BCACHE_SB_VERSION_CDEV_WITH_FEATURES (value 6) are for
    the feature set bits.

    Devices with a super block version equal to the new versions will have
    three new members for the feature set bits in the on-disk super block,
    __le64 feature_compat;
    __le64 feature_incompat;
    __le64 feature_ro_compat;

    They are used for future new features which may introduce on-disk
    format changes, and avoid unnecessary super block version increases.

    The very basic features handling code skeleton is also initialized in
    this patch.
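
    The skeleton boils down to per-class bit masks plus test/set helpers,
    roughly like this sketch (the macro shape is approximated from the
    feature names used elsewhere in this log):

    #define BCH_FEATURE_INCOMPAT_FUNCS(name, flagname)                    \
    static inline int bch_has_feature_##name(struct cache_sb *sb)         \
    {                                                                      \
            return (sb->feature_incompat &                                 \
                    BCH_FEATURE_INCOMPAT_##flagname) != 0;                 \
    }                                                                      \
    static inline void bch_set_feature_##name(struct cache_sb *sb)        \
    {                                                                      \
            sb->feature_incompat |= BCH_FEATURE_INCOMPAT_##flagname;       \
    }

    BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LARGE_BUCKET);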

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    In register_cache_set(), c is a pointer to struct cache_set, and ca is
    a pointer to struct cache. If ca->sb.seq > c->sb.seq, it means the
    registering cache has an up-to-date version and other members, and the
    in-memory version and other members should be updated to the newer
    values.

    The current implementation makes a cache set have only a single cache
    device, so the above assumption works well except for one special
    case: when a cache device is newly created, both ca->sb.seq and
    c->sb.seq are 0, because the super block has never been flushed out
    yet. At the location of the following if() check,
    the location for the following if() check,
    2156         if (ca->sb.seq > c->sb.seq) {
    2157                 c->sb.version = ca->sb.version;
    2158                 memcpy(c->sb.set_uuid, ca->sb.set_uuid, 16);
    2159                 c->sb.flags = ca->sb.flags;
    2160                 c->sb.seq = ca->sb.seq;
    2161                 pr_debug("set version = %llu\n", c->sb.version);
    2162         }
    c->sb.version is not initialized yet and has value 0. When ca->sb.seq
    is 0, the if() check fails (because both values are 0), and the cache
    set's version, set_uuid, flags and seq won't be updated.

    The above problem is hidden in the current code, because the bucket
    size is compatible among the different super block versions, and the
    next time the cache set runs, ca->sb.seq will be larger than 0, so the
    cache set super block version will be updated properly.

    But if the large bucket feature is enabled, sb->bucket_size holds only
    the low 16bits of the bucket size. For a power-of-2 value, when the
    actual bucket size exceeds the 16bit width, sb->bucket_size will
    always be 0. Then read_super_common() fails because the if() check on
    is_power_of_2(sb->bucket_size) is false. This is how the long-hidden
    bug is triggered.

    This patch modifies the if() check in the following way,
    2156         if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
    Then the cache set's version, set_uuid, flags and seq will always be
    updated correctly, including for a newly created cache device.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    In bch_cache_set_alloc() there is a big if() check combining 11
    conditions. When this big if() statement fails, it is difficult to
    tell exactly which condition failed.

    This patch disassembles the big if() check into 11 single if() checks,
    which makes debugging easier.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li
     
    An improperly set bucket or block size triggers an error in
    read_super_common(). For large bucket sizes, a more accurate error
    message for an invalid bucket or block size is necessary.

    This patch disassembles the combined if() check into multiple single
    if() checks, and provides a more accurate error message for each
    check's failure condition.

    Signed-off-by: Coly Li
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Jens Axboe

    Coly Li