04 Aug, 2020

1 commit

  • Pull core block updates from Jens Axboe:
    "Good amount of cleanups and tech debt removals in here, and as a
    result, the diffstat shows a nice net reduction in code.

    - Softirq completion cleanups (Christoph)

    - Stop using ->queuedata (Christoph)

    - Cleanup bd claiming (Christoph)

    - Use check_events, moving away from the legacy media change
    (Christoph)

    - Use inode i_blkbits consistently (Christoph)

    - Remove old unused writeback congestion bits (Christoph)

    - Cleanup/unify submission path (Christoph)

    - Use bio_uninit consistently, instead of bio_disassociate_blkg
    (Christoph)

    - sbitmap cleared bits handling (John)

    - Request merging blktrace event addition (Jan)

    - sysfs add/remove race fixes (Luis)

    - blk-mq tag fixes/optimizations (Ming)

    - Duplicate words in comments (Randy)

    - Flush deferral cleanup (Yufen)

    - IO context locking/retry fixes (John)

    - struct_size() usage (Gustavo)

    - blk-iocost fixes (Chengming)

    - blk-cgroup IO stats fixes (Boris)

    - Various little fixes"

    * tag 'for-5.9/block-20200802' of git://git.kernel.dk/linux-block: (135 commits)
    block: blk-timeout: delete duplicated word
    block: blk-mq-sched: delete duplicated word
    block: blk-mq: delete duplicated word
    block: genhd: delete duplicated words
    block: elevator: delete duplicated word and fix typos
    block: bio: delete duplicated words
    block: bfq-iosched: fix duplicated word
    iocost_monitor: start from the oldest usage index
    iocost: Fix check condition of iocg abs_vdebt
    block: Remove callback typedefs for blk_mq_ops
    block: Use non _rcu version of list functions for tag_set_list
    blk-cgroup: show global disk stats in root cgroup io.stat
    blk-cgroup: make iostat functions visible to stat printing
    block: improve discard bio alignment in __blkdev_issue_discard()
    block: change REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL to be odd numbers
    block: defer flush request no matter whether we have elevator
    block: make blk_timeout_init() static
    block: remove retry loop in ioc_release_fn()
    block: remove unnecessary ioc nested locking
    block: integrate bd_start_claiming into __blkdev_get
    ...

    Linus Torvalds
     

09 Jul, 2020

1 commit

  • Only triggering reclaim based on the percentage of unmapped cache
    zones can fail to detect cases where reclaim is needed, e.g. if the
    target has only 2 or 3 cache zones and only one unmapped cache zone,
    the percentage of free cache zones is higher than
    DMZ_RECLAIM_LOW_UNMAP_ZONES (30%) and reclaim does not trigger.

    This problem, combined with the fact that dmz_schedule_reclaim() is
    called from dmz_handle_bio() without the map lock held, leads to a
    race between zone allocation and dmz_should_reclaim() result.
    Depending on the workload applied, this race can lead to the write
    path waiting forever for a free zone without reclaim being triggered.

    Fix this by moving dmz_schedule_reclaim() inside dmz_alloc_zone()
    under the map lock. This results in checking the need for zone reclaim
    whenever a new data or buffer zone needs to be allocated.

    Also fix dmz_reclaim_percentage() to always return 0 if the number of
    unmapped cache (or random) zones is less than or equal to 1.

    Suggested-by: Shin'ichiro Kawasaki
    Signed-off-by: Damien Le Moal
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Mike Snitzer

    Damien Le Moal
     

01 Jul, 2020

1 commit


18 Jun, 2020

1 commit

  • The unit of max_io_len is sector instead of byte (spotted through
    code review), so fix it.

    Fixes: 3b1a94c88b79 ("dm zoned: drive-managed zoned block device target")
    Cc: stable@vger.kernel.org
    Signed-off-by: Hou Tao
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Hou Tao
     

06 Jun, 2020

5 commits


21 May, 2020

4 commits

  • Instead of lumping emulated zones together with random zones we
    should be handling them as separate 'cache' zones. This improves
    code readability and allows an easier implementation of different
    cache policies.

    Also add additional allocation flags to separate the zone type (cache,
    random, or sequential) from the allocation purpose (e.g. reclaim).

    Also switch the allocation policy to not use random zones as buffer
    zones if cache zones are present. This avoids a performance drop when
    all cache zones are used.

    Signed-off-by: Hannes Reinecke
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Hannes Reinecke
     
  • When building arm32 allyesconfig:

    ld.lld: error: undefined symbol: __aeabi_uldivmod
    >>> referenced by dm-zoned-target.c
    >>> md/dm-zoned-target.o:(dmz_ctr) in archive drivers/built-in.a

    dmz_fixup_devices uses DIV_ROUND_UP with variables of type sector_t. As
    such, it should be using DIV_ROUND_UP_SECTOR_T, which handles this
    automatically.

    Fixes: 70978208ec91 ("dm zoned: metadata version 2")
    Signed-off-by: Nathan Chancellor
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Nathan Chancellor
     
  • DMDEBUG will already add a newline to the logging messages, so we
    shouldn't be adding it to the message itself.

    Signed-off-by: Hannes Reinecke
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Hannes Reinecke
     
  • Implement handling for metadata version 2. The new metadata adds a
    label and UUID for the device mapper device, and additional UUID for
    the underlying block devices.

    It also allows for an additional regular drive to be used for
    emulating random access zones. The emulated zones will be placed
    logically in front of the zones from the zoned block device, causing
    the superblocks and metadata to be stored on that device.

    The first zone of the original zoned device will be used to hold
    another, tertiary copy of the metadata; this copy carries a generation
    number of 0 and is never updated; it's just used for identification.

    Signed-off-by: Hannes Reinecke
    Reviewed-by: Bob Liu
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Hannes Reinecke
     

20 May, 2020

2 commits


15 May, 2020

6 commits


04 Mar, 2020

1 commit

  • Changes made during the 5.6 cycle warrant bumping the version number
    for DM core and the targets modified by this commit.

    It should be noted that dm-thin, dm-crypt and dm-raid already had
    their target version bumped during the 5.6 merge window.

    Signed-off-by: Mike Snitzer

    Mike Snitzer
     

28 Feb, 2020

1 commit

    Dm-zoned initializes the reference counters of new chunk works to zero
    and calls refcount_inc() to increment the counter. However, the
    refcount_inc() function treats an addition to a zero value as an error
    and triggers the warning as follows:

    refcount_t: addition on 0; use-after-free.
    WARNING: CPU: 7 PID: 1506 at lib/refcount.c:25 refcount_warn_saturate+0x68/0xf0
    ...
    CPU: 7 PID: 1506 Comm: systemd-udevd Not tainted 5.4.0+ #134
    ...
    Call Trace:
    dmz_map+0x2d2/0x350 [dm_zoned]
    __map_bio+0x42/0x1a0
    __split_and_process_non_flush+0x14a/0x1b0
    __split_and_process_bio+0x83/0x240
    ? kmem_cache_alloc+0x165/0x220
    dm_process_bio+0x90/0x230
    ? generic_make_request_checks+0x2e7/0x680
    dm_make_request+0x3e/0xb0
    generic_make_request+0xcf/0x320
    ? memcg_drain_all_list_lrus+0x1c0/0x1c0
    submit_bio+0x3c/0x160
    ? guard_bio_eod+0x2c/0x130
    mpage_readpages+0x182/0x1d0
    ? bdev_evict_inode+0xf0/0xf0
    read_pages+0x6b/0x1b0
    __do_page_cache_readahead+0x1ba/0x1d0
    force_page_cache_readahead+0x93/0x100
    generic_file_read_iter+0x83a/0xe40
    ? __seccomp_filter+0x7b/0x670
    new_sync_read+0x12a/0x1c0
    vfs_read+0x9d/0x150
    ksys_read+0x5f/0xe0
    do_syscall_64+0x5b/0x180
    entry_SYSCALL_64_after_hwframe+0x44/0xa9
    ...

    After this warning, all subsequent refcount API calls on the counter
    fail to change its value.

    Fix this by initializing the reference counter of new chunk works to
    one instead of zero, and correspondingly by not calling refcount_inc()
    via dmz_get_chunk_work() for newly created chunk works.

    The failure was observed with linux version 5.4 with CONFIG_REFCOUNT_FULL
    enabled. Refcount rework was merged to linux version 5.5 by the
    commit 168829ad09ca ("Merge branch 'locking-core-for-linus' of
    git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip"). After this
    commit, CONFIG_REFCOUNT_FULL was removed and the failure was observed
    regardless of kernel configuration.

    Linux version 4.20 merged the commit 092b5648760a ("dm zoned: target: use
    refcount_t for dm zoned reference counters"). Before this commit, dm
    zoned used the atomic_t API, which does not check addition to zero, so
    this fix is not necessary there.

    Fixes: 092b5648760a ("dm zoned: target: use refcount_t for dm zoned reference counters")
    Cc: stable@vger.kernel.org # 5.4+
    Signed-off-by: Shin'ichiro Kawasaki
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Shin'ichiro Kawasaki
     

03 Dec, 2019

1 commit

  • Simplify the arguments to blkdev_nr_zones by passing a gendisk instead
    of the block_device and capacity. This also removes the need for
    __blkdev_nr_zones as all callers are outside the fast path and can
    deal with the additional branch.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     

07 Nov, 2019

1 commit

  • Commit 75d66ffb48efb3 added backing device health checks and as a part
    of these checks, check_events() block ops template call is invoked in
    dm-zoned mapping path as well as in reclaim and flush path. Calling
    check_events() with ATA or SCSI backing devices introduces a blocking
    scsi_test_unit_ready() call being made in sd_check_events(). Even though
    the overhead of calling scsi_test_unit_ready() is small for ATA zoned
    devices, it is much larger for SCSI and it affects performance in a very
    negative way.

    Fix this performance regression by executing check_events() only in case
    of any I/O errors. The function dmz_bdev_is_dying() is modified to call
    only blk_queue_dying(), while calls to check_events() are made in a new
    helper function, dmz_check_bdev().

    Reported-by: zhangxiaoxu
    Fixes: 75d66ffb48efb3 ("dm zoned: properly handle backing device failure")
    Cc: stable@vger.kernel.org
    Signed-off-by: Dmitry Fomichev
    Signed-off-by: Mike Snitzer

    Dmitry Fomichev
     

26 Aug, 2019

1 commit

  • Commit 75d66ffb48efb30f2dd42f041ba8b39c5b2bd115 ("dm zoned: properly
    handle backing device failure") triggers a coverity warning:

    *** CID 1452808: Memory - illegal accesses (USE_AFTER_FREE)
    /drivers/md/dm-zoned-target.c: 137 in dmz_submit_bio()
    131 clone->bi_private = bioctx;
    132
    133 bio_advance(bio, clone->bi_iter.bi_size);
    134
    135 refcount_inc(&bioctx->ref);
    136 generic_make_request(clone);
    >>> CID 1452808: Memory - illegal accesses (USE_AFTER_FREE)
    >>> Dereferencing freed pointer "clone".
    137 if (clone->bi_status == BLK_STS_IOERR)
    138 return -EIO;
    139
    140 if (bio_op(bio) == REQ_OP_WRITE && dmz_is_seq(zone))
    141 zone->wp_block += nr_blocks;
    142

    The "clone" bio may be processed and freed before the check
    "clone->bi_status == BLK_STS_IOERR" - so this check can access invalid
    memory.

    Fixes: 75d66ffb48efb3 ("dm zoned: properly handle backing device failure")
    Cc: stable@vger.kernel.org
    Signed-off-by: Mikulas Patocka
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Mikulas Patocka
     

16 Aug, 2019

3 commits

  • Signed-off-by: Dmitry Fomichev
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Dmitry Fomichev
     
  • dm-zoned is observed to lock up or livelock in case of hardware
    failure or some misconfiguration of the backing zoned device.

    This patch adds a new dm-zoned target function that checks the status of
    the backing device. If the request queue of the backing device is found
    to be in dying state or the SCSI backing device enters offline state,
    the health check code sets a dm-zoned target flag prompting all further
    incoming I/O to be rejected. In order to detect backing device failures
    in a timely manner, this new function is called in the request mapping
    path, at the beginning of every reclaim run and before performing any
    metadata I/O.

    The proper way out of this situation is to do

    dmsetup remove

    and recreate the target when the problem with the backing device
    is resolved.

    Fixes: 3b1a94c88b79 ("dm zoned: drive-managed zoned block device target")
    Cc: stable@vger.kernel.org
    Signed-off-by: Dmitry Fomichev
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Dmitry Fomichev
     
  • Some errors are ignored in the I/O path during queueing chunks
    for processing by chunk works. Since at least these errors are
    transient in nature, it should be possible to retry the failed
    incoming commands.

    The fix -

    Errors that can happen while queueing chunks are carried upwards
    to the main mapping function, which now returns DM_MAPIO_REQUEUE
    for any incoming requests that cannot be properly queued.

    Error logging/debug messages are added where needed.

    Fixes: 3b1a94c88b79 ("dm zoned: drive-managed zoned block device target")
    Cc: stable@vger.kernel.org
    Signed-off-by: Dmitry Fomichev
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Dmitry Fomichev
     

19 Apr, 2019

1 commit

  • My static checker complains about this line from dmz_get_zoned_device()

    aligned_capacity = dev->capacity & ~(blk_queue_zone_sectors(q) - 1);

    The problem is that "aligned_capacity" and "dev->capacity" are sector_t
    type (which is a u64 under most configs) but blk_queue_zone_sectors(q)
    returns a u32 so the higher 32 bits in aligned_capacity are cleared to
    zero. This patch adds a cast to address the issue.

    Fixes: 114e025968b5 ("dm zoned: ignore last smaller runt zone")
    Signed-off-by: Dan Carpenter
    Reviewed-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    Dan Carpenter
     

21 Feb, 2019

1 commit


08 Dec, 2018

1 commit

  • struct bioctx includes the ref refcount_t to track the number of I/O
    fragments used to process a target BIO as well as ensure that the zone
    of the BIO is kept in the active state throughout the lifetime of the
    BIO. However, since decrementing of this reference count is done in the
    target .end_io method, the function bio_endio() must be called multiple
    times for read and write target BIOs, which causes problems with the
    value of the __bi_remaining struct bio field for chained BIOs (e.g.
    when the clone BIO passed by dm core is large and is split into
    fragments by the block layer), resulting in incorrect values and
    inconsistencies with the BIO_CHAIN flag setting. This in turn triggers
    the BUG_ON() call:

    BUG_ON(atomic_read(&bio->__bi_remaining) <= 0);
    Signed-off-by: Mike Snitzer

    Damien Le Moal
     

27 Oct, 2018

1 commit

  • …/device-mapper/linux-dm

    Pull device mapper updates from Mike Snitzer:

    - The biggest change this cycle is to remove support for the legacy IO
    path (.request_fn) from request-based DM.

    Jens has already started preparing for complete removal of the legacy
    IO path in 4.21 but this earlier removal of support from DM has been
    coordinated with Jens (as evidenced by the commit being attributed to
    him).

    Making request-based DM exclusively blk-mq only cleans up that
    portion of DM core quite nicely.

    - Convert the thinp and zoned targets over to using refcount_t where
    applicable.

    - A couple fixes to the DM zoned target for refcounting and other races
    buried in the implementation of metadata block creation and use.

    - Small cleanups to remove redundant unlikely() around a couple
    WARN_ON_ONCE().

    - Simplify how dm-ioctl copies from userspace, eliminating some
    potential for a malicious user trying to change the executed ioctl
    after its processing has begun.

    - Tweaked DM crypt target to use the DM device name when naming the
    various workqueues created for a particular DM crypt device (makes
    the N workqueues for a DM crypt device more easily understood and
    enhances users' accounting capabilities at a glance via "ps")

    - Small fixup to remove dead branch in DM writecache's memory_entry().

    * tag 'for-4.20/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
    dm writecache: remove disabled code in memory_entry()
    dm zoned: fix various dmz_get_mblock() issues
    dm zoned: fix metadata block ref counting
    dm raid: avoid bitmap with raid4/5/6 journal device
    dm crypt: make workqueue names device-specific
    dm: add dm_table_device_name()
    dm ioctl: harden copy_params()'s copy_from_user() from malicious users
    dm: remove unnecessary unlikely() around WARN_ON_ONCE()
    dm zoned: target: use refcount_t for dm zoned reference counters
    dm thin: use refcount_t for thin_c reference counting
    dm table: require that request-based DM be layered on blk-mq devices
    dm: rename DM_TYPE_MQ_REQUEST_BASED to DM_TYPE_REQUEST_BASED
    dm: remove legacy request-based IO path

    Linus Torvalds
     

26 Oct, 2018

1 commit

  • Introduce the blkdev_nr_zones() helper function to get the total
    number of zones of a zoned block device. This number is always 0 for a
    regular block device (q->limits.zoned == BLK_ZONED_NONE case).

    Replace hard-coded number of zones calculation in dmz_get_zoned_device()
    with a call to this helper.

    Reviewed-by: Christoph Hellwig
    Reviewed-by: Hannes Reinecke
    Signed-off-by: Damien Le Moal
    Signed-off-by: Jens Axboe

    Damien Le Moal
     

17 Oct, 2018

1 commit

  • The API surrounding refcount_t should be used in place of atomic_t
    when variables are being used as reference counters. This API can
    prevent issues such as counter overflows and use-after-free
    conditions. Within the dm zoned target stack, the atomic_t API is
    used for bioctx->ref and cw->refcount. Change these to use
    refcount_t, avoiding the issues mentioned.

    Signed-off-by: John Pittman
    Reviewed-by: Damien Le Moal
    Tested-by: Damien Le Moal
    Signed-off-by: Mike Snitzer

    John Pittman
     

23 Jun, 2018

1 commit

  • This patch avoids the following lockdep report:

    ======================================================
    WARNING: possible circular locking dependency detected
    4.18.0-rc1 #62 Not tainted
    ------------------------------------------------------
    kswapd0/84 is trying to acquire lock:
    00000000c313516d (&xfs_nondir_ilock_class){++++}, at: xfs_free_eofblocks+0xa2/0x1e0

    but task is already holding lock:
    00000000591c83ae (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x30

    which lock already depends on the new lock.

    the existing dependency chain (in reverse order) is:

    -> #2 (fs_reclaim){+.+.}:
    kmem_cache_alloc+0x2c/0x2b0
    radix_tree_node_alloc.constprop.19+0x3d/0xc0
    __radix_tree_create+0x161/0x1c0
    __radix_tree_insert+0x45/0x210
    dmz_map+0x245/0x2d0 [dm_zoned]
    __map_bio+0x40/0x260
    __split_and_process_non_flush+0x116/0x220
    __split_and_process_bio+0x81/0x180
    __dm_make_request.isra.32+0x5a/0x100
    generic_make_request+0x36e/0x690
    submit_bio+0x6c/0x140
    mpage_readpages+0x19e/0x1f0
    read_pages+0x6d/0x1b0
    __do_page_cache_readahead+0x21b/0x2d0
    force_page_cache_readahead+0xc4/0x100
    generic_file_read_iter+0x7c6/0xd20
    __vfs_read+0x102/0x180
    vfs_read+0x9b/0x140
    ksys_read+0x55/0xc0
    do_syscall_64+0x5a/0x1f0
    entry_SYSCALL_64_after_hwframe+0x49/0xbe

    -> #1 (&dmz->chunk_lock){+.+.}:
    dmz_map+0x133/0x2d0 [dm_zoned]
    __map_bio+0x40/0x260
    __split_and_process_non_flush+0x116/0x220
    __split_and_process_bio+0x81/0x180
    __dm_make_request.isra.32+0x5a/0x100
    generic_make_request+0x36e/0x690
    submit_bio+0x6c/0x140
    _xfs_buf_ioapply+0x31c/0x590
    xfs_buf_submit_wait+0x73/0x520
    xfs_buf_read_map+0x134/0x2f0
    xfs_trans_read_buf_map+0xc3/0x580
    xfs_read_agf+0xa5/0x1e0
    xfs_alloc_read_agf+0x59/0x2b0
    xfs_alloc_pagf_init+0x27/0x60
    xfs_bmap_longest_free_extent+0x43/0xb0
    xfs_bmap_btalloc_nullfb+0x7f/0xf0
    xfs_bmap_btalloc+0x428/0x7c0
    xfs_bmapi_write+0x598/0xcc0
    xfs_iomap_write_allocate+0x15a/0x330
    xfs_map_blocks+0x1cf/0x3f0
    xfs_do_writepage+0x15f/0x7b0
    write_cache_pages+0x1ca/0x540
    xfs_vm_writepages+0x65/0xa0
    do_writepages+0x48/0xf0
    __writeback_single_inode+0x58/0x730
    writeback_sb_inodes+0x249/0x5c0
    wb_writeback+0x11e/0x550
    wb_workfn+0xa3/0x670
    process_one_work+0x228/0x670
    worker_thread+0x3c/0x390
    kthread+0x11c/0x140
    ret_from_fork+0x3a/0x50

    -> #0 (&xfs_nondir_ilock_class){++++}:
    down_read_nested+0x43/0x70
    xfs_free_eofblocks+0xa2/0x1e0
    xfs_fs_destroy_inode+0xac/0x270
    dispose_list+0x51/0x80
    prune_icache_sb+0x52/0x70
    super_cache_scan+0x127/0x1a0
    shrink_slab.part.47+0x1bd/0x590
    shrink_node+0x3b5/0x470
    balance_pgdat+0x158/0x3b0
    kswapd+0x1ba/0x600
    kthread+0x11c/0x140
    ret_from_fork+0x3a/0x50

    other info that might help us debug this:

    Chain exists of:
    &xfs_nondir_ilock_class --> &dmz->chunk_lock --> fs_reclaim

    Possible unsafe locking scenario:

    CPU0                           CPU1
    ----                           ----
    lock(fs_reclaim);
                                   lock(&dmz->chunk_lock);
                                   lock(fs_reclaim);
    lock(&xfs_nondir_ilock_class);

    *** DEADLOCK ***

    3 locks held by kswapd0/84:
    #0: 00000000591c83ae (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x30
    #1: 000000000f8208f5 (shrinker_rwsem){++++}, at: shrink_slab.part.47+0x3f/0x590
    #2: 00000000cacefa54 (&type->s_umount_key#43){.+.+}, at: trylock_super+0x16/0x50

    stack backtrace:
    CPU: 7 PID: 84 Comm: kswapd0 Not tainted 4.18.0-rc1 #62
    Hardware name: Supermicro Super Server/X10SRL-F, BIOS 2.0 12/17/2015
    Call Trace:
    dump_stack+0x85/0xcb
    print_circular_bug.isra.36+0x1ce/0x1db
    __lock_acquire+0x124e/0x1310
    lock_acquire+0x9f/0x1f0
    down_read_nested+0x43/0x70
    xfs_free_eofblocks+0xa2/0x1e0
    xfs_fs_destroy_inode+0xac/0x270
    dispose_list+0x51/0x80
    prune_icache_sb+0x52/0x70
    super_cache_scan+0x127/0x1a0
    shrink_slab.part.47+0x1bd/0x590
    shrink_node+0x3b5/0x470
    balance_pgdat+0x158/0x3b0
    kswapd+0x1ba/0x600
    kthread+0x11c/0x140
    ret_from_fork+0x3a/0x50

    Reported-by: Masato Suzuki
    Fixes: 4218a9554653 ("dm zoned: use GFP_NOIO in I/O path")
    Cc:
    Signed-off-by: Bart Van Assche
    Signed-off-by: Mike Snitzer

    Bart Van Assche
     

08 Jun, 2018

1 commit


31 May, 2018

1 commit


05 Apr, 2018

1 commit

  • Use the fmode_t that is passed to dm_blk_ioctl() rather than
    inconsistently (it varies across targets) dropping it on the floor by
    overriding it with the fmode_t stored in 'struct dm_dev'.

    None of the persistent reservation functions were using the fmode_t
    they got back from .prepare_ioctl, so remove that unused argument
    from them.

    Signed-off-by: Mike Snitzer

    Mike Snitzer
     

17 Jan, 2018

1 commit