17 May, 2016

7 commits

  • dax_do_io() calls filemap_write_and_wait() if the DIO_LOCKING flag is
    set. Presumably this was copied over from the direct IO code. However,
    DAX inodes have no pagecache pages to write, so the call is pointless.
    Remove it.

    Reviewed-by: Ross Zwisler
    Signed-off-by: Jan Kara
    Signed-off-by: Vishal Verma

    Jan Kara
     
  • All filesystems now zero blocks themselves for DAX IO to avoid races
    between dax_io() and dax_fault(). Remove the zeroing code from
    dax_io() and add a warning to catch the case where somebody unexpectedly
    returns a new or unwritten buffer (see the sketch after this entry).

    Reviewed-by: Ross Zwisler
    Signed-off-by: Jan Kara
    Signed-off-by: Vishal Verma

    Jan Kara
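
    A minimal sketch of the kind of check this adds, assuming a
    buffer_head-based walk inside dax_io(); the exact placement and return
    value are illustrative, not the actual patch:

    /* Blocks handed to dax_io() must already be zeroed and written. */
    if (WARN_ON_ONCE(buffer_unwritten(bh) || buffer_new(bh)))
            return -EIO;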
     
  • Now that all filesystems zero out blocks allocated for a fault handler,
    we can just remove the zeroing from the handler itself. Also add checks
    that no filesystem returns an unwritten or new buffer to us.

    Reviewed-by: Ross Zwisler
    Signed-off-by: Jan Kara
    Signed-off-by: Vishal Verma

    Jan Kara
     
  • Currently ext2 zeroes any data blocks allocated for a DAX inode, yet it
    still returns them as BH_New. Thus the DAX code zeroes them again in
    dax_insert_mapping(), which can overwrite data that has already been
    stored to those blocks by a racing dax_io(). Avoid marking pre-zeroed
    buffers as new.

    Reviewed-by: Ross Zwisler
    Signed-off-by: Jan Kara
    Signed-off-by: Vishal Verma

    Jan Kara
     
  • When zeroing allocated blocks for DAX, we accidentally zeroed only the
    first allocated block instead of all of them. So far this problem has
    been hidden by the fact that page faults always need only a single block
    and the DAX write code zeroes blocks again. But the zeroing in the DAX
    code is racy and needs to be removed, so fix the zeroing in ext2 to zero
    all allocated blocks (a sketch follows this entry).

    Reported-by: Ross Zwisler
    Signed-off-by: Jan Kara
    Signed-off-by: Vishal Verma

    Jan Kara
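
    A minimal sketch of the intent, zeroing the whole allocated run instead
    of a single block; sb_issue_zeroout() stands in here for whatever
    zeroing primitive the DAX path actually uses, and first_block/count are
    hypothetical names:

    /* Zero every block that was just allocated, not only the first. */
    err = sb_issue_zeroout(inode->i_sb, first_block, count, GFP_NOFS);
    if (err)
            return err;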
     
  • Fault handlers currently take a complete_unwritten argument to convert
    unwritten extents after PTEs are updated. However, no filesystem uses
    this anymore as the code is racy. Remove the unused argument (see the
    sketch after this entry).

    Reviewed-by: Ross Zwisler
    Signed-off-by: Jan Kara
    Signed-off-by: Vishal Verma

    Jan Kara
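
    Roughly, the change trims the fault-handler prototypes along these lines
    (a sketch based on the pre-removal signatures; dax_iodone_t was the type
    of the now-unused callback):

    /* Before: callers had to pass an unwritten-extent conversion hook. */
    int dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
                  get_block_t get_block, dax_iodone_t complete_unwritten);

    /* After: the unused argument is gone. */
    int dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
                  get_block_t get_block);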
     
  • These don't belong in radix-tree.c any more than PAGECACHE_TAG_* do.
    Let's try to maintain the idea that radix-tree simply implements an
    abstract data type.

    Acked-by: Ross Zwisler
    Reviewed-by: Matthew Wilcox
    Signed-off-by: NeilBrown
    Signed-off-by: Jan Kara
    Signed-off-by: Vishal Verma

    NeilBrown
     

13 May, 2016

5 commits

  • Currently ext4 treats DAX IO the same way as direct IO, i.e., it
    allocates unwritten extents before the IO is done and converts the
    unwritten extents afterwards. However, this way DAX IO can race with a
    page fault to the same area:

    ext4_ext_direct_IO()                          dax_fault()
      dax_io()
        get_block() - allocates unwritten extent
        copy_from_iter_pmem()
                                                    get_block() - converts
                                                      unwritten block to
                                                      written and zeroes it
                                                      out
      ext4_convert_unwritten_extents()

    So data written with DAX IO gets lost. Similarly, dax_new_buf() called
    from dax_io() can overwrite data that has already been written to the
    block via mmap.

    Fix the problem by using pre-zeroed blocks for DAX IO the same way as we
    use them for DAX mmap (a sketch of the idea follows this entry). The
    downside of this solution is that every allocating write writes each
    block twice (once zeros, once data). Fixing the race with locking is
    possible as well, but we would need to lock out faults for the whole
    range written to by DAX IO, and that is not easy to do without
    locking out faults for the whole file, which seems too aggressive.

    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
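
    A minimal sketch of the idea, assuming the DAX write path reuses the
    same pre-zeroing request that the fault path already uses; the flag name
    and the surrounding code are illustrative:

    /*
     * Allocating DAX write: ask for blocks that are zeroed out and marked
     * written before they are handed back, so a racing fault cannot zero
     * them after data has been copied in.
     */
    ret = ext4_map_blocks(handle, inode, &map, EXT4_GET_BLOCKS_CREATE_ZERO);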
     
  • Currently ext4 direct IO handling is split between ext4_ext_direct_IO()
    and ext4_ind_direct_IO(). However, the extent-based function calls into
    the indirect-based one in some cases and, for example, it is not able to
    handle file extending. Previously it also did not properly handle
    retries in case of ENOSPC errors. With DAX things would get even more
    contrived, so just refactor the direct IO code and, instead of the
    indirect / extent split, split it into reads vs. writes.

    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
     
  • When there are blocks to free in the running transaction, the block
    allocator can return ENOSPC even though the filesystem has blocks that
    are about to be freed. We use ext4_should_retry_alloc() to force a
    commit of the current transaction and return whether anything was
    committed, so that it makes sense to retry the allocation. However, the
    transaction may get committed after the block allocation fails but
    before we call ext4_should_retry_alloc(). Then ext4_should_retry_alloc()
    returns false because there is nothing to commit and we wrongly return
    ENOSPC.

    Fix the race by unconditionally returning 1 from ext4_should_retry_alloc()
    when we tried to commit a transaction (a sketch follows this entry).
    This should not add any unnecessary retries, since we had a transaction
    running a while ago when trying to allocate the blocks and we want to
    retry the allocation once that transaction has committed anyway.

    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
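
    A minimal sketch of the resulting tail of the helper; the earlier
    free-cluster and retry-count checks are assumed unchanged and are not
    shown:

    /*
     * After forcing a commit, always tell the caller to retry: the
     * transaction holding the to-be-freed blocks may have committed
     * between the failed allocation and this check.
     */
    jbd2_journal_force_commit_nested(EXT4_SB(sb)->s_journal);
    return 1;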
     
  • ext4_dax_get_blocks() was accidentally omitted when fixing the get_blocks
    handlers to properly handle transient ENOSPC errors. Fix it now to use
    the ext4_get_blocks_trans() helper, which takes care of these errors.

    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
     
  • Currently, __dax_fault() does not call the get_blocks() callback with
    the create argument set when it got back an unwritten extent from the
    initial get_blocks() call during a write fault. This is because
    originally filesystems were supposed to convert unwritten extents to
    written ones using the complete_unwritten() callback. Later this was
    abandoned in favor of using pre-zeroed blocks, but the condition
    deciding whether get_blocks() needs to be called with create == 1
    remained.

    Fix the condition so that filesystems are not forced to zero out and
    convert unwritten extents when get_blocks() is called with create == 0
    (which introduces unnecessary overhead for read faults and can be
    problematic as the filesystem may possibly be read-only). A sketch of
    the resulting condition follows this entry.

    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
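
    A minimal sketch of the write-fault condition; the variable names follow
    the usual buffer_head-based handler shape and are illustrative:

    /*
     * Only ask the filesystem to allocate/convert (create == 1) for write
     * faults; read faults over holes or unwritten extents are served from
     * the zero page without forcing a conversion.
     */
    if ((vmf->flags & FAULT_FLAG_WRITE) &&
        (buffer_new(&bh) || buffer_unwritten(&bh)))
            error = get_block(inode, block, &bh, 1);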
     

06 May, 2016

5 commits

  • ext4_find_extent(), stripped down to the parts relevant to this patch,
    reads as

    ppos = 0;
    i = depth;
    while (i) {
        --i;
        ++ppos;
        if (unlikely(ppos > depth)) {
            ...
            ret = -EFSCORRUPTED;
            goto err;
        }
    }

    Due to the loop's bounds, the condition ppos > depth can never be met.

    Remove this dead code.

    Signed-off-by: Nicolai Stange
    Signed-off-by: Theodore Ts'o

    Nicolai Stange
     
  • Commit bf6993276f74 ("jbd2: Use tracepoints for history file")
    removed the members j_history, j_history_max and j_history_cur from struct
    handle_s but the descriptions stayed lingering. Removing them.

    Signed-off-by: Luis de Bethencourt
    Signed-off-by: Theodore Ts'o
    Reviewed-by: Jan Kara

    Luis de Bethencourt
     
  • ext4_io_submit() used to check for EOPNOTSUPP after bio submission,
    which is why it had to get an extra reference to the bio before
    submitting it. But since we no longer touch the bio after submission,
    get rid of the redundant get/put of the bio. If we do get the extra
    reference, we enter the slower path of having to flag this bio as now
    having external references.

    Signed-off-by: Jens Axboe
    Signed-off-by: Theodore Ts'o

    Jens Axboe
     
  • Currently, in ext4_mb_init(), there's a loop like the following:

    do {
        ...
        offset += 1 << (sb->s_blocksize_bits - i);
        i++;
    } while (i <= sb->s_blocksize_bits + 1);

    Note that the updated offset is used in the loop's next iteration only.

    However, at the last iteration, that is at i == sb->s_blocksize_bits + 1,
    the shift count becomes equal to (unsigned)-1 > 31 (c.f. C99 6.5.7(3))
    and UBSAN reports

    UBSAN: Undefined behaviour in fs/ext4/mballoc.c:2621:15
    shift exponent 4294967295 is too large for 32-bit type 'int'
    [...]
    Call Trace:
    [] dump_stack+0xbc/0x117
    [] ? _atomic_dec_and_lock+0x169/0x169
    [] ubsan_epilogue+0xd/0x4e
    [] __ubsan_handle_shift_out_of_bounds+0x1fb/0x254
    [] ? __ubsan_handle_load_invalid_value+0x158/0x158
    [] ? kmem_cache_alloc+0x101/0x390
    [] ? ext4_mb_init+0x13b/0xfd0
    [] ? create_cache+0x57/0x1f0
    [] ? create_cache+0x11a/0x1f0
    [] ? mutex_lock+0x38/0x60
    [] ? mutex_unlock+0x1b/0x50
    [] ? put_online_mems+0x5b/0xc0
    [] ? kmem_cache_create+0x117/0x2c0
    [] ext4_mb_init+0xc49/0xfd0
    [...]

    Observe that the mentioned shift exponent, 4294967295, equals (unsigned)-1.

    Unless compilers start to do some fancy transformations (which at least
    GCC 6.0.0 doesn't currently do), the issue is only cosmetic: the value
    of offset calculated this way is never used again.

    Silence UBSAN by introducing another variable, offset_incr, holding the
    next increment to apply to offset, and adjust it by shifting it right by
    one position per loop iteration (see the sketch after this entry).

    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=114701
    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=112161

    Cc: stable@vger.kernel.org
    Signed-off-by: Nicolai Stange
    Signed-off-by: Theodore Ts'o

    Nicolai Stange
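
    A sketch of the reworked loop following the description above; the
    elided body is unchanged:

    offset_incr = 1 << (sb->s_blocksize_bits - 1);
    do {
        ...
        offset += offset_incr;
        offset_incr = offset_incr >> 1;
        i++;
    } while (i <= sb->s_blocksize_bits + 1);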
     
  • Currently, in mb_find_order_for_block(), there's a loop like the following:

    while (order <= e4b->bd_blkbits + 1) {
        ...
        bb += 1 << (e4b->bd_blkbits - order);
    }

    Note that the updated bb is used in the loop's next iteration only.

    However, at the last iteration, that is at order == e4b->bd_blkbits + 1,
    the shift count becomes negative (c.f. C99 6.5.7(3)) and UBSAN reports

    UBSAN: Undefined behaviour in fs/ext4/mballoc.c:1281:11
    shift exponent -1 is negative
    [...]
    Call Trace:
    [] dump_stack+0xbc/0x117
    [] ? _atomic_dec_and_lock+0x169/0x169
    [] ubsan_epilogue+0xd/0x4e
    [] __ubsan_handle_shift_out_of_bounds+0x1fb/0x254
    [] ? __ubsan_handle_load_invalid_value+0x158/0x158
    [] ? ext4_mb_generate_from_pa+0x590/0x590
    [] ? ext4_read_block_bitmap_nowait+0x598/0xe80
    [] mb_find_order_for_block+0x1ce/0x240
    [...]

    Unless compilers start to do some fancy transformations (which at least
    GCC 6.0.0 doesn't currently do), the issue is only cosmetic: the value
    of bb calculated this way is never used again.

    Silence UBSAN by introducing another variable, bb_incr, holding the next
    increment to apply to bb and adjust that one by right shifting it by one
    position per loop iteration.

    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=114701
    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=112161

    Cc: stable@vger.kernel.org
    Signed-off-by: Nicolai Stange
    Signed-off-by: Theodore Ts'o

    Nicolai Stange
     

05 May, 2016

2 commits

  • When the filesystem is corrupted in the right way, it can happen that
    ext4_mark_iloc_dirty() in ext4_orphan_add() returns an error and we
    subsequently remove the inode from the in-memory orphan list. However,
    this deletion is done with list_del(&EXT4_I(inode)->i_orphan) and thus
    we leave the i_orphan list_head with stale content. Later we can look at
    this content, causing list corruption, an oops, or other issues. The
    reported trace looked like:

    WARNING: CPU: 0 PID: 46 at lib/list_debug.c:53 __list_del_entry+0x6b/0x100()
    list_del corruption, 0000000061c1d6e0->next is LIST_POISON1
    0000000000100100)
    CPU: 0 PID: 46 Comm: ext4.exe Not tainted 4.1.0-rc4+ #250
    Stack:
    60462947 62219960 602ede24 62219960
    602ede24 603ca293 622198f0 602f02eb
    62219950 6002c12c 62219900 601b4d6b
    Call Trace:
    [] ? vprintk_emit+0x2dc/0x5c0
    [] ? printk+0x0/0x94
    [] show_stack+0xdc/0x1a0
    [] ? printk+0x0/0x94
    [] ? printk+0x0/0x94
    [] dump_stack+0x2a/0x2c
    [] warn_slowpath_common+0x9c/0xf0
    [] ? __list_del_entry+0x6b/0x100
    [] warn_slowpath_fmt+0x94/0xa0
    [] ? __mutex_lock_slowpath+0x239/0x3a0
    [] ? warn_slowpath_fmt+0x0/0xa0
    [] ? set_signals+0x3f/0x50
    [] ? kmem_cache_free+0x10a/0x180
    [] ? mutex_lock+0x18/0x30
    [] __list_del_entry+0x6b/0x100
    [] ext4_orphan_del+0x22c/0x2f0
    [] ? __ext4_journal_start_sb+0x2c/0xa0
    [] ? ext4_truncate+0x383/0x390
    [] ext4_write_begin+0x30b/0x4b0
    [] ? copy_from_user+0x0/0xb0
    [] ? iov_iter_fault_in_readable+0xa0/0xc0
    [] generic_perform_write+0xaf/0x1e0
    [] ? file_update_time+0x46/0x110
    [] __generic_file_write_iter+0x18f/0x1b0
    [] ext4_file_write_iter+0x15f/0x470
    [] ? unlink_file_vma+0x0/0x70
    [] ? unlink_anon_vmas+0x0/0x260
    [] ? free_pgtables+0xb9/0x100
    [] __vfs_write+0xb0/0x130
    [] vfs_write+0xa5/0x170
    [] SyS_write+0x56/0xe0
    [] ? __libc_waitpid+0x0/0xa0
    [] handle_syscall+0x68/0x90
    [] userspace+0x4fd/0x600
    [] ? save_registers+0x1f/0x40
    [] ? arch_prctl+0x177/0x1b0
    [] fork_handler+0x85/0x90

    Fix the problem by using list_del_init(), as we always should with the
    i_orphan list (see the sketch after this entry).

    CC: stable@vger.kernel.org
    Reported-by: Vegard Nossum
    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
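
    A minimal sketch of the difference on the error path:

    /*
     * list_del() poisons the entry (LIST_POISON1/2), so any later use of
     * i_orphan trips over stale pointers; list_del_init() leaves the
     * list_head in a safe, empty state instead.
     */
    list_del_init(&EXT4_I(inode)->i_orphan);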
     
  • A failed call to dqget() returns an ERR_PTR() and not NULL. Fix
    the check in ext4_ioctl_setproject() to handle this correctly (a sketch
    follows this entry).

    Fixes: 9b7365fc1c82 ("ext4: add FS_IOC_FSSETXATTR/FS_IOC_FSGETXATTR interface support")
    Cc: stable@vger.kernel.org # v4.5
    Signed-off-by: Seth Forshee
    Signed-off-by: Theodore Ts'o
    Reviewed-by: Jan Kara

    Seth Forshee
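
    A minimal sketch of the corrected check; the array slot and the error
    label follow common quota-transfer code and are illustrative here:

    transfer_to[PRJQUOTA] = dqget(sb, make_kqid_projid(kprojid));
    if (IS_ERR(transfer_to[PRJQUOTA])) {
            err = PTR_ERR(transfer_to[PRJQUOTA]);
            transfer_to[PRJQUOTA] = NULL;
            goto out_dirty;
    }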
     

30 Apr, 2016

2 commits

  • If the orphan list is corrupted, declare the file system corrupted
    instead of just printing warning messages. If there are any reserved
    inodes on the orphaned inode list, declare the file system corrupted
    and stop right away to avoid doing more potential damage to the file
    system.

    Cc: stable@vger.kernel.org
    Signed-off-by: Theodore Ts'o

    Theodore Ts'o
     
  • If the orphaned inode list contains inode #5, ext4_iget() returns a
    bad inode (since the bootloader inode should never be referenced
    directly). Because of the bad inode, we end up processing the inode
    repeatedly and this hangs the machine.

    This can be reproduced via:

    mke2fs -t ext4 /tmp/foo.img 100
    debugfs -w -R "ssv last_orphan 5" /tmp/foo.img
    mount -o loop /tmp/foo.img /mnt

    (But don't do this on an unpatched kernel if you care about the system
    staying functional. :-)

    This bug was found by the port of American Fuzzy Lop into the kernel
    to find file system problems[1]. (Since it *only* happens if inode #5
    shows up on the orphan list --- 3, 7, 8, etc. won't do it --- it's not
    surprising that AFL needed two hours before it found it.) A sketch of
    the kind of reserved-inode check involved follows this entry.

    [1] http://events.linuxfoundation.org/sites/events/files/slides/AFL%20filesystem%20fuzzing%2C%20Vault%202016_0.pdf

    Cc: stable@vger.kernel.org
    Reported-by: Vegard Nossum
    Signed-off-by: Theodore Ts'o

    Theodore Ts'o
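
    An illustration of the kind of sanity check involved (not the literal
    patch): refuse reserved inode numbers during orphan processing instead
    of looping on the bad inode they produce; the surrounding function and
    error handling are assumed, not taken from the patch:

    /* Reserved inodes (e.g. the bootloader inode, #5) are never orphans. */
    if (ino < EXT4_FIRST_INO(sb)) {
            ext4_error(sb, "reserved inode %lu on orphan list", ino);
            return -EFSCORRUPTED;
    }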
     

27 Apr, 2016

1 commit


26 Apr, 2016

3 commits

  • In ext4, there is a race condition between changing an inode's journal
    mode and ext4_writepages(). While ext4_writepages() is executed on an
    inode in non-journalled mode, the inode's journal mode could be enabled
    by ioctl() and then some pages dirtied after switching the journal
    mode will still be exposed to ext4_writepages() in non-journalled mode.
    To resolve this problem, we use an fs-wide per-cpu rw semaphore, as
    suggested by Jan Kara, because we don't want to waste ext4_inode_info's
    space for this rare case (see the sketch after this entry).

    Signed-off-by: Daeho Jeong
    Signed-off-by: Theodore Ts'o
    Reviewed-by: Jan Kara

    Daeho Jeong
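
    A minimal sketch of the locking scheme; the semaphore field name and the
    writeback helper are illustrative, not the actual ext4 identifiers:

    /* Writeback side: frequent, takes the semaphore for reading. */
    percpu_down_read(&sbi->s_journal_flag_rwsem);
    ret = do_writepages_for_mode(mapping, wbc);    /* illustrative helper */
    percpu_up_read(&sbi->s_journal_flag_rwsem);

    /* ioctl side: switching the inode's journalling mode is the rare writer. */
    percpu_down_write(&sbi->s_journal_flag_rwsem);
    err = ext4_change_inode_journal_flag(inode, 1);
    percpu_up_write(&sbi->s_journal_flag_rwsem);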
     
  • We already allocate delalloc blocks before changing the inode's mode
    into "per-file data journal" mode to prevent delalloc blocks from
    remaining unallocated, but another issue concerning the "BH_Unwritten"
    status still exists. For example, after fallocate(), several buffers'
    status changes to "BH_Unwritten", but these buffers cannot be processed
    by ext4_alloc_da_blocks(). So they remain in unwritten status after
    per-file data journaling is enabled and cannot be changed into written
    status any more; if they are journaled and eventually checkpointed,
    these unwritten buffers will cause a kernel panic via the BUG_ON() below
    in submit_bh_wbc() when they are submitted during checkpointing.

    static int submit_bh_wbc(int rw, struct buffer_head *bh, ...
    {
        ...
        BUG_ON(buffer_unwritten(bh));

    Moreover, when the "dioread_nolock" option is enabled, a buffer's status
    is changed to "BH_Unwritten" after write_begin() completes, and the
    "BH_Unwritten" status is cleared only after its I/O is done. Therefore,
    if a buffer's status is changed to unwritten but the buffer's I/O is
    not submitted and completed, it can cause the same problem after
    enabling per-file data journaling. You can easily generate this bug by
    executing the following command:

    ./kvm-xfstests -C 10000 -m nodelalloc,dioread_nolock generic/269

    To resolve these problems and define a boundary between the previous
    mode and per-file data journaling mode, we need to flush and wait for
    all the I/O of a file's buffers before enabling per-file data journaling
    of the file (a sketch of the added step follows this entry).

    Signed-off-by: Daeho Jeong
    Signed-off-by: Theodore Ts'o
    Reviewed-by: Jan Kara

    Daeho Jeong
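
    A minimal sketch of the added step before switching the inode into data
    journalling mode; the surrounding locking and error handling are elided:

    /* Flush and wait for all outstanding buffer I/O first. */
    err = filemap_write_and_wait(inode->i_mapping);
    if (err)
            return err;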
     
  • The function jbd2_journal_extend() takes as its argument the number of
    new credits to be added to the handle. We weren't taking into account
    the currently unused handle credits; worse, we would try to extend the
    handle by N credits when it had N credits available.

    In the case where jbd2_journal_extend() fails because the transaction
    is too large, when jbd2_journal_restart() gets called, the N credits
    owned by the handle gets returned to the transaction, and the
    transaction commit is asynchronously requested, and then
    start_this_handle() will be able to successfully attach the handle to
    the current transaction since the required credits are now available.

    This is mostly harmless, but since ext4_ext_truncate_extend_restart()
    returns EAGAIN, the truncate machinery will once again try to call
    ext4_ext_truncate_extend_restart(), which will do the above sequence
    over and over again until the transaction has committed.

    This was found while I was debugging a lockup caused by running
    xfstests generic/074 in the data=journal case. I'm still not sure why
    we ended up looping forever, which suggests there may still be another
    bug hiding in the transaction accounting machinery, but this commit
    prevents us from looping in the first place (a sketch of the credit
    accounting follows this entry).

    Signed-off-by: Theodore Ts'o

    Theodore Ts'o
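
    A minimal sketch of the corrected accounting, using the handle's
    h_buffer_credits field as it existed at the time; "needed" stands for
    the total number of credits the caller wants to end up with:

    /* Only ask for the credits the handle does not already own. */
    if (handle->h_buffer_credits >= needed)
            return 0;
    err = jbd2_journal_extend(handle, needed - handle->h_buffer_credits);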
     

24 Apr, 2016

5 commits

  • Currently we ask jbd2 to write all dirty allocated buffers before
    committing a transaction when doing writeback of delay allocated blocks.
    However this is unnecessary since we move all pages to writeback state
    before dropping a transaction handle and then submit all the necessary
    IO. We still need the transaction commit to wait for all the outstanding
    writeback before flushing disk caches during transaction commit to avoid
    data exposure issues though. Use the new jbd2 capability and ask it to
    only wait for outstanding writeback during transaction commit when
    writing back data in ext4_writepages().

    Tested-by: "HUANG Weller (CM/ESW12-CN)"
    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
     
  • Currently when filesystem needs to make sure data is on permanent
    storage before committing a transaction it adds inode to transaction's
    inode list. During transaction commit, jbd2 writes back all dirty
    buffers that have allocated underlying blocks and waits for the IO to
    finish. However when doing writeback for delayed allocated data, we
    allocate blocks and immediately submit the data. Thus asking jbd2 to
    write dirty pages just unnecessarily adds more work to jbd2 possibly
    writing back other redirtied blocks.

    Add support to jbd2 to allow filesystem to ask jbd2 to only wait for
    outstanding data writes before committing a transaction and thus avoid
    unnecessary writes.

    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
     
  • This flag just duplicates what ext4_should_order_data() tells you
    and is used in a single place. Furthermore, it doesn't reflect changes
    to the inode's data journalling flag, so it may be misleading. Just
    remove it.

    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
     
  • Huang has reported that in his powerfail testing he is seeing stale
    block contents in some recently allocated blocks although he mounts
    ext4 in data=ordered mode. After some investigation I have found out
    that indeed when delayed allocation is used, we don't add the inode to
    the transaction's list of inodes needing flushing before commit.
    Originally we were doing that, but commit f3b59291a69d removed the
    logic with the flawed argument that it is not needed.

    The problem is that although for delayed allocated blocks we write their
    contents immediately after allocating them, there is no guarantee that
    the IO scheduler or the device doesn't reorder things and thus the
    transaction allocating the blocks and attaching them to the inode can
    reach stable storage before the actual block contents. Actually,
    whenever we attach freshly allocated blocks to an inode using a written
    extent, we should add the inode to the transaction's ordered inode list
    to make sure we properly wait for the block contents to be written
    before committing the transaction. That is what we do in this patch
    (see the sketch after this entry). This also handles other cases where
    stale data exposure was possible, like filling a hole via mmap in
    data=ordered,nodelalloc mode.

    The only exception to the above rule is extending direct IO writes,
    where blkdev_direct_IO() waits for the IO to complete before increasing
    i_size and thus stale data exposure is not possible. For now we don't
    complicate the code by optimizing this special case since the overhead
    is pretty low. In case this is observed to be a performance problem we
    can always handle it using a special flag to ext4_map_blocks().

    CC: stable@vger.kernel.org
    Fixes: f3b59291a69d0b734be1fc8be489fef2dd846d3d
    Reported-by: "HUANG Weller (CM/ESW12-CN)"
    Tested-by: "HUANG Weller (CM/ESW12-CN)"
    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o

    Jan Kara
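
    A minimal sketch of the rule; ext4_jbd2_file_inode() is shown as the
    ordered-data helper under its name from around that time, and the exact
    set of conditions is illustrative:

    /*
     * Freshly allocated blocks attached as a written extent must be on the
     * transaction's ordered list so the commit waits for their contents to
     * reach the disk.
     */
    if ((map->m_flags & EXT4_MAP_NEW) &&
        !(map->m_flags & EXT4_MAP_UNWRITTEN) &&
        ext4_should_order_data(inode)) {
            ret = ext4_jbd2_file_inode(handle, inode);
            if (ret)
                    return ret;
    }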
     
  • If a directory has a large number of empty blocks, iterating over all
    of them can take a long time, leading to scheduler warnings and users
    getting irritated when they can't kill a process in the middle of one
    of these long-running readdir operations. Fix this by adding checks to
    ext4_readdir() and ext4_htree_fill_tree().

    This was reverted earlier due to a typo in the original commit where I
    experimented with using signal_pending() instead of
    fatal_signal_pending(). The test was in the wrong place if we were
    going to return on signal_pending(), since we would end up returning
    duplicate entries. See 9f2394c9be47 for a more detailed explanation.

    Added a fix, as suggested by Linus, to check for signal_pending() in
    the filldir() functions (a sketch of the readdir-side check follows
    this entry).

    Reported-by: Benjamin LaHaise
    Google-Bug-Id: 27880676
    Signed-off-by: Theodore Ts'o

    Theodore Ts'o
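
    A minimal sketch of the check inside the block-iteration loop of
    ext4_readdir(); the error label is illustrative and the loop body is
    elided:

    while (ctx->pos < inode->i_size) {
            /* Let a fatal signal abort a long scan of empty blocks. */
            if (fatal_signal_pending(current)) {
                    err = -ERESTARTSYS;
                    goto errout;
            }
            cond_resched();
            ...
    }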
     

18 Apr, 2016

2 commits


17 Apr, 2016

8 commits

  • Commit 9567366fefdd ("dm cache metadata: fix READ_LOCK macros and
    cleanup WRITE_LOCK macros") uses down_write() instead of down_read() in
    cmd_read_lock(), yet up_read() is used to release the lock in
    READ_UNLOCK(). Fix it (see the sketch after this entry).

    Fixes: 9567366fefdd ("dm cache metadata: fix READ_LOCK macros and cleanup WRITE_LOCK macros")
    Cc: stable@vger.kernel.org
    Signed-off-by: Ahmed Samy
    Signed-off-by: Mike Snitzer

    Ahmed Samy
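
    A minimal sketch of the intended pairing; the macro bodies are
    illustrative rather than the driver's exact definitions, which also
    check for metadata failure:

    /* The read side must acquire and release the same shared lock. */
    #define READ_LOCK(cmd)     down_read(&(cmd)->root_lock)
    #define READ_UNLOCK(cmd)   up_read(&(cmd)->root_lock)

    #define WRITE_LOCK(cmd)    down_write(&(cmd)->root_lock)
    #define WRITE_UNLOCK(cmd)  up_write(&(cmd)->root_lock)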
     
  • Pull char/misc fixes from Greg KH:
    "Here are some small char/misc driver fixes for 4.6-rc4. Full details
    are in the shortlog, nothing major here.

    These have all been in linux-next for a while with no reported issues"

    * tag 'char-misc-4.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
    lkdtm: do not leak free page on kmalloc failure
    lkdtm: fix memory leak of base
    lkdtm: fix memory leak of val
    extcon: palmas: Drop stray IRQF_EARLY_RESUME flag

    Linus Torvalds
     
  • Pull misc fixes from Greg KH:
    "Here are three small fixes for 4.6-rc4.

    Two fix up some lz4 issues with big endian systems, and the remaining
    one resolves a minor debugfs issue that was reported.

    All have been in linux-next with no reported issues"

    * tag 'driver-core-4.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
    lib: lz4: cleanup unaligned access efficiency detection
    lib: lz4: fixed zram with lz4 on big endian machines
    debugfs: Make automount point inodes permanently empty

    Linus Torvalds
     
  • Pull USB driver fixes from Greg KH:
    "Here are some small USB fixes for 4.6-rc4.

    Mostly xhci fixes for reported issues, a UAS bug that has hit a number
    of people, including stable tree users, and a few other minor things.

    All have been in linux-next for a while with no reported issues"

    * tag 'usb-4.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
    usb: hcd: out of bounds access in for_each_companion
    USB: uas: Add a new NO_REPORT_LUNS quirk
    USB: uas: Limit qdepth at the scsi-host level
    doc: usb: Fix typo in gadget_multi documentation
    usb: host: xhci-plat: Make enum xhci_plat_type start at a non zero value
    xhci: fix 10 second timeout on removal of PCI hotpluggable xhci controllers
    usb: xhci: fix wild pointers in xhci_mem_cleanup
    usb: host: xhci-plat: fix cannot work if R-Car Gen2/3 run on above 4GB phys
    usb: host: xhci: add a new quirk XHCI_NO_64BIT_SUPPORT
    xhci: resume USB 3 roothub first
    usb: xhci: applying XHCI_PME_STUCK_QUIRK to Intel BXT B0 host
    cdc-acm: fix crash if flushed with nothing buffered

    Linus Torvalds
     
  • Pull dmaengine fixes from Vinod Koul:
    "This time we have some odd fixes in hsu, edma, omap and xilinx.

    Usual fixes and nothing special"

    * tag 'dmaengine-fix-4.6-rc4' of git://git.infradead.org/users/vkoul/slave-dma:
    dmaengine: dw: fix master selection
    dmaengine: edma: special case slot limit workaround
    dmaengine: edma: Remove dynamic TPTC power management feature
    dmaengine: vdma: don't crash when bad channel is requested
    dmaengine: omap-dma: Do not suppress interrupts for memcpy
    dmaengine: omap-dma: Fix polled channel completion detection and handling
    dmaengine: hsu: correct use of channel status register
    dmaengine: hsu: correct residue calculation of active descriptor
    dmaengine: hsu: set HSU_CH_MTSR to memory width

    Linus Torvalds
     
  • Pull locking fixlet from Ingo Molnar:
    "Fixes a build warning on certain Kconfig combinations"

    * 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    locking/lockdep: Fix print_collision() unused warning

    Linus Torvalds
     
  • Pull EFI fix from Ingo Molnar:
    "An arm64 boot crash fix"

    * 'efi-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    efi/arm64: Don't apply MEMBLOCK_NOMAP to UEFI memory map mapping

    Linus Torvalds
     
  • Vinod Koul