23 Jul, 2010

4 commits

  • Workqueue can now handle high concurrency. Use system_nrt_wq
    instead of slow-work.

    * Updated is_valid_oplock_break() to not call cifs_oplock_break_put(),
    as advised by Steve French, since doing so might cause a deadlock.
    Instead, the reference is taken after queueing succeeds, and
    cifs_oplock_break() briefly grabs GlobalSMBSeslock before putting the
    cfile, to make sure the put cannot happen before the matching get has
    finished.

    * Anton Blanchard reported that the cifs conversion was using the
    now-gone system_single_wq. Use system_nrt_wq instead; its
    non-reentrance guarantee is sufficient and much better.

    Signed-off-by: Tejun Heo
    Acked-by: Steve French
    Cc: Anton Blanchard

    Tejun Heo
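
    A minimal sketch of the queue-then-get ordering described above. The
    work field and the get helper (the counterpart of the
    cifs_oplock_break_put() mentioned in the entry) are assumptions for
    illustration:

    if (queue_work(system_nrt_wq, &cfile->oplock_break))
            cifs_oplock_break_get(cfile);  /* take the reference only after
                                            * queueing has succeeded */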
     
  • fscache no longer uses slow-work. Drop references to it.

    Signed-off-by: Tejun Heo
    Acked-by: David Howells

    Tejun Heo
     
  • Make fscache operations use workqueues only, instead of a combination
    of workqueues and slow-work. FSCACHE_OP_SLOW is dropped,
    FSCACHE_OP_FAST is renamed to FSCACHE_OP_ASYNC, and the newly added
    fscache_op_wq workqueue is used to execute op->processor().
    fscache_operation_init_slow() is dropped and fscache_operation_init()
    now takes the @processor argument directly.

    * Unbound workqueue is used.

    * fscache_retrieval_work() is no longer necessary as OP_ASYNC now does
    the equivalent thing.

    * sysctl fscache.operation_max_active added to control concurrency.
    The default value is nr_cpus clamped between 2 and
    WQ_UNBOUND_MAX_ACTIVE.

    * debugfs support is dropped for now. A tracing-API-based debug
    facility is planned to be added.

    Signed-off-by: Tejun Heo
    Acked-by: David Howells

    Tejun Heo
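
    A sketch of how the default described above could be set up when the
    workqueue is created; the call site and the use of num_possible_cpus()
    for "nr_cpus" are assumptions here, while alloc_workqueue(),
    clamp_val() and WQ_UNBOUND are standard kernel APIs:

    fscache_op_wq = alloc_workqueue("fscache_operation", WQ_UNBOUND,
                                    clamp_val(num_possible_cpus(), 2,
                                              WQ_UNBOUND_MAX_ACTIVE));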
     
  • Make the fscache object state transition callbacks use a workqueue
    instead of slow-work. A new dedicated unbound workqueue,
    fscache_object_wq, is created. The get/put callbacks are renamed,
    modified to take @object, and called directly from the enqueue wrapper
    and the work function. While at it, make all open-coded instances of
    get/put use fscache_get/put_object().

    * Unbound workqueue is used.

    * work_busy() output is printed instead of the slow-work flags in the
    object debugging output; the two convey basically the same
    information, bit for bit.

    * sysctl fscache.object_max_active added to control concurrency. The
    default value is nr_cpus clamped between 4 and
    WQ_UNBOUND_MAX_ACTIVE.

    * slow_work_sleep_till_thread_needed() is replaced with fscache
    private implementation fscache_object_sleep_till_congested() which
    waits on fscache_object_wq congestion.

    * debugfs support is dropped for now. A tracing-API-based debug
    facility is planned to be added.

    Signed-off-by: Tejun Heo
    Acked-by: David Howells

    Tejun Heo
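
    A rough illustration of waiting on fscache_object_wq congestion using
    the standard workqueue_congested() test; the waitqueue name, the
    caller-supplied timeout and the exact wakeup handling of the real
    fscache_object_sleep_till_congested() are assumptions:

    /* sleep until the object workqueue reports congestion (or timeout) */
    wait_event_timeout(fscache_object_cong_wait,
                       workqueue_congested(WORK_CPU_UNBOUND, fscache_object_wq),
                       timeout);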
     

12 Jun, 2010

15 commits


11 Jun, 2010

13 commits

  • When we use remap_file_pages() to remap a file on btrfs, the call
    always returns an error, because btrfs didn't set VM_CAN_NONLINEAR on
    the vma.

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
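
    The fix boils down to setting the flag when the file is mmapped; a
    sketch of the pattern, with the surrounding btrfs_file_mmap() details
    simplified:

    static int btrfs_file_mmap(struct file *filp, struct vm_area_struct *vma)
    {
            vma->vm_ops = &btrfs_file_vm_ops;
            vma->vm_flags |= VM_CAN_NONLINEAR;  /* allow remap_file_pages() */
            return 0;
    }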
     
  • refs can be used while it still contains uninitialized data if
    btrfs_lookup_extent_info() fails on the first pass through the loop.
    In the original code, if that happens, check_path_shared() probably
    returns 1; this patch changes it to return 1 explicitly, for safety.

    Signed-off-by: Dan Carpenter
    Signed-off-by: Chris Mason

    Dan Carpenter
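
    A sketch of the defensive pattern being described; the lookup call and
    variable names here are illustrative stand-ins, not the actual
    check_path_shared() code:

    u64 refs = 0;

    if (lookup_extent_refs(root, bytenr, &refs) < 0)
            return 1;       /* lookup failed: claim "shared", the safe answer,
                             * instead of reading the uninitialized count */
    if (refs > 1)
            return 1;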
     
  • It seems that when btrfs_fallocate was converted to use the new ENOSPC
    code, we dropped passing the mode to the function that actually does
    the preallocation. This breaks anybody who wants to use
    FALLOC_FL_KEEP_SIZE.

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
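
    For reference, the userspace call that regressed; with
    FALLOC_FL_KEEP_SIZE the blocks should be preallocated without growing
    the reported file size (illustrative use of fallocate(2); the file
    name and length are made up):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
            int fd = open("testfile", O_RDWR | O_CREAT, 0644);

            if (fd < 0)
                    return 1;

            /* preallocate 16 MiB; KEEP_SIZE means the file size must not grow */
            if (fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 16 * 1024 * 1024) != 0)
                    perror("fallocate");
            return 0;
    }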
     
  • We cannot use a loop device that has been connected to a file on
    btrfs.

    The steps to reproduce are as follows:
    # dd if=/dev/zero of=vdev0 bs=1M count=1024
    # losetup /dev/loop0 vdev0
    # mkfs.btrfs /dev/loop0
    ...
    failed to zero device start -5

    The reason is that btrfs doesn't implement either ->write_begin or
    ->write of the VFS API, so we fix it by setting ->write to
    do_sync_write().

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
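
    The fix described above amounts to wiring the generic helper into the
    btrfs file_operations table; a sketch with only the relevant entries
    shown (the rest of the real table is omitted):

    const struct file_operations btrfs_file_operations = {
            /* ... other methods unchanged ... */
            .write          = do_sync_write,   /* generic ->write built on ->aio_write */
            .aio_write      = btrfs_file_aio_write,
    };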
     
  • If we have enough memory to allocate a new cap release message, do so, so
    that we can send a partial release message immediately. This keeps us from
    making the MDS wait when the cap release it needs is in a partially full
    release message.

    If we fail because of ENOMEM, oh well, they'll just have to wait a bit
    longer.

    Signed-off-by: Sage Weil

    Sage Weil
     
  • If we get an IMPORT that gives us a cap, but we don't have the inode,
    queue a release (and try to send it immediately) so that the MDS
    doesn't get stuck waiting for us.

    Signed-off-by: Sage Weil

    Sage Weil
     
  • Nothing is released here: the caps message is simply ignored in this case.

    Signed-off-by: Sage Weil

    Sage Weil
     
  • bdi_seq is an atomic_long_t but we're using ATOMIC_INIT, which causes
    build failures on ia64. This patch fixes it to use ATOMIC_LONG_INIT.

    Signed-off-by: Jeff Mahoney
    Signed-off-by: Sage Weil

    Jeff Mahoney
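
    The one-line pattern in question, sketched (the variable's placement
    is assumed):

    /* atomic_long_t needs the matching initializer; plain ATOMIC_INIT()
     * only fits atomic_t and broke the ia64 build */
    static atomic_long_t bdi_seq = ATOMIC_LONG_INIT(0);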
     
  • As it stands this check compares the number of pages to the page size.
    This makes no sense and makes the fcntl fail in almost any sane case.

    Fix it by checking if nr_pages is not zero (it can become zero only if
    arg is too big and round_pipe_size() overflows).

    Signed-off-by: Miklos Szeredi
    Signed-off-by: Jens Axboe

    Miklos Szeredi
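
    A sketch of the corrected check, assuming round_pipe_size() returns a
    byte count as described; the surrounding fcntl plumbing is omitted:

    nr_pages = round_pipe_size(arg) >> PAGE_SHIFT;
    if (!nr_pages)          /* arg was so large that round_pipe_size() overflowed */
            return -EINVAL;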
     
  • pipe_set_size() needs to copy pipe bufs from the old circular buffer
    to the new.

    The current code gets this wrong in multiple ways, resulting in an
    oops.

    Test program is available here:
    http://www.kernel.org/pub/linux/kernel/people/mszeredi/piperesize/

    Signed-off-by: Miklos Szeredi
    Signed-off-by: Jens Axboe

    Miklos Szeredi
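
    What "copy the bufs from the old circular buffer to the new" has to
    respect is the wrap-around of the old ring; an illustrative (not
    verbatim) version of the copy:

    /* unwrap the nrbufs occupied slots, starting at curbuf, into the
     * front of the new array (pipe->buffers is a power of two) */
    for (i = 0; i < pipe->nrbufs; i++)
            bufs[i] = pipe->bufs[(pipe->curbuf + i) & (pipe->buffers - 1)];
    pipe->curbuf = 0;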
     
  • We do the same BUG_ON() just a line later when calling into
    __bd_abort_claiming().

    Reported-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • I don't like the subtle multi-context code in bd_claim (i.e. it
    detects where it has been called from based on bd_claiming). It seems
    clearer to just require a new function to finish a two-part claim.

    Also improve commentary in bd_start_claiming as to how it should
    be used.

    Signed-off-by: Nick Piggin
    Acked-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Nick Piggin
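
    Roughly, the two-part claim reads like this to a caller; the function
    signatures here are assumptions for illustration, not copied from
    fs/block_dev.c:

    whole = bd_start_claiming(bdev, holder);
    if (IS_ERR(whole))
            return PTR_ERR(whole);

    /* ... open / set up the device while the claim is pending ... */

    if (err)
            bd_abort_claiming(whole, holder);        /* drop the pending claim */
    else
            bd_finish_claiming(bdev, whole, holder); /* the new finishing step */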
     
  • bd_start_claiming has an unbalanced module_put introduced in 6b4517a79.

    Signed-off-by: Nick Piggin
    Acked-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Nick Piggin
     

10 Jun, 2010

1 commit


09 Jun, 2010

3 commits


08 Jun, 2010

1 commit

  • * git://git.infradead.org/~dwmw2/mtd-2.6.35:
    jffs2: update ctime when changing the file's permission by setfacl
    jffs2: Fix NFS race by using insert_inode_locked()
    jffs2: Fix in-core inode leaks on error paths
    mtd: Fix NAND submenu
    mtd/r852: update card detect early.
    mtd/r852: Fixes in case of DMA timeout
    mtd/r852: register IRQ as last step
    drivers/mtd: Use memdup_user
    docbook: make mtd nand module init static

    Linus Torvalds
     

06 Jun, 2010

2 commits


05 Jun, 2010

1 commit