13 Sep, 2014

1 commit

  • Callers of d_splice_alias(dentry, inode) don't need iput(), whether
    it succeeds or fails. Either the reference to the inode is stored in
    a previously negative dentry, or it is dropped; in both cases the
    inode reference the caller used to hold is consumed.

    __gfs2_lookup() does iput() in the case where d_splice_alias() has
    failed; that is a double iput() if we ever hit it. And
    gfs2_create_inode() ends up not only with a double iput(), but with
    the link count dropped to zero - on an inode it has just found in
    the directory.

    Cc: stable@vger.kernel.org # v3.14+
    Signed-off-by: Al Viro
    Signed-off-by: Steven Whitehouse

    Al Viro
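
    A minimal sketch of the calling convention described above; the lookup
    and iget helpers are hypothetical, only d_splice_alias() is the real API:

        #include <linux/fs.h>

        /* Hypothetical ->lookup(): the inode reference obtained from the
         * hypothetical my_iget() is consumed by d_splice_alias() on success
         * and on failure alike, so no iput() appears on any path after it.
         */
        static struct dentry *my_lookup(struct inode *dir, struct dentry *dentry,
                                        unsigned int flags)
        {
                struct inode *inode = my_iget(dir, &dentry->d_name); /* hypothetical */

                if (IS_ERR(inode))
                        return ERR_CAST(inode);

                return d_splice_alias(inode, dentry);
        }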
     

11 Sep, 2014

2 commits

  • The MAXQUOTAS value defines the maximum number of quota types the VFS
    supports. This isn't necessarily the number of types gfs2 supports,
    and with the addition of project quotas the two numbers stop matching.
    So make gfs2 use its own private definition.

    CC: cluster-devel@redhat.com
    Signed-off-by: Jan Kara
    Signed-off-by: Steven Whitehouse

    Jan Kara
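
    A sketch of the idea; the constant name follows my reading of the change
    and the structure is purely illustrative:

        #define GFS2_MAXQUOTAS  2       /* user and group quotas only */

        /* illustrative: size per-inode quota arrays by the fs-private
         * constant, so they no longer grow when the VFS adds quota types
         */
        struct my_qadata {
                struct gfs2_quota_data *qa_qd[GFS2_MAXQUOTAS];
                unsigned int qa_qd_num;
        };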
     
  • Fix a regression introduced by:
    6d4ade986f9c8df31e68 GFS2: Add atomic_open support
    where an early return skips the d_splice_alias() call which had been
    adding the negative dentry.

    Signed-off-by: Benjamin Coddington
    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Benjamin Coddington
     

21 Aug, 2014

3 commits

  • This patch changes the flock code so that it uses the TRY_1CB flag
    instead of the TRY flag on the first attempt. That forces any holding
    nodes to issue a dlm callback, which requests a demote of the glock.
    Then, if the "try" failed, it sleeps for a short time to let the
    demote occur, and tries again, sleeping for progressively longer
    intervals between attempts. Subsequent attempts to take the "try"
    lock don't use "_1CB", so only one callback is issued.

    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Bob Peterson
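
    A minimal sketch of the retry strategy described in this entry, not the
    actual do_flock(): the LM_FLAG_TRY_1CB / LM_FLAG_TRY names follow gfs2's
    locking interface, while the try helper and the backoff limit are
    hypothetical:

        #include <linux/delay.h>

        struct my_glock;

        static int flock_try_with_backoff(struct my_glock *gl, unsigned int state)
        {
                unsigned int flags = LM_FLAG_TRY_1CB;   /* first try: ask holders to demote */
                unsigned long sleep_ms = 1;

                for (;;) {
                        if (my_try_lock(gl, state, flags))      /* hypothetical helper */
                                return 0;
                        if (sleep_ms > MY_MAX_BACKOFF_MS)       /* hypothetical limit */
                                return -EAGAIN;
                        msleep(sleep_ms);       /* give the demote time to happen */
                        sleep_ms <<= 1;         /* sleep longer on each retry */
                        flags = LM_FLAG_TRY;    /* later tries: no more callbacks */
                }
        }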
     
  • This patch changes some variables (especially maxlen in function
    gfs2_block_map) from unsigned int to size_t. We need 64-bit arithmetic
    for very large files (e.g. 1PB), where the values would otherwise be
    shifted to all 0's.

    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Bob Peterson
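
    A small standalone illustration (ordinary userspace C, not gfs2 code) of
    why the wider type matters: shifting a 32-bit block count by the
    block-size shift silently wraps to zero, while the same arithmetic in
    size_t on a 64-bit machine keeps the full value. The commit's 1PB case
    behaves the same way, just with larger numbers:

        #include <stdio.h>
        #include <stddef.h>

        int main(void)
        {
                unsigned int blocks32 = 1U << 28;       /* 2^28 blocks of 4KiB = 1 TiB */
                size_t blocks64 = (size_t)1 << 28;
                unsigned int shift = 12;                /* 4KiB block size */

                /* the 32-bit product wraps to 0; size_t keeps the value on 64-bit */
                printf("unsigned int: %u bytes\n", blocks32 << shift);
                printf("size_t      : %zu bytes\n", blocks64 << shift);
                return 0;
        }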
     
  • Fix checkpatch warnings:
    "WARNING: Prefer seq_puts to seq_printf"

    Cc: cluster-devel@redhat.com
    Signed-off-by: Fabian Frederick
    Signed-off-by: Steven Whitehouse

    Fabian Frederick
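
    A hypothetical before/after of the conversion the warning asks for: when
    the format string is constant and takes no arguments, seq_puts() avoids
    the format-parsing overhead of seq_printf():

        #include <linux/seq_file.h>

        static int example_show(struct seq_file *m, void *v)
        {
                /* was: seq_printf(m, "Example header\n"); */
                seq_puts(m, "Example header\n");
                return 0;
        }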
     

28 Jul, 2014

1 commit


18 Jul, 2014

8 commits

  • Cc: cluster-devel@redhat.com
    Signed-off-by: Fabian Frederick
    Signed-off-by: Steven Whitehouse

    Fabian Frederick
     
  • Signed-off-by: Geert Uytterhoeven
    Cc: cluster-devel@redhat.com
    Signed-off-by: Steven Whitehouse

    Geert Uytterhoeven
     
  • This patch removes the GLF_NOCACHE flag from the glocks associated with
    flocks. There is no good reason not to cache glocks for flocks: not
    caching only forces the glock to be demoted before it can be reacquired,
    which can slow down performance and even cause glock hangs, especially
    in cases where the flocks are held in Shared (SH) mode.

    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Bob Peterson
     
  • This patch allows flock glocks to use a non-blocking dequeue rather
    than dq_wait. It also reverts the previous patch I had posted regarding
    dq_wait. The reverted patch isn't necessarily a bad idea, but I decided
    this might avoid unforeseen side effects, and was therefore safer.

    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Bob Peterson
     
  • kcalloc checks the count * size multiplication for overflow.

    Cc: cluster-devel@redhat.com
    Signed-off-by: Fabian Frederick
    Signed-off-by: Steven Whitehouse

    Fabian Frederick
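
    A hypothetical before/after of the pattern such cleanups target:
    kcalloc() fails the allocation if the multiplication overflows, instead
    of silently allocating a short, zeroed buffer:

        #include <linux/slab.h>

        static u32 *alloc_slots(size_t count)
        {
                /* was: kzalloc(count * sizeof(u32), GFP_KERNEL); */
                return kcalloc(count, sizeof(u32), GFP_KERNEL);
        }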
     
  • Normally GFP_KERNEL is ok here, but there is now a rarely used code path
    relating to deallocation of unlinked inodes (in certain corner cases)
    which, if hit at times of memory shortage, can cause recursion while
    trying to free memory.

    One solution would be to try and move the gfs2_glock_get() call so
    that it is no longer called while another glock is held, but that
    doesn't look at all easy, so GFP_NOFS is the best solution for the
    time being.

    Signed-off-by: Steven Whitehouse

    Steven Whitehouse
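
    A sketch of the kind of change described; the cache and function names
    are hypothetical. GFP_NOFS tells the allocator not to recurse into
    filesystem reclaim, which matters when the allocation can happen while
    another glock is already held:

        #include <linux/slab.h>

        struct my_glock;
        extern struct kmem_cache *my_glock_cachep;      /* hypothetical cache */

        static struct my_glock *my_glock_alloc(void)
        {
                /* was: kmem_cache_zalloc(my_glock_cachep, GFP_KERNEL); */
                return kmem_cache_zalloc(my_glock_cachep, GFP_NOFS);
        }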
     
  • We must not leave items on the LRU list with GLF_LOCK set, since
    they can be removed if the glock is brought back into use, which
    may then potentially result in a hang, waiting for GLF_LOCK to
    clear.

    It doesn't happen very often, since it requires a glock that has
    not been used for a long time to be brought back into use at the
    same moment that the shrinker is part way through disposing of
    glocks.

    The fix is to set GLF_LOCK at a later time, when we already know
    that the other locks can be obtained. Also, we now only release
    the lru_lock in case a resched is needed, rather than on every
    iteration.

    Signed-off-by: Steven Whitehouse

    Steven Whitehouse
     
  • Function gfs2_glock_dq_wait is supposed to dequeue a glock and then
    wait for the lock to be demoted. The problem is, if this is a shared
    lock, its demote will depend on the other holders, which means you
    might end up waiting forever because the other process is blocked.
    This problem is especially apparent when dealing with nested flocks.

    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Bob Peterson
     

16 Jul, 2014

1 commit

  • The current "wait_on_bit" interface requires an 'action'
    function to be provided which does the actual waiting.
    There are over 20 such functions, many of them identical.
    Most cases can be satisfied by one of just two functions, one
    which uses io_schedule() and one which just uses schedule().

    So:
    Rename wait_on_bit and wait_on_bit_lock to
    wait_on_bit_action and wait_on_bit_lock_action
    to make it explicit that they need an action function.

    Introduce new wait_on_bit{,_lock} and wait_on_bit{,_lock}_io
    which are *not* given an action function but implicitly use
    a standard one.
    The decision to error-out if a signal is pending is now made
    based on the 'mode' argument rather than being encoded in the action
    function.

    All instances of the old wait_on_bit and wait_on_bit_lock which
    can use the new version have been changed accordingly and their
    action functions have been discarded.
    wait_on_bit{_lock} does not return any specific error code in the
    event of a signal so the caller must check for non-zero and
    interpolate their own error code as appropriate.

    The wait_on_bit() call in __fscache_wait_on_invalidate() was
    ambiguous as it specified TASK_UNINTERRUPTIBLE but used
    fscache_wait_bit_interruptible as an action function. David Howells
    confirms this should be uniformly "uninterruptible".

    The main remaining user of wait_on_bit{,_lock}_action is NFS
    which needs to use a freezer-aware schedule() call.

    A comment in fs/gfs2/glock.c notes that having multiple 'action'
    functions is useful as they display differently in the 'wchan'
    field of 'ps' (and /proc/$PID/wchan).
    As the new bit_wait{,_io} functions are tagged "__sched", they
    will not show up at all; something higher in the stack will appear
    instead. So the distinction will still be visible, only with
    different function names (gfs2_glock_wait versus gfs2_glock_dq_wait
    in the gfs2/glock.c case).

    Since the first version of this patch (against 3.15) two new action
    functions have appeared, one in NFS and one in CIFS. CIFS also now
    uses an action function that makes the same freezer-aware
    schedule() call as NFS.

    Signed-off-by: NeilBrown
    Acked-by: David Howells (fscache, keys)
    Acked-by: Steven Whitehouse (gfs2)
    Acked-by: Peter Zijlstra
    Cc: Oleg Nesterov
    Cc: Steve French
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/20140707051603.28027.72349.stgit@notabene.brown
    Signed-off-by: Ingo Molnar

    NeilBrown
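
    A hedged illustration of the new interface for a hypothetical flag word.
    Under the old interface the same wait would have passed an action
    callback that did the schedule(); under the new one no callback is
    needed, and whether a pending signal aborts the wait follows from the
    mode argument:

        #include <linux/wait.h>
        #include <linux/sched.h>

        static unsigned long my_flags;          /* hypothetical flag word */
        #define MY_BIT_BUSY     0

        static int wait_until_not_busy(void)
        {
                /* old style, for comparison:
                 *   wait_on_bit(&my_flags, MY_BIT_BUSY, my_action_fn,
                 *               TASK_UNINTERRUPTIBLE);
                 * where my_action_fn() just called schedule() and returned 0.
                 *
                 * new style: returns non-zero if a pending signal ended the
                 * wait early, and the caller picks its own error code.
                 */
                return wait_on_bit(&my_flags, MY_BIT_BUSY, TASK_INTERRUPTIBLE);
        }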
     

13 Jun, 2014

1 commit

  • Pull vfs updates from Al Viro:
    "This the bunch that sat in -next + lock_parent() fix. This is the
    minimal set; there's more pending stuff.

    In particular, I really hope to get acct.c fixes merged this cycle -
    we need that to deal sanely with delayed-mntput stuff. In the next
    pile, hopefully - that series is fairly short and localized
    (kernel/acct.c, fs/super.c and fs/namespace.c). In this pile: more
    iov_iter work. Most of prereqs for ->splice_write with sane locking
    order are there and Kent's dio rewrite would also fit nicely on top of
    this pile"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (70 commits)
    lock_parent: don't step on stale ->d_parent of all-but-freed one
    kill generic_file_splice_write()
    ceph: switch to iter_file_splice_write()
    shmem: switch to iter_file_splice_write()
    nfs: switch to iter_splice_write_file()
    fs/splice.c: remove unneeded exports
    ocfs2: switch to iter_file_splice_write()
    ->splice_write() via ->write_iter()
    bio_vec-backed iov_iter
    optimize copy_page_{to,from}_iter()
    bury generic_file_aio_{read,write}
    lustre: get rid of messing with iovecs
    ceph: switch to ->write_iter()
    ceph_sync_direct_write: stop poking into iov_iter guts
    ceph_sync_read: stop poking into iov_iter guts
    new helper: copy_page_from_iter()
    fuse: switch to ->write_iter()
    btrfs: switch to ->write_iter()
    ocfs2: switch to ->write_iter()
    xfs: switch to ->write_iter()
    ...

    Linus Torvalds
     

12 Jun, 2014

1 commit

  • iter_file_splice_write() - a ->splice_write() instance that gathers the
    pipe buffers, builds a bio_vec-based iov_iter covering those and feeds
    it to ->write_iter(). A bunch of simple cases converted to that...

    [AV: fixed the braino spotted by Cyrill]

    Signed-off-by: Al Viro

    Al Viro
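
    A hedged sketch of how a filesystem opts in; the 'myfs' names are
    hypothetical, the helpers named in the initializer are the generic VFS
    ones. ->splice_write points at the new helper, which builds the
    bio_vec-backed iov_iter from the pipe buffers and feeds it to the
    filesystem's own ->write_iter():

        #include <linux/fs.h>

        extern ssize_t myfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from);

        static const struct file_operations myfs_file_fops = {
                .read_iter      = generic_file_read_iter,
                .write_iter     = myfs_file_write_iter,         /* hypothetical */
                .splice_read    = generic_file_splice_read,
                /* was: .splice_write = generic_file_splice_write, */
                .splice_write   = iter_file_splice_write,
        };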
     

05 Jun, 2014

1 commit

  • aops->write_begin may allocate a new page and make it visible only to have
    mark_page_accessed called almost immediately after. Once the page is
    visible the atomic operations are necessary, which is noticeable overhead
    when writing to an in-memory filesystem like tmpfs, but should also be
    noticeable with fast storage. The objective of the patch is to initialise
    the accessed information with non-atomic operations before the page is
    visible.

    The bulk of filesystems directly or indirectly use
    grab_cache_page_write_begin or find_or_create_page for the initial
    allocation of a page cache page. This patch adds an init_page_accessed()
    helper which behaves like the first call to mark_page_accessed() but may
    be called before the page is visible and can be done non-atomically.

    The primary APIs of concern in this case are the following and are used
    by most filesystems.

    find_get_page
    find_lock_page
    find_or_create_page
    grab_cache_page_nowait
    grab_cache_page_write_begin

    All of them are very similar in detail, so the patch creates a core
    helper, pagecache_get_page(), which takes a flags parameter that affects
    its behaviour, such as whether the page should be marked accessed or not.
    The old API is preserved but is basically a thin wrapper around this core
    function.

    Each of the filesystems is then updated to avoid calling
    mark_page_accessed when it is known that the VM interfaces have already
    done the job. There is a slight snag in that the timing of the
    mark_page_accessed() has now changed, so in rare cases it's possible a
    page gets to the end of the LRU as PageReferenced whereas previously it
    might have been repromoted. This is expected to be rare but it's worth
    the filesystem people thinking about it in case they see a problem with
    the timing change. It is also the case that some filesystems may now be
    marking pages accessed that they previously did not, but it makes sense
    that filesystems have consistent behaviour in this regard.

    The test case used to evaluate this is a simple dd of a large file done
    multiple times with the file deleted on each iteration. The size of the
    file is 1/10th of physical memory to avoid dirty page balancing. In the
    async case it is possible that the workload completes without even
    hitting the disk and will have variable results, but it highlights the
    impact of mark_page_accessed for async IO. The sync results are expected
    to be more stable. The exception is tmpfs, where the normal case is for
    the "IO" to not hit the disk.

    The test machine was single socket and UMA to avoid any scheduling or
    NUMA artifacts. Throughput and wall times are presented for sync IO;
    only wall times are shown for async, as the granularity reported by dd
    and the variability make it unsuitable for comparison. As the async
    results were variable due to writeback timings, I'm only reporting the
    maximum figures. The sync results were stable enough to make the mean
    and stddev uninteresting.

    The performance results are reported based on a run with no profiling.
    Profile data is based on a separate run with oprofile running.

    async dd                    3.15.0-rc3           3.15.0-rc3
                                vanilla              accessed-v2
    ext3   Max elapsed     13.9900 ( 0.00%)     11.5900 ( 17.16%)
    tmpfs  Max elapsed      0.5100 ( 0.00%)      0.4900 (  3.92%)
    btrfs  Max elapsed     12.8100 ( 0.00%)     12.7800 (  0.23%)
    ext4   Max elapsed     18.6000 ( 0.00%)     13.3400 ( 28.28%)
    xfs    Max elapsed     12.5600 ( 0.00%)      2.0900 ( 83.36%)

    The XFS figure is a bit strange as it managed to avoid a worst case by
    sheer luck but the average figures looked reasonable.

    fs      samples  percentage  kernel                             function
    ext3      86107      0.9783  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
    ext3      23833      0.2710  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed
    ext3       5036      0.0573  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
    ext4      64566      0.8961  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
    ext4       5322      0.0713  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed
    ext4       2869      0.0384  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
    xfs       62126      1.7675  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
    xfs        1904      0.0554  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
    xfs         103      0.0030  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed
    btrfs     10655      0.1338  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
    btrfs      2020      0.0273  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
    btrfs       587      0.0079  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed
    tmpfs     59562      3.2628  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
    tmpfs      1210      0.0696  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
    tmpfs        94      0.0054  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed

    [akpm@linux-foundation.org: don't run init_page_accessed() against an uninitialised pointer]
    Signed-off-by: Mel Gorman
    Cc: Johannes Weiner
    Cc: Vlastimil Babka
    Cc: Jan Kara
    Cc: Michal Hocko
    Cc: Hugh Dickins
    Cc: Dave Hansen
    Cc: Theodore Ts'o
    Cc: "Paul E. McKenney"
    Cc: Oleg Nesterov
    Cc: Rik van Riel
    Cc: Peter Zijlstra
    Tested-by: Prabhakar Lad
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
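
    A hedged sketch of the consolidation described above, written against
    the modern four-argument pagecache_get_page(); the signature in the
    original patch may have differed, so treat the call as illustrative. The
    point is that one core helper decides, per FGP_* flags, whether to lock
    the page, create it, and mark it accessed, and that a newly created page
    can have its accessed state initialised before it becomes visible,
    without atomic flag operations:

        #include <linux/pagemap.h>

        /* find_or_create_page() as a thin wrapper: the differences between
         * the old helpers reduce to which FGP_* flags are passed.
         */
        static inline struct page *my_find_or_create_page(struct address_space *mapping,
                                                          pgoff_t index, gfp_t gfp)
        {
                return pagecache_get_page(mapping, index,
                                          FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
        }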
     

04 Jun, 2014

1 commit

  • …teve/gfs2-3.0-nmw into next

    Pull gfs2 updates from Steven Whitehouse:
    "This must be about the smallest merge window patch set ever for GFS2.
    It is probably also the first one without a single patch from me.
    That is down to a combination of factors, and I have some things in
    the works that are not quite ready yet, that I hope to put in next
    time around.

    Returning to what is here this time... we have 3 patches which fix
    various warnings. Two are bug fixes (for quotas and also a rare
    recovery race condition). The final patch, from Ben Marzinski, is an
    important change in the freeze code which has been in progress for
    some time. This removes the need to take and drop the transaction
    lock for every single transaction, when the only time it was used, was
    at file system freeze time. Ben's patch integrates the freeze
    operation into the journal flush code as an alternative with lower
    overheads and also lands up resolving some difficult to fix races at
    the same time"

    * tag 'gfs2-merge-window' of git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-nmw:
    GFS2: Prevent recovery before the local journal is set
    GFS2: fs/gfs2/file.c: kernel-doc warning fixes
    GFS2: fs/gfs2/bmap.c: kernel-doc warning fixes
    GFS2: remove transaction glock
    GFS2: lops.c: replace 0 by NULL for pointers
    GFS2: quotas not being refreshed in gfs2_adjust_quota

    Linus Torvalds
     

03 Jun, 2014

1 commit


16 May, 2014

2 commits


14 May, 2014

1 commit

  • GFS2 has a transaction glock, which must be grabbed for every
    transaction, whose purpose is to deal with freezing the filesystem.
    Aside from this involving a large amount of locking, it is very easy to
    make the current fsfreeze code hang on unfreezing.

    This patch rewrites how gfs2 handles freezing the filesystem. The
    transaction glock is removed. In its place is a freeze glock, which is
    cached (but not held) in a shared state by every node in the cluster
    when the filesystem is mounted. This lock only needs to be grabbed on
    freezing, and for actions which need to be safe from freezing, like
    recovery.

    When a node wants to freeze the filesystem, it grabs this glock
    exclusively. When the freeze glock state changes on the nodes (either
    from shared to unlocked, or shared to exclusive), the filesystem does a
    special log flush. gfs2_log_flush() does all the work of flushing out
    and shutting down the incore log, and then it tries to grab the
    freeze glock in a shared state again. Since the filesystem is stuck in
    gfs2_log_flush, no new transaction can start, and nothing can be written
    to disk. Unfreezing the filesystem simply involves dropping the freeze
    glock, allowing gfs2_log_flush() to grab and then release the shared
    lock, so it is cached for next time.

    However, in order for the unfreezing ioctl to occur, gfs2 needs to get a
    shared lock on the filesystem root directory inode to check permissions.
    If that glock has already been grabbed exclusively, fsfreeze will be
    unable to get the shared lock and unfreeze the filesystem.

    In order to allow the unfreeze, this patch makes gfs2 grab a shared lock
    on the filesystem root directory during the freeze, and hold it until it
    unfreezes the filesystem. The functions which need to grab a shared
    lock in order to allow the unfreeze ioctl to be issued now use the lock
    grabbed by the freeze code instead.

    The freeze and unfreeze code take care to make sure that this shared
    lock will not be dropped while another process is using it.

    Signed-off-by: Benjamin Marzinski
    Signed-off-by: Steven Whitehouse

    Benjamin Marzinski
     

07 May, 2014

5 commits


28 Apr, 2014

1 commit


18 Apr, 2014

1 commit

  • Mostly scripted conversion of the smp_mb__* barriers.

    Signed-off-by: Peter Zijlstra
    Acked-by: Paul E. McKenney
    Link: http://lkml.kernel.org/n/tip-55dhyhocezdw1dg7u19hmh1u@git.kernel.org
    Cc: Linus Torvalds
    Cc: linux-arch@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
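
    A hypothetical before/after of the scripted conversion: the
    per-operation barrier variants (smp_mb__before_clear_bit(),
    smp_mb__after_atomic_dec(), and friends) collapse into the two generic
    forms:

        #include <linux/atomic.h>
        #include <linux/bitops.h>

        static void clear_busy_bit(unsigned long *flags)
        {
                /* was: smp_mb__before_clear_bit(); */
                smp_mb__before_atomic();
                clear_bit(0, flags);
                /* was: smp_mb__after_clear_bit(); */
                smp_mb__after_atomic();
        }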
     

17 Apr, 2014

1 commit


08 Apr, 2014

1 commit

  • filemap_map_pages() is a generic implementation of ->map_pages() for
    filesystems that use the page cache.

    It should be safe to use filemap_map_pages() for ->map_pages() if the
    filesystem uses filemap_fault() for ->fault().

    Signed-off-by: Kirill A. Shutemov
    Acked-by: Linus Torvalds
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Andi Kleen
    Cc: Matthew Wilcox
    Cc: Dave Hansen
    Cc: Alexander Viro
    Cc: Dave Chinner
    Cc: Ning Qu
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
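
    A hedged sketch of the per-filesystem change this enables; the 'myfs'
    names are hypothetical. A filesystem that already relies on the generic
    filemap_fault() for ->fault() can wire up the generic ->map_pages() too:

        #include <linux/fs.h>
        #include <linux/mm.h>

        static const struct vm_operations_struct myfs_vm_ops = {
                .fault          = filemap_fault,
                .map_pages      = filemap_map_pages,
        };

        static int myfs_mmap(struct file *file, struct vm_area_struct *vma)
        {
                file_accessed(file);
                vma->vm_ops = &myfs_vm_ops;
                return 0;
        }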
     

05 Apr, 2014

2 commits

  • Pull ext4 updates from Ted Ts'o:
    "Major changes for 3.14 include support for the newly added ZERO_RANGE
    and COLLAPSE_RANGE fallocate operations, and scalability improvements
    in the jbd2 layer and in xattr handling when the extended attributes
    spill over into an external block.

    Other than that, the usual clean ups and minor bug fixes"

    * tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (42 commits)
    ext4: fix premature freeing of partial clusters split across leaf blocks
    ext4: remove unneeded test of ret variable
    ext4: fix comment typo
    ext4: make ext4_block_zero_page_range static
    ext4: atomically set inode->i_flags in ext4_set_inode_flags()
    ext4: optimize Hurd tests when reading/writing inodes
    ext4: kill i_version support for Hurd-castrated file systems
    ext4: each filesystem creates and uses its own mb_cache
    fs/mbcache.c: doucple the locking of local from global data
    fs/mbcache.c: change block and index hash chain to hlist_bl_node
    ext4: Introduce FALLOC_FL_ZERO_RANGE flag for fallocate
    ext4: refactor ext4_fallocate code
    ext4: Update inode i_size after the preallocation
    ext4: fix partial cluster handling for bigalloc file systems
    ext4: delete path dealloc code in ext4_ext_handle_uninitialized_extents
    ext4: only call sync_filesystm() when remounting read-only
    fs: push sync_filesystem() down to the file system's remount_fs()
    jbd2: improve error messages for inconsistent journal heads
    jbd2: minimize region locked by j_list_lock in jbd2_journal_forget()
    jbd2: minimize region locked by j_list_lock in journal_get_create_access()
    ...

    Linus Torvalds
     
  • Pull GFS2 updates from Steven Whitehouse:
    "One of the main highlights this time, is not the patches themselves
    but instead the widening contributor base. It is good to see that
    interest is increasing in GFS2, and I'd like to thank all the
    contributors to this patch set.

    In addition to the usual set of bug fixes and clean ups, there are
    patches to improve inode creation performance when xattrs are required
    and some improvements to the transaction code which is intended to
    help improve scalability after further changes in due course.

    Journal extent mapping is also updated to make it more efficient and
    again, this is a foundation for future work in this area.

    The maximum number of ACLs has been increased to 300 (for a 4k block
    size) which means that even with a few additional xattrs from selinux,
    everything should fit within a single fs block.

    There is also a patch to bring GFS2's own copy of the writepages code
    up to the same level as the core VFS. Eventually we may be able to
    merge some of this code, since it is fairly similar.

    The other major change this time, is bringing consistency to the
    printing of messages via fs_, pr_ macros"

    * tag 'gfs2-merge-window' of git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-nmw: (29 commits)
    GFS2: Fix address space from page function
    GFS2: Fix uninitialized VFS inode in gfs2_create_inode
    GFS2: Fix return value in slot_get()
    GFS2: inline function gfs2_set_mode
    GFS2: Remove extraneous function gfs2_security_init
    GFS2: Increase the max number of ACLs
    GFS2: Re-add a call to log_flush_wait when flushing the journal
    GFS2: Ensure workqueue is scheduled after noexp request
    GFS2: check NULL return value in gfs2_ok_to_move
    GFS2: Convert gfs2_lm_withdraw to use fs_err
    GFS2: Use fs_ more often
    GFS2: Use pr_ more consistently
    GFS2: Move recovery variables to journal structure in memory
    GFS2: global conversion to pr_foo()
    GFS2: return -E2BIG if hit the maximum limits of ACLs
    GFS2: Clean up journal extent mapping
    GFS2: replace kmalloc - __vmalloc / memset 0
    GFS2: Remove extra "if" in gfs2_log_flush()
    fs: NULL dereference in posix_acl_to_xattr()
    GFS2: Move log buffer accounting to transaction
    ...

    Linus Torvalds
     

04 Apr, 2014

1 commit

  • Reclaim will be leaving shadow entries in the page cache radix tree upon
    evicting the real page. As those pages are found from the LRU, an
    iput() can lead to the inode being freed concurrently. At this point,
    reclaim must no longer install shadow pages because the inode freeing
    code needs to ensure the page tree is really empty.

    Add an address_space flag, AS_EXITING, that the inode freeing code sets
    under the tree lock before doing the final truncate. Reclaim will check
    for this flag before installing shadow pages.

    Signed-off-by: Johannes Weiner
    Reviewed-by: Rik van Riel
    Reviewed-by: Minchan Kim
    Cc: Andrea Arcangeli
    Cc: Bob Liu
    Cc: Christoph Hellwig
    Cc: Dave Chinner
    Cc: Greg Thelen
    Cc: Hugh Dickins
    Cc: Jan Kara
    Cc: KOSAKI Motohiro
    Cc: Luigi Semenzato
    Cc: Mel Gorman
    Cc: Metin Doslu
    Cc: Michel Lespinasse
    Cc: Ozgun Erdogan
    Cc: Peter Zijlstra
    Cc: Roman Gushchin
    Cc: Ryan Mallon
    Cc: Tejun Heo
    Cc: Vlastimil Babka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
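
    A hedged sketch of the handshake described above, using the accessor
    names as I understand them (mapping_set_exiting() / mapping_exiting());
    treat the exact helpers and call sites as illustrative:

        #include <linux/pagemap.h>
        #include <linux/mm.h>

        /* inode-freeing side: mark the mapping before the final truncate
         * (the real code sets the flag under the tree lock)
         */
        static void my_evict_mapping(struct address_space *mapping)
        {
                mapping_set_exiting(mapping);
                truncate_inode_pages(mapping, 0);
        }

        /* reclaim side: once the mapping is exiting, stop installing shadows */
        static bool my_may_install_shadow(struct address_space *mapping)
        {
                return !mapping_exiting(mapping);
        }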
     

01 Apr, 2014

1 commit

  • Now that rgrps use the address space which is part of the super
    block, we need to update gfs2_mapping2sbd() to take account of
    that. The only way to do that easily is to use a different set
    of address_space_operations for rgrps.

    Reported-by: Abhi Das
    Tested-by: Abhi Das
    Signed-off-by: Steven Whitehouse

    Steven Whitehouse
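
    A hedged sketch of the approach described, not the actual gfs2 code:
    giving rgrp mappings their own address_space_operations means a mapping
    can be told apart by its ->a_ops pointer when it has to be converted
    back to the per-superblock data it is embedded in. All 'my_' names are
    hypothetical:

        #include <linux/fs.h>

        struct my_sbd {                         /* hypothetical per-sb structure */
                struct address_space sd_aspace; /* the rgrp address space lives here */
                /* ... */
        };

        static const struct address_space_operations my_rgrp_aops = {
                /* same callbacks as the metadata aops; the identity is what matters */
        };

        static struct my_sbd *my_mapping2sbd(struct address_space *mapping)
        {
                if (mapping->a_ops == &my_rgrp_aops)
                        return container_of(mapping, struct my_sbd, sd_aspace);
                /* otherwise the mapping belongs to an inode of this fs */
                return mapping->host->i_sb->s_fs_info;
        }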
     

31 Mar, 2014

2 commits

  • When gfs2_create_inode() fails due to a quota violation, the VFS
    inode is not completely torn down, which can cause a list
    corruption error.

    This patch correctly uninitializes the VFS inode when a quota
    violation occurs in the gfs2_create_inode code path.

    Resolves: rhbz#1059808
    Signed-off-by: Abhi Das
    Signed-off-by: Steven Whitehouse

    Abhi Das
     
  • ENOSPC was being returned from slot_get in spite of successful
    execution of the function. This patch fixes the return
    code.

    Signed-off-by: Abhi Das
    Signed-off-by: Steven Whitehouse

    Abhi Das
     

19 Mar, 2014

1 commit