07 Aug, 2014

40 commits

  • count_history_pages() only calls page_cache_prev_hole() in
    rcu_read_lock context, using the address_space mapping. There's no
    need to pass file_ra_state here.
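
    A sketch of the resulting helper, reconstructed from the summary (the
    body is an assumption based on mm/readahead.c of that era, not quoted
    from the patch):

    static pgoff_t count_history_pages(struct address_space *mapping,
                                       pgoff_t offset, unsigned long max)
    {
            pgoff_t head;

            rcu_read_lock();
            head = page_cache_prev_hole(mapping, offset - 1, max);
            rcu_read_unlock();

            return offset - 1 - head;
    }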

    Signed-off-by: Fabian Frederick
    Acked-by: Fengguang Wu
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • Just about all of these have been converted to __func__, so convert the
    last use.

    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • We mark some slab caches (e.g. kmem_cache_node) as unmergeable by
    setting refcount to -1, so their alias count should be 0, not
    refcount-1; correct it here.
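
    A hedged sketch of the fix's shape (a reconstruction, not the actual
    hunk): the sysfs aliases file must special-case the negative refcount
    that marks a cache unmergeable:

    static ssize_t aliases_show(struct kmem_cache *s, char *buf)
    {
            /* refcount < 0 means unmergeable: report zero aliases */
            return sprintf(buf, "%d\n",
                           s->refcount < 0 ? 0 : s->refcount - 1);
    }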

    Signed-off-by: Gu Zheng
    Acked-by: David Rientjes
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gu Zheng
     
  • The return statement goes with the cmpxchg_double() condition, so it
    needs to be indented another tab.

    Also, these days the fashion is to line function parameters up, and it
    looks nicer that way because then "freelist_new" is not at the same
    indent level as "return 1;".
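
    For illustration, the layout being described (a sketch in the style of
    mm/slub.c, not the exact hunk): with the parameters lined up under the
    opening parenthesis, the return reads clearly as the body of the if:

    if (cmpxchg_double(&page->freelist, &page->counters,
                       freelist_old, counters_old,
                       freelist_new, counters_new))
            return 1;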

    Signed-off-by: Dan Carpenter
    Signed-off-by: Pekka Enberg
    Signed-off-by: David Rientjes
    Cc: Joonsoo Kim
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Carpenter
     
  • The current struct kmem_cache has no 'lock' field, and slab pages are
    managed by struct kmem_cache_node, which has a 'list_lock' field.

    Clean up the related comment.

    Signed-off-by: Wang Sheng-Hui
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wang Sheng-Hui
     
  • The functions krealloc(), __krealloc() and kzfree() belong to the slab
    API, so they should be placed in slab_common.c.

    Also move the slab allocator's tracepoint definitions to
    slab_common.c. No functional changes here.

    Signed-off-by: Andrey Ryabinin
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • When a kmem_cache is created with a ctor, each object in the
    kmem_cache is initialized before it is ready to use. In the SLUB
    implementation, however, the first object is initialized twice (see
    the sketch below).

    This patch removes the duplicate initialization of the first object.

    Fixes commit 7656c72b ("SLUB: add macros for scanning objects in a
    slab").
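
    A minimal sketch of the pattern described above (a reconstruction, not
    the actual mm/slub.c hunk): setup_object() runs on `last`, which still
    points at the first object during each of the first two iterations, so
    the first object is set up twice:

    last = start;
    for_each_object(p, s, start, page->objects) {
            setup_object(s, page, last);    /* hits `start` twice */
            set_freepointer(s, last, p);
            last = p;
    }
    setup_object(s, page, last);
    set_freepointer(s, last, NULL);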

    Signed-off-by: Wei Yang
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wei Yang
     
  • It is better to represent an allocation size as size_t rather than
    int, so change it.

    Signed-off-by: Joonsoo Kim
    Suggested-by: Andrew Morton
    Cc: Christoph Lameter
    Reviewed-by: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • The BAD_ALIEN_MAGIC value isn't used anymore, so remove it.

    Signed-off-by: Joonsoo Kim
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Now there is no code that holds two locks simultaneously, since we
    don't call slab_destroy() while holding any lock. The lockdep
    annotation is therefore useless; remove it.

    v2: don't remove BAD_ALIEN_MAGIC in this patch. It will be removed
    in the following patch.

    Signed-off-by: Joonsoo Kim
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • I haven't heard that this alien cache lock is contended, but reducing
    the chance of contention is generally better. With this change, we can
    also simplify the complex lockdep annotation in the slab code; that is
    implemented in the following patch.

    Signed-off-by: Joonsoo Kim
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Now that we have a separate alien_cache structure, it is better to
    hold the lock on the alien_cache while manipulating it. After that, we
    no longer need the lock on the array_cache, so remove it.

    Signed-off-by: Joonsoo Kim
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Currently, we use array_cache for the alien_cache. Although they are
    mostly similar, there is one difference: the need for a spinlock. We
    don't need a spinlock for the array_cache itself, but to use
    array_cache for the alien_cache, the array_cache structure must carry
    one. This is needless overhead, so it is better to remove it. This
    patch prepares for that by introducing a separate alien_cache
    structure and using it (sketched below). The following patch removes
    the spinlock from array_cache.
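
    A sketch of the shape this takes (the field layout is an assumption
    based on the summary): the spinlock moves into a wrapper so that
    array_cache itself can lose it later:

    struct alien_cache {
            spinlock_t lock;        /* protects the embedded cache */
            struct array_cache ac;  /* unchanged array_cache payload */
    };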

    Signed-off-by: Joonsoo Kim
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Factor out the initialization of the array cache so it can be used in
    the following patch.

    Signed-off-by: Joonsoo Kim
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • In free_block(), if freeing an object creates a new free slab and the
    number of free_objects exceeds free_limit, we start to destroy this
    new free slab while holding the kmem_cache node lock. Holding the lock
    here is unnecessary and, in general, holding a lock for as short a
    time as possible is a good thing. I haven't measured the performance
    effect of this, but we are better off not holding the lock longer than
    we must (see the sketch below).

    Commented by Christoph:
    This is also good because kmem_cache_free is no longer called while
    holding the node lock, so we avoid one case of recursion.
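
    A hedged sketch of the resulting calling pattern (slabs_destroy() and
    the extra list parameter are assumptions based on the summary): empty
    slabs are collected under the lock and destroyed only after it is
    dropped:

    LIST_HEAD(list);                /* free slabs collected here */

    spin_lock(&n->list_lock);
    free_block(cachep, objpp, nr_objects, node, &list);
    spin_unlock(&n->list_lock);
    slabs_destroy(cachep, &list);   /* no node lock held here */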

    Signed-off-by: Joonsoo Kim
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • node isn't changed, so we don't need to retrieve this structure every
    time we move an object. The compiler may already perform this
    optimization, but making it explicit is better.

    Signed-off-by: Joonsoo Kim
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • This patchset does some cleanup and tries to remove lockdep
    annotation.

    Patches 1~2 are just for really really minor improvement.
    Patches 3~9 are for clean-up and removing lockdep annotation.

    There are two cases where lockdep annotation is needed in SLAB:
    1) holding two node locks
    2) holding two array cache (alien cache) locks

    I looked at the code and found that we can avoid these cases without
    any negative effect.

    1) occurs when freeing an object creates a new free slab and we decide
    to destroy it. Although we don't need to hold the lock while
    destroying a slab, the current code does so. Destroying a slab without
    holding the lock would help reduce lock contention. To achieve this, I
    change the implementation so that the new free slab is destroyed after
    releasing the lock.

    2) occurs in a similar situation. When we free an object from a
    non-local node, we put it into the alien cache while holding the alien
    cache lock. If the alien cache is full, we try to flush it to the
    proper node cache, and at that point a new free slab could be created.
    Destroying it begins, and we then free a metadata object that comes
    from another node. In this case, we need the other node's alien cache
    lock to free the object. This forces us to hold two array cache locks,
    and so we need lockdep annotation, even though the two locks are
    always different and deadlock is impossible. To prevent this
    situation, I use the same approach as in 1).

    In this way, we can avoid cases 1) and 2) and can then remove the
    lockdep annotation. As the diffstat shows, this makes the SLAB code
    much simpler.

    This patch (of 9):

    slab_should_failslab() is called on every allocation, so optimizing it
    is reasonable. We normally don't allocate from kmem_cache; it is only
    used when a new kmem_cache is created, which is a very rare case.
    Therefore, add the unlikely() macro to help compiler optimization (see
    the sketch below).
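
    A sketch of the change (reconstructed from the summary, not quoted
    from the patch):

    static bool slab_should_failslab(struct kmem_cache *cachep, gfp_t flags)
    {
            /* Only cache creation allocates from kmem_cache itself. */
            if (unlikely(cachep == kmem_cache))
                    return false;

            return should_failslab(cachep->object_size, flags,
                                   cachep->flags);
    }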

    Signed-off-by: Joonsoo Kim
    Acked-by: David Rientjes
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • There are two versions of the alloc/free hooks now - one for
    CONFIG_SLUB_DEBUG=y and another for CONFIG_SLUB_DEBUG=n.

    I see no reason why calls to other debugging subsystems (LOCKDEP,
    DEBUG_ATOMIC_SLEEP, KMEMCHECK and FAILSLAB) are hidden under
    SLUB_DEBUG. All these features should work regardless of the
    SLUB_DEBUG config, as all of them already have their own Kconfig
    options.

    This also fixes failslab for the CONFIG_SLUB_DEBUG=n configuration.
    It simply did not work before, because the should_failslab() call was
    in a hook hidden under "#ifdef CONFIG_SLUB_DEBUG #else" (sketched
    below).
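
    A hypothetical reconstruction of the bug's shape (simplified; the real
    hooks carry more checks): the failslab consultation existed only in
    the CONFIG_SLUB_DEBUG=y variant of the pre-allocation hook:

    #ifdef CONFIG_SLUB_DEBUG
    static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
    {
            might_sleep_if(flags & __GFP_WAIT);
            return should_failslab(s->object_size, flags, s->flags);
    }
    #else
    static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
    {
            return 0;       /* failslab never consulted */
    }
    #endif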

    Note: there is one concealed change in the allocation path for
    SLUB_DEBUG=n with all other debugging features disabled. The
    might_sleep_if() call can generate some code even if
    DEBUG_ATOMIC_SLEEP=n. For PREEMPT_VOLUNTARY=y, might_sleep() inserts a
    _cond_resched() call, but I think that should be OK.

    Signed-off-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • resiliency_test() is only called during bootstrap, so it may be moved
    to .init.text and freed after boot.

    Signed-off-by: David Rientjes
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Guarding section:
    #ifndef MM_SLAB_H
    #define MM_SLAB_H
    ...
    #endif
    currently doesn't cover the whole of mm/slab.h. This seems to have
    been done unintentionally.

    Wrap the whole file by moving the closing #endif to the end of it.

    Signed-off-by: Andrey Ryabinin
    Acked-by: Christoph Lameter
    Acked-by: David Rientjes
    Reviewed-by: Vladimir Davydov
    Cc: Pekka Enberg
    Cc: Joonsoo Kim

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Use the two functions to simplify the code, avoiding numerous explicit
    checks for whether a certain node is online.

    Get rid of various repeated calculations of kmem_cache_node structures.

    [akpm@linux-foundation.org: fix build]
    Signed-off-by: Christoph Lameter
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Acked-by: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Make use of the new node functions in mm/slab.h to reduce code size and
    simplify.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Christoph Lameter
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Acked-by: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • The patchset provides two new functions in mm/slab.h and modifies SLAB
    and SLUB to use these. The kmem_cache_node structure is shared between
    both allocators and the use of common accessors will allow us to move
    more code into slab_common.c in the future.

    This patch (of 3):

    These functions make it possible to eliminate repeated code in both
    SLAB and SLUB, and also allow for the insertion of debugging code that
    may be needed in the development process (see the sketch below).
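
    The summary doesn't name the helpers; a plausible sketch of the
    accessor pattern it describes (names and bodies are assumptions):

    static inline struct kmem_cache_node *get_node(struct kmem_cache *s,
                                                   int node)
    {
            return s->node[node];
    }

    /* Visit only nodes that actually have a kmem_cache_node. */
    #define for_each_kmem_cache_node(__s, __node, __n)              \
            for (__node = 0; __node < nr_node_ids; __node++)        \
                    if ((__n = get_node(__s, __node)))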

    Signed-off-by: Christoph Lameter
    Cc: Pekka Enberg
    Acked-by: David Rientjes
    Acked-by: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • init_lock_keys() is only called by __init kmem_cache_init_late().

    Signed-off-by: Fabian Frederick
    Acked-by: Christoph Lameter
    Acked-by: David Rientjes
    Cc: Joonsoo Kim
    Cc: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • Replace some obsolete functions.

    Signed-off-by: Fabian Frederick
    Cc: Frederic Weisbecker
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • kcalloc() handles count*sizeof() overflow (see the sketch below).
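
    For illustration, the pattern this conversion enables (a generic
    sketch, not taken from the patch):

    /* Open-coded multiplication can overflow silently: */
    buf = kmalloc(count * sizeof(*buf), GFP_KERNEL);

    /* kcalloc() checks count * size for overflow and zeroes the memory: */
    buf = kcalloc(count, sizeof(*buf), GFP_KERNEL);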

    Signed-off-by: Fabian Frederick
    Cc: Mark Fasheh
    Cc: Joel Becker
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • Orabug: 19074140

    When umount is issued during recovery on the new master that has not
    finished remastering locks, it triggers BUG() in
    dlm_send_mig_lockres_msg(). Here is the situation:

    1) node A has a lock on resource X mastered by node B.

    2) node B dies -> node A sets recovering flag for res X

    3) Node C becomes the new master for resources owned by the
    dead node and is remastering locks of the dead node but
    has not finished the remastering process yet.

    4) umount is issued on node C.

    5) During processing of umount, ignoring the unfinished recovery,
    node C attempts to migrate resource X to node A.

    6) node A finds res X in DLM_LOCK_RES_RECOVERING state, considers
    it a logic error and sends back -EFAULT.

    7) node C asserts BUG() upon seeing the -EFAULT resp from node A.

    The fix is to delay migrating res X until remastering is finished, at
    which point the recovering flag will be cleared on both A and C.

    Signed-off-by: Tariq Saeed
    Cc: Mark Fasheh
    Cc: Joel Becker
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tariq Saeed
     
  • The unit of total_backoff is milliseconds, not jiffies, so there is no
    need to do the conversion. Otherwise, the join timeout is not 90
    seconds.

    Signed-off-by: Yiwen Jiang
    Signed-off-by: joyce.xue
    Cc: Mark Fasheh
    Cc: Joel Becker
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Xue jiufei
     
  • ocfs2_search_extent_list() may return -1, so we should check the
    return value in ocfs2_split_and_insert(); otherwise it may cause an
    out-of-bounds array index (see the sketch below).

    Also, since ocfs2_search_extent_list() can only return a value less
    than el->l_next_free_rec, checking whether the result is equal to or
    larger than le16_to_cpu(el->l_next_free_rec) is meaningless.
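
    A hedged sketch of the check being added (variable names and the error
    code are assumptions, not the actual hunk):

    index = ocfs2_search_extent_list(el, cpos);
    if (index == -1) {
            ret = -EIO;     /* hypothetical error choice */
            goto out;       /* never use -1 to index el->l_recs[] */
    }
    rec = &el->l_recs[index];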

    Signed-off-by: Yingtai Xie
    Signed-off-by: Joseph Qi
    Cc: Joel Becker
    Cc: Mark Fasheh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yingtai Xie
     
  • - Convert printk to pr_foo()
    - Add pr_fmt for future logging entries
    - Coalesce formats

    Signed-off-by: Fabian Frederick
    Cc: Phillip Lougher
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • kmalloc_array() handles count*sizeof() overflow (see the sketch
    below).
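
    Same idea as the kcalloc() conversion above, minus the zeroing (a
    generic sketch, not taken from the patch):

    /* Overflow-checked replacement for kmalloc(count * size, ...): */
    table = kmalloc_array(count, sizeof(*table), GFP_KERNEL);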

    Signed-off-by: Fabian Frederick
    Cc: Phillip Lougher
    Cc: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • Fix build error as reported by Geert Uytterhoeven here:

    http://kisskb.ellerman.id.au/kisskb/buildresult/11607865/

    The error happens when CONFIG_HAS_IOPORT_MAP=n, in which case the
    definitions of ioport_map()/ioport_unmap() are missing. Fix this
    build error by adding these prototypes.

    Signed-off-by: Pranith Kumar
    Reported-by: Geert Uytterhoeven
    Cc: [3.16+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pranith Kumar
     
  • Fix the device name for the CMT.

    Add clocks called usb0 and usb1 so that r8a66597_hcd works again on
    the ecovec24 board.

    Signed-off-by: Daniel Palmer
    Cc: Kuninori Morimoto
    Cc: Richard Weinberger
    Cc: Paul Bolle
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Palmer
     
  • Replace IS_ERR/PTR_ERR.

    Signed-off-by: Fabian Frederick
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • Replace IS_ERR/PTR_ERR.

    Signed-off-by: Fabian Frederick
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • The symbol is an orphan, get rid of it.

    Submitted by Richard a few months ago as "[PATCH 21/28] Remove
    CPU_SUBTYPE_SH7764".

    [pebolle@tiscali.nl: re-added dropped ||]
    Signed-off-by: Richard Weinberger
    Signed-off-by: Paul Bolle
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Richard Weinberger
     
  • 'COUNTER' and other macros of this kind are too common to use, and can
    easily conflict with other modules.

    At present, they are not used, so it is OK to simply remove them. The
    related warning (allmodconfig with score):

    CC [M] drivers/md/raid1.o
    In file included from drivers/md/raid1.c:42:0:
    drivers/md/bitmap.h:93:0: warning: "COUNTER" redefined
    #define COUNTER(x) (((bitmap_counter_t) x) & COUNTER_MAX)
    ^
    In file included from ./arch/score/include/asm/ptrace.h:4:0,
    from include/linux/sched.h:31,
    from include/linux/blkdev.h:4,
    from drivers/md/raid1.c:36:
    ./arch/score/include/uapi/asm/ptrace.h:13:0: note: this is the location of the previous definition
    #define COUNTER 38

    Signed-off-by: Chen Gang
    Signed-off-by: David Rientjes
    Cc: Lennox Wu
    Cc: Guenter Roeck
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chen Gang
     
  • cached_page and lru_pvec were removed from ntfs_attr_extend_initialized
    in commit 2ec93b0bf35f ("ntfs: clean up ntfs_attr_extend_initialized").

    lru_pvec was removed from __ntfs_grab_cache_pages in commit
    4c99000ac47c ("ntfs: use add_to_page_cache_lru()").

    Signed-off-by: Fabian Frederick
    Acked-by: Anton Altaparmakov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • s/-/:/ and fix variable names.

    Signed-off-by: Fabian Frederick
    Cc: Joern Engel
    Cc: Prasad Joshi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • This adds a hopefully helpful comment above the (seemingly weird) compiler
    flag selection logic.

    Signed-off-by: Kees Cook
    Suggested-by: Andrew Morton
    Cc: Andi Kleen
    Cc: Randy Dunlap
    Cc: Michal Marek
    Cc: Michal Hocko
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kees Cook