15 Jun, 2014

1 commit

  • Tetsuo Handa wrote:
    "Commit 62a8067a7f35 ("bio_vec-backed iov_iter") introduced an unnamed
    union inside a struct which gcc-4.4.7 cannot handle. Name the unnamed
    union as u in order to fix build failure"

    Let's do this instead: there is only one place in the entire tree that
    steps into this breakage. Anon structs and unions work in older gcc
    versions; as a matter of fact, we have those in the tree - see e.g.
    struct ieee80211_tx_info in include/net/mac80211.h

    What doesn't work is handling their initializers:

    struct {
        int a;
        union {
            int b;
            char c;
        };
    } x[2] = {{.a = 1, .c = 'a'}, {.a = 0, .b = 1}};

    is the obvious syntax for initializer, perfectly fine for C11 and
    handled correctly by gcc-4.7 or later.

    Earlier versions, though, break on it - declaration is fine and so's
    access to fields (i.e. x[0].c = 'a'; would produce the right code), but
    members of the anon structs and unions are not inserted into the right
    namespace. Tellingly, those older versions will not barf on struct {int
    a; struct {int a;};}; - looks like they just have it hacked up somewhere
    around the handling of . and -> instead of doing the right thing.

    The easiest way to deal with that crap is to turn initialization of
    those fields (in the only place where we have such initializer of
    iov_iter) into plain assignment.
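
    A minimal standalone illustration of the workaround (an illustrative
    sketch, not the actual iov_iter code):

    struct iter {
        int type;
        union {                 /* anonymous union, as in iov_iter */
            const void *ptr;
            int fd;
        };
    };

    void setup(struct iter *i)
    {
        /* struct iter v = { .type = 1, .ptr = "x" };  breaks gcc < 4.7 */
        i->type = 1;            /* plain assignments work everywhere */
        i->ptr = "x";
    }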

    Reported-by: Tetsuo Handa
    Reported-by: Russell King
    Signed-off-by: Al Viro
    Signed-off-by: Linus Torvalds

    Al Viro
     

13 Jun, 2014

1 commit

  • Pull vfs updates from Al Viro:
    "This the bunch that sat in -next + lock_parent() fix. This is the
    minimal set; there's more pending stuff.

    In particular, I really hope to get acct.c fixes merged this cycle -
    we need that to deal sanely with delayed-mntput stuff. In the next
    pile, hopefully - that series is fairly short and localized
    (kernel/acct.c, fs/super.c and fs/namespace.c). In this pile: more
    iov_iter work. Most of prereqs for ->splice_write with sane locking
    order are there and Kent's dio rewrite would also fit nicely on top of
    this pile"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (70 commits)
    lock_parent: don't step on stale ->d_parent of all-but-freed one
    kill generic_file_splice_write()
    ceph: switch to iter_file_splice_write()
    shmem: switch to iter_file_splice_write()
    nfs: switch to iter_splice_write_file()
    fs/splice.c: remove unneeded exports
    ocfs2: switch to iter_file_splice_write()
    ->splice_write() via ->write_iter()
    bio_vec-backed iov_iter
    optimize copy_page_{to,from}_iter()
    bury generic_file_aio_{read,write}
    lustre: get rid of messing with iovecs
    ceph: switch to ->write_iter()
    ceph_sync_direct_write: stop poking into iov_iter guts
    ceph_sync_read: stop poking into iov_iter guts
    new helper: copy_page_from_iter()
    fuse: switch to ->write_iter()
    btrfs: switch to ->write_iter()
    ocfs2: switch to ->write_iter()
    xfs: switch to ->write_iter()
    ...

    Linus Torvalds
     

12 Jun, 2014

2 commits


10 Jun, 2014

1 commit

  • Pull cgroup updates from Tejun Heo:
    "A lot of activities on cgroup side. Heavy restructuring including
    locking simplification took place to improve the code base and enable
    implementation of the unified hierarchy, which currently exists behind
    a __DEVEL__ mount option. The core support is mostly complete but
    individual controllers need further work. To explain the design and
    rationales of the the unified hierarchy

    Documentation/cgroups/unified-hierarchy.txt

    is added.

    Another notable change is css (cgroup_subsys_state - what each
    controller uses to identify and interact with a cgroup) iteration
    update. This is part of continuing updates on css object lifetime and
    visibility. cgroup started with reference count draining on removal
    way back and is now reaching a point where csses behave and are
    iterated like normal refcnted objects albeit with some complexities to
    allow distinguishing the state where they're being deleted. The css
    iteration update isn't taken advantage of yet but is planned to be
    used to simplify memcg significantly"

    * 'for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: (77 commits)
    cgroup: disallow disabled controllers on the default hierarchy
    cgroup: don't destroy the default root
    cgroup: disallow debug controller on the default hierarchy
    cgroup: clean up MAINTAINERS entries
    cgroup: implement css_tryget()
    device_cgroup: use css_has_online_children() instead of has_children()
    cgroup: convert cgroup_has_live_children() into css_has_online_children()
    cgroup: use CSS_ONLINE instead of CGRP_DEAD
    cgroup: iterate cgroup_subsys_states directly
    cgroup: introduce CSS_RELEASED and reduce css iteration fallback window
    cgroup: move cgroup->serial_nr into cgroup_subsys_state
    cgroup: link all cgroup_subsys_states in their sibling lists
    cgroup: move cgroup->sibling and ->children into cgroup_subsys_state
    cgroup: remove cgroup->parent
    device_cgroup: remove direct access to cgroup->children
    memcg: update memcg_has_children() to use css_next_child()
    memcg: remove tasks/children test from mem_cgroup_force_empty()
    cgroup: remove css_parent()
    cgroup: skip refcnting on normal root csses and cgrp_dfl_root self css
    cgroup: use cgroup->self.refcnt for cgroup refcnting
    ...

    Linus Torvalds
     

09 Jun, 2014

3 commits

  • shrink_inactive_list() used to wait 0.1s to avoid congestion when all
    the pages that were isolated from the inactive list were dirty but not
    under active writeback. That makes no real sense, and apparently causes
    major interactivity issues under some loads since 3.11.

    The ostensible reason for it was to wait for kswapd to start writing
    pages, but that seems questionable as well, since the congestion wait
    code seems to trigger for kswapd itself as well. Also, the logic behind
    delaying anything when we haven't actually started writeback is not
    clear - it only delays actually starting that writeback.

    We'll still trigger the congestion waiting if

    (a) the process is kswapd, and we hit pages flagged for immediate
    reclaim

    (b) the process is not kswapd, and the zone backing dev writeback is
    actually congested.
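
    Roughly, in code (the structure and helpers here are a sketch, not the
    literal mm/vmscan.c hunk):

    if (current_is_kswapd() && nr_immediate)
        /* (a) kswapd hit pages flagged for immediate reclaim */
        congestion_wait(BLK_RW_ASYNC, HZ/10);
    else if (!current_is_kswapd())
        /* (b) direct reclaim: waits only if the zone's backing device
         * is actually marked congested */
        wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);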

    This probably needs to be revisited, but as it is this fixes a reported
    regression.

    Reported-by: Felipe Contreras
    Pinpointed-by: Hillf Danton
    Cc: Michal Hocko
    Cc: Andrew Morton
    Cc: Mel Gorman
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • Pull ext4 updates from Ted Ts'o:
    "Clean ups and miscellaneous bug fixes, in particular for the new
    collapse_range and zero_range fallocate functions. In addition,
    improve the scalability of adding and remove inodes from the orphan
    list"

    * tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (25 commits)
    ext4: handle symlink properly with inline_data
    ext4: fix wrong assert in ext4_mb_normalize_request()
    ext4: fix zeroing of page during writeback
    ext4: remove unused local variable "stored" from ext4_readdir(...)
    ext4: fix ZERO_RANGE test failure in data journalling
    ext4: reduce contention on s_orphan_lock
    ext4: use sbi in ext4_orphan_{add|del}()
    ext4: use EXT_MAX_BLOCKS in ext4_es_can_be_merged()
    ext4: add missing BUFFER_TRACE before ext4_journal_get_write_access
    ext4: remove unnecessary double parentheses
    ext4: do not destroy ext4_groupinfo_caches if ext4_mb_init() fails
    ext4: make local functions static
    ext4: fix block bitmap validation when bigalloc, ^flex_bg
    ext4: fix block bitmap initialization under sparse_super2
    ext4: find the group descriptors on a 1k-block bigalloc,meta_bg filesystem
    ext4: avoid unneeded lookup when xattr name is invalid
    ext4: fix data integrity sync in ordered mode
    ext4: remove obsoleted check
    ext4: add a new spinlock i_raw_lock to protect the ext4's raw inode
    ext4: fix locking for O_APPEND writes
    ...

    Linus Torvalds
     
  • Now that 3.15 is released, this merges the 'next' branch into 'master',
    bringing us to the normal situation where my 'master' branch is the
    merge window.

    * accumulated work in next: (6809 commits)
    ufs: sb mutex merge + mutex_destroy
    powerpc: update comments for generic idle conversion
    cris: update comments for generic idle conversion
    idle: remove cpu_idle() forward declarations
    nbd: zero from and len fields in NBD_CMD_DISCONNECT.
    mm: convert some level-less printks to pr_*
    MAINTAINERS: adi-buildroot-devel is moderated
    MAINTAINERS: add linux-api for review of API/ABI changes
    mm/kmemleak-test.c: use pr_fmt for logging
    fs/dlm/debug_fs.c: replace seq_printf by seq_puts
    fs/dlm/lockspace.c: convert simple_str to kstr
    fs/dlm/config.c: convert simple_str to kstr
    mm: mark remap_file_pages() syscall as deprecated
    mm: memcontrol: remove unnecessary memcg argument from soft limit functions
    mm: memcontrol: clean up memcg zoneinfo lookup
    mm/memblock.c: call kmemleak directly from memblock_(alloc|free)
    mm/mempool.c: update the kmemleak stack trace for mempool allocations
    lib/radix-tree.c: update the kmemleak stack trace for radix tree allocations
    mm: introduce kmemleak_update_trace()
    mm/kmemleak.c: use %u to print ->checksum
    ...

    Linus Torvalds
     

07 Jun, 2014

14 commits

  • printk is meant to be used with an associated log level. There are some
    instances of printk scattered around the mm code where the log level is
    missing. Add a log level and adhere to suggestions by
    scripts/checkpatch.pl by moving to the pr_* macros.

    Also add the typical pr_fmt definition so that print statements can be
    easily traced back to the modules where they occur, correlated one with
    another, etc. This will require the removal of some (now redundant)
    prefixes on a few print statements.
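
    For illustration, a typical pr_fmt definition of the kind referred to
    above (a generic sketch, not a specific mm/ file; the function name is
    made up):

    /* must be defined before printk.h is pulled in */
    #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

    #include <linux/printk.h>

    static void report_shortage(void)
    {
        /* printed as "<module>: out of buffers" at warning level,
         * instead of a bare, level-less printk() */
        pr_warn("out of buffers\n");
    }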

    Signed-off-by: Mitchel Humpherys
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mitchel Humpherys
     
  • Signed-off-by: Fabian Frederick
    Acked-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • The remap_file_pages() system call is used to create a nonlinear
    mapping, that is, a mapping in which the pages of the file are mapped
    into a nonsequential order in memory. The advantage of using
    remap_file_pages() over using repeated calls to mmap(2) is that the
    former approach does not require the kernel to create additional VMA
    (Virtual Memory Area) data structures.
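
    For reference, a minimal user-space sketch of the interface being
    deprecated (file name arbitrary, error handling omitted):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pg = sysconf(_SC_PAGESIZE);
        int fd = open("data.bin", O_RDWR);
        char *p = mmap(NULL, 4 * pg, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);

        /* Rearrange pages within the single VMA: show file page 3 at the
         * start of the mapping - no additional VMA is created. */
        remap_file_pages(p, pg, 0, 3, 0);

        munmap(p, 4 * pg);
        close(fd);
        return 0;
    }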

    Supporting nonlinear mappings requires a significant amount of
    non-trivial code in the kernel virtual memory subsystem, including hot
    paths. Also, to make nonlinear mappings work the kernel needs a way to
    distinguish normal page table entries from entries with a file offset
    (pte_file), and reserves a flag in the PTE for this purpose. PTE flags
    are a scarce resource, especially on some CPU architectures, so it
    would be nice to free up that flag for other usage.

    Fortunately, there are not many users of remap_file_pages() in the wild.
    It's only known that one enterprise RDBMS implementation uses the
    syscall on 32-bit systems to map files bigger than can linearly fit into
    32-bit virtual address space. This use-case is not critical anymore
    since 64-bit systems are widely available.

    The plan is to deprecate the syscall and replace it with an emulation.
    The emulation will create new VMAs instead of nonlinear mappings. It's
    going to work slower for rare users of remap_file_pages() but ABI is
    preserved.

    One side effect of the emulation (apart from performance) is that users
    can hit the vm.max_map_count limit more easily due to the additional
    VMAs. See the comment for DEFAULT_MAX_MAP_COUNT for more details on the
    limit.

    [akpm@linux-foundation.org: fix spello]
    Signed-off-by: Kirill A. Shutemov
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Dave Jones
    Cc: Armin Rigo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
     
  • Signed-off-by: Johannes Weiner
    Acked-by: Michal Hocko
    Cc: Jianyu Zhan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Memcg zoneinfo lookup sites have either the page, the zone, or the node
    id and zone index, but sites that only have the zone have to look up the
    node id and zone index themselves, whereas sites that already have those
    two integers use a function for a simple pointer chase.

    Provide mem_cgroup_zone_zoneinfo(), which takes a zone pointer, and let
    sites that already have the node id and zone index - all the
    for-each-node, for-each-zone iterators - use
    &memcg->nodeinfo[nid]->zoneinfo[zid] directly.

    Rename page_cgroup_zoneinfo() to mem_cgroup_page_zoneinfo() to match.
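
    The new helper is essentially the pointer chase described above; a
    sketch (type and field names as described in the changelog):

    static struct mem_cgroup_per_zone *
    mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone)
    {
        int nid = zone_to_nid(zone);
        int zid = zone_idx(zone);

        return &memcg->nodeinfo[nid]->zoneinfo[zid];
    }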

    Signed-off-by: Jianyu Zhan
    Signed-off-by: Johannes Weiner
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jianyu Zhan
     
  • Kmemleak could ignore memory blocks allocated via memblock_alloc()
    leading to false positives during scanning. This patch adds the
    corresponding callbacks and removes kmemleak_free_* calls in
    mm/nobootmem.c to avoid duplication.

    The kmemleak_alloc() in mm/nobootmem.c is kept since
    __alloc_memory_core_early() does not use memblock_alloc() directly.

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • When mempool_alloc() returns an existing pool object, kmemleak_alloc()
    is no longer called and the stack trace corresponds to the original
    object allocation. This patch updates the kmemleak allocation stack
    trace for such objects to make it more useful for debugging.

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • The memory allocation stack trace is not always useful for debugging a
    memory leak (e.g. radix_tree_preload). This function, when called,
    updates the stack trace for an already allocated object.
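
    A sketch of how a preload-style caller might use the new hook (the
    surrounding code is assumed; only the kmemleak_update_trace() call is
    the point):

    /* hand out an object that was preallocated earlier ... */
    node = rtp->nodes[--rtp->nr];
    /* ... and repoint its kmemleak stack trace at the real consumer,
     * not the preload site */
    kmemleak_update_trace(node);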

    Signed-off-by: Catalin Marinas
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • Signed-off-by: Jianpeng Ma
    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jianpeng Ma
     
    Memory reclaim always uses the swappiness of the reclaim target memcg
    (the origin of the memory pressure), or vm_swappiness for global memory
    reclaim. This behavior was consistent (except for the difference between
    global and hard limit reclaim) because swappiness was enforced to be
    consistent within each memcg hierarchy.

    After "mm: memcontrol: remove hierarchy restrictions for swappiness and
    oom_control" each memcg can have its own swappiness independent of
    hierarchical parents, though, so the consistency guarantee is gone.
    This can lead to unexpected behavior. Say that a group is explicitly
    configured not to swap out by memory.swappiness=0, but its memory gets
    swapped out anyway when the memory pressure comes from its parent with a
    different swappiness. It is also unexpected that the knob is meaningless
    without setting the hard limit which would trigger the reclaim and
    enforce the swappiness. There are setups where the hard limit is
    configured higher up the hierarchy by an administrator and children
    groups are under the control of somebody else who cares about the
    swapout behavior but not necessarily about the memory limit.

    From a semantic point of view, swappiness is an attribute defining anon
    vs. file proportional scanning of the LRU, which is memcg-specific
    (unlike charges, which are propagated up the hierarchy), so it should be
    applied to the particular memcg's LRU regardless of where the memory
    pressure comes from.

    This patch removes vmscan_swappiness() and stores the swappiness into
    the scan_control structure. mem_cgroup_swappiness is then used to
    provide the correct value before shrink_lruvec is called. The global
    vm_swappiness is used for the root memcg.

    [hughd@google.com: oopses immediately when booted with cgroup_disable=memory]
    Signed-off-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: Tejun Heo
    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     
  • This typedef is unnecessary and should just be removed.

    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
    Currently, if the allocation constraint is NUMA_NO_NODE, we search for a
    partial slab on the numa_node_id() node. This doesn't work properly on
    systems with memoryless nodes, since such a node has no memory and
    therefore can have no partial slabs.

    On such a node, page allocation always falls back to numa_mem_id()
    first, so searching for a partial slab on numa_mem_id() is the proper
    solution for the memoryless node case.
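
    Schematically, the fix in get_partial()-style code (surrounding details
    assumed):

    int searchnode = node;

    if (node == NUMA_NO_NODE)
        searchnode = numa_mem_id();     /* nearest node that has memory */

    object = get_partial_node(s, get_node(s, searchnode), c, flags);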

    Signed-off-by: Joonsoo Kim
    Acked-by: Nishanth Aravamudan
    Acked-by: David Rientjes
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Wanpeng Li
    Cc: Han Pingtian
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • When kswapd exits, it can end up taking locks that were previously held
    by allocating tasks while they waited for reclaim. Lockdep currently
    warns about this:

    On Wed, May 28, 2014 at 06:06:34PM +0800, Gu Zheng wrote:
    > inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-R} usage.
    > kswapd2/1151 [HC0[0]:SC0[0]:HE1:SE1] takes:
    > (&sig->group_rwsem){+++++?}, at: exit_signals+0x24/0x130
    > {RECLAIM_FS-ON-W} state was registered at:
    > mark_held_locks+0xb9/0x140
    > lockdep_trace_alloc+0x7a/0xe0
    > kmem_cache_alloc_trace+0x37/0x240
    > flex_array_alloc+0x99/0x1a0
    > cgroup_attach_task+0x63/0x430
    > attach_task_by_pid+0x210/0x280
    > cgroup_procs_write+0x16/0x20
    > cgroup_file_write+0x120/0x2c0
    > vfs_write+0xc0/0x1f0
    > SyS_write+0x4c/0xa0
    > tracesys+0xdd/0xe2
    > irq event stamp: 49
    > hardirqs last enabled at (49): _raw_spin_unlock_irqrestore+0x36/0x70
    > hardirqs last disabled at (48): _raw_spin_lock_irqsave+0x2b/0xa0
    > softirqs last enabled at (0): copy_process.part.24+0x627/0x15f0
    > softirqs last disabled at (0): (null)
    >
    > other info that might help us debug this:
    > Possible unsafe locking scenario:
    >
    > CPU0
    > ----
    > lock(&sig->group_rwsem);
    >
    > lock(&sig->group_rwsem);
    >
    > *** DEADLOCK ***
    >
    > no locks held by kswapd2/1151.
    >
    > stack backtrace:
    > CPU: 30 PID: 1151 Comm: kswapd2 Not tainted 3.10.39+ #4
    > Call Trace:
    > dump_stack+0x19/0x1b
    > print_usage_bug+0x1f7/0x208
    > mark_lock+0x21d/0x2a0
    > __lock_acquire+0x52a/0xb60
    > lock_acquire+0xa2/0x140
    > down_read+0x51/0xa0
    > exit_signals+0x24/0x130
    > do_exit+0xb5/0xa50
    > kthread+0xdb/0x100
    > ret_from_fork+0x7c/0xb0

    This is because the kswapd thread is still marked as a reclaimer at the
    time of exit. But because it is exiting, nobody is actually waiting on
    it to make reclaim progress anymore, and it's nothing but a regular
    thread at this point. Be tidy and strip it of all its powers
    (PF_MEMALLOC, PF_SWAPWRITE, PF_KSWAPD, and the lockdep reclaim state)
    before returning from the thread function.
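
    A sketch of that strip-the-powers step at the end of the kswapd()
    thread function (exact placement assumed):

    tsk->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD);
    current->reclaim_state = NULL;
    lockdep_clear_current_reclaim_state();
    return 0;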

    Signed-off-by: Johannes Weiner
    Reported-by: Gu Zheng
    Cc: Yasuaki Ishimatsu
    Cc: Tang Chen
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
    The page table walker doesn't check non-present hugetlb entries in the
    common path, so hugetlb_entry() callbacks must check them. The reason
    for this behavior is that some callers want to handle them in their own
    way.

    [ I think that reason is bogus, btw - it should just do what the regular
    code does, which is to call the "pte_hole()" function for such hugetlb
    entries - Linus]

    However, some callers don't check it now, which causes unpredictable
    results; for example, when there is a race between migrating a hugepage
    and reading /proc/pid/numa_maps. This patch fixes it by adding
    !pte_present checks to the buggy callbacks.
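
    The shape of the added checks, schematically (the callback name here is
    illustrative, not an actual call site):

    static int example_hugetlb_entry(pte_t *pte, unsigned long hmask,
                                     unsigned long addr, unsigned long end,
                                     struct mm_walk *walk)
    {
        if (!pte_present(*pte))     /* e.g. a migration entry: skip it */
            return 0;
        /* ... existing per-hugepage accounting ... */
        return 0;
    }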

    This bug has existed for years and became visible with the introduction
    of hugepage migration.

    ChangeLog v2:
    - fix if condition (check !pte_present() instead of pte_present())

    Reported-by: Sasha Levin
    Signed-off-by: Naoya Horiguchi
    Cc: Rik van Riel
    Cc: [3.12+]
    Signed-off-by: Andrew Morton
    [ Backported to 3.15. Signed-off-by: Josh Boyer ]
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     

06 Jun, 2014

1 commit

    While working on an address sanitizer for the kernel, I discovered a
    use-after-free bug in __put_anon_vma.

    For the last anon_vma, anon_vma->root is freed before the child
    anon_vma. Later, anon_vma_free(anon_vma) references the already-freed
    anon_vma->root to check its rwsem.

    This fixes it by freeing the child anon_vma before freeing
    anon_vma->root.

    Signed-off-by: Andrey Ryabinin
    Acked-by: Peter Zijlstra
    Cc: # v3.0+
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

05 Jun, 2014

17 commits

  • Pull x86 vdso updates from Peter Anvin:
    "Vdso cleanups and improvements largely from Andy Lutomirski. This
    makes the vdso a lot less 'special'"

    * 'x86/vdso' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/vdso, build: Make LE access macros clearer, host-safe
    x86/vdso, build: Fix cross-compilation from big-endian architectures
    x86/vdso, build: When vdso2c fails, unlink the output
    x86, vdso: Fix an OOPS accessing the HPET mapping w/o an HPET
    x86, mm: Replace arch_vma_name with vm_ops->name for vsyscalls
    x86, mm: Improve _install_special_mapping and fix x86 vdso naming
    mm, fs: Add vm_ops->name as an alternative to arch_vma_name
    x86, vdso: Fix an OOPS accessing the HPET mapping w/o an HPET
    x86, vdso: Remove vestiges of VDSO_PRELINK and some outdated comments
    x86, vdso: Move the vvar and hpet mappings next to the 64-bit vDSO
    x86, vdso: Move the 32-bit vdso special pages after the text
    x86, vdso: Reimplement vdso.so preparation in build-time C
    x86, vdso: Move syscall and sysenter setup into kernel/cpu/common.c
    x86, vdso: Clean up 32-bit vs 64-bit vdso params
    x86, mm: Ensure correct alignment of the fixmap

    Linus Torvalds
     
  • zswap_dstmem is a percpu block of memory, which should be allocated using
    kmalloc_node(), to get better NUMA locality.

    Without it, all the blocks are allocated from a single node.
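
    Schematically, each CPU's buffer ends up allocated on that CPU's local
    node (the loop shape and buffer size here are assumptions for
    illustration):

    for_each_possible_cpu(cpu) {
        u8 *dst = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL,
                               cpu_to_node(cpu));
        per_cpu(zswap_dstmem, cpu) = dst;
    }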

    Signed-off-by: Eric Dumazet
    Acked-by: Seth Jennings
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Dumazet
     
    Now we can build zsmalloc as a module, because unmap_kernel_range was
    exported.

    Signed-off-by: Minchan Kim
    Cc: Nitin Gupta
    Cc: Sergey Senozhatsky
    Cc: Jerome Marchand
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
    zsmalloc needs unmap_kernel_range exported in order to be built as a
    module. See https://lkml.org/lkml/2013/1/18/487

    I didn't send a patch to make unmap_kernel_range exportable at that time
    because zram was staging stuff, and exporting VM functions for staging
    stuff made no sense.

    Now zsmalloc has been promoted. If we can't build zsmalloc as a module,
    we can't build zram as a module either. Additionally, its buddy
    map_vm_area is already exported, so let's export unmap_kernel_range to
    help its buddy.

    Signed-off-by: Minchan Kim
    Cc: Nitin Gupta
    Cc: Sergey Senozhatsky
    Cc: Jerome Marchand
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
    By calculation, the ZS_SIZE_CLASSES value is 255 on systems with a 4K
    page size, not 254. The old value apparently forgot to count
    ZS_MIN_ALLOC_SIZE in.

    This patch fixes this trivial issue in the comments.

    Signed-off-by: Weijie Yang
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Weijie Yang
     
    zbud_alloc() is only called by zswap_frontswap_store() with an unsigned
    int len. Change the function parameter accordingly and update the >= 0
    check.

    Signed-off-by: Fabian Frederick
    Acked-by: Seth Jennings
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
    mem_cgroup_force_empty_list() can iterate a large number of pages on an
    lru, and mem_cgroup_move_parent() doesn't return an errno unless certain
    criteria are met - none of which indicate that the iteration may be
    taking too long.

    We have encountered the following stack trace many times indicating
    "need_resched set for > 51000020 ns (51 ticks) without schedule", for
    example:

    scheduler_tick()

    mem_cgroup_move_account+0x4d/0x1d5
    mem_cgroup_move_parent+0x8d/0x109
    mem_cgroup_reparent_charges+0x149/0x2ba
    mem_cgroup_css_offline+0xeb/0x11b
    cgroup_offline_fn+0x68/0x16b
    process_one_work+0x129/0x350

    If this iteration is taking too long, we still need to do cond_resched()
    even when an individual page is not busy.
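
    Schematically, the fix moves the reschedule to every pass of the loop
    (simplified sketch; surrounding logic omitted):

    do {
        cond_resched();     /* every iteration, not only when a page is busy */

        page = list_entry(list->prev, struct page, lru);
        pc = lookup_page_cgroup(page);
        if (mem_cgroup_move_parent(page, pc, memcg))
            busy = page;    /* busy page, will be retried */
        else
            busy = NULL;
    } while (!list_empty(list));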

    [rientjes@google.com: changelog]
    Signed-off-by: Hugh Dickins
    Signed-off-by: David Rientjes
    Acked-by: Johannes Weiner
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
    Currently the memory error handler handles action-optional errors in a
    deferred manner by default, and if a recovery-aware application wants to
    handle them immediately, it can do so by setting the PF_MCE_EARLY flag.
    However, such a signal can be sent only to the main thread, so it's
    problematic if the application wants to have a dedicated thread to
    handle such signals.

    So this patch adds dedicated thread support to the memory error handler.
    We have a PF_MCE_EARLY flag for each thread separately, so with this
    patch the AO signal is sent to the thread with PF_MCE_EARLY set, not the
    main thread. If you want to implement a dedicated thread, you call
    prctl() to set PF_MCE_EARLY on that thread.
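
    From user space the per-thread flag is set with the PR_MCE_KILL prctl;
    for example (the function name is illustrative):

    #include <sys/prctl.h>

    static void become_memory_error_handler_thread(void)
    {
        /* request early (synchronous) SIGBUS delivery of action-optional
         * memory errors to this thread */
        prctl(PR_MCE_KILL, PR_MCE_KILL_SET, PR_MCE_KILL_EARLY, 0, 0);
    }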

    The memory error handler collects processes to be killed, so this patch
    lets it check the PF_MCE_EARLY flag on each thread in the collecting
    routines.

    No behavioral change for all non-early kill cases.

    Tony said:

    : The old behavior was crazy - someone with a multithreaded process might
    : well expect that if they call prctl(PF_MCE_EARLY) in just one thread, then
    : that thread would see the SIGBUS with si_code = BUS_MCEERR_AO - even if
    : that thread wasn't the main thread for the process.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Naoya Horiguchi
    Reviewed-by: Tony Luck
    Cc: Kamil Iskra
    Cc: Andi Kleen
    Cc: Borislav Petkov
    Cc: Chen Gong
    Cc: [3.2+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     
  • When Linux sees an "action optional" machine check (where h/w has reported
    an error that is not in the current execution path) we generally do not
    want to signal a process, since most processes do not have a SIGBUS
    handler - we'd just prematurely terminate the process for a problem that
    they might never actually see.

    task_early_kill() decides whether to consider a process - and it checks
    whether this specific process has been marked for early signals with
    "prctl", or if the system administrator has requested early signals for
    all processes using /proc/sys/vm/memory_failure_early_kill.

    But in the MF_ACTION_REQUIRED case we must not defer. The error is in
    the execution path of the current thread so we must send the SIGBUS
    immediately.

    Fix by passing a flag argument through collect_procs*() to
    task_early_kill() so it knows whether we can defer or must take action.

    Signed-off-by: Tony Luck
    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Borislav Petkov
    Cc: Chen Gong
    Cc: [3.2+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Luck
     
  • When a thread in a multi-threaded application hits a machine check because
    of an uncorrectable error in memory - we want to send the SIGBUS with
    si.si_code = BUS_MCEERR_AR to that thread. Currently we fail to do that
    if the active thread is not the primary thread in the process.
    collect_procs() just finds primary threads and this test:

    if ((flags & MF_ACTION_REQUIRED) && t == current) {

    will see that the thread we found isn't the current thread and so send a
    si.si_code = BUS_MCEERR_AO to the primary (and nothing to the active
    thread at this time).

    We can fix this by checking whether "current" shares the same mm with the
    process that collect_procs() said owned the page. If so, we send the
    SIGBUS to current (with code BUS_MCEERR_AR).
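
    Schematically, the check described above (the idea, not the literal
    hunk):

    if ((flags & MF_ACTION_REQUIRED) && t->mm == current->mm)
        t = current;    /* deliver BUS_MCEERR_AR to the faulting thread */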

    Signed-off-by: Tony Luck
    Signed-off-by: Naoya Horiguchi
    Reported-by: Otto Bruggeman
    Cc: Andi Kleen
    Cc: Borislav Petkov
    Cc: Chen Gong
    Cc: [3.2+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Luck
     
    There is an orphaned prehistoric comment which used to be against
    get_dirty_limits(), the forerunner of global_dirtyable_memory().

    Back then, the implementation of get_dirty_limits() was complicated and
    full of magic numbers, so this comment was necessary. But we now use
    the clear and neat global_dirtyable_memory(), which renders this comment
    ambiguous and useless. Remove it.

    Signed-off-by: Jianyu Zhan
    Acked-by: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jianyu Zhan
     
    Since commit ebc2a1a69111 ("swap: make cluster allocation per-cpu"), all
    SWP_SOLIDSTATE ("seek is cheap", i.e. the SSD case) allocations go
    through the si->cluster_info scan_swap_map_try_ssd_cluster() route, so
    the "last_in_cluster < scan_base" loop in the body of scan_swap_map()
    has become dead code and should be deleted.

    This patch deletes the redundant loop, as Hugh and Shaohua suggested.

    [hughd@google.com: fix comment, simplify code]
    Signed-off-by: Chen Yucong
    Cc: Shaohua Li
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chen Yucong
     
    We already have a function named hugepages_supported(), and the similar
    name hugepage_migration_support() is a bit uncomfortable, so let's
    rename it to hugepage_migration_supported().

    Signed-off-by: Naoya Horiguchi
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     
  • Some clarification on how faultaround works.

    [akpm@linux-foundation.org: tweak comment text]
    Signed-off-by: Kirill A. Shutemov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
     
    There is evidence that the faultaround feature is less relevant on
    architectures with a page size bigger than 4k, which makes sense since
    the page-fault overhead per byte of mapped area should be lower there.

    Let's rework the feature to specify the faultaround area in bytes
    instead of page order. It's 64 kilobytes for now.

    The patch effectively disables faultaround on architectures with page
    size >= 64k (like ppc64).

    It's possible that some other faultaround area size is relevant for a
    platform. We can expose the `fault_around_bytes' variable to
    arch-specific code once such platforms are found.
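
    A sketch of the bytes-to-pages conversion this implies (the helper name
    and rounding are assumed):

    static unsigned long fault_around_bytes = 65536;    /* 64KB default */

    static unsigned long fault_around_pages(void)
    {
        return rounddown_pow_of_two(fault_around_bytes) / PAGE_SIZE;
    }

    With a 64k or larger page size this yields at most a single page, which
    effectively disables faultaround, as noted above.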

    Signed-off-by: Kirill A. Shutemov
    Cc: Rusty Russell
    Cc: Hugh Dickins
    Cc: Madhavan Srinivasan
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Andi Kleen
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Dave Hansen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
     
    add_active_range() has been replaced by memblock_set_node(). Clean up
    the comments to comply with that change.

    Signed-off-by: Zhang Zhen
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Zhen
     
    Transform the action part of ttu_flags into individual bits. These
    flags aren't part of any user-space-visible API or even trace events.
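
    Roughly, the action part becomes individual bits (values follow the
    description; modifier bits are unchanged):

    enum ttu_flags {
        TTU_UNMAP       = 1,    /* unmap mode */
        TTU_MIGRATION   = 2,    /* migration mode */
        TTU_MUNLOCK     = 4,    /* munlock mode */
        /* modifier bits such as TTU_IGNORE_MLOCK stay in the high bits */
    };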

    Signed-off-by: Konstantin Khlebnikov
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Konstantin Khlebnikov