27 Jan, 2012

1 commit


25 Jan, 2012

1 commit

  • Davem says:

    1) Fix JIT code generation on x86-64 for divide by zero, from Eric Dumazet.

    2) tg3 header length computation correction from Eric Dumazet.

    3) More build and reference counting fixes for socket memory cgroup
    code from Glauber Costa.

    4) module.h snuck back into a core header after all the hard work we
    did to remove that, from Paul Gortmaker and Jesper Dangaard Brouer.

    5) Fix PHY naming regression and add some new PCI IDs in stmmac, from
    Alessandro Rubini.

    6) Netlink message generation fix in new team driver, should only advertise
    the entries that changed during events, from Jiri Pirko.

    7) SRIOV VF registration and unregistration fixes, and also add a
    missing PCI ID, from Roopa Prabhu.

    8) Fix infinite loop in tx queue flush code of brcmsmac, from Stanislaw Gruszka.

    9) ftgmac100/ftmac100 build fix, missing interrupt.h include.

    10) Memory leak fix in net/hyperv do_set_multicast() handling, from Wei Yongjun.

    11) Off by one fix in netem packet scheduler, from Vijay Subramanian.

    12) TCP loss detection fix from Yuchung Cheng.

    13) TCP reset packet MD5 calculation uses wrong address, fix from Shawn Lu.

    14) skge carrier assertion and DMA mapping fixes from Stephen Hemminger.

    15) Congestion recovery undo performed at the wrong spot in BIC and CUBIC
    congestion control modules, fix from Neal Cardwell.

    16) Ethtool ETHTOOL_GSSET_INFO is unnecessarily restrictive, from Michał Mirosław.

    17) Fix triggerable race in ipv6 sysctl handling, from Francesco Ruggeri.

    18) Statistics bug fixes in mlx4 from Eugenia Emantayev.

    19) rds locking bug fix during info dumps, from yours truly.

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (67 commits)
    rds: Make rds_sock_lock BH rather than IRQ safe.
    netprio_cgroup.h: dont include module.h from other includes
    net: flow_dissector.c missing include linux/export.h
    team: send only changed options/ports via netlink
    net/hyperv: fix possible memory leak in do_set_multicast()
    drivers/net: dsa/mv88e6xxx.c files need linux/module.h
    stmmac: added PCI identifiers
    llc: Fix race condition in llc_ui_recvmsg
    stmmac: fix phy naming inconsistency
    dsa: Add reporting of silicon revision for Marvell 88E6123/88E6161/88E6165 switches.
    tg3: fix ipv6 header length computation
    skge: add byte queue limit support
    mv643xx_eth: Add Rx Discard and Rx Overrun statistics
    bnx2x: fix compilation error with SOE in fw_dump
    bnx2x: handle CHIP_REVISION during init_one
    bnx2x: allow user to change ring size in ISCSI SD mode
    bnx2x: fix Big-Endianess in ethtool -t
    bnx2x: fixed ethtool statistics for MF modes
    bnx2x: credit-leakage fixup on vlan_mac_del_all
    macvlan: fix a possible use after free
    ...

    Linus Torvalds
     

24 Jan, 2012

7 commits

  • Memory migration fills a pte with a migration entry and it doesn't
    update the rss counters. Then it replaces the migration entry with the
    new page (or the old one if migration failed). But between these two
    passes this pte can be unmapped, or a task can fork a child and it will
    get a copy of this migration entry. Nobody accounts for this in the rss
    counters.

    This patch properly adjusts rss counters for migration entries in
    zap_pte_range() and copy_one_pte(). Thus we avoid extra atomic
    operations on the migration fast-path.
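
    For illustration, the shape of the accounting added in zap_pte_range()
    is roughly the following (a sketch using the standard migration-entry
    helpers, not the literal diff; copy_one_pte() gains the mirror-image
    increment):

        if (!pte_present(ptent)) {
                swp_entry_t entry = pte_to_swp_entry(ptent);

                if (!non_swap_entry(entry))
                        rss[MM_SWAPENTS]--;
                else if (is_migration_entry(entry)) {
                        struct page *page = migration_entry_to_page(entry);

                        /* the migration entry still stands in for a mapped page */
                        if (PageAnon(page))
                                rss[MM_ANONPAGES]--;
                        else
                                rss[MM_FILEPAGES]--;
                }
        }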

    Signed-off-by: Konstantin Khlebnikov
    Cc: Hugh Dickins
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Konstantin Khlebnikov
     
  • Commit cc39c6a9bbde ("mm: account skipped entries to avoid looping in
    find_get_pages") correctly fixed an infinite loop; but left a problem
    that find_get_pages() on shmem would return 0 (appearing to callers to
    mean end of tree) when it meets a run of nr_pages swap entries.

    The only uses of find_get_pages() on shmem are via pagevec_lookup(),
    called from invalidate_mapping_pages(), and from shmctl SHM_UNLOCK's
    scan_mapping_unevictable_pages(). The first is already commented, and
    not worth worrying about; but the second can leave pages on the
    Unevictable list after an unusual sequence of swapping and locking.

    Fix that by using shmem_find_get_pages_and_swap() (then ignoring the
    swap) instead of pagevec_lookup().

    But I don't want to contaminate vmscan.c with shmem internals, nor
    shmem.c with LRU locking. So move scan_mapping_unevictable_pages() into
    shmem.c, renaming it shmem_unlock_mapping(); and rename
    check_move_unevictable_page() to check_move_unevictable_pages(), looping
    down an array of pages, oftentimes under the same lock.

    Leave out the "rotate unevictable list" block: that's a leftover from
    when this was used for /proc/sys/vm/scan_unevictable_pages, whose flawed
    handling involved looking at pages at tail of LRU.

    Was there significance to the sequence first ClearPageUnevictable, then
    test page_evictable, then SetPageUnevictable here? I think not, we're
    under LRU lock, and have no barriers between those.
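
    A condensed sketch of the batched form (the per-page LRU bookkeeping is
    elided; only the zone-batching of the lru_lock is shown):

        void check_move_unevictable_pages(struct page **pages, int nr_pages)
        {
                struct zone *zone = NULL;
                int i;

                for (i = 0; i < nr_pages; i++) {
                        struct page *page = pages[i];
                        struct zone *pagezone = page_zone(page);

                        /* take the lru_lock once per run of same-zone pages */
                        if (pagezone != zone) {
                                if (zone)
                                        spin_unlock_irq(&zone->lru_lock);
                                zone = pagezone;
                                spin_lock_irq(&zone->lru_lock);
                        }

                        /* ... move the page back to an evictable LRU if it
                         * is on the unevictable list and now evictable ... */
                }
                if (zone)
                        spin_unlock_irq(&zone->lru_lock);
        }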

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Eric Dumazet
    Cc: Johannes Weiner
    Cc: Michel Lespinasse
    Cc: [back to 3.1 but will need respins]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • scan_mapping_unevictable_pages() is used to make SysV SHM_LOCKed pages
    evictable again once the shared memory is unlocked. It does this with
    pagevec_lookup()s across the whole object (which might occupy most of
    memory), and takes 300ms to unlock 7GB here. A cond_resched() every
    PAGEVEC_SIZE pages would be good.

    However, KOSAKI-san points out that this is called under shmem.c's
    info->lock, and it's also under shm.c's shm_lock(), both spinlocks.
    There is no strong reason for that: we need to take these pages off the
    unevictable list soonish, but those locks are not required for it.

    So move the call to scan_mapping_unevictable_pages() from shmem.c's
    unlock handling up to shm.c's unlock handling. Remove the recently
    added barrier, not needed now we have spin_unlock() before the scan.

    Use get_file(), with subsequent fput(), to make sure we have a reference
    to mapping throughout scan_mapping_unevictable_pages(): that's something
    that was previously guaranteed by the shm_lock().

    Remove shmctl's lru_add_drain_all(): we don't fault in pages at SHM_LOCK
    time, and we lazily discover them to be Unevictable later, so it serves
    no purpose for SHM_LOCK; and serves no purpose for SHM_UNLOCK, since
    pages still on pagevec are not marked Unevictable.

    The original code avoided redundant rescans by checking VM_LOCKED flag
    at its level: now avoid them by checking shp's SHM_LOCKED.

    The original code called scan_mapping_unevictable_pages() on a locked
    area at shm_destroy() time: perhaps we once had accounting cross-checks
    which required that, but not now, so skip the overhead and just let
    inode eviction deal with them.

    Put check_move_unevictable_page() and scan_mapping_unevictable_pages()
    under CONFIG_SHMEM (with stub for the TINY case when ramfs is used),
    more as comment than to save space; comment them used for SHM_UNLOCK.
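
    The resulting SHM_UNLOCK path in shm.c looks roughly like this (a
    sketch, not the literal diff):

        struct file *shm_file = shp->shm_file;

        shp->shm_perm.mode &= ~SHM_LOCKED;
        shp->mlock_user = NULL;
        get_file(shm_file);        /* keep the mapping alive past shm_unlock() */
        shm_unlock(shp);           /* drop the spinlock before the scan */
        scan_mapping_unevictable_pages(shm_file->f_mapping);
        fput(shm_file);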

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Eric Dumazet
    Cc: Johannes Weiner
    Cc: Michel Lespinasse
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Page mapcount should be updated only if we are sure that the page ends
    up in the page table otherwise we would leak if we couldn't COW due to
    reservations or if idx is out of bounds.

    Signed-off-by: Hillf Danton
    Reviewed-by: Michal Hocko
    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hillf Danton
     
  • end_migration() passes the old page instead of the new page to commit
    the charge. This page descriptor is not used for committing itself,
    though, since we also pass the (correct) page_cgroup descriptor. But
    it's used to find the soft limit tree through the page's zone, so the
    soft limit tree of the old page's zone is updated instead of that of the
    new page's, which might get slightly out of date until the next charge
    reaches the ratelimit point.

    This glitch has been present since 5564e88 ("memcg: condense
    page_cgroup-to-page lookup points").

    This fixes a bug that I introduced in 2.6.38. It's benign enough (to my
    knowledge) that we probably don't want this for stable.

    Reported-by: Hugh Dickins
    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Acked-by: Michal Hocko
    Acked-by: Kirill A. Shutemov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • page_zone() requires an online node otherwise we are accessing NULL
    NODE_DATA. This is not an issue at the moment because node_zones are
    located at the structure beginning but this might change in the future
    so better be careful about that.

    Signed-off-by: Michal Hocko
    Signed-off-by: KAMEZAWA Hiroyuki
    Acked-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     
  • Fix the following NULL ptr dereference caused by

    cat /sys/devices/system/memory/memory0/removable

    Pid: 13979, comm: sed Not tainted 3.0.13-0.5-default #1 IBM BladeCenter LS21 -[7971PAM]-/Server Blade
    RIP: __count_immobile_pages+0x4/0x100
    Process sed (pid: 13979, threadinfo ffff880221c36000, task ffff88022e788480)
    Call Trace:
    is_pageblock_removable_nolock+0x34/0x40
    is_mem_section_removable+0x74/0xf0
    show_mem_removable+0x41/0x70
    sysfs_read_file+0xfe/0x1c0
    vfs_read+0xc7/0x130
    sys_read+0x53/0xa0
    system_call_fastpath+0x16/0x1b

    We are crashing because we are trying to dereference NULL zone which
    came from pfn=0 (struct page ffffea0000000000). According to the boot
    log this page is marked reserved:
    e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)

    and early_node_map confirms that:
    early_node_map[3] active PFN ranges
    1: 0x00000010 -> 0x0000009c
    1: 0x00000100 -> 0x000bffa3
    1: 0x00100000 -> 0x00240000

    The problem is that memory_present works in PAGE_SECTION_MASK aligned
    blocks, so the reserved range sneaks into the section as well. This
    also means that free_area_init_node will not take care of those reserved
    pages and they stay uninitialized.

    When we try to read the removable status we walk through all available
    sections and hope that the zone is valid for all pages in the section.
    But this is not true in this case as the zone and nid are not initialized.

    We have only one node in this particular case and it is marked as node=1
    (rather than 0), which made the problem visible: page_to_nid returns 0
    for the uninitialized page, and node 0 has no initialized zones.

    Let's check that the zone is valid and that the given pfn falls within
    its boundaries; if not, mark the section as not removable. This might
    cause some false positives, but we do not have any sane way to find out
    whether the page is reserved by the platform or simply unused for some
    other reason.
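
    The check ends up looking roughly like this in
    is_pageblock_removable_nolock() (a sketch using the zone fields of that
    era):

        bool is_pageblock_removable_nolock(struct page *page)
        {
                struct zone *zone;
                unsigned long pfn;

                /*
                 * We iterate over whole sections, which are not zone aware,
                 * so the page may lie outside any initialized zone.
                 */
                if (!node_online(page_to_nid(page)))
                        return false;

                zone = page_zone(page);
                pfn = page_to_pfn(page);
                if (zone->zone_start_pfn > pfn ||
                    zone->zone_start_pfn + zone->spanned_pages <= pfn)
                        return false;

                return __count_immobile_pages(zone, page, 0);
        }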

    Signed-off-by: Michal Hocko
    Acked-by: Mel Gorman
    Cc: KAMEZAWA Hiroyuki
    Cc: Andrea Arcangeli
    Cc: David Rientjes
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

23 Jan, 2012

1 commit


18 Jan, 2012

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (47 commits)
    tg3: Fix single-vector MSI-X code
    openvswitch: Fix multipart datapath dumps.
    ipv6: fix per device IP snmp counters
    inetpeer: initialize ->redirect_genid in inet_getpeer()
    net: fix NULL-deref in WARN() in skb_gso_segment()
    net: WARN if skb_checksum_help() is called on skb requiring segmentation
    caif: Remove bad WARN_ON in caif_dev
    caif: Fix typo in Vendor/Product-ID for CAIF modems
    bnx2x: Disable AN KR work-around for BCM57810
    bnx2x: Remove AutoGrEEEn for BCM84833
    bnx2x: Remove 100Mb force speed for BCM84833
    bnx2x: Fix PFC setting on BCM57840
    bnx2x: Fix Super-Isolate mode for BCM84833
    net: fix some sparse errors
    net: kill duplicate included header
    net: sh-eth: Fix build error by the value which is not defined
    net: Use device model to get driver name in skb_gso_segment()
    bridge: BH already disabled in br_fdb_cleanup()
    net: move sock_update_memcg outside of CONFIG_INET
    mwl8k: Fixing Sparse ENDIAN CHECK warning
    ...

    Linus Torvalds
     

17 Jan, 2012

1 commit

  • Although currently only used for tcp sockets, this function
    is now used in common sock code (for sock_clone())

    Commit 475f1b52645a29936b9df1d8fcd45f7e56bd4a9f moved the
    declaration of sock_update_clone() to inside sock.c, but
    this only fixes the problem when CONFIG_CGROUP_MEM_RES_CTLR_KMEM
    is also not defined.

    This patch here is verified to fix both problems, although
    reverting the previous one is not necessary.

    Signed-off-by: Glauber Costa
    CC: David S. Miller
    CC: Stephen Rothwell
    Reported-by: Randy Dunlap
    Acked-by: Randy Dunlap
    Signed-off-by: David S. Miller

    Glauber Costa
     

16 Jan, 2012

1 commit

  • 7bd0b0f0da ("memblock: Reimplement memblock allocation using
    reverse free area iterator") implemented a simple top-down
    allocator using a reverse memblock iterator. To avoid underflow
    in the allocator loop, it simply raised the lower boundary to
    the requested size under the assumption that requested size
    would be far smaller than available memblocks.

    This causes early page table allocation failure under certain
    configurations in Xen. Fix it by checking for underflow directly
    instead of bumping up lower bound.
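
    The search loop then checks for underflow explicitly, roughly (a sketch
    of the reverse-iterator loop in memblock's range finder):

        for_each_free_mem_range_reverse(i, nid, &this_start, &this_end, NULL) {
                this_start = clamp(this_start, start, end);
                this_end = clamp(this_end, start, end);

                /* check for underflow instead of bumping up the lower bound */
                if (this_end < size)
                        continue;

                cand = round_down(this_end - size, align);
                if (cand >= this_start)
                        return cand;
        }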

    Signed-off-by: Tejun Heo
    Reported-by: Konrad Rzeszutek Wilk
    Cc: rjw@sisk.pl
    Cc: xen-devel@lists.xensource.com
    Cc: Benjamin Herrenschmidt
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/20120113181412.GA11112@google.com
    Signed-off-by: Ingo Molnar

    Tejun Heo
     

15 Jan, 2012

1 commit

  • Kmemleak patches

    Main features:
    - Handle percpu memory allocations (only scanning them, not actually
    reporting).
    - Memory hotplug support.

    Usability improvements:
    - Show the origin of early allocations.
    - Report previously found leaks even if kmemleak has been disabled by
    some error.

    * tag 'kmemleak' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux:
    kmemleak: Add support for memory hotplug
    kmemleak: Handle percpu memory allocation
    kmemleak: Report previously found leaks even after an error
    kmemleak: When the early log buffer is exceeded, report the actual number
    kmemleak: Show where early_log issues come from

    Linus Torvalds
     

13 Jan, 2012

26 commits

  • If either of the vas or vms arrays are not properly kzalloced, then the
    code jumps to the err_free label.

    The err_free label runs a loop that checks and frees each member of the
    vas and vms arrays, which is not required in this situation as none of
    the array members have been allocated yet.

    Eliminate the extra loop we have to go through by introducing a new label
    err_free2 and then jumping to it.
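
    The resulting error handling (presumably in mm/vmalloc.c's
    pcpu_get_vm_areas(), where these arrays live) looks roughly like:

        vas = kzalloc(sizeof(vas[0]) * nr_vms, GFP_KERNEL);
        vms = kzalloc(sizeof(vms[0]) * nr_vms, GFP_KERNEL);
        if (!vas || !vms)
                goto err_free2;    /* nothing inside the arrays is allocated yet */
        ...
        err_free:
                for (area = 0; area < nr_vms; area++) {
                        kfree(vas[area]);
                        kfree(vms[area]);
                }
        err_free2:
                kfree(vas);
                kfree(vms);
                return NULL;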

    [akpm@linux-foundation.org: remove now-unneeded tests]
    Signed-off-by: Kautuk Consul
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kautuk Consul
     
  • There is sometimes confusion between the global putback_lru_pages() in
    migrate.c and the static putback_lru_pages() in vmscan.c: rename the
    latter putback_inactive_pages(): it helps shrink_inactive_list() rather as
    move_active_pages_to_lru() helps shrink_active_list().

    Remove unused scan_control arg from putback_inactive_pages() and from
    update_isolated_counts(). Move clear_active_flags() inside
    update_isolated_counts(). Move NR_ISOLATED accounting up into
    shrink_inactive_list() itself, so the balance is clearer.

    Do the spin_lock_irq() before calling putback_inactive_pages() and
    spin_unlock_irq() after return from it, so that it better matches
    update_isolated_counts() and move_active_pages_to_lru().

    Signed-off-by: Hugh Dickins
    Cc: Johannes Weiner
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • The isolate_pages() level in vmscan.c offers little but indirection: merge
    it into isolate_lru_pages() as the compiler does, and use the names
    nr_to_scan and nr_scanned in each case.

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • del_page_from_lru() repeats del_page_from_lru_list(), also working out
    which LRU the page was on, clearing the relevant bits. Decouple those
    functions: remove del_page_from_lru() and add page_off_lru().

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Mostly we use "enum lru_list lru": change those few "l"s to "lru"s.

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • checkpatch rightly protests

    WARNING: EXPORT_SYMBOL(foo); should immediately follow its function/variable

    so fix the five offenders in mm/swap.c.
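
    For illustration, with one of the mm/swap.c exports the fix is simply
    moving the macro next to the definition:

        void mark_page_accessed(struct page *page)
        {
                ...
        }
        EXPORT_SYMBOL(mark_page_accessed);    /* immediately follows */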

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • What's so special about ____pagevec_lru_add() that it needs four leading
    underscores? Nothing, it just helped to distinguish from
    __pagevec_lru_add() in 2.6.28 development. Cut two leading underscores.

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Replace pagevecs in putback_lru_pages() and move_active_pages_to_lru()
    by lists of pages_to_free: then apply Konstantin Khlebnikov's
    free_hot_cold_page_list() to them instead of pagevec_release().

    Which simplifies the flow (no need to drop and retake lock whenever
    pagevec fills up) and reduces stale addresses in stack backtraces
    (which often showed through the pagevecs); but more importantly,
    removes another 120 bytes from the deepest stacks in page reclaim.
    Although I've not recently seen an actual stack overflow here with
    a vanilla kernel, move_active_pages_to_lru() has often featured in
    deep backtraces.

    However, free_hot_cold_page_list() does not handle compound pages
    (nor need it: a Transparent HugePage would have been split by the
    time it reaches the call in shrink_page_list()), but it is possible
    for putback_lru_pages() or move_active_pages_to_lru() to be left
    holding the last reference on a THP, so must exclude the unlikely
    compound case before putting on pages_to_free.

    Remove pagevec_strip(), its work now done in move_active_pages_to_lru().
    The pagevec in scan_mapping_unevictable_pages() remains in mm/vmscan.c,
    but that is never on the reclaim path, and cannot be replaced by a list.
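
    The putback loops end up with roughly this shape (a sketch; the LRU
    stat updates are elided):

        if (put_page_testzero(page)) {
                __ClearPageLRU(page);
                __ClearPageActive(page);
                del_page_from_lru_list(zone, page, lru);

                if (unlikely(PageCompound(page))) {
                        /* last reference on a THP: free it outside the lru_lock */
                        spin_unlock_irq(&zone->lru_lock);
                        (*get_compound_page_dtor(page))(page);
                        spin_lock_irq(&zone->lru_lock);
                } else
                        list_add(&page->lru, &pages_to_free);
        }
        ...
        free_hot_cold_page_list(&pages_to_free, 1);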

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Reviewed-by: Konstantin Khlebnikov
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • If DEBUG_VM, mem_cgroup_print_bad_page() is called whenever bad_page()
    shows a "Bad page state" message, removes page from circulation, adds a
    taint and continues. This is at a very low level, often when a spinlock
    is held (sometimes when page table lock is held, for example).

    We want to recover from this badness, not make it worse: we must not
    kmalloc memory here, we must not do a cgroup path lookup via dubious
    pointers. No doubt that code was useful to debug a particular case at one
    time, and may be again, but take it out of the mainline kernel.

    Signed-off-by: Hugh Dickins
    Cc: Daisuke Nishimura
    Cc: KAMEZAWA Hiroyuki
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • This patch started off as a cleanup: __split_huge_page_refcounts() has to
    cope with two scenarios, when the hugepage being split is already on LRU,
    and when it is not; but why does it have to split that accounting across
    three different sites? Consolidate it in lru_add_page_tail(), handling
    evictable and unevictable alike, and use standard add_page_to_lru_list()
    when accounting is needed (when the head is not yet on LRU).

    But a recent regression in -next (I guess from the removal of the
    PageCgroupAcctLRU test in mem_cgroup_split_huge_fixup()) makes this a
    necessary fix:
    under load, the MEM_CGROUP_ZSTAT count was wrapping to a huge number,
    messing up reclaim calculations and causing a freeze at rmdir of cgroup.

    Add a VM_BUG_ON to mem_cgroup_lru_del_list() when we're about to wrap that
    count - this has not been the only such incident. Document that
    lru_add_page_tail() is for Transparent HugePages by #ifdef around it.

    Signed-off-by: Hugh Dickins
    Cc: Daisuke Nishimura
    Cc: KAMEZAWA Hiroyuki
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • If compaction can proceed for a given zone, shrink_zones() does not
    reclaim any more pages from it. After commit [e0c2327: vmscan: abort
    reclaim/compaction if compaction can proceed], do_try_to_free_pages()
    tries to finish as soon as possible once one zone can compact.

    This was intended to prevent slabs being shrunk unnecessarily but there
    are side-effects. One is that a small zone that is ready for compaction
    will abort reclaim even if the chances of successfully allocating a THP
    from that zone is small. It also means that reclaim can return too early
    even though sc->nr_to_reclaim pages were not reclaimed.

    This partially reverts the commit until it is proven that slabs are really
    being shrunk unnecessarily but preserves the check to return 1 to avoid
    OOM if reclaim was aborted prematurely.

    [aarcange@redhat.com: This patch replaces a revert from Andrea]
    Signed-off-by: Mel Gorman
    Reviewed-by: Rik van Riel
    Cc: Andrea Arcangeli
    Cc: Minchan Kim
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • In commit e0887c19 ("vmscan: limit direct reclaim for higher order
    allocations"), Rik noted that reclaim was too aggressive when THP was
    enabled. In his initial patch he used the number of free pages to decide
    if reclaim should abort for compaction. My feedback was that reclaim and
    compaction should be using the same logic when deciding if reclaim should
    be aborted.

    Unfortunately, this had the effect of reducing THP success rates when the
    workload included something like streaming reads that continually
    allocated pages. The window during which compaction could run and return
    a THP was too small.

    This patch combines Rik's two patches together. compaction_suitable() is
    still used to decide if reclaim should be aborted to allow compaction.
    However, it will also ensure that there is a reasonable buffer of
    free pages available. This improves upon the THP allocation success rates
    but bounds the number of pages that are freed for compaction.

    Signed-off-by: Mel Gorman
    Reviewed-by: Rik van Riel
    Cc: Andrea Arcangeli
    Cc: Minchan Kim
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • This patch adds a lightweight sync migrate operation MIGRATE_SYNC_LIGHT
    mode that avoids writing back pages to backing storage. Async compaction
    maps to MIGRATE_ASYNC while sync compaction maps to MIGRATE_SYNC_LIGHT.
    For other migrate_pages users such as memory hotplug, MIGRATE_SYNC is
    used.

    This avoids sync compaction stalling for an excessive length of time,
    particularly when copying files to a USB stick where there might be a
    large number of dirty pages backed by a filesystem that does not support
    ->writepages.

    [aarcange@redhat.com: This patch is heavily based on Andrea's work]
    [akpm@linux-foundation.org: fix fs/nfs/write.c build]
    [akpm@linux-foundation.org: fix fs/btrfs/disk-io.c build]
    Signed-off-by: Mel Gorman
    Reviewed-by: Rik van Riel
    Cc: Andrea Arcangeli
    Cc: Minchan Kim
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • If compaction is deferred, direct reclaim is used to try to free enough
    pages for the allocation to succeed. For small high-orders, this has a
    reasonable chance of success. However, if the caller has specified
    __GFP_NO_KSWAPD to limit the disruption to the system, it makes more sense
    to fail the allocation rather than stall the caller in direct reclaim.
    This patch skips direct reclaim if compaction is deferred and the caller
    specifies __GFP_NO_KSWAPD.

    Async compaction only considers a subset of pages so it is possible for
    compaction to be deferred prematurely and not enter direct reclaim even in
    cases where it should. To compensate for this, this patch also defers
    compaction only if sync compaction failed.

    Signed-off-by: Mel Gorman
    Acked-by: Minchan Kim
    Reviewed-by: Rik van Riel
    Cc: Andrea Arcangeli
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Commit 39deaf85 ("mm: compaction: make isolate_lru_page() filter-aware")
    noted that compaction does not migrate dirty or writeback pages and that
    it was meaningless to pick the page and re-add it to the LRU list. This
    had to be partially reverted because some dirty pages can be migrated by
    compaction without blocking.

    This patch updates "mm: compaction: make isolate_lru_page" by skipping
    over pages that migration has no possibility of migrating to minimise LRU
    disruption.

    Signed-off-by: Mel Gorman
    Reviewed-by: Rik van Riel
    Cc: Andrea Arcangeli
    Reviewed-by: Minchan Kim
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Asynchronous compaction is used when allocating transparent hugepages to
    avoid blocking for long periods of time. Due to reports of stalling,
    there was a debate on disabling synchronous compaction but this severely
    impacted allocation success rates. Part of the reason was that many dirty
    pages are skipped in asynchronous compaction by the following check:

        if (PageDirty(page) && !sync &&
            mapping->a_ops->migratepage != migrate_page)
                rc = -EBUSY;

    This skips over all mapping aops using buffer_migrate_page() even though
    it is possible to migrate some of these pages without blocking. This
    patch updates the ->migratepage callback with a "sync" parameter. It is
    the responsibility of the callback to fail gracefully if migration would
    block.
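
    The callback in struct address_space_operations therefore grows an
    extra argument; its final form after the series is roughly:

        int (*migratepage)(struct address_space *mapping,
                           struct page *newpage, struct page *page,
                           enum migrate_mode mode);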

    Signed-off-by: Mel Gorman
    Reviewed-by: Rik van Riel
    Cc: Andrea Arcangeli
    Cc: Minchan Kim
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • During direct reclaim it is possible that reclaim will be aborted so that
    compaction can be attempted to satisfy a high-order allocation. If this
    decision is made before any pages are reclaimed, it is possible that 0 is
    returned to the page allocator potentially triggering an OOM. This has
    not been observed but it is a possibility so this patch addresses it.

    Signed-off-by: Mel Gorman
    Reviewed-by: Rik van Riel
    Cc: Andrea Arcangeli
    Cc: Minchan Kim
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Properly take into account if we isolated a compound page during the lumpy
    scan in reclaim and skip over the tail pages when encountered. This
    corrects the values given to the tracepoint for the number of lumpy pages
    isolated and will avoid breaking the loop early if compound pages smaller
    than the requested allocation size are encountered.

    [mgorman@suse.de: Updated changelog]
    Signed-off-by: Andrea Arcangeli
    Signed-off-by: Mel Gorman
    Reviewed-by: Minchan Kim
    Reviewed-by: Rik van Riel
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     
  • When asynchronous compaction was introduced, the
    /proc/sys/vm/compact_memory handler should have been updated to always use
    synchronous compaction. This did not happen so this patch addresses it.

    The assumption is if a user writes to /proc/sys/vm/compact_memory, they
    are willing for that process to stall.
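
    In compact_node() this amounts to requesting synchronous mode in the
    compact_control it builds, roughly (a sketch, assuming the field names
    of that era):

        struct compact_control cc = {
                .order = -1,        /* compact the whole zone */
                .sync = true,       /* writers to the proc file accept stalls */
        };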

    Signed-off-by: Mel Gorman
    Reviewed-by: Andrea Arcangeli
    Reviewed-by: Rik van Riel
    Reviewed-by: Minchan Kim
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Short summary: There are severe stalls when a USB stick using VFAT is
    used with THP enabled, and they are reduced by this series. If you are
    experiencing this problem, please test and report back; considering I
    have seen complaints from openSUSE and Fedora users on this, as well as
    a few private mails, I'm guessing it's a widespread issue. This is a new
    type of USB-related stall because it is due to synchronous compaction
    writing, whereas in the past the big problem was dirty pages reaching
    the end of the LRU and being written by reclaim.

    Am cc'ing Andrew this time and this series would replace
    mm-do-not-stall-in-synchronous-compaction-for-thp-allocations.patch.
    I'm also cc'ing Dave Jones as he might have merged that patch to Fedora
    for wider testing and ideally it would be reverted and replaced by this
    series.

    That said, the later patches could really do with some review. If this
    series is not the answer then a new direction needs to be discussed
    because as it is, the stalls are unacceptable as the results in this
    leader show.

    For testers that try backporting this to 3.1, it won't work because
    there is a non-obvious dependency on not writing back pages in direct
    reclaim so you need those patches too.

    Changelog since V5
    o Rebase to 3.2-rc5
    o Tidy up the changelogs a bit

    Changelog since V4
    o Added reviewed-bys, credited Andrea properly for sync-light
    o Allow dirty pages without mappings to be considered for migration
    o Bound the number of pages freed for compaction
    o Isolate PageReclaim pages on their own LRU list

    This is against 3.2-rc5 and follows on from discussions on "mm: Do
    not stall in synchronous compaction for THP allocations" and "[RFC
    PATCH 0/5] Reduce compaction-related stalls". Initially, the proposed
    patch eliminated stalls due to compaction which sometimes resulted in
    user-visible interactivity problems on browsers by simply never using
    sync compaction. The downside was that THP success allocation rates
    were lower because dirty pages were not being migrated as reported by
    Andrea. His approach at fixing this was nacked on the grounds that
    it reverted fixes from Rik that had been merged to reduce the number of
    pages reclaimed, as the extra reclaim severely impacted his workloads'
    performance.

    This series attempts to reconcile the requirements of maximising THP
    usage, without stalling in a user-visible fashion due to compaction
    or cheating by reclaiming an excessive number of pages.

    Patch 1 partially reverts commit 39deaf85 to allow migration to isolate
    dirty pages. This is because migration can move some dirty
    pages without blocking.

    Patch 2 notes that the /proc/sys/vm/compact_memory handler is not using
    synchronous compaction when it should be. This is unrelated
    to the reported stalls but is worth fixing.

    Patch 3 checks if we isolated a compound page during lumpy scan and
    account for it properly. For the most part, this affects
    tracing so it's unrelated to the stalls but worth fixing.

    Patch 4 notes that it is possible to abort reclaim early for compaction
    and return 0 to the page allocator potentially entering the
    "may oom" path. This has not been observed in practice but
    the rest of the series potentially makes it easier to happen.

    Patch 5 adds a sync parameter to the migratepage callback and gives
    the callback responsibility for migrating the page without
    blocking if sync==false. For example, fallback_migrate_page
    will not call writepage if sync==false. This increases the
    number of pages that can be handled by asynchronous compaction
    thereby reducing stalls.

    Patch 6 restores filter-awareness to isolate_lru_page for migration.
    In practice, it means that pages under writeback and pages
    without a ->migratepage callback will not be isolated
    for migration.

    Patch 7 avoids calling direct reclaim if compaction is deferred but
    makes sure that compaction is only deferred if sync
    compaction was used.

    Patch 8 introduces a sync-light migration mechanism that sync compaction
    uses. The objective is to allow some stalls but to not call
    ->writepage which can lead to significant user-visible stalls.

    Patch 9 notes that while we want to abort reclaim ASAP to allow
    compaction to go ahead, we leave a very small window of opportunity
    for compaction to run. This patch allows more pages
    to be freed by reclaim but bounds the number to a reasonable
    level based on the high watermark on each zone.

    Patch 10 allows slabs to be shrunk even after compaction_ready() is
    true for one zone. This is to avoid a problem whereby a single
    small zone can abort reclaim even though no pages have been
    reclaimed and no suitably large zone is in a usable state.

    Patch 11 fixes a problem with the rate of page scanning. As reclaim is
    rarely stalling on pages under writeback it means that scan
    rates are very high. This is particularly true for direct
    reclaim which is not calling writepage. The vmstat figures
    implied that much of this was busy work with PageReclaim pages
    marked for immediate reclaim. This patch is a prototype that
    moves these pages to their own LRU list.

    This has been tested and other than 2 USB keys getting trashed,
    nothing horrible fell out. That said, I am a bit unhappy with the
    rescue logic in patch 11 but did not find a better way around it. It
    does significantly reduce scan rates and System CPU time indicating
    it is the right direction to take.

    What is of critical importance is that stalls due to compaction
    are massively reduced even though sync compaction was still
    allowed. Testing from people complaining about stalls copying to USBs
    with THP enabled are particularly welcome.

    The following tests all involve THP usage and USB keys in some
    way. Each test follows this type of pattern

    1. Read from some fast storage, be it a raw device or file. Each time
    the copy finishes, start again until the test ends
    2. Write a large file to a filesystem on a USB stick. Each time the copy
    finishes, start again until the test ends
    3. When memory is low, start an alloc process that creates a mapping
    the size of physical memory to stress THP allocation. This is the
    "real" part of the test and the part that is meant to trigger
    stalls when THP is enabled. Copying continues in the background.
    4. Record the CPU usage and time to execute of the alloc process
    5. Record the number of THP allocs and fallbacks as well as the number of THP
    pages in use at the end of the test just before alloc exited
    6. Run the test 5 times to get an idea of variability
    7. Between each run, sync is run and caches dropped and the test
    waits until nr_dirty is a small number to avoid interference
    or caching between iterations that would skew the figures.

    The individual tests were then

    writebackCPDeviceBasevfat
    Disable THP, read from a raw device (sda), vfat on USB stick
    writebackCPDeviceBaseext4
    Disable THP, read from a raw device (sda), ext4 on USB stick
    writebackCPDevicevfat
    THP enabled, read from a raw device (sda), vfat on USB stick
    writebackCPDeviceext4
    THP enabled, read from a raw device (sda), ext4 on USB stick
    writebackCPFilevfat
    THP enabled, read from a file on fast storage and USB, both vfat
    writebackCPFileext4
    THP enabled, read from a file on fast storage and USB, both ext4

    The kernels tested were

    3.1 3.1
    vanilla 3.2-rc5
    freemore Patches 1-10
    immediate Patches 1-11
    andrea The 8 patches Andrea posted as a basis of comparison

    The results are very long unfortunately. I'll start with the case
    where we are not using THP at all

    writebackCPDeviceBasevfat
    3.1.0-vanilla rc5-vanilla freemore-v6r1 isolate-v6r1 andrea-v2r1
    System Time 1.28 ( 0.00%) 54.49 (-4143.46%) 48.63 (-3687.69%) 4.69 ( -265.11%) 51.88 (-3940.81%)
    +/- 0.06 ( 0.00%) 2.45 (-4305.55%) 4.75 (-8430.57%) 7.46 (-13282.76%) 4.76 (-8440.70%)
    User Time 0.09 ( 0.00%) 0.05 ( 40.91%) 0.06 ( 29.55%) 0.07 ( 15.91%) 0.06 ( 27.27%)
    +/- 0.02 ( 0.00%) 0.01 ( 45.39%) 0.02 ( 25.07%) 0.00 ( 77.06%) 0.01 ( 52.24%)
    Elapsed Time 110.27 ( 0.00%) 56.38 ( 48.87%) 49.95 ( 54.70%) 11.77 ( 89.33%) 53.43 ( 51.54%)
    +/- 7.33 ( 0.00%) 3.77 ( 48.61%) 4.94 ( 32.63%) 6.71 ( 8.50%) 4.76 ( 35.03%)
    THP Active 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
    +/- 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
    Fault Alloc 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
    +/- 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
    Fault Fallback 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
    +/- 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)

    The THP figures are obviously all 0 because THP was disabled. The
    main thing to watch is the elapsed times and how they compare to
    the times when THP is enabled later. It's also important to note that
    elapsed time is improved by this series as System CPU time is much
    reduced.

    writebackCPDevicevfat

    3.1.0-vanilla rc5-vanilla freemore-v6r1 isolate-v6r1 andrea-v2r1
    System Time 1.22 ( 0.00%) 13.89 (-1040.72%) 46.40 (-3709.20%) 4.44 ( -264.37%) 47.37 (-3789.33%)
    +/- 0.06 ( 0.00%) 22.82 (-37635.56%) 3.84 (-6249.44%) 6.48 (-10618.92%) 6.60 (-10818.53%)
    User Time 0.06 ( 0.00%) 0.06 ( -6.90%) 0.05 ( 17.24%) 0.05 ( 13.79%) 0.04 ( 31.03%)
    +/- 0.01 ( 0.00%) 0.01 ( 33.33%) 0.01 ( 33.33%) 0.01 ( 39.14%) 0.01 ( 25.46%)
    Elapsed Time 10445.54 ( 0.00%) 2249.92 ( 78.46%) 70.06 ( 99.33%) 16.59 ( 99.84%) 472.43 ( 95.48%)
    +/- 643.98 ( 0.00%) 811.62 ( -26.03%) 10.02 ( 98.44%) 7.03 ( 98.91%) 59.99 ( 90.68%)
    THP Active 15.60 ( 0.00%) 35.20 ( 225.64%) 65.00 ( 416.67%) 70.80 ( 453.85%) 62.20 ( 398.72%)
    +/- 18.48 ( 0.00%) 51.29 ( 277.59%) 15.99 ( 86.52%) 37.91 ( 205.18%) 22.02 ( 119.18%)
    Fault Alloc 121.80 ( 0.00%) 76.60 ( 62.89%) 155.40 ( 127.59%) 181.20 ( 148.77%) 286.60 ( 235.30%)
    +/- 73.51 ( 0.00%) 61.11 ( 83.12%) 34.89 ( 47.46%) 31.88 ( 43.36%) 68.13 ( 92.68%)
    Fault Fallback 881.20 ( 0.00%) 926.60 ( -5.15%) 847.60 ( 3.81%) 822.00 ( 6.72%) 716.60 ( 18.68%)
    +/- 73.51 ( 0.00%) 61.26 ( 16.67%) 34.89 ( 52.54%) 31.65 ( 56.94%) 67.75 ( 7.84%)
    MMTests Statistics: duration
    User/Sys Time Running Test (seconds) 3540.88 1945.37 716.04 64.97 1937.03
    Total Elapsed Time (seconds) 52417.33 11425.90 501.02 230.95 2520.28

    The first thing to note is the "Elapsed Time" for the vanilla kernels
    of 2249 seconds versus 56 with THP disabled which might explain the
    reports of USB stalls with THP enabled. Applying the patches brings
    performance in line with THP-disabled performance while isolating
    pages for immediate reclaim from the LRU cuts down System CPU time.

    The "Fault Alloc" success rate figures are also improved. The vanilla
    kernel only managed to allocate 76.6 pages on average over the course
    of 5 iterations where as applying the series allocated 181.20 on
    average albeit it is well within variance. It's worth noting that
    applies the series at least descreases the amount of variance which
    implies an improvement.

    Andrea's series had a higher success rate for THP allocations but
    at a severe cost to elapsed time which is still better than vanilla
    but still much worse than disabling THP altogether. One can bring my
    series close to Andrea's by removing this check

        /*
         * If compaction is deferred for high-order allocations, it is because
         * sync compaction recently failed. If this is the case and the caller
         * has requested the system not be heavily disrupted, fail the
         * allocation now instead of entering direct reclaim
         */
        if (deferred_compaction && (gfp_mask & __GFP_NO_KSWAPD))
                goto nopage;

    I didn't include a patch that removed the above check because hurting
    overall performance to improve the THP figure is not what the average
    user wants. It's something to consider though if someone really wants
    to maximise THP usage no matter what it does to the workload initially.

    This is summary of vmstat figures from the same test.

    3.1.0-vanilla rc5-vanilla freemore-v6r1 isolate-v6r1 andrea-v2r1
    Page Ins 3257266139 1111844061 17263623 10901575 161423219
    Page Outs 81054922 30364312 3626530 3657687 8753730
    Swap Ins 3294 2851 6560 4964 4592
    Swap Outs 390073 528094 620197 790912 698285
    Direct pages scanned 1077581700 3024951463 1764930052 115140570 5901188831
    Kswapd pages scanned 34826043 7112868 2131265 1686942 1893966
    Kswapd pages reclaimed 28950067 4911036 1246044 966475 1497726
    Direct pages reclaimed 805148398 280167837 3623473 2215044 40809360
    Kswapd efficiency 83% 69% 58% 57% 79%
    Kswapd velocity 664.399 622.521 4253.852 7304.360 751.490
    Direct efficiency 74% 9% 0% 1% 0%
    Direct velocity 20557.737 264745.137 3522673.849 498551.938 2341481.435
    Percentage direct scans 96% 99% 99% 98% 99%
    Page writes by reclaim 722646 529174 620319 791018 699198
    Page writes file 332573 1080 122 106 913
    Page writes anon 390073 528094 620197 790912 698285
    Page reclaim immediate 0 2552514720 1635858848 111281140 5478375032
    Page rescued immediate 0 0 0 87848 0
    Slabs scanned 23552 23552 9216 8192 9216
    Direct inode steals 231 0 0 0 0
    Kswapd inode steals 0 0 0 0 0
    Kswapd skipped wait 28076 786 0 61 6
    THP fault alloc 609 383 753 906 1433
    THP collapse alloc 12 6 0 0 6
    THP splits 536 211 456 593 1136
    THP fault fallback 4406 4633 4263 4110 3583
    THP collapse fail 120 127 0 0 4
    Compaction stalls 1810 728 623 779 3200
    Compaction success 196 53 60 80 123
    Compaction failures 1614 675 563 699 3077
    Compaction pages moved 193158 53545 243185 333457 226688
    Compaction move failure 9952 9396 16424 23676 45070

    The main things to look at are

    1. Page In/out figures are much reduced by the series.

    2. Direct page scanning is incredibly high (264745.137 pages scanned
    per second on the vanilla kernel) but isolating PageReclaim pages
    on their own list reduces the number of pages scanned significantly.

    3. The fact that "Page rescued immediate" is a positive number implies
    that we sometimes race removing pages from the LRU_IMMEDIATE list
    that need to be put back on a normal LRU but it happens only for
    0.07% of the pages marked for immediate reclaim.

    writebackCPDeviceext4
    3.1.0-vanilla rc5-vanilla freemore-v6r1 isolate-v6r1 andrea-v2r1
    System Time 1.51 ( 0.00%) 1.77 ( -17.66%) 1.46 ( 2.92%) 1.15 ( 23.77%) 1.89 ( -25.63%)
    +/- 0.27 ( 0.00%) 0.67 ( -148.52%) 0.33 ( -22.76%) 0.30 ( -11.15%) 0.19 ( 30.16%)
    User Time 0.03 ( 0.00%) 0.04 ( -37.50%) 0.05 ( -62.50%) 0.07 ( -112.50%) 0.04 ( -18.75%)
    +/- 0.01 ( 0.00%) 0.02 ( -146.64%) 0.02 ( -97.91%) 0.02 ( -75.59%) 0.02 ( -63.30%)
    Elapsed Time 124.93 ( 0.00%) 114.49 ( 8.36%) 96.77 ( 22.55%) 27.48 ( 78.00%) 205.70 ( -64.65%)
    +/- 20.20 ( 0.00%) 74.39 ( -268.34%) 59.88 ( -196.48%) 7.72 ( 61.79%) 25.03 ( -23.95%)
    THP Active 161.80 ( 0.00%) 83.60 ( 51.67%) 141.20 ( 87.27%) 84.60 ( 52.29%) 82.60 ( 51.05%)
    +/- 71.95 ( 0.00%) 43.80 ( 60.88%) 26.91 ( 37.40%) 59.02 ( 82.03%) 52.13 ( 72.45%)
    Fault Alloc 471.40 ( 0.00%) 228.60 ( 48.49%) 282.20 ( 59.86%) 225.20 ( 47.77%) 388.40 ( 82.39%)
    +/- 88.07 ( 0.00%) 87.42 ( 99.26%) 73.79 ( 83.78%) 109.62 ( 124.47%) 82.62 ( 93.81%)
    Fault Fallback 531.60 ( 0.00%) 774.60 ( -45.71%) 720.80 ( -35.59%) 777.80 ( -46.31%) 614.80 ( -15.65%)
    +/- 88.07 ( 0.00%) 87.26 ( 0.92%) 73.79 ( 16.22%) 109.62 ( -24.47%) 82.29 ( 6.56%)
    MMTests Statistics: duration
    User/Sys Time Running Test (seconds) 50.22 33.76 30.65 24.14 128.45
    Total Elapsed Time (seconds) 1113.73 1132.19 1029.45 759.49 1707.26

    Similar test but the USB stick is using ext4 instead of vfat. As
    ext4 does not use writepage for migration, the large stalls due to
    compaction when THP is enabled are not observed. Still, isolating
    PageReclaim pages on their own list helped completion time largely
    by reducing the number of pages scanned by direct reclaim although
    time spent in congestion_wait could also be a factor.

    Again, Andrea's series had far higher success rates for THP allocation
    at the cost of elapsed time. I didn't look too closely but a quick
    look at the vmstat figures tells me kswapd reclaimed 8 times more pages
    than the patch series and direct reclaim reclaimed roughly three times
    as many pages. It follows that if memory is aggressively reclaimed,
    there will be more available for THP.

    writebackCPFilevfat
    3.1.0-vanilla rc5-vanilla freemore-v6r1 isolate-v6r1 andrea-v2r1
    System Time 1.76 ( 0.00%) 29.10 (-1555.52%) 46.01 (-2517.18%) 4.79 ( -172.35%) 54.89 (-3022.53%)
    +/- 0.14 ( 0.00%) 25.61 (-18185.17%) 2.15 (-1434.83%) 6.60 (-4610.03%) 9.75 (-6863.76%)
    User Time 0.05 ( 0.00%) 0.07 ( -45.83%) 0.05 ( -4.17%) 0.06 ( -29.17%) 0.06 ( -16.67%)
    +/- 0.02 ( 0.00%) 0.02 ( 20.11%) 0.02 ( -3.14%) 0.01 ( 31.58%) 0.01 ( 47.41%)
    Elapsed Time 22520.79 ( 0.00%) 1082.85 ( 95.19%) 73.30 ( 99.67%) 32.43 ( 99.86%) 291.84 ( 98.70%)
    +/- 7277.23 ( 0.00%) 706.29 ( 90.29%) 19.05 ( 99.74%) 17.05 ( 99.77%) 125.55 ( 98.27%)
    THP Active 83.80 ( 0.00%) 12.80 ( 15.27%) 15.60 ( 18.62%) 13.00 ( 15.51%) 0.80 ( 0.95%)
    +/- 66.81 ( 0.00%) 20.19 ( 30.22%) 5.92 ( 8.86%) 15.06 ( 22.54%) 1.17 ( 1.75%)
    Fault Alloc 171.00 ( 0.00%) 67.80 ( 39.65%) 97.40 ( 56.96%) 125.60 ( 73.45%) 133.00 ( 77.78%)
    +/- 82.91 ( 0.00%) 30.69 ( 37.02%) 53.91 ( 65.02%) 55.05 ( 66.40%) 21.19 ( 25.56%)
    Fault Fallback 832.00 ( 0.00%) 935.20 ( -12.40%) 906.00 ( -8.89%) 877.40 ( -5.46%) 870.20 ( -4.59%)
    +/- 82.91 ( 0.00%) 30.69 ( 62.98%) 54.01 ( 34.86%) 55.05 ( 33.60%) 20.91 ( 74.78%)
    MMTests Statistics: duration
    User/Sys Time Running Test (seconds) 7229.81 928.42 704.52 80.68 1330.76
    Total Elapsed Time (seconds) 112849.04 5618.69 571.11 360.54 1664.28

    In this case, the test is reading/writing only from filesystems but as
    it's vfat, it's slow due to calling writepage during compaction. Little
    to observe really - the time to complete the test goes way down
    with the series applied and THP allocation success rates go up in
    comparison to 3.2-rc5. The success rates are lower than 3.1.0 but
    the elapsed time for that kernel is abysmal so it is not really a
    sensible comparison.

    As before, Andrea's series allocates more THPs at the cost of overall
    performance.

    writebackCPFileext4
    3.1.0-vanilla rc5-vanilla freemore-v6r1 isolate-v6r1 andrea-v2r1
    System Time 1.51 ( 0.00%) 1.77 ( -17.66%) 1.46 ( 2.92%) 1.15 ( 23.77%) 1.89 ( -25.63%)
    +/- 0.27 ( 0.00%) 0.67 ( -148.52%) 0.33 ( -22.76%) 0.30 ( -11.15%) 0.19 ( 30.16%)
    User Time 0.03 ( 0.00%) 0.04 ( -37.50%) 0.05 ( -62.50%) 0.07 ( -112.50%) 0.04 ( -18.75%)
    +/- 0.01 ( 0.00%) 0.02 ( -146.64%) 0.02 ( -97.91%) 0.02 ( -75.59%) 0.02 ( -63.30%)
    Elapsed Time 124.93 ( 0.00%) 114.49 ( 8.36%) 96.77 ( 22.55%) 27.48 ( 78.00%) 205.70 ( -64.65%)
    +/- 20.20 ( 0.00%) 74.39 ( -268.34%) 59.88 ( -196.48%) 7.72 ( 61.79%) 25.03 ( -23.95%)
    THP Active 161.80 ( 0.00%) 83.60 ( 51.67%) 141.20 ( 87.27%) 84.60 ( 52.29%) 82.60 ( 51.05%)
    +/- 71.95 ( 0.00%) 43.80 ( 60.88%) 26.91 ( 37.40%) 59.02 ( 82.03%) 52.13 ( 72.45%)
    Fault Alloc 471.40 ( 0.00%) 228.60 ( 48.49%) 282.20 ( 59.86%) 225.20 ( 47.77%) 388.40 ( 82.39%)
    +/- 88.07 ( 0.00%) 87.42 ( 99.26%) 73.79 ( 83.78%) 109.62 ( 124.47%) 82.62 ( 93.81%)
    Fault Fallback 531.60 ( 0.00%) 774.60 ( -45.71%) 720.80 ( -35.59%) 777.80 ( -46.31%) 614.80 ( -15.65%)
    +/- 88.07 ( 0.00%) 87.26 ( 0.92%) 73.79 ( 16.22%) 109.62 ( -24.47%) 82.29 ( 6.56%)
    MMTests Statistics: duration
    User/Sys Time Running Test (seconds) 50.22 33.76 30.65 24.14 128.45
    Total Elapsed Time (seconds) 1113.73 1132.19 1029.45 759.49 1707.26

    Same type of story - elapsed times go down. In this case, allocation
    success rates are roughly the same. As before, Andrea's has higher
    success rates but takes a lot longer.

    Overall the series does reduce latencies and while the tests are
    inherently racy as alloc competes with the cp processes, the variability
    was included. The THP allocation rates are not as high as they could
    be but that is because we would have to be more aggressive about
    reclaim and compaction impacting overall performance.

    This patch:

    Commit 39deaf85 ("mm: compaction: make isolate_lru_page() filter-aware")
    noted that compaction does not migrate dirty or writeback pages and that
    it was meaningless to pick the page and re-add it to the LRU list.

    What was missed during review is that asynchronous migration moves dirty
    pages if their ->migratepage callback is migrate_page() because these can
    be moved without blocking. This potentially impacted hugepage allocation
    success rates by a factor depending on how many dirty pages are in the
    system.

    This patch partially reverts 39deaf85 to allow migration to isolate dirty
    pages again. This increases how much compaction disrupts the LRU but that
    is addressed later in the series.

    Signed-off-by: Mel Gorman
    Reviewed-by: Andrea Arcangeli
    Reviewed-by: Rik van Riel
    Reviewed-by: Minchan Kim
    Cc: Dave Jones
    Cc: Jan Kara
    Cc: Andy Isaacson
    Cc: Nai Xia
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • In trace_mm_vmscan_lru_isolate(), we don't output 'file' information to
    the trace event, and it is a bit inconvenient for the user to get the
    real information (like that pasted below):

        mm_vmscan_lru_isolate: isolate_mode=2 order=0 nr_requested=32
        nr_scanned=32 nr_taken=32 contig_taken=0 contig_dirty=0 contig_failed=0

    'active' can be obtained by analyzing mode (thanks go to Minchan and
    Mel), so this patch adds 'file' to the trace event and it now looks
    like:

        mm_vmscan_lru_isolate: isolate_mode=2 order=0 nr_requested=32
        nr_scanned=32 nr_taken=32 contig_taken=0 contig_dirty=0 contig_failed=0
        file=0

    Signed-off-by: Tao Ma
    Acked-by: KOSAKI Motohiro
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Reviewed-by: Minchan Kim
    Cc: Rik van Riel
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tao Ma
     
  • When an isolated hugepage is being split, put the tail subpages at the
    head of the lru reclaim list, as they supposedly should be isolated
    next.

    For non-isolated hugepages under splitting, queue the subpages in
    physical order in the lru. That might provide some theoretical cache
    benefit to the buddy allocator later.

    Signed-off-by: Shaohua Li
    Signed-off-by: Andrea Arcangeli
    Cc: David Rientjes
    Cc: Johannes Weiner
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Shaohua Li
     
  • We have tlb_remove_tlb_entry to indicate that a pte entry should be
    flushed, but no corresponding API for a pmd entry. This isn't a
    problem so far because THP is currently x86-only and tlb_flush()
    under x86 will flush the entire TLB. But this is confusing and could
    be missed if THP is ported to another arch.

    Also convert tlb->need_flush = 1 to a VM_BUG_ON(!tlb->need_flush) in
    __tlb_remove_page() as suggested by Andrea Arcangeli. The
    __tlb_remove_page() function is supposed to be called after
    tlb_remove_xxx_tlb_entry() and we can catch any misuse.
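
    The new helper in asm-generic/tlb.h mirrors the pte variant, roughly:

        #define tlb_remove_pmd_tlb_entry(tlb, pmdp, address)            \
                do {                                                    \
                        tlb->need_flush = 1;                            \
                        __tlb_remove_pmd_tlb_entry(tlb, pmdp, address); \
                } while (0)

    with __tlb_remove_pmd_tlb_entry() being a per-arch hook that can
    default to a no-op.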

    Signed-off-by: Shaohua Li
    Reviewed-by: Andrea Arcangeli
    Cc: David Rientjes
    Cc: Johannes Weiner
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Shaohua Li
     
  • change_protection() will do TLB flush later, don't need duplicate tlb
    flush.

    Signed-off-by: Shaohua Li
    Reviewed-by: Andrea Arcangeli
    Cc: David Rientjes
    Cc: Johannes Weiner
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Shaohua Li
     
  • Improve the error code path, for example deleting the unnecessary
    sysfs file on failure. Also remove the #ifdef xxx to make the code
    cleaner.

    Signed-off-by: Shaohua Li
    Reviewed-by: Andrea Arcangeli
    Cc: David Rientjes
    Cc: Johannes Weiner
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Shaohua Li
     
  • No need for two CONFIG_MEMORY_HOTPLUG blocks.

    Signed-off-by: Bob Liu
    Acked-by: Michal Hocko
    Cc: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bob Liu