15 Oct, 2019

1 commit

  • Florian and Dave reported [1] a NULL pointer dereference in
    __reset_isolation_pfn(). While the exact cause is unclear, staring at
    the code revealed two bugs, which might be related.

    One bug is that if the zone starts in the middle of a pageblock, block_page
    might correspond to a different pfn than block_pfn, and then the
    pfn_valid_within() checks will check different pfns than those accessed
    via struct page. This might result in accessing an uninitialized page in
    CONFIG_HOLES_IN_ZONE configs.

    The other bug is that end_page refers to the first page of the next
    pageblock and not the last page of the current pageblock. The online and
    valid check is then wrong, and with sections the while (page < end_page)
    loop might wander off the actual struct page arrays.

    [1] https://lore.kernel.org/linux-xfs/87o8z1fvqu.fsf@mid.deneb.enyo.de/
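
    As a rough stand-alone illustration of the second bug (not the kernel
    code; pageblock_nr_pages is assumed to be 512, as on x86-64 with 4K
    pages, and the variable names are only stand-ins):

    #include <stdio.h>

    #define pageblock_nr_pages 512UL

    int main(void)
    {
            unsigned long block_pfn = 1024;  /* first pfn of a pageblock */

            /* Buggy: this is the first pfn of the *next* pageblock. */
            unsigned long end_pfn_buggy = block_pfn + pageblock_nr_pages;

            /* Fixed: the last pfn that still belongs to this pageblock. */
            unsigned long end_pfn_fixed = block_pfn + pageblock_nr_pages - 1;

            printf("buggy end pfn: %lu (next pageblock)\n", end_pfn_buggy);
            printf("fixed end pfn: %lu (last pfn of this pageblock)\n",
                   end_pfn_fixed);
            return 0;
    }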

    Link: http://lkml.kernel.org/r/20191008152915.24704-1-vbabka@suse.cz
    Fixes: 6b0868c820ff ("mm/compaction.c: correct zone boundary handling when resetting pageblock skip hints")
    Signed-off-by: Vlastimil Babka
    Reported-by: Florian Weimer
    Reported-by: Dave Chinner
    Acked-by: Mel Gorman
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vlastimil Babka
     

25 Sep, 2019

3 commits

  • Like commit 40cacbcb3240 ("mm, compaction: remove unnecessary zone
    parameter in some instances"), remove unnecessary zone parameter.

    No functional change.

    Link: http://lkml.kernel.org/r/20190806151616.21107-1-lpf.vector@gmail.com
    Signed-off-by: Pengfei Li
    Reviewed-by: Andrew Morton
    Acked-by: Vlastimil Babka
    Cc: Mel Gorman
    Cc: Qian Cai
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pengfei Li
     
    total_{migrate,free}_scanned will be added to COMPACTMIGRATE_SCANNED and
    COMPACTFREE_SCANNED in compact_zone(). We should clear them before
    scanning a new zone. In proc-triggered compaction, we forgot to clear
    them.
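
    A minimal stand-alone sketch of the idea (the struct and the
    compact_zone() stand-in are mocks, not the kernel's compact_control or
    vmstat code): the per-compaction totals are zeroed before each new zone
    is scanned so a previous zone's counts are not accounted twice.

    #include <stdio.h>

    /* Mock of the relevant compact_control fields (illustrative only). */
    struct compact_control {
            unsigned long total_migrate_scanned;
            unsigned long total_free_scanned;
    };

    /* Stand-in for compact_zone(): pretend some pages were scanned. */
    static void compact_zone(struct compact_control *cc)
    {
            cc->total_migrate_scanned += 100;
            cc->total_free_scanned += 200;
            /* ...these totals then feed the COMPACT*_SCANNED vmstats... */
    }

    int main(void)
    {
            struct compact_control cc = { 0, 0 };
            int zone;

            for (zone = 0; zone < 3; zone++) {
                    /* Clear the counters before scanning each new zone. */
                    cc.total_migrate_scanned = 0;
                    cc.total_free_scanned = 0;

                    compact_zone(&cc);
                    printf("zone %d: migrate=%lu free=%lu\n", zone,
                           cc.total_migrate_scanned, cc.total_free_scanned);
            }
            return 0;
    }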

    [laoar.shao@gmail.com: introduce a helper compact_zone_counters_init()]
    Link: http://lkml.kernel.org/r/1563869295-25748-1-git-send-email-laoar.shao@gmail.com
    [akpm@linux-foundation.org: expand compact_zone_counters_init() into its single callsite, per mhocko]
    [vbabka@suse.cz: squash compact_zone() list_head init as well]
    Link: http://lkml.kernel.org/r/1fb6f7da-f776-9e42-22f8-bbb79b030b98@suse.cz
    [akpm@linux-foundation.org: kcompactd_do_work(): avoid unnecessary initialization of cc.zone]
    Link: http://lkml.kernel.org/r/1563789275-9639-1-git-send-email-laoar.shao@gmail.com
    Fixes: 7f354a548d1c ("mm, compaction: add vmstats for kcompactd work")
    Signed-off-by: Yafang Shao
    Signed-off-by: Vlastimil Babka
    Reviewed-by: Vlastimil Babka
    Cc: David Rientjes
    Cc: Yafang Shao
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yafang Shao
     
  • Replace 1 << compound_order(page) with compound_nr(page). Minor
    improvements in readability.

    Link: http://lkml.kernel.org/r/20190721104612.19120-4-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Reviewed-by: Andrew Morton
    Reviewed-by: Ira Weiny
    Acked-by: Kirill A. Shutemov
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
     

03 Aug, 2019

1 commit

  • "howaboutsynergy" reported via kernel buzilla number 204165 that
    compact_zone_order was consuming 100% CPU during a stress test for
    prolonged periods of time. Specifically the following command, which
    should exit in 10 seconds, was taking an excessive time to finish while
    the CPU was pegged at 100%.

    stress -m 220 --vm-bytes 1000000000 --timeout 10

    Tracing indicated a pattern as follows

    stress-3923 [007] 519.106208: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0
    stress-3923 [007] 519.106212: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0
    stress-3923 [007] 519.106216: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0
    stress-3923 [007] 519.106219: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0
    stress-3923 [007] 519.106223: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0
    stress-3923 [007] 519.106227: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0
    stress-3923 [007] 519.106231: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0
    stress-3923 [007] 519.106235: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0
    stress-3923 [007] 519.106238: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0
    stress-3923 [007] 519.106242: mm_compaction_isolate_migratepages: range=(0x70bb80 ~ 0x70bb80) nr_scanned=0 nr_taken=0

    Note that compaction is entered in rapid succession while scanning and
    isolating nothing. The problem is that when a task that is compacting
    receives a fatal signal, it retries indefinitely instead of exiting,
    making no progress because the fatal signal is pending.

    It's not easy to trigger this condition although enabling zswap helps on
    the basis that the timing is altered. A very small window has to be hit
    for the problem to occur (signal delivered while compacting and
    isolating a PFN for migration that is not aligned to SWAP_CLUSTER_MAX).

    This was reproduced locally (16G single-socket system, 8G swap, 30%
    zswap configured, vm-bytes 22000000000, using Colin King's stress-ng
    implementation from github, running in a loop until the problem hit).
    Tracing recorded the problem occurring almost 200K times in a short
    window. With this patch, the problem hit 4 times but the task exited
    normally instead of consuming CPU.

    This problem has existed for some time but it was made worse by commit
    cf66f0700c8f ("mm, compaction: do not consider a need to reschedule as
    contention"). Before that commit, if the same condition was hit then
    locks would be quickly contended and compaction would exit that way.
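
    A rough stand-alone sketch of the behavioural change (the signal check
    is mocked with a simple predicate; the loop is illustrative, not the
    kernel's compaction retry path): once a fatal signal is flagged, the
    loop exits instead of retrying forever.

    #include <stdio.h>
    #include <stdbool.h>

    /* Mock: in the kernel this would be fatal_signal_pending(current). */
    static bool fatal_signal_pending_mock(int iteration)
    {
            return iteration >= 3;   /* pretend a fatal signal arrives here */
    }

    int main(void)
    {
            int iteration;

            for (iteration = 0; iteration < 1000; iteration++) {
                    if (fatal_signal_pending_mock(iteration)) {
                            /* Bail out instead of retrying indefinitely. */
                            printf("fatal signal pending, aborting at %d\n",
                                   iteration);
                            break;
                    }
                    printf("iteration %d: nothing isolated, retrying\n",
                           iteration);
            }
            return 0;
    }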

    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=204165
    Link: http://lkml.kernel.org/r/20190718085708.GE24383@techsingularity.net
    Fixes: cf66f0700c8f ("mm, compaction: do not consider a need to reschedule as contention")
    Signed-off-by: Mel Gorman
    Reviewed-by: Vlastimil Babka
    Cc: [5.1+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

02 Jun, 2019

1 commit

    When we have holes in a normal memory zone, we could end up having
    cached_migrate_pfns which may not necessarily be valid, under heavy memory
    pressure with swapping enabled (via __reset_isolation_suitable(),
    triggered by kswapd).

    Later, if we fail to find a page via fast_isolate_freepages(), we may end
    up using the migrate_pfn we started the search with as a valid page. This
    could lead to NULL pointer dereferences like the one below, due to an
    invalid mem_section pointer.

    Unable to handle kernel NULL pointer dereference at virtual address 0000000000000008 [47/1825]
    Mem abort info:
    ESR = 0x96000004
    Exception class = DABT (current EL), IL = 32 bits
    SET = 0, FnV = 0
    EA = 0, S1PTW = 0
    Data abort info:
    ISV = 0, ISS = 0x00000004
    CM = 0, WnR = 0
    user pgtable: 4k pages, 48-bit VAs, pgdp = 0000000082f94ae9
    [0000000000000008] pgd=0000000000000000
    Internal error: Oops: 96000004 [#1] SMP
    ...
    CPU: 10 PID: 6080 Comm: qemu-system-aar Not tainted 510-rc1+ #6
    Hardware name: AmpereComputing(R) OSPREY EV-883832-X3-0001/OSPREY, BIOS 4819 09/25/2018
    pstate: 60000005 (nZCv daif -PAN -UAO)
    pc : set_pfnblock_flags_mask+0x58/0xe8
    lr : compaction_alloc+0x300/0x950
    [...]
    Process qemu-system-aar (pid: 6080, stack limit = 0x0000000095070da5)
    Call trace:
    set_pfnblock_flags_mask+0x58/0xe8
    compaction_alloc+0x300/0x950
    migrate_pages+0x1a4/0xbb0
    compact_zone+0x750/0xde8
    compact_zone_order+0xd8/0x118
    try_to_compact_pages+0xb4/0x290
    __alloc_pages_direct_compact+0x84/0x1e0
    __alloc_pages_nodemask+0x5e0/0xe18
    alloc_pages_vma+0x1cc/0x210
    do_huge_pmd_anonymous_page+0x108/0x7c8
    __handle_mm_fault+0xdd4/0x1190
    handle_mm_fault+0x114/0x1c0
    __get_user_pages+0x198/0x3c0
    get_user_pages_unlocked+0xb4/0x1d8
    __gfn_to_pfn_memslot+0x12c/0x3b8
    gfn_to_pfn_prot+0x4c/0x60
    kvm_handle_guest_abort+0x4b0/0xcd8
    handle_exit+0x140/0x1b8
    kvm_arch_vcpu_ioctl_run+0x260/0x768
    kvm_vcpu_ioctl+0x490/0x898
    do_vfs_ioctl+0xc4/0x898
    ksys_ioctl+0x8c/0xa0
    __arm64_sys_ioctl+0x28/0x38
    el0_svc_common+0x74/0x118
    el0_svc_handler+0x38/0x78
    el0_svc+0x8/0xc
    Code: f8607840 f100001f 8b011401 9a801020 (f9400400)
    ---[ end trace af6a35219325a9b6 ]---

    The issue was reported on an arm64 server with 128GB of memory and holes
    in the zone (e.g., [32GB@4GB, 96GB@544GB]), with a swap device enabled,
    while running 100 KVM guest instances.

    This patch fixes the issue by ensuring that the page belongs to a valid
    PFN when we fall back to using the lower limit of the scan range upon
    failure in fast_isolate_freepages().
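
    A stand-alone sketch of the fallback check (pfn_valid() is mocked with a
    hole in the middle of the range; names mirror the kernel but the code is
    only an illustration): the cached lower limit is used as a fallback only
    if its pfn is actually valid.

    #include <stdio.h>
    #include <stdbool.h>

    /* Mock pfn_valid(): pretend pfns 2000-2999 are a hole in the zone. */
    static bool pfn_valid_mock(unsigned long pfn)
    {
            return pfn < 2000 || pfn >= 3000;
    }

    int main(void)
    {
            unsigned long min_pfn = 2500;   /* cached lower limit, in a hole */

            /* Fast search failed: only fall back to min_pfn if it is valid. */
            if (pfn_valid_mock(min_pfn))
                    printf("using fallback pfn %lu\n", min_pfn);
            else
                    printf("pfn %lu is invalid, not using the fallback\n",
                           min_pfn);
            return 0;
    }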

    Link: http://lkml.kernel.org/r/1558711908-15688-1-git-send-email-suzuki.poulose@arm.com
    Fixes: 5a811889de10f1eb ("mm, compaction: use free lists to quickly locate a migration target")
    Signed-off-by: Suzuki K Poulose
    Reported-by: Marc Zyngier
    Reviewed-by: Mel Gorman
    Reviewed-by: Anshuman Khandual
    Cc: Michal Hocko
    Cc: Qian Cai
    Cc: Marc Zyngier
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Suzuki K Poulose
     

19 May, 2019

1 commit

  • syzbot reported the following error from a tree with a head commit of
    baf76f0c58ae ("slip: make slhc_free() silently accept an error pointer")

    BUG: unable to handle kernel paging request at ffffea0003348000
    #PF error: [normal kernel read fault]
    PGD 12c3f9067 P4D 12c3f9067 PUD 12c3f8067 PMD 0
    Oops: 0000 [#1] PREEMPT SMP KASAN
    CPU: 1 PID: 28916 Comm: syz-executor.2 Not tainted 5.1.0-rc6+ #89
    Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
    RIP: 0010:constant_test_bit arch/x86/include/asm/bitops.h:314 [inline]
    RIP: 0010:PageCompound include/linux/page-flags.h:186 [inline]
    RIP: 0010:isolate_freepages_block+0x1c0/0xd40 mm/compaction.c:579
    Code: 01 d8 ff 4d 85 ed 0f 84 ef 07 00 00 e8 29 00 d8 ff 4c 89 e0 83 85 38 ff
    ff ff 01 48 c1 e8 03 42 80 3c 38 00 0f 85 31 0a 00 00 8b 2c 24 31 ff 49
    c1 ed 10 41 83 e5 01 44 89 ee e8 3a 01 d8 ff
    RSP: 0018:ffff88802b31eab8 EFLAGS: 00010246
    RAX: 1ffffd4000669000 RBX: 00000000000cd200 RCX: ffffc9000a235000
    RDX: 000000000001ca5e RSI: ffffffff81988cc7 RDI: 0000000000000001
    RBP: ffff88802b31ebd8 R08: ffff88805af700c0 R09: 0000000000000000
    R10: 0000000000000000 R11: 0000000000000000 R12: ffffea0003348000
    R13: 0000000000000000 R14: ffff88802b31f030 R15: dffffc0000000000
    FS: 00007f61648dc700(0000) GS:ffff8880ae900000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: ffffea0003348000 CR3: 0000000037c64000 CR4: 00000000001426e0
    Call Trace:
    fast_isolate_around mm/compaction.c:1243 [inline]
    fast_isolate_freepages mm/compaction.c:1418 [inline]
    isolate_freepages mm/compaction.c:1438 [inline]
    compaction_alloc+0x1aee/0x22e0 mm/compaction.c:1550

    There is no reproducer and it is difficult to hit -- 1 crash every few
    days. The issue is very similar to the fix in commit 6b0868c820ff
    ("mm/compaction.c: correct zone boundary handling when resetting pageblock
    skip hints"). When isolating free pages around a target pageblock, the
    boundary handling is off by one and can stray into the next pageblock.
    Triggering the syzbot error requires that the end of the pageblock is
    section- or zone-aligned, and that the next section is unpopulated.

    A more subtle consequence of the bug is that pageblocks were being
    improperly used as migration targets, which potentially hurts
    fragmentation avoidance in the long term, one page at a time.

    A debugging patch revealed that it's definitely possible to stray outside
    of a pageblock which is not intended. While syzbot cannot be used to
    verify this patch, it was confirmed that the debugging warning no longer
    triggers with this patch applied. It has also been confirmed that the THP
    allocation stress tests are not degraded by this patch.
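
    A stand-alone sketch of the corrected boundary handling (pageblock size
    assumed to be 512 pages; the rounding helpers are written out rather
    than using the kernel macros): the scan around the target pfn is kept
    strictly inside the pageblock.

    #include <stdio.h>

    #define pageblock_nr_pages 512UL

    /* Round a pfn down/up to its pageblock boundaries (illustrative). */
    static unsigned long pageblock_start(unsigned long pfn)
    {
            return pfn & ~(pageblock_nr_pages - 1);
    }

    static unsigned long pageblock_end(unsigned long pfn)
    {
            return pageblock_start(pfn) + pageblock_nr_pages;
    }

    int main(void)
    {
            unsigned long target = 1500;   /* pfn being isolated around */
            unsigned long start = pageblock_start(target);
            unsigned long end = pageblock_end(target);   /* exclusive */
            unsigned long pfn, scanned = 0;

            /*
             * "pfn < end" keeps the scan inside the pageblock; an off-by-one
             * "pfn <= end" would touch the first page of the next block.
             */
            for (pfn = start; pfn < end; pfn++)
                    scanned++;

            printf("scanned %lu pfns in block [%lu, %lu)\n",
                   scanned, start, end);
            return 0;
    }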

    Link: http://lkml.kernel.org/r/20190510182124.GI18914@techsingularity.net
    Fixes: e332f741a8dd ("mm, compaction: be selective about what pageblocks to clear skip hints")
    Signed-off-by: Mel Gorman
    Reported-by: syzbot+d84c80f9fe26a0f7a734@syzkaller.appspotmail.com
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Qian Cai
    Cc: Michal Hocko
    Cc: Vlastimil Babka
    Cc: # v5.1+
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

15 May, 2019

2 commits

  • In preparation for runtime randomization of the zone lists, take all
    (well, most of) the list_*() functions in the buddy allocator and put
    them in helper functions. Provide a common control point for injecting
    additional behavior when freeing pages.
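
    A rough stand-alone sketch of the helper pattern (the structures are
    simplified stand-ins, not the kernel's real free_area or page types):
    funnelling the raw list manipulation through one helper gives a single
    place to inject extra behaviour, such as randomization, later.

    #include <stdio.h>

    /* Simplified stand-ins for a page and a buddy free list. */
    struct page_stub {
            struct page_stub *next;
            unsigned long pfn;
    };

    struct free_area_stub {
            struct page_stub *head;
            unsigned long nr_free;
    };

    /*
     * Single control point for adding a page to a free list; extra
     * behaviour (e.g. shuffling/randomization) could be injected here.
     */
    static void add_to_free_area(struct page_stub *page,
                                 struct free_area_stub *area)
    {
            page->next = area->head;
            area->head = page;
            area->nr_free++;
    }

    int main(void)
    {
            struct free_area_stub area = { NULL, 0 };
            struct page_stub a = { NULL, 100 }, b = { NULL, 200 };

            add_to_free_area(&a, &area);
            add_to_free_area(&b, &area);

            printf("free pages on list: %lu (head pfn %lu)\n",
                   area.nr_free, area.head->pfn);
            return 0;
    }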

    [dan.j.williams@intel.com: fix buddy list helpers]
    Link: http://lkml.kernel.org/r/155033679702.1773410.13041474192173212653.stgit@dwillia2-desk3.amr.corp.intel.com
    [vbabka@suse.cz: remove del_page_from_free_area() migratetype parameter]
    Link: http://lkml.kernel.org/r/4672701b-6775-6efd-0797-b6242591419e@suse.cz
    Link: http://lkml.kernel.org/r/154899812264.3165233.5219320056406926223.stgit@dwillia2-desk3.amr.corp.intel.com
    Signed-off-by: Dan Williams
    Signed-off-by: Vlastimil Babka
    Tested-by: Tetsuo Handa
    Acked-by: Michal Hocko
    Cc: Vlastimil Babka
    Cc: Dave Hansen
    Cc: Kees Cook
    Cc: Keith Busch
    Cc: Robert Elliott
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Williams
     
    In a low-memory situation, cc->fast_search_fail can keep increasing as it
    is unable to find an available page to isolate in
    fast_isolate_freepages(). As a result, it could trigger the error below,
    so compare against the maximum number of bits that can be shifted first.

    UBSAN: Undefined behaviour in mm/compaction.c:1160:30
    shift exponent 64 is too large for 64-bit type 'unsigned long'
    CPU: 131 PID: 1308 Comm: kcompactd1 Kdump: loaded Tainted: G
    W L 5.0.0+ #17
    Call trace:
    dump_backtrace+0x0/0x450
    show_stack+0x20/0x2c
    dump_stack+0xc8/0x14c
    __ubsan_handle_shift_out_of_bounds+0x7e8/0x8c4
    compaction_alloc+0x2344/0x2484
    unmap_and_move+0xdc/0x1dbc
    migrate_pages+0x274/0x1310
    compact_zone+0x26ec/0x43bc
    kcompactd+0x15b8/0x1a24
    kthread+0x374/0x390
    ret_from_fork+0x10/0x18
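
    A stand-alone sketch of the guard (BITS_PER_LONG is derived from the
    word size; the scan-limit formula is illustrative, not the exact kernel
    expression): the shift exponent is clamped before use so it can never
    reach the width of the type.

    #include <stdio.h>
    #include <limits.h>

    #define BITS_PER_LONG ((unsigned int)(sizeof(unsigned long) * CHAR_BIT))

    int main(void)
    {
            unsigned int fast_search_fail = 70;  /* kept growing under pressure */
            unsigned int shift = fast_search_fail;
            unsigned long limit;

            /* Clamp the exponent: shifting by >= BITS_PER_LONG is undefined. */
            if (shift > BITS_PER_LONG - 1)
                    shift = BITS_PER_LONG - 1;

            limit = (32UL >> shift) + 1;   /* illustrative scan limit */
            printf("shift=%u limit=%lu\n", shift, limit);
            return 0;
    }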

    [akpm@linux-foundation.org: code cleanup]
    Link: http://lkml.kernel.org/r/20190320203338.53367-1-cai@lca.pw
    Fixes: 70b44595eafe ("mm, compaction: use free lists to quickly locate a migration source")
    Signed-off-by: Qian Cai
    Acked-by: Vlastimil Babka
    Acked-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Qian Cai
     

04 Apr, 2019

2 commits

    Running LTP oom01 in a tight loop, or memory stress testing that puts the
    system in a low-memory situation, could trigger random memory corruption
    like the page flag corruption below. The cause is that in
    fast_isolate_freepages(), if isolation fails, next_search_order() does not
    abort the search immediately, which could lead to improper accesses.

    UBSAN: Undefined behaviour in ./include/linux/mm.h:1195:50
    index 7 is out of range for type 'zone [5]'
    Call Trace:
    dump_stack+0x62/0x9a
    ubsan_epilogue+0xd/0x7f
    __ubsan_handle_out_of_bounds+0x14d/0x192
    __isolate_free_page+0x52c/0x600
    compaction_alloc+0x886/0x25f0
    unmap_and_move+0x37/0x1e70
    migrate_pages+0x2ca/0xb20
    compact_zone+0x19cb/0x3620
    kcompactd_do_work+0x2df/0x680
    kcompactd+0x1d8/0x6c0
    kthread+0x32c/0x3f0
    ret_from_fork+0x35/0x40
    ------------[ cut here ]------------
    kernel BUG at mm/page_alloc.c:3124!
    invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI
    RIP: 0010:__isolate_free_page+0x464/0x600
    RSP: 0000:ffff888b9e1af848 EFLAGS: 00010007
    RAX: 0000000030000000 RBX: ffff888c39fcf0f8 RCX: 0000000000000000
    RDX: 1ffff111873f9e25 RSI: 0000000000000004 RDI: ffffed1173c35ef6
    RBP: ffff888b9e1af898 R08: fffffbfff4fc2461 R09: fffffbfff4fc2460
    R10: fffffbfff4fc2460 R11: ffffffffa7e12303 R12: 0000000000000008
    R13: dffffc0000000000 R14: 0000000000000000 R15: 0000000000000007
    FS: 0000000000000000(0000) GS:ffff888ba8e80000(0000)
    knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 00007fc7abc00000 CR3: 0000000752416004 CR4: 00000000001606a0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Call Trace:
    compaction_alloc+0x886/0x25f0
    unmap_and_move+0x37/0x1e70
    migrate_pages+0x2ca/0xb20
    compact_zone+0x19cb/0x3620
    kcompactd_do_work+0x2df/0x680
    kcompactd+0x1d8/0x6c0
    kthread+0x32c/0x3f0
    ret_from_fork+0x35/0x40
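
    A stand-alone sketch of the corrected control flow (the order walk and
    the isolation result are mocked; this is not the kernel's search loop):
    as soon as isolation fails, the search over orders is aborted rather
    than continued with stale state.

    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_ORDER 11

    /* Mock: pretend isolation fails once order 5 is reached. */
    static bool isolate_at_order(int order)
    {
            return order != 5;
    }

    int main(void)
    {
            int order;

            for (order = 0; order < MAX_ORDER; order++) {
                    if (!isolate_at_order(order)) {
                            /* Abort the search right away on failure. */
                            printf("isolation failed at order %d, aborting\n",
                                   order);
                            break;
                    }
                    printf("order %d: isolated\n", order);
            }
            return 0;
    }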

    Link: http://lkml.kernel.org/r/20190320192648.52499-1-cai@lca.pw
    Fixes: dbe2d4e4f12e ("mm, compaction: round-robin the order while searching the free lists for a target")
    Signed-off-by: Qian Cai
    Acked-by: Mel Gorman
    Cc: Daniel Jordan
    Cc: Mikhail Gavrilov
    Cc: Vlastimil Babka
    Cc: Pavel Tatashin
    Signed-off-by: Mel Gorman

    Qian Cai
     
    Mikhail Gavrilov reported the following bug being triggered in a Fedora
    kernel based on 5.1-rc1 but it is relevant to a vanilla kernel.

    kernel: page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
    kernel: ------------[ cut here ]------------
    kernel: kernel BUG at include/linux/mm.h:1021!
    kernel: invalid opcode: 0000 [#1] SMP NOPTI
    kernel: CPU: 6 PID: 116 Comm: kswapd0 Tainted: G C 5.1.0-0.rc1.git1.3.fc31.x86_64 #1
    kernel: Hardware name: System manufacturer System Product Name/ROG STRIX X470-I GAMING, BIOS 1201 12/07/2018
    kernel: RIP: 0010:__reset_isolation_pfn+0x244/0x2b0
    kernel: Code: fe 06 e8 0f 8e fc ff 44 0f b6 4c 24 04 48 85 c0 0f 85 dc fe ff ff e9 68 fe ff ff 48 c7 c6 58 b7 2e 8c 4c 89 ff e8 0c 75 00 00 0b 48 c7 c6 58 b7 2e 8c e8 fe 74 00 00 0f 0b 48 89 fa 41 b8 01
    kernel: RSP: 0018:ffff9e2d03f0fde8 EFLAGS: 00010246
    kernel: RAX: 0000000000000034 RBX: 000000000081f380 RCX: ffff8cffbddd6c20
    kernel: RDX: 0000000000000000 RSI: 0000000000000006 RDI: ffff8cffbddd6c20
    kernel: RBP: 0000000000000001 R08: 0000009898b94613 R09: 0000000000000000
    kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000100000
    kernel: R13: 0000000000100000 R14: 0000000000000001 R15: ffffca7de07ce000
    kernel: FS: 0000000000000000(0000) GS:ffff8cffbdc00000(0000) knlGS:0000000000000000
    kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    kernel: CR2: 00007fc1670e9000 CR3: 00000007f5276000 CR4: 00000000003406e0
    kernel: Call Trace:
    kernel: __reset_isolation_suitable+0x62/0x120
    kernel: reset_isolation_suitable+0x3b/0x40
    kernel: kswapd+0x147/0x540
    kernel: ? finish_wait+0x90/0x90
    kernel: kthread+0x108/0x140
    kernel: ? balance_pgdat+0x560/0x560
    kernel: ? kthread_park+0x90/0x90
    kernel: ret_from_fork+0x27/0x50

    He bisected it down to e332f741a8dd ("mm, compaction: be selective about
    what pageblocks to clear skip hints"). The problem is that the patch in
    question was sloppy with respect to the handling of zone boundaries. In
    some instances, it was possible for PFNs outside of a zone to be examined
    and if those were not properly initialised or poisoned then it would
    trigger the VM_BUG_ON. This patch corrects the zone boundary issues when
    resetting pageblock skip hints and Mikhail reported that the bug did not
    trigger after 30 hours of testing.

    Link: http://lkml.kernel.org/r/20190327085424.GL3189@techsingularity.net
    Fixes: e332f741a8dd ("mm, compaction: be selective about what pageblocks to clear skip hints")
    Reported-by: Mikhail Gavrilov
    Tested-by: Mikhail Gavrilov
    Cc: Daniel Jordan
    Cc: Qian Cai
    Cc: Vlastimil Babka
    Signed-off-by: Mel Gorman

    Mel Gorman
     

06 Mar, 2019

21 commits

    too_many_isolated() in mm/compaction.c looks only at node state, so it
    makes more sense to change its argument to pgdat instead of zone.

    Link: http://lkml.kernel.org/r/20190228083329.31892-3-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Acked-by: Vlastimil Babka
    Acked-by: Rik van Riel
    Acked-by: Mel Gorman
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: William Kucharski
    Cc: John Hubbard
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
    We have a common pattern to access lru_lock from a page pointer:
    zone_lru_lock(page_zone(page))

    Which is silly, because it unfolds to this:
    &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)]->zone_pgdat->lru_lock
    while we can simply do
    &NODE_DATA(page_to_nid(page))->lru_lock

    Remove the zone_lru_lock() function, since it only complicates things.
    Use the 'page_pgdat(page)->lru_lock' pattern instead.

    [aryabinin@virtuozzo.com: a slightly better version of __split_huge_page()]
    Link: http://lkml.kernel.org/r/20190301121651.7741-1-aryabinin@virtuozzo.com
    Link: http://lkml.kernel.org/r/20190228083329.31892-2-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Acked-by: Vlastimil Babka
    Acked-by: Mel Gorman
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Rik van Riel
    Cc: William Kucharski
    Cc: John Hubbard
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Compaction is inherently race-prone as a suitable page freed during
    compaction can be allocated by any parallel task. This patch uses a
    capture_control structure to isolate a page immediately when it is freed
    by a direct compactor in the slow path of the page allocator. The
    intent is to avoid redundant scanning.

    5.0.0-rc1 5.0.0-rc1
    selective-v3r17 capture-v3r19
    Amean fault-both-1 0.00 ( 0.00%) 0.00 * 0.00%*
    Amean fault-both-3 2582.11 ( 0.00%) 2563.68 ( 0.71%)
    Amean fault-both-5 4500.26 ( 0.00%) 4233.52 ( 5.93%)
    Amean fault-both-7 5819.53 ( 0.00%) 6333.65 ( -8.83%)
    Amean fault-both-12 9321.18 ( 0.00%) 9759.38 ( -4.70%)
    Amean fault-both-18 9782.76 ( 0.00%) 10338.76 ( -5.68%)
    Amean fault-both-24 15272.81 ( 0.00%) 13379.55 * 12.40%*
    Amean fault-both-30 15121.34 ( 0.00%) 16158.25 ( -6.86%)
    Amean fault-both-32 18466.67 ( 0.00%) 18971.21 ( -2.73%)

    Latency is only moderately affected but the devil is in the details. A
    closer examination indicates that base page fault latency is reduced but
    latency of huge pages is increased as it takes greater care to succeed.
    Part of the "problem" is that allocation success rates are close to 100%
    even when under pressure and compaction gets harder:

    5.0.0-rc1 5.0.0-rc1
    selective-v3r17 capture-v3r19
    Percentage huge-3 96.70 ( 0.00%) 98.23 ( 1.58%)
    Percentage huge-5 96.99 ( 0.00%) 95.30 ( -1.75%)
    Percentage huge-7 94.19 ( 0.00%) 97.24 ( 3.24%)
    Percentage huge-12 94.95 ( 0.00%) 97.35 ( 2.53%)
    Percentage huge-18 96.74 ( 0.00%) 97.30 ( 0.58%)
    Percentage huge-24 97.07 ( 0.00%) 97.55 ( 0.50%)
    Percentage huge-30 95.69 ( 0.00%) 98.50 ( 2.95%)
    Percentage huge-32 96.70 ( 0.00%) 99.27 ( 2.65%)

    And scan rates are reduced as expected by 6% for the migration scanner
    and 29% for the free scanner indicating that there is less redundant
    work.

    Compaction migrate scanned 20815362 19573286
    Compaction free scanned 16352612 11510663

    [mgorman@techsingularity.net: remove redundant check]
    Link: http://lkml.kernel.org/r/20190201143853.GH9565@techsingularity.net
    Link: http://lkml.kernel.org/r/20190118175136.31341-23-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Pageblock hints are cleared when compaction restarts or kswapd makes
    enough progress that it can sleep but it's over-eager in that the bit is
    cleared for migration sources with no LRU pages and migration targets
    with no free pages. As pageblock skip hint flushes are relatively rare
    and out-of-band with respect to kswapd, this patch makes a few more
    expensive checks to see if it's appropriate to even clear the bit.
    Every pageblock that is not cleared will avoid 512 pages being scanned
    unnecessarily on x86-64.

    The impact is variable with different workloads showing small
    differences in latency, success rates and scan rates. This is expected
    as clearing the hints is not that common but doing a small amount of
    work out-of-band to avoid a large amount of work in-band later is
    generally a good thing.

    Link: http://lkml.kernel.org/r/20190118175136.31341-22-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Signed-off-by: Qian Cai
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    [cai@lca.pw: no stuck in __reset_isolation_pfn()]
    Link: http://lkml.kernel.org/r/20190206034732.75687-1-cai@lca.pw
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Once fast searching finishes, there is a possibility that the linear
    scanner is scanning full blocks found by the fast scanner earlier. This
    patch uses an adaptive stride to sample pageblocks for free pages. The
    more consecutive full pageblocks encountered, the larger the stride
    until a pageblock with free pages is found. The scanners might meet
    slightly sooner but it is an acceptable risk given that the search of
    the free lists may still encounter the pages and adjust the cached PFN
    of the free scanner accordingly.
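
    A stand-alone sketch of the stride adaptation (the full/free status of
    pageblocks is mocked and the numbers are illustrative): each consecutive
    full pageblock doubles the sampling stride, and the stride resets once a
    block with free pages is found.

    #include <stdio.h>
    #include <stdbool.h>

    /* Mock: the first 40 pageblocks are full, later ones have free pages. */
    static bool pageblock_is_full(unsigned long block)
    {
            return block < 40;
    }

    int main(void)
    {
            unsigned long block = 0, stride = 1, sampled = 0;

            while (block < 64) {
                    sampled++;
                    if (!pageblock_is_full(block)) {
                            printf("free pages in block %lu after %lu samples\n",
                                   block, sampled);
                            stride = 1;   /* reset once free pages are found */
                            break;
                    }
                    /* Consecutive full pageblocks: widen the stride. */
                    stride *= 2;
                    block += stride;
            }
            return 0;
    }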

    5.0.0-rc1 5.0.0-rc1
    roundrobin-v3r17 samplefree-v3r17
    Amean fault-both-1 0.00 ( 0.00%) 0.00 * 0.00%*
    Amean fault-both-3 2752.37 ( 0.00%) 2729.95 ( 0.81%)
    Amean fault-both-5 4341.69 ( 0.00%) 4397.80 ( -1.29%)
    Amean fault-both-7 6308.75 ( 0.00%) 6097.61 ( 3.35%)
    Amean fault-both-12 10241.81 ( 0.00%) 9407.15 ( 8.15%)
    Amean fault-both-18 13736.09 ( 0.00%) 10857.63 * 20.96%*
    Amean fault-both-24 16853.95 ( 0.00%) 13323.24 * 20.95%*
    Amean fault-both-30 15862.61 ( 0.00%) 17345.44 ( -9.35%)
    Amean fault-both-32 18450.85 ( 0.00%) 16892.00 ( 8.45%)

    The latency is mildly improved, offsetting some overhead from earlier
    patches that are prerequisites for the rest of the series. However, a
    major impact is on the free scan rate with an 82% reduction.

    5.0.0-rc1 5.0.0-rc1
    roundrobin-v3r17 samplefree-v3r17
    Compaction migrate scanned 21607271 20116887
    Compaction free scanned 95336406 16668703

    It's also the first time in the series where the number of pages scanned
    by the migration scanner is greater than the free scanner due to the
    increased search efficiency.

    Link: http://lkml.kernel.org/r/20190118175136.31341-21-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • As compaction proceeds and creates high-order blocks, the free list
    search gets less efficient as the larger blocks are used as compaction
    targets. Eventually, the larger blocks will be behind the migration
    scanner for partially migrated pageblocks and the search fails. This
    patch round-robins the orders that are searched so that larger blocks can
    be ignored in favour of smaller blocks that can be used as migration
    targets.

    The overall impact was small on 1-socket but it avoids corner cases
    where the migration/free scanners meet prematurely or situations where
    many of the pageblocks encountered by the free scanner are almost full
    instead of being properly packed. Previous testing indicated that,
    without this patch, there were occasional large spikes in the free
    scanner.

    [dan.carpenter@oracle.com: fix static checker warning]
    Link: http://lkml.kernel.org/r/20190118175136.31341-20-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • The fast isolation of free pages allows the cached PFN of the free
    scanner to advance faster than necessary depending on the contents of
    the free list. The key is that fast_isolate_freepages() can update
    zone->compact_cached_free_pfn via isolate_freepages_block(). When the
    fast search fails, the linear scan can start from a point that has
    skipped valid migration targets, particularly pageblocks with just
    low-order free pages. This can cause the migration source/target
    scanners to meet prematurely causing a reset.

    This patch starts by avoiding an update of the pageblock skip
    information and cached PFN from isolate_freepages_block() and puts the
    responsibility of updating that information in the callers. The fast
    scanner will update the cached PFN if and only if it finds a block that
    is higher than the existing cached PFN and sets the skip if the
    pageblock is full or nearly full. The linear scanner will update
    skipped information and the cached PFN only when a block is completely
    scanned. The total impact is that the free scanner advances more slowly
    as it is primarily driven by the linear scanner instead of the fast
    search.

    5.0.0-rc1 5.0.0-rc1
    noresched-v3r17 slowfree-v3r17
    Amean fault-both-3 2965.68 ( 0.00%) 3036.75 ( -2.40%)
    Amean fault-both-5 3995.90 ( 0.00%) 4522.24 * -13.17%*
    Amean fault-both-7 5842.12 ( 0.00%) 6365.35 ( -8.96%)
    Amean fault-both-12 9550.87 ( 0.00%) 10340.93 ( -8.27%)
    Amean fault-both-18 13304.72 ( 0.00%) 14732.46 ( -10.73%)
    Amean fault-both-24 14618.59 ( 0.00%) 16288.96 ( -11.43%)
    Amean fault-both-30 16650.96 ( 0.00%) 16346.21 ( 1.83%)
    Amean fault-both-32 17145.15 ( 0.00%) 19317.49 ( -12.67%)

    The impact to latency is higher than the last version but it appears to
    be due to a slight increase in the free scan rates which is a potential
    side-effect of the patch. However, this is necessary for later patches
    that are more careful about how pageblocks are treated as earlier
    iterations of those patches hit corner cases where the restarts were
    punishing and very visible.

    Link: http://lkml.kernel.org/r/20190118175136.31341-19-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    Scanning on large machines can take a considerable length of time and
    eventually needs to be rescheduled. This is treated as an abort event but
    that's not appropriate as the attempt is likely to be retried after
    making numerous checks and taking another cycle through the page
    allocator. This patch checks the need to reschedule if necessary but
    continues the scanning.
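
    A trivial stand-alone sketch of the change (cond_resched() is mocked as
    a no-op; the loop is illustrative): the scan periodically offers to
    yield the CPU but keeps going instead of treating the need to reschedule
    as an abort.

    #include <stdio.h>

    /* Mock of cond_resched(): in the kernel this may yield the CPU. */
    static void cond_resched_mock(void)
    {
    }

    int main(void)
    {
            unsigned long pfn, scanned = 0;

            for (pfn = 0; pfn < 100000; pfn++) {
                    /* Yield if needed, then carry on scanning. */
                    if ((pfn % 4096) == 0)
                            cond_resched_mock();
                    scanned++;
            }
            printf("scanned %lu pfns without aborting\n", scanned);
            return 0;
    }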

    The main benefit is reduced scanning when compaction is taking a long
    time or the machine is over-saturated. It also avoids an unnecessary
    exit of compaction that ends up being retried by the page allocator in
    the outer loop.

    5.0.0-rc1 5.0.0-rc1
    synccached-v3r16 noresched-v3r17
    Amean fault-both-1 0.00 ( 0.00%) 0.00 * 0.00%*
    Amean fault-both-3 2958.27 ( 0.00%) 2965.68 ( -0.25%)
    Amean fault-both-5 4091.90 ( 0.00%) 3995.90 ( 2.35%)
    Amean fault-both-7 5803.05 ( 0.00%) 5842.12 ( -0.67%)
    Amean fault-both-12 9481.06 ( 0.00%) 9550.87 ( -0.74%)
    Amean fault-both-18 14141.51 ( 0.00%) 13304.72 ( 5.92%)
    Amean fault-both-24 16438.00 ( 0.00%) 14618.59 ( 11.07%)
    Amean fault-both-30 17531.72 ( 0.00%) 16650.96 ( 5.02%)
    Amean fault-both-32 17101.96 ( 0.00%) 17145.15 ( -0.25%)

    Link: http://lkml.kernel.org/r/20190118175136.31341-18-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • With incremental changes, compact_should_abort no longer makes any
    documented sense. Rename to compact_check_resched and update the
    associated comments. There is no benefit other than reducing redundant
    code and making the intent slightly clearer. It could potentially be
    merged with earlier patches but it just makes the review slightly
    harder.

    Link: http://lkml.kernel.org/r/20190118175136.31341-17-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Migrate has separate cached PFNs for ASYNC and SYNC* migration on the
    basis that some migrations will fail in ASYNC mode. However, if the
    cached PFNs match at the start of scanning and pageblocks are skipped
    due to having no isolation candidates, then the sync state does not
    matter. This patch keeps matching cached PFNs in sync until a pageblock
    with isolation candidates is found.

    The actual benefit is marginal given that the sync scanner following the
    async scanner will often skip a number of pageblocks but it's useless
    work. Any benefit depends heavily on whether the scanners restarted
    recently.

    Link: http://lkml.kernel.org/r/20190118175136.31341-16-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • When scanning for sources or targets, PageCompound is checked for huge
    pages as they can be skipped quickly but it happens relatively late
    after a lot of setup and checking. This patch short-cuts the check to
    make it earlier. It might still change when the lock is acquired but
    this has less overhead overall. The free scanner advances but the
    migration scanner does not. Typically the free scanner encounters more
    movable blocks that change state over the lifetime of the system and
    also tends to scan more aggressively as it's actively filling its
    portion of the physical address space with data. This could change in
    the future but for the moment, this worked better in practice and
    incurred fewer scan restarts.

    The impact on latency and allocation success rates is marginal but the
    free scan rates are reduced by 15% and system CPU usage is reduced by
    3.3%. The 2-socket results are not materially different.

    Link: http://lkml.kernel.org/r/20190118175136.31341-15-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Async migration aborts on spinlock contention but contention can be high
    when there are multiple compaction attempts and kswapd is active. The
    consequence is that the migration scanners move forward uselessly, keep
    contending on locks for longer, and leave suitable migration sources
    behind.

    This patch will acquire the lock but track when contention occurs. When
    it does, the current pageblock will finish as compaction may succeed for
    that block and then abort. This will have a variable impact on latency
    as in some cases useless scanning is avoided (reduces latency) but a
    lock will be contended (increase latency) or a single contended
    pageblock is scanned that would otherwise have been skipped (increase
    latency).

    5.0.0-rc1 5.0.0-rc1
    norescan-v3r16 finishcontend-v3r16
    Amean fault-both-1 0.00 ( 0.00%) 0.00 * 0.00%*
    Amean fault-both-3 3002.07 ( 0.00%) 3153.17 ( -5.03%)
    Amean fault-both-5 4684.47 ( 0.00%) 4280.52 ( 8.62%)
    Amean fault-both-7 6815.54 ( 0.00%) 5811.50 * 14.73%*
    Amean fault-both-12 10864.02 ( 0.00%) 9276.85 ( 14.61%)
    Amean fault-both-18 12247.52 ( 0.00%) 11032.67 ( 9.92%)
    Amean fault-both-24 15683.99 ( 0.00%) 14285.70 ( 8.92%)
    Amean fault-both-30 18620.02 ( 0.00%) 16293.76 * 12.49%*
    Amean fault-both-32 19250.28 ( 0.00%) 16721.02 * 13.14%*

    5.0.0-rc1 5.0.0-rc1
    norescan-v3r16 finishcontend-v3r16
    Percentage huge-1 0.00 ( 0.00%) 0.00 ( 0.00%)
    Percentage huge-3 95.00 ( 0.00%) 96.82 ( 1.92%)
    Percentage huge-5 94.22 ( 0.00%) 95.40 ( 1.26%)
    Percentage huge-7 92.35 ( 0.00%) 95.92 ( 3.86%)
    Percentage huge-12 91.90 ( 0.00%) 96.73 ( 5.25%)
    Percentage huge-18 89.58 ( 0.00%) 96.77 ( 8.03%)
    Percentage huge-24 90.03 ( 0.00%) 96.05 ( 6.69%)
    Percentage huge-30 89.14 ( 0.00%) 96.81 ( 8.60%)
    Percentage huge-32 90.58 ( 0.00%) 97.41 ( 7.54%)

    There is a variable impact that is mostly good on latency, while
    allocation success rates are slightly higher. System CPU usage is reduced
    by about 10% but the scan rate impact is mixed:

    Compaction migrate scanned 27997659.00 20148867
    Compaction free scanned 120782791.00 118324914

    Migration scan rates are reduced 28% which is expected as a pageblock is
    used by the async scanner instead of skipped. The impact on the free
    scanner is known to be variable. Overall the primary justification for
    this patch is that completing scanning of a pageblock is very important
    for later patches.

    [yuehaibing@huawei.com: fix unused variable warning]
    Link: http://lkml.kernel.org/r/20190118175136.31341-14-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: YueHaibing
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Pageblocks are marked for skip when no pages are isolated after a scan.
    However, it's possible to hit corner cases where the migration scanner
    gets stuck near the boundary between the source and target scanner. Due
    to pages being migrated in blocks of COMPACT_CLUSTER_MAX, pages that are
    migrated can be reallocated before the pageblock is complete. The
    pageblock is not necessarily skipped so it can be rescanned multiple
    times. Similarly, a pageblock with some dirty/writeback pages may fail
    to migrate and be rescanned until writeback completes which is wasteful.

    This patch tracks if a pageblock is being rescanned. If so, then the
    entire pageblock will be migrated as one operation. This narrows the
    race window during which pages can be reallocated during migration.
    Secondly, if there are pages that cannot be isolated then the pageblock
    will still be fully scanned and marked for skipping. On the second
    rescan, the pageblock skip is set and the migration scanner makes
    progress.

    5.0.0-rc1 5.0.0-rc1
    findfree-v3r16 norescan-v3r16
    Amean fault-both-1 0.00 ( 0.00%) 0.00 * 0.00%*
    Amean fault-both-3 3200.68 ( 0.00%) 3002.07 ( 6.21%)
    Amean fault-both-5 4847.75 ( 0.00%) 4684.47 ( 3.37%)
    Amean fault-both-7 6658.92 ( 0.00%) 6815.54 ( -2.35%)
    Amean fault-both-12 11077.62 ( 0.00%) 10864.02 ( 1.93%)
    Amean fault-both-18 12403.97 ( 0.00%) 12247.52 ( 1.26%)
    Amean fault-both-24 15607.10 ( 0.00%) 15683.99 ( -0.49%)
    Amean fault-both-30 18752.27 ( 0.00%) 18620.02 ( 0.71%)
    Amean fault-both-32 21207.54 ( 0.00%) 19250.28 * 9.23%*

    5.0.0-rc1 5.0.0-rc1
    findfree-v3r16 norescan-v3r16
    Percentage huge-3 96.86 ( 0.00%) 95.00 ( -1.91%)
    Percentage huge-5 93.72 ( 0.00%) 94.22 ( 0.53%)
    Percentage huge-7 94.31 ( 0.00%) 92.35 ( -2.08%)
    Percentage huge-12 92.66 ( 0.00%) 91.90 ( -0.82%)
    Percentage huge-18 91.51 ( 0.00%) 89.58 ( -2.11%)
    Percentage huge-24 90.50 ( 0.00%) 90.03 ( -0.52%)
    Percentage huge-30 91.57 ( 0.00%) 89.14 ( -2.65%)
    Percentage huge-32 91.00 ( 0.00%) 90.58 ( -0.46%)

    Negligible difference but this was likely a case when the specific
    corner case was not hit. A previous run of the same patch based on an
    earlier iteration of the series showed large differences where migration
    rates could be halved when the corner case was hit.

    The specific corner case where migration scan rates go through the roof,
    due to a dirty/writeback pageblock located at the boundary of the
    migration/free scanners, did not happen in this case. When it does
    happen, the scan rates multiply by massive margins.

    Link: http://lkml.kernel.org/r/20190118175136.31341-13-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Similar to the migration scanner, this patch uses the free lists to
    quickly locate a migration target. The search is different in that
    lower orders will be searched for a suitable high PFN if necessary but
    the search is still bound. This is justified on the grounds that the
    free scanner typically scans linearly much more than the migration
    scanner.

    If a free page is found, it is isolated and compaction continues if
    enough pages were isolated. For SYNC* scanning, the full pageblock is
    scanned for any remaining free pages so that it can be marked for
    skipping in the near future.

    1-socket thpfioscale
    5.0.0-rc1 5.0.0-rc1
    isolmig-v3r15 findfree-v3r16
    Amean fault-both-3 3024.41 ( 0.00%) 3200.68 ( -5.83%)
    Amean fault-both-5 4749.30 ( 0.00%) 4847.75 ( -2.07%)
    Amean fault-both-7 6454.95 ( 0.00%) 6658.92 ( -3.16%)
    Amean fault-both-12 10324.83 ( 0.00%) 11077.62 ( -7.29%)
    Amean fault-both-18 12896.82 ( 0.00%) 12403.97 ( 3.82%)
    Amean fault-both-24 13470.60 ( 0.00%) 15607.10 * -15.86%*
    Amean fault-both-30 17143.99 ( 0.00%) 18752.27 ( -9.38%)
    Amean fault-both-32 17743.91 ( 0.00%) 21207.54 * -19.52%*

    The impact on latency is variable but the search is optimistic and
    sensitive to the exact system state. Success rates are similar but the
    major impact is to the rate of scanning

    5.0.0-rc1 5.0.0-rc1
    isolmig-v3r15 findfree-v3r16
    Compaction migrate scanned 25646769 29507205
    Compaction free scanned 201558184 100359571

    The free scan rates are reduced by 50%. The 2-socket reductions for the
    free scanner are more dramatic which is a likely reflection that the
    machine has more memory.

    [dan.carpenter@oracle.com: fix static checker warning]
    [vbabka@suse.cz: correct number of pages scanned for lower orders]
    Link: http://lkml.kernel.org/r/20190118175136.31341-12-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Dan Carpenter
    Cc: Andrea Arcangeli
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Due to either a fast search of the free list or a linear scan, it is
    possible for multiple compaction instances to pick the same pageblock
    for migration. This is lucky for one scanner but means increased scanning
    for all the others. It also allows a race between requests over which one
    first allocates the resulting free block.

    This patch tests and updates the pageblock skip for the migration
    scanner carefully. When isolating a block, it will check and skip if
    the block is already in use. Once the zone lock is acquired, it will be
    rechecked so that only one scanner can set the pageblock skip for
    exclusive use. Any scanner contending will continue with a linear scan.
    The skip bit is still set if no pages can be isolated in a range. While
    this may result in redundant scanning, it avoids unnecessarily acquiring
    the zone lock when there are no suitable migration sources.

    1-socket thpscale
    5.0.0-rc1 5.0.0-rc1
    findmig-v3r15 isolmig-v3r15
    Amean fault-both-1 0.00 ( 0.00%) 0.00 * 0.00%*
    Amean fault-both-3 3390.40 ( 0.00%) 3024.41 ( 10.80%)
    Amean fault-both-5 5082.28 ( 0.00%) 4749.30 ( 6.55%)
    Amean fault-both-7 7012.51 ( 0.00%) 6454.95 ( 7.95%)
    Amean fault-both-12 11346.63 ( 0.00%) 10324.83 ( 9.01%)
    Amean fault-both-18 15324.19 ( 0.00%) 12896.82 * 15.84%*
    Amean fault-both-24 16088.50 ( 0.00%) 13470.60 * 16.27%*
    Amean fault-both-30 18723.42 ( 0.00%) 17143.99 ( 8.44%)
    Amean fault-both-32 18612.01 ( 0.00%) 17743.91 ( 4.66%)

    5.0.0-rc1 5.0.0-rc1
    findmig-v3r15 isolmig-v3r15
    Percentage huge-3 89.83 ( 0.00%) 92.96 ( 3.48%)
    Percentage huge-5 91.96 ( 0.00%) 93.26 ( 1.41%)
    Percentage huge-7 92.85 ( 0.00%) 93.63 ( 0.84%)
    Percentage huge-12 92.74 ( 0.00%) 92.80 ( 0.07%)
    Percentage huge-18 91.71 ( 0.00%) 91.62 ( -0.10%)
    Percentage huge-24 92.13 ( 0.00%) 91.50 ( -0.69%)
    Percentage huge-30 93.79 ( 0.00%) 92.73 ( -1.13%)
    Percentage huge-32 91.27 ( 0.00%) 91.94 ( 0.74%)

    This shows a reasonable reduction in latency as multiple compaction
    scanners do not operate on the same blocks with a similar allocation
    success rate.

    Compaction migrate scanned 41093126 25646769

    Migration scan rates are reduced by 38%.

    Link: http://lkml.kernel.org/r/20190118175136.31341-11-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    The migration scanner is a linear scan of a zone with a potentially large
    search space. Furthermore, many pageblocks are unusable such as those
    filled with reserved pages or partially filled with pages that cannot
    migrate. These still get scanned in the common case of allocating a THP
    and the cost accumulates.

    The patch uses a partial search of the free lists to locate a migration
    source candidate that is marked as MOVABLE when allocating a THP. It
    prefers picking a block with a larger number of free pages already on
    the basis that there are fewer pages to migrate to free the entire
    block. The lowest PFN found during searches is tracked as the basis of
    the start for the linear search after the first search of the free list
    fails. After the search, the free list is shuffled so that the next
    search will not encounter the same page. If the search fails then the
    subsequent searches will be shorter and the linear scanner is used.

    If this search fails, or if the request is for a small or
    unmovable/reclaimable allocation then the linear scanner is still used.
    It is somewhat pointless to use the list search in those cases. Small
    free pages must be used for the search, and there is no guarantee that
    contiguous movable pages are located within that block.

    5.0.0-rc1 5.0.0-rc1
    noboost-v3r10 findmig-v3r15
    Amean fault-both-3 3771.41 ( 0.00%) 3390.40 ( 10.10%)
    Amean fault-both-5 5409.05 ( 0.00%) 5082.28 ( 6.04%)
    Amean fault-both-7 7040.74 ( 0.00%) 7012.51 ( 0.40%)
    Amean fault-both-12 11887.35 ( 0.00%) 11346.63 ( 4.55%)
    Amean fault-both-18 16718.19 ( 0.00%) 15324.19 ( 8.34%)
    Amean fault-both-24 21157.19 ( 0.00%) 16088.50 * 23.96%*
    Amean fault-both-30 21175.92 ( 0.00%) 18723.42 * 11.58%*
    Amean fault-both-32 21339.03 ( 0.00%) 18612.01 * 12.78%*

    5.0.0-rc1 5.0.0-rc1
    noboost-v3r10 findmig-v3r15
    Percentage huge-3 86.50 ( 0.00%) 89.83 ( 3.85%)
    Percentage huge-5 92.52 ( 0.00%) 91.96 ( -0.61%)
    Percentage huge-7 92.44 ( 0.00%) 92.85 ( 0.44%)
    Percentage huge-12 92.98 ( 0.00%) 92.74 ( -0.25%)
    Percentage huge-18 91.70 ( 0.00%) 91.71 ( 0.02%)
    Percentage huge-24 91.59 ( 0.00%) 92.13 ( 0.60%)
    Percentage huge-30 90.14 ( 0.00%) 93.79 ( 4.04%)
    Percentage huge-32 90.03 ( 0.00%) 91.27 ( 1.37%)

    This shows an improvement in allocation latencies with similar
    allocation success rates. While not presented, there was a 31%
    reduction in migration scanning and an 8% reduction in system CPU usage.
    A 2-socket machine showed similar benefits.

    [mgorman@techsingularity.net: several fixes]
    Link: http://lkml.kernel.org/r/20190204120111.GL9565@techsingularity.net
    [vbabka@suse.cz: migrate block that was found-fast, some optimisations]
    Link: http://lkml.kernel.org/r/20190118175136.31341-10-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • When compaction is finishing, it uses a flag to ensure the pageblock is
    complete but it makes sense to always complete migration of a pageblock.
    Minimally, skip information is based on a pageblock and partially
    scanned pageblocks may incur more scanning in the future. The pageblock
    skip handling also becomes more strict later in the series and the hint
    is more useful if a complete pageblock was always scanned.

    This potentially impacts latency as more scanning is done but it's not a
    consistent win or loss as the scanning is not always a high percentage
    of the pageblock and sometimes it is offset by future reductions in
    scanning. Hence, the results are not presented this time due to a
    misleading mix of gains/losses without any clear pattern. However, full
    scanning of the pageblock is important for later patches.

    Link: http://lkml.kernel.org/r/20190118175136.31341-8-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • It's non-obvious that high-order free pages are split into order-0 pages
    from the function name. Fix it.

    Link: http://lkml.kernel.org/r/20190118175136.31341-6-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • A zone parameter is passed into a number of top-level compaction
    functions despite the fact that it's already in compact_control. This
    is harmless but it did need an audit to check if zone actually ever
    changes meaningfully. This patch removes the parameter in a number of
    top-level functions. The change could be much deeper but this was
    enough to briefly clarify the flow.

    No functional change.

    Link: http://lkml.kernel.org/r/20190118175136.31341-5-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • The last_migrated_pfn field is a bit dubious as to whether it really
    helps but either way, the information from it can be inferred without
    increasing the size of compact_control so remove the field.

    Link: http://lkml.kernel.org/r/20190118175136.31341-4-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Dan Carpenter
    Cc: David Rientjes
    Cc: YueHaibing
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • sysctl_extfrag_handler() neglects to propagate the return value from
    proc_dointvec_minmax() to its caller. It's a wrapper that doesn't need
    to exist, so just use proc_dointvec_minmax() directly.

    Link: http://lkml.kernel.org/r/20190104032557.3056-1-willy@infradead.org
    Signed-off-by: Matthew Wilcox
    Reported-by: Aditya Pakki
    Acked-by: Mel Gorman
    Acked-by: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox
     

29 Dec, 2018

1 commit

  • This is a preparation patch only, no functional change.

    Link: http://lkml.kernel.org/r/20181123114528.28802-3-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: David Rientjes
    Cc: Michal Hocko
    Cc: Zi Yan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

27 Oct, 2018

1 commit

  • When systems are overcommitted and resources become contended, it's hard
    to tell exactly the impact this has on workload productivity, or how close
    the system is to lockups and OOM kills. In particular, when machines work
    multiple jobs concurrently, the impact of overcommit in terms of latency
    and throughput on the individual job can be enormous.

    In order to maximize hardware utilization without sacrificing individual
    job health or risk complete machine lockups, this patch implements a way
    to quantify resource pressure in the system.

    A kernel built with CONFIG_PSI=y creates files in /proc/pressure/ that
    expose the percentage of time the system is stalled on CPU, memory, or IO,
    respectively. Stall states are aggregate versions of the per-task delay
    accounting delays:

    cpu: some tasks are runnable but not executing on a CPU
    memory: tasks are reclaiming, or waiting for swapin or thrashing cache
    io: tasks are waiting for io completions

    These percentages of walltime can be thought of as pressure percentages,
    and they give a general sense of system health and productivity loss
    incurred by resource overcommit. They can also indicate when the system
    is approaching lockup scenarios and OOMs.

    To do this, psi keeps track of the task states associated with each CPU
    and samples the time they spend in stall states. Every 2 seconds, the
    samples are averaged across CPUs - weighted by the CPUs' non-idle time to
    eliminate artifacts from unused CPUs - and translated into percentages of
    walltime. A running average of those percentages is maintained over 10s,
    1m, and 5m periods (similar to the loadaverage).

    [hannes@cmpxchg.org: doc fixlet, per Randy]
    Link: http://lkml.kernel.org/r/20180828205625.GA14030@cmpxchg.org
    [hannes@cmpxchg.org: code optimization]
    Link: http://lkml.kernel.org/r/20180907175015.GA8479@cmpxchg.org
    [hannes@cmpxchg.org: rename psi_clock() to psi_update_work(), per Peter]
    Link: http://lkml.kernel.org/r/20180907145404.GB11088@cmpxchg.org
    [hannes@cmpxchg.org: fix build]
    Link: http://lkml.kernel.org/r/20180913014222.GA2370@cmpxchg.org
    Link: http://lkml.kernel.org/r/20180828172258.3185-9-hannes@cmpxchg.org
    Signed-off-by: Johannes Weiner
    Acked-by: Peter Zijlstra (Intel)
    Tested-by: Daniel Drake
    Tested-by: Suren Baghdasaryan
    Cc: Christopher Lameter
    Cc: Ingo Molnar
    Cc: Johannes Weiner
    Cc: Mike Galbraith
    Cc: Peter Enderborg
    Cc: Randy Dunlap
    Cc: Shakeel Butt
    Cc: Tejun Heo
    Cc: Vinayak Menon
    Cc: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

15 Jun, 2018

1 commit

  • mm/*.c files use symbolic and octal styles for permissions.

    Using octal rather than symbolic permissions is preferred by many, as it
    is more readable.

    https://lkml.org/lkml/2016/8/2/1945

    Prefer the direct use of octal for permissions.

    Done using
    $ scripts/checkpatch.pl -f --types=SYMBOLIC_PERMS --fix-inplace mm/*.c
    and some typing.

    Before: $ git grep -P -w "0[0-7]{3,3}" mm | wc -l
    44
    After: $ git grep -P -w "0[0-7]{3,3}" mm | wc -l
    86
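
    As a hedged illustration of the kind of conversion this produces (the
    parameter name below is hypothetical, not taken from the patch):

        /* Before: symbolic permission macros */
        module_param(example_knob, int, S_IRUGO | S_IWUSR);

        /* After: the equivalent octal value (S_IRUGO | S_IWUSR == 0644) */
        module_param(example_knob, int, 0644);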

    Miscellanea:

    o Whitespace neatening around these conversions.

    Link: http://lkml.kernel.org/r/2e032ef111eebcd4c5952bae86763b541d373469.1522102887.git.joe@perches.com
    Signed-off-by: Joe Perches
    Acked-by: David Rientjes
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

25 May, 2018

1 commit

  • This reverts the following commits that change CMA design in MM.

    3d2054ad8c2d ("ARM: CMA: avoid double mapping to the CMA area if CONFIG_HIGHMEM=y")

    1d47a3ec09b5 ("mm/cma: remove ALLOC_CMA")

    bad8c6c0b114 ("mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE")

    Ville reported the following error on i386.

    Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
    microcode: microcode updated early to revision 0x4, date = 2013-06-28
    Initializing CPU#0
    Initializing HighMem for node 0 (000377fe:00118000)
    Initializing Movable for node 0 (00000001:00118000)
    BUG: Bad page state in process swapper pfn:377fe
    page:f53effc0 count:0 mapcount:-127 mapping:00000000 index:0x0
    flags: 0x80000000()
    raw: 80000000 00000000 00000000 ffffff80 00000000 00000100 00000200 00000001
    page dumped because: nonzero mapcount
    Modules linked in:
    CPU: 0 PID: 0 Comm: swapper Not tainted 4.17.0-rc5-elk+ #145
    Hardware name: Dell Inc. Latitude E5410/03VXMC, BIOS A15 07/11/2013
    Call Trace:
    dump_stack+0x60/0x96
    bad_page+0x9a/0x100
    free_pages_check_bad+0x3f/0x60
    free_pcppages_bulk+0x29d/0x5b0
    free_unref_page_commit+0x84/0xb0
    free_unref_page+0x3e/0x70
    __free_pages+0x1d/0x20
    free_highmem_page+0x19/0x40
    add_highpages_with_active_regions+0xab/0xeb
    set_highmem_pages_init+0x66/0x73
    mem_init+0x1b/0x1d7
    start_kernel+0x17a/0x363
    i386_start_kernel+0x95/0x99
    startup_32_smp+0x164/0x168

    The reason for this error is that the span of ZONE_MOVABLE is extended
    to the whole node span for future CMA initialization, and normal memory
    is wrongly freed here. I submitted a fix and it seemed to work, but
    another problem then appeared.

    It is too late in the cycle to fix that later problem, so I am reverting
    the series.

    Reported-by: Ville Syrjälä
    Acked-by: Laura Abbott
    Acked-by: Michal Hocko
    Cc: Andrew Morton
    Signed-off-by: Joonsoo Kim
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     

12 Apr, 2018

2 commits

  • Now, all reserved pages for the CMA region belong to ZONE_MOVABLE, and
    that zone only serves requests with GFP_HIGHMEM && GFP_MOVABLE.

    Therefore, we don't need to maintain ALLOC_CMA at all.
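
    For context, a simplified and hypothetical sketch of the zone-selection
    idea behind this (an illustration, not the kernel's actual gfp_zone()
    implementation):

        #include <linux/gfp.h>

        /*
         * Illustration only: with the CMA-reserved pages now living in
         * ZONE_MOVABLE, they are reachable exactly by requests that are
         * both highmem-capable and movable, so no extra ALLOC_CMA flag
         * is needed to steer allocations toward them.
         */
        static inline bool request_may_use_movable_zone(gfp_t gfp_mask)
        {
                return (gfp_mask & __GFP_HIGHMEM) && (gfp_mask & __GFP_MOVABLE);
        }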

    Link: http://lkml.kernel.org/r/1512114786-5085-3-git-send-email-iamjoonsoo.kim@lge.com
    Signed-off-by: Joonsoo Kim
    Reviewed-by: Aneesh Kumar K.V
    Tested-by: Tony Lindgren
    Acked-by: Vlastimil Babka
    Cc: Johannes Weiner
    Cc: Laura Abbott
    Cc: Marek Szyprowski
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Michal Nazarewicz
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Russell King
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • No allocation callback uses this argument anymore. new_page_node
    used to use this parameter to convey the node_id, or alternatively a
    migration error, up to the move_pages code (do_move_page_to_node_array).
    The error status never made it into the final status field, and we now
    have a better way of communicating the node id to the status field.
    All other allocation callbacks simply ignored the argument, so we can
    finally drop it.
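
    A sketch of the resulting callback-signature change; the typedef shown is
    an approximation of the migration allocation callback interface, written
    here for illustration:

        /* Before: callbacks received a third argument they mostly ignored. */
        typedef struct page *new_page_t(struct page *page, unsigned long private,
                                        int **reason);

        /* After: the unused argument is dropped from all allocation callbacks. */
        typedef struct page *new_page_t(struct page *page, unsigned long private);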

    [mhocko@suse.com: fix migration callback]
    Link: http://lkml.kernel.org/r/20180105085259.GH2801@dhcp22.suse.cz
    [akpm@linux-foundation.org: fix alloc_misplaced_dst_page()]
    [mhocko@kernel.org: fix build]
    Link: http://lkml.kernel.org/r/20180103091134.GB11319@dhcp22.suse.cz
    Link: http://lkml.kernel.org/r/20180103082555.14592-3-mhocko@kernel.org
    Signed-off-by: Michal Hocko
    Reviewed-by: Zi Yan
    Cc: Andrea Reale
    Cc: Anshuman Khandual
    Cc: Kirill A. Shutemov
    Cc: Mike Kravetz
    Cc: Naoya Horiguchi
    Cc: Vlastimil Babka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

06 Apr, 2018

2 commits

  • Link: http://lkml.kernel.org/r/1519585191-10180-4-git-send-email-rppt@linux.vnet.ibm.com
    Signed-off-by: Mike Rapoport
    Reviewed-by: Andrew Morton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     
  • It's possible for free pages to become stranded on per-cpu pagesets
    (pcps) that, if drained, could be merged with buddy pages on the zone's
    free area to form large order pages, including up to MAX_ORDER.

    Consider a verbose example using the tools/vm/page-types tool at the
    beginning of a ZONE_NORMAL ('B' indicates a buddy page and 'S' indicates
    a slab page). Pages on pcps do not have any page flags set.

    109954 1 _______S________________________________________________________
    109955 2 __________B_____________________________________________________
    109957 1 ________________________________________________________________
    109958 1 __________B_____________________________________________________
    109959 7 ________________________________________________________________
    109960 1 __________B_____________________________________________________
    109961 9 ________________________________________________________________
    10996a 1 __________B_____________________________________________________
    10996b 3 ________________________________________________________________
    10996e 1 __________B_____________________________________________________
    10996f 1 ________________________________________________________________
    ...
    109f8c 1 __________B_____________________________________________________
    109f8d 2 ________________________________________________________________
    109f8f 2 __________B_____________________________________________________
    109f91 f ________________________________________________________________
    109fa0 1 __________B_____________________________________________________
    109fa1 7 ________________________________________________________________
    109fa8 1 __________B_____________________________________________________
    109fa9 1 ________________________________________________________________
    109faa 1 __________B_____________________________________________________
    109fab 1 _______S________________________________________________________

    The compaction migration scanner is attempting to defragment this memory
    since it is at the beginning of the zone. It has done so quite well:
    all movable pages have been migrated. From pfn [0x109955, 0x109fab),
    there are only buddy pages and pages without flags set.

    These pages may be stranded on pcps that could otherwise allow this
    memory to be coalesced if freed back to the zone free area. It is
    possible that some of these pages may not be on pcps and that something
    has called alloc_pages() and used the memory directly, but we rely on
    the absence of __GFP_MOVABLE in these cases to allocate from
    MIGRATE_UNMOVABLE pageblocks to try to keep these MIGRATE_MOVABLE
    pageblocks as free as possible.

    These buddy and pcp pages, spanning 1,621 pages, could be coalesced and
    allow for three transparent hugepages to be dynamically allocated.
    Running the numbers for all such spans on the system, it was found that
    there were over 400 such spans of only buddy pages and pages without
    flags set at the time this /proc/kpageflags sample was collected.
    Without this support, there were _no_ order-9 or order-10 pages free.

    When kcompactd fails to defragment memory such that a cc.order page can
    be allocated, drain all pcps for the zone back to the buddy allocator so
    this stranding cannot occur. Compaction for that order will
    subsequently be deferred, which acts as a ratelimit on this drain.
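
    A minimal sketch of that behavior, assuming it hooks into kcompactd's
    per-zone loop; the surrounding control flow is abbreviated and the exact
    status check and placement are assumptions:

        /*
         * If this kcompactd pass finished without making a cc.order page
         * available, flush the per-cpu pagesets back to the buddy allocator
         * so stranded pages can coalesce, then defer further compaction of
         * this order, which ratelimits the drain.
         */
        if (status == COMPACT_COMPLETE) {
                drain_all_pages(zone);
                defer_compaction(zone, cc.order);
        }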

    Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1803010340100.88270@chino.kir.corp.google.com
    Signed-off-by: David Rientjes
    Acked-by: Vlastimil Babka
    Cc: Mel Gorman
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes