09 Jun, 2022

1 commit

  • commit bbe832b9db2e1ad21522f8f0bf02775fff8a0e0e upstream.

    At present, pages not in the target zone are added to cc->migratepages
    list in isolate_migratepages_block(). As a result, pages may migrate
    between nodes unintentionally.

    This would be a serious problem for older kernels without commit
    a984226f457f849e ("mm: memcontrol: remove the pgdata parameter of
    mem_cgroup_page_lruvec"), because it can corrupt the lru list by
    handling pages in the list without holding the proper lru_lock.

    Avoid returning a pfn outside the target zone in the case that it is
    not aligned with a pageblock boundary. Otherwise
    isolate_migratepages_block() will handle pages not in the target zone.
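
    For illustration, a small standalone sketch of the clamp this fix
    describes (not the literal kernel diff; pageblock order 9 and the pfn
    values are assumptions for the example): rounding the found free pfn
    down to its pageblock start can land below the zone start when the
    zone begins mid-pageblock, so the result is clamped to zone_start_pfn.

        #include <stdio.h>

        #define PAGEBLOCK_ORDER 9UL                      /* assumed order-9 pageblocks */
        #define PAGEBLOCK_NR    (1UL << PAGEBLOCK_ORDER)

        static unsigned long pageblock_start_pfn(unsigned long pfn)
        {
                return pfn & ~(PAGEBLOCK_NR - 1);        /* round down to pageblock start */
        }

        int main(void)
        {
                unsigned long zone_start_pfn = 0x100100; /* zone starts mid-pageblock */
                unsigned long free_pfn = 0x100180;       /* candidate found by the fast search */
                unsigned long pfn = pageblock_start_pfn(free_pfn);

                if (pfn < zone_start_pfn)                /* the clamp described above */
                        pfn = zone_start_pfn;

                printf("pageblock start 0x%lx -> clamped to 0x%lx\n",
                       pageblock_start_pfn(free_pfn), pfn);
                return 0;
        }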

    Link: https://lkml.kernel.org/r/20220511044300.4069-1-yamamoto.rei@jp.fujitsu.com
    Fixes: 70b44595eafe ("mm, compaction: use free lists to quickly locate a migration source")
    Signed-off-by: Rei Yamamoto
    Reviewed-by: Miaohe Lin
    Acked-by: Mel Gorman
    Reviewed-by: Oscar Salvador
    Cc: Don Dutile
    Cc: Wonhyuk Yang
    Cc: Rei Yamamoto
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Greg Kroah-Hartman

    Rei Yamamoto
     

09 Sep, 2021

2 commits

  • Merge more updates from Andrew Morton:
    "147 patches, based on 7d2a07b769330c34b4deabeed939325c77a7ec2f.

    Subsystems affected by this patch series: mm (memory-hotplug, rmap,
    ioremap, highmem, cleanups, secretmem, kfence, damon, and vmscan),
    alpha, percpu, procfs, misc, core-kernel, MAINTAINERS, lib,
    checkpatch, epoll, init, nilfs2, coredump, fork, pids, criu, kconfig,
    selftests, ipc, and scripts"

    * emailed patches from Andrew Morton : (94 commits)
    scripts: check_extable: fix typo in user error message
    mm/workingset: correct kernel-doc notations
    ipc: replace costly bailout check in sysvipc_find_ipc()
    selftests/memfd: remove unused variable
    Kconfig.debug: drop selecting non-existing HARDLOCKUP_DETECTOR_ARCH
    configs: remove the obsolete CONFIG_INPUT_POLLDEV
    prctl: allow to setup brk for et_dyn executables
    pid: cleanup the stale comment mentioning pidmap_init().
    kernel/fork.c: unexport get_{mm,task}_exe_file
    coredump: fix memleak in dump_vma_snapshot()
    fs/coredump.c: log if a core dump is aborted due to changed file permissions
    nilfs2: use refcount_dec_and_lock() to fix potential UAF
    nilfs2: fix memory leak in nilfs_sysfs_delete_snapshot_group
    nilfs2: fix memory leak in nilfs_sysfs_create_snapshot_group
    nilfs2: fix memory leak in nilfs_sysfs_delete_##name##_group
    nilfs2: fix memory leak in nilfs_sysfs_create_##name##_group
    nilfs2: fix NULL pointer in nilfs_##name##_attr_release
    nilfs2: fix memory leak in nilfs_sysfs_create_device_group
    trap: cleanup trap_init()
    init: move usermodehelper_enable() to populate_rootfs()
    ...

    Linus Torvalds
     
  • Patch series "mm: remove pfn_valid_within() and CONFIG_HOLES_IN_ZONE".

    After recent updates to freeing unused parts of the memory map, no
    architecture can have holes in the memory map within a pageblock. This
    makes pfn_valid_within() check and CONFIG_HOLES_IN_ZONE configuration
    option redundant.

    The first patch removes them both in a mechanical way and the second patch
    simplifies memory_hotplug::test_pages_in_a_zone() that had
    pfn_valid_within() surrounded by more logic than a simple if.

    This patch (of 2):

    After recent changes in freeing of the unused parts of the memory map and
    the rework of pfn_valid() in arm and arm64, there are no architectures that
    can have holes in the memory map within a pageblock, so nothing can enable
    CONFIG_HOLES_IN_ZONE, which guards the non-trivial implementation of
    pfn_valid_within().

    With that, pfn_valid_within() is always hardwired to 1 and can be
    completely removed.

    Remove calls to pfn_valid_within() and CONFIG_HOLES_IN_ZONE.
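
    For reference, the removed helper was defined roughly as in the sketch
    below (from memory of include/linux/mmzone.h; the exact form may differ
    slightly). With no architecture able to select CONFIG_HOLES_IN_ZONE,
    only the trivial branch is ever compiled:

        #include <stdio.h>

        /* #define CONFIG_HOLES_IN_ZONE */              /* no architecture can set this any more */

        #ifdef CONFIG_HOLES_IN_ZONE
        #define pfn_valid_within(pfn)   pfn_valid(pfn)  /* real check, only needed with holes in a pageblock */
        #else
        #define pfn_valid_within(pfn)   (1)             /* hardwired to 1, hence removable */
        #endif

        int main(void)
        {
                printf("pfn_valid_within(42) = %d\n", pfn_valid_within(42));
                return 0;
        }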

    Link: https://lkml.kernel.org/r/20210713080035.7464-1-rppt@kernel.org
    Link: https://lkml.kernel.org/r/20210713080035.7464-2-rppt@kernel.org
    Signed-off-by: Mike Rapoport
    Acked-by: David Hildenbrand
    Cc: Greg Kroah-Hartman
    Cc: "Rafael J. Wysocki"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     

04 Sep, 2021

3 commits

    Proactive compaction[1] is triggered every 500 msec and runs compaction on
    the node for COMPACTION_HPAGE_ORDER (usually order-9) pages, based on the
    value set in sysctl.compaction_proactiveness. Triggering compaction every
    500 msec in search of COMPACTION_HPAGE_ORDER pages is not needed for all
    applications, especially on embedded systems which may have only a few MBs
    of RAM. Leaving proactive compaction always enabled will end up running it
    almost constantly on such systems.

    On the other hand, proactive compaction can still be very useful for
    getting a set of higher-order pages in a controllable manner (controlled
    via sysctl.compaction_proactiveness). So, on systems where keeping
    proactive compaction always enabled may prove unnecessary, it can instead
    be triggered from user space by writing to its sysctl interface. As an
    example, say an app launcher decides to launch a memory-heavy application
    which can be launched faster if it gets more higher-order pages; the
    launcher can prepare the system in advance by triggering proactive
    compaction from userspace.

    This triggering of proactive compaction is done on a write to
    sysctl.compaction_proactiveness by the user.

    [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
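
    As a usage illustration (not part of the patch), the trigger is just a
    plain write to the procfs view of the sysctl; the value 20 is only an
    example proactiveness and the program needs root on a kernel carrying
    this change:

        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("/proc/sys/vm/compaction_proactiveness", "w");

                if (!f) {
                        perror("compaction_proactiveness");
                        return 1;
                }
                fprintf(f, "20\n");     /* the write itself kicks off a proactive compaction cycle */
                fclose(f);
                return 0;
        }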

    [akpm@linux-foundation.org: tweak vm.rst, per Mike]

    Link: https://lkml.kernel.org/r/1627653207-12317-1-git-send-email-charante@codeaurora.org
    Signed-off-by: Charan Teja Reddy
    Acked-by: Vlastimil Babka
    Acked-by: Rafael Aquini
    Cc: Mike Rapoport
    Cc: Luis Chamberlain
    Cc: Kees Cook
    Cc: Iurii Zaikin
    Cc: Dave Hansen
    Cc: Mel Gorman
    Cc: Nitin Gupta
    Cc: Jonathan Corbet
    Cc: Khalid Aziz
    Cc: David Rientjes
    Cc: Vinayak Menon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Charan Teja Reddy
     
    Vlastimil Babka figured out that when the fragmentation score doesn't go
    down across a proactive compaction run, i.e. when no progress is made, the
    next wakeup for proactive compaction is deferred for
    1 << COMPACT_MAX_DEFER_SHIFT, i.e. 64 times, with each wakeup interval
    being HPAGE_FRAG_CHECK_INTERVAL_MSEC (= 500). On each of these wakeups,
    kcompactd just decrements the 'proactive_defer' counter and goes back to
    sleep, i.e. it is woken up only to decrement a counter.

    The same deferral time can also be achieved by simply sleeping for
    HPAGE_FRAG_CHECK_INTERVAL_MSEC << COMPACT_MAX_DEFER_SHIFT, which avoids the
    unnecessary wakeups of the kcompactd thread and also removes the need for
    the 'proactive_defer' counter.
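
    A quick arithmetic check of the deferral described above, assuming
    HPAGE_FRAG_CHECK_INTERVAL_MSEC = 500 and COMPACT_MAX_DEFER_SHIFT = 6
    (so 1 << 6 = 64 skipped wakeups): one long sleep replaces 64 short ones.

        #include <stdio.h>

        #define HPAGE_FRAG_CHECK_INTERVAL_MSEC  500U
        #define COMPACT_MAX_DEFER_SHIFT         6U      /* assumed: gives the 64x mentioned above */

        int main(void)
        {
                unsigned int timeout =
                        HPAGE_FRAG_CHECK_INTERVAL_MSEC << COMPACT_MAX_DEFER_SHIFT;

                printf("deferred timeout = %u msec (~%u sec)\n", timeout, timeout / 1000);
                return 0;
        }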

    [akpm@linux-foundation.org: tweak comment]

    Link: https://lore.kernel.org/linux-fsdevel/88abfdb6-2c13-b5a6-5b46-742d12d1c910@suse.cz/
    Link: https://lkml.kernel.org/r/1626869599-25412-1-git-send-email-charante@codeaurora.org
    Signed-off-by: Charan Teja Reddy
    Acked-by: Vlastimil Babka
    Reviewed-by: Khalid Aziz
    Acked-by: David Rientjes
    Cc: Nitin Gupta
    Cc: Vinayak Menon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Charan Teja Reddy
     
  • Under normal circumstances, migrate_pages() returns the number of pages
    migrated. In error conditions, it returns an error code. When returning
    an error code, there is no way to know how many pages were migrated or not
    migrated.

    Make migrate_pages() return how many pages are demoted successfully for
    all cases, including when encountering errors. Page reclaim behavior will
    depend on this in subsequent patches.

    Link: https://lkml.kernel.org/r/20210721063926.3024591-3-ying.huang@intel.com
    Link: https://lkml.kernel.org/r/20210715055145.195411-4-ying.huang@intel.com
    Signed-off-by: Yang Shi
    Signed-off-by: Dave Hansen
    Signed-off-by: "Huang, Ying"
    Suggested-by: Oscar Salvador [optional parameter]
    Reviewed-by: Yang Shi
    Reviewed-by: Zi Yan
    Cc: Michal Hocko
    Cc: Wei Xu
    Cc: Dan Williams
    Cc: David Hildenbrand
    Cc: David Rientjes
    Cc: Greg Thelen
    Cc: Keith Busch
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yang Shi
     

03 Jul, 2021

1 commit

  • Merge more updates from Andrew Morton:
    "190 patches.

    Subsystems affected by this patch series: mm (hugetlb, userfaultfd,
    vmscan, kconfig, proc, z3fold, zbud, ras, mempolicy, memblock,
    migration, thp, nommu, kconfig, madvise, memory-hotplug, zswap,
    zsmalloc, zram, cleanups, kfence, and hmm), procfs, sysctl, misc,
    core-kernel, lib, lz4, checkpatch, init, kprobes, nilfs2, hfs,
    signals, exec, kcov, selftests, compress/decompress, and ipc"

    * emailed patches from Andrew Morton : (190 commits)
    ipc/util.c: use binary search for max_idx
    ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock
    ipc: use kmalloc for msg_queue and shmid_kernel
    ipc sem: use kvmalloc for sem_undo allocation
    lib/decompressors: remove set but not used variabled 'level'
    selftests/vm/pkeys: exercise x86 XSAVE init state
    selftests/vm/pkeys: refill shadow register after implicit kernel write
    selftests/vm/pkeys: handle negative sys_pkey_alloc() return code
    selftests/vm/pkeys: fix alloc_random_pkey() to make it really, really random
    kcov: add __no_sanitize_coverage to fix noinstr for all architectures
    exec: remove checks in __register_bimfmt()
    x86: signal: don't do sas_ss_reset() until we are certain that sigframe won't be abandoned
    hfsplus: report create_date to kstat.btime
    hfsplus: remove unnecessary oom message
    nilfs2: remove redundant continue statement in a while-loop
    kprobes: remove duplicated strong free_insn_page in x86 and s390
    init: print out unknown kernel parameters
    checkpatch: do not complain about positive return values starting with EPOLL
    checkpatch: improve the indented label test
    checkpatch: scripts/spdxcheck.py now requires python3
    ...

    Linus Torvalds
     

01 Jul, 2021

3 commits

    Because of 'min(1, ...)', fast_isolate_freepages() sets 'limit' to 0 or 1.
    This takes away opportunities to find candidate pages. So, by making
    enough scans available, increase the probability of finding an appropriate
    freepage.
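
    An illustrative sketch of the 'limit' problem (not the kernel diff;
    freelist_scan_limit() here is a stand-in returning an example value):
    min(1, ...) collapses the scan budget to at most one page, while letting
    it scale with the freelist scan limit gives the search a real chance to
    find a candidate.

        #include <stdio.h>

        #define min(a, b)  ((a) < (b) ? (a) : (b))
        #define max(a, b)  ((a) > (b) ? (a) : (b))

        static unsigned int freelist_scan_limit(void)
        {
                return 64;                      /* example value only */
        }

        int main(void)
        {
                unsigned int before = min(1U, freelist_scan_limit() >> 1); /* 1: barely scans */
                unsigned int after  = max(1U, freelist_scan_limit() >> 1); /* 32: enough scans */

                printf("limit before: %u, after: %u\n", before, after);
                return 0;
        }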

    Tested it on the thpscale and the results are as follows.

    5.12.0 5.12.0
    vanilla patched
    Amean fault-both-1 598.15 ( 0.00%) 592.56 ( 0.93%)
    Amean fault-both-3 1494.47 ( 0.00%) 1514.35 ( -1.33%)
    Amean fault-both-5 2519.48 ( 0.00%) 2471.76 ( 1.89%)
    Amean fault-both-7 3173.85 ( 0.00%) 3079.19 ( 2.98%)
    Amean fault-both-12 8063.83 ( 0.00%) 7858.29 ( 2.55%)
    Amean fault-both-18 8781.20 ( 0.00%) 7827.70 * 10.86%*
    Amean fault-both-24 12576.44 ( 0.00%) 12250.20 ( 2.59%)
    Amean fault-both-30 18503.27 ( 0.00%) 17528.11 * 5.27%*
    Amean fault-both-32 16133.69 ( 0.00%) 13874.24 * 14.00%*

    5.12.0 5.12.0
    vanilla patched
    Ops Compaction migrate scanned 6547133.00 5963901.00
    Ops Compaction free scanned 32452453.00 26609101.00

    5.12 5.12
    vanilla patched
    Duration User 27.99 28.84
    Duration System 244.08 236.76
    Duration Elapsed 78.27 78.38

    Link: https://lkml.kernel.org/r/20210626082443.22547-1-vvghjk1234@gmail.com
    Fixes: 5a811889de10f ("mm, compaction: use free lists to quickly locate a migration target")
    Signed-off-by: Wonhyuk Yang
    Acked-by: Mel Gorman
    Cc: Vlastimil Babka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wonhyuk Yang
     
    list_splice_tail(&sublist, freelist) already does the !list_empty(&sublist)
    check internally, so remove the duplicate check at the call site.
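
    The change is essentially dropping the outer emptiness test; roughly
    (a sketch, not the exact diff):

        /* before: redundant check around the splice */
        if (!list_empty(&sublist))
                list_splice_tail(&sublist, freelist);

        /* after: list_splice_tail() already bails out on an empty list */
        list_splice_tail(&sublist, freelist);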

    Link: https://lkml.kernel.org/r/20210609095409.19920-1-liu.xiang@zlingsmart.com
    Signed-off-by: Liu Xiang
    Reviewed-by: David Hildenbrand
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Liu Xiang
     
  • Use DEVICE_ATTR_WO helper instead of plain DEVICE_ATTR, which makes the
    code a bit shorter and easier to read.
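
    The pattern looks roughly like the sketch below ('foo' is a placeholder
    attribute name, not the one actually used in mm/compaction.c):

        /* before: mode and callbacks spelled out by hand */
        static DEVICE_ATTR(foo, 0200, NULL, foo_store);

        /* after: DEVICE_ATTR_WO() derives the 0200 mode and the foo_store() callback */
        static DEVICE_ATTR_WO(foo);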

    Link: https://lkml.kernel.org/r/20210523064521.32912-1-yuehaibing@huawei.com
    Signed-off-by: YueHaibing
    Reviewed-by: Andrew Morton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    YueHaibing
     

30 Jun, 2021

2 commits

  • Merge misc updates from Andrew Morton:
    "191 patches.

    Subsystems affected by this patch series: kthread, ia64, scripts,
    ntfs, squashfs, ocfs2, kernel/watchdog, and mm (gup, pagealloc, slab,
    slub, kmemleak, dax, debug, pagecache, gup, swap, memcg, pagemap,
    mprotect, bootmem, dma, tracing, vmalloc, kasan, initialization,
    pagealloc, and memory-failure)"

    * emailed patches from Andrew Morton : (191 commits)
    mm,hwpoison: make get_hwpoison_page() call get_any_page()
    mm,hwpoison: send SIGBUS with error virutal address
    mm/page_alloc: split pcp->high across all online CPUs for cpuless nodes
    mm/page_alloc: allow high-order pages to be stored on the per-cpu lists
    mm: replace CONFIG_FLAT_NODE_MEM_MAP with CONFIG_FLATMEM
    mm: replace CONFIG_NEED_MULTIPLE_NODES with CONFIG_NUMA
    docs: remove description of DISCONTIGMEM
    arch, mm: remove stale mentions of DISCONIGMEM
    mm: remove CONFIG_DISCONTIGMEM
    m68k: remove support for DISCONTIGMEM
    arc: remove support for DISCONTIGMEM
    arc: update comment about HIGHMEM implementation
    alpha: remove DISCONTIGMEM and NUMA
    mm/page_alloc: move free_the_page
    mm/page_alloc: fix counting of managed_pages
    mm/page_alloc: improve memmap_pages dbg msg
    mm: drop SECTION_SHIFT in code comments
    mm/page_alloc: introduce vm.percpu_pagelist_high_fraction
    mm/page_alloc: limit the number of pages on PCP lists when reclaim is active
    mm/page_alloc: scale the number of pages that are batch freed
    ...

    Linus Torvalds
     
  • All the callers of mem_cgroup_page_lruvec() just pass page_pgdat(page) as
    the 2nd parameter to it (except isolate_migratepages_block()). But for
    isolate_migratepages_block(), the page_pgdat(page) is also equal to the
    local variable @pgdat. So mem_cgroup_page_lruvec() does not need the
    pgdat parameter. Just remove it to simplify the code.
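
    The resulting change at the call sites is roughly (a sketch):

        /* before: every caller passed the pgdat explicitly */
        lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));

        /* after: the pgdat is derived from the page internally */
        lruvec = mem_cgroup_page_lruvec(page);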

    Link: https://lkml.kernel.org/r/20210417043538.9793-4-songmuchun@bytedance.com
    Signed-off-by: Muchun Song
    Acked-by: Johannes Weiner
    Reviewed-by: Shakeel Butt
    Acked-by: Roman Gushchin
    Acked-by: Michal Hocko
    Cc: Vladimir Davydov
    Cc: Xiongchun Duan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Muchun Song
     

18 Jun, 2021

1 commit

  • Replace a bunch of 'p->state == TASK_RUNNING' with a new helper:
    task_is_running(p).
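
    In other words, the helper wraps the open-coded comparison, roughly (a
    sketch; the real helper may add READ_ONCE() or similar):

        /* before: open-coded state check */
        bool running = (p->state == TASK_RUNNING);

        /* after: the same check behind the new helper */
        running = task_is_running(p);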

    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Davidlohr Bueso
    Acked-by: Geert Uytterhoeven
    Acked-by: Will Deacon
    Link: https://lore.kernel.org/r/20210611082838.222401495@infradead.org

    Peter Zijlstra
     

07 May, 2021

1 commit

  • Fix ~94 single-word typos in locking code comments, plus a few
    very obvious grammar mistakes.

    Link: https://lkml.kernel.org/r/20210322212624.GA1963421@gmail.com
    Link: https://lore.kernel.org/r/20210322205203.GB1959563@gmail.com
    Signed-off-by: Ingo Molnar
    Reviewed-by: Matthew Wilcox (Oracle)
    Reviewed-by: Randy Dunlap
    Cc: Bhaskar Chowdhury
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

06 May, 2021

7 commits

  • Various coding style tweaks to various files under mm/

    [daizhiyuan@phytium.com.cn: mm/swapfile: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1614223624-16055-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/sparse: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1614227288-19363-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/vmscan: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1614227649-19853-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/compaction: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1614228218-20770-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/oom_kill: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1614228360-21168-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/shmem: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1614228504-21491-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/page_alloc: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1614228613-21754-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/filemap: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1614228936-22337-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/mlock: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1613956588-2453-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/frontswap: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1613962668-15045-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/vmalloc: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1613963379-15988-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/memory_hotplug: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1613971784-24878-1-git-send-email-daizhiyuan@phytium.com.cn
    [daizhiyuan@phytium.com.cn: mm/mempolicy: minor coding style tweaks]
    Link: https://lkml.kernel.org/r/1613972228-25501-1-git-send-email-daizhiyuan@phytium.com.cn

    Link: https://lkml.kernel.org/r/1614222374-13805-1-git-send-email-daizhiyuan@phytium.com.cn
    Signed-off-by: Zhiyuan Dai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhiyuan Dai
     
  • Currently, migrate_[prep|finish] is merely a wrapper of
    lru_cache_[disable|enable]. There is not much to gain from having
    additional abstraction.

    Use lru_cache_[disable|enable] instead of migrate_[prep|finish], which
    would be more descriptive.

    Note: migrate_prep_local() in compaction.c was changed into lru_add_drain()
    to avoid the CPU scheduling cost of involving many other CPUs, keeping the
    old behavior.
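
    The call sites change roughly as follows (a sketch of the pattern, not a
    literal diff):

        /* before */
        migrate_prep();
        /* ... isolate and migrate pages ... */
        migrate_finish();

        /* after: the thin wrappers are gone, use the primitives directly */
        lru_cache_disable();
        /* ... isolate and migrate pages ... */
        lru_cache_enable();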

    Link: https://lkml.kernel.org/r/20210319175127.886124-2-minchan@kernel.org
    Signed-off-by: Minchan Kim
    Acked-by: Michal Hocko
    Reviewed-by: David Hildenbrand
    Cc: Chris Goldsworthy
    Cc: John Dias
    Cc: Matthew Wilcox
    Cc: Oliver Sang
    Cc: Suren Baghdasaryan
    Cc: Vlastimil Babka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
    By definition, COMPACT[STALL|FAIL] events need to be counted when there
    is 'at least one zone where compaction wasn't deferred or skipped from the
    direct compaction'. But when compaction is skipped or deferred,
    COMPACT_SKIPPED is returned and yet these compaction events are still
    updated, which is wrong in the sense that COMPACT[STALL|FAIL] is counted
    without compaction even being tried.

    Correct this by skipping the counting of these events when COMPACT_SKIPPED
    is returned for compaction. This also indirectly avoids the unnecessary
    attempt at get_page_from_freelist() when compaction is not even tried.

    There is a corner case where compaction is skipped but the COMPACTSTALL
    event is still counted: an IRQ came in and freed the page, and that page is
    captured in capture_control.
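
    A sketch of the kind of early return described (the exact placement and
    condition follow the upstream patch and may differ):

        if (compact_result == COMPACT_SKIPPED)
                return NULL;                    /* deferred/skipped: nothing was tried */

        count_vm_event(COMPACTSTALL);           /* only account a stall after a real attempt */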

    Link: https://lkml.kernel.org/r/1613151184-21213-1-git-send-email-charante@codeaurora.org
    Signed-off-by: Charan Teja Reddy
    Acked-by: Vlastimil Babka
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Charan Teja Reddy
     
    sysctl_compact_memory is mostly unused in mm/compaction.c. It just acts as
    a placeholder for sysctl to store in .data.

    But the .data itself is not needed here.

    So we can get rid of this variable completely and make .data NULL. This
    will also eliminate the extern declaration from the header file. No
    functionality is broken or changed this way.

    Link: https://lkml.kernel.org/r/1614852224-14671-1-git-send-email-pintu@codeaurora.org
    Signed-off-by: Pintu Kumar
    Signed-off-by: Pintu Agarwal
    Reviewed-by: Vlastimil Babka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pintu Kumar
     
  • alloc_contig_range() will fail if it finds a HugeTLB page within the
    range, without a chance to handle them. Since HugeTLB pages can be
    migrated as any LRU or Movable page, it does not make sense to bail out
    without trying. Enable the interface to recognize in-use HugeTLB pages so
    we can migrate them, and have a much better chance of the call succeeding.

    Link: https://lkml.kernel.org/r/20210419075413.1064-7-osalvador@suse.de
    Signed-off-by: Oscar Salvador
    Reviewed-by: Mike Kravetz
    Acked-by: Michal Hocko
    Acked-by: David Hildenbrand
    Cc: Muchun Song
    Cc: Vlastimil Babka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oscar Salvador
     
  • alloc_contig_range will fail if it ever sees a HugeTLB page within the
    range we are trying to allocate, even when that page is free and can be
    easily reallocated.

    This has proved to be problematic for some users of alloc_contig_range,
    e.g. CMA and virtio-mem, which would fail the call even when those pages
    lie in ZONE_MOVABLE and are free.

    We can do better by trying to replace such a page.

    Free hugepages are tricky to handle: so that no userspace application
    notices disruption, we need to replace the current free hugepage with a new
    one.

    In order to do that, a new function called alloc_and_dissolve_huge_page is
    introduced. This function will first try to get a new fresh hugepage, and
    if it succeeds, it will replace the old one in the free hugepage pool.

    The free page replacement is done under hugetlb_lock, so no external users
    of hugetlb will notice the change. To allocate the new huge page, we use
    alloc_buddy_huge_page(), so we do not have to deal with any counters, and
    prep_new_huge_page() is not called. This is valuable because in case we
    need to free the new page, we only need to call __free_pages().

    Once we know that the page to be replaced is a genuine 0-refcounted huge
    page, we remove the old page from the freelist by remove_hugetlb_page().
    Then, we can call __prep_new_huge_page() and
    __prep_account_new_huge_page() for the new huge page to properly
    initialize it and increment the hstate->nr_huge_pages counter (previously
    decremented by remove_hugetlb_page()). Once done, the page is enqueued by
    enqueue_huge_page() and it is ready to be used.

    There is one tricky case when page's refcount is 0 because it is in the
    process of being released. A missing PageHugeFreed bit will tell us that
    freeing is in flight so we retry after dropping the hugetlb_lock. The
    race window should be small and the next retry should make forward
    progress.

    E.g:

    CPU0                                CPU1
    free_huge_page()                    isolate_or_dissolve_huge_page
                                          PageHuge() == T
                                          alloc_and_dissolve_huge_page
                                            alloc_buddy_huge_page()
                                            spin_lock_irq(hugetlb_lock)
                                            // PageHuge() && !PageHugeFreed &&
                                            // !PageCount()
                                            spin_unlock_irq(hugetlb_lock)
      spin_lock_irq(hugetlb_lock)
      1) update_and_free_page
           PageHuge() == F
           __free_pages()
      2) enqueue_huge_page
           SetPageHugeFreed()
      spin_unlock_irq(&hugetlb_lock)
                                          spin_lock_irq(hugetlb_lock)
                                          1) PageHuge() == F (freed by case#1 from CPU0)
                                          2) PageHuge() == T
                                             PageHugeFreed() == T
                                               - proceed with replacing the page

    In the case above we retry as the race window is quite small and we have
    a high chance of succeeding next time.

    With regard to the allocation, we restrict it to the node the page belongs
    to with __GFP_THISNODE, meaning we do not fall back on other nodes' zones.

    Note that gigantic hugetlb pages are fenced off since there is a cyclic
    dependency between them and alloc_contig_range.

    Link: https://lkml.kernel.org/r/20210419075413.1064-6-osalvador@suse.de
    Signed-off-by: Oscar Salvador
    Acked-by: Michal Hocko
    Acked-by: David Hildenbrand
    Reviewed-by: Mike Kravetz
    Cc: Muchun Song
    Cc: Vlastimil Babka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oscar Salvador
     
  • Currently, isolate_migratepages_{range,block} and their callers use a pfn
    == 0 vs pfn != 0 scheme to let the caller know whether there was any error
    during isolation.

    This does not work as soon as we need to start reporting different error
    codes and make sure we pass them down the chain, so they are properly
    interpreted by functions like e.g. alloc_contig_range().

    Let us rework isolate_migratepages_{range,block} so we can report error
    codes. Since isolate_migratepages_block will stop returning the next pfn
    to be scanned, we reuse the cc->migrate_pfn field to keep track of that.
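
    The new calling convention is roughly (a sketch of a caller; variable
    names are illustrative):

        ret = isolate_migratepages_block(cc, low_pfn, block_end_pfn, isolate_mode);
        if (ret)
                return ret;             /* a real errno now travels up the chain */

        low_pfn = cc->migrate_pfn;      /* "next pfn to scan" lives here instead */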

    Link: https://lkml.kernel.org/r/20210419075413.1064-3-osalvador@suse.de
    Signed-off-by: Oscar Salvador
    Acked-by: Vlastimil Babka
    Acked-by: Mike Kravetz
    Reviewed-by: David Hildenbrand
    Cc: Michal Hocko
    Cc: Muchun Song
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oscar Salvador
     

25 Feb, 2021

7 commits

  • Compaction always operates on pages from a single given zone when
    isolating both pages to migrate and freepages. Pageblock boundaries are
    intersected with zone boundaries to be safe in case zone starts or ends in
    the middle of pageblock. The use of pageblock_pfn_to_page() protects
    against non-contiguous pageblocks.

    The functions fast_isolate_freepages() and fast_isolate_around() don't
    currently protect the fast freepage isolation thoroughly enough against
    these corner cases, and can result in freepage isolation operating outside
    of zone boundaries:

    - in fast_isolate_freepages() if we get a pfn from the first pageblock
    of a zone that starts in the middle of that pageblock, 'highest' can
    be a pfn outside of the zone.

    If we fail to isolate anything in this function, we may then call
    fast_isolate_around() on a pfn outside of the zone and there
    effectively do a set_pageblock_skip(page_to_pfn(highest)) which may
    currently hit a VM_BUG_ON() in some configurations

    - fast_isolate_around() checks only the zone end boundary and not
    beginning, nor that the pageblock is contiguous (with
    pageblock_pfn_to_page()) so it's possible that we end up calling
    isolate_freepages_block() on a range of pfn's from two different
    zones and end up e.g. isolating freepages under the wrong zone's
    lock.

    This patch should fix the above issues.
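
    The kind of clamping described amounts to something like this sketch (not
    the literal diff; helper names like pageblock_end_pfn()/zone_end_pfn() are
    used illustratively): keep the scanned range inside the zone on both ends
    and verify the pageblock is contiguous before isolating from it.

        start_pfn = max(pageblock_start_pfn(pfn), cc->zone->zone_start_pfn);
        end_pfn = min(pageblock_end_pfn(pfn), zone_end_pfn(cc->zone));

        if (!pageblock_pfn_to_page(start_pfn, end_pfn, cc->zone))
                return;                 /* non-contiguous pageblock, don't touch it */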

    Link: https://lkml.kernel.org/r/20210217173300.6394-1-vbabka@suse.cz
    Fixes: 5a811889de10 ("mm, compaction: use free lists to quickly locate a migration target")
    Signed-off-by: Vlastimil Babka
    Acked-by: David Rientjes
    Acked-by: Mel Gorman
    Cc: Andrea Arcangeli
    Cc: David Hildenbrand
    Cc: Michal Hocko
    Cc: Mike Rapoport
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vlastimil Babka
     
    In fast_find_migrateblock(), we iterate over the freelist to find a
    proper pageblock. But there are some misbehaviors.

    First, if the page we found is equal to cc->migrate_pfn, it is considered
    that we didn't find a suitable pageblock. Secondly, if the loop was
    terminated because order is less than PAGE_ALLOC_COSTLY_ORDER, it could be
    considered that we found a suitable one. Thirdly, if the skip bit is set
    on the page block and we goto continue, it doesn't check nr_scanned.
    Fourthly, if the page block's skip bit is set, it checks whether the page
    block is the last of the list, which is unnecessary.

    Link: https://lkml.kernel.org/r/20210128130411.6125-1-vvghjk1234@gmail.com
    Fixes: 70b44595eafe9 ("mm, compaction: use free lists to quickly locate a migration source")
    Signed-off-by: Wonhyuk Yang
    Acked-by: Vlastimil Babka
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wonhyuk Yang
     
    should_proactive_compact_node() returns true when the sum of the weighted
    fragmentation scores of all the zones in the node is greater than the
    wmark_high of compaction, which then triggers the proactive compaction
    that operates on the individual zones of the node. But proactive
    compaction runs on a zone only when its weighted fragmentation score
    is greater than wmark_low (= wmark_high - 10).

    This means that the sum of the weighted fragmentation scores of all the
    zones can exceed wmark_high while each individual weighted zone score is
    still less than wmark_low, which makes for an unnecessary trigger of
    proactive compaction only for it to return having done nothing.

    The issue with proactive compaction returning without even trying is its
    deferral. It is simply deferred for 1 << COMPACT_MAX_DEFER_SHIFT if the
    scores across proactive compaction runs are the same, on the assumption
    that compaction didn't make any progress, when in reality it didn't even
    try. With the delay between successive retries for proactive compaction
    being 500 msec, this can result in a deferral of ~30 sec without even
    trying proactive compaction.

    The test scenario is: compaction_proactiveness=50, thus wmark_low = 50
    and wmark_high = 60. The system has 2 zones (Normal and Movable) with
    sizes 5GB and 6GB respectively. After opening some apps on Android, the
    weighted fragmentation scores of these zones are 47 and 49 respectively.
    Since the sum of these fragmentation scores is above wmark_high, proactive
    compaction is triggered; but since the individual zones' weighted
    fragmentation scores are below wmark_low, it returns without trying the
    proactive compaction. As a result the weighted fragmentation scores of the
    zones are still 47 and 49, which makes the existing logic defer the
    compaction, thinking that no progress was made across the compaction.

    Fix this by checking just the zone fragmentation score, not the weighted
    one, in __compact_finished(), and using the zones' weighted fragmentation
    scores in fragmentation_score_node(). In the test case above, if the
    weighted average is above wmark_high, then the individual (unweighted)
    score of at least one zone has to be above wmark_high. Thus it avoids the
    unnecessary trigger and deferrals of the proactive compaction.

    Link: https://lkml.kernel.org/r/1610989938-31374-1-git-send-email-charante@codeaurora.org
    Signed-off-by: Charan Teja Reddy
    Suggested-by: Vlastimil Babka
    Acked-by: Vlastimil Babka
    Reviewed-by: Khalid Aziz
    Acked-by: David Rientjes
    Cc: Michal Hocko
    Cc: Nitin Gupta
    Cc: Vinayak Menon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Charan Teja Reddy
     
  • The VM_BUG_ON_PAGE(!PageLocked(page), page) is also done in PageMovable.
    Remove the explicit one here.

    Link: https://lkml.kernel.org/r/20210109081420.46030-1-linmiaohe@huawei.com
    Signed-off-by: Miaohe Lin
    Reviewed-by: David Hildenbrand
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Miaohe Lin
     
  • isolate_migratepages_block() used rcu_read_lock() with the intention of
    safeguarding against the mem_cgroup being destroyed concurrently; but
    its TestClearPageLRU already protects against that. Delete the
    unnecessary rcu_read_lock() and _unlock().

    Hugh Dickins helped on commit log polishing, Thanks!

    Link: https://lkml.kernel.org/r/1608614453-10739-3-git-send-email-alex.shi@linux.alibaba.com
    Signed-off-by: Alex Shi
    Acked-by: Hugh Dickins
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alex Shi
     
  • The parameter is redundant in the sense that it can be potentially
    extracted from the "struct page" parameter by page_lru(). We need to
    make sure that existing PageActive() or PageUnevictable() remains
    until the function returns. A few places don't conform, and simple
    reordering fixes them.

    This patch may have left page_off_lru() seemingly odd, and we'll take
    care of it in the next patch.
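
    A sketch of the call-site change (using del_page_from_lru_list(), the
    helper relevant to mm/compaction.c; other affected helpers follow the
    same pattern):

        /* before: the caller computed the lru list itself */
        del_page_from_lru_list(page, lruvec, page_lru(page));

        /* after: the helper derives it from the page's flags */
        del_page_from_lru_list(page, lruvec);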

    Link: https://lore.kernel.org/linux-mm/20201207220949.830352-6-yuzhao@google.com/
    Link: https://lkml.kernel.org/r/20210122220600.906146-6-yuzhao@google.com
    Signed-off-by: Yu Zhao
    Cc: Alex Shi
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Matthew Wilcox
    Cc: Michal Hocko
    Cc: Roman Gushchin
    Cc: Vladimir Davydov
    Cc: Vlastimil Babka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yu Zhao
     
    The function just returns 2 results, so using a 'switch' to deal with its
    result is unnecessary. Also simplify it to a bool function as Vlastimil
    suggested.

    Also remove a 'goto' by reusing list_move(), and take Matthew Wilcox's
    suggestion to update the comments in the function.

    Link: https://lkml.kernel.org/r/728874d7-2d93-4049-68c1-dcc3b2d52ccd@linux.alibaba.com
    Signed-off-by: Alex Shi
    Reviewed-by: Andrew Morton
    Acked-by: Vlastimil Babka
    Cc: Matthew Wilcox
    Cc: Hugh Dickins
    Cc: Yu Zhao
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alex Shi
     

06 Feb, 2021

1 commit

    In fast_isolate_freepages(), high_pfn will be used if a preferred one
    (i.e. PFN >= low_pfn) is not found.

    But high_pfn is not reset before searching a free area, so when it is
    used as a freepage, it may come from another free area searched before.
    As a result move_freelist_head(freelist, freepage) will have unexpected
    behavior (e.g. corrupt the MOVABLE freelist).

    Unable to handle kernel paging request at virtual address dead000000000200
    Mem abort info:
    ESR = 0x96000044
    Exception class = DABT (current EL), IL = 32 bits
    SET = 0, FnV = 0
    EA = 0, S1PTW = 0
    Data abort info:
    ISV = 0, ISS = 0x00000044
    CM = 0, WnR = 1
    [dead000000000200] address between user and kernel address ranges

    -000|list_cut_before(inline)
    -000|move_freelist_head(inline)
    -000|fast_isolate_freepages(inline)
    -000|isolate_freepages(inline)
    -000|compaction_alloc(?, ?)
    -001|unmap_and_move(inline)
    -001|migrate_pages([NSD:0xFFFFFF80088CBBD0] from = 0xFFFFFF80088CBD88, [NSD:0xFFFFFF80088CBBC8] get_new_p
    -002|__read_once_size(inline)
    -002|static_key_count(inline)
    -002|static_key_false(inline)
    -002|trace_mm_compaction_migratepages(inline)
    -002|compact_zone(?, [NSD:0xFFFFFF80088CBCB0] capc = 0x0)
    -003|kcompactd_do_work(inline)
    -003|kcompactd([X19] p = 0xFFFFFF93227FBC40)
    -004|kthread([X20] _create = 0xFFFFFFE1AFB26380)
    -005|ret_from_fork(asm)

    The issue was reported on a smartphone product with 6GB RAM and 3GB
    zram as the swap device.

    This patch fixes the issue by resetting high_pfn before searching each free
    area, which ensures that freepage and freelist match when calling
    move_freelist_head() in fast_isolate_freepages().
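
    The shape of the fix is roughly as in the sketch below (loop details are
    illustrative, not the literal diff): make the candidate tracking local to
    each free-area scan so a stale pfn from a previously searched freelist can
    never be handed to move_freelist_head().

        for (order = cc->search_order; order >= 0; order--) {
                unsigned long high_pfn = 0;     /* reset for every free area searched */

                /* ... scan this order's freelist, possibly recording high_pfn ... */
        }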

    Link: http://lkml.kernel.org/r/20190118175136.31341-12-mgorman@techsingularity.net
    Link: https://lkml.kernel.org/r/20210112094720.1238444-1-wu-yan@tcl.com
    Fixes: 5a811889de10f1eb ("mm, compaction: use free lists to quickly locate a migration target")
    Signed-off-by: Rokudo Yan
    Acked-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rokudo Yan
     

16 Dec, 2020

5 commits

    This patch moves the per-node lru_lock into the lruvec, thus bringing a
    lru_lock for each memcg per node. So on a large machine, each memcg doesn't
    have to suffer from per-node pgdat->lru_lock competition. They can go fast
    with their own lru_lock.

    After moving the memcg charge before lru inserting, page isolation can
    serialize a page's memcg, so the per-memcg lruvec lock is stable and can
    replace the per-node lru lock.

    In isolate_migratepages_block(), compact_unlock_should_abort and
    lock_page_lruvec_irqsave are open coded to work with compact_control.
    Also add a debug function in locking which may give some clues if
    something gets out of hand.

    Daniel Jordan's testing shows a 62% improvement on the modified readtwice
    case on his 2P * 10 core * 2 HT Broadwell box.
    https://lore.kernel.org/lkml/20200915165807.kpp7uhiw7l3loofu@ca-dmjordan1.us.oracle.com/

    Hugh Dickins helped on the patch polish, thanks!

    [alex.shi@linux.alibaba.com: fix comment typo]
    Link: https://lkml.kernel.org/r/5b085715-292a-4b43-50b3-d73dc90d1de5@linux.alibaba.com
    [alex.shi@linux.alibaba.com: use page_memcg()]
    Link: https://lkml.kernel.org/r/5a4c2b72-7ee8-2478-fc0e-85eb83aafec4@linux.alibaba.com

    Link: https://lkml.kernel.org/r/1604566549-62481-18-git-send-email-alex.shi@linux.alibaba.com
    Signed-off-by: Alex Shi
    Acked-by: Hugh Dickins
    Acked-by: Johannes Weiner
    Cc: Rong Chen
    Cc: Michal Hocko
    Cc: Vladimir Davydov
    Cc: Yang Shi
    Cc: Matthew Wilcox
    Cc: Konstantin Khlebnikov
    Cc: Daniel Jordan
    Cc: Alexander Duyck
    Cc: Andrea Arcangeli
    Cc: Andrey Ryabinin
    Cc: "Huang, Ying"
    Cc: Jann Horn
    Cc: Joonsoo Kim
    Cc: Kirill A. Shutemov
    Cc: Kirill A. Shutemov
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Mika Penttilä
    Cc: Minchan Kim
    Cc: Shakeel Butt
    Cc: Tejun Heo
    Cc: Thomas Gleixner
    Cc: Vlastimil Babka
    Cc: Wei Yang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alex Shi
     
    Currently, compaction gets the lru_lock and then does page isolation,
    which works fine with pgdat->lru_lock, since any page isolation would
    compete for the lru_lock. If we want to change to the memcg lru_lock, we
    have to isolate the page before getting the lru_lock, so that isolation
    blocks the page's memcg change, which relies on page isolation too. Then
    we can safely use the per-memcg lru_lock later.

    The new page isolation uses the previously introduced TestClearPageLRU() +
    pgdat lru locking, which will be changed to the memcg lru lock later.

    Hugh Dickins fixed the following bugs in this patch's early
    version:

    Fix lots of crashes under compaction load: isolate_migratepages_block()
    must clean up appropriately when rejecting a page, setting PageLRU again
    if it had been cleared; and a put_page() after get_page_unless_zero()
    cannot safely be done while holding locked_lruvec - it may turn out to be
    the final put_page(), which will take an lruvec lock when PageLRU.

    And move __isolate_lru_page_prepare back after get_page_unless_zero to
    make trylock_page() safe: trylock_page() is not safe to use at this time:
    its setting PG_locked can race with the page being freed or allocated
    ("Bad page"), and can also erase flags being set by one of those "sole
    owners" of a freshly allocated page who use non-atomic __SetPageFlag().

    Link: https://lkml.kernel.org/r/1604566549-62481-16-git-send-email-alex.shi@linux.alibaba.com
    Suggested-by: Johannes Weiner
    Signed-off-by: Alex Shi
    Acked-by: Hugh Dickins
    Acked-by: Johannes Weiner
    Acked-by: Vlastimil Babka
    Cc: Matthew Wilcox
    Cc: Alexander Duyck
    Cc: Andrea Arcangeli
    Cc: Andrey Ryabinin
    Cc: "Chen, Rong A"
    Cc: Daniel Jordan
    Cc: "Huang, Ying"
    Cc: Jann Horn
    Cc: Joonsoo Kim
    Cc: Kirill A. Shutemov
    Cc: Kirill A. Shutemov
    Cc: Konstantin Khlebnikov
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Michal Hocko
    Cc: Mika Penttilä
    Cc: Minchan Kim
    Cc: Shakeel Butt
    Cc: Tejun Heo
    Cc: Thomas Gleixner
    Cc: Vladimir Davydov
    Cc: Wei Yang
    Cc: Yang Shi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alex Shi
     
    defer_compaction(), compaction_deferred() and compaction_restarting() in
    mm/compaction.c won't be used in other files, so make them static and
    remove the declarations from the header file.

    Take the chance to fix a typo.

    Link: https://lkml.kernel.org/r/20201123170801.GA9625@rlk
    Signed-off-by: Hui Su
    Acked-by: Vlastimil Babka
    Cc: Nitin Gupta
    Cc: Baoquan He
    Cc: Mateusz Nosek
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hui Su
     
    Since commit 837d026d560c ("mm/compaction: more trace to understand
    when/why compaction start/finish"), the comment is in the wrong place.

    So move compaction_suitable()'s comment to the right place.

    Link: https://lkml.kernel.org/r/20201116144121.GA385717@rlk
    Signed-off-by: Hui Su
    Reviewed-by: Andrew Morton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hui Su
     
  • There are two 'start_pfn' declared in compact_zone() which have different
    meanings. Rename the second one to 'iteration_start_pfn' to prevent
    confusion.

    Also, remove a useless semicolon.

    Link: https://lkml.kernel.org/r/20201019115044.1571-1-yanfei.xu@windriver.com
    Signed-off-by: Yanfei Xu
    Acked-by: David Hildenbrand
    Acked-by: Vlastimil Babka
    Acked-by: Pankaj Gupta
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yanfei Xu
     

15 Nov, 2020

2 commits

  • In isolate_migratepages_block, if we have too many isolated pages and
    nr_migratepages is not zero, we should try to migrate what we have
    without wasting time on isolating.

    In theory it's possible that multiple parallel compactions will cause
    too_many_isolated() to become true even if each has isolated less than
    COMPACT_CLUSTER_MAX, and loop forever in the while loop. Bailing
    immediately prevents that.
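
    A sketch of the bail-out described, at the top of
    isolate_migratepages_block() (details such as the abort value and the
    throttling follow the kernel version and are elided here):

        while (unlikely(too_many_isolated(pgdat))) {
                /* stop isolating: migrate what we already have first */
                if (cc->nr_migratepages)
                        return 0;       /* abort; later kernels return an errno here */

                /* ... otherwise throttle and retry as before ... */
        }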

    [vbabka@suse.cz: changelog addition]

    Fixes: 1da2f328fa64 ("mm,thp,compaction,cma: allow THP migration for CMA allocations")
    Suggested-by: Vlastimil Babka
    Signed-off-by: Zi Yan
    Signed-off-by: Andrew Morton
    Cc:
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Rik van Riel
    Cc: Yang Shi
    Link: https://lkml.kernel.org/r/20201030183809.3616803-2-zi.yan@sent.com
    Signed-off-by: Linus Torvalds

    Zi Yan
     
  • In isolate_migratepages_block, when cc->alloc_contig is true, we are
    able to isolate compound pages. But nr_migratepages and nr_isolated did
    not count compound pages correctly, causing us to isolate more pages
    than we thought.

    So count compound pages as the number of base pages they contain.
    Otherwise, we might be trapped in the too_many_isolated while loop, since
    the actual number of isolated pages can go up to
    COMPACT_CLUSTER_MAX*512=16384 (where COMPACT_CLUSTER_MAX is 32), since we
    stop isolation after cc->nr_migratepages reaches COMPACT_CLUSTER_MAX.

    In addition, after we fix the issue above, cc->nr_migratepages could
    never be equal to COMPACT_CLUSTER_MAX if compound pages are isolated,
    thus page isolation could not stop as we intended. Change the isolation
    stop condition to '>='.
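
    A sketch of the counting change described above (not the literal diff;
    compound_nr() is the kernel helper giving the number of base pages in a
    compound page): account all base pages and let the stop condition tolerate
    overshooting.

        cc->nr_migratepages += compound_nr(page);        /* was: cc->nr_migratepages++ */
        nr_isolated += compound_nr(page);

        if (cc->nr_migratepages >= COMPACT_CLUSTER_MAX)  /* was: == */
                break;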

    The issue can be triggered as follows:

    In a system with 16GB memory and an 8GB CMA region reserved by
    hugetlb_cma, if we first allocate 10GB THPs and mlock them (so some THPs
    are allocated in the CMA region and mlocked), reserving 6 1GB hugetlb
    pages via /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages will
    get stuck (looping in too_many_isolated function) until we kill either
    task. With the patch applied, oom will kill the application with 10GB
    THPs and let hugetlb page reservation finish.

    [ziy@nvidia.com: v3]

    Link: https://lkml.kernel.org/r/20201030183809.3616803-1-zi.yan@sent.com
    Fixes: 1da2f328fa64 ("mm,thp,compaction,cma: allow THP migration for CMA allocations")
    Signed-off-by: Zi Yan
    Signed-off-by: Andrew Morton
    Reviewed-by: Yang Shi
    Acked-by: Vlastimil Babka
    Cc: Rik van Riel
    Cc: Michal Hocko
    Cc: Mel Gorman
    Cc:
    Link: https://lkml.kernel.org/r/20201029200435.3386066-1-zi.yan@sent.com
    Signed-off-by: Linus Torvalds

    Zi Yan
     

17 Oct, 2020

1 commit

  • The current page_order() can only be called on pages in the buddy
    allocator. For compound pages, you have to use compound_order(). This is
    confusing and led to a bug, so rename page_order() to buddy_order().

    Signed-off-by: Matthew Wilcox (Oracle)
    Signed-off-by: Andrew Morton
    Link: https://lkml.kernel.org/r/20201001152259.14932-2-willy@infradead.org
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
     

14 Oct, 2020

1 commit

  • The same code can work both for 'zone->compact_considered > defer_limit'
    and 'zone->compact_considered >= defer_limit'. In the latter case there is
    one branch less, which is more efficient performance-wise.

    Signed-off-by: Mateusz Nosek
    Signed-off-by: Andrew Morton
    Reviewed-by: Andrew Morton
    Cc: Joonsoo Kim
    Cc: Vlastimil Babka
    Cc: Mel Gorman
    Cc: David Rientjes
    Link: https://lkml.kernel.org/r/20200913190448.28649-1-mateusznosek0@gmail.com
    Signed-off-by: Linus Torvalds

    Mateusz Nosek
     

15 Aug, 2020

1 commit

  • The thp prefix is more frequently used than hpage and we should be
    consistent between the various functions.

    [akpm@linux-foundation.org: fix mm/migrate.c]

    Signed-off-by: Matthew Wilcox (Oracle)
    Signed-off-by: Andrew Morton
    Reviewed-by: William Kucharski
    Reviewed-by: Zi Yan
    Cc: Mike Kravetz
    Cc: David Hildenbrand
    Cc: "Kirill A. Shutemov"
    Link: http://lkml.kernel.org/r/20200629151959.15779-6-willy@infradead.org
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
     

13 Aug, 2020

1 commit