24 Jul, 2014

1 commit

  • Commit 4a705fef9862 ("hugetlb: fix copy_hugetlb_page_range() to handle
    migration/hwpoisoned entry") changed the order of
    huge_ptep_set_wrprotect() and huge_ptep_get(), which leads to breakage
    in some workloads like hugepage-backed heap allocation via libhugetlbfs.
    This patch fixes it.
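
    For illustration, a minimal sketch of the corrected ordering in the
    copy-on-write path (this is not the literal patch; the helpers are the
    kernel's hugetlb primitives, and locking plus the migration/hwpoison
    branches are omitted):

    /*
     * Illustrative sketch only: when fork() copies a hugetlb mapping with
     * COW semantics, the source PTE must be write-protected *before* the
     * entry is read and installed in the child, so both parent and child
     * end up read-only and a later write triggers COW.
     */
    if (cow)
            huge_ptep_set_wrprotect(src, addr, src_pte); /* protect the parent first */
    entry = huge_ptep_get(src_pte);                      /* then read the now read-only entry */
    ptepage = pte_page(entry);
    get_page(ptepage);
    page_dup_rmap(ptepage);
    set_huge_pte_at(dst, addr, dst_pte, entry);          /* the child gets the same read-only entry */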

    The test program for the problem is shown below:

    $ cat heap.c
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define HPS 0x200000

    int main() {
            int i;
            char *p = malloc(HPS);

            memset(p, '1', HPS);
            for (i = 0; i < 5; i++) {
                    if (!fork()) {
                            memset(p, '2', HPS);
                            p = malloc(HPS);
                            memset(p, '3', HPS);
                            free(p);
                            return 0;
                    }
            }
            sleep(1);
            free(p);
            return 0;
    }

    $ export HUGETLB_MORECORE=yes ; export HUGETLB_NO_PREFAULT= ; hugectl --heap ./heap

    Fixes 4a705fef9862 ("hugetlb: fix copy_hugetlb_page_range() to handle
    migration/hwpoisoned entry"), so is applicable to -stable kernels which
    include it.

    Signed-off-by: Naoya Horiguchi
    Reported-by: Guillaume Morin
    Suggested-by: Guillaume Morin
    Acked-by: Hugh Dickins
    Cc: [2.6.37+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     

24 Jun, 2014

1 commit

  • There's a race between fork() and hugepage migration; as a result we try
    to "dereference" a swap entry as a normal pte, causing a kernel panic.
    The cause of the problem is that copy_hugetlb_page_range() can't handle
    the "swap entry" family (migration entries and hwpoisoned entries), so
    let's fix it.
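
    A rough sketch of the kind of handling this adds (illustrative only, using
    the kernel's swap-entry helpers; the none/normal-entry branches and locking
    are omitted): a migration or hwpoisoned entry must be copied as a swap-style
    entry instead of being dereferenced as a page pointer:

    entry = huge_ptep_get(src_pte);
    if (unlikely(is_hugetlb_entry_migration(entry) ||
                 is_hugetlb_entry_hwpoisoned(entry))) {
            swp_entry_t swp_entry = pte_to_swp_entry(entry);

            if (is_write_migration_entry(swp_entry) && cow) {
                    /*
                     * COW mappings require pages in both parent and child
                     * to be set to read-only.
                     */
                    make_migration_entry_read(&swp_entry);
                    entry = swp_entry_to_pte(swp_entry);
                    set_huge_pte_at(src, addr, src_pte, entry);
            }
            set_huge_pte_at(dst, addr, dst_pte, entry);
    } else {
            /* normal present entry: take a reference and copy as before */
    }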

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Naoya Horiguchi
    Acked-by: Hugh Dickins
    Cc: Christoph Lameter
    Cc: [2.6.37+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     

05 Jun, 2014

7 commits

  • We already have a function named hugepages_supported(), and the similar
    name hugepage_migration_support() is a bit uncomfortable, so let's rename
    it to hugepage_migration_supported().

    Signed-off-by: Naoya Horiguchi
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     
  • alloc_huge_page() now mixes the normal code path with error handling
    logic. This patch moves out the error handling logic, to make the normal
    code path cleaner and reduce code duplication.

    Signed-off-by: Jianyu Zhan
    Acked-by: Davidlohr Bueso
    Reviewed-by: Michal Hocko
    Reviewed-by: Aneesh Kumar K.V
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jianyu Zhan
     
  • HugeTLB is limited to allocating hugepages whose size is less than
    MAX_ORDER order. This is so because HugeTLB allocates hugepages via the
    buddy allocator. Gigantic pages (that is, pages whose size is greater
    than MAX_ORDER order) have to be allocated at boottime.

    However, boottime allocation has at least two serious problems. First,
    it doesn't support NUMA and second, gigantic pages allocated at boottime
    can't be freed.

    This commit solves both issues by adding support for allocating gigantic
    pages during runtime. It works just like regular sized hugepages,
    meaning that the interface in sysfs is the same, it supports NUMA, and
    gigantic pages can be freed.

    For example, on x86_64 gigantic pages are 1GB big. To allocate two 1G
    gigantic pages on node 1, one can do:

    # echo 2 > \
    /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

    And to free them all:

    # echo 0 > \
    /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

    The one problem with gigantic page allocation at runtime is that it
    can't be serviced by the buddy allocator. To overcome that problem,
    this commit scans all zones from a node looking for a large enough
    contiguous region. When one is found, it's allocated by using CMA, that
    is, we call alloc_contig_range() to do the actual allocation. For
    example, on x86_64 we scan all zones looking for a 1GB contiguous
    region. When one is found, it's allocated by alloc_contig_range().

    One expected issue with that approach is that such gigantic contiguous
    regions tend to vanish as runtime goes by. The best way to avoid this
    for now is to make gigantic page allocations very early during system
    boot, say from an init script. Other possible optimizations include using
    compaction, which is supported by CMA but is not explicitly used by this
    commit.
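
    Conceptually, the allocation path described above looks roughly like the
    sketch below. This is illustrative, not the exact code added by the commit;
    for_each_zone_of_node() and pfn_range_valid() are stand-in names for the
    node-zone walk and the suitability check:

    static struct page *alloc_gigantic_page(int nid, unsigned int order)
    {
            unsigned long nr_pages = 1UL << order;
            unsigned long pfn;
            struct zone *z;

            /* Walk every zone of the requested node. */
            for_each_zone_of_node(z, nid) {
                    /* Try each suitably aligned pfn range in the zone. */
                    for (pfn = ALIGN(z->zone_start_pfn, nr_pages);
                         pfn + nr_pages <= zone_end_pfn(z);
                         pfn += nr_pages) {
                            if (!pfn_range_valid(z, pfn, nr_pages))
                                    continue;
                            /* Let the CMA machinery do the heavy lifting. */
                            if (!alloc_contig_range(pfn, pfn + nr_pages,
                                                    MIGRATE_MOVABLE))
                                    return pfn_to_page(pfn);
                    }
            }

            return NULL;
    }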

    It's also important to note the following:

    1. Gigantic pages allocated at boottime by the hugepages= command-line
    option can be freed at runtime just fine

    2. This commit adds support for gigantic pages only to x86_64. The
    reason is that I don't have access to nor experience with other archs.
    The code is arch independent though, so it should be simple to add
    support to different archs

    3. I didn't add support for hugepage overcommit, that is allocating
    a gigantic page on demand when
    /proc/sys/vm/nr_overcommit_hugepages > 0. The reason is that I don't
    think it's reasonable to do the hard and long work required for
    allocating a gigantic page at fault time. But it should be simple
    to add this if wanted

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Luiz Capitulino
    Reviewed-by: Davidlohr Bueso
    Acked-by: Kirill A. Shutemov
    Reviewed-by: Zhang Yanfei
    Reviewed-by: Yasuaki Ishimatsu
    Cc: Andrea Arcangeli
    Cc: David Rientjes
    Cc: Marcelo Tosatti
    Cc: Naoya Horiguchi
    Cc: Rik van Riel
    Cc: Yinghai Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Luiz Capitulino
     
  • Next commit will add new code which will want to call
    for_each_node_mask_to_alloc() macro. Move it, its buddy
    for_each_node_mask_to_free() and their dependencies up in the file so the
    new code can use them. This is just code movement, no logic change.

    Signed-off-by: Luiz Capitulino
    Reviewed-by: Andrea Arcangeli
    Reviewed-by: Naoya Horiguchi
    Reviewed-by: Yasuaki Ishimatsu
    Reviewed-by: Davidlohr Bueso
    Acked-by: Kirill A. Shutemov
    Reviewed-by: Zhang Yanfei
    Cc: David Rientjes
    Cc: Marcelo Tosatti
    Cc: Rik van Riel
    Cc: Yinghai Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Luiz Capitulino
     
  • Hugepages never get the PG_reserved bit set, so don't clear it.

    However, note that if the bit gets mistakenly set free_pages_check() will
    catch it.

    Signed-off-by: Luiz Capitulino
    Reviewed-by: Davidlohr Bueso
    Acked-by: Kirill A. Shutemov
    Reviewed-by: Zhang Yanfei
    Cc: Andrea Arcangeli
    Cc: David Rientjes
    Cc: Marcelo Tosatti
    Cc: Naoya Horiguchi
    Cc: Rik van Riel
    Cc: Yasuaki Ishimatsu
    Cc: Yinghai Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Luiz Capitulino
     
  • Signed-off-by: Luiz Capitulino
    Reviewed-by: Andrea Arcangeli
    Reviewed-by: Naoya Horiguchi
    Reviewed-by: Yasuaki Ishimatsu
    Reviewed-by: Davidlohr Bueso
    Acked-by: Kirill A. Shutemov
    Reviewed-by: Zhang Yanfei
    Cc: David Rientjes
    Cc: Marcelo Tosatti
    Cc: Rik van Riel
    Cc: Yinghai Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Luiz Capitulino
     
  • The HugeTLB subsystem uses the buddy allocator to allocate hugepages
    during runtime. This means that hugepages allocation during runtime is
    limited to MAX_ORDER order. For archs supporting gigantic pages (that
    is, page sizes greater than MAX_ORDER), this in turn means that those
    pages can't be allocated at runtime.

    HugeTLB supports gigantic page allocation during boottime, via the boot
    allocator. To this end the kernel provides the command-line options
    hugepagesz= and hugepages=, which can be used to instruct the kernel to
    allocate N gigantic pages during boot.

    For example, x86_64 supports 2M and 1G hugepages, but only 2M hugepages
    can be allocated and freed at runtime. If one wants to allocate 1G
    gigantic pages, this has to be done at boot via the hugepagesz= and
    hugepages= command-line options.

    Now, gigantic page allocation at boottime has two serious problems:

    1. Boottime allocation is not NUMA aware. On a NUMA machine the kernel
    evenly distributes boottime allocated hugepages among nodes.

    For example, suppose you have a four-node NUMA machine and want
    to allocate four 1G gigantic pages at boottime. The kernel will
    allocate one gigantic page per node.

    On the other hand, we do have users who want to be able to specify
    which NUMA node gigantic pages should be allocated from, so that they
    can place virtual machines on a specific NUMA node.

    2. Gigantic pages allocated at boottime can't be freed

    At this point it's important to observe that regular hugepages allocated
    at runtime don't have those problems. This is so because the HugeTLB
    interface for runtime allocation in sysfs supports NUMA, and runtime
    allocated pages can be freed just fine via the buddy allocator.

    This series adds support for allocating gigantic pages at runtime. It
    does so by allocating gigantic pages via CMA instead of the buddy
    allocator. Releasing gigantic pages is also supported via CMA. As this
    series builds on top of the existing HugeTLB interface, gigantic page
    allocation and releasing work just like for regular sized hugepages.
    This also means that NUMA support just works.

    For example, to allocate two 1G gigantic pages on node 1, one can do:

    # echo 2 > \
    /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

    And, to release all gigantic pages on the same node:

    # echo 0 > \
    /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

    Please, refer to patch 5/5 for full technical details.

    Finally, please note that this series is a follow-up to a previous series
    that tried to extend the command-line option set to be NUMA aware:

    http://marc.info/?l=linux-mm&m=139593335312191&w=2

    During the discussion of that series it was agreed that having runtime
    allocation support for gigantic pages was a better solution.

    This patch (of 5):

    This function is going to be used by non-init code in a future
    commit.

    Signed-off-by: Luiz Capitulino
    Reviewed-by: Davidlohr Bueso
    Acked-by: Kirill A. Shutemov
    Reviewed-by: Zhang Yanfei
    Cc: Marcelo Tosatti
    Cc: Andrea Arcangeli
    Cc: Davidlohr Bueso
    Cc: David Rientjes
    Cc: Yasuaki Ishimatsu
    Cc: Yinghai Lu
    Cc: Rik van Riel
    Cc: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Luiz Capitulino
     

07 May, 2014

1 commit

  • Currently, I am seeing the following when I `mount -t hugetlbfs /none
    /dev/hugetlbfs`, and then simply do a `ls /dev/hugetlbfs`. I think it's
    related to the fact that hugetlbfs is probably not correctly setting
    itself up in this state:

    Unable to handle kernel paging request for data at address 0x00000031
    Faulting instruction address: 0xc000000000245710
    Oops: Kernel access of bad area, sig: 11 [#1]
    SMP NR_CPUS=2048 NUMA pSeries
    ....

    In KVM guests on Power, in a guest not backed by hugepages, we see the
    following:

    AnonHugePages:         0 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:         64 kB

    HPAGE_SHIFT == 0 in this configuration, which indicates that hugepages
    are not supported at boot-time, but this is only checked in
    hugetlb_init(). Extract the check to a helper function, and use it in a
    few relevant places.

    This does make hugetlbfs not supported (not registered at all) in this
    environment. I believe this is fine, as there are no valid hugepages
    and that won't change at runtime.
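
    In spirit, the helper is just a boot-time capability check; a sketch of
    what it boils down to (the exact definition is arch-overridable):

    /*
     * Sketch: usable hugepages exist only if the architecture reported a
     * non-zero huge page shift at boot.
     */
    #ifndef hugepages_supported
    #define hugepages_supported() (HPAGE_SHIFT != 0)
    #endif

    Callers such as hugetlb_init() and the hugetlbfs registration path can
    then simply bail out early when hugepages_supported() returns false.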

    [akpm@linux-foundation.org: use pr_info(), per Mel]
    [akpm@linux-foundation.org: fix build when HPAGE_SHIFT is undefined]
    Signed-off-by: Nishanth Aravamudan
    Reviewed-by: Aneesh Kumar K.V
    Acked-by: Mel Gorman
    Cc: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     

19 Apr, 2014

1 commit

  • A soft lockup when freeing gigantic hugepages, like the one fixed in
    commit 55f67141a892 ("mm: hugetlb: fix softlockup when a large number of
    hugepages are freed."), can also happen in return_unused_surplus_pages(),
    so let's fix it there too.

    Signed-off-by: Masayoshi Mizuma
    Signed-off-by: Naoya Horiguchi
    Cc: Joonsoo Kim
    Cc: Michal Hocko
    Cc: Aneesh Kumar
    Cc: KOSAKI Motohiro
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mizuma, Masayoshi
     

08 Apr, 2014

5 commits

  • When I decrease the value of nr_hugepages in procfs a lot, a softlockup
    happens. It is because there is no chance of a context switch during this
    process.

    On the other hand, when I allocate a large number of hugepages, there is
    some chance of a context switch, hence a softlockup doesn't happen during
    that process. So it's necessary to add a context switch to the freeing
    process, just as in the allocating process, to avoid the softlockup.

    When I freed 12 TB of hugepages with kernel-2.6.32-358.el6, the freeing
    process occupied a CPU for over 150 seconds and the following softlockup
    message appeared twice or more.

    $ echo 6000000 > /proc/sys/vm/nr_hugepages
    $ cat /proc/sys/vm/nr_hugepages
    6000000
    $ grep ^Huge /proc/meminfo
    HugePages_Total: 6000000
    HugePages_Free: 6000000
    HugePages_Rsvd: 0
    HugePages_Surp: 0
    Hugepagesize: 2048 kB
    $ echo 0 > /proc/sys/vm/nr_hugepages

    BUG: soft lockup - CPU#16 stuck for 67s! [sh:12883] ...
    Pid: 12883, comm: sh Not tainted 2.6.32-358.el6.x86_64 #1
    Call Trace:
    free_pool_huge_page+0xb8/0xd0
    set_max_huge_pages+0x128/0x190
    hugetlb_sysctl_handler_common+0x113/0x140
    hugetlb_sysctl_handler+0x1e/0x20
    proc_sys_call_handler+0x97/0xd0
    proc_sys_write+0x14/0x20
    vfs_write+0xb8/0x1a0
    sys_write+0x51/0x90
    __audit_syscall_exit+0x265/0x290
    system_call_fastpath+0x16/0x1b

    I have not confirmed this problem with upstream kernels because I am not
    able to prepare a machine equipped with 12TB of memory now. However I
    confirmed that the amount of decreased hugepages was directly
    proportional to the required time.

    I measured required times on a smaller machine. It showed 130-145
    hugepages decreased in a millisecond.

    Amount of decreasing      Required time    Decreasing rate
    hugepages                 (msec)           (pages/msec)
    ---------------------------------------------------------
    10,000 pages == 20GB      70 - 74          135-142
    30,000 pages == 60GB      208 - 229        131-144

    This means that, at this decreasing rate, freeing 6TB of hugepages will
    trigger a softlockup with the default threshold of 20 seconds.
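
    The shape of the fix is simply a reschedule point inside the freeing
    loop, roughly as below (illustrative, not the exact diff;
    pages_left_to_free() is a stand-in for the loop condition):

    while (pages_left_to_free(h)) {
            if (!free_pool_huge_page(h, nodes_allowed, 0))
                    break;
            /*
             * Give other tasks a chance to run while a huge number of
             * pages is being freed, so the soft lockup watchdog does not
             * fire; this drops and re-takes hugetlb_lock.
             */
            cond_resched_lock(&hugetlb_lock);
    }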

    Signed-off-by: Masayoshi Mizuma
    Cc: Joonsoo Kim
    Cc: Michal Hocko
    Cc: Wanpeng Li
    Cc: Aneesh Kumar
    Cc: KOSAKI Motohiro
    Cc: Naoya Horiguchi
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mizuma, Masayoshi
     
  • Signed-off-by: Choi Gi-yong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Choi Gi-yong
     
  • To increase compiler portability there is <linux/compiler.h> which
    provides convenience macros for various gcc constructs, e.g. __weak for
    __attribute__((weak)). I've replaced all instances of gcc attributes with
    the right macro in the memory management (/mm) subsystem.

    [akpm@linux-foundation.org: while-we're-there consistency tweaks]
    Signed-off-by: Gideon Israel Dsouza
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gideon Israel Dsouza
     
  • The NUMA scanning code can end up iterating over many gigabytes of
    unpopulated memory, especially in the case of a freshly started KVM
    guest with lots of memory.

    This results in the mmu notifier code being called even when there are
    no mapped pages in a virtual address range. The amount of time wasted
    can be enough to trigger soft lockup warnings with very large KVM
    guests.

    This patch moves the mmu notifier call to the pmd level, which
    represents 1GB areas of memory on x86-64. Furthermore, the mmu notifier
    code is only called from the address in the PMD where present mappings
    are first encountered.

    The hugetlbfs code is left alone for now; hugetlb mappings are not
    relocatable, and as such are left alone by the NUMA code, and should
    never trigger this problem to begin with.

    Signed-off-by: Rik van Riel
    Acked-by: David Rientjes
    Cc: Peter Zijlstra
    Cc: Andrea Arcangeli
    Reported-by: Xing Gang
    Tested-by: Chegu Vinod
    Cc: Sasha Levin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
  • huge_pte_offset() could return NULL, so we need NULL check to avoid
    potential NULL pointer dereferences.

    Signed-off-by: Naoya Horiguchi
    Cc: Mel Gorman
    Cc: Sasha Levin
    Cc: Kirill A. Shutemov
    Cc: Aneesh Kumar K.V
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     

04 Apr, 2014

8 commits

  • Both prep_compound_huge_page() and prep_compound_gigantic_page() are
    only called at bootstrap and can be marked as __init.

    The __SetPageTail(page) in prep_compound_gigantic_page() happening
    before page->first_page is initialized is not concerning since this is
    bootstrap.

    Signed-off-by: David Rientjes
    Reviewed-by: Michal Hocko
    Cc: Joonsoo Kim
    Reviewed-by: Davidlohr Bueso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The kernel can currently only handle a single hugetlb page fault at a
    time. This is due to a single mutex that serializes the entire path.
    This lock protects from spurious OOM errors under conditions of low
    availability of free hugepages. This problem is specific to hugepages,
    because it is normal to want to use every single hugepage in the system
    - with normal pages we simply assume there will always be a few spare
    pages which can be used temporarily until the race is resolved.

    Address this problem by using a table of mutexes, allowing a better
    chance of parallelization, where each hugepage is individually
    serialized. The hash key is selected depending on the mapping type.
    For shared ones it consists of the address space and file offset being
    faulted; while for private ones the mm and virtual address are used.
    The size of the table is selected based on a compromise of collisions
    and memory footprint of a series of database workloads.
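
    Based on that description, the mutex selection could look roughly like the
    sketch below (not necessarily the exact kernel code; num_fault_mutexes is
    assumed to be a power of two):

    static u32 fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
                                struct vm_area_struct *vma,
                                struct address_space *mapping,
                                pgoff_t idx, unsigned long address)
    {
            unsigned long key[2];
            u32 hash;

            if (vma->vm_flags & VM_SHARED) {
                    key[0] = (unsigned long)mapping;        /* address space */
                    key[1] = idx;                           /* file offset   */
            } else {
                    key[0] = (unsigned long)mm;             /* private: mm   */
                    key[1] = address >> huge_page_shift(h); /* and address   */
            }
            hash = jhash2((u32 *)&key, sizeof(key) / sizeof(u32), 0);

            return hash & (num_fault_mutexes - 1);
    }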

    Large database workloads that make heavy use of hugepages can be
    particularly exposed to this issue, causing start-up times to be
    painfully slow. This patch reduces the startup time of a 10 Gb Oracle
    DB (with ~5000 faults) from 37.5 secs to 25.7 secs. Larger workloads
    will naturally benefit even more.

    NOTE:
    The only downside to this patch, detected by Joonsoo Kim, is that a
    small race is possible in private mappings: a child process (with its
    own mm, after cow) can instantiate a page that is already being handled
    by the parent in a cow fault. When low on pages, this can trigger
    spurious OOMs. I have not been able to think of an efficient way of
    handling this... but do we really care about such a tiny window? We
    already maintain another theoretical race with normal pages. If not,
    one possible way is to maintain a single hash for private mappings --
    any workloads that *really* suffer from this scaling problem should
    already use shared mappings.

    [akpm@linux-foundation.org: remove stray + characters, go BUG if hugetlb_init() kmalloc fails]
    Signed-off-by: Davidlohr Bueso
    Cc: Aneesh Kumar K.V
    Cc: David Gibson
    Cc: Joonsoo Kim
    Cc: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • Until now, we get a resv_map in two different ways, depending on the
    mapping type. This makes the code dirty and unreadable. Unify it.

    [davidlohr@hp.com: code cleanups]
    Signed-off-by: Joonsoo Kim
    Signed-off-by: Davidlohr Bueso
    Reviewed-by: Aneesh Kumar K.V
    Reviewed-by: Naoya Horiguchi
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • This is a preparation patch to unify the use of vma_resv_map()
    regardless of the map type. This patch prepares it by removing
    resv_map_put(), which only works for HPAGE_RESV_OWNER's resv_map, not
    for all resv_maps.

    [davidlohr@hp.com: update changelog]
    Signed-off-by: Joonsoo Kim
    Signed-off-by: Davidlohr Bueso
    Reviewed-by: Aneesh Kumar K.V
    Reviewed-by: Naoya Horiguchi
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • There is a race condition if we map the same file in different processes.
    Region tracking is protected by mmap_sem and hugetlb_instantiation_mutex.
    When we do mmap, we don't grab the hugetlb_instantiation_mutex, but only
    mmap_sem (exclusively). This doesn't prevent other tasks from modifying
    the region structure, so it can be modified by two processes
    concurrently.

    To solve this, introduce a spinlock to resv_map and make the region
    manipulation functions grab it before they do actual work.
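
    In practice that means the reservation map carries its own lock and every
    region helper brackets its work with it; a minimal sketch (the region walk
    itself is elided):

    struct resv_map {
            struct kref refs;
            spinlock_t lock;                /* protects the regions list */
            struct list_head regions;
    };

    static long region_add(struct resv_map *resv, long f, long t)
    {
            long chg = 0;                   /* placeholder for the computed change */

            spin_lock(&resv->lock);
            /* ... walk and merge overlapping regions on resv->regions ... */
            spin_unlock(&resv->lock);

            return chg;
    }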

    [davidlohr@hp.com: updated changelog]
    Signed-off-by: Davidlohr Bueso
    Signed-off-by: Joonsoo Kim
    Suggested-by: Joonsoo Kim
    Acked-by: David Gibson
    Cc: David Gibson
    Cc: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • To change the protection method for region tracking to a fine grained
    one, we pass the resv_map, instead of the list_head, to the region
    manipulation functions.

    This doesn't introduce any functional change, and it is just for
    preparing a next step.

    [davidlohr@hp.com: update changelog]
    Signed-off-by: Joonsoo Kim
    Signed-off-by: Davidlohr Bueso
    Reviewed-by: Aneesh Kumar K.V
    Reviewed-by: Naoya Horiguchi
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Currently, to track reserved and allocated regions, we use two different
    ways depending on the mapping. For MAP_SHARED, we use the
    address_mapping's private_list, while for MAP_PRIVATE, we use a
    resv_map.

    Now, we are preparing to change the coarse grained lock which protects a
    region structure to a fine grained lock, and this difference hinders it.
    So, before changing it, unify region structure handling, consistently
    using a resv_map regardless of the kind of mapping.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Davidlohr Bueso
    Reviewed-by: Aneesh Kumar K.V
    Reviewed-by: Naoya Horiguchi
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Since put_mems_allowed() is strictly optional (it's a seqcount retry), we
    don't need to evaluate the function if the allocation was in fact
    successful, saving an smp_rmb(), some loads and comparisons on some
    relatively fast paths.

    Since the naming of get/put_mems_allowed() does suggest a mandatory
    pairing, rename the interface, as suggested by Mel, to resemble the
    seqcount interface.

    This gives us: read_mems_allowed_begin() and read_mems_allowed_retry(),
    where it is important to note that the return value of the latter call
    is inverted from its previous incarnation.
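
    A typical caller then follows the usual seqcount retry pattern, roughly
    as below (try_allocation() stands in for whatever allocation is being
    protected; note that read_mems_allowed_retry() returns true when a retry
    is needed, the inverse of the old put_mems_allowed()):

    struct page *page;
    unsigned int cpuset_mems_cookie;

    do {
            cpuset_mems_cookie = read_mems_allowed_begin();
            page = try_allocation(gfp_mask, order);
    } while (!page && read_mems_allowed_retry(cpuset_mems_cookie));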

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

24 Jan, 2014

1 commit

  • Most of the VM_BUG_ON assertions are performed on a page. Usually, when
    one of these assertions fails we'll get a BUG_ON with a call stack and
    the registers.

    Based on recent requests to add a small piece of code that dumps the page
    at various VM_BUG_ON sites, I've noticed that the page dump is quite
    useful to people debugging issues in mm.

    This patch adds a VM_BUG_ON_PAGE(cond, page) which beyond doing what
    VM_BUG_ON() does, also dumps the page before executing the actual
    BUG_ON.
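
    With CONFIG_DEBUG_VM enabled, the macro amounts to roughly the following
    (a sketch, not the verbatim definition):

    #define VM_BUG_ON_PAGE(cond, page)                                      \
            do {                                                            \
                    if (unlikely(cond)) {                                   \
                            /* dump flags, count, mapcount, mapping, ... */ \
                            dump_page(page);                                \
                            BUG();                                          \
                    }                                                       \
            } while (0)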

    [akpm@linux-foundation.org: fix up includes]
    Signed-off-by: Sasha Levin
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sasha Levin
     

22 Jan, 2014

5 commits

  • Switch to memblock interfaces for the early memory allocator instead of
    the bootmem allocator. There is no functional change in behavior compared
    to the current code from the bootmem users' point of view.

    Archs already converted to NO_BOOTMEM now directly use memblock
    interfaces instead of bootmem wrappers built on top of memblock. For the
    archs which still use bootmem, these new APIs just fall back to the
    existing bootmem APIs.

    Signed-off-by: Grygorii Strashko
    Signed-off-by: Santosh Shilimkar
    Cc: "Rafael J. Wysocki"
    Cc: Arnd Bergmann
    Cc: Christoph Lameter
    Cc: Greg Kroah-Hartman
    Cc: H. Peter Anvin
    Cc: Johannes Weiner
    Cc: KAMEZAWA Hiroyuki
    Cc: Konrad Rzeszutek Wilk
    Cc: Michal Hocko
    Cc: Paul Walmsley
    Cc: Pavel Machek
    Cc: Russell King
    Cc: Tejun Heo
    Cc: Tony Lindgren
    Cc: Yinghai Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Grygorii Strashko
     
  • When copy_hugetlb_page_range() is called to copy a range of hugetlb
    mappings, the secondary MMUs are not notified if there is a protection
    downgrade, which breaks COW semantics in KVM.

    This patch adds the necessary MMU notifier calls.
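
    Concretely, the copy loop is bracketed by invalidation notifications so
    that secondary MMUs (such as KVM's) drop stale writable mappings; roughly
    as in this sketch, needed only for the COW case:

    mmun_start = vma->vm_start;
    mmun_end   = vma->vm_end;
    if (cow)
            mmu_notifier_invalidate_range_start(src, mmun_start, mmun_end);

    /* ... copy (and, for COW, write-protect) the hugetlb PTE range ... */

    if (cow)
            mmu_notifier_invalidate_range_end(src, mmun_start, mmun_end);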

    Signed-off-by: Andreas Sandberg
    Acked-by: Steve Capper
    Acked-by: Marc Zyngier
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Hugh Dickins
    Cc: Andrea Arcangeli
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andreas Sandberg
     
  • No actual need of it. So keep it internal.

    Signed-off-by: Andrea Arcangeli
    Cc: Khalid Aziz
    Cc: Pravin Shelar
    Cc: Greg Kroah-Hartman
    Cc: Ben Hutchings
    Cc: Christoph Lameter
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Andi Kleen
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     
  • Signed-off-by: Andrea Arcangeli
    Cc: Khalid Aziz
    Cc: Pravin Shelar
    Cc: Greg Kroah-Hartman
    Cc: Ben Hutchings
    Cc: Christoph Lameter
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Andi Kleen
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • get_page_foll() is more optimal and is always safe to use under the PT
    lock. More so for hugetlbfs as there's no risk of race conditions with
    split_huge_page regardless of the PT lock.

    Signed-off-by: Andrea Arcangeli
    Tested-by: Khalid Aziz
    Cc: Pravin Shelar
    Cc: Greg Kroah-Hartman
    Cc: Ben Hutchings
    Cc: Christoph Lameter
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Andi Kleen
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     

22 Nov, 2013

2 commits

  • Commit 7cb2ef56e6a8 ("mm: fix aio performance regression for database
    caused by THP") can cause dereference of a dangling pointer if
    split_huge_page runs during PageHuge() if there are updates to the
    tail_page->private field.

    Also it is repeating compound_head twice for hugetlbfs and it is running
    compound_head+compound_trans_head for THP when a single one is needed in
    both cases.

    The new code within the PageSlab() check doesn't need to verify that the
    THP page size is never bigger than the smallest hugetlbfs page size, to
    avoid memory corruption.

    A longstanding theoretical race condition was found while fixing the
    above (see the change right after the skip_unlock label, that is
    relevant for the compound_lock path too).

    By re-establishing the _mapcount tail refcounting for all compound
    pages, this also fixes the below problem:

    echo 0 >/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

    BUG: Bad page state in process bash pfn:59a01
    page:ffffea000139b038 count:0 mapcount:10 mapping: (null) index:0x0
    page flags: 0x1c00000000008000(tail)
    Modules linked in:
    CPU: 6 PID: 2018 Comm: bash Not tainted 3.12.0+ #25
    Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
    Call Trace:
    dump_stack+0x55/0x76
    bad_page+0xd5/0x130
    free_pages_prepare+0x213/0x280
    __free_pages+0x36/0x80
    update_and_free_page+0xc1/0xd0
    free_pool_huge_page+0xc2/0xe0
    set_max_huge_pages.part.58+0x14c/0x220
    nr_hugepages_store_common.isra.60+0xd0/0xf0
    nr_hugepages_store+0x13/0x20
    kobj_attr_store+0xf/0x20
    sysfs_write_file+0x189/0x1e0
    vfs_write+0xc5/0x1f0
    SyS_write+0x55/0xb0
    system_call_fastpath+0x16/0x1b

    Signed-off-by: Khalid Aziz
    Signed-off-by: Andrea Arcangeli
    Tested-by: Khalid Aziz
    Cc: Pravin Shelar
    Cc: Greg Kroah-Hartman
    Cc: Ben Hutchings
    Cc: Christoph Lameter
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Andi Kleen
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     
  • Right now, the migration code in migrate_page_copy() uses copy_huge_page()
    for hugetlbfs and thp pages:

    if (PageHuge(page) || PageTransHuge(page))
    copy_huge_page(newpage, page);

    So, yay for code reuse. But:

    void copy_huge_page(struct page *dst, struct page *src)
    {
    struct hstate *h = page_hstate(src);

    and a non-hugetlbfs page has no page_hstate(). This works 99% of the
    time because page_hstate() determines the hstate from the page order
    alone. Since the page order of a THP page matches the default hugetlbfs
    page order, it works.

    But, if you change the default huge page size on the boot command-line
    (say default_hugepagesz=1G), then we might not even *have* a 2MB hstate
    so page_hstate() returns null and copy_huge_page() oopses pretty fast
    since copy_huge_page() dereferences the hstate:

    void copy_huge_page(struct page *dst, struct page *src)
    {
    struct hstate *h = page_hstate(src);
    if (unlikely(pages_per_huge_page(h) > MAX_ORDER_NR_PAGES)) {
    ...

    Mel noticed that the migration code is really the only user of these
    functions. This moves all the copy code over to migrate.c and makes
    copy_huge_page() work for THP by checking for it explicitly.

    I believe the bug was introduced in commit b32967ff101a ("mm: numa: Add
    THP migration for the NUMA working set scanning fault case")

    [akpm@linux-foundation.org: fix coding-style and comment text, per Naoya Horiguchi]
    Signed-off-by: Dave Hansen
    Acked-by: Mel Gorman
    Reviewed-by: Naoya Horiguchi
    Cc: Hillf Danton
    Cc: Andrea Arcangeli
    Tested-by: Dave Jiang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Hansen
     

15 Nov, 2013

1 commit

  • Hugetlb supports multiple page sizes. We use the split lock only at the
    PMD level, but not at the PUD level.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Naoya Horiguchi
    Signed-off-by: Kirill A. Shutemov
    Tested-by: Alex Thorlton
    Cc: Ingo Molnar
    Cc: "Eric W . Biederman"
    Cc: "Paul E . McKenney"
    Cc: Al Viro
    Cc: Andi Kleen
    Cc: Andrea Arcangeli
    Cc: Dave Hansen
    Cc: Dave Jones
    Cc: David Howells
    Cc: Frederic Weisbecker
    Cc: Johannes Weiner
    Cc: Kees Cook
    Cc: Mel Gorman
    Cc: Michael Kerrisk
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Rik van Riel
    Cc: Robin Holt
    Cc: Sedat Dilek
    Cc: Srikar Dronamraju
    Cc: Thomas Gleixner
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
     

17 Oct, 2013

2 commits

  • Commit 11feeb498086 ("kvm: optimize away THP checks in
    kvm_is_mmio_pfn()") introduced a memory leak when KVM is run on gigantic
    compound pages.

    That commit depends on the assumption that PG_reserved is identical for
    all head and tail pages of a compound page. So that if get_user_pages
    returns a tail page, we don't need to check the head page in order to
    know if we deal with a reserved page that requires different
    refcounting.

    The assumption that PG_reserved is the same for head and tail pages is
    certainly correct for THP and regular hugepages, but gigantic hugepages
    allocated through bootmem don't clear the PG_reserved on the tail pages
    (the clearing of PG_reserved is done later only if the gigantic hugepage
    is freed).

    This patch corrects the gigantic compound page initialization so that we
    can retain the optimization in 11feeb498086. The cacheline was already
    modified in order to set PG_tail so this won't affect the boot time of
    large memory systems.

    [akpm@linux-foundation.org: tweak comment layout and grammar]
    Signed-off-by: Andrea Arcangeli
    Reported-by: andy123
    Acked-by: Rik van Riel
    Cc: Gleb Natapov
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Acked-by: Rafael Aquini
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     
  • We should clear the page's private flag when returning the page to the
    hugepage pool. Otherwise, a marked hugepage can be allocated to a user
    who tries to allocate a non-reserved hugepage. If this user fails to
    map this hugepage, he would try to return the page to the hugepage pool.
    Since this page has the private flag set, resv_huge_pages would
    mistakenly increase. This patch fixes this situation.

    Signed-off-by: Joonsoo Kim
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: "Aneesh Kumar K.V"
    Cc: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Cc: Davidlohr Bueso
    Cc: David Gibson
    Cc: Wanpeng Li
    Cc: Naoya Horiguchi
    Cc: Hillf Danton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     

12 Sep, 2013

5 commits

  • Now hugepage migration is enabled, although restricted to pmd-based
    hugepages for now (due to lack of testing). So we should allocate
    migratable hugepages from ZONE_MOVABLE if possible.

    This patch makes GFP flags in hugepage allocation dependent on migration
    support, not only on the value of hugepages_treat_as_movable. It provides
    no change in behavior for architectures which do not support hugepage
    migration.
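
    The resulting GFP selection boils down to roughly the following sketch
    (using the pre-rename hugepage_migration_support() helper of that time):

    static inline gfp_t htlb_alloc_mask(struct hstate *h)
    {
            /*
             * Allocate from ZONE_MOVABLE only if the pages could later be
             * migrated out of it, or if the admin explicitly asked for it.
             */
            if (hugepages_treat_as_movable || hugepage_migration_support(h))
                    return GFP_HIGHUSER_MOVABLE;
            else
                    return GFP_HIGHUSER;
    }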

    Signed-off-by: Naoya Horiguchi
    Acked-by: Andi Kleen
    Reviewed-by: Wanpeng Li
    Cc: Hillf Danton
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: KOSAKI Motohiro
    Cc: Michal Hocko
    Cc: Rik van Riel
    Cc: "Aneesh Kumar K.V"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     
  • Until now we can't offline memory blocks which contain hugepages because a
    hugepage is considered as an unmovable page. But now with this patch
    series, a hugepage has become movable, so by using hugepage migration we
    can offline such memory blocks.

    What's different from other users of hugepage migration is that we need to
    decompose all the hugepages inside the target memory block into free buddy
    pages after hugepage migration, because otherwise free hugepages remaining
    in the memory block interfere with memory offlining. For this reason we
    introduce the new functions dissolve_free_huge_page() and
    dissolve_free_huge_pages().
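
    Conceptually, dissolving walks the pfn range in huge-page-sized steps and
    hands any free hugepage it finds back to the buddy allocator; an
    illustrative sketch (simplified to a single hstate, unlike the real code):

    static void dissolve_free_huge_pages(unsigned long start_pfn,
                                         unsigned long end_pfn)
    {
            struct hstate *h = &default_hstate;
            unsigned long pfn;

            /* Step through the range one huge page at a time. */
            for (pfn = start_pfn; pfn < end_pfn;
                 pfn += 1UL << huge_page_order(h)) {
                    struct page *page = pfn_to_page(pfn);

                    /*
                     * A free (unused) hugepage is removed from the hugetlb
                     * free lists and released to the buddy allocator so the
                     * memory block can be offlined.
                     */
                    if (PageHuge(page) && !page_count(page))
                            dissolve_free_huge_page(page);
            }
    }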

    Other than that, what this patch does is straightforward: it adds hugepage
    migration code, that is, hugepage handling to the functions which scan
    over pfns and collect hugepages to be migrated, and a hugepage allocation
    path to alloc_migrate_target().

    As for larger hugepages (1GB for x86_64), it's not easy to do hotremove
    over them because they are larger than a memory block. So for now we
    simply leave it to fail as it is.

    [yongjun_wei@trendmicro.com.cn: remove duplicated include]
    Signed-off-by: Naoya Horiguchi
    Acked-by: Andi Kleen
    Cc: Hillf Danton
    Cc: Wanpeng Li
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: KOSAKI Motohiro
    Cc: Michal Hocko
    Cc: Rik van Riel
    Cc: "Aneesh Kumar K.V"
    Signed-off-by: Wei Yongjun
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     
  • Extend do_mbind() to handle vma with VM_HUGETLB set. We will be able to
    migrate hugepage with mbind(2) after applying the enablement patch which
    comes later in this series.

    Signed-off-by: Naoya Horiguchi
    Acked-by: Andi Kleen
    Reviewed-by: Wanpeng Li
    Acked-by: Hillf Danton
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: KOSAKI Motohiro
    Cc: Michal Hocko
    Cc: Rik van Riel
    Cc: "Aneesh Kumar K.V"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     
  • Currently hugepage migration is available only for soft offlining, but
    it's also useful for some other users of page migration (clearly because
    users of hugepage can enjoy the benefit of mempolicy and memory hotplug.)
    So this patchset tries to extend such users to support hugepage migration.

    The target of this patchset is to enable hugepage migration for NUMA
    related system calls (migrate_pages(2), move_pages(2), and mbind(2)), and
    memory hotplug.

    This patchset does not add hugepage migration for memory compaction,
    because users of memory compaction mainly expect to construct thp by
    arranging raw pages, and there's little or no need to compact hugepages.
    CMA, another user of page migration, can benefit from hugepage migration,
    but is not enabled to support it for now (just because of lack of testing
    and expertise in CMA).

    Hugepage migration of non pmd-based hugepage (for example 1GB hugepage in
    x86_64, or hugepages in architectures like ia64) is not enabled for now
    (again, because of lack of testing.)

    As for how these are achieved, I extended the API (migrate_pages()) to
    handle hugepages (with patches 1 and 2) and adjusted the code of each
    caller to check and collect movable hugepages (with patches 3-7). The
    remaining 2 patches are miscellaneous ones to avoid unexpected behavior.
    Patch 8 is about making sure that we only migrate pmd-based hugepages.
    And patch 9 is about choosing an appropriate zone for hugepage allocation.

    My test is mainly a functional one, simply kicking hugepage migration via
    each entry point and confirming that migration is done correctly. Test
    code is available here:

    git://github.com/Naoya-Horiguchi/test_hugepage_migration_extension.git

    And I always run libhugetlbfs test when changing hugetlbfs's code. With
    this patchset, no regression was found in the test.

    This patch (of 9):

    Before enabling each user of page migration to support hugepages,
    this patch enables the list of pages for migration to link not only
    LRU pages, but also hugepages. As a result, putback_movable_pages()
    and migrate_pages() can handle both LRU pages and hugepages.

    Signed-off-by: Naoya Horiguchi
    Acked-by: Andi Kleen
    Reviewed-by: Wanpeng Li
    Acked-by: Hillf Danton
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: KOSAKI Motohiro
    Cc: Michal Hocko
    Cc: Rik van Riel
    Cc: "Aneesh Kumar K.V"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     
  • If we fail with a reserved page, just calling put_page() is not
    sufficient, because put_page() invokes free_huge_page() as its last step,
    and that function doesn't know whether a page comes from the reserved
    pool or not, so it doesn't do anything related to the reserve count.
    This makes the reserve count lower than what we need, because the reserve
    count was already decreased in dequeue_huge_page_vma(). This patch fixes
    this situation.

    Signed-off-by: Joonsoo Kim
    Cc: Aneesh Kumar
    Cc: Naoya Horiguchi
    Cc: Davidlohr Bueso
    Cc: David Gibson
    Cc: Wanpeng Li
    Cc: Hillf Danton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim