18 Oct, 2013

1 commit

  • commit 118b23022512eb2f41ce42db70dc0568d00be4ba upstream.

    dynamic_dname() is both too much and too little for those - the
    output may be well in excess of the 64 bytes dynamic_dname() assumes
    to be enough (thanks to ashmem feeding really long names to
    shmem_file_setup()), and vsnprintf() is overkill for those guys.

    Signed-off-by: Al Viro
    Cc: Colin Cross
    Signed-off-by: Greg Kroah-Hartman

    Al Viro
     

14 Oct, 2013

2 commits

  • commit 117aad1e9e4d97448d1df3f84b08bd65811e6d6a upstream.

    Isolated balloon pages can wrongly end up in LRU lists when
    migrate_pages() finishes its round without draining all the isolated
    page list.

    The same issue can happen when reclaim_clean_pages_from_list() tries to
    reclaim pages from an isolated page list, before migration, in the CMA
    path. Such a balloon page leak opens a race window against LRU list
    shrinkers, which leads to the following kernel panic:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
    IP: [] shrink_page_list+0x24e/0x897
    PGD 3cda2067 PUD 3d713067 PMD 0
    Oops: 0000 [#1] SMP
    CPU: 0 PID: 340 Comm: kswapd0 Not tainted 3.12.0-rc1-22626-g4367597 #87
    Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
    RIP: shrink_page_list+0x24e/0x897
    RSP: 0000:ffff88003da499b8 EFLAGS: 00010286
    RAX: 0000000000000000 RBX: ffff88003e82bd60 RCX: 00000000000657d5
    RDX: 0000000000000000 RSI: 000000000000031f RDI: ffff88003e82bd40
    RBP: ffff88003da49ab0 R08: 0000000000000001 R09: 0000000081121a45
    R10: ffffffff81121a45 R11: ffff88003c4a9a28 R12: ffff88003e82bd40
    R13: ffff88003da0e800 R14: 0000000000000001 R15: ffff88003da49d58
    FS: 0000000000000000(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 00000000067d9000 CR3: 000000003ace5000 CR4: 00000000000407b0
    Call Trace:
    shrink_inactive_list+0x240/0x3de
    shrink_lruvec+0x3e0/0x566
    __shrink_zone+0x94/0x178
    shrink_zone+0x3a/0x82
    balance_pgdat+0x32a/0x4c2
    kswapd+0x2f0/0x372
    kthread+0xa2/0xaa
    ret_from_fork+0x7c/0xb0
    Code: 80 7d 8f 01 48 83 95 68 ff ff ff 00 4c 89 e7 e8 5a 7b 00 00 48 85 c0 49 89 c5 75 08 80 7d 8f 00 74 3e eb 31 48 8b 80 18 01 00 00 8b 74 0d 48 8b 78 30 be 02 00 00 00 ff d2 eb
    RIP [] shrink_page_list+0x24e/0x897
    RSP
    CR2: 0000000000000028
    ---[ end trace 703d2451af6ffbfd ]---
    Kernel panic - not syncing: Fatal exception

    This patch fixes the issue by ensuring the proper tests are made at
    putback_movable_pages() and reclaim_clean_pages_from_list() so that
    isolated balloon pages are not wrongly reinserted into LRU lists.

    [akpm@linux-foundation.org: clarify awkward comment text]
    Signed-off-by: Rafael Aquini
    Reported-by: Luiz Capitulino
    Tested-by: Luiz Capitulino
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Rafael Aquini
     
  • commit 83b2944fd2532b92db099cb3ada12df32a05b368 upstream.

    The "force" parameter in __blk_queue_bounce was being ignored, which
    means that stable page snapshots are not always happening (on ext3).
    This of course leads to DIF disks reporting checksum errors, so fix this
    regression.

    The regression was introduced in commit 6bc454d15004 ("bounce: Refactor
    __blk_queue_bounce to not use bi_io_vec")

    Reported-by: Mel Gorman
    Signed-off-by: Darrick J. Wong
    Cc: Kent Overstreet
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Darrick J. Wong
     

02 Oct, 2013

1 commit

  • commit 7cb2ef56e6a8b7b368b2e883a0a47d02fed66911 upstream.

    I am working with a tool that simulates oracle database I/O workload.
    This tool (orion, to be specific)
    allocates hugetlbfs pages using shmget() with SHM_HUGETLB flag. It then
    does aio into these pages from flash disks using various common block
    sizes used by database. I am looking at performance with two of the most
    common block sizes - 1M and 64K. aio performance with these two block
    sizes plunged after Transparent HugePages was introduced in the kernel.
    Here are performance numbers:

                 pre-THP      2.6.39       3.11-rc5
    1M read      8384 MB/s    5629 MB/s    6501 MB/s
    64K read     7867 MB/s    4576 MB/s    4251 MB/s
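
    For reference, the hugetlbfs allocation path the tool uses (shmget()
    with SHM_HUGETLB) boils down to something like the following sketch;
    the segment size is illustrative and huge pages must be configured
    (vm.nr_hugepages) for it to succeed:

        #include <stdio.h>
        #include <string.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        #ifndef SHM_HUGETLB
        #define SHM_HUGETLB 04000
        #endif

        int main(void)
        {
                size_t len = 256UL * 1024 * 1024;   /* 256M backed by huge pages */
                int id = shmget(IPC_PRIVATE, len, IPC_CREAT | SHM_HUGETLB | 0600);
                void *buf;

                if (id < 0) {
                        perror("shmget");
                        return 1;
                }
                buf = shmat(id, NULL, 0);
                if (buf == (void *)-1) {
                        perror("shmat");
                        return 1;
                }
                memset(buf, 0, len);   /* touch the pages; aio would read into these */
                shmdt(buf);
                shmctl(id, IPC_RMID, NULL);
                return 0;
        }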

    I have narrowed the performance impact down to the overheads introduced by
    THP in __get_page_tail() and put_compound_page() routines. perf top shows
    >40% of cycles being spent in these two routines. Every time direct I/O
    to hugetlbfs pages starts, kernel calls get_page() to grab a reference to
    the pages and calls put_page() when I/O completes to put the reference
    away. THP introduced a significant amount of locking overhead to get_page()
    and put_page() when dealing with compound pages, because hugepages can be
    split underneath get_page() and put_page(). It added this overhead
    irrespective of whether it is dealing with hugetlbfs pages or transparent
    hugepages. This resulted in a 20%-45% drop in aio performance when using
    hugetlbfs pages.

    Since hugetlbfs pages can not be split, there is no reason to go through
    all the locking overhead for these pages from what I can see. I added
    code to __get_page_tail() and put_compound_page() to bypass all the
    locking code when working with hugetlbfs pages. This improved performance
    significantly. Performance numbers with this patch:

                 pre-THP      3.11-rc5     3.11-rc5 + Patch
    1M read      8384 MB/s    6501 MB/s    8371 MB/s
    64K read     7867 MB/s    4251 MB/s    6510 MB/s

    Performance with 64K read is still lower than what it was before THP, but
    still a 53% improvement. It does mean there is more work to be done but I
    will take a 53% improvement for now.

    Please take a look at the following patch and let me know if it looks
    reasonable.

    [akpm@linux-foundation.org: tweak comments]
    Signed-off-by: Khalid Aziz
    Cc: Pravin B Shelar
    Cc: Christoph Lameter
    Cc: Andrea Arcangeli
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Minchan Kim
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Khalid Aziz
     

27 Sep, 2013

2 commits

  • commit a8f531ebc33052642b4bd7b812eedf397108ce64 upstream.

    In collapse_huge_page() there is a race window between releasing the
    mmap_sem read lock and taking the mmap_sem write lock, so find_vma() may
    return NULL. Check the return value to avoid a NULL pointer dereference.

    collapse_huge_page
        khugepaged_alloc_page
            up_read(&mm->mmap_sem)
        down_write(&mm->mmap_sem)
        vma = find_vma(mm, address)

    Signed-off-by: Libin
    Acked-by: Kirill A. Shutemov
    Reviewed-by: Wanpeng Li
    Reviewed-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Libin
     
  • commit 2bff24a3707093c435ab3241c47dcdb5f16e432b upstream.

    A memory cgroup with (1) multiple threshold notifications and (2) at least
    one threshold >=2G was not reliable. Specifically the notifications would
    either not fire or would not fire in the proper order.

    The __mem_cgroup_threshold() signaling logic depends on keeping 64 bit
    thresholds in sorted order. mem_cgroup_usage_register_event() sorts them
    with compare_thresholds(), which returns the difference of two 64 bit
    thresholds as an int. If the difference is positive but has bit[31] set,
    then sort() treats the difference as negative and breaks sort order.

    This fix compares the two arbitrary 64 bit thresholds and returns the
    classic -1, 0, 1 result.
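
    As an illustrative user-space analogue (a sketch, not the kernel code;
    the threshold values are the ones used in the test below), the broken
    comparator truncates a u64 difference to int and can get the sign wrong,
    while the fixed one returns -1/0/1:

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* broken: the difference 0x81000000 does not fit in int, so the
         * conversion typically yields a negative value and the sign is wrong */
        static int cmp_broken(const void *a, const void *b)
        {
                return *(const uint64_t *)a - *(const uint64_t *)b;
        }

        /* fixed: compare and return the classic -1, 0, 1 */
        static int cmp_fixed(const void *a, const void *b)
        {
                uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

                return (x > y) - (x < y);
        }

        int main(void)
        {
                uint64_t t[2] = { 0x81001000, 0x1000 };

                qsort(t, 2, sizeof(t[0]), cmp_broken);
                printf("broken: %#"PRIx64" %#"PRIx64"\n", t[0], t[1]); /* stays unsorted */

                qsort(t, 2, sizeof(t[0]), cmp_fixed);
                printf("fixed:  %#"PRIx64" %#"PRIx64"\n", t[0], t[1]); /* 0x1000 first */
                return 0;
        }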

    The test below sets two notifications (at 0x1000 and 0x81001000):

        cd /sys/fs/cgroup/memory
        mkdir x
        for x in 4096 2164264960; do
            cgroup_event_listener x/memory.usage_in_bytes $x | sed "s/^/$x listener:/" &
        done
        echo $$ > x/cgroup.procs
        anon_leaker 500M

    v3.11-rc7 fails to signal the 4096 event listener:
    Leaking...
    Done leaking pages.

    Patched v3.11-rc7 properly notifies:
    Leaking...
    4096 listener:2013:8:31:14:13:36
    Done leaking pages.

    The fixed bug is old. It appears to date back to the introduction of
    memcg threshold notifications in v2.6.34-rc1-116-g2e72b6347c94 "memcg:
    implement memory thresholds"

    Signed-off-by: Greg Thelen
    Acked-by: Michal Hocko
    Acked-by: Kirill A. Shutemov
    Acked-by: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Greg Thelen
     

08 Sep, 2013

1 commit

  • commit 6f6b8951897e487ea6f77b90ea01f70a9c363770 upstream.

    If the system had a few memory groups and all of them were destroyed,
    memcg_limited_groups_array_size has a non-zero value, but all new caches
    are created without memcg_params, because memcg_kmem_enabled() returns
    false.

    We try to enumerate child caches in a few places, and all of them are
    potentially dangerous.

    For example, my kernel is compiled with CONFIG_SLAB and it crashed when I
    tried to mount an NFS share after a few experiments with kmemcg.

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
    IP: [] do_tune_cpucache+0x8a/0xd0
    PGD b942a067 PUD b999f067 PMD 0
    Oops: 0000 [#1] SMP
    Modules linked in: fscache(+) ip6table_filter ip6_tables iptable_filter ip_tables i2c_piix4 pcspkr virtio_net virtio_balloon i2c_core floppy
    CPU: 0 PID: 357 Comm: modprobe Not tainted 3.11.0-rc7+ #59
    Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
    task: ffff8800b9f98240 ti: ffff8800ba32e000 task.ti: ffff8800ba32e000
    RIP: 0010:[] [] do_tune_cpucache+0x8a/0xd0
    RSP: 0018:ffff8800ba32fb70 EFLAGS: 00010246
    RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000006
    RDX: 0000000000000000 RSI: ffff8800b9f98910 RDI: 0000000000000246
    RBP: ffff8800ba32fba0 R08: 0000000000000002 R09: 0000000000000004
    R10: 0000000000000001 R11: 0000000000000001 R12: 0000000000000010
    R13: 0000000000000008 R14: 00000000000000d0 R15: ffff8800375d0200
    FS: 00007f55f1378740(0000) GS:ffff8800bfa00000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    CR2: 00007f24feba57a0 CR3: 0000000037b51000 CR4: 00000000000006f0
    Call Trace:
    enable_cpucache+0x49/0x100
    setup_cpu_cache+0x215/0x280
    __kmem_cache_create+0x2fa/0x450
    kmem_cache_create_memcg+0x214/0x350
    kmem_cache_create+0x2b/0x30
    fscache_init+0x19b/0x230 [fscache]
    do_one_initcall+0xfa/0x1b0
    load_module+0x1c41/0x26d0
    SyS_finit_module+0x86/0xb0
    system_call_fastpath+0x16/0x1b

    Signed-off-by: Andrey Vagin
    Cc: Pekka Enberg
    Cc: Christoph Lameter
    Cc: Glauber Costa
    Cc: Joonsoo Kim
    Cc: Michal Hocko
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Andrey Vagin
     

20 Aug, 2013

2 commits

  • commit 2b047252d087be7f2ba088b4933cd904f92e6fce upstream.

    Ben Tebulin reported:

    "Since v3.7.2 on two independent machines a very specific Git
    repository fails in 9/10 cases on git-fsck due to an SHA1/memory
    failures. This only occurs on a very specific repository and can be
    reproduced stably on two independent laptops. Git mailing list ran
    out of ideas and for me this looks like some very exotic kernel issue"

    and bisected the failure to the backport of commit 53a59fc67f97 ("mm:
    limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT").

    That commit itself is not actually buggy, but what it does is to make it
    much more likely to hit the partial TLB invalidation case, since it
    introduces a new case in tlb_next_batch() that previously only ever
    happened when running out of memory.

    The real bug is that the TLB gather virtual memory range setup is subtly
    buggered. It was introduced in commit 597e1c3580b7 ("mm/mmu_gather:
    enable tlb flush range in generic mmu_gather"), and the range handling
    was already fixed at least once in commit e6c495a96ce0 ("mm: fix the TLB
    range flushed when __tlb_remove_page() runs out of slots"), but that fix
    was not complete.

    The problem with the TLB gather virtual address range is that it isn't
    set up by the initial tlb_gather_mmu() initialization (which didn't get
    the TLB range information), but it is set up ad-hoc later by the
    functions that actually flush the TLB. And so any such case that forgot
    to update the TLB range entries would potentially miss TLB invalidates.

    Rather than try to figure out exactly which particular ad-hoc range
    setup was missing (I personally suspect it's the hugetlb case in
    zap_huge_pmd(), which didn't have the same logic as zap_pte_range()
    did), this patch just gets rid of the problem at the source: make the
    TLB range information available to tlb_gather_mmu(), and initialize it
    when initializing all the other tlb gather fields.

    This makes the patch larger, but conceptually much simpler. And the end
    result is much more understandable; even if you want to play games with
    partial ranges when invalidating the TLB contents in chunks, now the
    range information is always there, and anybody who doesn't want to
    bother with it won't introduce subtle bugs.

    Ben verified that this fixes his problem.

    Reported-bisected-and-tested-by: Ben Tebulin
    Build-testing-by: Stephen Rothwell
    Build-testing-by: Richard Weinberger
    Reviewed-by: Michal Hocko
    Acked-by: Peter Zijlstra
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Linus Torvalds
     
  • commit 3e6b11df245180949938734bc192eaf32f3a06b3 upstream.

    struct memcg_cache_params has a union. Different parts of this union
    are used for root and non-root caches. A part with destroying work is
    used only for non-root caches.
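
    A simplified user-space sketch of the shape described above (the field
    names are illustrative, not the kernel's exact layout): the destroy-work
    part of the union is only meaningful when is_root_cache is false.

        #include <stdbool.h>
        #include <stdio.h>

        struct cache_params {
                bool is_root_cache;
                union {
                        void **memcg_caches;     /* root caches only */
                        struct {                 /* non-root (per-memcg) caches only */
                                void *memcg;
                                void *root_cache;
                                int destroy_work; /* stands in for a work_struct */
                        };
                };
        };

        int main(void)
        {
                struct cache_params root = { .is_root_cache = true };

                /* Touching the non-root members of a root cache, as the buggy
                 * path effectively did, reads garbage rather than valid data. */
                if (!root.is_root_cache)
                        printf("would schedule destroy work\n");
                else
                        printf("root cache: non-root union members are not valid\n");
                return 0;
        }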

    I fixed the same problem in another place v3.9-rc1-16204-gf101a94, but
    didn't notice this one.

    This patch fixes the kernel panic:

    [ 46.848187] BUG: unable to handle kernel paging request at 000000fffffffeb8
    [ 46.849026] IP: [] kmem_cache_destroy_memcg_children+0x6c/0xc0
    [ 46.849092] PGD 0
    [ 46.849092] Oops: 0000 [#1] SMP
    ...

    Signed-off-by: Andrey Vagin
    Cc: Glauber Costa
    Cc: Johannes Weiner
    Acked-by: Michal Hocko
    Cc: Balbir Singh
    Cc: KAMEZAWA Hiroyuki
    Cc: Konstantin Khlebnikov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Andrey Vagin
     

04 Aug, 2013

2 commits

  • commit 3964acd0dbec123aa0a621973a2a0580034b4788 upstream.

    vma_adjust() does vma_set_policy(vma, vma_policy(next)) and this
    is doubly wrong:

    1. This leaks vma->vm_policy if it is not NULL and not equal to
    next->vm_policy.

    This can happen if vma_merge() expands "area", not prev (case 8).

    2. This sets the wrong policy if vma_merge() joins prev and area;
    area is the vma the caller needs to update, and it still has the
    old policy.

    Revert commit 1444f92c8498 ("mm: merging memory blocks resets
    mempolicy") which introduced these problems.

    Change mbind_range() to recheck mpol_equal() after vma_merge() to fix
    the problem that commit tried to address.

    Signed-off-by: Oleg Nesterov
    Acked-by: KOSAKI Motohiro
    Cc: Steven T Hampson
    Cc: Mel Gorman
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Oleg Nesterov
     
  • commit e6c495a96ce02574e765d5140039a64c8d4e8c9e upstream.

    zap_pte_range loops from @addr to @end. In the middle, if it runs out of
    batching slots, TLB entries need to be flushed for @start to @interim,
    NOT @interim to @end.

    Since ARC port doesn't use page free batching I can't test it myself but
    this seems like the right thing to do.

    Observed this when working on a fix for the issue at thread:
    http://www.spinics.net/lists/linux-arch/msg21736.html

    Signed-off-by: Vineet Gupta
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: Rik van Riel
    Cc: David Rientjes
    Cc: Peter Zijlstra
    Acked-by: Catalin Marinas
    Cc: Max Filippov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Vineet Gupta
     

22 Jul, 2013

3 commits

  • commit cea27eb2a202959783f81254c48c250ddd80e129 upstream.

    The memory-remove code fails to correctly account Total High Memory
    when a memory block which contains High Memory is offlined, as shown
    in the example below. The following patch fixes it.

    Before logic memory remove:

    MemTotal: 7603740 kB
    MemFree: 6329612 kB
    Buffers: 94352 kB
    Cached: 872008 kB
    SwapCached: 0 kB
    Active: 626932 kB
    Inactive: 519216 kB
    Active(anon): 180776 kB
    Inactive(anon): 222944 kB
    Active(file): 446156 kB
    Inactive(file): 296272 kB
    Unevictable: 0 kB
    Mlocked: 0 kB
    HighTotal: 7294672 kB
    HighFree: 5704696 kB
    LowTotal: 309068 kB
    LowFree: 624916 kB

    After logic memory remove:

    MemTotal: 7079452 kB
    MemFree: 5805976 kB
    Buffers: 94372 kB
    Cached: 872000 kB
    SwapCached: 0 kB
    Active: 626936 kB
    Inactive: 519236 kB
    Active(anon): 180780 kB
    Inactive(anon): 222944 kB
    Active(file): 446156 kB
    Inactive(file): 296292 kB
    Unevictable: 0 kB
    Mlocked: 0 kB
    HighTotal: 7294672 kB
    HighFree: 5181024 kB
    LowTotal: 4294752076 kB
    LowFree: 624952 kB
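
    Note the bogus LowTotal after the removal: MemTotal - HighTotal is
    7079452 - 7294672 = -215220 kB, which wraps around to
    4294967296 - 215220 = 4294752076 kB because HighTotal was never reduced.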

    [mhocko@suse.cz: fix CONFIG_HIGHMEM=n build]
    Signed-off-by: Wanpeng Li
    Reviewed-by: Michal Hocko
    Cc: KAMEZAWA Hiroyuki
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Wanpeng Li
     
  • commit f37a96914d1aea10fed8d9af10251f0b9caea31b upstream.

    mem_cgroup_css_online calls mem_cgroup_put if memcg_init_kmem fails.
    This is not correct because only memcg_propagate_kmem takes an
    additional reference while mem_cgroup_sockets_init is allowed to fail as
    well (although no current implementation fails) but it doesn't take any
    reference. This all suggests that it should be memcg_propagate_kmem
    that should clean up after itself so this patch moves mem_cgroup_put
    over there.

    Unfortunately this is not that easy (as pointed out by Li Zefan) because
    memcg_kmem_mark_dead marks the group dead (KMEM_ACCOUNTED_DEAD) if it is
    marked active (KMEM_ACCOUNTED_ACTIVE) which is the case even if
    memcg_propagate_kmem fails so the additional reference is dropped in
    that case in kmem_cgroup_destroy which means that the reference would be
    dropped two times.

    The easiest way then would be to simply remove mem_cgroup_put from
    mem_cgroup_css_online and rely on kmem_cgroup_destroy doing the right
    thing.

    Signed-off-by: Michal Hocko
    Signed-off-by: Li Zefan
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Cc: Tejun Heo
    Cc: Glauber Costa
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Michal Hocko
     
  • commit 0f8f8094d28eb53368ac09186ea6b3a324cc7d44 upstream.

    Some architectures (e.g. powerpc built with CONFIG_PPC_256K_PAGES=y
    CONFIG_FORCE_MAX_ZONEORDER=11) get PAGE_SHIFT + MAX_ORDER > 26.

    In 3.10 kernels, CONFIG_LOCKDEP=y with PAGE_SHIFT + MAX_ORDER > 26 makes
    init_lock_keys() dereference beyond kmalloc_caches[26].
    This leads to an unbootable system (kernel panic at initializing SLAB)
    if one of kmalloc_caches[26...PAGE_SHIFT+MAX_ORDER-1] is not NULL.

    Fix this by making sure that init_lock_keys() does not dereference beyond
    kmalloc_caches[26].

    Signed-off-by: Christoph Lameter
    Reported-by: Tetsuo Handa
    Cc: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Pekka Enberg
    Signed-off-by: Greg Kroah-Hartman

    Christoph Lameter
     

14 Jul, 2013

2 commits

  • commit fa460c2d37870e0a6f94c70e8b76d05ca11b6db0 upstream.

    This reverts commit e4715f01be697a.

    mem_cgroup_put is hierarchy aware, so mem_cgroup_put(memcg) already drops
    an additional reference from all parents, and the additional
    mem_cgroup_put(parent) potentially causes a use-after-free.

    Signed-off-by: Michal Hocko
    Signed-off-by: Li Zefan
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Cc: Tejun Heo
    Cc: Glauber Costa
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Michal Hocko
     
  • commit 13d60f4b6ab5b702dc8d2ee20999f98a93728aec upstream.

    The futex_keys of process shared futexes are generated from the page
    offset, the mapping host and the mapping index of the futex user space
    address. This should result in an unique identifier for each futex.

    However, this is not true when futexes are located in different subpages
    of a hugepage. The reason is that the mapping index for all those
    futexes evaluates to the index of the base page of the hugetlbfs
    mapping. So a futex at offset 0 of the hugepage mapping and another
    one at offset PAGE_SIZE of the same hugepage mapping have identical
    futex_keys. This happens because the futex code blindly uses
    page->index.

    Steps to reproduce the bug:

    1. Map a file from hugetlbfs. Initialize pthread_mutex1 at offset 0
    and pthread_mutex2 at offset PAGE_SIZE of the hugetlbfs
    mapping.

    The mutexes must be initialized as PTHREAD_PROCESS_SHARED because
    PTHREAD_PROCESS_PRIVATE mutexes are not affected by this issue as
    their keys solely depend on the user space address.

    2. Lock mutex1 and mutex2

    3. Create thread1 and in the thread function lock mutex1, which
    results in thread1 blocking on the locked mutex1.

    4. Create thread2 and in the thread function lock mutex2, which
    results in thread2 blocking on the locked mutex2.

    5. Unlock mutex2. Despite the fact that mutex2 got unlocked, thread2
    still blocks on mutex2 because the futex_key points to mutex1.
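
    A minimal user-space sketch of the setup in steps 1 and 2 above (the
    hugetlbfs mount point, file name, and 2M huge page size are assumptions
    for illustration, not part of the original report):

        #include <fcntl.h>
        #include <pthread.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
                size_t hpage = 2UL * 1024 * 1024;        /* assumed 2M huge page */
                long psize = sysconf(_SC_PAGESIZE);
                int fd = open("/mnt/huge/futex-test", O_CREAT | O_RDWR, 0600);

                if (fd < 0 || ftruncate(fd, hpage) < 0)
                        return 1;
                char *map = mmap(NULL, hpage, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
                if (map == MAP_FAILED)
                        return 1;

                /* mutex1 at offset 0, mutex2 at offset PAGE_SIZE: two
                 * different subpages of the same huge page */
                pthread_mutex_t *mutex1 = (pthread_mutex_t *)map;
                pthread_mutex_t *mutex2 = (pthread_mutex_t *)(map + psize);

                pthread_mutexattr_t attr;
                pthread_mutexattr_init(&attr);
                pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
                pthread_mutex_init(mutex1, &attr);
                pthread_mutex_init(mutex2, &attr);

                pthread_mutex_lock(mutex1);
                pthread_mutex_lock(mutex2);
                /* Steps 3-5: spawn two threads blocking on mutex1 and mutex2,
                 * then unlock mutex2; on affected kernels the mutex2 waiter
                 * never wakes up because both futexes hash to the same key. */
                return 0;
        }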

    To solve this issue we need to take the normal page index of the page
    which contains the futex into account, if the futex is in a hugetlbfs
    mapping. In other words, we calculate the normal page mapping index of
    the subpage within the hugetlbfs mapping.

    Mappings which are not based on hugetlbfs are not affected and still
    use page->index.

    Thanks to Mel Gorman who provided a patch for adding proper evaluation
    functions to the hugetlbfs code to avoid exposing hugetlbfs specific
    details to the futex code.

    [ tglx: Massaged changelog ]

    Signed-off-by: Zhang Yi
    Reviewed-by: Jiang Biao
    Tested-by: Ma Chenggong
    Reviewed-by: 'Mel Gorman'
    Acked-by: 'Darren Hart'
    Cc: 'Peter Zijlstra'
    Link: http://lkml.kernel.org/r/000101ce71a6%24a83c5880%24f8b50980%24@com
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Zhang Yi
     

19 Jun, 2013

1 commit


13 Jun, 2013

7 commits

  • Sasha Levin noticed that the warning introduced by commit 6286ae9
    ("slab: Return NULL for oversized allocations") is being triggered:

    WARNING: CPU: 15 PID: 21519 at mm/slab_common.c:376 kmalloc_slab+0x2f/0xb0()
    can: request_module (can-proto-4) failed.
    mpoa: proc_mpc_write: could not parse ''
    Modules linked in:
    CPU: 15 PID: 21519 Comm: trinity-child15 Tainted: G W 3.10.0-rc4-next-20130607-sasha-00011-gcd78395-dirty #2
    0000000000000009 ffff880020a95e30 ffffffff83ff4041 0000000000000000
    ffff880020a95e68 ffffffff8111fe12 fffffffffffffff0 00000000000082d0
    0000000000080000 0000000000080000 0000000001400000 ffff880020a95e78
    Call Trace:
    [] dump_stack+0x4e/0x82
    [] warn_slowpath_common+0x82/0xb0
    [] warn_slowpath_null+0x15/0x20
    [] kmalloc_slab+0x2f/0xb0
    [] __kmalloc+0x24/0x4b0
    [] ? security_capable+0x13/0x20
    [] ? pipe_fcntl+0x107/0x210
    [] pipe_fcntl+0x107/0x210
    [] ? fget_raw_light+0x130/0x3f0
    [] SyS_fcntl+0x60b/0x6a0
    [] tracesys+0xe1/0xe6

    Andrew Morton writes:

    __GFP_NOWARN is frequently used by kernel code to probe for "how big
    an allocation can I get". That's a bit lame, but it's used on slow
    paths and is pretty simple.

    However, SLAB would still spew a warning when a big allocation happens
    if the __GFP_NOWARN flag is _not_ set, in order to expose kernel bugs.

    Signed-off-by: Sasha Levin
    [ penberg@kernel.org: improve changelog ]
    Signed-off-by: Pekka Enberg

    Sasha Levin
     
  • The lockless reclaim hierarchy iterator currently has a misplaced
    barrier that can lead to use-after-free crashes.

    The reclaim hierarchy iterator consist of a sequence count and a
    position pointer that are read and written locklessly, with memory
    barriers enforcing ordering.

    The write side sets the position pointer first, then updates the
    sequence count to "publish" the new position. Likewise, the read side
    must read the sequence count first, then the position. If the sequence
    count is up to date, it's guaranteed that the position is up to date as
    well:

    writer:                          reader:
    iter->position = position        if iter->sequence == expected:
    smp_wmb()                            smp_rmb()
    iter->sequence = sequence            position = iter->position

    However, the read side barrier is currently misplaced, which can lead to
    dereferencing stale position pointers that no longer point to valid
    memory. Fix this.
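
    A user-space C11 sketch of the intended ordering (an illustrative
    analogue only; the kernel code uses smp_wmb()/smp_rmb(), which the
    release/acquire operations stand in for here):

        #include <stdatomic.h>
        #include <stdio.h>

        struct iter {
                void *position;
                atomic_uint sequence;
        };

        static void writer(struct iter *it, void *pos, unsigned int seq)
        {
                it->position = pos;                          /* set position first */
                atomic_store_explicit(&it->sequence, seq,
                                      memory_order_release); /* then publish       */
        }

        static void *reader(struct iter *it, unsigned int expected)
        {
                if (atomic_load_explicit(&it->sequence,
                                         memory_order_acquire) != expected)
                        return NULL;                         /* stale iterator      */
                return it->position;                         /* safe to dereference */
        }

        int main(void)
        {
                static int obj;
                struct iter it = { 0 };

                writer(&it, &obj, 1);
                printf("reader sees %p\n", reader(&it, 1));
                return 0;
        }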

    Signed-off-by: Johannes Weiner
    Reported-by: Tejun Heo
    Reviewed-by: Tejun Heo
    Acked-by: Michal Hocko
    Cc: KAMEZAWA Hiroyuki
    Cc: Glauber Costa
    Cc: [3.10+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • The bitmap accessed by bitops must be large enough to hold the required
    number of bits rounded up to a multiple of BITS_PER_LONG. And the
    bitmap must not be zeroed by memset() if the number of bits cleared is
    not a multiple of BITS_PER_LONG.

    This fixes incorrect zeroing and allocation size for frontswap_map. The
    incorrect zeroing part doesn't cause any problem because frontswap_map
    is freed just after zeroing. But the wrongly calculated allocation size
    may cause real problems.

    For 32bit systems, the allocation size of frontswap_map is about twice
    as large as the required size. For 64bit systems, the allocation size is
    smaller than required if the number of bits is not a multiple of
    BITS_PER_LONG.
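
    A small user-space sketch of the sizing rule above (mirroring what the
    kernel's BITS_TO_LONGS() macro computes; the page count is illustrative):

        #include <limits.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define BITS_PER_LONG (CHAR_BIT * sizeof(long))
        #define BITS_TO_LONGS(nbits) (((nbits) + BITS_PER_LONG - 1) / BITS_PER_LONG)

        int main(void)
        {
                unsigned long maxpages = 100001;  /* one bit per swap page */
                size_t len = BITS_TO_LONGS(maxpages) * sizeof(long);
                unsigned long *map = malloc(len);

                if (!map)
                        return 1;
                /* clear whole longs, not a raw maxpages / 8 bytes */
                memset(map, 0, len);
                printf("%lu bits -> %zu bytes\n", maxpages, len);
                free(map);
                return 0;
        }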

    Signed-off-by: Akinobu Mita
    Cc: Konrad Rzeszutek Wilk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • When we have a page fault for an address which is backed by a hugepage
    under migration, the kernel can't wait correctly and busy loops on the
    hugepage fault until the migration finishes. As a result, users who try
    to kick hugepage migration (via soft offlining, for example) occasionally
    experience a long delay or soft lockup.

    This is because pte_offset_map_lock() can't get a correct migration entry
    or a correct page table lock for hugepage. This patch introduces
    migration_entry_wait_huge() to solve this.

    Signed-off-by: Naoya Horiguchi
    Reviewed-by: Rik van Riel
    Reviewed-by: Wanpeng Li
    Reviewed-by: Michal Hocko
    Cc: Mel Gorman
    Cc: Andi Kleen
    Cc: KOSAKI Motohiro
    Cc: [2.6.35+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     
  • The watermark check consists of two sub-checks. The first one is:

        if (free_pages <= min + lowmem_reserve)
            return false;

    The check assures that there is a minimal amount of RAM in the zone. If
    CMA is used then free_pages is reduced by the number of free pages in
    CMA prior to the above check:

        if (!(alloc_flags & ALLOC_CMA))
            free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);

    This prevents the zone from being drained of pages available for
    non-movable allocations.

    The second check prevents the zone from getting too fragmented:

        for (o = 0; o < order; o++) {
            free_pages -= z->free_area[o].nr_free << o;
            min >>= 1;
            if (free_pages <= min)
                return false;
        }

    The field z->free_area[o].nr_free is equal to the number of free pages
    including free CMA pages. Therefore the CMA pages are subtracted twice.
    This may cause a false positive fail of __zone_watermark_ok() if the CMA
    area gets strongly fragmented. In such a case there are many 0-order
    free pages located in CMA. Those pages are subtracted twice, therefore
    they will quickly drain free_pages during the check against
    fragmentation. The test fails even though there are many free non-CMA
    pages in the zone.

    This patch fixes the issue by subtracting CMA pages only for the purpose
    of the (free_pages <= min + lowmem_reserve) check.

    Signed-off-by: Tomasz Stanislawski
    Signed-off-by: Kyungmin Park
    Tested-by: Laura Abbott
    Cc: Bartlomiej Zolnierkiewicz
    Acked-by: Minchan Kim
    Cc: Mel Gorman
    Tested-by: Marek Szyprowski
    Cc: [3.7+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tomasz Stanislawski
     
  • read_swap_cache_async() can race against get_swap_page(), and stumble
    across a SWAP_HAS_CACHE entry in the swap map whose page wasn't brought
    into the swapcache yet.

    This swap_map state is expected to be transitory, but the actual
    placement of discard at scan_swap_map() inserts a wait for I/O
    completion, thus making the thread at read_swap_cache_async() loop
    around its -EEXIST case, while the other end at get_swap_page() is
    scheduled away at scan_swap_map(). This can leave the system deadlocked
    if the I/O completion happens to be waiting on the CPU waitqueue where
    read_swap_cache_async() is busy looping and !CONFIG_PREEMPT.

    This patch introduces a cond_resched() call to make the aforementioned
    read_swap_cache_async() busy loop bail out when necessary, thus
    avoiding the subtle race window.

    Signed-off-by: Rafael Aquini
    Acked-by: Johannes Weiner
    Acked-by: KOSAKI Motohiro
    Acked-by: Hugh Dickins
    Cc: Shaohua Li
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael Aquini
     
  • struct memcg_cache_params has a union. Different parts of this union
    are used for root and non-root caches. A part with destroying work is
    used only for non-root caches.

    BUG: unable to handle kernel paging request at 0000000fffffffe0
    IP: kmem_cache_alloc+0x41/0x1f0
    Modules linked in: netlink_diag af_packet_diag udp_diag tcp_diag inet_diag unix_diag ip6table_filter ip6_tables i2c_piix4 virtio_net virtio_balloon microcode i2c_core pcspkr floppy
    CPU: 0 PID: 1929 Comm: lt-vzctl Tainted: G D 3.10.0-rc1+ #2
    Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
    RIP: kmem_cache_alloc+0x41/0x1f0
    Call Trace:
    getname_flags.part.34+0x30/0x140
    getname+0x38/0x60
    do_sys_open+0xc5/0x1e0
    SyS_open+0x22/0x30
    system_call_fastpath+0x16/0x1b
    Code: f4 53 48 83 ec 18 8b 05 8e 53 b7 00 4c 8b 4d 08 21 f0 a8 10 74 0d 4c 89 4d c0 e8 1b 76 4a 00 4c 8b 4d c0 e9 92 00 00 00 4d 89 f5 8b 45 00 65 4c 03 04 25 48 cd 00 00 49 8b 50 08 4d 8b 38 49
    RIP [] kmem_cache_alloc+0x41/0x1f0

    Signed-off-by: Andrey Vagin
    Cc: Konstantin Khlebnikov
    Cc: Glauber Costa
    Cc: Johannes Weiner
    Cc: Balbir Singh
    Cc: KAMEZAWA Hiroyuki
    Reviewed-by: Michal Hocko
    Cc: Li Zefan
    Cc: [3.9.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Vagin
     

06 Jun, 2013

1 commit

  • Since the introduction of preemptible mmu_gather, TLB fast mode has been
    broken. TLB fast mode relies on there being absolutely no concurrency;
    it frees pages first and invalidates TLBs later.

    However now we can get concurrency and stuff goes *bang*.

    This patch removes all tlb_fast_mode() code; it was found the better
    option vs trying to patch the hole by entangling tlb invalidation with
    the scheduler.

    Cc: Thomas Gleixner
    Cc: Russell King
    Cc: Tony Luck
    Reported-by: Max Filippov
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

25 May, 2013

6 commits

  • A panic can be caused by simply cat'ing /proc/<pid>/smaps while an
    application has a VM_PFNMAP range. It happened in-house when a
    benchmarker was trying to decipher the memory layout of his program.

    /proc/<pid>/smaps and similar walks through a user page table should not
    be looking at VM_PFNMAP areas.

    Certain tests in walk_page_range() (specifically split_huge_page_pmd())
    assume that all the mapped PFNs are backed with page structures. And
    this is not usually true for VM_PFNMAP areas. This can result in panics
    on kernel page faults when attempting to address those page structures.

    There are a half dozen callers of walk_page_range() that walk through a
    task's entire page table (as N. Horiguchi pointed out). So rather than
    change all of them, this patch changes just walk_page_range() to ignore
    VM_PFNMAP areas.

    The logic of hugetlb_vma() is moved back into walk_page_range(), as we
    want to test any vma in the range.

    VM_PFNMAP areas are used by:
    - graphics memory manager gpu/drm/drm_gem.c
    - global reference unit sgi-gru/grufile.c
    - sgi special memory char/mspec.c
    - and probably several out-of-tree modules

    [akpm@linux-foundation.org: remove now-unused hugetlb_vma() stub]
    Signed-off-by: Cliff Wickman
    Reviewed-by: Naoya Horiguchi
    Cc: Mel Gorman
    Cc: Andrea Arcangeli
    Cc: Dave Hansen
    Cc: David Sterba
    Cc: Johannes Weiner
    Cc: KOSAKI Motohiro
    Cc: "Kirill A. Shutemov"
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cliff Wickman
     
  • Fix printk format warnings in mm/memory_hotplug.c by using "%pa":

    mm/memory_hotplug.c: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 2 has type 'resource_size_t' [-Wformat]
    mm/memory_hotplug.c: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'resource_size_t' [-Wformat]

    Signed-off-by: Randy Dunlap
    Reported-by: Geert Uytterhoeven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     
  • We should not use set_pmd_at to update a pmd_t with a pgtable_t pointer.
    set_pmd_at is used to set a pmd with huge pte entries, and architectures
    like ppc64 clear a few flags from the pte when saving a new entry.
    Without this change we observe bad pte errors like the one below on ppc64
    with THP enabled.

    BUG: Bad page map in process ld mm=0xc000001ee39f4780 pte:7fc3f37848000001 pmd:c000001ec0000000

    Signed-off-by: Aneesh Kumar K.V
    Cc: Hugh Dickins
    Cc: Benjamin Herrenschmidt
    Reviewed-by: Andrea Arcangeli
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Aneesh Kumar K.V
     
  • The page 'new' during MIGRATION can't be flushed with flush_cache_page().
    Using flush_cache_page(vma, addr, pfn) is justified only if the page is
    already placed in the process page table, and that is done right after
    flush_cache_page(). But without that, the arch function has no knowledge
    of the process PTE and does nothing.

    Besides that, flush_cache_page() flushes an application cache page, but
    the kernel has a different virtual address for the page and has dirtied it.

    Replace it with flush_dcache_page(new) which is the proper usage.

    The old page is flushed in try_to_unmap_one() before migration.

    This bug shows up on a Sead3 board with an M14Kc MIPS CPU without cache
    aliasing (but a Harvard arch with separate I and D caches) in a tight
    memory environment (128MB), every 1-3 days of SOAK testing. It fails in
    cc1 during a kernel build (SIGILL, SIGBUS, SIGSEGV) if CONFIG_COMPACTION
    is switched ON.

    Signed-off-by: Leonid Yegoshin
    Cc: Leonid Yegoshin
    Acked-by: Rik van Riel
    Cc: Michal Hocko
    Acked-by: Mel Gorman
    Cc: Ralf Baechle
    Cc: Russell King
    Cc: David Miller
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Leonid Yegoshin
     
  • Commit 0c59b89c81ea ("mm: memcg: push down PageSwapCache check into
    uncharge entry functions") added a VM_BUG_ON() on PageSwapCache in the
    uncharge path after checking that page flag once, assuming that the
    state is stable in all paths, but this is not the case and the condition
    triggers in user environments. An uncharge after the last page table
    reference to the page goes away can race with reclaim adding the page to
    swap cache.

    Swap cache pages are usually uncharged when they are freed after
    swapout, from a path that also handles swap usage accounting and memcg
    lifetime management. However, since the last page table reference is
    gone and thus no references to the swap slot left, the swap slot will be
    freed shortly when reclaim attempts to write the page to disk. The
    whole swap accounting is not even necessary.

    So while the race condition for which this VM_BUG_ON was added is real
    and actually existed all along, there are no negative effects. Remove
    the VM_BUG_ON again.

    Reported-by: Heiko Carstens
    Reported-by: Lingzhu Xiang
    Signed-off-by: Johannes Weiner
    Acked-by: Hugh Dickins
    Acked-by: Michal Hocko
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Commit 751efd8610d3 ("mmu_notifier_unregister NULL Pointer deref and
    multiple ->release()") breaks the fix 3ad3d901bbcf ("mm: mmu_notifier:
    fix freed page still mapped in secondary MMU").

    Since hlist_for_each_entry_rcu() has changed now, we cannot revert that
    patch directly, so this patch reverts the commit and simply fixes the bug
    spotted by that patch.

    This bug spotted by commit 751efd8610d3 is:

    There is a race condition between mmu_notifier_unregister() and
    __mmu_notifier_release().

    Assume two tasks, one calling mmu_notifier_unregister() as a result
    of a filp_close() ->flush() callout (task A), and the other calling
    mmu_notifier_release() from an mmput() (task B).

             A                                B
    t1                                        srcu_read_lock()
    t2       if (!hlist_unhashed())
    t3                                        srcu_read_unlock()
    t4       srcu_read_lock()
    t5                                        hlist_del_init_rcu()
    t6                                        synchronize_srcu()
    t7       srcu_read_unlock()
    t8       hlist_del_rcu()    <--- NULL pointer deref

    The other issue spotted by that commit, the multiple ->release()
    callouts, is much less of a concern: it is really rare, and the later
    call of ->release should be fast since all the pages have already been
    released by the first call. Anyway, this issue should be fixed in a
    separate patch.

    -stable suggestions: any version that has commit 751efd8610d3 needs this
    backported. The oldest version I find with that commit is 3.0-stable.

    [akpm@linux-foundation.org: tweak comments]
    Signed-off-by: Xiao Guangrong
    Tested-by: Robin Holt
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Xiao Guangrong
     

22 May, 2013

1 commit

  • virt_to_page() is typically implemented as a macro containing a cast so
    that it will accept both pointers and unsigned long without causing a
    warning.

    But MIPS virt_to_page() uses virt_to_phys(), which is a function, so
    passing an unsigned long will cause a warning:

    CC mm/page_alloc.o
    mm/page_alloc.c: In function ‘free_reserved_area’:
    mm/page_alloc.c:5161:3: warning: passing argument 1 of ‘virt_to_phys’ makes pointer from integer without a cast [enabled by default]
    arch/mips/include/asm/io.h:119:100: note: expected ‘const volatile void *’ but argument is of type ‘long unsigned int’

    All others users of virt_to_page() in mm/ are passing a void *.

    Signed-off-by: Ralf Baechle
    Reported-by: Eunbong Song
    Cc: linux-kernel@vger.kernel.org
    Cc: linux-mm@kvack.org
    Cc: linux-mips@linux-mips.org
    Signed-off-by: Linus Torvalds

    Ralf Baechle
     

10 May, 2013

1 commit

  • Dave reported an oops triggered by trinity:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
    IP: newseg+0x10d/0x390
    PGD cf8c1067 PUD cf8c2067 PMD 0
    Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
    CPU: 2 PID: 7636 Comm: trinity-child2 Not tainted 3.9.0+#67
    ...
    Call Trace:
    ipcget+0x182/0x380
    SyS_shmget+0x5a/0x60
    tracesys+0xdd/0xe2

    This bug was introduced by commit af73e4d9506d ("hugetlbfs: fix mmap
    failure in unaligned size request").

    Reported-by: Dave Jones
    Cc:
    Signed-off-by: Li Zefan
    Reviewed-by: Naoya Horiguchi
    Acked-by: Rik van Riel
    Signed-off-by: Linus Torvalds

    Li Zefan
     

09 May, 2013

2 commits

  • Commit 8a965b3baa89 ("mm, slab_common: Fix bootstrap creation of kmalloc
    caches") introduced a regression that caused us to crash early during
    boot. The commit was introducing ordering of slab creation, making sure
    two odd-sized slabs were created after specific powers of two sizes.

    But, if any of the power of two slabs were created earlier during boot,
    slabs at index 1 or 2 might not get created at all. This patch makes
    sure none of the slabs get skipped.

    Tony Lindgren bisected this down to the offending commit, which really
    helped because bisect kept bringing me to almost but not quite this one.

    Signed-off-by: Chris Mason
    Acked-by: Christoph Lameter
    Acked-by: Tony Lindgren
    Acked-by: Soren Brinkmann
    Tested-by: Tetsuo Handa
    Tested-by: Konrad Rzeszutek Wilk
    Signed-off-by: Linus Torvalds

    Chris Mason
     
  • Pull block core updates from Jens Axboe:

    - Major bit is Kent's prep work for immutable bio vecs.

    - Stable candidate fix for a scheduling-while-atomic in the queue
    bypass operation.

    - Fix for the hang on exceeded rq->datalen 32-bit unsigned when merging
    discard bios.

    - Tejun's changes to convert the writeback thread pool to the generic
    workqueue mechanism.

    - Runtime PM framework; SCSI patches exist on top of these in James'
    tree.

    - A few random fixes.

    * 'for-3.10/core' of git://git.kernel.dk/linux-block: (40 commits)
    relay: move remove_buf_file inside relay_close_buf
    partitions/efi.c: replace useless kzalloc's by kmalloc's
    fs/block_dev.c: fix iov_shorten() criteria in blkdev_aio_read()
    block: fix max discard sectors limit
    blkcg: fix "scheduling while atomic" in blk_queue_bypass_start
    Documentation: cfq-iosched: update documentation help for cfq tunables
    writeback: expose the bdi_wq workqueue
    writeback: replace custom worker pool implementation with unbound workqueue
    writeback: remove unused bdi_pending_list
    aoe: Fix unitialized var usage
    bio-integrity: Add explicit field for owner of bip_buf
    block: Add an explicit bio flag for bios that own their bvec
    block: Add bio_alloc_pages()
    block: Convert some code to bio_for_each_segment_all()
    block: Add bio_for_each_segment_all()
    bounce: Refactor __blk_queue_bounce to not use bi_io_vec
    raid1: use bio_copy_data()
    pktcdvd: Use bio_reset() in disabled code to kill bi_idx usage
    pktcdvd: use bio_copy_data()
    block: Add bio_copy_data()
    ...

    Linus Torvalds
     

08 May, 2013

5 commits

  • Faster kernel compiles by way of fewer unnecessary includes.

    [akpm@linux-foundation.org: fix fallout]
    [akpm@linux-foundation.org: fix build]
    Signed-off-by: Kent Overstreet
    Cc: Zach Brown
    Cc: Felipe Balbi
    Cc: Greg Kroah-Hartman
    Cc: Mark Fasheh
    Cc: Joel Becker
    Cc: Rusty Russell
    Cc: Jens Axboe
    Cc: Asai Thambi S P
    Cc: Selvan Mani
    Cc: Sam Bradshaw
    Cc: Jeff Moyer
    Cc: Al Viro
    Cc: Benjamin LaHaise
    Reviewed-by: "Theodore Ts'o"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kent Overstreet
     
  • A bunch of performance improvements and cleanups Zach Brown and I have
    been working on. The code should be pretty solid at this point, though
    it could of course use more review and testing.

    The results in my testing are pretty impressive, particularly when an
    ioctx is being shared between multiple threads. In my crappy synthetic
    benchmark, with 4 threads submitting and one thread reaping completions,
    I saw overhead in the aio code go from ~50% (mostly ioctx lock
    contention) to low single digits. Performance with ioctx per thread
    improved too, but I'd have to rerun those benchmarks.

    The reason I've been focused on performance when the ioctx is shared is
    that for a fair number of real world completions, userspace needs the
    completions aggregated somehow - in practice people just end up
    implementing this aggregation in userspace today, but if it's done right
    we can do it much more efficiently in the kernel.

    Performance-wise, the end result of this patch series is that submitting
    a kiocb writes to _no_ shared cachelines - the penalty for sharing an
    ioctx is gone there. There's still going to be some cacheline
    contention when we deliver the completions to the aio ringbuffer (at
    least if you have interrupts being delivered on multiple cores, which
    for high end stuff you do) but I have a couple more patches not in this
    series that implement coalescing for that (by taking advantage of
    interrupt coalescing). With that, there's basically no bottlenecks or
    performance issues to speak of in the aio code.

    This patch:

    use_mm() is used in more places than just aio. There's no need to mention
    callers when describing the function.

    Signed-off-by: Zach Brown
    Signed-off-by: Kent Overstreet
    Cc: Felipe Balbi
    Cc: Greg Kroah-Hartman
    Cc: Mark Fasheh
    Cc: Joel Becker
    Cc: Rusty Russell
    Cc: Jens Axboe
    Cc: Asai Thambi S P
    Cc: Selvan Mani
    Cc: Sam Bradshaw
    Acked-by: Jeff Moyer
    Cc: Al Viro
    Cc: Benjamin LaHaise
    Reviewed-by: "Theodore Ts'o"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zach Brown
     
  • Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • The current kernel returns -EINVAL unless a given mmap length is
    "almost" hugepage aligned. This is because in sys_mmap_pgoff() the
    given length is passed to vm_mmap_pgoff() as it is without being aligned
    with hugepage boundary.

    This is a regression introduced in commit 40716e29243d ("hugetlbfs: fix
    alignment of huge page requests"), where alignment code is pushed into
    hugetlb_file_setup() and the variable len in caller side is not changed.

    To fix this, this patch partially reverts that commit and adds
    alignment code on the caller side. It also introduces hstate_sizelog()
    in order to get the proper hstate for the specified hugepage size.
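
    A rough user-space sketch of the caller-side alignment (the 2M huge page
    size is an assumption for illustration; the kernel derives the size from
    the requested hstate):

        #include <stdio.h>

        #define HPAGE_SIZE (2UL * 1024 * 1024)
        #define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

        int main(void)
        {
                unsigned long requested = 5 * 1024 * 1024 + 123; /* unaligned */
                unsigned long len = ALIGN_UP(requested, HPAGE_SIZE);

                printf("requested %lu -> aligned %lu\n", requested, len); /* 6M */
                return 0;
        }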

    Addresses https://bugzilla.kernel.org/show_bug.cgi?id=56881

    [akpm@linux-foundation.org: fix warning when CONFIG_HUGETLB_PAGE=n]
    Signed-off-by: Naoya Horiguchi
    Signed-off-by: Johannes Weiner
    Reported-by:
    Cc: Steven Truelove
    Cc: Jianguo Wu
    Cc: Hugh Dickins
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Naoya Horiguchi
     
  • This exports the amount of anonymous transparent hugepages for each
    memcg via the new "rss_huge" stat in memory.stat. The units are in
    bytes.

    This is helpful for determining the hugepage utilization of individual
    jobs on the system in comparison to rss, and for spotting opportunities
    where MADV_HUGEPAGE may help.

    The amount of anonymous transparent hugepages is also included in "rss"
    for backwards compatibility.

    Signed-off-by: David Rientjes
    Acked-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes