12 Nov, 2016

1 commit

  • When memory_failure() runs on a thp tail page after the pmd is split, we
    trigger the following VM_BUG_ON_PAGE():

    page:ffffd7cd819b0040 count:0 mapcount:0 mapping: (null) index:0x1
    flags: 0x1fffc000400000(hwpoison)
    page dumped because: VM_BUG_ON_PAGE(!page_count(p))
    ------------[ cut here ]------------
    kernel BUG at /src/linux-dev/mm/memory-failure.c:1132!

    memory_failure() passed the refcount and page lock from the tail page to
    the head page, which is not needed because we can pass any subpage to
    split_huge_page().
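
    For reference, a minimal sketch of the resulting flow (the variable name
    'p' and the error path are assumptions, not the verbatim patch); it relies
    on split_huge_page() accepting any subpage:

    lock_page(p);                        /* locks the compound head internally */
    if (unlikely(split_huge_page(p))) {  /* nonzero return: split failed */
            unlock_page(p);
            put_hwpoison_page(p);        /* drop the pin we hold on 'p' */
            return -EBUSY;
    }
    unlock_page(p);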

    Fixes: 61f5d698cc97 ("mm: re-enable THP")
    Link: http://lkml.kernel.org/r/1477961577-7183-1-git-send-email-n-horiguchi@ah.jp.nec.com
    Signed-off-by: Naoya Horiguchi
    Cc: [4.5+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


29 Jul, 2016

2 commits

  • dequeue_hwpoisoned_huge_page() can be called without the page lock held,
    so let's remove the incorrect comment.

    The reason why the page lock is not really needed is that
    dequeue_hwpoisoned_huge_page() checks page_huge_active() inside
    hugetlb_lock, which allows us to avoid trying to dequeue a hugepage that
    is just allocated but not yet linked to the active list, even without
    taking the page lock.
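
    A simplified sketch of that check (abbreviated; the hstate 'h', 'ret',
    and the exact free-list bookkeeping are assumptions based on the
    description above):

    spin_lock(&hugetlb_lock);
    /*
     * Both tests run under hugetlb_lock, so a hugepage that is between
     * allocation and activation cannot be dequeued here, even though we
     * never take the page lock.
     */
    if (!page_huge_active(hpage) && !page_count(hpage)) {
            list_del_init(&hpage->lru);     /* pull it off the free list */
            set_page_refcounted(hpage);
            h->free_huge_pages--;
            ret = 0;
    }
    spin_unlock(&hugetlb_lock);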

    Link: http://lkml.kernel.org/r/20160720092901.GA15995@www9186uo.sakura.ne.jp
    Signed-off-by: Naoya Horiguchi
    Reported-by: Zhan Chen
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • This moves the LRU lists from the zone to the node and related data such
    as counters, tracing, congestion tracking and writeback tracking.

    Unfortunately, due to reclaim and compaction retry logic, it is
    necessary to account for the number of LRU pages at both the zone and
    the node level. Most reclaim logic is based on the node counters, but
    the retry logic uses the zone counters, which do not distinguish
    inactive and active sizes. It would be possible to leave the LRU
    counters on a per-zone basis, but it's a heavier calculation across
    multiple cache lines that is much more frequent than the retry checks.

    Other than the LRU counters, this is mostly a mechanical patch, but note
    that it introduces a number of anomalies. For example, the scans are
    per-zone but use per-node counters. We also mark a node as congested
    when a zone is congested. This causes weird problems that are fixed
    later but makes the series easier to review.

    In the event that there is excessive overhead on 32-bit systems due to
    the LRU lists being per-node, there are two potential solutions:

    1. Long-term isolation of highmem pages when reclaim is lowmem

    When pages are skipped, they are immediately added back onto the LRU
    list. If lowmem reclaim persists for long periods of time, the same
    highmem pages get continually scanned. The idea would be that lowmem
    reclaim keeps those pages on a separate list until a reclaim for highmem
    pages arrives and splices the highmem pages back onto the LRU. It could
    potentially be implemented similarly to the UNEVICTABLE list.

    That would reduce the skip rate, with the potential corner case that
    highmem pages have to be scanned and reclaimed to free lowmem slab pages.

    2. Linear scan lowmem pages if the initial LRU shrink fails

    This will break LRU ordering but may be preferable and faster during
    memory pressure than skipping LRU pages.

    Link: http://lkml.kernel.org/r/1467970510-21195-4-git-send-email-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Johannes Weiner
    Acked-by: Vlastimil Babka
    Cc: Hillf Danton
    Cc: Joonsoo Kim
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


21 May, 2016

1 commit

  • HWPoison was specific to some particular x86 platforms, and it was often
    seen as a high-level machine check handler; therefore 'MCE' was used as
    the format prefix of printk(). However, 'PowerNV' has also used HWPoison
    for handling memory errors[1], so 'MCE' is no longer a suitable prefix
    for memory_failure.c.

    Additionally, 'MCE' and 'Memory failure' have different contexts: the
    former belongs to exception context while the latter belongs to process
    context. Furthermore, HWPoison can also be used for off-lining
    sub-health pages that do not trigger any machine check exception.

    This patch aims to replace 'MCE' with a more appropriate prefix.

    [1] commit 75eb3d9b60c2 ("powerpc/powernv: Get FSP memory errors
    and plumb into memory poison infrastructure.")

    Signed-off-by: Chen Yucong
    Acked-by: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


05 Apr, 2016

1 commit

  • The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
    time ago with the promise that one day it would be possible to implement
    the page cache with bigger chunks than PAGE_SIZE.

    This promise never materialized, and it is unlikely it ever will.

    We have many places where PAGE_CACHE_SIZE is assumed to be equal to
    PAGE_SIZE, and it's a constant source of confusion whether the
    PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
    especially on the border between fs and mm.

    Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
    breakage to be doable.

    Let's stop pretending that pages in page cache are special. They are
    not.

    The changes are pretty straight-forward:

    - << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <nothing>;

    - >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <nothing>;

    - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};

    - page_cache_get() -> get_page();

    - page_cache_release() -> put_page();

    This patch contains automated changes generated with coccinelle using
    the script below. For some reason, coccinelle doesn't patch header
    files, so I've called spatch for them manually.

    The only adjustment after coccinelle is a revert of the changes to the
    PAGE_CACHE_ALIGN definition: we are going to drop it later.

    There are a few places in the code that coccinelle didn't reach. I'll
    fix them manually in a separate patch. Comments and documentation will
    also be addressed in a separate patch.

    virtual patch

    @@
    expression E;
    @@
    - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    expression E;
    @@
    - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    @@
    - PAGE_CACHE_SHIFT
    + PAGE_SHIFT

    @@
    @@
    - PAGE_CACHE_SIZE
    + PAGE_SIZE

    @@
    @@
    - PAGE_CACHE_MASK
    + PAGE_MASK

    @@
    expression E;
    @@
    - PAGE_CACHE_ALIGN(E)
    + PAGE_ALIGN(E)

    @@
    expression E;
    @@
    - page_cache_get(E)
    + get_page(E)

    @@
    expression E;
    @@
    - page_cache_release(E)
    + put_page(E)
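
    Concretely, the transformations amount to the following on a made-up
    call site (illustrative only, not a hunk from the patch):

    /* before */
    index  = pos >> PAGE_CACHE_SHIFT;
    offset = pos & ~PAGE_CACHE_MASK;
    nr     = count << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
    page_cache_get(page);
    page_cache_release(page);

    /* after */
    index  = pos >> PAGE_SHIFT;
    offset = pos & ~PAGE_MASK;
    nr     = count;                 /* the now-zero shift is dropped */
    get_page(page);
    put_page(page);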

    Signed-off-by: Kirill A. Shutemov
    Acked-by: Michal Hocko
    Signed-off-by: Linus Torvalds


18 Mar, 2016

1 commit

  • Most of the mm subsystem uses pr_<level> so make it consistent.

    Miscellanea:

    - Realign arguments
    - Add missing newline to format
    - kmemleak-test.c has a "kmemleak: " prefix added to the
    "Kmemleak testing" logging message via pr_fmt

    Signed-off-by: Joe Perches
    Acked-by: Tejun Heo [percpu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


16 Jan, 2016

6 commits

  • Currently memory_failure() doesn't handle the non-anonymous thp case,
    because we can hardly expect the error handling to be successful, and it
    can just hit some corner case which results in a BUG_ON or something
    severe like that. This is also the case for the soft offline code, so
    let's handle it in the same way there.

    The original code has an MF_COUNT_INCREASED check before
    put_hwpoison_page(), but it's unnecessary because get_any_page() has
    already been called when running on this code path, which takes a
    refcount of the target page regardless of the flag. So this patch also
    removes it.

    [akpm@linux-foundation.org: fix build]
    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • soft_offline_page() has some deeply indented code, which is a sign that
    it needs a cleanup. So let's do that. No functional change.

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Some mm-related BUG_ON()s could trigger from hwpoison code due to recent
    changes in the thp refcounting rules. This patch fixes them up.

    In the new refcounting, we no longer use tail->_mapcount to keep the
    tail's refcount, and thereby we can simplify get/put_hwpoison_page().

    Another change is that the tail's refcount is no longer transferred to
    the raw page during thp split (more precisely, under the new rules we
    don't take a refcount on the tail page any more). So when we need a thp
    split, we have to transfer the refcount properly to the 4kB
    soft-offlined page before migration.

    The thp split code goes into the core code only when the precheck
    (total_mapcount(head) == page_count(head) - 1) passes, to avoid a
    useless split, where we assume that one refcount is held by the caller
    of the thp split and the others are taken via mapping. To meet this
    assumption, this patch moves the thp split part of soft_offline_page()
    after get_any_page().
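
    A simplified sketch of the resulting ordering (names and the error
    handling are assumptions based on the description above):

    ret = get_any_page(page, pfn, flags);   /* takes our refcount */
    if (ret == 1 && PageTransHuge(head)) {
            /*
             * Precheck: our pin must be the only non-mapping reference,
             * otherwise the split would fail anyway.
             */
            if (total_mapcount(head) != page_count(head) - 1 ||
                unlikely(split_huge_page(head))) {
                    put_hwpoison_page(page);
                    return -EBUSY;
            }
    }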

    [akpm@linux-foundation.org: remove unneeded #define, per Kirill]
    Signed-off-by: Naoya Horiguchi
    Acked-by: Kirill A. Shutemov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • I saw the following BUG_ON triggered in a testcase where a process calls
    madvise(MADV_SOFT_OFFLINE) on thps, along with a background process that
    repeatedly runs the migratepages command (doing ping-pong among
    different NUMA nodes) against the first process:

    Soft offlining page 0x60000 at 0x700000600000
    __get_any_page: 0x60000 free buddy page
    page:ffffea0001800000 count:0 mapcount:-127 mapping: (null) index:0x1
    flags: 0x1fffc0000000000()
    page dumped because: VM_BUG_ON_PAGE(atomic_read(&page->_count) == 0)
    ------------[ cut here ]------------
    kernel BUG at /src/linux-dev/include/linux/mm.h:342!
    invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
    Modules linked in: cfg80211 rfkill crc32c_intel serio_raw virtio_balloon i2c_piix4 virtio_blk virtio_net ata_generic pata_acpi
    CPU: 3 PID: 3035 Comm: test_alloc_gene Tainted: G O 4.4.0-rc8-v4.4-rc8-160107-1501-00000-rc8+ #74
    Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
    task: ffff88007c63d5c0 ti: ffff88007c210000 task.ti: ffff88007c210000
    RIP: 0010:[] [] put_page+0x5c/0x60
    RSP: 0018:ffff88007c213e00 EFLAGS: 00010246
    Call Trace:
    put_hwpoison_page+0x4e/0x80
    soft_offline_page+0x501/0x520
    SyS_madvise+0x6bc/0x6f0
    entry_SYSCALL_64_fastpath+0x12/0x6a
    Code: 8b fc ff ff 5b 5d c3 48 89 df e8 b0 fa ff ff 48 89 df 31 f6 e8 c6 7d ff ff 5b 5d c3 48 c7 c6 08 54 a2 81 48 89 df e8 a4 c5 01 00 0b 66 90 66 66 66 66 90 55 48 89 e5 41 55 41 54 53 48 8b 47
    RIP [] put_page+0x5c/0x60
    RSP

    The root cause resides in get_any_page(), which retries getting a
    refcount of the page to be soft-offlined. This function calls
    put_hwpoison_page(), expecting that the target page is put back on the
    LRU list. But it can also be freed to the buddy allocator, so the
    second check needs to handle that case.

    Fixes: af8fae7c0886 ("mm/memory-failure.c: clean up soft_offline_page()")
    Signed-off-by: Naoya Horiguchi
    Cc: Sasha Levin
    Cc: Aneesh Kumar K.V
    Cc: Vlastimil Babka
    Cc: Jerome Marchand
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Cc: Dave Hansen
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Steve Capper
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: [3.9+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • We're going to use migration entries instead of compound_lock() to
    stabilize page refcounts. Setting up and removing migration entries
    requires the page to be locked.

    Some of split_huge_page() callers already have the page locked. Let's
    require everybody to lock the page before calling split_huge_page().

    Signed-off-by: Kirill A. Shutemov
    Tested-by: Sasha Levin
    Tested-by: Aneesh Kumar K.V
    Acked-by: Vlastimil Babka
    Acked-by: Jerome Marchand
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Cc: Dave Hansen
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Naoya Horiguchi
    Cc: Steve Capper
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Christoph Lameter
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • lock_page() must operate on the whole compound page; it doesn't make
    much sense to lock part of a compound page. Change the code to use the
    head page's PG_locked if a tail page is passed.

    This patch also gets rid of the custom helper functions
    __set_page_locked() and __clear_page_locked(). They are replaced with
    helpers generated by __SETPAGEFLAG/__CLEARPAGEFLAG. Passing tail pages
    to these helpers would trigger VM_BUG_ON().

    SLUB uses PG_locked as a bit spinlock. IIUC, tail pages should never
    appear there. A VM_BUG_ON() is added to make sure that this assumption
    is correct.
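
    The head-page redirect itself is small; a sketch of the shape of the
    change (simplified from the pagemap helpers):

    static inline int trylock_page(struct page *page)
    {
            page = compound_head(page);     /* tail pages lock their head */
            return likely(!test_and_set_bit_lock(PG_locked, &page->flags));
    }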

    [akpm@linux-foundation.org: fix fs/cifs/file.c]
    Signed-off-by: Kirill A. Shutemov
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Cc: Dave Hansen
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Vlastimil Babka
    Cc: Christoph Lameter
    Cc: Naoya Horiguchi
    Cc: Steve Capper
    Cc: "Aneesh Kumar K.V"
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Jerome Marchand
    Cc: Jérôme Glisse
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


07 Nov, 2015

1 commit

  • Hugh has pointed out that a compound_head() call can be unsafe in some
    contexts. Here's one example:

    CPU0                                    CPU1

    isolate_migratepages_block()
      page_count()
        compound_head()
          !!PageTail() == true
                                            put_page()
                                              tail->first_page = NULL
          head = tail->first_page
                                            alloc_pages(__GFP_COMP)
                                              prep_compound_page()
                                                tail->first_page = head
                                                __SetPageTail(p);
          !!PageTail() == true
        <head == NULL dereferenced>

    The race is purely theoretical. I don't think it's possible to trigger
    it in practice. But who knows.

    We can fix the race by changing how we encode PageTail() and
    compound_head() within struct page, so that both can be updated in one
    shot.

    The patch introduces page->compound_head in the third double word block,
    in front of compound_dtor and compound_order. Bit 0 encodes PageTail()
    and the rest of the bits are a pointer to the head page if bit 0 is set.
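
    With that encoding, reading and setting the head pointer becomes a
    single word access (a sketch matching the description above):

    static inline void set_compound_head(struct page *page, struct page *head)
    {
            WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
    }

    static inline struct page *compound_head(struct page *page)
    {
            unsigned long head = READ_ONCE(page->compound_head);

            if (unlikely(head & 1))
                    return (struct page *)(head - 1);  /* bit 0 set: tail */
            return page;
    }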

    The patch moves page->pmd_huge_pte out of the word, just in case an
    architecture defines pgtable_t as something that can have bit 0 set.

    hugetlb_cgroup uses page->lru.next in the second tail page to store a
    pointer to struct hugetlb_cgroup. The patch switches it to use
    page->private in the second tail page instead. The space is free since
    ->first_page is removed from the union.

    The patch also opens up the possibility of removing the
    HUGETLB_CGROUP_MIN_ORDER limitation, since there's now space in the
    first tail page to store the struct hugetlb_cgroup pointer. But that's
    out of scope for this patch.

    That means page->compound_head shares storage space with:

    - page->lru.next;
    - page->next;
    - page->rcu_head.next;

    That's too long a list to be absolutely sure, but it looks like nobody
    uses bit 0 of that word.

    page->rcu_head.next is guaranteed[1] to have bit 0 clear as long as we
    use call_rcu(), call_rcu_bh(), call_rcu_sched(), or call_srcu(). But a
    future call_rcu_lazy() is not allowed, as it makes use of the bit and
    we could get a false positive PageTail().

    [1] http://lkml.kernel.org/g/20150827163634.GD4029@linux.vnet.ibm.com

    Signed-off-by: Kirill A. Shutemov
    Acked-by: Michal Hocko
    Reviewed-by: Andrea Arcangeli
    Cc: Hugh Dickins
    Cc: David Rientjes
    Cc: Vlastimil Babka
    Acked-by: Paul E. McKenney
    Cc: Aneesh Kumar K.V
    Cc: Andi Kleen
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Sergey Senozhatsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


06 Nov, 2015

1 commit

  • Currently the kernel prints out the result of every single unpoison
    event, which is not necessary because unpoison is purely a testing
    feature and testers get little or no information from lots of lines of
    unpoison log storm. So this patch ratelimits the printk in
    unpoison_memory().

    This patch introduces a file-local ratelimit_state, which adds 64 bytes
    to memory-failure.o. If we applied pr_info_ratelimited() to the 8
    callsites instead, 256 bytes would be added, so this is a win.
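
    A sketch of the mechanism, using the standard <linux/ratelimit.h>
    helpers (the message text here is illustrative):

    static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
                                  DEFAULT_RATELIMIT_BURST);

    /* __ratelimit() returns true while output is still allowed */
    if (__ratelimit(&unpoison_rs))
            pr_info("MCE: Software-unpoisoned page %#lx\n", pfn);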

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Wanpeng Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


11 Sep, 2015

1 commit

  • Hwpoison allows filtering pages by memory cgroup inode. Currently, it
    calls try_get_mem_cgroup_from_page() to obtain the cgroup from a page
    and then its ino using cgroup_ino(), but now we have a helper method
    for that, page_cgroup_ino(), so use it instead.

    This patch also loosens the hwpoison memcg filter dependency rules: it
    makes the filter depend on CONFIG_MEMCG instead of CONFIG_MEMCG_SWAP,
    because the hwpoison memcg filter does not require anything (nor did it
    ever) from the CONFIG_MEMCG_SWAP side.
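
    A sketch of the simplified filter (close to the shape described above;
    hwpoison_filter_memcg is the existing module parameter holding the
    target cgroup ino):

    extern u64 hwpoison_filter_memcg;       /* target memcg ino (module param) */

    static int hwpoison_filter_task(struct page *p)
    {
            if (!hwpoison_filter_memcg)
                    return 0;                       /* filter disabled */

            if (page_cgroup_ino(p) != hwpoison_filter_memcg)
                    return -EINVAL;                 /* page not in target memcg */

            return 0;
    }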

    Signed-off-by: Vladimir Davydov
    Reviewed-by: Andres Lagar-Cavilla
    Cc: Minchan Kim
    Cc: Raghavendra K T
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Greg Thelen
    Cc: Michel Lespinasse
    Cc: David Rientjes
    Cc: Pavel Emelyanov
    Cc: Cyrill Gorcunov
    Cc: Jonathan Corbet
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


09 Sep, 2015

9 commits

  • alloc_pages_exact_node() was introduced in commit 6484eb3e2a81 ("page
    allocator: do not check NUMA node ID when the caller knows the node is
    valid") as an optimized variant of alloc_pages_node() that doesn't fall
    back to the current node for nid == NUMA_NO_NODE. Unfortunately the
    name of the function can easily suggest that the allocation is
    restricted to the given node and fails otherwise. In truth, the node is
    only preferred, unless __GFP_THISNODE is passed among the gfp flags.

    The misleading name has led to mistakes in the past, see for example
    commits 5265047ac301 ("mm, thp: really limit transparent hugepage
    allocation to local node") and b360edb43f8e ("mm, mempolicy:
    migrate_to_node should only migrate to node").

    Another issue with the name is that there's a family of
    alloc_pages_exact*() functions where 'exact' means exact size (instead
    of page order), which leads to more confusion.

    To prevent further mistakes, this patch effectively renames
    alloc_pages_exact_node() to __alloc_pages_node() to better convey that
    it's an optimized variant of alloc_pages_node() not intended for general
    usage. Both functions get described in comments.

    Providing a real convenience function for allocations restricted to a
    node has also been considered, but the majority opinion seems to be
    that __GFP_THISNODE already provides that functionality and we
    shouldn't duplicate the API needlessly. The number of users would be
    small anyway.

    Existing callers of alloc_pages_exact_node() are simply converted to
    call __alloc_pages_node(), with the exception of sba_alloc_coherent()
    which open-codes the check for NUMA_NO_NODE, so it is converted to use
    alloc_pages_node() instead. This means it no longer performs some
    VM_BUG_ON checks, and since the current check for nid in
    alloc_pages_node() uses a 'nid < 0' comparison (which includes
    NUMA_NO_NODE), it may hide wrong values which would be previously
    exposed.

    Both differences will be rectified by the next patch.

    To sum up, this patch makes no functional changes, except temporarily
    hiding potentially buggy callers. Restricting the checks in
    alloc_pages_node() is left for the next patch which can in turn expose
    more existing buggy callers.
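
    For reference, a sketch of how the two wrappers relate after this patch
    (simplified; note the deliberately loose 'nid < 0' check described
    above):

    static inline struct page *__alloc_pages_node(int nid, gfp_t gfp_mask,
                                                  unsigned int order)
    {
            /* optimized variant: no fallback, caller must pass a valid nid */
            VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);

            return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
    }

    static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
                                                unsigned int order)
    {
            if (nid < 0)            /* includes NUMA_NO_NODE; tightened later */
                    nid = numa_mem_id();

            return __alloc_pages_node(nid, gfp_mask, order);
    }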

    Signed-off-by: Vlastimil Babka
    Acked-by: Johannes Weiner
    Acked-by: Robin Holt
    Acked-by: Michal Hocko
    Acked-by: Christoph Lameter
    Acked-by: Michael Ellerman
    Cc: Mel Gorman
    Cc: David Rientjes
    Cc: Greg Thelen
    Cc: Aneesh Kumar K.V
    Cc: Pekka Enberg
    Cc: Joonsoo Kim
    Cc: Naoya Horiguchi
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Gleb Natapov
    Cc: Paolo Bonzini
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Cliff Whickman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • memory_failure() can be called on any page at any time, which means that
    we can't eliminate the possibility of containment failure. In such a
    case the best option is to leak the page intentionally (and never touch
    it later).

    We have an unpoison function for testing, and it cannot handle such
    containment-failed pages, which results in kernel panics (visible with
    various calltraces). So this patch limits the unpoisonable pages to
    properly contained pages and ignores any others.

    Testers are recommended to keep in mind that there are un-unpoisonable
    pages when writing test programs.

    Signed-off-by: Naoya Horiguchi
    Tested-by: Wanpeng Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Wanpeng Li reported a race between soft_offline_page() and
    unpoison_memory(), which causes the following kernel panic:

    BUG: Bad page state in process bash pfn:97000
    page:ffffea00025c0000 count:0 mapcount:1 mapping: (null) index:0x7f4fdbe00
    flags: 0x1fffff80080048(uptodate|active|swapbacked)
    page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
    bad because of flags:
    flags: 0x40(active)
    Modules linked in: snd_hda_codec_hdmi i915 rpcsec_gss_krb5 nfsv4 dns_resolver bnep rfcomm nfsd bluetooth auth_rpcgss nfs_acl nfs rfkill lockd grace sunrpc i2c_algo_bit drm_kms_helper snd_hda_codec_realtek snd_hda_codec_generic drm snd_hda_intel fscache snd_hda_codec x86_pkg_temp_thermal coretemp kvm_intel snd_hda_core snd_hwdep kvm snd_pcm snd_seq_dummy snd_seq_oss crct10dif_pclmul snd_seq_midi crc32_pclmul snd_seq_midi_event ghash_clmulni_intel snd_rawmidi aesni_intel lrw gf128mul snd_seq glue_helper ablk_helper snd_seq_device cryptd fuse snd_timer dcdbas serio_raw mei_me parport_pc snd mei ppdev i2c_core video lp soundcore parport lpc_ich shpchp mfd_core ext4 mbcache jbd2 sd_mod e1000e ahci ptp libahci crc32c_intel libata pps_core
    CPU: 3 PID: 2211 Comm: bash Not tainted 4.2.0-rc5-mm1+ #45
    Hardware name: Dell Inc. OptiPlex 7020/0F5C5X, BIOS A03 01/08/2015
    Call Trace:
    dump_stack+0x48/0x5c
    bad_page+0xe6/0x140
    free_pages_prepare+0x2f9/0x320
    ? uncharge_list+0xdd/0x100
    free_hot_cold_page+0x40/0x170
    __put_single_page+0x20/0x30
    put_page+0x25/0x40
    unmap_and_move+0x1a6/0x1f0
    migrate_pages+0x100/0x1d0
    ? kill_procs+0x100/0x100
    ? unlock_page+0x6f/0x90
    __soft_offline_page+0x127/0x2a0
    soft_offline_page+0xa6/0x200

    This race is explained below:

    CPU0                                    CPU1

    soft_offline_page
      __soft_offline_page
        TestSetPageHWPoison
                                            unpoison_memory
                                              PageHWPoison check (true)
                                              TestClearPageHWPoison
                                              put_page -> release refcount held by
                                                get_hwpoison_page in unpoison_memory
        put_page -> release refcount held by
          isolate_lru_page in __soft_offline_page
        migrate_pages

    The second put_page() releases the refcount held by isolate_lru_page(),
    which leads to unmap_and_move() releasing the last refcount of the page
    while the mapcount is still 1, since try_to_unmap() is not called when
    there is only one user mapping the page. Either way, the page refcount
    and mapcount will end up inconsistent if the page is mapped by multiple
    users.

    This race was introduced by commit 4491f71260 ("mm/memory-failure: set
    PageHWPoison before migrate_pages()"), which focuses on preventing the
    reuse of successfully migrated pages. Before this commit we prevented
    reuse by changing the migratetype to MIGRATE_ISOLATE during soft
    offlining, which has the following problems, so simply reverting that
    commit is not the best option:

    1) it doesn't eliminate the reuse completely, because
    set_migratetype_isolate() can fail to set MIGRATE_ISOLATE on the
    target page if the pageblock of the page contains one or more
    unmovable pages (i.e. has_unmovable_pages() returns true).

    2) the original code changes the migratetype to MIGRATE_ISOLATE
    forcibly, and sets it to MIGRATE_MOVABLE forcibly after soft offline,
    regardless of the original migratetype state, which could impact
    other subsystems like memory hotplug or compaction.

    This patch moves SetPageHWPoison to just after put_page() in
    unmap_and_move(), which closes the reported race window and minimizes
    another race window between SetPageHWPoison and reallocation (which
    causes the reuse of the soft-offlined page). The latter race window
    still exists, but it's acceptable because it's rare and, even if it
    happens, it's effectively the same as the ordinary "containment
    failure" case.
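
    A simplified sketch of the reordering in unmap_and_move() (the
    surrounding migration bookkeeping is elided; num_poisoned_pages_inc()
    is the wrapper introduced by the next entry):

    if (rc == MIGRATEPAGE_SUCCESS && reason == MR_MEMORY_FAILURE) {
            /* drop our reference first ... */
            put_page(page);
            /*
             * ... and only mark the page afterwards, so a concurrent
             * unpoison_memory() can no longer put a reference we still hold.
             */
            if (!TestSetPageHWPoison(page))
                    num_poisoned_pages_inc();
    }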

    Fixes: 4491f71260 ("mm/memory-failure: set PageHWPoison before migrate_pages()")
    Signed-off-by: Wanpeng Li
    Signed-off-by: Naoya Horiguchi
    Reported-by: Wanpeng Li
    Tested-by: Wanpeng Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • The num_poisoned_pages counter will be changed outside
    mm/memory-failure.c by a subsequent patch, so this patch prepares
    wrappers to manipulate it.
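
    The wrappers themselves are trivial (a sketch; the counter stays
    mm-internal):

    extern atomic_long_t num_poisoned_pages __read_mostly;

    static inline void num_poisoned_pages_inc(void)
    {
            atomic_long_inc(&num_poisoned_pages);
    }

    static inline void num_poisoned_pages_dec(void)
    {
            atomic_long_dec(&num_poisoned_pages);
    }

    static inline void num_poisoned_pages_add(long num)
    {
            atomic_long_add(num, &num_poisoned_pages);
    }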

    Signed-off-by: Naoya Horiguchi
    Tested-by: Wanpeng Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Replace most instances of put_page() in memory error handling with
    put_hwpoison_page().

    Signed-off-by: Wanpeng Li
    Cc: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Introduce put_hwpoison_page() to put the refcount taken for memory error
    handling.
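
    A deliberately minimal sketch of the idea (not the exact kernel body):
    the point is one named, auditable place where the reference taken by
    get_hwpoison_page() is dropped:

    static inline void put_hwpoison_page(struct page *page)
    {
            put_page(page);         /* release get_hwpoison_page()'s pin */
    }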

    Signed-off-by: Wanpeng Li
    Suggested-by: Naoya Horiguchi
    Acked-by: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • There is a race between the madvise_hwpoison path and memory_failure():

    CPU0                                    CPU1

    madvise_hwpoison
      get_user_pages_fast
      PageHWPoison check (false)
                                            memory_failure
                                            TestSetPageHWPoison
      soft_offline_page
        PageHWPoison check (true)
        return -EBUSY (without put_page)

    Signed-off-by: Wanpeng Li
    Suggested-by: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • THP pages get a refcount in madvise_hwpoison() with the
    MF_COUNT_INCREASED flag; however, the refcount is still held when we
    fail to split the THP.

    Fix it by dropping the refcount of the THP page when the split fails.
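
    A sketch of the failure path after the fix ('p' is the faulting page,
    'hpage' its head; names and the exact message are assumptions):

    if (!PageHuge(p) && PageTransHuge(hpage)) {
            if (unlikely(split_huge_page(hpage))) {
                    pr_info("MCE: %#lx: thp split failed\n", pfn);
                    if (flags & MF_COUNT_INCREASED)
                            put_page(p);    /* drop madvise_hwpoison()'s pin */
                    return -EBUSY;
            }
    }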

    Signed-off-by: Wanpeng Li
    Cc: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • The mem_cgroup structure is currently defined in mm/memcontrol.c, which
    means that code outside of this file has to use the external API even
    for trivial access.

    This patch exports struct mem_cgroup along with its dependencies and
    makes some of the exported functions inlines. This even helps to reduce
    the code size a bit (make defconfig + CONFIG_MEMCG=y):

    text data bss dec hex filename
    12355346 1823792 1089536 15268674 e8fb42 vmlinux.before
    12354970 1823792 1089536 15268298 e8f9ca vmlinux.after

    This is not much (370B) but better than nothing.

    We also save a function call in some hot paths, like the callers of
    mem_cgroup_count_vm_event, which is used for accounting.

    The patch doesn't introduce any functional changes.

    [vdavydov@parallels.com: inline memcg_kmem_is_active]
    [vdavydov@parallels.com: do not expose type outside of CONFIG_MEMCG]
    [akpm@linux-foundation.org: memcontrol.h needs eventfd.h for eventfd_ctx]
    [akpm@linux-foundation.org: export mem_cgroup_from_task() to modules]
    Signed-off-by: Michal Hocko
    Reviewed-by: Vladimir Davydov
    Suggested-by: Johannes Weiner
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


15 Aug, 2015

3 commits

  • Bug:

    ------------[ cut here ]------------
    kernel BUG at mm/huge_memory.c:1957!
    invalid opcode: 0000 [#1] SMP
    Modules linked in: snd_hda_codec_hdmi i915 rpcsec_gss_krb5 snd_hda_codec_realtek snd_hda_codec_generic nfsv4 dns_re
    CPU: 2 PID: 2576 Comm: test_huge Not tainted 4.2.0-rc5-mm1+ #27
    Hardware name: Dell Inc. OptiPlex 7020/0F5C5X, BIOS A03 01/08/2015
    task: ffff880204e3d600 ti: ffff8800db16c000 task.ti: ffff8800db16c000
    RIP: split_huge_page_to_list+0xdb/0x120
    Call Trace:
    memory_failure+0x32e/0x7c0
    madvise_hwpoison+0x8b/0x160
    SyS_madvise+0x40/0x240
    ? do_page_fault+0x37/0x90
    entry_SYSCALL_64_fastpath+0x12/0x71
    Code: ff f0 41 ff 4c 24 30 74 0d 31 c0 48 83 c4 08 5b 41 5c 41 5d c9 c3 4c 89 e7 e8 e2 58 fd ff 48 83 c4 08 31 c0
    RIP split_huge_page_to_list+0xdb/0x120
    RSP
    ---[ end trace aee7ce0df8e44076 ]---

    Testcase:

    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <sys/mman.h>

    #define MB (1024 * 1024)

    int main(void)
    {
            char *mem;

            posix_memalign((void **)&mem, 2 * MB, 200 * MB);

            madvise(mem, 200 * MB, MADV_HWPOISON);

            free(mem);

            return 0;
    }

    A huge zero page is allocated when a page fault occurs without the
    FAULT_FLAG_WRITE flag. get_user_pages_fast(), which is called in
    madvise_hwpoison(), will get the huge zero page if the page has not
    been allocated before. The huge zero page is a transparent huge page;
    however, it is not an anonymous page. memory_failure() will split the
    huge zero page and trigger BUG_ON(is_huge_zero_page(page)).

    After commit 98ed2b0052e6 ("mm/memory-failure: give up error handling
    for non-tail-refcounted thp"), memory_failure() does not catch non-anon
    thps from the madvise_hwpoison path, and this bug occurs.

    Fix it by catching non-anon thps in memory_failure() so that the huge
    zero page is not split in the madvise_hwpoison path.

    After this patch:

    Injecting memory failure for page 0x202800 at 0x7fd8ae800000
    MCE: 0x202800: non anonymous thp
    [...]
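
    The added check amounts to bailing out of memory_failure() for
    non-anonymous thps before any split is attempted (a simplified sketch
    consistent with the log line above; error-path bookkeeping elided):

    if (!PageHuge(p) && PageTransHuge(hpage)) {
            if (!PageAnon(hpage)) {         /* covers the huge zero page */
                    pr_err("MCE: %#lx: non anonymous thp\n", pfn);
                    put_page(p);
                    return -EBUSY;
            }
    }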

    [akpm@linux-foundation.org: remove second split, per Wanpeng]
    Signed-off-by: Wanpeng Li
    Acked-by: Naoya Horiguchi
    Signed-off-by: Andrew Morton

    Signed-off-by: Linus Torvalds

  • Hugetlbfs pages get a refcount in get_any_page() or madvise_hwpoison()
    when soft offlining through madvise. The refcount held by the soft
    offline path should be released if we fail to isolate the hugetlbfs
    page.

    Fix it by dropping the refcount on both isolation success and failure.

    Signed-off-by: Wanpeng Li
    Acked-by: Naoya Horiguchi
    Cc: [3.9+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • After trying to drain pages from the pagevec/pageset, we try to get the
    reference count of the page again; however, the reference count is not
    reduced if the page is still not on the LRU list.

    Fix it by adding a put_page() to drop the page reference taken by
    __get_any_page().

    Signed-off-by: Wanpeng Li
    Acked-by: Naoya Horiguchi
    Cc: [3.9+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


07 Aug, 2015

4 commits

  • Now that the page freeing code doesn't consider PageHWPoison a bad page
    flag, setting it before completing the page containment lets us prevent
    the error page from being reused just after a successful page
    migration.

    I added TTU_IGNORE_HWPOISON to the try_to_unmap() call to make sure
    that the page table entry is transformed into a migration entry, not
    into a hwpoison entry.

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Dean Nelson
    Cc: Tony Luck
    Cc: "Kirill A. Shutemov"
    Cc: Hugh Dickins
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • "non anonymous thp" case is still racy with freeing thp, which causes
    panic due to put_page() for refcount-0 page. It seems that closing up
    this race might be hard (and/or not worth doing,) so let's give up the
    error handling for this case.

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Dean Nelson
    Cc: Tony Luck
    Cc: "Kirill A. Shutemov"
    Cc: Hugh Dickins
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • When memory_failure() is called on a page which was just freed after
    page migration from soft offlining, the num_poisoned_pages counter is
    raised twice. So let's fix it by using TestSetPageHWPoison.
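
    The fix is the classic test-and-set idiom (sketch):

    /*
     * TestSetPageHWPoison() returns the old flag value, so only the
     * caller that actually set the flag bumps the counter.
     */
    if (!TestSetPageHWPoison(page))
            atomic_long_inc(&num_poisoned_pages);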

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Dean Nelson
    Cc: Tony Luck
    Cc: "Kirill A. Shutemov"
    Cc: Hugh Dickins
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Recently I addressed a few hwpoison race problems and the patches were
    merged in v4.2-rc1. That made progress, but unfortunately some problems
    still remain due to limited coverage of my testing. So I'm trying to
    fix or avoid them in this series.

    One point I'm expecting to discuss is that patch 4/5 changes the set of
    page flags checked at free time. In the current behavior, __PG_HWPOISON
    is not supposed to be set when the page is freed. I think that there is
    no strong reason for this behavior, and it causes a problem that is
    hard to fix only on the error handler side (because __PG_HWPOISON can
    be set at arbitrary timing). So I suggest changing it.

    With this patchset, hwpoison stress testing in the official mce-test
    testsuite (which previously failed) passes.

    This patch (of 5):

    In "just unpoisoned" path, we do put_page and then unlock_page, which is
    a wrong order and causes "freeing locked page" bug. So let's fix it.
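
    The fix is a two-line reorder (sketch):

    /* before: frees a page that is still locked */
    put_page(p);
    unlock_page(p);

    /* after: release the page lock first, then the reference */
    unlock_page(p);
    put_page(p);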

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Dean Nelson
    Cc: Tony Luck
    Cc: "Kirill A. Shutemov"
    Cc: Hugh Dickins
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


25 Jun, 2015

7 commits

  • RAS user-space tools like rasdaemon, which are based on trace events,
    can receive MCE error events but no memory recovery result event. So I
    want to add such an event to make this scenario complete.

    This patch adds an event to the ras group for memory-failure.

    The output looks like below:
    # tracer: nop
    #
    # entries-in-buffer/entries-written: 2/2 #P:24
    #
    # _-----=> irqs-off
    # / _----=> need-resched
    # | / _---=> hardirq/softirq
    # || / _--=> preempt-depth
    # ||| / delay
    # TASK-PID CPU# |||| TIMESTAMP FUNCTION
    # | | | |||| | |
    mce-inject-13150 [001] .... 277.019359: memory_failure_event: pfn 0x19869: recovery action for free buddy page: Delayed
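
    The event has the usual TRACE_EVENT() shape; a sketch with assumed
    field names (the real definition prints symbolic action/result names):

    TRACE_EVENT(memory_failure_event,

            TP_PROTO(unsigned long pfn, int type, int result),

            TP_ARGS(pfn, type, result),

            TP_STRUCT__entry(
                    __field(unsigned long, pfn)
                    __field(int, type)
                    __field(int, result)
            ),

            TP_fast_assign(
                    __entry->pfn    = pfn;
                    __entry->type   = type;
                    __entry->result = result;
            ),

            TP_printk("pfn %#lx: type %d: result %d",
                      __entry->pfn, __entry->type, __entry->result)
    );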

    [xiexiuqi@huawei.com: fix build error]
    Signed-off-by: Xie XiuQi
    Reviewed-by: Naoya Horiguchi
    Acked-by: Steven Rostedt
    Cc: Tony Luck
    Cc: Chen Gong
    Cc: Jim Davis
    Signed-off-by: Xie XiuQi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Change the type of action_result()'s third parameter to an enum for
    type consistency, and rename mf_outcome to mf_result for clarity.

    Signed-off-by: Xie XiuQi
    Acked-by: Naoya Horiguchi
    Cc: Chen Gong
    Cc: Jim Davis
    Cc: Steven Rostedt
    Cc: Tony Luck
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Export 'outcome' and 'action_page_type' to mm.h, so we can use these
    enums outside.

    This patch is preparation for adding trace events for memory-failure
    recovery actions.

    Signed-off-by: Xie XiuQi
    Acked-by: Naoya Horiguchi
    Cc: Chen Gong
    Cc: Jim Davis
    Cc: Steven Rostedt
    Cc: Tony Luck
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • memory_failure() is not supposed to handle a thp itself, but to split
    it. But if something goes wrong and page_action() is called on a thp,
    it would be better for me_huge_page() (the action routine for
    hugepages) to take no action, rather than the wrong action prepared for
    hugetlb (which triggers BUG_ON()).

    This change addresses a potential problem, but makes sense to me
    because thp is an actively developing feature and this code path can be
    opened up in the future.

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Tony Luck
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Stress testing showed that soft offline events for a process iterating
    "mmap-pagefault-munmap" loop can trigger
    VM_BUG_ON(PAGE_FLAGS_CHECK_AT_PREP) in __free_one_page():

    Soft offlining page 0x70fe1 at 0x70100008d000
    Soft offlining page 0x705fb at 0x70300008d000
    page:ffffea0001c3f840 count:0 mapcount:0 mapping: (null) index:0x2
    flags: 0x1fffff80800000(hwpoison)
    page dumped because: VM_BUG_ON_PAGE(page->flags & ((1 << 25) - 1))
    ------------[ cut here ]------------
    kernel BUG at /src/linux-dev/mm/page_alloc.c:585!
    invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
    Modules linked in: cfg80211 rfkill crc32c_intel microcode ppdev parport_pc pcspkr serio_raw virtio_balloon parport i2c_piix4 virtio_blk virtio_net ata_generic pata_acpi floppy
    CPU: 3 PID: 1779 Comm: test_base_madv_ Not tainted 4.0.0-v4.0-150511-1451-00009-g82360a3730e6 #139
    RIP: free_pcppages_bulk+0x52a/0x6f0
    Call Trace:
    drain_pages_zone+0x3d/0x50
    drain_local_pages+0x1d/0x30
    on_each_cpu_mask+0x46/0x80
    drain_all_pages+0x14b/0x1e0
    soft_offline_page+0x432/0x6e0
    SyS_madvise+0x73c/0x780
    system_call_fastpath+0x12/0x17
    Code: ff 89 45 b4 48 8b 45 c0 48 83 b8 a8 00 00 00 00 0f 85 e3 fb ff ff 0f 1f 00 0f 0b 48 8b 7d 90 48 c7 c6 e8 95 a6 81 e8 e6 32 02 00 0b 8b 45 cc 49 89 47 30 41 8b 47 18 83 f8 ff 0f 85 10 ff ff
    RIP [] free_pcppages_bulk+0x52a/0x6f0
    RSP
    ---[ end trace 53926436e76d1f35 ]---

    When soft offline successfully migrates a page, the source page is
    supposed to be freed. But there is a race condition where a source page
    looks isolated (i.e. the refcount is 0 and PageHWPoison is set) but is
    somehow still linked to the pcplist. Then another soft offline event
    calls drain_all_pages() and tries to free such a hwpoisoned page, which
    is forbidden.

    This odd page state seems to happen due to the race between put_page()
    in putback_lru_page() and __pagevec_lru_add_fn(). But I don't want to
    play with tweaking the drain code as done in commit 9ab3b598d2df ("mm:
    hwpoison: drop lru_add_drain_all() in __soft_offline_page()"), or to
    change the page freeing code for soft offline's purpose.

    Instead, let's think about the difference between hard offline and soft
    offline. There is an interesting difference in how they isolate an
    in-use page: hard offline marks PageHWPoison on the target page first
    and doesn't free it, keeping its refcount at 1. OTOH, soft offline
    tries to free the target page and then marks PageHWPoison. This
    difference might be the source of complexity and result in bugs like
    the one above. So making soft offline isolate while keeping the
    refcount can be a solution to this problem.

    We can pass the page migration code a "reason" which identifies the
    caller, so let's use it more to avoid calling putback_lru_page() when
    called from soft offline, which effectively does the isolation for soft
    offline. With this change, target pages of soft offline are never
    reused without changing the migratetype, so this patch also removes the
    related code.
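
    The "reason" check boils down to skipping the LRU putback for soft
    offline's source pages (a simplified sketch inside the migration unwind
    path):

    if (reason != MR_MEMORY_FAILURE)
            putback_lru_page(page);
    else
            put_page(page);         /* stay isolated; just drop our reference */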

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Tony Luck
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • memory_failure() can run in two different modes (specified by
    MF_COUNT_INCREASED) from a page refcount perspective. When
    MF_COUNT_INCREASED is set, memory_failure() assumes that the caller
    takes a refcount on the target page; when it's cleared, memory_failure()
    takes one on its own.

    In the current code, however, refcounting is done differently in each
    caller. For example, madvise_hwpoison() uses get_user_pages_fast() and
    hwpoison_inject() uses get_page_unless_zero(). This inconsistent
    refcounting causes refcount failures, especially for thp tail pages.
    Typical user-visible effects are memory leaks or
    VM_BUG_ON_PAGE(!page_count(page)) in isolate_lru_page().

    To fix this refcounting issue, this patch introduces get_hwpoison_page()
    to handle thp tail pages in the same manner for every caller of the
    hwpoison code.
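
    A sketch of the helper under the pre-4.5 thp refcounting rules (close
    to the description above, where tail pages still carried their own
    pins):

    int get_hwpoison_page(struct page *page)
    {
            struct page *head = compound_head(page);

            if (PageHuge(head))                     /* hugetlb: pin the head */
                    return get_page_unless_zero(head);

            if (PageTransHuge(head)) {
                    /* thp: pin the head, then the subpage we were handed */
                    if (get_page_unless_zero(head)) {
                            if (PageTail(page))
                                    get_page(page);
                            return 1;
                    }
                    return 0;
            }

            return get_page_unless_zero(page);      /* normal page */
    }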

    memory_failure() might fail to split a thp, and in such a case it
    returns without completing page isolation. This is not good, because
    PageHWPoison on the thp is still set and there's no easy way to
    unpoison such thps. So this patch tries to roll back any action on the
    thp in the "non anonymous thp" case and the "thp split failed" case,
    expecting that an MCE (SRAR) generated by a later access will properly
    free such thps.

    [akpm@linux-foundation.org: fix CONFIG_HWPOISON_INJECT=m]
    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Tony Luck
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • memory_failure() doesn't handle thp itself at this time and needs to
    split it before doing isolation. Currently the thp is split in the
    middle of hwpoison_user_mappings(), but there are corner cases where
    memory_failure() wrongly tries to handle a thp without splitting:

    1) a "non anonymous" thp, which is not a normal operating mode of thp,
    but a memory error could hit a thp before anon_vma is initialized. In
    such a case, split_huge_page() fails and me_huge_page() (intended for
    hugetlb) is called for the thp, which triggers BUG_ON in page_hstate().

    2) the !PageLRU case, where hwpoison_user_mappings() returns with
    SWAP_SUCCESS and the result is the same as case 1.

    memory_failure() can't avoid splitting, so let's split earlier, which
    also reduces the code that has to be prepared for both normal pages and
    thps.

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Tony Luck
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
