13 Aug, 2010

1 commit

  • * 'hwpoison' of git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6:
    hugetlb: add missing unlock in avoidcopy path in hugetlb_cow()
    hwpoison: rename CONFIG
    HWPOISON, hugetlb: support hwpoison injection for hugepage
    HWPOISON, hugetlb: detect hwpoison in hugetlb code
    HWPOISON, hugetlb: isolate corrupted hugepage
    HWPOISON, hugetlb: maintain mce_bad_pages in handling hugepage error
    HWPOISON, hugetlb: set/clear PG_hwpoison bits on hugepage
    HWPOISON, hugetlb: enable error handling path for hugepage
    hugetlb, rmap: add reverse mapping for hugepage
    hugetlb: move definition of is_vm_hugetlb_page() to hugepage_inline.h

    Fix up trivial conflicts in mm/memory-failure.c

    Linus Torvalds
     

11 Aug, 2010

4 commits

  • If the error hugepage is not in use, we can fully recover from the
    error by dequeuing it from the freelist, so return RECOVERED.
    Otherwise, whether or not we can recover depends on user processes,
    so return DELAYED.
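
    In code, this decision might look like the following (a hedged
    sketch; isolate_hwpoisoned_huge_page() is an assumed helper that
    removes a free hugepage from the hugetlb freelist, and
    RECOVERED/DELAYED are the action-result codes of memory-failure.c):

    static int me_huge_page(struct page *p, unsigned long pfn)
    {
        struct page *hpage = compound_head(p);

        /* in-use check: a free hugepage is neither file-backed nor anon */
        if (!(page_mapping(hpage) || PageAnon(hpage))) {
            isolate_hwpoisoned_huge_page(hpage); /* off the freelist */
            return RECOVERED;
        }
        /* in use: recovery depends on what the owning processes do */
        return DELAYED;
    }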

    Dependency:
    "HWPOISON, hugetlb: enable error handling path for hugepage"

    Signed-off-by: Naoya Horiguchi
    Cc: Andrew Morton
    Acked-by: Fengguang Wu
    Signed-off-by: Andi Kleen

    Naoya Horiguchi
     
  • For now, all pages in the error hugepage are considered hwpoisoned,
    so count all of them in mce_bad_pages.
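
    The accounting then amounts to roughly this (a sketch; mce_bad_pages
    is the existing global counter in mm/memory-failure.c):

    struct page *hpage = compound_head(p);

    /* count every page of the hugepage as bad, not just the head */
    atomic_long_add(1 << compound_order(hpage), &mce_bad_pages);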

    Dependency:
    "HWPOISON, hugetlb: enable error handling path for hugepage"

    Signed-off-by: Naoya Horiguchi
    Cc: Andrew Morton
    Acked-by: Fengguang Wu
    Signed-off-by: Andi Kleen

    Naoya Horiguchi
     
  • To avoid a race condition between concurrent memory errors on the
    identified hugepage, we atomically test and set the PG_hwpoison bit
    on the head page. All pages in the error hugepage are considered
    hwpoisoned for now, so set and clear all PG_hwpoison bits in the
    hugepage with the head page's lock held.
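
    A hedged sketch of the "set" side, folding the test-and-set and the
    loop over the tail pages together (the caller holds the head page's
    lock):

    static void set_page_hwpoison_huge_page(struct page *hpage)
    {
        int i;
        int nr_pages = 1 << compound_order(hpage);

        /* atomic test-and-set on the head page resolves races between
         * concurrent errors on the same hugepage */
        if (TestSetPageHWPoison(hpage))
            return;    /* another handler already owns this hugepage */
        /* for now, every page of the hugepage is considered hwpoisoned */
        for (i = 1; i < nr_pages; i++)
            SetPageHWPoison(hpage + i);
    }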

    Dependency:
    "HWPOISON, hugetlb: enable error handling path for hugepage"

    Signed-off-by: Naoya Horiguchi
    Cc: Andrew Morton
    Acked-by: Fengguang Wu
    Signed-off-by: Andi Kleen

    Naoya Horiguchi
     
  • This patch just enables the handling path. The real containment and
    recovery operations will be implemented in the following patches.

    Dependency:
    "hugetlb, rmap: add reverse mapping for hugepage."

    Signed-off-by: Naoya Horiguchi
    Cc: Andi Kleen
    Cc: Andrew Morton
    Acked-by: Fengguang Wu
    Signed-off-by: Andi Kleen

    Naoya Horiguchi
     

01 Aug, 2010

2 commits

  • is_hwpoison_address accesses the page table, so the caller must hold
    current->mm->mmap_sem in read mode. Fix its usage in kvm's
    hva_to_pfn accordingly.

    Comment is_hwpoison_address to remind other users.
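
    The call site in hva_to_pfn then ends up shaped roughly like this
    (a sketch; hwpoison_page stands for kvm's designated poison page,
    an assumption of this sketch):

    down_read(&current->mm->mmap_sem);
    if (is_hwpoison_address(addr)) {
        up_read(&current->mm->mmap_sem);
        get_page(hwpoison_page);
        return page_to_pfn(hwpoison_page);
    }
    up_read(&current->mm->mmap_sem);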

    Reported-by: Avi Kivity
    Signed-off-by: Huang Ying
    Signed-off-by: Avi Kivity

    Huang Ying
     
  • In common cases, a guest SRAO MCE will cause the corresponding
    poisoned page to be unmapped and a SIGBUS to be sent to QEMU-KVM,
    which then relays the MCE to the guest OS.

    But it is reported that if the poisoned page is accessed in the
    guest after unmapping and before the MCE is relayed to the guest OS,
    userspace will be killed.

    The reason is as follows. Because the poisoned page has been
    unmapped, the guest access causes a guest exit and
    kvm_mmu_page_fault is called. kvm_mmu_page_fault cannot get the
    poisoned page for the fault address, so kernel and user space MMIO
    processing are tried in turn. In user MMIO processing, the poisoned
    page is accessed again, and userspace is killed by force_sig_info.

    To fix the bug, have kvm_mmu_page_fault send a HWPOISON signal to
    QEMU-KVM and skip kernel and user space MMIO processing for the
    poisoned page.
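
    The fix boils down to delivering the poison signal directly, along
    these lines (a hedged sketch; the siginfo fields follow the
    BUS_MCEERR_AR convention used by the memory-failure code):

    static void kvm_send_hwpoison_signal(unsigned long address,
                                         struct task_struct *tsk)
    {
        siginfo_t info;

        info.si_signo    = SIGBUS;
        info.si_errno    = 0;
        info.si_code     = BUS_MCEERR_AR;       /* action required */
        info.si_addr     = (void __user *)address;
        info.si_addr_lsb = PAGE_SHIFT;          /* one page granularity */

        send_sig_info(SIGBUS, &info, tsk);
    }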

    [xiao: fix warning introduced by avi]

    Reported-by: Max Asbock
    Signed-off-by: Huang Ying
    Signed-off-by: Xiao Guangrong
    Signed-off-by: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Huang Ying
     

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    The percpu.h -> slab.h dependency is about to be removed. Prepare
    for this change by updating users of gfp and slab facilities to
    include those headers directly instead of assuming availability. As
    this conversion needs to touch a large number of source files, the
    following script is used as the basis of the conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following (a representative resulting change is
    shown after this list).

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to place the new include so that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints
    out an error message indicating which .h file needs to be added to
    the file.
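
    For illustration, a representative automatic edit looks like this
    (a hypothetical file that previously compiled only because percpu.h
    dragged in slab.h indirectly):

    #include <linux/percpu.h>
    #include <linux/slab.h>   /* added by the script: file uses kmalloc() */

    struct foo {
        int value;
    };

    static struct foo *foo_alloc(void)
    {
        return kmalloc(sizeof(struct foo), GFP_KERNEL);
    }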

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, while adding it to an implementation .h
    or embedding .c file was more appropriate for others. This step
    added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were
    fixed. E.g. lib/decompress_*.c used malloc/free() wrappers around
    slab APIs, requiring slab.h to be added manually.

    5. The script was run on all .h files, but without automatically
    editing them, as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored, as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and
    failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests
    (as my distributed build env didn't work with gcov compiles) and a
    few more options had to be turned off depending on the arch to make
    things build (like ipr on powerpc/64, which failed due to missing
    writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied
    as a separate patch and serve as a bisection point.

    Given the fact that I had only a couple of failures from the tests
    in step 7, I'm fairly confident about the coverage of this
    conversion patch. If there is a breakage, it's likely to be
    something in one of the arch headers, which should be easily
    discoverable on most builds of the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

07 Mar, 2010

1 commit

  • The old anon_vma code can lead to scalability issues with heavily forking
    workloads. Specifically, each anon_vma will be shared between the parent
    process and all its child processes.

    In a workload with 1000 child processes and a VMA with 1000 anonymous
    pages per process that get COWed, this leads to a system with a million
    anonymous pages in the same anon_vma, each of which is mapped in just one
    of the 1000 processes. However, the current rmap code needs to walk them
    all, leading to O(N) scanning complexity for each page.

    This can result in systems where one CPU is walking the page tables
    of 1000 processes in page_referenced_one, while all other CPUs are
    stuck on the anon_vma lock. This leads to catastrophic failure for a
    benchmark like AIM7, where the total number of processes can reach
    into the tens of thousands. Real workloads are still a factor of 10
    less process-intensive than AIM7, but they are catching up.

    This patch changes the way anon_vmas and VMAs are linked, which
    allows us to associate multiple anon_vmas with a VMA. At fork time,
    each child process gets its own anon_vmas, in which its COWed pages
    will be instantiated. The parent's anon_vma is also linked to the
    VMA, because non-COWed pages could be present in any of the
    children.

    This reduces rmap scanning complexity to O(1) for the pages of the 1000
    child processes, with O(N) complexity for at most 1/N pages in the system.
    This reduces the average scanning cost in heavily forking workloads from
    O(N) to 2.

    The only real complexity in this patch stems from the fact that linking a
    VMA to anon_vmas now involves memory allocations. This means vma_adjust
    can fail, if it needs to attach a VMA to anon_vma structures. This in
    turn means error handling needs to be added to the calling functions.

    A second source of complexity is that, because there can be multiple
    anon_vmas, the anon_vma linking in vma_adjust can no longer be done under
    "the" anon_vma lock. To prevent the rmap code from walking up an
    incomplete VMA, this patch introduces the VM_LOCK_RMAP VMA flag. This bit
    flag uses the same slot as the NOMMU VM_MAPPED_COPY, with an ifdef in mm.h
    to make sure it is impossible to compile a kernel that needs both symbolic
    values for the same bitflag.
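
    The link object at the heart of this change looks like this (a
    sketch close to the patch; the comments are illustrative):

    /*
     * Each anon_vma_chain ties one VMA to one anon_vma, so a VMA can be
     * linked to several anon_vmas, and an anon_vma to several VMAs.
     */
    struct anon_vma_chain {
        struct vm_area_struct *vma;
        struct anon_vma *anon_vma;
        struct list_head same_vma;      /* chained off vma->anon_vma_chain */
        struct list_head same_anon_vma; /* chained off anon_vma->head */
    };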

    Some test results:

    Without the anon_vma changes, when AIM7 hits around 9.7k users (on a test
    box with 16GB RAM and not quite enough IO), the system ends up running
    >99% in system time, with every CPU on the same anon_vma lock in the
    pageout code.

    With these changes, AIM7 hits the cross-over point around 29.7k
    users. This happens with ~99% IO wait time; there never seems to be
    any spike in system time. The anon_vma lock contention appears to be
    resolved.

    [akpm@linux-foundation.org: cleanups]
    Signed-off-by: Rik van Riel
    Cc: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Lee Schermerhorn
    Cc: Minchan Kim
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     

22 Dec, 2009

1 commit

  • The injector filter requires stable_page_flags(), which is supplied
    by procfs, so make it depend on that.

    Also add ifdefs around the filter code in memory-failure.c so that
    when the filter is disabled due to missing dependencies the
    code still builds.

    Reported-by: Ingo Molnar
    Signed-off-by: Andi Kleen

    Andi Kleen
     

16 Dec, 2009

23 commits

  • Better to have complete sentences.

    Signed-off-by: Andi Kleen

    Andi Kleen
     
  • Signed-off-by: Andi Kleen

    Andi Kleen
     
  • This is a simpler, gentler variant of memory_failure() for soft page
    offlining, controlled from user space. It doesn't kill anything; it
    just tries to invalidate the page and, if that doesn't work, migrate
    it away.

    This is useful for predictive failure analysis, where a page has
    a high rate of corrected errors, but hasn't gone bad yet. Instead
    it can be offlined early and avoided.

    The offlining is controlled from sysfs, including a new generic
    entry point for hard page offlining for symmetry too.

    We use the page isolation facility to prevent a re-allocation
    race. Normally this is only used by memory hotplug. To avoid
    races with memory allocation I am using lock_system_sleep().
    This avoids the situation where memory hotplug is about
    to isolate a page range and then hwpoison undoes that work.
    This is a big hammer, but it is the simplest solution for now.

    When the page is not free or on the LRU, we try to free pages
    from slab and other caches. The slab freeing is currently
    quite dumb and does not try to focus on the specific slab
    cache which might own the page. This could potentially be
    improved later.
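
    In outline, the handler behaves like this (a condensed sketch with
    locking and error handling trimmed; new_page() is assumed to be the
    page-allocation callback the patch passes to the migration code):

    static int soft_offline(struct page *page)
    {
        LIST_HEAD(pagelist);
        int ret;

        /* gentle attempt first: invalidation works for clean pagecache */
        if (invalidate_inode_page(page)) {
            SetPageHWPoison(page);    /* dropped cleanly; never reuse */
            return 0;
        }

        /* otherwise try to migrate the contents to a fresh page */
        ret = isolate_lru_page(page);
        if (ret)
            return ret;
        list_add(&page->lru, &pagelist);
        ret = migrate_pages(&pagelist, new_page, 0, 0);
        if (!ret)
            SetPageHWPoison(page);    /* old page is now poisoned */
        return ret;
    }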

    Thanks to Fengguang Wu and Haicheng Li for some fixes.

    [Added fix from Andrew Morton to adapt to new migrate_pages prototype]
    Signed-off-by: Andi Kleen

    Andi Kleen
     
  • Signed-off-by: Andi Kleen

    Andi Kleen
     
  • shake_page handles more types of page caches than
    the much simpler lru_add_drain_all:

    - slab (quite inefficiently for now)
    - any other caches with a shrinker callback
    - per cpu page allocator pages
    - per CPU LRU

    Use this call to try to turn pages into free or LRU pages.
    Then handle the case of the page becoming free after draining
    everything.
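
    A condensed sketch of the resulting shake_page():

    void shake_page(struct page *p, int access)
    {
        if (!PageSlab(p)) {
            lru_add_drain_all();       /* per-CPU LRU pagevecs */
            if (PageLRU(p))
                return;
            drain_all_pages();         /* per-CPU page allocator lists */
            if (PageLRU(p) || is_free_buddy_page(p))
                return;
        }
        /*
         * Last resort: push the shrinkers (slab and any other cache
         * with a shrinker callback) until the page is freed. Quite
         * dumb for now, as noted above.
         */
        if (access) {
            int nr;
            do {
                nr = shrink_slab(1000, GFP_KERNEL, 1000);
                if (page_count(p) == 1)
                    break;             /* page became free */
            } while (nr > 10);
        }
    }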

    Signed-off-by: Andi Kleen

    Andi Kleen
     
  • In some use cases, the user doesn't need extra filtering. E.g. a
    user program can inject errors through the madvise syscall into its
    own pages; however, it might not know exactly what the page state is
    or which inode the page belongs to.

    So introduce a one-off interface "corrupt-filter-enable".

    Echo 0 to switch off the page filters, and echo 1 to switch on the
    filters.
    [AK: changed default to 0]

    Signed-off-by: Haicheng Li
    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Haicheng Li
     
  • The hwpoison test suite needs to inject hwpoison into a collection
    of selected task pages, and must not touch pages not owned by them,
    which would kill important system processes such as init. (But it's
    OK to mis-hwpoison free/unowned pages as well as shared clean pages.
    Mis-hwpoison of shared dirty pages will kill all tasks, so the test
    suite will target all or none of such tasks in the first place.)

    The memory cgroup serves this purpose well. We can put the target
    processes under the control of a memory cgroup, and tell the hwpoison
    injection code to only kill pages associated with some active memory
    cgroup.

    The prerequisite for doing hwpoison stress tests with mem_cgroup is
    that the mem_cgroup code tracks task pages _accurately_ (unless the
    page is locked), which we believe is/should be true.

    The benefit is simplification of the hwpoison injector code. Also
    the mem_cgroup code will automatically be tested by hwpoison test
    cases.

    The alternative interfaces pin-pfn/unpin-pfn can also delegate the
    (process and page flags) filtering functions reliably to user space.
    However, a prototype implementation showed that this scheme adds
    more complexity than we wanted.

    Example test case:

    mkdir /cgroup/hwpoison

    usemem -m 100 -s 1000 &
    echo `jobs -p` > /cgroup/hwpoison/tasks

    memcg_ino=$(ls -id /cgroup/hwpoison | cut -f1 -d' ')
    echo $memcg_ino > /debug/hwpoison/corrupt-filter-memcg

    page-types -p `pidof init` --hwpoison # shall do nothing
    page-types -p `pidof usemem` --hwpoison # poison its pages

    [AK: Fix documentation]
    [Add fix for problem noticed by Li Zefan;
    dentry in the css could be NULL]

    CC: KOSAKI Motohiro
    CC: Hugh Dickins
    CC: Daisuke Nishimura
    CC: Balbir Singh
    CC: KAMEZAWA Hiroyuki
    CC: Li Zefan
    CC: Paul Menage
    CC: Nick Piggin
    CC: Andi Kleen
    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Andi Kleen
     
  • When specified, only poison pages if ((page_flags & mask) == value).

    - corrupt-filter-flags-mask
    - corrupt-filter-flags-value

    This allows stress testing of many kinds of pages.

    Strictly speaking, poisoning buddy pages requires taking the zone
    lock, to avoid setting PG_hwpoison on a "was buddy but now allocated
    to someone" page. However, we can just do nothing, because we set
    PG_locked in the beginning; this prevents the page allocator from
    allocating it to someone. (It will BUG() on the unexpected
    PG_locked, which is fine for hwpoison testing.)
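
    The check itself is a straightforward mask/compare (a sketch, using
    the stable_page_flags() that procfs exports):

    static int hwpoison_filter_flags(struct page *p)
    {
        if (!hwpoison_filter_flags_mask)
            return 0;          /* filter not configured: let it through */

        if ((stable_page_flags(p) & hwpoison_filter_flags_mask) ==
                                    hwpoison_filter_flags_value)
            return 0;          /* matches: poison this page */
        return -EINVAL;        /* no match: skip it */
    }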

    [AK: Add select PROC_PAGE_MONITOR to satisfy dependency]

    CC: Nick Piggin
    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Filesystem data/metadata present the most tricky-to-isolate pages.
    Getting them right requires careful code review and stress testing.

    The fs/device filter helps to target the stress tests to some specific
    filesystem pages. The filter condition is block device's major/minor
    numbers:
    - corrupt-filter-dev-major
    - corrupt-filter-dev-minor
    When specified (non -1), only page cache pages that belong to that
    device will be poisoned.

    The filters are checked reliably on the locked and refcounted page.
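
    A hedged sketch of the device check (hwpoison_filter_dev_major/minor
    are the u32 debugfs knobs, with ~0U meaning "unset"):

    static int hwpoison_filter_dev(struct page *p)
    {
        struct address_space *mapping;
        dev_t dev;

        if (hwpoison_filter_dev_major == ~0U &&
            hwpoison_filter_dev_minor == ~0U)
            return 0;                   /* filter not configured */

        mapping = page_mapping(p);      /* safe: page is locked */
        if (mapping == NULL || mapping->host == NULL)
            return -EINVAL;

        dev = mapping->host->i_sb->s_dev;
        if (hwpoison_filter_dev_major != ~0U &&
            hwpoison_filter_dev_major != MAJOR(dev))
            return -EINVAL;
        if (hwpoison_filter_dev_minor != ~0U &&
            hwpoison_filter_dev_minor != MINOR(dev))
            return -EINVAL;
        return 0;
    }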

    Haicheng: clear PG_hwpoison and drop bad page count if filter not OK
    AK: Add documentation

    CC: Haicheng Li
    CC: Nick Piggin
    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Return 0 to indicate success, when
    - the action result is RECOVERED or DELAYED
    - there is no extra page reference

    Note that dirty swapcache pages are kept in the swapcache, so they
    can have one extra reference count.

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Change semantics for
    - IGNORED: not handled; it may well be _unsafe_
    - DELAYED: to be handled later; it is _safe_

    With this change,
    - IGNORED/FAILED mean (maybe) Error
    - DELAYED/RECOVERED mean Success

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • The unpoisoning interface is useful for stress testing tools to
    reclaim poisoned pages (to prevent OOM).

    There is no hardware-level unpoisoning, so this cannot be used for
    real memory errors, only for software-injected errors.

    Note that it may leak pages silently - those which have been removed
    from the LRU cache, but not isolated from the page cache/swap cache
    at hwpoison time. In particular, stress tests of dirty swap cache
    pages should reboot the system before exhausting memory.

    AK: Fix comments, add documentation, add printks, rename symbol

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Most free pages in the buddy system have no PG_buddy set.
    Introduce is_free_buddy_page() for detecting them reliably.
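
    The trick is to scan upward through the possible buddy orders under
    the zone lock (a sketch close to the patch):

    static int is_free_buddy_page(struct page *page)
    {
        struct zone *zone = page_zone(page);
        unsigned long pfn = page_to_pfn(page);
        unsigned long flags;
        int order;

        spin_lock_irqsave(&zone->lock, flags);
        for (order = 0; order < MAX_ORDER; order++) {
            /* head of the order-sized block this page would sit in */
            struct page *head = page - (pfn & ((1 << order) - 1));

            if (PageBuddy(head) && page_order(head) >= order)
                break;
        }
        spin_unlock_irqrestore(&zone->lock, flags);

        return order < MAX_ORDER;
    }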

    CC: Nick Piggin
    CC: Mel Gorman
    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • The buddy page has already been handled at the very beginning.
    So remove the redundant code.

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Introduce delete_from_lru_cache() to
    - clear PG_active and PG_unevictable, to avoid complaints at
    unpoison time
    - move the isolate_lru_page() call back into the handlers instead
    of the entrance of __memory_failure(); this is more hwpoison-filter
    friendly

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Don't try to isolate a still mapped page. Otherwise we will hit the
    BUG_ON(page_mapped(page)) in __remove_from_page_cache().

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Now that "ref" is just a boolean turn it into
    a flags argument. First step is only a single flag
    that makes the code's intention more clear, but more
    may follow.
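
    Concretely (a sketch of the first flag and the unchanged external
    entry point):

    /* caller already holds an extra reference on the page */
    #define MF_COUNT_INCREASED    (1 << 0)

    int __memory_failure(unsigned long pfn, int trapno, int flags);

    void memory_failure(unsigned long pfn, int trapno)
    {
        __memory_failure(pfn, trapno, 0);
    }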

    Signed-off-by: Andi Kleen

    Andi Kleen
     
  • If the page is double referenced in madvise_hwpoison() and
    __memory_failure(), remove_mapping() will fail because it expects
    page_count == 2. Fix it by not grabbing the extra page count in
    __memory_failure().

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Use a different errno than the usual EIO for invalid page numbers.
    This is mainly for better reporting to the injector.

    This also avoids calling action_result() with an invalid pfn.
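
    That is, the entry check becomes roughly:

    if (!pfn_valid(pfn)) {
        printk(KERN_ERR
               "MCE %#lx: memory outside kernel control\n", pfn);
        return -ENXIO;    /* distinct from EIO: invalid page number */
    }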

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • (PG_swapbacked && !PG_lru) pages should not happen.
    Better to treat them as unknown pages.

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • shake_page handles more types of page caches than lru_add_drain_all():

    - per cpu page allocator pages
    - per CPU LRU

    Stops early when the page becomes free.

    Used in follow-on patches.

    Signed-off-by: Andi Kleen

    Andi Kleen
     
  • Remove three degrees of obfuscation, left over from when we had
    CONFIG_UNEVICTABLE_LRU. MLOCK_PAGES is CONFIG_HAVE_MLOCKED_PAGE_BIT is
    CONFIG_HAVE_MLOCK is CONFIG_MMU. rmap.o (and memory-failure.o) are only
    built when CONFIG_MMU, so don't need such conditions at all.

    Somehow, I feel no compulsion to remove the CONFIG_HAVE_MLOCK* lines from
    169 defconfigs: leave those to evolve in due course.

    Signed-off-by: Hugh Dickins
    Cc: Izik Eidus
    Cc: Andrea Arcangeli
    Cc: Nick Piggin
    Reviewed-by: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Lee Schermerhorn
    Cc: Andi Kleen
    Cc: KAMEZAWA Hiroyuki
    Cc: Wu Fengguang
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

04 Dec, 2009

1 commit

  • That is "success", "unknown", "through", "performance", "[re|un]mapping"
    , "access", "default", "reasonable", "[con]currently", "temperature"
    , "channel", "[un]used", "application", "example","hierarchy", "therefore"
    , "[over|under]flow", "contiguous", "threshold", "enough" and others.

    Signed-off-by: André Goddard Rosa
    Signed-off-by: Jiri Kosina

    André Goddard Rosa
     

19 Oct, 2009

3 commits

  • The madvise injector already holds a reference when passing a page
    into the memory-failure code. The code corrects for this additional
    reference in its checks, but the final printk output didn't. Fix
    that.

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen

    Wu Fengguang
     
  • Memory failure on a KSM page currently oopses on its NULL anon_vma in
    page_lock_anon_vma(): that may not be much worse than the consequence
    of ignoring it, but it is better to be consistent with how ZERO_PAGE
    and hugetlb pages and other awkward cases are treated. Just skip it.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andi Kleen

    Hugh Dickins
     
  • Right now we have some trouble with non-atomic access
    to page flags when locking the page. To plug this hole,
    limit error recovery to LRU pages for now.

    This could be fixed better by defining a suitable protocol,
    but let's go this simple way for now.

    This avoids unnecessary races with __set_page_locked() and
    __SetPageSlab*() and maybe more non-atomic page flag operations.

    This loses isolated pages which are currently in page reclaim, but these
    are relatively limited compared to the total memory.

    Signed-off-by: Wu Fengguang
    Signed-off-by: Andi Kleen
    [AK: new description, bug fixes, cleanups]

    Wu Fengguang
     

16 Sep, 2009

1 commit

  • Add the high-level memory handler that poisons pages
    that got corrupted by hardware (typically by a two-bit flip in a
    DIMM or a cache) at the Linux level. The goal is to prevent everyone
    from accessing these pages in the future.

    This is done at the VM level by marking a page hwpoisoned
    and taking the appropriate action based on the type of page
    it is.

    The code that does this is portable and lives in mm/memory-failure.c

    To quote the overview comment:

    High level machine check handler. Handles pages reported by the
    hardware as being corrupted usually due to a 2bit ECC memory or cache
    failure.

    This focuses on pages detected as corrupted in the background.
    When the current CPU tries to consume corruption the currently
    running process can just be killed directly instead. This implies
    that if the error cannot be handled for some reason it's safe to
    just ignore it because no corruption has been consumed yet. Instead
    when that happens another machine check will happen.

    Handles page cache pages in various states. The tricky part
    here is that we can access any page asynchronous to other VM
    users, because memory failures could happen anytime and anywhere,
    possibly violating some of their assumptions. This is why this code
    has to be extremely careful. Generally it tries to use normal locking
    rules, as in get the standard locks, even if that means the
    error handling takes potentially a long time.

    Some of the operations here are somewhat inefficient and have non
    linear algorithmic complexity, because the data structures have not
    been optimized for this case. This is in particular the case
    for the mapping from a vma to a process. Since this case is expected
    to be rare we hope we can get away with this.

    There are in principle two strategies to kill processes on poison:
    - just unmap the data and wait for an actual reference before
    killing
    - kill as soon as corruption is detected.
    Both have advantages and disadvantages and should be used
    in different situations. Right now both are implemented and can
    be switched with a new sysctl vm.memory_failure_early_kill.
    The default is early kill.

    The patch does some rmap data structure walking on its own to
    collect processes to kill. This is unusual because normally all rmap
    data structure knowledge is in rmap.c only. I put it here for now to
    keep everything together, and rmap knowledge has been seeping out
    anyway.

    Includes contributions from Johannes Weiner, Chris Mason, Fengguang Wu,
    Nick Piggin (who did a lot of great work) and others.

    Cc: npiggin@suse.de
    Cc: riel@redhat.com
    Signed-off-by: Andi Kleen
    Acked-by: Rik van Riel
    Reviewed-by: Hidehiro Kawai

    Andi Kleen