24 Mar, 2012

1 commit

  • Change oom_kill_task() to use do_send_sig_info(SEND_SIG_FORCED) instead
    of force_sig(SIGKILL). With the recent changes we no longer need the
    force_ variant to kill the CLONE_NEWPID tasks.

    This is also more correct: force_sig() can race with the exiting thread
    even if oom_kill_task() checks p->mm != NULL, while
    do_send_sig_info(group => true) kills the whole process.

    Signed-off-by: Oleg Nesterov
    Cc: Tejun Heo
    Cc: Anton Vorontsov
    Cc: "Eric W. Biederman"
    Cc: KOSAKI Motohiro
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     

22 Mar, 2012

6 commits

  • The oom killer typically displays the allocation order at the time of oom
    as a part of its diagnostic messages (for global, cpuset, and mempolicy
    ooms).

    The memory controller may also pass the charge order to the oom killer so
    it can emit the same information. This is useful in determining how large
    the memory allocation is that triggered the oom killer.

    Signed-off-by: David Rientjes
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Balbir Singh
    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The oom killer chooses not to kill a thread if:

    - an eligible thread has already been oom killed and has yet to exit,
    and

    - an eligible thread is exiting but has yet to free all its memory and
    is not the thread attempting to currently allocate memory.

    SysRq+F manually invokes the global oom killer to kill a memory-hogging
    task. This is normally done as a last resort to free memory when no
    progress is being made or to test the oom killer itself.

    For both uses, we always want to kill a thread and never defer. This
    patch causes SysRq+F to always kill an eligible thread and can be used to
    force a kill even if another oom killed thread has failed to exit.

    Signed-off-by: David Rientjes
    Acked-by: KOSAKI Motohiro
    Acked-by: Pekka Enberg
    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • printk_ratelimit() uses the global ratelimit state for all printks. The
    oom killer should not be subjected to this state just because another
    subsystem or driver may be flooding the kernel log.

    This patch introduces printk ratelimiting specifically for the oom killer.

    Signed-off-by: David Rientjes
    Acked-by: KOSAKI Motohiro
    Reviewed-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • If a thread is chosen for oom kill and is already PF_EXITING, then the oom
    killer simply sets TIF_MEMDIE and returns. This allows the thread to have
    access to memory reserves so that it may quickly exit. This logic is
    preceded by a comment saying there's no need to alarm the sysadmin.
    This patch adds truth to that statement.

    There's no need to emit any warning about the oom condition if the thread
    is already exiting since it will not be killed. In this condition, just
    silently return from the oom killer, since it's only giving access to memory
    reserves and is otherwise a no-op.

    Acked-by: KOSAKI Motohiro
    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: David Rientjes
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • oom_kill_task() has a single caller, so fold it into its parent function,
    oom_kill_process(). Slightly reduces the number of lines in the oom
    killer.

    Acked-by: KOSAKI Motohiro
    Reviewed-by: KAMEZAWA Hiroyuki
    Signed-off-by: David Rientjes
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • oom_kill_task() returns non-zero iff the chosen process does not have any
    threads with an attached ->mm.

    In such a case, it's better to just return to the page allocator and retry
    the allocation because memory could have been freed in the interim and the
    oom condition may no longer exist. It's unnecessary to loop in the oom
    killer and find another thread to kill.

    This allows both oom_kill_task() and oom_kill_process() to be converted to
    void functions. If the oom condition persists, the oom killer will be
    recalled.

    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: David Rientjes
    Cc: KOSAKI Motohiro
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

11 Jan, 2012

1 commit

  • oom_score_adj is used for guarding processes from the OOM killer. One
    problem is that it's inherited at fork(). When a daemon sets
    oom_score_adj and then forks children, it's hard to know where the value
    was set.

    This patch adds three tracepoints useful for debugging:
    - creating a new task
    - renaming a task (exec)
    - setting oom_score_adj

    To debug, users need to enable these trace points. Filtering may be
    useful, for example:

    # EVENT=/sys/kernel/debug/tracing/events/task/
    # echo "oom_score_adj != 0" > $EVENT/task_newtask/filter
    # echo "oom_score_adj != 0" > $EVENT/task_rename/filter
    # echo 1 > $EVENT/enable
    # EVENT=/sys/kernel/debug/tracing/events/oom/
    # echo 1 > $EVENT/enable

    The output will look like this:
    # grep oom /sys/kernel/debug/tracing/trace
    bash-7699 [007] d..3 5140.744510: oom_score_adj_update: pid=7699 comm=bash oom_score_adj=-1000
    bash-7699 [007] ...1 5151.818022: task_newtask: pid=7729 comm=bash clone_flags=1200011 oom_score_adj=-1000
    ls-7729 [003] ...2 5151.818504: task_rename: pid=7729 oldcomm=bash newcomm=ls oom_score_adj=-1000
    bash-7699 [002] ...1 5175.701468: task_newtask: pid=7730 comm=bash clone_flags=1200011 oom_score_adj=-1000
    grep-7730 [007] ...2 5175.701993: task_rename: pid=7730 oldcomm=bash newcomm=grep oom_score_adj=-1000

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

22 Dec, 2011

1 commit

  • * master: (848 commits)
    SELinux: Fix RCU deref check warning in sel_netport_insert()
    binary_sysctl(): fix memory leak
    mm/vmalloc.c: remove static declaration of va from __get_vm_area_node
    ipmi_watchdog: restore settings when BMC reset
    oom: fix integer overflow of points in oom_badness
    memcg: keep root group unchanged if creation fails
    nilfs2: potential integer overflow in nilfs_ioctl_clean_segments()
    nilfs2: unbreak compat ioctl
    cpusets: stall when updating mems_allowed for mempolicy or disjoint nodemask
    evm: prevent racing during tfm allocation
    evm: key must be set once during initialization
    mmc: vub300: fix type of firmware_rom_wait_states module parameter
    Revert "mmc: enable runtime PM by default"
    mmc: sdhci: remove "state" argument from sdhci_suspend_host
    x86, dumpstack: Fix code bytes breakage due to missing KERN_CONT
    IB/qib: Correct sense on freectxts increment and decrement
    RDMA/cma: Verify private data length
    cgroups: fix a css_set not found bug in cgroup_attach_proc
    oprofile: Fix uninitialized memory access when writing to writing to oprofilefs
    Revert "xen/pv-on-hvm kexec: add xs_reset_watches to shutdown watches from old kernel"
    ...

    Conflicts:
    kernel/cgroup_freezer.c

    Rafael J. Wysocki
     

21 Dec, 2011

1 commit

  • An integer overflow will happen on 64-bit archs if a task's sum of rss,
    swapents and nr_ptes exceeds the value (2^31)/1000. This was introduced by
    commit

    f755a04 oom: use pte pages in OOM score

    where the oom score computation was divided into several steps and is no
    longer computed as one expression in unsigned long (rss, swapents and
    nr_ptes are unsigned long), while the result value assigned to points (an
    int) is in the range 1..1000. So there could be an int overflow while
    computing

    176 points *= 1000;

    and points may end up with a negative value, meaning the oom score for a
    mem-hog task will be one because of the subsequent check:

    196 if (points <= 0)
    197     return 1;
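
    A minimal userspace sketch of the same overflow (the names and numbers
    are illustrative, not the kernel's; compile with -fwrapv so the signed
    wrap-around shown here is well defined, mirroring the kernel's
    -fno-strict-overflow build flag):

    #include <stdio.h>

    int main(void)
    {
            /* sum of rss, swapents and nr_ptes, larger than (2^31)/1000 */
            unsigned long pages = 3000000UL;
            int points = pages;     /* fits in an int on its own ...        */

            points *= 1000;         /* ... but 3,000,000,000 wraps negative */
            printf("points = %d\n", points);
            return 0;
    }
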
    Acked-by: KOSAKI Motohiro
    Acked-by: Oleg Nesterov
    Acked-by: David Rientjes
    Cc: [2.6.36+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Frantisek Hrbata
     

22 Nov, 2011

1 commit

  • thaw_process() now has only internal users - system and cgroup
    freezers. Remove the unnecessary return value, rename, unexport and
    collapse __thaw_process() into it. This will help further updates to
    the freezer code.

    -v3: oom_kill grew a use of thaw_process() while this patch was
    pending. Convert it to use __thaw_task() for now. In the longer
    term, this should be handled by allowing tasks to die if killed
    even if it's frozen.

    -v2: minor style update as suggested by Matt.

    Signed-off-by: Tejun Heo
    Cc: Paul Menage
    Cc: Matt Helsley

    Tejun Heo
     

16 Nov, 2011

1 commit

  • Commit c9f01245 ("oom: remove oom_disable_count") has removed the
    oom_disable_count counter which has been used for early break out from
    oom_badness so we could never select a task with oom_score_adj set to
    OOM_SCORE_ADJ_MIN (oom disabled).

    Now that the counter is gone we always go through the heuristic
    calculation and always return a non-zero positive value. This means
    that we can end up killing a task with OOM disabled because it is
    indistinguishable from regular tasks using 1% of memory (or 3% for
    CAP_SYS_ADMIN tasks) or from tasks with oom_score_adj set but OOM
    enabled.

    Let's break out early if the task should have OOM disabled.

    Signed-off-by: Michal Hocko
    Acked-by: David Rientjes
    Acked-by: KOSAKI Motohiro
    Cc: Oleg Nesterov
    Cc: Ying Han
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

07 Nov, 2011

1 commit

  • * 'modsplit-Oct31_2011' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux: (230 commits)
    Revert "tracing: Include module.h in define_trace.h"
    irq: don't put module.h into irq.h for tracking irqgen modules.
    bluetooth: macroize two small inlines to avoid module.h
    ip_vs.h: fix implicit use of module_get/module_put from module.h
    nf_conntrack.h: fix up fallout from implicit moduleparam.h presence
    include: replace linux/module.h with "struct module" wherever possible
    include: convert various register fcns to macros to avoid include chaining
    crypto.h: remove unused crypto_tfm_alg_modname() inline
    uwb.h: fix implicit use of asm/page.h for PAGE_SIZE
    pm_runtime.h: explicitly requires notifier.h
    linux/dmaengine.h: fix implicit use of bitmap.h and asm/page.h
    miscdevice.h: fix up implicit use of lists and types
    stop_machine.h: fix implicit use of smp.h for smp_processor_id
    of: fix implicit use of errno.h in include/linux/of.h
    of_platform.h: delete needless include
    acpi: remove module.h include from platform/aclinux.h
    miscdevice.h: delete unnecessary inclusion of module.h
    device_cgroup.h: delete needless include
    net: sch_generic remove redundant use of
    net: inet_timewait_sock doesnt need
    ...

    Fix up trivial conflicts (other header files, and removal of the ab3550 mfd driver) in
    - drivers/media/dvb/frontends/dibx000_common.c
    - drivers/media/video/{mt9m111.c,ov6650.c}
    - drivers/mfd/ab3550-core.c
    - include/linux/dmaengine.h

    Linus Torvalds
     

01 Nov, 2011

4 commits

  • test_set_oom_score_adj() was introduced in 72788c385604 ("oom: replace
    PF_OOM_ORIGIN with toggling oom_score_adj") to temporarily elevate
    current's oom_score_adj for ksm and swapoff without requiring an
    additional per-process flag.

    Using that function to both set oom_score_adj to OOM_SCORE_ADJ_MAX and
    then reinstate the previous value is racy since it's possible that
    userspace can set the value to something else itself before the old value
    is reinstated. That results in userspace setting current's oom_score_adj
    to a different value and then the kernel immediately setting it back to
    its previous value without notification.

    To fix this, a new compare_swap_oom_score_adj() function is introduced
    with the same semantics as a compare-and-swap (CAS) instruction, such as
    CMPXCHG on x86. It is used to reinstate the previous value of
    oom_score_adj if and only if the present value is the same as the old
    value.
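
    A minimal userspace sketch of the "reinstate only if unchanged" semantics
    using C11 atomics; this only illustrates the idea, not the kernel's
    implementation, and the names below are made up for illustration:

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic int oom_score_adj;       /* stand-in for the per-task value */

    static void reinstate_if_unchanged(int expected_val, int saved_val)
    {
            int expected = expected_val;

            /* Write saved_val back only if the value is still expected_val. */
            atomic_compare_exchange_strong(&oom_score_adj, &expected, saved_val);
    }

    int main(void)
    {
            atomic_store(&oom_score_adj, 1000); /* temporarily raised to the max */
            atomic_store(&oom_score_adj, 42);   /* userspace changed it meanwhile */

            reinstate_if_unchanged(1000, 0);    /* no-op: value is no longer 1000 */
            printf("%d\n", atomic_load(&oom_score_adj));    /* prints 42 */
            return 0;
    }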

    Signed-off-by: David Rientjes
    Cc: Oleg Nesterov
    Cc: Ying Han
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • This removes mm->oom_disable_count entirely since it's unnecessary and
    currently buggy. The counter was intended to be per-process but it's
    currently decremented in the exit path for each thread that exits, causing
    it to underflow.

    The count was originally intended to prevent oom killing threads that
    share memory with threads that cannot be killed since it doesn't lead to
    future memory freeing. The counter could be fixed to represent all
    threads sharing the same mm, but it's better to remove the count since:

    - it is possible that the OOM_DISABLE thread sharing memory with the
    victim is waiting on that thread to exit and will actually cause
    future memory freeing, and

    - there is no guarantee that a thread is disabled from oom killing just
    because another thread sharing its mm is oom disabled.

    Signed-off-by: David Rientjes
    Reported-by: Oleg Nesterov
    Reviewed-by: Oleg Nesterov
    Cc: Ying Han
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • After selecting a task to kill, the oom killer iterates all processes and
    kills all other threads that share the same mm_struct in different thread
    groups. It would not otherwise be helpful to kill a thread if its memory
    would not be subsequently freed.

    A kernel thread, however, may assume a user thread's mm by using
    use_mm(). This is only temporary and should not result in sending a
    SIGKILL to that kthread.

    This patch ensures that only user threads and not kthreads are sent a
    SIGKILL if they share the same mm_struct as the oom killed task.

    Signed-off-by: David Rientjes
    Reviewed-by: Michal Hocko
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • If a thread has been oom killed and is frozen, thaw it before returning to
    the page allocator. Otherwise, it can stay frozen indefinitely and no
    memory will be freed.

    Signed-off-by: David Rientjes
    Reported-by: Konstantin Khlebnikov
    Cc: Oleg Nesterov
    Cc: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Cc: "Rafael J. Wysocki"
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

02 Aug, 2011

1 commit

  • exit_mm() sets ->mm to NULL and then does mmput()->exit_mmap(), which
    frees the memory.

    However select_bad_process() checks ->mm != NULL before TIF_MEMDIE,
    so it continues to kill other tasks even if we have the oom-killed
    task freeing its memory.

    Change select_bad_process() to check ->mm after TIF_MEMDIE, but skip
    the tasks which have already passed exit_notify() to ensure a zombie
    with TIF_MEMDIE set can't block the oom killer. Alternatively we could
    probably clear TIF_MEMDIE after exit_mmap().

    Signed-off-by: Oleg Nesterov
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     

26 Jul, 2011

1 commit

  • The badness() function in the oom killer was renamed to oom_badness() in
    a63d83f427fb ("oom: badness heuristic rewrite") for clarity, since it is
    a globally exported function.

    The prototype for the old function still existed in linux/oom.h, so remove
    it. There are no existing users.

    Also fixes documentation and comment references to badness() and adjusts
    them accordingly.

    Signed-off-by: David Rientjes
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

23 Jun, 2011

1 commit

  • task_ptrace(task) simply dereferences task->ptrace and isn't even used
    consistently, only adding confusion. Kill it and directly access
    ->ptrace instead.

    This doesn't introduce any behavior change.

    Signed-off-by: Tejun Heo
    Signed-off-by: Oleg Nesterov

    Tejun Heo
     

25 May, 2011

1 commit

  • There's a kernel-wide shortage of per-process flags, so it's always
    helpful to trim one when possible without incurring a significant penalty.
    It's even more important when you're planning on adding a per-process
    flag yourself, which I plan to do shortly for transparent hugepages.

    PF_OOM_ORIGIN is used by ksm and swapoff to prefer current since it has a
    tendency to allocate large amounts of memory and should be preferred for
    killing over other tasks. We'd rather immediately kill the task making
    the errant syscall rather than penalizing an innocent task.

    This patch removes PF_OOM_ORIGIN since its behavior is equivalent to
    setting the process's oom_score_adj to OOM_SCORE_ADJ_MAX.

    The process's old oom_score_adj is stored and then set to
    OOM_SCORE_ADJ_MAX during the time it used to have PF_OOM_ORIGIN. The old
    value is then reinstated when the process should no longer be considered a
    high priority for oom killing.

    Signed-off-by: David Rientjes
    Reviewed-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: Hugh Dickins
    Cc: Izik Eidus
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

29 Apr, 2011

1 commit

  • PTE pages eat up memory just like anything else, but we do not account for
    them in any way in the OOM scores. They are also _guaranteed_ to get
    freed up when a process is OOM killed, while RSS is not.

    Reported-by: Dave Hansen
    Signed-off-by: KOSAKI Motohiro
    Cc: Hugh Dickins
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Acked-by: David Rientjes
    Cc: [2.6.36+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     

15 Apr, 2011

1 commit

  • This is an almost-revert of commit 93b43fa ("oom: give the dying task a
    higher priority").

    That commit dramatically improved oom killer logic when a fork-bomb
    occurs. But I've found that it has a nasty corner case: the cpu cgroup
    has a strange default RT runtime. It's 0! That means that if a process
    under a cpu cgroup is promoted to the RT scheduling class, the process
    never runs at all.

    If an admin inserts a !RT process into a cpu cgroup with rtruntime=0, it
    usually runs perfectly because a !RT task isn't affected by the rtruntime
    knob. But if the task is promoted to RT via an explicit setscheduler()
    syscall or by the oom killer, it can't run at all. In short, the oom
    killer doesn't work at all if admins are using the cpu cgroup and don't
    touch the rtruntime knob.

    Eventually, the kernel may hang when an oom kill occurs. The original
    author Luis and I agreed to disable this logic.

    Signed-off-by: KOSAKI Motohiro
    Acked-by: Luis Claudio R. Goncalves
    Acked-by: KAMEZAWA Hiroyuki
    Reviewed-by: Minchan Kim
    Acked-by: David Rientjes
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     

25 Mar, 2011

1 commit

  • Commit ddd588b5dd55 ("oom: suppress nodes that are not allowed from
    meminfo on oom kill") moved lib/show_mem.o out of lib/lib.a, which
    resulted in build warnings on all architectures that implement their own
    versions of show_mem():

    lib/lib.a(show_mem.o): In function `show_mem':
    show_mem.c:(.text+0x1f4): multiple definition of `show_mem'
    arch/sparc/mm/built-in.o:(.text+0xd70): first defined here

    The fix is to remove __show_mem() and add its argument to show_mem() in
    all implementations to prevent this breakage.

    Architectures that implement their own show_mem() actually don't do
    anything with the argument yet, but they could be made to filter nodes
    that aren't allowed in the current context in the future just like the
    generic implementation.

    Reported-by: Stephen Rothwell
    Reported-by: James Bottomley
    Suggested-by: Andrew Morton
    Signed-off-by: David Rientjes
    Signed-off-by: Linus Torvalds

    David Rientjes
     

24 Mar, 2011

1 commit

  • When a memcg is oom and current has already received a SIGKILL, then give
    it access to memory reserves with a higher scheduling priority so that it
    may quickly exit and free its memory.

    This is identical to the global oom killer and is done even before
    checking for panic_on_oom: a pending SIGKILL here while panic_on_oom is
    selected is guaranteed to have come from userspace; the thread only needs
    access to memory reserves to exit and thus we don't unnecessarily panic
    the machine until the kernel has no last resort to free memory.

    Signed-off-by: David Rientjes
    Cc: Balbir Singh
    Cc: Daisuke Nishimura
    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

23 Mar, 2011

4 commits

  • The oom killer is extremely verbose for machines with a large number of
    cpus and/or nodes. This verbosity can often be harmful if it causes other
    important messages to be scrolled from the kernel log and incurs a
    significant time delay, specifically for kernels with CONFIG_NODES_SHIFT >
    8.

    This patch causes only memory information to be displayed for nodes that
    are allowed by current's cpuset when dumping the VM state. Information
    for all other nodes is irrelevant to the oom condition; we don't care if
    there's an abundance of memory elsewhere if we can't access it.

    This only affects the behavior of dumping memory information when an oom
    is triggered. Other dumps, such as for sysrq+m, still display the
    unfiltered form when using the existing show_mem() interface.

    Additionally, the per-cpu pageset statistics are extremely verbose in oom
    killer output, so they are now suppressed. This removes

    nodes_weight(current->mems_allowed) * (1 + nr_cpus)

    lines from the oom killer output.

    Callers may use __show_mem(SHOW_MEM_FILTER_NODES) to filter disallowed
    nodes.

    Signed-off-by: David Rientjes
    Cc: Mel Gorman
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The oom killer naturally defers killing anything if it finds an eligible
    task that is already exiting and has yet to detach its ->mm. This avoids
    unnecessarily killing tasks when one is already in the exit path and may
    free enough memory that the oom killer is no longer needed. This is
    detected by PF_EXITING since threads that have already detached their ->mm
    are no longer considered at all.

    The problem with always deferring when a thread is PF_EXITING, however, is
    that it may never actually exit when being traced, specifically if another
    task is tracing it with PTRACE_O_TRACEEXIT. The oom killer does not want
    to defer in this case since there is no guarantee that thread will ever
    exit without intervention.

    This patch will now only defer the oom killer when a thread is PF_EXITING
    and no ptracer has stopped its progress in the exit path. It also ensures
    that a child is sacrificed for the chosen parent only if it has a
    different ->mm as the comment implies: this ensures that the thread group
    leader is always targeted appropriately.

    Signed-off-by: David Rientjes
    Reported-by: Oleg Nesterov
    Cc: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Cc: Andrey Vagin
    Cc: [2.6.38.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • We shouldn't defer oom killing if a thread has already detached its ->mm
    and still has TIF_MEMDIE set. Memory needs to be freed, so kill other
    threads that pin the same ->mm or find another task to kill.

    Signed-off-by: Andrey Vagin
    Signed-off-by: David Rientjes
    Cc: KOSAKI Motohiro
    Cc: [2.6.38.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Vagin
     
  • This patch prevents unnecessary oom kills or kernel panics by reverting
    two commits:

    495789a5 (oom: make oom_score to per-process value)
    cef1d352 (oom: multi threaded process coredump don't make deadlock)

    First, 495789a5 (oom: make oom_score to per-process value) ignores the
    fact that all threads in a thread group do not necessarily exit at the
    same time.

    It is imperative that select_bad_process() detect threads that are in the
    exit path, specifically those with PF_EXITING set, to prevent needlessly
    killing additional tasks. If a process is oom killed and the thread group
    leader exits, select_bad_process() cannot detect the other threads that
    are PF_EXITING by iterating over only processes. Thus, it currently
    chooses another task unnecessarily for oom kill or panics the machine when
    nothing else is eligible.

    By iterating over threads instead, it is possible to detect threads that
    are exiting and nominate them for oom kill so they get access to memory
    reserves.

    Second, cef1d352 (oom: multi threaded process coredump don't make
    deadlock) erroneously avoids making the oom killer a no-op when an
    eligible thread other than current is found to be exiting. We want to
    detect this situation so that we may allow that exiting thread time to
    exit and free its memory; if it is able to exit on its own, that should
    free memory so current is no longer oom. If it is not able to exit on
    its own, the oom killer will nominate it for oom kill which, in this
    case, only means it will get access to memory reserves.

    Without this change, it is easy for the oom killer to unnecessarily target
    tasks when all threads of a victim don't exit before the thread group
    leader or, in the worst case, panic the machine.

    Signed-off-by: David Rientjes
    Cc: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Cc: Hugh Dickins
    Cc: Andrey Vagin
    Cc: [2.6.38.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

15 Mar, 2011

2 commits

  • This reverts the parent commit. I hate doing that, but it's generating
    some discussion ("half of it is right"), and since I am planning on
    doing the 2.6.38 release later today we can punt it to stable if
    required. Let's not rock the boat right now.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • oom_kill_process() starts with victim_points == 0. This means that
    (most likely) any child has more points and can be killed erroneously.

    Also, "children has a different mm" doesn't match the reality, we should
    check child->mm != t->mm. This check is not exactly correct if t->mm ==
    NULL but this doesn't really matter, oom_kill_task() will kill them
    anyway.

    Note: "Kill all processes sharing p->mm" in oom_kill_task() is wrong
    too.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     

27 Oct, 2010

2 commits

  • It's necessary to kill all threads that share an oom killed task's mm if
    the goal is to lead to future memory freeing.

    This patch reintroduces the code removed in 8c5cd6f3 (oom: oom_kill
    doesn't kill vfork parent (or child)) since that change is now obsolete.

    It's now guaranteed that any task passed to oom_kill_task() does not share
    an mm with any thread that is unkillable. Thus, we're safe to issue a
    SIGKILL to any thread sharing the same mm.

    This is especially necessary to solve an mm->mmap_sem livelock issue
    where an oom killed thread must acquire the lock in the exit path while
    another thread is holding it in the page allocator while trying to
    allocate memory itself (and will preempt the oom killer since a task was
    already killed). Since tasks with pending fatal signals are now granted
    access to memory reserves, the thread holding the lock may quickly
    allocate and release the lock so that the oom killed task may exit.

    This mainly is for threads that are cloned with CLONE_VM but not
    CLONE_THREAD, so they are in a different thread group. Non-NPTL threads
    exist in the wild and this change is necessary to prevent the livelock in
    such cases. We care more about preventing the livelock than incurring the
    additional tasklist scan in the oom killer when a task has been killed.
    Systems that are sufficiently large to not want the tasklist scan in the
    oom killer in the first place already have the option of enabling
    /proc/sys/vm/oom_kill_allocating_task, which was designed specifically for
    that purpose.

    This code had existed in the oom killer for over eight years dating back
    to the 2.4 kernel.

    [akpm@linux-foundation.org: add nice comment]
    Signed-off-by: David Rientjes
    Acked-by: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Ying Han
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The oom killer's goal is to kill a memory-hogging task so that it may
    exit, free its memory, and allow the current context to allocate the
    memory that triggered it in the first place. Thus, killing a task is
    pointless if other threads sharing its mm cannot be killed because of its
    /proc/pid/oom_adj or /proc/pid/oom_score_adj value.

    This patch checks whether any other thread sharing p->mm has an
    oom_score_adj of OOM_SCORE_ADJ_MIN. If so, the thread cannot be killed
    and oom_badness(p) returns 0, meaning it's unkillable.

    Signed-off-by: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Ying Han
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

23 Sep, 2010

2 commits

  • /proc/sys/vm/oom_dump_tasks is enabled by default, so it's necessary to
    limit the information it emits as much as possible.

    The tasklist dump should be filtered to only those tasks that are eligible
    for oom kill. This is already done for memcg ooms, but this patch extends
    it to both cpuset and mempolicy ooms as well as init.

    In addition to suppressing irrelevant information, this also reduces
    confusion since users currently don't know which tasks in the tasklist
    aren't eligible for kill (such as those attached to cpusets or bound to
    mempolicies with a disjoint set of mems or nodes, respectively) since that
    information is not shown.

    Signed-off-by: David Rientjes
    Reviewed-by: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • A task's badness score is roughly a proportion of its rss and swap
    compared to the system's capacity. The scale ranges from 0 to 1000 with
    the highest score chosen for kill. Thus, this scale operates on a
    resolution of 0.1% of RAM + swap. Admin tasks are also given a 3% bonus,
    so the badness score of an admin task using 3% of memory, for example,
    would still be 0.

    It's possible that an exceptionally large number of tasks will combine to
    exhaust all resources but never have a single task that uses more than
    0.1% of RAM and swap (or 3.0% for admin tasks).

    This patch ensures that the badness score of any eligible task is never 0
    so the machine doesn't unnecessarily panic because it cannot find a task
    to kill.
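
    A simplified userspace sketch of the scale described above (the real
    oom_badness() heuristic has more inputs; this only shows the 0..1000
    proportion, the 3% admin discount, and the clamp to a minimum score of 1):

    #include <stdbool.h>
    #include <stdio.h>

    static long badness(unsigned long rss, unsigned long swap,
                        unsigned long capacity, bool admin)
    {
            long points = (rss + swap) * 1000 / capacity;

            if (admin)
                    points -= 30;               /* 3% of the 0..1000 scale */
            return points > 0 ? points : 1;     /* never 0 for an eligible task */
    }

    int main(void)
    {
            /* admin task using 3% of a million-page system: 30 - 30 -> clamped to 1 */
            printf("%ld\n", badness(30000, 0, 1000000, true));
            return 0;
    }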

    Signed-off-by: David Rientjes
    Cc: Dave Hansen
    Cc: Nitin Gupta
    Cc: Pekka Enberg
    Cc: Minchan Kim
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

21 Aug, 2010

1 commit

  • dump_tasks() needs to hold the RCU read lock around its access of the
    target task's UID. To this end it should use task_uid() as it only needs
    that one thing from the creds.

    The fact that dump_tasks() holds tasklist_lock is insufficient to prevent
    the target process from replacing its credentials on another CPU.

    This patch therefore changes it to call rcu_read_lock() explicitly.

    ===================================================
    [ INFO: suspicious rcu_dereference_check() usage. ]
    ---------------------------------------------------
    mm/oom_kill.c:410 invoked rcu_dereference_check() without protection!

    other info that might help us debug this:

    rcu_scheduler_active = 1, debug_locks = 1
    4 locks held by kworker/1:2/651:
    #0: (events){+.+.+.}, at: [] process_one_work+0x137/0x4a0
    #1: (moom_work){+.+...}, at: [] process_one_work+0x137/0x4a0
    #2: (tasklist_lock){.+.+..}, at: [] out_of_memory+0x164/0x3f0
    #3: (&(&p->alloc_lock)->rlock){+.+...}, at: [] find_lock_task_mm+0x2e/0x70

    Signed-off-by: KOSAKI Motohiro
    Signed-off-by: David Howells
    Acked-by: Paul E. McKenney
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro