25 May, 2011

1 commit

  • There's a kernel-wide shortage of per-process flags, so it's always
    helpful to trim one when possible without incurring a significant penalty.
    It's even more important when you're planning on adding a per-process
    flag yourself, which I plan to do shortly for transparent hugepages.

    PF_OOM_ORIGIN is used by ksm and swapoff to prefer current since it has a
    tendency to allocate large amounts of memory and should be preferred for
    killing over other tasks. We'd rather immediately kill the task making
    the errant syscall than penalize an innocent task.

    This patch removes PF_OOM_ORIGIN since its behavior is equivalent to
    setting the process's oom_score_adj to OOM_SCORE_ADJ_MAX.

    The process's old oom_score_adj is stored and then set to
    OOM_SCORE_ADJ_MAX during the time it used to have PF_OOM_ORIGIN. The old
    value is then reinstated when the process should no longer be considered a
    high priority for oom killing.

    Signed-off-by: David Rientjes
    Reviewed-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: Hugh Dickins
    Cc: Izik Eidus
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

29 Apr, 2011

1 commit

  • PTE pages eat up memory just like anything else, but we do not account for
    them in any way in the OOM scores. They are also _guaranteed_ to get
    freed up when a process is OOM killed, while RSS is not.

    Reported-by: Dave Hansen
    Signed-off-by: KOSAKI Motohiro
    Cc: Hugh Dickins
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Acked-by: David Rientjes
    Cc: [2.6.36+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     

15 Apr, 2011

1 commit

  • This is an almost-revert of commit 93b43fa ("oom: give the dying task a
    higher priority").

    That commit dramatically improved oom killer logic when a fork-bomb
    occurs, but I've found that it has a nasty corner case: the cpu cgroup
    has a strange default RT runtime. It's 0! As a result, if a process
    under a cpu cgroup is promoted to an RT scheduling class, the process
    never runs at all.

    If an admin inserts a !RT process into a cpu cgroup with rt_runtime=0,
    it usually runs perfectly because a !RT task isn't affected by the
    rt_runtime knob. But if it is promoted to an RT task via an explicit
    setscheduler() syscall or an OOM, the task can't run at all. In short,
    the oom killer doesn't work at all if admins are using the cpu cgroup
    and don't touch the rt_runtime knob.

    Eventually, the kernel may hang when an oom kill occurs. The original
    author Luis and I agreed to disable this logic.

    Signed-off-by: KOSAKI Motohiro
    Acked-by: Luis Claudio R. Goncalves
    Acked-by: KAMEZAWA Hiroyuki
    Reviewed-by: Minchan Kim
    Acked-by: David Rientjes
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     

25 Mar, 2011

1 commit

  • Commit ddd588b5dd55 ("oom: suppress nodes that are not allowed from
    meminfo on oom kill") moved lib/show_mem.o out of lib/lib.a, which
    resulted in build warnings on all architectures that implement their own
    versions of show_mem():

    lib/lib.a(show_mem.o): In function `show_mem':
    show_mem.c:(.text+0x1f4): multiple definition of `show_mem'
    arch/sparc/mm/built-in.o:(.text+0xd70): first defined here

    The fix is to remove __show_mem() and add its argument to show_mem() in
    all implementations to prevent this breakage.

    Architectures that implement their own show_mem() actually don't do
    anything with the argument yet, but they could be made to filter nodes
    that aren't allowed in the current context in the future just like the
    generic implementation.

    Reported-by: Stephen Rothwell
    Reported-by: James Bottomley
    Suggested-by: Andrew Morton
    Signed-off-by: David Rientjes
    Signed-off-by: Linus Torvalds

    David Rientjes
     

24 Mar, 2011

1 commit

  • When a memcg is oom and current has already received a SIGKILL, then give
    it access to memory reserves with a higher scheduling priority so that it
    may quickly exit and free its memory.

    This is identical to the global oom killer and is done even before
    checking for panic_on_oom: a pending SIGKILL here while panic_on_oom is
    selected is guaranteed to have come from userspace; the thread only needs
    access to memory reserves to exit and thus we don't unnecessarily panic
    the machine until the kernel has no last resort to free memory.

    Signed-off-by: David Rientjes
    Cc: Balbir Singh
    Cc: Daisuke Nishimura
    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

23 Mar, 2011

4 commits

  • The oom killer is extremely verbose for machines with a large number of
    cpus and/or nodes. This verbosity can often be harmful if it causes other
    important messages to be scrolled from the kernel log and incurs a
    significant time delay, specifically for kernels with CONFIG_NODES_SHIFT >
    8.

    This patch causes only memory information to be displayed for nodes that
    are allowed by current's cpuset when dumping the VM state. Information
    for all other nodes is irrelevant to the oom condition; we don't care if
    there's an abundance of memory elsewhere if we can't access it.

    This only affects the behavior of dumping memory information when an oom
    is triggered. Other dumps, such as for sysrq+m, still display the
    unfiltered form when using the existing show_mem() interface.

    Additionally, the per-cpu pageset statistics are extremely verbose in oom
    killer output, so they are now suppressed. This removes

    nodes_weight(current->mems_allowed) * (1 + nr_cpus)

    lines from the oom killer output.

    Callers may use __show_mem(SHOW_MEM_FILTER_NODES) to filter disallowed
    nodes.

    Signed-off-by: David Rientjes
    Cc: Mel Gorman
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The oom killer naturally defers killing anything if it finds an eligible
    task that is already exiting and has yet to detach its ->mm. This avoids
    unnecessarily killing tasks when one is already in the exit path and may
    free enough memory that the oom killer is no longer needed. This is
    detected by PF_EXITING since threads that have already detached their
    ->mm are no longer considered at all.

    The problem with always deferring when a thread is PF_EXITING, however, is
    that it may never actually exit when being traced, specifically if another
    task is tracing it with PTRACE_O_TRACEEXIT. The oom killer does not want
    to defer in this case since there is no guarantee that thread will ever
    exit without intervention.

    This patch will now only defer the oom killer when a thread is PF_EXITING
    and no ptracer has stopped its progress in the exit path. It also ensures
    that a child is sacrificed for the chosen parent only if it has a
    ->mm different from its parent's, as the comment implies: this ensures
    that the thread group leader is always targeted appropriately.

    Signed-off-by: David Rientjes
    Reported-by: Oleg Nesterov
    Cc: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Cc: Andrey Vagin
    Cc: [2.6.38.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • We shouldn't defer oom killing if a thread has already detached its ->mm
    and still has TIF_MEMDIE set. Memory needs to be freed, so either kill
    other threads that pin the same ->mm or find another task to kill.

    Signed-off-by: Andrey Vagin
    Signed-off-by: David Rientjes
    Cc: KOSAKI Motohiro
    Cc: [2.6.38.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Vagin
     
  • This patch prevents unnecessary oom kills or kernel panics by reverting
    two commits:

    495789a5 (oom: make oom_score to per-process value)
    cef1d352 (oom: multi threaded process coredump don't make deadlock)

    First, 495789a5 (oom: make oom_score to per-process value) ignores the
    fact that all threads in a thread group do not necessarily exit at the
    same time.

    It is imperative that select_bad_process() detect threads that are in the
    exit path, specifically those with PF_EXITING set, to prevent needlessly
    killing additional tasks. If a process is oom killed and the thread group
    leader exits, select_bad_process() cannot detect the other threads that
    are PF_EXITING by iterating over only processes. Thus, it currently
    chooses another task unnecessarily for oom kill or panics the machine when
    nothing else is eligible.

    By iterating over threads instead, it is possible to detect threads that
    are exiting and nominate them for oom kill so they get access to memory
    reserves.

    Second, cef1d352 (oom: multi threaded process coredump don't make
    deadlock) erroneously avoids making the oom killer a no-op when an
    eligible thread other than current is found to be exiting. We want to
    detect this situation so that we may allow that exiting thread time to
    exit and free its memory; if it is able to exit on its own, that should
    free memory so current is no longer oom. If it is not able to exit on its
    own, the oom killer will nominate it for oom kill which, in this case,
    only means it will get access to memory reserves.

    Without this change, it is easy for the oom killer to unnecessarily target
    tasks when all threads of a victim don't exit before the thread group
    leader or, in the worst case, panic the machine.

    Signed-off-by: David Rientjes
    Cc: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Cc: Hugh Dickins
    Cc: Andrey Vagin
    Cc: [2.6.38.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

15 Mar, 2011

2 commits

  • This reverts the parent commit. I hate doing that, but it's generating
    some discussion ("half of it is right"), and since I am planning on
    doing the 2.6.38 release later today we can punt it to stable if
    required. Let's not rock the boat right now.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • oom_kill_process() starts with victim_points == 0. This means that
    (most likely) any child has more points and can be killed erroneously.

    Also, "children has a different mm" doesn't match reality; we should
    check child->mm != t->mm. This check is not exactly correct if t->mm ==
    NULL, but this doesn't really matter: oom_kill_task() will kill them
    anyway.

    Note: "Kill all processes sharing p->mm" in oom_kill_task() is wrong
    too.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     

27 Oct, 2010

2 commits

  • It's necessary to kill all threads that share an oom killed task's mm if
    the goal is to lead to future memory freeing.

    This patch reintroduces the code removed in 8c5cd6f3 (oom: oom_kill
    doesn't kill vfork parent (or child)) since that change is now obsolete.

    It's now guaranteed that any task passed to oom_kill_task() does not share
    an mm with any thread that is unkillable. Thus, we're safe to issue a
    SIGKILL to any thread sharing the same mm.

    This is especially necessary to solve an mm->mmap_sem livelock issue
    whereby an oom killed thread must acquire the lock in the exit path while
    another thread is holding it in the page allocator while trying to
    allocate memory itself (and will preempt the oom killer since a task was
    already killed). Since tasks with pending fatal signals are now granted
    access to memory reserves, the thread holding the lock may quickly
    allocate and release the lock so that the oom killed task may exit.

    This mainly is for threads that are cloned with CLONE_VM but not
    CLONE_THREAD, so they are in a different thread group. Non-NPTL threads
    exist in the wild and this change is necessary to prevent the livelock in
    such cases. We care more about preventing the livelock than about the
    additional tasklist scan in the oom killer when a task has been killed.
    Systems that are sufficiently large to not want the tasklist scan in the
    oom killer in the first place already have the option of enabling
    /proc/sys/vm/oom_kill_allocating_task, which was designed specifically for
    that purpose.

    This code had existed in the oom killer for over eight years dating back
    to the 2.4 kernel.

    [akpm@linux-foundation.org: add nice comment]
    Signed-off-by: David Rientjes
    Acked-by: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Ying Han
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The oom killer's goal is to kill a memory-hogging task so that it may
    exit, free its memory, and allow the current context to allocate the
    memory that triggered it in the first place. Thus, killing a task is
    pointless if other threads sharing its mm cannot be killed because of its
    /proc/pid/oom_adj or /proc/pid/oom_score_adj value.

    This patch checks whether any other thread sharing p->mm has an
    oom_score_adj of OOM_SCORE_ADJ_MIN. If so, the thread cannot be killed
    and oom_badness(p) returns 0, meaning it's unkillable.

    Signed-off-by: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Ying Han
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

23 Sep, 2010

2 commits

  • /proc/sys/vm/oom_dump_tasks is enabled by default, so it's necessary to
    limit the information it emits as much as possible.

    The tasklist dump should be filtered to only those tasks that are eligible
    for oom kill. This is already done for memcg ooms, but this patch extends
    it to both cpuset and mempolicy ooms as well as init.

    In addition to suppressing irrelevant information, this also reduces
    confusion since users currently don't know which tasks in the tasklist
    aren't eligible for kill (such as those attached to cpusets or bound to
    mempolicies with a disjoint set of mems or nodes, respectively) since that
    information is not shown.

    Signed-off-by: David Rientjes
    Reviewed-by: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • A task's badness score is roughly a proportion of its rss and swap
    compared to the system's capacity. The scale ranges from 0 to 1000 with
    the highest score chosen for kill. Thus, this scale operates on a
    resolution of 0.1% of RAM + swap. Admin tasks are also given a 3% bonus,
    so the badness score of an admin task using 3% of memory, for example,
    would still be 0.

    It's possible that an exceptionally large number of tasks will combine to
    exhaust all resources but never have a single task that uses more than
    0.1% of RAM and swap (or 3.0% for admin tasks).

    This patch ensures that the badness score of any eligible task is never 0
    so the machine doesn't unnecessarily panic because it cannot find a task
    to kill.

    Signed-off-by: David Rientjes
    Cc: Dave Hansen
    Cc: Nitin Gupta
    Cc: Pekka Enberg
    Cc: Minchan Kim
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

21 Aug, 2010

3 commits

  • dump_tasks() needs to hold the RCU read lock around its access of the
    target task's UID. To this end it should use task_uid() as it only needs
    that one thing from the creds.

    The fact that dump_tasks() holds tasklist_lock is insufficient to prevent
    the target process replacing its credentials on another CPU.

    This patch therefore changes dump_tasks() to call rcu_read_lock()
    explicitly.

    ===================================================
    [ INFO: suspicious rcu_dereference_check() usage. ]
    ---------------------------------------------------
    mm/oom_kill.c:410 invoked rcu_dereference_check() without protection!

    other info that might help us debug this:

    rcu_scheduler_active = 1, debug_locks = 1
    4 locks held by kworker/1:2/651:
    #0: (events){+.+.+.}, at: []
    process_one_work+0x137/0x4a0
    #1: (moom_work){+.+...}, at: []
    process_one_work+0x137/0x4a0
    #2: (tasklist_lock){.+.+..}, at: []
    out_of_memory+0x164/0x3f0
    #3: (&(&p->alloc_lock)->rlock){+.+...}, at: []
    find_lock_task_mm+0x2e/0x70

    Signed-off-by: KOSAKI Motohiro
    Signed-off-by: David Howells
    Acked-by: Paul E. McKenney
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
    Commit 0aad4b3124 ("oom: fold __out_of_memory into out_of_memory")
    introduced a tasklist_lock leak, which caused the following obvious
    danger warnings and a panic.

    ================================================
    [ BUG: lock held when returning to user space! ]
    ------------------------------------------------
    rsyslogd/1422 is leaving the kernel with locks still held!
    1 lock held by rsyslogd/1422:
    #0: (tasklist_lock){.+.+.+}, at: [] out_of_memory+0x164/0x3f0
    BUG: scheduling while atomic: rsyslogd/1422/0x00000002
    INFO: lockdep is turned off.

    This patch fixes it.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
    Commit b940fd7035 ("oom: remove unnecessary code and cleanup") added an
    unnecessary NULL pointer dereference. Remove it.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     

11 Aug, 2010

1 commit

  • When the OOM killer scans tasks, it checks whether a task is under a
    memcg when it's called from a memcg's context.

    But, as Oleg pointed out, a thread group leader may have a NULL ->mm,
    and task_in_mem_cgroup() may make the wrong decision. We have to use
    find_lock_task_mm() in memcg just as the generic OOM killer does.

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Cc: Daisuke Nishimura
    Cc: Balbir Singh
    Reviewed-by: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

10 Aug, 2010

21 commits

  • This is a complete rewrite of the oom killer's badness() heuristic, which is
    used to determine which task to kill in oom conditions. The goal is to
    make it as simple and predictable as possible so the results are better
    understood and we end up killing the task which will lead to the most
    memory freeing while still respecting the fine-tuning from userspace.

    Instead of basing the heuristic on mm->total_vm for each task, the task's
    rss and swap space is used instead. This is a better indication of the
    amount of memory that will be freeable if the oom killed task is chosen
    and subsequently exits. This helps specifically in cases where KDE or
    GNOME is chosen for oom kill on desktop systems instead of a memory
    hogging task.

    The baseline for the heuristic is a proportion of memory that each task is
    currently using in memory plus swap compared to the amount of "allowable"
    memory. "Allowable," in this sense, means the system-wide resources for
    unconstrained oom conditions, the set of mempolicy nodes, the mems
    attached to current's cpuset, or a memory controller's limit. The
    proportion is given on a scale of 0 (never kill) to 1000 (always kill),
    roughly meaning that a task with a badness() score of 500 consumes
    approximately 50% of allowable memory resident in RAM or in swap
    space.

    The proportion is always relative to the amount of "allowable" memory and
    not the total amount of RAM systemwide so that mempolicies and cpusets may
    operate in isolation; they shall not need to know the true size of the
    machine on which they are running if they are bound to a specific set of
    nodes or mems, respectively.

    Root tasks are given 3% extra memory just like __vm_enough_memory()
    provides in LSMs. In the event of two tasks consuming similar amounts of
    memory, it is generally better to save root's task.

    Because of the change in the badness() heuristic's baseline, it is also
    necessary to introduce a new user interface to tune it. It's not possible
    to redefine the meaning of /proc/pid/oom_adj with a new scale since the
    ABI cannot be changed for backward compatibility. Instead, a new tunable,
    /proc/pid/oom_score_adj, is added that ranges from -1000 to +1000. It may
    be used to polarize the heuristic such that certain tasks are never
    considered for oom kill while others may always be considered. The value
    is added directly into the badness() score so a value of -500, for
    example, means to discount 50% of its memory consumption in comparison to
    other tasks either on the system, bound to the mempolicy, in the cpuset,
    or sharing the same memory controller.

    /proc/pid/oom_adj is changed so that its meaning is rescaled into the
    units used by /proc/pid/oom_score_adj, and vice versa. Changing one of
    these per-task tunables will rescale the value of the other to an
    equivalent meaning. Although /proc/pid/oom_adj was originally defined as
    a bitshift on the badness score, it now shares the same linear growth as
    /proc/pid/oom_score_adj but with different granularity. This is required
    so the ABI is not broken with userspace applications and allows oom_adj to
    be deprecated for future removal.

    Signed-off-by: David Rientjes
    Cc: Nick Piggin
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Oleg Nesterov
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
    Oleg pointed out that the current PF_EXITING check is wrong, because
    PF_EXITING is a per-thread flag, not a per-process flag. He said,

    Two threads, group-leader L and its sub-thread T. T dumps the code.
    In this case both threads have ->mm != NULL, L has PF_EXITING.

    The first problem is, select_bad_process() always return -1 in this
    case (even if the caller is T, this doesn't matter).

    The second problem is that we should add TIF_MEMDIE to T, not L.

    I think we can remove this dubious PF_EXITING check, but as a first
    step, this patch adds protection against the multi-threaded issue.

    Signed-off-by: KOSAKI Motohiro
    Cc: Oleg Nesterov
    Cc: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • In a system under heavy load it was observed that even after the
    oom-killer selects a task to die, the task may take a long time to die.

    Right after sending a SIGKILL to the task selected by the oom-killer this
    task has its priority increased so that it can exit() soon, freeing
    memory. That is accomplished by:

        /*
         * We give our sacrificial lamb high priority and access to
         * all the memory it needs. That way it should be able to
         * exit() and clear out its resources quickly...
         */
        p->rt.time_slice = HZ;
        set_tsk_thread_flag(p, TIF_MEMDIE);

    It sounds plausible to give the dying task an even higher priority to be
    sure it will be scheduled sooner and free the desired memory. It was
    suggested on LKML to use SCHED_FIFO:1, the lowest RT priority, so that
    this task won't interfere with any running RT task.

    If the dying task is already an RT task, leave it untouched. Another good
    suggestion, implemented here, was to avoid boosting the dying task
    priority in case of mem_cgroup OOM.

    Signed-off-by: Luis Claudio R. Goncalves
    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Luis Claudio R. Goncalves
     
  • The current "child->mm == p->mm" check prevents selection of a
    vfork()ed task, but we don't have any reason not to consider vfork().

    Remove the check.

    Signed-off-by: KOSAKI Motohiro
    Cc: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • Presently has_intersects_mems_allowed() has its own thread iteration
    logic, but it should use while_each_thread().

    This slightly improves the code's readability.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • Presently, if oom_kill_allocating_task is enabled and current has
    OOM_DISABLE set, the following printk in oom_kill_process() is called
    twice:

        pr_err("%s: Kill process %d (%s) score %lu or sacrifice child\n",
               message, task_pid_nr(p), p->comm, points);

    So the OOM_DISABLE check should happen earlier.

    Signed-off-by: KOSAKI Motohiro
    Cc: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • select_bad_process() and badness() have the same OOM_DISABLE check. This
    patch kills one.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • If a kernel thread is using use_mm(), badness() returns a positive value.
    This is not a big issue because the callers take care of it correctly.
    But there is one exception: /proc/pid/oom_score calls badness() directly
    and doesn't care whether the task is a regular process.

    Another example: /proc/1/oom_score returns a nonzero value, but init is
    unkillable. This incorrectness makes administration a little confusing.

    This patch fixes it.

    Signed-off-by: KOSAKI Motohiro
    Cc: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • When oom_kill_allocating_task is enabled, the task passed to
    oom_kill_process() is not selected by select_bad_process(); it's just
    the out_of_memory() caller task. That means the task can be unkillable,
    so check it first.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • Presently we have the same task check in two places. Unify it.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • Presently select_bad_process() has a PF_KTHREAD check, but
    oom_kill_process() doesn't. This means oom_kill_process() may choose the
    wrong task, especially when the children are using use_mm().

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • Presently, badness() cares about neither cpusets nor mempolicies. So if
    the victim's child process has a disjoint nodemask, the OOM killer might
    kill an innocent process.

    This patch fixes it.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • __out_of_memory() only has a single caller, so fold it into
    out_of_memory() and add a comment about locking for its call to
    oom_kill_process().

    Signed-off-by: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Acked-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • select_bad_process() and __out_of_memory() do not need their enum
    oom_constraint arguments: it's possible to pass a NULL nodemask if
    constraint == CONSTRAINT_MEMORY_POLICY in the caller, out_of_memory().

    Signed-off-by: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Acked-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • We have been using the names try_set_zone_oom and clear_zonelist_oom.
    These functions lock the zonelist to prevent parallel OOM kills. So
    clear_zonelist_oom makes sense, but try_set_zone_oom is rather awkward
    and doesn't match clear_zonelist_oom.

    Let's rename it to try_set_zonelist_oom.

    Signed-off-by: Minchan Kim
    Acked-by: David Rientjes
    Reviewed-by: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
  • Remove the redundancy in __oom_kill_task() since:

    - init can never be passed to this function: it will never be PF_EXITING
    or selectable from select_bad_process(), and

    - it will never be passed a task from oom_kill_task() without an ->mm,
    and since we're unconcerned about detachment from exiting tasks, there's
    no reason to protect them against SIGKILL or access to memory reserves.

    This patch also moves the kernel log message to a higher level since the
    verbosity is not always emitted here; we need not print an error message
    if an exiting task is given a longer timeslice.

    __oom_kill_task() only has a single caller, so it can be merged into that
    function at the same time.

    Signed-off-by: David Rientjes
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • It is possible to remove the special pagefault oom handler by simply oom
    locking all system zones and then calling directly into out_of_memory().

    All populated zones must have ZONE_OOM_LOCKED set, otherwise there is a
    parallel oom killing in progress that will lead to eventual memory freeing
    so it's not necessary to needlessly kill another task. The context in
    which the pagefault is allocating memory is unknown to the oom killer, so
    this is done on a system-wide level.

    If a task has already been oom killed and hasn't fully exited yet, this
    will be a no-op since select_bad_process() recognizes tasks across the
    system with TIF_MEMDIE set.

    Signed-off-by: David Rientjes
    Acked-by: Nick Piggin
    Acked-by: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • There are various points in the oom killer where the kernel must determine
    whether to panic or not. It's better to extract this to a helper function
    to remove all the confusion as to its semantics.

    Also fix a call to dump_header() where tasklist_lock is not read-locked,
    as required.

    There's no functional change with this patch.

    Acked-by: KOSAKI Motohiro
    Signed-off-by: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The oom killer tasklist dump, enabled with the oom_dump_tasks sysctl, is
    very helpful information in diagnosing why a user's task has been killed.
    It emits useful information such as each eligible thread's memory usage
    that can determine why the system is oom, so it should be enabled by
    default.

    Signed-off-by: David Rientjes
    Acked-by: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The oom killer presently kills current whenever there is no more memory
    free or reclaimable on its mempolicy's nodes. There is no guarantee that
    current is a memory-hogging task or that killing it will free any
    substantial amount of memory, however.

    In such situations, it is better to scan the tasklist for tasks that are
    allowed to allocate on current's set of nodes and kill the task with the
    highest badness() score. This ensures that the most memory-hogging task,
    or the one configured by the user with /proc/pid/oom_adj, is always
    selected in such scenarios.

    Signed-off-by: David Rientjes
    Reviewed-by: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • When a task is chosen for oom kill, the oom killer first attempts to
    sacrifice a child not sharing its parent's memory instead. Unfortunately,
    this often kills in a seemingly random fashion based on the ordering of
    the selected task's child list. Additionally, it is not guaranteed at all
    to free a large amount of memory that we need to prevent additional oom
    killing in the very near future.

    Instead, we now only attempt to sacrifice the worst child not sharing its
    parent's memory, if one exists. The worst child is indicated with the
    highest badness() score. This serves two advantages: we kill a
    memory-hogging task more often, and we allow the configurable
    /proc/pid/oom_adj value to be considered as a factor in which child to
    kill.

    Reviewers may observe that the previous implementation would iterate
    through the children and attempt to kill each until one was successful and
    then the parent if none were found while the new code simply kills the
    most memory-hogging task or the parent. Note that the only time
    oom_kill_task() fails, however, is when a child does not have an mm or has
    a /proc/pid/oom_adj of OOM_DISABLE. badness() returns 0 for both cases,
    so the final oom_kill_task() will always succeed.

    Signed-off-by: David Rientjes
    Acked-by: Rik van Riel
    Acked-by: Nick Piggin
    Acked-by: Balbir Singh
    Cc: KOSAKI Motohiro
    Reviewed-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes