10 Aug, 2010

40 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (59 commits)
    igbvf.txt: Add igbvf Documentation
    igb.txt: Add igb documentation
    e100/e1000*/igb*/ixgb*: Add missing read memory barrier
    ixgbe: fix build error with FCOE_CONFIG without DCB_CONFIG
    netxen: protect tx timeout recovery by rtnl lock
    isdn: gigaset: use after free
    isdn: gigaset: add missing unlock
    solos-pci: Fix race condition in tasklet RX handling
    pkt_sched: Fix sch_sfq vs tcf_bind_filter oops
    net: disable preemption before call smp_processor_id()
    tcp: no md5sig option size check bug
    iwlwifi: fix locking assertions
    iwlwifi: fix TX tracer
    isdn: fix information leak
    net: Fix napi_gro_frags vs netpoll path
    usbnet: remove noisy and hardly useful printk
    rtl8180: avoid potential NULL deref in rtl8180_beacon_work
    ath9k: Remove myself from the MAINTAINERS list
    libertas: scan before association if no BSSID was given
    libertas: fix association with some APs by using extended rates
    ...

    Linus Torvalds
     
  • * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx: (30 commits)
    DMAENGINE: at_hdmac: locking fixlet
    DMAENGINE: pch_dma: kill another usage of __raw_{read|write}l
    dma: dmatest: fix potential sign bug
    ioat2: catch and recover from broken vtd configurations v6
    DMAENGINE: add runtime slave control to COH 901 318 v3
    DMAENGINE: add runtime slave config to DMA40 v3
    DMAENGINE: generic slave channel control v3
    dmaengine: Driver for Topcliff PCH DMA controller
    intel_mid: Add Mrst & Mfld DMA Drivers
    drivers/dma: Eliminate a NULL pointer dereference
    dma/timb_dma: compile warning on 32 bit
    DMAENGINE: ste_dma40: support older silicon
    DMAENGINE: ste_dma40: support disabling physical channels
    DMAENGINE: ste_dma40: no disabled phy channels on ux500
    DMAENGINE: ste_dma40: fix suspend bug
    DMAENGINE: ste_dma40: add DB8500 memcpy channels
    DMAENGINE: ste_dma40: no flow control on memcpy
    DMAENGINE: ste_dma40: arch updates for LCLA and LCPA
    DMAENGINE: ste_dma40: allocate LCLA dynamically
    DMAENGINE: ste_dma40: no premature stop
    ...

    Fix up trivial conflicts in arch/arm/mach-ux500/devices-db8500.c

    Linus Torvalds
     
  • Getting and putting arrays of pointers with flex arrays is a PITA. You
    have to remember to pass &ptr to the _put and you have to do weird and
    wacky casting to get the ptr back from the _get. Add two functions
    flex_array_get_ptr() and flex_array_put_ptr() to handle all of the magic.
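
    The two new helpers are thin wrappers; a minimal sketch of what they
    do (essentially what the description above implies):

        /* _put_ptr: hide the &ptr -- the array stores the pointer value */
        int flex_array_put_ptr(struct flex_array *fa, unsigned int element_nr,
                               void *src, gfp_t flags)
        {
                return flex_array_put(fa, element_nr, &src, flags);
        }

        /* _get_ptr: hide the cast -- dereference the slot holding the pointer */
        void *flex_array_get_ptr(struct flex_array *fa, unsigned int element_nr)
        {
                void **tmp = flex_array_get(fa, element_nr);

                return tmp ? *tmp : NULL;
        }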

    [akpm@linux-foundation.org: simplification suggested by Joe]
    Signed-off-by: Eric Paris
    Cc: David Rientjes
    Cc: Dave Hansen
    Cc: Joe Perches
    Cc: James Morris
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Paris
     
  • A profile of a network benchmark showed iommu_num_pages rather high up:

    0.52% iommu_num_pages

    Looking at the profile, an integer divide is taking almost all of the time:

    %
    : c000000000376ea4 <.iommu_num_pages>:
    1.93 : c000000000376ea4: fb e1 ff f8 std r31,-8(r1)
    0.00 : c000000000376ea8: f8 21 ff c1 stdu r1,-64(r1)
    0.00 : c000000000376eac: 7c 3f 0b 78 mr r31,r1
    3.86 : c000000000376eb0: 38 84 ff ff addi r4,r4,-1
    0.00 : c000000000376eb4: 38 05 ff ff addi r0,r5,-1
    0.00 : c000000000376eb8: 7c 84 2a 14 add r4,r4,r5
    46.95 : c000000000376ebc: 7c 00 18 38 and r0,r0,r3
    45.66 : c000000000376ec0: 7c 84 02 14 add r4,r4,r0
    0.00 : c000000000376ec4: 7c 64 2b 92 divdu r3,r4,r5
    0.00 : c000000000376ec8: 38 3f 00 40 addi r1,r31,64
    0.00 : c000000000376ecc: eb e1 ff f8 ld r31,-8(r1)
    1.61 : c000000000376ed0: 4e 80 00 20 blr

    Since every caller of iommu_num_pages passes in a constant power of two
    we can inline this such that the divide is replaced by a shift. The
    entire function is only a few instructions once optimised, so it is
    a good candidate for inlining overall.
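
    A sketch of the inlined helper (assuming its usual definition; with
    io_page_size a constant power of two, the compiler reduces the
    DIV_ROUND_UP divide to a shift):

        static inline unsigned long iommu_num_pages(unsigned long addr,
                                                    unsigned long len,
                                                    unsigned long io_page_size)
        {
                unsigned long size = (addr & (io_page_size - 1)) + len;

                return DIV_ROUND_UP(size, io_page_size);
        }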

    Signed-off-by: Anton Blanchard
    Cc: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anton Blanchard
     
  • There are no more uses of NIPQUAD or NIPQUAD_FMT. Remove the definitions.

    Signed-off-by: Joe Perches
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • We should use the __same_type() helper in __must_be_array().
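
    For reference, a sketch of the macros involved (per the kernel's
    compiler headers):

        #define __same_type(a, b) \
                __builtin_types_compatible_p(typeof(a), typeof(b))

        /* &a[0] degrades to a pointer: a different type from an array */
        #define __must_be_array(a) \
                BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))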

    Signed-off-by: Rusty Russell
    Reported-by: Andrew Morton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rusty Russell
     
    asm-generic/iomap.h provides these functions already, but the
    non-generic fallback defines do not.

    Signed-off-by: Mike Frysinger
    Cc: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Frysinger
     
  • On some SoC chips, HW resources may be in use during any particular idle
    period. As a consequence, the cpuidle states that the SoC is safe to
    enter can change from idle period to idle period. In addition, the
    latency and threshold of each cpuidle state can vary, depending on the
    operating condition when the CPU becomes idle, e.g. the current cpu
    frequency, the current state of the HW blocks, etc.

    cpuidle core and the menu governor, in the current form, are geared
    towards cpuidle states that are static, i.e. the availability of the
    states, their latencies, and their thresholds do not change during run
    time. cpuidle does not provide any hook that cpuidle drivers can use to
    adjust those values on the fly for the current idle period before the menu
    governor selects the target cpuidle state.

    This patch extends cpuidle core and the menu governor to handle states
    that are dynamic. There are three additions in the patch and the patch
    maintains backwards-compatibility with existing cpuidle drivers.

    1) add prepare() to struct cpuidle_device. A cpuidle driver can hook
    into the callback and cpuidle will call prepare() before calling the
    governor's select function. The callback gives the cpuidle driver a
    chance to update the dynamic information of the cpuidle states for the
    current idle period, e.g. state availability, latencies, thresholds,
    power values, etc.

    2) add CPUIDLE_FLAG_IGNORE as one of the state flags. In the prepare()
    function, a cpuidle driver can set/clear the flag to indicate to the
    menu governor whether a cpuidle state should be ignored, i.e. not
    available, during the current idle period.

    3) add power_specified bit to struct cpuidle_device. The menu governor
    currently assumes that the cpuidle states are arranged in the order of
    increasing latency, threshold, and power savings. This is true or can
    be made true for static states. Once the state parameters are dynamic,
    the latencies, thresholds, and power savings for the cpuidle states can
    increase or decrease by different amounts from idle period to idle
    period. So the assumption of increasing latency, threshold, and power
    savings from Cn to C(n+1) can no longer be guaranteed.

    It can be straightforward to calculate the power consumption of each
    available state and to specify it in power_usage for the idle period.
    Using the power_usage fields, the menu governor then selects the state
    that has the lowest power consumption and that still satisfies all other
    criteria. The power_specified bit defaults to 0. For existing cpuidle
    drivers, cpuidle detects that power_specified is 0 and fills in a dummy
    set of power_usage values.
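
    A minimal sketch of how a driver might use the new hooks (the driver
    function and the hardware query below are hypothetical):

        static int my_soc_prepare(struct cpuidle_device *dev)
        {
                struct cpuidle_state *deep = &dev->states[2];

                /* hide the deepest state while a HW block is in use */
                if (my_soc_hw_block_busy())             /* hypothetical query */
                        deep->flags |= CPUIDLE_FLAG_IGNORE;
                else
                        deep->flags &= ~CPUIDLE_FLAG_IGNORE;
                return 0;
        }

        /* at device registration time */
        dev->prepare = my_soc_prepare;
        dev->power_specified = 1;       /* we fill in per-state power_usage */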

    Signed-off-by: Ai Li
    Cc: Len Brown
    Acked-by: Arjan van de Ven
    Cc: Ingo Molnar
    Cc: Venkatesh Pallipadi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ai Li
     
  • When taking a memory snapshot in hibernate_snapshot(), all (directly
    called) memory allocations use GFP_ATOMIC. Hence swap misusage during
    hibernation never occurs.

    But from a pessimistic point of view, there is no guarantee that no page
    allocation has __GFP_WAIT. It is better to have a global indication "we
    are entering hibernation, don't use swap!".

    This patch tries to freeze new swap allocation during hibernation. (All
    user processes are frozen, so swapin is not a concern.)

    This way, no updates will happen to swap_map[] between
    hibernate_snapshot() and save_image(). Swap is thawed when swsusp_free()
    is called. We can be assured that swap corruption will not occur.
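
    Conceptually (a sketch only; the flag name and hook points below are
    illustrative, not the patch's actual identifiers):

        static bool swap_frozen;        /* "we enter hibernation, don't use swap!" */

        /* hibernate_snapshot(): freeze before taking the snapshot */
        swap_frozen = true;

        /* in the swap allocator (get_swap_page), before allocating: */
        if (swap_frozen)
                return (swp_entry_t) {0};       /* refuse new swap entries */

        /* swsusp_free(): thaw once the image has been saved */
        swap_frozen = false;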

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: "Rafael J. Wysocki"
    Cc: Hugh Dickins
    Cc: KOSAKI Motohiro
    Cc: Ondrej Zary
    Cc: Balbir Singh
    Cc: Andrea Arcangeli
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
    Memcg also needs to trace page isolation information, as global reclaim
    does. This patch does that.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: KAMEZAWA Hiroyuki
    Acked-by: Mel Gorman
    Acked-by: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
    Mel Gorman recently added some vmscan tracepoints. Unfortunately, they
    cover only global reclaim, but we want to trace memcg reclaim too.

    Thus, this patch converts them to the DEFINE_TRACE macro, which helps
    reuse the tracepoint definition for other similar usage (i.e. memcg).
    This patch has no functional change.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: KAMEZAWA Hiroyuki
    Acked-by: Balbir Singh
    Acked-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
    Memcg also needs to trace reclaim progress, as direct reclaim does.
    This patch adds it.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: KAMEZAWA Hiroyuki
    Acked-by: Mel Gorman
    Acked-by: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
    Mel Gorman recently added some vmscan tracepoints. Unfortunately, they
    cover only global reclaim, but we want to trace memcg reclaim too.

    Thus, this patch converts them to the DEFINE_TRACE macro, which helps
    reuse the tracepoint definition for other similar usage (i.e. memcg).
    This patch has no functional change.

    Signed-off-by: KOSAKI Motohiro
    Reviewed-by: KAMEZAWA Hiroyuki
    Acked-by: Mel Gorman
    Acked-by: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • On swapin it is fairly common for a page to be owned exclusively by one
    process. In that case we want to add the page to the anon_vma of that
    process's VMA, instead of to the root anon_vma.

    This will reduce the amount of rmap searching that the swapout code needs
    to do.

    Signed-off-by: Rik van Riel
    Cc: Andrea Arcangeli
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
    /proc/pid/oom_adj is now deprecated so that it may eventually be
    removed. The target date for removal is August 2012.

    A warning will be printed to the kernel log if a task attempts to use
    this interface. Future warnings will be suppressed until the kernel is
    rebooted to prevent spamming the kernel log.

    Signed-off-by: David Rientjes
    Cc: Nick Piggin
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Oleg Nesterov
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
    This is a complete rewrite of the oom killer's badness() heuristic which is
    used to determine which task to kill in oom conditions. The goal is to
    make it as simple and predictable as possible so the results are better
    understood and we end up killing the task which will lead to the most
    memory freeing while still respecting the fine-tuning from userspace.

    Instead of basing the heuristic on mm->total_vm for each task, the task's
    rss and swap space are used. This is a better indication of the
    amount of memory that will be freeable if the oom killed task is chosen
    and subsequently exits. This helps specifically in cases where KDE or
    GNOME is chosen for oom kill on desktop systems instead of a memory
    hogging task.

    The baseline for the heuristic is a proportion of memory that each task is
    currently using in memory plus swap compared to the amount of "allowable"
    memory. "Allowable," in this sense, means the system-wide resources for
    unconstrained oom conditions, the set of mempolicy nodes, the mems
    attached to current's cpuset, or a memory controller's limit. The
    proportion is given on a scale of 0 (never kill) to 1000 (always kill),
    roughly meaning that if a task has a badness() score of 500 that the task
    consumes approximately 50% of allowable memory resident in RAM or in swap
    space.

    The proportion is always relative to the amount of "allowable" memory and
    not the total amount of RAM systemwide so that mempolicies and cpusets may
    operate in isolation; they shall not need to know the true size of the
    machine on which they are running if they are bound to a specific set of
    nodes or mems, respectively.

    Root tasks are given 3% extra memory just like __vm_enough_memory()
    provides in LSMs. In the event of two tasks consuming similar amounts of
    memory, it is generally better to save root's task.

    Because of the change in the badness() heuristic's baseline, it is also
    necessary to introduce a new user interface to tune it. It's not possible
    to redefine the meaning of /proc/pid/oom_adj with a new scale since the
    ABI cannot be changed for backward compatibility. Instead, a new tunable,
    /proc/pid/oom_score_adj, is added that ranges from -1000 to +1000. It may
    be used to polarize the heuristic such that certain tasks are never
    considered for oom kill while others may always be considered. The value
    is added directly into the badness() score so a value of -500, for
    example, means to discount 50% of its memory consumption in comparison to
    other tasks either on the system, bound to the mempolicy, in the cpuset,
    or sharing the same memory controller.

    /proc/pid/oom_adj is changed so that its meaning is rescaled into the
    units used by /proc/pid/oom_score_adj, and vice versa. Changing one of
    these per-task tunables will rescale the value of the other to an
    equivalent meaning. Although /proc/pid/oom_adj was originally defined as
    a bitshift on the badness score, it now shares the same linear growth as
    /proc/pid/oom_score_adj but with different granularity. This is required
    so the ABI is not broken with userspace applications and allows oom_adj to
    be deprecated for future removal.
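
    In rough arithmetic, a simplified sketch of the rescaled heuristic
    (not the exact kernel code):

        /* proportion of allowable memory used, on a 0..1000 scale */
        points = (get_mm_rss(p->mm) + get_mm_counter(p->mm, MM_SWAPENTS))
                        * 1000 / totalpages;

        /* root tasks get a bonus, akin to __vm_enough_memory()'s 3% */
        if (has_capability_noaudit(p, CAP_SYS_ADMIN))
                points -= 30;

        /* userspace polarization: -1000 (never kill) .. +1000 (always kill) */
        points += p->signal->oom_score_adj;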

    Signed-off-by: David Rientjes
    Cc: Nick Piggin
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Oleg Nesterov
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Cc: Minchan Kim
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
    Since 2.6.28 zone->prev_priority has been unused, so it can be removed
    safely. Removing it reduces stack usage slightly.

    Now I have to say that I'm sorry. Two years ago I thought prev_priority
    could be integrated again and would be useful, but four (or more)
    attempts have not produced good performance numbers, so I have given up
    on that approach.

    The rest of this changelog consists of notes on prev_priority: why it
    existed in the first place and why it may no longer be necessary. This
    information is based heavily on discussions between Andrew Morton, Rik
    van Riel and Kosaki Motohiro, who is quoted extensively below.

    Historically prev_priority was important because it determined when the
    VM would start unmapping PTE pages (i.e. the balances of note within the
    VM: Anon vs File and Mapped vs Unmapped). Without prev_priority, there
    is a potential risk of unnecessarily increasing minor faults, as a large
    amount of read activity on use-once pages could push mapped pages to the
    end of the LRU, where they get unmapped.

    There is no proof that this is still a problem, and currently it is not
    considered to be one. Active file pages are not deactivated if the
    active file list is smaller than the inactive list, reducing the
    likelihood that file-mapped pages are pushed off the LRU, and referenced
    executable pages are kept on the active list to avoid them being pushed
    out by read activity.

    Even if it is a problem, prev_priority wouldn't work nowadays. First of
    all, current vmscan still contains a lot of UP-centric code, which
    exposes weaknesses on machines with dozens of CPUs. I think we need more
    and more improvement.

    The problem is that current vmscan mixes up per-system pressure,
    per-zone pressure and per-task pressure a bit: for example,
    prev_priority tries to boost priority to match other concurrent
    reclaimers' priorities, but if another task has a mempolicy restriction
    that is unnecessary, and it also causes wrongly large latency and
    excessive reclaim. Per-task priority plus the prev_priority adjustment
    emulates per-system pressure, but that has two issues: 1) the emulation
    is too rough and brutal, and 2) we need per-zone pressure, not
    per-system pressure.

    Another example: currently DEF_PRIORITY is 12, which means the LRU is
    rotated about 2 cycles (1/4096 + 1/2048 + 1/1024 + ... + 1) before
    invoking the OOM killer. But if 10,000 threads enter DEF_PRIORITY
    reclaim at the same time, the system is under higher memory pressure
    than priority==0 (1/4096 * 10,000 > 2). prev_priority can't solve such
    multithreaded workload issues. In other words, the prev_priority
    concept assumes the system doesn't have lots of threads.

    Signed-off-by: KOSAKI Motohiro
    Signed-off-by: Mel Gorman
    Reviewed-by: Johannes Weiner
    Reviewed-by: Rik van Riel
    Cc: Dave Chinner
    Cc: Chris Mason
    Cc: Nick Piggin
    Cc: Rik van Riel
    Cc: Johannes Weiner
    Cc: Christoph Hellwig
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Andrea Arcangeli
    Cc: Michael Rubin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • Add a trace event for when page reclaim queues a page for IO and records
    whether it is synchronous or asynchronous. Excessive synchronous IO for a
    process can result in noticeable stalls during direct reclaim. Excessive
    IO from page reclaim may indicate that the system is seriously
    under-provisioned for the amount of dirty pages that exist.

    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Acked-by: Larry Woodman
    Cc: Dave Chinner
    Cc: Chris Mason
    Cc: Nick Piggin
    Cc: Rik van Riel
    Cc: Johannes Weiner
    Cc: Christoph Hellwig
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Andrea Arcangeli
    Cc: Michael Rubin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Add an event for when pages are isolated en-masse from the LRU lists.
    This event augments the information available on LRU traffic and can be
    used to evaluate lumpy reclaim.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Acked-by: Larry Woodman
    Cc: Dave Chinner
    Cc: Chris Mason
    Cc: Nick Piggin
    Cc: Rik van Riel
    Cc: Johannes Weiner
    Cc: Christoph Hellwig
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Andrea Arcangeli
    Cc: Michael Rubin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    Add two trace events for kswapd waking up and going to sleep, for the
    purposes of tracking kswapd activity, and two trace events for direct
    reclaim beginning and ending. The information can be used to work out how
    much time a process or the system is spending on the reclamation of pages
    and in the case of direct reclaim, how many pages were reclaimed for that
    process. High frequency triggering of these events could point to memory
    pressure problems.

    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Acked-by: Larry Woodman
    Cc: Dave Chinner
    Cc: Chris Mason
    Cc: Nick Piggin
    Cc: Rik van Riel
    Cc: Johannes Weiner
    Cc: Christoph Hellwig
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Andrea Arcangeli
    Cc: Michael Rubin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    We try to avoid livelocks of writeback when someone steadily creates
    dirty pages in a mapping we are writing out. For memory-cleaning writeback,
    using nr_to_write works reasonably well but we cannot really use it for
    data integrity writeback. This patch tries to solve the problem.

    The idea is simple: Tag all pages that should be written back with a
    special tag (TOWRITE) in the radix tree. This can be done rather quickly
    and thus livelocks should not happen in practice. Then we start doing the
    hard work of locking pages and sending them to disk only for those pages
    that have TOWRITE tag set.

    Note: Adding a new radix tree tag grows the radix tree node from 288 to
    296 bytes for 32-bit archs and from 552 to 560 bytes for 64-bit archs.
    However, the number of slab/slub items per page remains the same (13 and 7
    respectively).
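
    A sketch of the resulting two-phase loop in write_cache_pages()
    (helper and tag names as introduced by this series; simplified):

        int tag = (wbc->sync_mode == WB_SYNC_ALL) ?
                        PAGECACHE_TAG_TOWRITE : PAGECACHE_TAG_DIRTY;

        if (wbc->sync_mode == WB_SYNC_ALL)
                /* phase 1: quickly tag all currently-dirty pages */
                tag_pages_for_writeback(mapping, index, end);

        /* phase 2: lock and write only pages carrying the tag; pages
         * dirtied after phase 1 are not TOWRITE-tagged, so a steady
         * dirtier can no longer livelock data integrity writeback */
        while ((nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
                                              tag, PAGEVEC_SIZE))) {
                /* ... lock each page and send it to ->writepage() ... */
        }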

    Signed-off-by: Jan Kara
    Cc: Dave Chinner
    Cc: Nick Piggin
    Cc: Chris Mason
    Cc: Theodore Ts'o
    Cc: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Kara
     
    Implement a function that, for each item in a given range, sets one tag
    if another tag is set.
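
    This is the radix tree primitive behind the writeback change above;
    its use from the page cache looks roughly like this (names per this
    series; the batch size is a detail):

        spin_lock_irq(&mapping->tree_lock);
        tagged = radix_tree_range_tag_if_tagged(&mapping->page_tree,
                        &start, end, WRITEBACK_TAG_BATCH,
                        PAGECACHE_TAG_DIRTY, PAGECACHE_TAG_TOWRITE);
        spin_unlock_irq(&mapping->tree_lock);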

    Signed-off-by: Jan Kara
    Cc: Dave Chinner
    Cc: Nick Piggin
    Cc: Chris Mason
    Cc: Theodore Ts'o
    Cc: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Kara
     
    The new anon-vma code was suboptimal and led to erratic invocation of
    ksm_does_need_to_copy. That leads to host hangs or guest vnc lockup, or
    weird behavior. It's unclear why ksm_does_need_to_copy is unstable, but
    the point is that when KSM is not in use, ksm_does_need_to_copy must never
    run or we bounce pages for no good reason. I suspect the same hangs will
    happen with KVM swaps. But this at least fixes the regression in the
    new-anon-vma code and only lets KSM bugs trigger when KSM is in use.

    The code in do_swap_page likely doesn't cope well with a page that is
    not in the swapcache, especially the memcg code.

    Signed-off-by: Andrea Arcangeli
    Signed-off-by: Rik van Riel
    Cc: Izik Eidus
    Cc: Avi Kivity
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     
  • The current implementation of tmpfs is not scalable. We found that
    stat_lock is contended by multiple threads when we need to get a new page,
    leading to useless spinning inside this spin lock.

    This patch makes use of the percpu_counter library to maintain local count
    of used blocks to speed up getting and returning of pages. So the
    acquisition of stat_lock is unnecessary for getting and returning blocks,
    improving the performance of tmpfs on systems with a large number of cpus.
    On a 4-socket, 32-core NHM-EX system, we saw an improvement of 270%.

    The implementation below has a slight chance of race between threads
    causing a slight overshoot of the maximum configured blocks. However, any
    overshoot is small, and is bounded by the number of cpus. This happens
    when the number of used blocks is slightly below the maximum configured
    blocks when a thread checks the used block count, and another thread
    allocates the last block before the current thread does. This should not
    be a problem for tmpfs, as the overshoot is most likely to be a few blocks
    and bounded. If a strict limit is really desired, then configure the max
    blocks to be the limit less the number of cpus in the system.
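
    In outline (a sketch; the field names are illustrative):

        /* in shmem's superblock info, replacing spinlock-guarded counts */
        struct percpu_counter used_blocks;

        /* allocating a block: a cheap check against the configured max */
        if (percpu_counter_compare(&sbinfo->used_blocks,
                                   sbinfo->max_blocks) >= 0)
                return -ENOSPC;         /* limit (approximately) reached */
        percpu_counter_inc(&sbinfo->used_blocks);

        /* returning a block: */
        percpu_counter_dec(&sbinfo->used_blocks);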

    Signed-off-by: Tim Chen
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Chen
     
  • Add percpu_counter_compare that allows for a quick but accurate comparison
    of percpu_counter with a given value.

    A rough count is provided by the count field in percpu_counter structure,
    without accounting for the other values stored in individual cpu counters.

    The actual count is the sum of count and the cpu counters. However, the
    count field never differs from the actual value by more than
    batch*num_online_cpus(). We do not need to get the actual count for a
    comparison if count differs from the given value by more than this
    margin, which allows for quick comparison without summing up all the
    per cpu counters.
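
    A sketch of the comparison, consistent with the logic described above:

        int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
        {
                s64 count = percpu_counter_read(fbc);

                /* the rough count suffices if it is far enough from rhs */
                if (abs(count - rhs) > percpu_counter_batch * num_online_cpus())
                        return count > rhs ? 1 : -1;

                /* too close to call: fall back to the precise sum */
                count = percpu_counter_sum(fbc);
                if (count > rhs)
                        return 1;
                if (count < rhs)
                        return -1;
                return 0;
        }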

    Signed-off-by: Tim Chen
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Chen
     
  • No real bugs, just some dead code and some fixups.

    Signed-off-by: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
    Avoid quite a lot of warnings in header files in gcc 4.6 -Wall builds.

    Signed-off-by: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
    Define stubs for the numa_*_id() generic percpu related functions for
    non-NUMA configurations in <linux/topology.h>, where the other
    non-NUMA stubs live.

    Fixes ia64 !NUMA build breakage -- e.g., tiger_defconfig.

    Back out the now-unneeded '#ifndef CONFIG_NUMA' guards from ia64 smpboot.c.
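
    The stubs themselves are trivial; roughly (a sketch of the non-NUMA
    case, not the exact header contents):

        #ifndef CONFIG_NUMA
        static inline int numa_node_id(void)
        {
                return 0;       /* everything lives on node 0 */
        }
        static inline int numa_mem_id(void)
        {
                return numa_node_id();
        }
        #endif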

    Signed-off-by: Lee Schermerhorn
    Tested-by: Tony Luck
    Acked-by: Tony Luck
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • get_zone_counts() was dropped from kernel tree, see:
    http://www.mail-archive.com/mm-commits@vger.kernel.org/msg07313.html but
    its prototype remains.

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Nevenchannyy
     
    We have been using the names try_set_zone_oom and clear_zonelist_oom.
    The role of these functions is to lock the zonelist to prevent parallel
    OOM handling. So clear_zonelist_oom makes sense, but try_set_zone_oom is
    rather awkward and doesn't match clear_zonelist_oom.

    Let's rename it to try_set_zonelist_oom.

    Signed-off-by: Minchan Kim
    Acked-by: David Rientjes
    Reviewed-by: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
  • The three oom killer sysctl variables (sysctl_oom_dump_tasks,
    sysctl_oom_kill_allocating_task, and sysctl_panic_on_oom) are better
    declared in include/linux/oom.h rather than kernel/sysctl.c.

    Signed-off-by: David Rientjes
    Acked-by: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • There are various points in the oom killer where the kernel must determine
    whether to panic or not. It's better to extract this to a helper function
    to remove all the confusion as to its semantics.

    Also fix a call to dump_header() where tasklist_lock is not read-locked,
    as required.

    There's no functional change with this patch.
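
    The extracted helper looks roughly like this (a sketch consistent
    with the description; the exact signature is hedged):

        static void check_panic_on_oom(enum oom_constraint constraint,
                                       gfp_t gfp_mask, int order)
        {
                if (likely(!sysctl_panic_on_oom))
                        return;
                if (sysctl_panic_on_oom != 2) {
                        /* panic only for system-wide OOMs unless == 2 */
                        if (constraint != CONSTRAINT_NONE)
                                return;
                }
                read_lock(&tasklist_lock);      /* dump_header() needs it */
                dump_header(NULL, gfp_mask, order, NULL);
                read_unlock(&tasklist_lock);
                panic("Out of memory: %s panic_on_oom is enabled\n",
                      sysctl_panic_on_oom == 2 ? "compulsory" : "system-wide");
        }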

    Acked-by: KOSAKI Motohiro
    Signed-off-by: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The oom killer presently kills current whenever there is no more memory
    free or reclaimable on its mempolicy's nodes. There is no guarantee that
    current is a memory-hogging task or that killing it will free any
    substantial amount of memory, however.

    In such situations, it is better to scan the tasklist for tasks that are
    allowed to allocate on current's set of nodes and kill the task with the
    highest badness() score. This ensures that the most memory-hogging task,
    or the one configured by the user with /proc/pid/oom_adj, is always
    selected in such scenarios.

    Signed-off-by: David Rientjes
    Reviewed-by: KOSAKI Motohiro
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
    The comment suggests that when b_count equals zero it is calling
    __wait_on_buffer to trigger some debug, but as there is no debug in
    __wait_on_buffer the whole thing is redundant.

    AFAICT from the git log this has been the case for at least 5 years, so
    it seems safe just to remove this.

    Signed-off-by: Richard Kennedy
    Cc: Nick Piggin
    Cc: Jens Axboe
    Cc: Jeff Mahoney
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Richard Kennedy
     
    KSM reference counts can cause an anon_vma to exist after the process it
    belongs to has already exited. Because the anon_vma lock now lives in
    the root anon_vma, we need to ensure that the root anon_vma stays around
    until after all the "child" anon_vmas have been freed.

    The obvious way to do this is to have a "child" anon_vma take a reference
    to the root in anon_vma_fork. When the anon_vma is freed at munmap or
    process exit, we drop the refcount in anon_vma_unlink and possibly free
    the root anon_vma.

    The KSM anon_vma reference count function also needs to be modified to
    deal with the possibility of freeing 2 levels of anon_vma. The easiest
    way to do this is to break out the KSM magic and make it generic.

    When compiling without CONFIG_KSM, this code is compiled out.
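
    In outline, a hedged sketch of the refcounting scheme described above:

        /* a child anon_vma pins the root at fork time */
        static inline void get_anon_vma(struct anon_vma *anon_vma)
        {
                atomic_inc(&anon_vma->external_refcount);
        }

        /* dropping the last reference may free the root as well */
        void drop_anon_vma(struct anon_vma *anon_vma)
        {
                struct anon_vma *root = anon_vma->root;

                if (atomic_dec_and_lock(&anon_vma->external_refcount,
                                        &root->lock)) {
                        /* last reference: unlock, then free anon_vma
                         * (and the root itself once it is unused) */
                        spin_unlock(&root->lock);
                        /* ... freeing elided ... */
                }
        }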

    Signed-off-by: Rik van Riel
    Tested-by: Larry Woodman
    Acked-by: Larry Woodman
    Reviewed-by: Minchan Kim
    Cc: KAMEZAWA Hiroyuki
    Acked-by: Mel Gorman
    Acked-by: Linus Torvalds
    Tested-by: Dave Young
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
  • Always (and only) lock the root (oldest) anon_vma whenever we do something
    in an anon_vma. The recently introduced anon_vma scalability is due to
    the rmap code scanning only the VMAs that need to be scanned. Many common
    operations still took the anon_vma lock on the root anon_vma, so always
    taking that lock is not expected to introduce any scalability issues.

    However, always taking the same lock does mean we only need to take one
    lock, which means rmap_walk on pages from any anon_vma in the vma is
    excluded from occurring during an munmap, expand_stack or other operation
    that needs to exclude rmap_walk and similar functions.

    Also add the proper locking to vma_adjust.

    Signed-off-by: Rik van Riel
    Tested-by: Larry Woodman
    Acked-by: Larry Woodman
    Reviewed-by: Minchan Kim
    Reviewed-by: KAMEZAWA Hiroyuki
    Acked-by: Mel Gorman
    Acked-by: Linus Torvalds
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
  • Track the root (oldest) anon_vma in each anon_vma tree. Because we only
    take the lock on the root anon_vma, we cannot use the lock on higher-up
    anon_vmas to lock anything. This makes it impossible to do an indirect
    lookup of the root anon_vma, since the data structures could go away from
    under us.

    However, a direct pointer is safe because the root anon_vma is always the
    last one that gets freed on munmap or exit, by virtue of the same_vma list
    order and unlink_anon_vmas walking the list forward.
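
    Concretely, this adds a direct pointer to the structure (sketch):

        struct anon_vma {
                struct anon_vma *root;  /* root of this anon_vma tree */
                spinlock_t lock;        /* only the root's lock is taken */
                /* ... list heads, refcounts, etc. ... */
        };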

    [akpm@linux-foundation.org: fix typo]
    Signed-off-by: Rik van Riel
    Acked-by: Mel Gorman
    Acked-by: KAMEZAWA Hiroyuki
    Tested-by: Larry Woodman
    Acked-by: Larry Woodman
    Reviewed-by: Minchan Kim
    Acked-by: Linus Torvalds
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
    Substitute a direct call of spin_lock(anon_vma->lock) with an inline
    function doing exactly the same.

    This makes it easier to do the substitution to the root anon_vma lock in a
    following patch.

    We will deal with the handful of special locks (nested, dec_and_lock, etc)
    separately.
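
    The inline functions are trivial (a sketch matching the description):

        static inline void anon_vma_lock(struct anon_vma *anon_vma)
        {
                spin_lock(&anon_vma->lock);
        }

        static inline void anon_vma_unlock(struct anon_vma *anon_vma)
        {
                spin_unlock(&anon_vma->lock);
        }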

    Signed-off-by: Rik van Riel
    Acked-by: Mel Gorman
    Acked-by: KAMEZAWA Hiroyuki
    Tested-by: Larry Woodman
    Acked-by: Larry Woodman
    Reviewed-by: Minchan Kim
    Acked-by: Linus Torvalds
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
  • Rename anon_vma_lock to vma_lock_anon_vma. This matches the naming style
    used in page_lock_anon_vma and will come in really handy further down in
    this patch series.

    Signed-off-by: Rik van Riel
    Acked-by: Mel Gorman
    Acked-by: KAMEZAWA Hiroyuki
    Tested-by: Larry Woodman
    Acked-by: Larry Woodman
    Reviewed-by: Minchan Kim
    Acked-by: Linus Torvalds
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel