17 Oct, 2007

40 commits

  • The use of SPARSEMEM and pageblock flags causes two problems on ia64.

    The first part of the problem is that units are incorrect in the
    SECTION_BLOCKFLAGS_BITS computation. This results in a mem_section's
    section_mem_map being treated as part of a bitmap, which isn't good.
    This was evident as an invalid virtual address when mem_init attempted
    to free bootmem pages while relinquishing control from the bootmem
    allocator.

    The second part of the problem occurs because the pageblock flags
    bitmap is located within the mem_section. The SECTIONS_PER_ROOT
    computation using sizeof(mem_section) may then not yield a power of 2,
    depending on the size of the bitmap, which breaks the masks and other
    calculations that assume a power-of-2 base.
    This issue was seen with SPARSEMEM_EXTREME on ia64. This patch moves the
    bitmap outside of mem_section and uses a pointer instead in the
    mem_section. The bitmaps are allocated when the section is being
    initialised.
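
    As a sketch of why the embedded bitmap hurts, the standalone C below
    (struct layouts and sizes are illustrative, not the real kernel
    definitions) shows how an embedded bitmap can make the section
    structure's size a non-power-of-2, while a pointer keeps
    SECTIONS_PER_ROOT a power of 2:

        /* Illustrative only: models the layout change, not the kernel. */
        #include <stdio.h>

        #define PAGE_SIZE 4096UL

        /* Before the fix: bitmap embedded in the section structure. */
        struct mem_section_embedded {
                unsigned long section_mem_map;
                unsigned long pageblock_flags[2]; /* size depends on config */
        };

        /* After the fix: only a pointer, filled in at section init. */
        struct mem_section_ptr {
                unsigned long section_mem_map;
                unsigned long *pageblock_flags;
        };

        static int pow2(unsigned long x)
        {
                return x && (x & (x - 1)) == 0;
        }

        int main(void)
        {
                unsigned long embedded =
                        PAGE_SIZE / sizeof(struct mem_section_embedded);
                unsigned long ptr =
                        PAGE_SIZE / sizeof(struct mem_section_ptr);

                printf("embedded: %lu sections/root, power of 2: %d\n",
                       embedded, pow2(embedded));
                printf("pointer:  %lu sections/root, power of 2: %d\n",
                       ptr, pow2(ptr));
                return 0;
        }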

    Note that sparse_early_usemap_alloc() does not use alloc_remap() like
    sparse_early_mem_map_alloc(). On x86, the only architecture that uses
    alloc_remap(), the allocation required for the bitmap is typically
    smaller than a cache line, and alloc_remap() pads allocations out to
    the cache size, which would be a needless waste.

    Credit to Bob Picco for identifying the original problem and effecting a
    fix for the SECTION_BLOCKFLAGS_BITS calculation. Credit to Andy Whitcroft
    for devising the best way of allocating the bitmaps only when required for
    the section.

    [wli@holomorphy.com: warning fix]
    Signed-off-by: Bob Picco
    Signed-off-by: Andy Whitcroft
    Signed-off-by: Mel Gorman
    Cc: "Luck, Tony"
    Signed-off-by: William Irwin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • MIGRATE_RECLAIMABLE allocations tend to be very bursty in nature, such
    as when updatedb starts. It is likely this will occur in situations
    where MAX_ORDER blocks of pages are not free. This means that updatedb
    can scatter MIGRATE_RECLAIMABLE pages throughout the address space.
    This patch is more aggressive about stealing blocks of pages for
    MIGRATE_RECLAIMABLE.

    Signed-off-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • This patch chooses blocks with lower PFNs when placing kernel
    allocations. This is particularly important during fallback in low
    memory situations to stop unmovable pages from being placed throughout
    the entire address space.

    Signed-off-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Grouping pages by mobility can only operate successfully when there
    are more MAX_ORDER_NR_PAGES areas than mobility types. When there are
    insufficient areas, fallbacks cannot be avoided. This has a noticeable
    performance impact on machines with small amounts of memory relative
    to MAX_ORDER_NR_PAGES. For example, on IA64 with a configuration
    including huge pages, MAX_ORDER_NR_PAGES spans 1GiB, so at least 4GiB
    of RAM would be needed before grouping pages by mobility was useful.
    In comparison, an x86 machine would need only 16MB.

    This patch checks the size of vm_total_pages in build_all_zonelists().
    If there are not enough areas, mobility is effectively disabled by
    considering all allocations as the same type (UNMOVABLE). This is
    achieved via a __read_mostly flag.
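
    A minimal standalone sketch of the check (the constants here are
    illustrative; the real test lives in build_all_zonelists()):

        #include <stdio.h>

        /* Hypothetical figures: x86-like, MAX_ORDER 11, 4KiB pages. */
        #define MAX_ORDER_NR_PAGES (1UL << 10) /* 4MiB per area */
        #define MIGRATE_TYPES      4UL

        /* Grouping needs at least one MAX_ORDER area per migrate type. */
        static int grouping_useful(unsigned long vm_total_pages)
        {
                return vm_total_pages >= MIGRATE_TYPES * MAX_ORDER_NR_PAGES;
        }

        int main(void)
        {
                /* 8MiB = 2048 pages: too few areas, grouping disabled. */
                printf("8MiB:  %s\n",
                       grouping_useful(2048) ? "grouped" : "disabled");
                /* 32MiB = 8192 pages: enough areas for every type. */
                printf("32MiB: %s\n",
                       grouping_useful(8192) ? "grouped" : "disabled");
                return 0;
        }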

    With this patch, performance is comparable to disabling grouping pages
    by mobility at compile-time on a test machine with insufficient
    memory. With this patch, it is reasonable to get rid of grouping pages
    by mobility as a compile-time option.

    Signed-off-by: Mel Gorman
    Acked-by: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • In rare cases, the kernel needs to allocate a high-order block of pages
    without sleeping. For example, this is the case with e1000 cards configured
    to use jumbo frames. Migrating or reclaiming pages in this situation is not
    an option.

    This patch groups these allocations together as much as possible by
    adding a new MIGRATE_TYPE. The MIGRATE_HIGHATOMIC type is exactly what
    it sounds like. Care is taken that pages of other migrate types do not
    use the same blocks as high-order atomic allocations.

    Signed-off-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • This patch marks a number of allocations that are either short-lived such as
    network buffers or are reclaimable such as inode allocations. When something
    like updatedb is called, long-lived and unmovable kernel allocations tend to
    be spread throughout the address space which increases fragmentation.

    This patch groups these allocations together as much as possible by
    adding a new MIGRATE_TYPE. The MIGRATE_RECLAIMABLE type is for
    allocations that can be reclaimed on demand, but not moved; i.e. they
    can be "migrated" by deleting them and re-reading the information from
    elsewhere.
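
    A standalone sketch of the flag-to-type mapping this implies (bit
    values and the helper name are illustrative placeholders, not the
    kernel's definitions):

        #include <stdio.h>

        #define __GFP_MOVABLE     0x1u /* placeholder bit values */
        #define __GFP_RECLAIMABLE 0x2u

        enum { MIGRATE_UNMOVABLE, MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE };

        static int gfpflags_to_migratetype(unsigned int gfp_flags)
        {
                if (gfp_flags & __GFP_RECLAIMABLE)
                        return MIGRATE_RECLAIMABLE;
                if (gfp_flags & __GFP_MOVABLE)
                        return MIGRATE_MOVABLE;
                /* Default: treat as a long-lived kernel allocation. */
                return MIGRATE_UNMOVABLE;
        }

        int main(void)
        {
                printf("%d %d %d\n",
                       gfpflags_to_migratetype(0),
                       gfpflags_to_migratetype(__GFP_RECLAIMABLE),
                       gfpflags_to_migratetype(__GFP_MOVABLE));
                return 0;
        }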

    Signed-off-by: Mel Gorman
    Cc: Andy Whitcroft
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • When a fallback occurs, there will be free pages for one allocation type
    stored on the list for another. When a large steal occurs, this patch will
    move all the free pages within one list to the other.

    [y-goto@jp.fujitsu.com: fix BUG_ON check at move_freepages()]
    [apw@shadowen.org: Move to using pfn_valid_within()]
    Signed-off-by: Mel Gorman
    Cc: Christoph Lameter
    Signed-off-by: Yasunori Goto
    Cc: Bjorn Helgaas
    Signed-off-by: Andy Whitcroft
    Cc: Bob Picco
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Per-cpu pages can accidentally cause fragmentation because they are
    free but act as pinned pages in an otherwise contiguous block. When
    this patch is applied, the per-cpu caches are drained after direct
    reclaim is entered if the requested order is greater than 0. It simply
    reuses the code used by suspend and hotplug.

    Signed-off-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • The grouping mechanism has some memory overhead and a more complex
    allocation path. This patch allows the strategy to be disabled for
    small memory systems or if it is known the workload is suffering
    because of the strategy. It also acts to show where the page grouping
    strategy interacts with the standard buddy allocator.

    Signed-off-by: Mel Gorman
    Signed-off-by: Joel Schopp
    Cc: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • The freelists for each migrate type can slowly become polluted due to
    the per-cpu list. Consider the following sequence of events:

    1. A 2^(MAX_ORDER-1) list is reserved for __GFP_MOVABLE pages
    2. An order-0 page is allocated from the newly reserved block
    3. The page is freed and placed on the per-cpu list
    4. alloc_page() is called with GFP_KERNEL as the gfp_mask
    5. The per-cpu list is used to satisfy the allocation

    This results in a kernel page sitting in the middle of a migratable
    region. This patch prevents this leak from occurring by storing the
    MIGRATE_ type of the page in page->private. On allocation, only a page
    of the desired type will be returned, else more pages will be
    allocated. This may temporarily allow a per-cpu list to go over the
    pcp->high limit, but it will be corrected on the next free. Care is
    taken to preserve the hotness of pages recently freed.
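
    A simplified standalone sketch of the allocation-side search (the
    helper name is hypothetical and the kernel's struct list_head is
    reduced to a singly linked list):

        #include <stddef.h>

        enum { MIGRATE_UNMOVABLE, MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE };

        struct page {
                struct page *next;
                unsigned long private; /* migrate type recorded at free */
        };

        /* Unlink and return the first page of the wanted type; NULL
         * means the caller must allocate more pages of that type. */
        static struct page *pcp_take_page(struct page **list, int migratetype)
        {
                struct page **pp;

                for (pp = list; *pp; pp = &(*pp)->next) {
                        if ((*pp)->private == (unsigned long)migratetype) {
                                struct page *page = *pp;

                                *pp = page->next;
                                page->next = NULL;
                                return page;
                        }
                }
                return NULL;
        }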

    The additional code is not measurably slower for the workloads we've tested.

    Signed-off-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • This patch adds the core of the fragmentation reduction strategy. It
    works by grouping pages together based on their ability to migrate or
    be reclaimed: each list in zone->free_area is broken into
    MIGRATE_TYPES separate lists.
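
    In data-structure terms the change looks roughly like this (a sketch;
    the migrate-type count reflects the types present at this point in
    the series):

        #define MIGRATE_TYPES 3 /* UNMOVABLE, RECLAIMABLE, MOVABLE */

        struct list_head {
                struct list_head *next, *prev;
        };

        /* Before: one free_list per order. After: one list per migrate
         * type per order, so blocks split for one type stay on that
         * type's lists. */
        struct free_area {
                struct list_head free_list[MIGRATE_TYPES];
                unsigned long nr_free;
        };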

    Signed-off-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Here is the latest revision of the anti-fragmentation patches. Of
    particular note in this version is special treatment of high-order
    atomic allocations. Care is taken to group them together and avoid
    grouping pages of other types near them. Artificial tests imply that
    it works. I'm trying to get the hardware together that would allow
    setting up of a "real" test. If anyone already has a setup and test
    that can trigger the atomic-allocation problem, I'd appreciate a test
    of these patches and a report. The second major change is that these
    patches will apply cleanly with patches that implement
    anti-fragmentation through zones.

    kernbench shows effectively no performance difference, varying between
    -0.2% and +2% on a variety of test machines. Success rates for huge
    page allocation are dramatically increased. For example, on a ppc64
    machine, the vanilla kernel was only able to allocate 1% of memory as
    hugepages, and that was due to a single hugepage reserved via
    min_free_kbytes. With these patches applied, 17% was allocatable as
    superpages. With reclaim-related fixes from Andy Whitcroft, it was
    40%, and further reclaim-related improvements should increase this
    further.

    Changelog Since V28
    o Group high-order atomic allocations together
    o It is no longer required to set min_free_kbytes to 10% of memory. A
    value of 16384 in most cases will be sufficient
    o Now applied with zone-based anti-fragmentation
    o Fix incorrect VM_BUG_ON within buffered_rmqueue()
    o Reorder the stack so later patches do not back out work from earlier
    patches
    o Fix bug where journal pages were being treated as movable
    o Bias placement of non-movable pages to lower PFNs
    o More aggressive clustering of reclaimable pages in reaction to
    workloads like updatedb that flood the size of inode caches

    Changelog Since V27

    o Renamed anti-fragmentation to Page Clustering. Anti-fragmentation was giving
    the mistaken impression that it was the 100% solution for high order
    allocations. Instead, it greatly increases the chances high-order
    allocations will succeed and lays the foundation for defragmentation and
    memory hot-remove to work properly
    o Redefine page groupings based on ability to migrate or reclaim instead of
    basing on reclaimability alone
    o Get rid of spurious inits
    o Per-cpu lists are no longer split up per-type. Instead the per-cpu list is
    searched for a page of the appropriate type
    o Added more explanation commentary
    o Fix up bug in pageblock code where bitmap was used before being
    initialised

    Changelog Since V26
    o Fix double init of lists in setup_pageset

    Changelog Since V25
    o Fix loop order of for_each_rclmtype_order so that order of loop matches args
    o gfpflags_to_rclmtype uses gfp_t instead of unsigned long
    o Rename get_pageblock_type() to get_page_rclmtype()
    o Fix alignment problem in move_freepages()
    o Add mechanism for assigning flags to blocks of pages instead of page->flags
    o On fallback, do not examine the preferred list of free pages a second time

    The purpose of these patches is to reduce external fragmentation by
    grouping pages of related types together. When pages are migrated (or
    reclaimed under memory pressure), large contiguous blocks will be
    freed.

    This patch works by categorising allocations by their ability to
    migrate:

    Movable - The pages may be moved with the page migration mechanism. These are
    generally userspace pages.

    Reclaimable - These are allocations for some kernel caches that are
    reclaimable or allocations that are known to be very short-lived.

    Unmovable - These are pages that are allocated by the kernel that
    are not trivially reclaimed. For example, the memory allocated for a
    loaded module would be in this category. By default, allocations are
    considered to be of this type.

    HighAtomic - These are high-order allocations belonging to callers
    that cannot sleep or perform any IO. In practice, this is restricted
    to jumbo frame allocation for network receive. It is assumed that the
    allocations are short-lived.

    Instead of having one MAX_ORDER-sized array of free lists in struct free_area,
    there is one for each type of reclaimability. Once a 2^MAX_ORDER block of
    pages is split for a type of allocation, it is added to the free-lists for
    that type, in effect reserving it. Hence, over time, pages of the different
    types can be clustered together.

    When the preferred freelists are exhausted, the largest possible block
    is taken from an alternative list. Buddies that are split from that
    large block are placed on the preferred allocation-type freelists to
    mitigate fragmentation.

    This implementation gives best-effort for low fragmentation in all
    zones. Ideally, min_free_kbytes needs to be set to a value equal to
    4 * (1 << (MAX_ORDER-1)) pages in most cases. With MAX_ORDER of 11 and
    4KiB pages, that is 4 * 1024 pages * 4KiB = 16384 on x86 and x86_64,
    for example.

    Our tests show that about 60-70% of physical memory can be allocated on a
    desktop after a few days uptime. In benchmarks and stress tests, we are
    finding that 80% of memory is available as contiguous blocks at the end of the
    test. To compare, a standard kernel was getting < 1% of memory as large pages
    on a desktop and about 8-12% of memory as large pages at the end of stress
    tests.

    Following this email are 12 patches that implement this page grouping
    feature. The first patch introduces a mechanism for storing flags
    related to a whole block of pages. Then allocations are split between
    movable and all other allocations. Following that are patches to deal
    with per-cpu pages and make the mechanism configurable. The next patch
    moves free pages between lists when partially allocated blocks are
    used for pages of another migrate type. The second last patch groups
    reclaimable kernel allocations such as inode caches together. The
    final patch related to groupings keeps high-order atomic allocations
    together.

    The last two patches are more concerned with control of fragmentation.
    The second last patch biases placement of non-movable allocations
    towards the start of memory. This is with a view to supporting memory
    hot-remove of DIMMs with higher PFNs in the future. The biasing could
    be enforced much more heavily, but at a cost. The last patch
    aggressively clusters reclaimable pages like inode caches together.

    The fragmentation reduction strategy needs to track whether pages
    within a block can be moved or reclaimed so that pages are freed to
    the appropriate list. This patch adds a bitmap of flags covering a
    whole MAX_ORDER block of pages.

    In non-SPARSEMEM configurations, the bitmap is stored in the struct zone and
    allocated during initialisation. SPARSEMEM statically allocates the bitmap in
    a struct mem_section so that bitmaps do not have to be resized during memory
    hotadd. This wastes a small amount of memory per unused section (usually
    sizeof(unsigned long)) but the complexity of dynamically allocating the memory
    is quite high.
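
    A standalone sketch of the bit packing (field width and helper names
    are illustrative; a power-of-2 width keeps each block's bits inside
    one word):

        #include <stdio.h>

        #define NR_PAGEBLOCK_BITS 4 /* bits tracked per MAX_ORDER block */
        #define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

        static unsigned long get_block_flags(const unsigned long *map, int blk)
        {
                int bit = blk * NR_PAGEBLOCK_BITS;

                return (map[bit / BITS_PER_LONG] >> (bit % BITS_PER_LONG)) &
                       ((1UL << NR_PAGEBLOCK_BITS) - 1);
        }

        static void set_block_flags(unsigned long *map, int blk,
                                    unsigned long flags)
        {
                int bit = blk * NR_PAGEBLOCK_BITS;
                unsigned long mask = ((1UL << NR_PAGEBLOCK_BITS) - 1)
                                     << (bit % BITS_PER_LONG);

                map[bit / BITS_PER_LONG] &= ~mask;
                map[bit / BITS_PER_LONG] |=
                        (flags << (bit % BITS_PER_LONG)) & mask;
        }

        int main(void)
        {
                unsigned long bitmap[2] = { 0, 0 }; /* a small zone */

                set_block_flags(bitmap, 5, 0x3); /* e.g. a migrate type */
                printf("block 5: %#lx\n", get_block_flags(bitmap, 5));
                return 0;
        }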

    Additional credit to Andy Whitcroft who reviewed an earlier
    implementation of the mechanism and suggested how to make it a *lot*
    cleaner.

    Signed-off-by: Mel Gorman
    Cc: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • The current ia64 kernel flushes the icache by lazy_mmu_prot_update()
    *after* set_pte(). This is too late. This patch removes
    lazy_mmu_prot_update and adds a modified set_pte() that flushes when
    necessary.

    This patch flushes the icache of a page when:
    new pte has exec bit
    && new pte has present bit
    && new pte is a user page
    && (old *ptep is not present
    || new pte's pfn is not the same as old *ptep's pfn)
    && new pte's page has no PG_arch_1 bit.
    PG_arch_1 is set when a page is cache consistent.

    I think these condition checks are much easier to understand than
    considering "where should sync_icache_dcache() be inserted?".

    pte_user() for ia64 was removed by http://lkml.org/lkml/2007/6/12/67 as
    clean-up. So, I added it again.

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: "Luck, Tony"
    Cc: Christoph Lameter
    Cc: Hugh Dickins
    Cc: Nick Piggin
    Acked-by: David S. Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • In migration, a new page should be cache flushed before set_pte() in some
    archs which have virtually-tagged cache.

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: "Luck, Tony"
    Cc: Christoph Lameter
    Cc: Hugh Dickins
    Cc: Nick Piggin
    Acked-by: David S. Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • Swappiness isn't a safe sysctl. Setting it to 0, for example, can hang
    a system. That's a corner case, but even setting it to 10 or lower can
    waste enormous amounts of cpu without making much progress. We have
    customers who want to use swappiness, but they can't because of the
    current implementation (if you change it so the system stops swapping,
    it really stops swapping and nothing works sanely anymore if you
    really had to swap something to make progress).

    This patch from Kurt Garloff makes swappiness safer to use (no more huge
    cpu usage or hangs with low swappiness values).

    I think prev_priority can also be nuked since it wastes 4 bytes per
    zone (that would be an incremental patch, but I'll wait for
    nr_scan_[in]active to be nuked first for similar reasons). Clearly
    somebody at some point noticed how broken that thing was and had to
    add min(priority, prev_priority) to give it some reliability, but they
    didn't go the last mile to nuke prev_priority too. Calculating
    distress only as a function of the not-racy priority is correct and
    surely more than enough, without having to add randomness into the
    equation.

    Patch is tested on older kernels but it compiles and it's quite simple
    so...

    Overall I'm not very satisfied by the swappiness tweak, since it
    doesn't really do anything with the dirty pagecache that may be
    inactive. We need another kind of tweak that controls the inactive
    scan and tunes the can_writepage feature (not yet in mainline despite
    having been submitted a few times), not only the active one. That new
    tweak will tell the kernel how hard to scan the inactive list for pure
    clean pagecache (something the mainline kernel isn't capable of yet).
    We already have that feature working in all our enterprise kernels
    with the default reasonable tune, or they can't even run a readonly
    backup with tar without triggering huge write I/O. I think it should
    be available also in mainline later.

    Cc: Nick Piggin
    Signed-off-by: Kurt Garloff
    Signed-off-by: Andrea Arcangeli
    Signed-off-by: Fengguang Wu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     
  • The function of GFP_LEVEL_MASK seems to be unclear. In order to clear up
    the mystery we get rid of it and replace GFP_LEVEL_MASK with 3 sets of GFP
    flags:

    GFP_RECLAIM_MASK Flags used to control page allocator reclaim behavior.

    GFP_CONSTRAINT_MASK Flags used to limit where allocations can occur.

    GFP_SLAB_BUG_MASK Flags that the slab allocator BUG()s on.

    These replace the uses of GFP_LEVEL_MASK in the slab allocators and in
    vmalloc.c.

    The use of flags not included in these sets may occur as a result of a
    slab allocation standing in for a page allocation when constructing
    scatter-gather lists. Extraneous flags are cleared and not passed
    through to the page allocator. __GFP_MOVABLE/RECLAIMABLE, __GFP_COLD
    and __GFP_COMP will now be ignored if passed to a slab allocator.
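
    A standalone sketch of the masking (the bit values are illustrative
    placeholders; the real masks and flags live in include/linux/gfp.h):

        #include <assert.h>

        typedef unsigned int gfp_t;

        #define __GFP_WAIT     0x01u /* placeholder values only */
        #define __GFP_IO       0x02u
        #define __GFP_FS       0x04u
        #define __GFP_HARDWALL 0x08u
        #define __GFP_THISNODE 0x10u
        #define __GFP_HIGHMEM  0x20u
        #define __GFP_MOVABLE  0x40u

        #define GFP_RECLAIM_MASK    (__GFP_WAIT | __GFP_IO | __GFP_FS)
        #define GFP_CONSTRAINT_MASK (__GFP_HARDWALL | __GFP_THISNODE)
        #define GFP_SLAB_BUG_MASK   (__GFP_HIGHMEM)

        int main(void)
        {
                gfp_t flags = __GFP_WAIT | __GFP_IO | __GFP_MOVABLE;

                /* Flags a slab allocator refuses outright. */
                assert(!(flags & GFP_SLAB_BUG_MASK));

                /* Extraneous flags (__GFP_MOVABLE here) are cleared
                 * rather than passed to the page allocator. */
                flags &= GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK;
                assert(flags == (__GFP_WAIT | __GFP_IO));
                return 0;
        }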

    Change the allocation of allocator meta data in SLAB and vmalloc to not
    pass through flags listed in GFP_CONSTRAINT_MASK. SLAB already removes the
    __GFP_THISNODE flag for such allocations. Generalize that to also cover
    vmalloc. The use of GFP_CONSTRAINT_MASK also includes __GFP_HARDWALL.

    The impact of allocator metadata placement on access latency to the
    cachelines of the object itself is minimal since metadata is only
    referenced on alloc and free. The attempt is still made to place the meta
    data optimally but we consistently allow fallback both in SLAB and vmalloc
    (SLUB does not need to allocate metadata like that).

    Allocator metadata may serve multiple in kernel users and thus should not
    be subject to the limitations arising from a single allocation context.

    [akpm@linux-foundation.org: fix fallback_alloc()]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • When a cpu is onlined on a memory-less-node box, the kernel panics due
    to touching a NULL pointer in pgdat->kswapd. Currently, kswapd runs
    only on nodes which have memory, so calling set_cpus_allowed() is not
    necessary for a memory-less node.

    This patch fixes that.

    Signed-off-by: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yasunori Goto
     
  • Here's a cut at fixing up uses of the online node map in generic code.

    mm/shmem.c:shmem_parse_mpol()

    Ensure nodelist is subset of nodes with memory.
    Use node_states[N_HIGH_MEMORY] as default for missing
    nodelist for interleave policy.

    mm/shmem.c:shmem_fill_super()

    initialize policy_nodes to node_states[N_HIGH_MEMORY]

    mm/page-writeback.c:highmem_dirtyable_memory()

    sum over nodes with memory

    mm/page_alloc.c:zlc_setup()

    allowednodes - use nodes with memory.

    mm/page_alloc.c:default_zonelist_order()

    average over nodes with memory.

    mm/page_alloc.c:find_next_best_node()

    skip nodes w/o memory.
    N_HIGH_MEMORY state mask may not be initialized at this time,
    unless we want to depend on early_calculate_totalpages() [see
    below]. Will ZONE_MOVABLE ever be configurable?

    mm/page_alloc.c:find_zone_movable_pfns_for_nodes()

    spread kernelcore over nodes with memory.

    This required calling early_calculate_totalpages()
    unconditionally, and populating N_HIGH_MEMORY node
    state therein from nodes in the early_node_map[].
    If we can depend on this, we can eliminate the
    population of N_HIGH_MEMORY mask from __build_all_zonelists()
    and use the N_HIGH_MEMORY mask in find_next_best_node().

    mm/mempolicy.c:mpol_check_policy()

    Ensure nodes specified for policy are subset of
    nodes with memory.

    [akpm@linux-foundation.org: fix warnings]
    Signed-off-by: Lee Schermerhorn
    Acked-by: Christoph Lameter
    Cc: Shaohua Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • cpusets try to ensure that any node added to a cpuset's mems_allowed
    is on-line and contains memory. The assumption was that online nodes
    contained memory. Thus, it is possible to add memoryless nodes to a
    cpuset and then add tasks to this cpuset. This results in a continuous
    series of oom-kills and an apparent system hang.

    Change cpusets to use node_states[N_HIGH_MEMORY] [a.k.a. node_memory_map] in
    place of node_online_map when vetting memories. Return error if admin
    attempts to write a non-empty mems_allowed node mask containing only
    memoryless-nodes.

    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Bob Picco
    Signed-off-by: Nishanth Aravamudan
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • GFP_THISNODE checks that the zone selected is within the pgdat (node) of the
    first zone of a nodelist. That only works if the node has memory. A
    memoryless node will have its first node on another pgdat (node).

    GFP_THISNODE currently will simply return memory from the first pgdat.
    Thus it may return memory on other nodes. GFP_THISNODE should fail if
    there is no local memory on a node.

    Add a new set of zonelists for each node that contain only zones from
    the node itself, so that no fallback is possible.

    Then modify gfp_type to pick up the right zone based on the presence
    of __GFP_THISNODE.

    Drop the existing GFP_THISNODE checks from the page allocator's hot
    path.
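
    A standalone model of the zonelist selection this describes (array
    layout, bit value and names are modelled on the description rather
    than the exact patch):

        #include <stdio.h>

        #define MAX_NR_ZONES   4
        #define __GFP_THISNODE 0x100u /* placeholder bit value */

        struct zonelist {
                const char *label;
        };

        struct pglist_data {
                /* First MAX_NR_ZONES entries allow fallback; the second
                 * set contains only the node's own zones. */
                struct zonelist node_zonelists[MAX_NR_ZONES * 2];
        };

        static struct zonelist *pick_zonelist(struct pglist_data *pgdat,
                                              unsigned int gfp, int zone_idx)
        {
                if (gfp & __GFP_THISNODE)
                        return &pgdat->node_zonelists[MAX_NR_ZONES + zone_idx];
                return &pgdat->node_zonelists[zone_idx];
        }

        int main(void)
        {
                struct pglist_data pgdat = { {
                        { "fallback-0" }, { "fallback-1" },
                        { "fallback-2" }, { "fallback-3" },
                        { "local-0" }, { "local-1" },
                        { "local-2" }, { "local-3" },
                } };

                printf("%s\n", pick_zonelist(&pgdat, 0, 1)->label);
                printf("%s\n",
                       pick_zonelist(&pgdat, __GFP_THISNODE, 1)->label);
                return 0;
        }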

    Signed-off-by: Christoph Lameter
    Acked-by: Nishanth Aravamudan
    Tested-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • get_pfn_range_for_nid() is called multiple times for each node at boot time.
    Each time, it will warn about nodes with no memory, resulting in boot messages
    like:

    Node 0 active with no memory
    Node 0 active with no memory
    Node 0 active with no memory
    Node 0 active with no memory
    Node 0 active with no memory
    Node 0 active with no memory
    On node 0 totalpages: 0
    Node 0 active with no memory
    Node 0 active with no memory
    DMA zone: 0 pages used for memmap
    Node 0 active with no memory
    Node 0 active with no memory
    Normal zone: 0 pages used for memmap
    Node 0 active with no memory
    Node 0 active with no memory
    Movable zone: 0 pages used for memmap

    and so on for each memoryless node.

    We already have the "On node N totalpages: ..." and other related messages, so
    drop the "Node N active with no memory" warnings.

    Signed-off-by: Lee Schermerhorn
    Cc: Bob Picco
    Cc: Nishanth Aravamudan
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • We need the check for a node with cpu in zone reclaim. Zone reclaim will not
    allow remote zone reclaim if a node has a cpu.

    [Lee.Schermerhorn@hp.com: Move setup of N_CPU node state mask]
    Signed-off-by: Christoph Lameter
    Tested-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: Nishanth Aravamudan
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Online nodes now may have no memory. The checks and initialization must
    therefore be changed to no longer use the online functions.

    This will correctly initialize the interleave on bootup to only target nodes
    with memory and will make sys_move_pages return an error when a page is to be
    moved to a memoryless node. Similarly we will get an error if MPOL_BIND and
    MPOL_INTERLEAVE is used on a memoryless node.

    These are somewhat new semantics. So far one could specify memoryless nodes
    and we would maybe do the right thing and just ignore the node (or we'd do
    something strange like with MPOL_INTERLEAVE). If we want to allow the
    specification of memoryless nodes via memory policies then we need to keep
    checking for online nodes.

    Signed-off-by: Christoph Lameter
    Acked-by: Nishanth Aravamudan
    Tested-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Processors on memoryless nodes must be able to fall back to remote nodes in
    order to get a profiling buffer. This may lead to excessive NUMA traffic but
    I think we should allow this rather than failing.

    Signed-off-by: Christoph Lameter
    Acked-by: Nishanth Aravamudan
    Acked-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • The checks for node_online in the uncached allocator are made to make
    sure that memory is available on these nodes. Thus switch all the
    checks from N_ONLINE to N_HIGH_MEMORY.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Jes Sorensen
    Acked-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: Nishanth Aravamudan
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Simply switch all for_each_online_node to
    for_each_node_state(N_NORMAL_MEMORY). That way SLUB only operates on
    nodes with regular memory. Any allocation attempt on a memoryless node
    or a node with just highmem will fail, whereupon SLUB will fetch
    memory from a nearby node (depending on how memory policies and
    cpusets describe fallback).

    Signed-off-by: Christoph Lameter
    Tested-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: Nishanth Aravamudan
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Slab should not allocate control structures for nodes without memory.
    This may seem to work right now, but it's unreliable since not all
    allocations can fall back due to the use of GFP_THISNODE.

    Switching a few for_each_online_node's to N_NORMAL_MEMORY will allow us to
    only allocate for nodes that have regular memory.

    Signed-off-by: Christoph Lameter
    Acked-by: Nishanth Aravamudan
    Acked-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • A node without memory does not need a kswapd. So use the memory map instead
    of the online map when starting kswapd.

    Signed-off-by: Christoph Lameter
    Acked-by: Nishanth Aravamudan
    Tested-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • constrained_alloc() builds its own memory map for nodes with memory. We have
    that available in N_HIGH_MEMORY now. So simplify the code.

    Signed-off-by: Christoph Lameter
    Acked-by: Nishanth Aravamudan
    Acked-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • MPOL_INTERLEAVE currently simply loops over all nodes. Allocations on
    memoryless nodes will be redirected to nodes with memory. This results
    in an imbalance because the neighboring nodes of memoryless nodes will
    get significantly more interleave hits than the rest of the nodes on
    the system.

    We can avoid this imbalance by clearing the nodes in the interleave node set
    that have no memory. If we use the node map of the memory nodes instead of
    the online nodes then we have only the nodes we want.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Nishanth Aravamudan
    Tested-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • It is necessary to know if nodes have memory since we have recently
    begun to add support for memoryless nodes. For that purpose we
    introduce two new node states: N_HIGH_MEMORY and N_NORMAL_MEMORY.

    A node has its bit in N_HIGH_MEMORY set if it has any memory,
    regardless of the type of memory. If a node has memory then it has at
    least one zone defined in its pgdat structure that is located in the
    pgdat itself.

    A node has its bit in N_NORMAL_MEMORY set if it has a lower zone than
    ZONE_HIGHMEM. This means it is possible to allocate memory that is not
    subject to kmap.

    N_HIGH_MEMORY and N_NORMAL_MEMORY can then be used in various places
    to ensure that we do the right thing when we encounter a memoryless
    node.
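
    A standalone model of the node state masks (a sketch; the kernel
    keeps these as nodemask_t entries in a node_states[] array):

        #include <stdio.h>

        enum node_states {
                N_POSSIBLE, N_ONLINE, N_NORMAL_MEMORY, N_HIGH_MEMORY,
                NR_NODE_STATES
        };

        static unsigned long node_states[NR_NODE_STATES];

        static void node_set_state(int nid, enum node_states st)
        {
                node_states[st] |= 1UL << nid;
        }

        static int node_state(int nid, enum node_states st)
        {
                return !!(node_states[st] & (1UL << nid));
        }

        int main(void)
        {
                int nid;

                node_set_state(0, N_ONLINE); /* memoryless node */
                node_set_state(1, N_ONLINE);
                node_set_state(1, N_HIGH_MEMORY); /* has some memory */
                node_set_state(1, N_NORMAL_MEMORY); /* has !highmem zone */

                for (nid = 0; nid < 2; nid++)
                        printf("node %d: online=%d memory=%d lowmem=%d\n",
                               nid, node_state(nid, N_ONLINE),
                               node_state(nid, N_HIGH_MEMORY),
                               node_state(nid, N_NORMAL_MEMORY));
                return 0;
        }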

    [akpm@linux-foundation.org: build fix]
    [Lee.Schermerhorn@hp.com: update N_HIGH_MEMORY node state for memory hotadd]
    [y-goto@jp.fujitsu.com: Fix memory hotplug + sparsemem build]
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Nishanth Aravamudan
    Signed-off-by: Christoph Lameter
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Yasunori Goto
    Signed-off-by: Paul Mundt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Why do we need to support memoryless nodes?

    KAMEZAWA Hiroyuki wrote:

    > For fujitsu, problem is called "empty" node.
    >
    > When ACPI's SRAT table includes "possible nodes", ia64 bootstrap(acpi_numa_init)
    > creates nodes, which includes no memory, no cpu.
    >
    > I tried to remove empty-node in past, but that was denied.
    > It was because we can hot-add cpu to the empty node.
    > (node-hotplug triggered by cpu is not implemented now. and it will be ugly.)
    >
    >
    > For HP, (Lee can comment on this later), they have memory-less-node.
    > As far as I hear, HP's machine can have following configuration.
    >
    > (example)
    > Node0: CPU0 memory AAA MB
    > Node1: CPU1 memory AAA MB
    > Node2: CPU2 memory AAA MB
    > Node3: CPU3 memory AAA MB
    > Node4: Memory XXX GB
    >
    > AAA is very small value (below 16MB) and will be omitted by ia64 bootstrap.
    > After boot, only Node 4 has valid memory (but have no cpu.)
    >
    > Maybe this is memory-interleave by firmware config.

    Christoph Lameter wrote:

    > Future SGI platforms (actually also current one can have but nothing like
    > that is deployed to my knowledge) have nodes with only cpus. Current SGI
    > platforms have nodes with just I/O that we so far cannot manage in the
    > core. So the arch code maps them to the nearest memory node.

    Lee Schermerhorn wrote:

    > For the HP platforms, we can configure each cell with from 0% to 100%
    > "cell local memory". When we configure with less than 100% CLM, the
    > remainder is interleaved by hardware on a cache-line granularity to
    > improve bandwidth at the expense of latency for numa-challenged
    > applications [and OSes, but not our problem ;-)]. When we boot Linux on
    > such a config, all of the real nodes have no memory--it all resides in a
    > single interleaved pseudo-node.
    >
    > When we boot Linux on a 100% CLM configuration [== NUMA], we still have
    > the interleaved pseudo-node. It contains a few hundred MB stolen from
    > the real nodes to contain the DMA zone. [Interleaved memory resides at
    > phys addr 0]. The memoryless-nodes patches, along with the zoneorder
    > patches, support this config as well.
    >
    > Also, when we boot a NUMA config with the "mem=" command line,
    > specifying less memory than actually exists, Linux takes the excluded
    > memory "off the top" rather than distributing it across the nodes. This
    > can result in memoryless nodes, as well.
    >

    This patch:

    Preparation for memoryless node patches.

    Provide a generic way to keep nodemasks describing various characteristics of
    NUMA nodes.

    Remove the node_online_map and the node_possible_map and realize the
    same functionality using two node states: N_POSSIBLE and N_ONLINE.

    [Lee.Schermerhorn@hp.com: Initialize N_*_MEMORY and N_CPU masks for non-NUMA config]
    Signed-off-by: Christoph Lameter
    Tested-by: Lee Schermerhorn
    Acked-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: Nishanth Aravamudan
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Lee Schermerhorn
    Cc: "Serge E. Hallyn"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • prepare/commit_write no longer returns AOP_TRUNCATED_PAGE since OCFS2 and
    GFS2 were converted to the new aops, so we can make some simplifications
    for that.

    [michal.k.k.piotrowski@gmail.com: fix warning]
    Signed-off-by: Nick Piggin
    Cc: Michael Halcrow
    Cc: Mark Fasheh
    Cc: Steven Whitehouse
    Signed-off-by: Michal Piotrowski
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Implement nobh in new aops. This is a bit tricky. FWIW, nobh_truncate is
    now implemented in a way that does not create blocks in sparse regions,
    which is a silly thing for it to have been doing (isn't it?)

    ext2 survives fsx and fsstress. jfs is converted as well... ext3
    should be easy to do (but not done yet).

    [akpm@linux-foundation.org: coding-style fixes]
    Cc: Badari Pulavarty
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Plug ocfs2 into the ->write_begin and ->write_end aops.

    A bunch of custom code is now gone - the iovec iteration stuff during write
    and the ocfs2 splice write actor.

    Signed-off-by: Mark Fasheh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Cc: Roman Zippel
    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Acked-by: Russell King
    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Signed-off-by: Nick Piggin
    Acked-by: Dave Kleikamp
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Signed-off-by: Nick Piggin
    Cc: Andries Brouwer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Signed-off-by: Nick Piggin
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin