20 Jul, 2007

2 commits

  • Share the same page flag bit for PG_readahead and PG_reclaim.

    One is used only on file reads, the other only for emergency writes. One
    is used mostly for fresh/young pages, the other for old pages.

    Combinations of possible interactions are:

    a) clear PG_reclaim => implicit clear of PG_readahead
    it will turn an asynchronous readahead into a synchronous one,
    which actually does _good_ for readahead:
    the pages will be reclaimed soon, so this is readahead thrashing,
    and synchronous readahead makes more sense in that case.

    b) clear PG_readahead => implicit clear of PG_reclaim
    one (and only one) page will not be reclaimed in time;
    this can be avoided by checking PageWriteback(page) in readahead first

    c) set PG_reclaim => implicit set of PG_readahead
    will confuse readahead and make it restart the size ramp-up process
    it's a trivial problem, and can mostly be avoided by checking
    PageWriteback(page) first in readahead

    d) set PG_readahead => implicit set of PG_reclaim
    PG_readahead will never be set on already cached pages.
    PG_reclaim will always be cleared on dirtying a page.
    So this is not a problem.

    In summary,
    a) we get better behavior
    b,d) possible interactions can be avoided
    c) a race condition exists that might affect readahead, but the chance
    is _really_ low, and the harm to readahead is trivial.

    Compound pages also use PG_reclaim, but for now they do not interact with
    reclaim/readahead code.
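
    A minimal sketch of the aliasing and of the PageWriteback() guard used to
    avoid cases b) and c); the bit number and macro bodies here are
    illustrative, not the exact mainline definitions:

        #define PG_reclaim   17                  /* hypothetical bit number */
        #define PG_readahead PG_reclaim          /* one bit, two meanings */

        #define SetPageReadahead(page)   set_bit(PG_readahead, &(page)->flags)
        #define ClearPageReadahead(page) clear_bit(PG_readahead, &(page)->flags)

        /* cases b)/c): never touch the shared bit while the page is under
         * writeback, so the PG_reclaim set by reclaim is not clobbered */
        if (!PageWriteback(page))
                SetPageReadahead(page);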

    Signed-off-by: Fengguang Wu
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • Introduce a new page flag: PG_readahead.

    It acts as a look-ahead mark, which tells the page reader: Hey, it's time to
    invoke the read-ahead logic. For the sake of I/O pipelining, don't wait until
    it runs out of cached pages!
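
    A sketch of how a reader might consume the mark; the readahead helper named
    here is hypothetical, the point is that the next window is started before
    the cached pages run out:

        if (PageReadahead(page)) {
                ClearPageReadahead(page);
                /* start the next readahead window while this one is read */
                do_async_readahead(mapping, filp, page->index);
        }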

    Signed-off-by: Fengguang Wu
    Cc: Steven Pratt
    Cc: Ram Pai
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     

18 Jul, 2007

5 commits

    Our original NFSv4 delegation policy was to give out a read delegation on any
    open whenever it was possible to do so.

    Since the lifetime of a delegation isn't limited to that of an open, a client
    may quite reasonably hang on to a delegation as long as it has the inode
    cached. This becomes an obvious problem the first time a client's inode cache
    approaches the size of the server's total memory.

    Our first quick solution was to add a hard-coded limit. This patch makes a
    mild incremental improvement by varying that limit according to the server's
    total memory size, allowing at most 4 delegations per megabyte of RAM.

    My quick back-of-the-envelope calculation finds that in the worst case (where
    every delegation is for a different inode), a delegation could take about
    1.5K, which would make the worst case usage about 6% of memory. The new limit
    works out to be about the same as the old on a 1-gig server.
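
    A sketch of the sizing under the stated policy of 4 delegations per
    megabyte; nr_free_buffer_pages() approximates usable low memory, and the
    function and variable names are illustrative:

        static void set_max_delegations(void)
        {
                /* 4 per MB: pages >> (20 - PAGE_SHIFT) gives MB, times 2^2 */
                max_delegations = nr_free_buffer_pages() >> (20 - 2 - PAGE_SHIFT);
        }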

    [akpm@linux-foundation.org: Don't needlessly bloat vmlinux]
    [akpm@linux-foundation.org: Make it right for highmem machines]
    Signed-off-by: "J. Bruce Fields"
    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Meelap Shah
     
  • When we are out of memory of a suitable size we enter reclaim. The current
    reclaim algorithm targets pages in LRU order, which is great for fairness at
    order-0 but highly unsuitable if you desire pages at higher orders. To get
    pages of higher order we must shoot down a very high proportion of memory;
    >95% in a lot of cases.

    This patch set adds a lumpy reclaim algorithm to the allocator. It targets
    groups of pages at the specified order anchored at the end of the active and
    inactive lists. This encourages groups of pages at the requested orders to
    move from active to inactive, and inactive to free lists. This behaviour is
    only triggered out of direct reclaim when higher order pages have been
    requested.

    This patch set is particularly effective when utilised with an
    anti-fragmentation scheme which groups pages of similar reclaimability
    together.

    This patch set is based on Peter Zijlstra's lumpy reclaim V2 patch which forms
    the foundation. Credit to Mel Gorman for sanity checking.

    Mel said:

    The patches have an application with hugepage pool resizing.

    When lumpy-reclaim is used with ZONE_MOVABLE, the hugepages pool can
    be resized with greater reliability. Testing on a desktop machine with 2GB
    of RAM showed that growing the hugepage pool with ZONE_MOVABLE on its own
    was very slow as the success rate was quite low. Without lumpy-reclaim,
    each attempt to grow the pool by 100 pages would yield 1 or 2 hugepages.
    With lumpy-reclaim, getting 40 to 70 hugepages on each attempt was typical.
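
    A sketch of the lumpy idea (helper names mirror the reclaim code, but the
    body is illustrative): starting from an LRU "anchor" page, try to isolate
    the whole naturally aligned order-sized block around it:

        unsigned long pfn = page_to_pfn(page);
        unsigned long start = pfn & ~((1UL << order) - 1);
        unsigned long end = start + (1UL << order);
        unsigned long cursor;

        for (cursor = start; cursor < end; cursor++) {
                struct page *p;

                if (!pfn_valid_within(cursor))
                        break;                  /* hole inside the block */
                p = pfn_to_page(cursor);
                if (__isolate_lru_page(p, mode) != 0)
                        break;                  /* neighbour not isolatable */
                list_move(&p->lru, dst);        /* take the group together */
        }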

    [akpm@osdl.org: ia64 pfn_to_nid fixes and loop cleanup]
    [bunk@stusta.de: static declarations for internal functions]
    [a.p.zijlstra@chello.nl: initial lumpy V2 implementation]
    Signed-off-by: Andy Whitcroft
    Acked-by: Peter Zijlstra
    Acked-by: Mel Gorman
    Acked-by: Mel Gorman
    Cc: Bob Picco
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andy Whitcroft
     
  • This patch adds a new parameter for sizing ZONE_MOVABLE called
    movablecore=. While kernelcore= is used to specify the minimum amount of
    memory that must be available for all allocation types, movablecore= is
    used to specify the minimum amount of memory that is used for migratable
    allocations. The amount of memory used for migratable allocations
    determines how large the huge page pool could be dynamically resized to at
    runtime for example.

    How movablecore is actually handled is that the total number of pages in
    the system is calculated and a value is set for kernelcore that is

    kernelcore == totalpages - movablecore

    Both kernelcore= and movablecore= can be safely specified at the same time.
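
    A sketch of that calculation, with illustrative variable names:

        if (required_movablecore) {
                unsigned long corepages;

                /* kernelcore is whatever remains after movablecore */
                corepages = totalpages - required_movablecore;
                /* when both are given, honour the larger kernelcore */
                required_kernelcore = max(required_kernelcore, corepages);
        }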

    Signed-off-by: Mel Gorman
    Acked-by: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • This patch adds the kernelcore= parameter for x86.

    Once all patches are applied, a new command-line parameter and a new
    sysctl exist. This patch adds the necessary documentation.

    From: Yasunori Goto

    When "kernelcore" boot option is specified, kernel can't boot up on ia64
    because of an infinite loop. In addition, the parsing code can be handled
    in an architecture-independent manner.

    This patch uses common code to handle the kernelcore= parameter. It is
    only available to architectures that support arch-independent zone-sizing
    (i.e. define CONFIG_ARCH_POPULATES_NODE_MAP). Other architectures will
    ignore the boot parameter.
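
    A sketch of the arch-independent parsing hook; the early_param mechanism is
    real, the body is illustrative:

        static int __init cmdline_parse_kernelcore(char *p)
        {
                unsigned long long coremem;

                if (!p)
                        return -EINVAL;
                coremem = memparse(p, &p);      /* accepts 512M, 1G, ... */
                required_kernelcore = coremem >> PAGE_SHIFT;
                return 0;
        }
        early_param("kernelcore", cmdline_parse_kernelcore);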

    [bunk@stusta.de: make cmdline_parse_kernelcore() static]
    Signed-off-by: Mel Gorman
    Signed-off-by: Yasunori Goto
    Acked-by: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • The following 8 patches against 2.6.20-mm2 create a zone called ZONE_MOVABLE
    that is only usable by allocations that specify both __GFP_HIGHMEM and
    __GFP_MOVABLE. This has the effect of keeping all non-movable pages within a
    single memory partition while allowing movable allocations to be satisfied
    from either partition. The patches may be applied with the list-based
    anti-fragmentation patches that groups pages together based on mobility.

    The size of the zone is determined by a kernelcore= parameter specified at
    boot-time. This specifies how much memory is usable by non-movable
    allocations and the remainder is used for ZONE_MOVABLE. Any range of pages
    within ZONE_MOVABLE can be released by migrating the pages or by reclaiming.

    When selecting a zone to take pages from for ZONE_MOVABLE, there are two
    things to consider. First, only memory from the highest populated zone is
    used for ZONE_MOVABLE. On the x86, this is probably going to be ZONE_HIGHMEM
    but it would be ZONE_DMA on ppc64 or possibly ZONE_DMA32 on x86_64. Second,
    the amount of memory usable by the kernel will be spread evenly throughout
    NUMA nodes where possible. If the nodes are not of equal size, the amount of
    memory usable by the kernel on some nodes may be greater than others.

    By default, the zone is not as useful for hugetlb allocations because they are
    pinned and non-migratable (currently at least). A sysctl is provided that
    allows huge pages to be allocated from that zone. This means that the huge
    page pool can be resized to the size of ZONE_MOVABLE during the lifetime of
    the system assuming that pages are not mlocked. Despite huge pages being
    non-movable, we do not introduce additional external fragmentation of note as
    huge pages are always the largest contiguous block we care about.

    Credit goes to Andy Whitcroft for catching a large variety of problems during
    review of the patches.

    This patch creates an additional zone, ZONE_MOVABLE. This zone is only usable
    by allocations which specify both __GFP_HIGHMEM and __GFP_MOVABLE. Hot-added
    memory continues to be placed in its existing destination as there is no
    mechanism to redirect it to a specific zone.
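
    A sketch of the eligibility rule (the helper name is hypothetical):

        static inline int allocation_is_movable(gfp_t gfp_flags)
        {
                const gfp_t mask = __GFP_HIGHMEM | __GFP_MOVABLE;

                /* only requests carrying both flags may fall into
                 * ZONE_MOVABLE */
                return (gfp_flags & mask) == mask;
        }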

    [y-goto@jp.fujitsu.com: Fix section mismatch of memory hotplug related code]
    [akpm@linux-foundation.org: various fixes]
    Signed-off-by: Mel Gorman
    Cc: Andy Whitcroft
    Signed-off-by: Yasunori Goto
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

17 Jul, 2007

6 commits

    Excluding smaller allocations from fault injection helps to find real
    possible bugs, because higher order allocations are likely to fail in
    practice while zero-order allocations are not.

    This patch adds a min-order parameter to fail_page_alloc. It specifies the
    minimum page allocation order at which failures are injected.
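
    A sketch of the filter in the injection path (the fault-injection
    should_fail() interface is real; the field layout is illustrative):

        static int should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
        {
                if (order < fail_page_alloc.min_order)
                        return 0;       /* below the threshold: never inject */
                return should_fail(&fail_page_alloc.attr, 1 << order);
        }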

    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • Signed-off-by: Dan Aloni
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Aloni
     
  • Currently zone_spanned_pages_in_node() and zone_absent_pages_in_node() are
    non-static for ARCH_POPULATES_NODE_MAP and static otherwise. However, only
    the non-static versions are __meminit annotated, despite only being called
    from __meminit functions in either case.

    zone_init_free_lists() is currently non-static and not __meminit annotated
    either, despite only being called once in the entire tree by
    init_currently_empty_zone(), which too is __meminit. So make it static and
    properly annotated.

    Signed-off-by: Paul Mundt
    Cc: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt
     
  • .. which modpost started warning about.

    Signed-off-by: Jan Beulich
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
     
  • alloc_large_system_hash() is called at boot time to allocate space for
    several large hash tables.

    Lately, the TCP hash table was changed and its bucket size is no longer a
    power of two.

    On most setups, alloc_large_system_hash() allocates one big page (order >
    0) with __get_free_pages(GFP_ATOMIC, order). This single high_order page
    has a power-of-two size, bigger than the needed size.

    We can free all pages that won't be used by the hash table.

    On a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.

    TCP established hash table entries: 32768 (order: 6, 393216 bytes)
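
    A sketch of returning the unused tail, where size is the table's byte size;
    split_page() makes the tail pages individually freeable (illustrative, the
    real code lives in alloc_large_system_hash()):

        table = (void *)__get_free_pages(GFP_ATOMIC, order);
        if (table) {
                unsigned long used = (unsigned long)table + PAGE_ALIGN(size);
                unsigned long end = (unsigned long)table + (PAGE_SIZE << order);

                split_page(virt_to_page(table), order);
                while (used < end) {    /* free the pages past the table */
                        free_page(used);
                        used += PAGE_SIZE;
                }
        }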

    Signed-off-by: Eric Dumazet
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Dumazet
     
  • Make zonelist creation policy selectable from sysctl/boot option v6.

    This patch makes NUMA's zonelist (of pgdat) order selectable.
    Available orders are Default (automatic) / Node-based / Zone-based.

    [Default Order]
    The kernel selects Node-based or Zone-based order automatically.

    [Node-based Order]
    This policy treats the locality of memory as the most important parameter.
    The zonelist order is created from each zone's locality. This means lower
    zones (e.g. ZONE_DMA) can be used before a higher zone (e.g. ZONE_NORMAL)
    is exhausted. IOW, ZONE_DMA will be in the middle of the zonelist.
    The current 2.6.21 kernel uses this.

    Pros.
    * A user can expect local memory as much as possible.
    Cons.
    * A lower zone will be exhausted before a higher zone. This may cause OOM_KILL.

    Maybe suitable if ZONE_DMA is relatively big and you never see OOM_KILL
    because of ZONE_DMA exhaustion and you need the best locality.

    (example)
    assume 2 node NUMA. node(0) has ZONE_DMA/ZONE_NORMAL, node(1) has ZONE_NORMAL.

    *node(0)'s memory allocation order:

    node(0)'s NORMAL -> node(0)'s DMA -> node(1)'s NORMAL.

    *node(1)'s memory allocation order:

    node(1)'s NORMAL -> node(0)'s NORMAL -> node(0)'s DMA.

    [Zone-based order]
    This policy treats the zone type as the most important parameter.
    The zonelist order is created by zone-type order. This means a lower zone
    will never be used before a higher zone is exhausted. IOW, ZONE_DMA will
    always be at the tail of the zonelist.

    Pros.
    * OOM_KILL (because of a lower zone) occurs only if all zones are exhausted.
    Cons.
    * memory locality may not be best.

    (example)
    assume 2 node NUMA. node(0) has ZONE_DMA/ZONE_NORMAL, node(1) has ZONE_NORMAL.

    *node(0)'s memory allocation order:

    node(0)'s NORMAL -> node(1)'s NORMAL -> node(0)'s DMA.

    *node(1)'s memory allocation order:

    node(1)'s NORMAL -> node(0)'s NORMAL -> node(0)'s DMA.

    The boot option "numa_zonelist_order=" and a proc/sysctl interface are supported.

    command:
    %echo N > /proc/sys/vm/numa_zonelist_order

    Will rebuild zonelist in Node-based order.

    command:
    %echo Z > /proc/sys/vm/numa_zonelist_order

    Will rebuild zonelist in Zone-based order.

    Thanks to Lee Schermerhorn, who gave me much help and code.

    [Lee.Schermerhorn@hp.com: add check_highest_zone to build_zonelists_in_zone_order]
    [akpm@linux-foundation.org: build fix]
    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: Andi Kleen
    Cc: "jesse.barnes@intel.com"
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

16 Jun, 2007

1 commit

  • When building with memory hotplug enabled and cpu hotplug disabled, we
    end up with the following section mismatch:

    WARNING: mm/built-in.o(.text+0x4e58): Section mismatch: reference to
    .init.text: (between 'free_area_init_node' and '__build_all_zonelists')

    This happens as a result of:

    free_area_init_node()
      -> free_area_init_core()
        -> zone_pcp_init()
          -> zone_batchsize()

    Acked-by: Yasunori Goto
    Signed-off-by: Linus Torvalds

    --

    mm/page_alloc.c | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    Paul Mundt
     

24 May, 2007

1 commit

  • mm/page_alloc.c:931: warning: 'setup_nr_node_ids' defined but not used

    This is now the only (!) compiler warning I get in my UML build :)

    Signed-off-by: Miklos Szeredi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Miklos Szeredi
     

19 May, 2007

1 commit

  • modpost had two cases hardcoded for mm/
    Shift over to __init_refok and kill the
    hardcoded function names in modpost.

    This has the drawback that the functions
    will always be kept no matter the configuration.
    With the previous code the functions were placed
    in the init section if the configuration allowed it.

    Signed-off-by: Sam Ravnborg

    Sam Ravnborg
     

11 May, 2007

1 commit

  • Since it is referenced by memmap_init_zone (which is __meminit) via the
    early_pfn_in_nid macro when CONFIG_NODES_SPAN_OTHER_NODES is set (which
    basically means PowerPC 64).

    This removes a section mismatch warning in those circumstances.

    Signed-off-by: Stephen Rothwell
    Cc: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stephen Rothwell
     

10 May, 2007

2 commits

  • Currently the slab allocators contain callbacks into the page allocator to
    perform the draining of pagesets on remote nodes. This requires SLUB to have
    a whole subsystem in order to be compatible with SLAB. Moving node draining
    out of the slab allocators avoids a section of code in SLUB.

    Move the node draining so that it is done when the vm statistics are updated.
    At that point we are already touching all the cachelines with the pagesets of
    a processor.

    Add an expire counter there. If we have to update per zone or global vm
    statistics then assume that the pageset will require subsequent draining.

    The expire counter will be decremented on each vm stats update pass until it
    reaches zero. Then we will drain one batch from the pageset. The draining
    will cause vm counter updates which will then cause another expiration until
    the pcp is empty. So we will drain a batch every 3 seconds.

    Note that remote node draining is a somewhat esoteric feature that is required
    on large NUMA systems because otherwise significant portions of system memory
    can become trapped in pcp queues. The number of pcps is determined by the
    number of processors and nodes in a system. A system with 4 processors and 2
    nodes has 8 pcps, which is okay. But a system with 1024 processors and 512
    nodes has 512k pcps with a high potential for large amounts of memory being
    caught in them.
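
    A sketch of the expire counter in the vmstat refresh path (field names
    illustrative):

        /* runs on every vm-stats update pass */
        if (p->expire) {
                p->expire--;
                if (!p->expire && p->pcp.count)
                        drain_zone_pages(zone, &p->pcp);  /* one batch */
        } else if (p->pcp.count) {
                p->expire = 3;          /* re-arm: the pcp holds pages again */
        }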

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Since nonboot CPUs are now disabled after tasks and devices have been
    frozen and the CPU hotplug infrastructure is used for this purpose, we need
    special CPU hotplug notifications that will help the CPU-hotplug-aware
    subsystems distinguish normal CPU hotplug events from CPU hotplug events
    related to a system-wide suspend or resume operation in progress. This
    patch introduces such notifications and causes them to be used during
    suspend and resume transitions. It also changes all of the
    CPU-hotplug-aware subsystems to take these notifications into consideration
    (for now they are handled in the same way as the corresponding "normal"
    ones).
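
    A sketch of the pattern in a CPU-hotplug-aware subsystem; the _FROZEN event
    names follow the description above, the callback body is illustrative:

        static int cpu_callback(struct notifier_block *nb,
                                unsigned long action, void *hcpu)
        {
                switch (action) {
                case CPU_UP_PREPARE:
                case CPU_UP_PREPARE_FROZEN:     /* suspend/resume variant */
                        /* handled like the normal event, for now */
                        break;
                case CPU_DEAD:
                case CPU_DEAD_FROZEN:
                        break;
                }
                return NOTIFY_OK;
        }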

    [oleg@tv-sign.ru: cleanups]
    Signed-off-by: Rafael J. Wysocki
    Cc: Gautham R Shenoy
    Cc: Pavel Machek
    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael J. Wysocki
     

09 May, 2007

2 commits

    This patch adds a white list to modpost.c for some functions and for
    ia64's .machvec section, to fix section mismatches.

    sparse_index_alloc() and zone_wait_table_init() call the bootmem allocator
    at boot time, and kmalloc/vmalloc at hotplug time. If memory hotplug
    is configured in, there are references to the bootmem allocator (init text)
    from them (normal text). This is the cause of the section mismatch.

    Bootmem is called by many functions and it must be
    used only at boot time. I think their __init annotations should be kept
    for the section mismatch check. So, I would like to register
    sparse_index_alloc() and zone_wait_table_init() in the white list.

    In addition, ia64's .machvec section is a function table for some
    platform-dependent code. It is a mixture of .init.text and normal text.
    These references to __init functions are valid too.

    Signed-off-by: Yasunori Goto
    Cc: Sam Ravnborg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yasunori Goto
     
    This is to fix many section mismatches in code related to memory hotplug.
    I checked compilation with memory hotplug on/off on ia64 and x86-64 boxes.

    Signed-off-by: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yasunori Goto
     

08 May, 2007

6 commits

  • Replace direct invocations of SetPageNosave(), SetPageNosaveFree() etc. with
    calls to inline functions that can be changed in subsequent patches without
    modifying the code calling them.
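
    A sketch of the indirection; the wrapper name is illustrative:

        static inline void swsusp_set_page_free(struct page *page)
        {
                SetPageNosaveFree(page);   /* body can change later without
                                            * touching the callers */
        }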

    Signed-off-by: Rafael J. Wysocki
    Acked-by: Pavel Machek
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael J. Wysocki
     
  • The patch adds PageTail(page) and PageHead(page) to check if a page is the
    head or the tail of a compound page. This is done by masking the two bits
    describing the state of a compound page and then comparing them. So one
    comparison and a branch instead of two bit checks and two branches.
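
    A sketch of the two-bit test the patch describes (the mask uses the bit
    names from the description; the values are illustrative):

        #define PG_head_tail_mask ((1L << PG_compound) | (1L << PG_reclaim))

        #define PageTail(page) \
                (((page)->flags & PG_head_tail_mask) == PG_head_tail_mask)
        #define PageHead(page) \
                (((page)->flags & PG_head_tail_mask) == (1L << PG_compound))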

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • If we add a new flag so that we can distinguish between the first page and the
    tail pages then we can avoid using page->private in the first page.
    page->private == page for the first page, so there is no real information in
    there.

    Freeing up page->private makes the use of compound pages more transparent.
    They become more usable, like real pages. Right now we have to be careful, e.g.
    if we are going beyond PAGE_SIZE allocations in the slab on i386 because we
    can then no longer use the private field. This is one of the issues that
    cause us not to support debugging for page size slabs in SLAB.

    Having page->private available for SLUB would allow more meta information in
    the page struct. I can probably avoid the 16 bit ints that I have in there
    right now.

    Also if page->private is available then a compound page may be equipped with
    buffer heads. This may free up the way for filesystems to support larger
    blocks than page size.

    We add PageTail as an alias of PageReclaim. Compound pages cannot currently
    be reclaimed. Because of the alias one needs to check PageCompound first.

    The RFC for this approach was discussed at
    http://marc.info/?t=117574302800001&r=1&w=2
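
    Because of the alias, a sketch of the required test order (the head lookup
    via first_page is illustrative):

        if (PageCompound(page) && PageTail(page))  /* PageCompound first! */
                page = page->first_page;           /* walk to the head page */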

    [nacc@us.ibm.com: fix hugetlbfs]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Nishanth Aravamudan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • The sysctl handler for min_free_kbytes calls setup_per_zone_pages_min() on
    read or write. This function iterates through every zone and calls
    spin_lock_irqsave() on the zone LRU lock. When reading min_free_kbytes,
    this is a total waste of time that disables interrupts on the local
    processor. It might even be noticable machines with large numbers of zones
    if a process started constantly reading min_free_kbytes.

    This patch calls setup_per_zone_pages_min() only on write. Tested on
    an x86 laptop and it did the right thing.
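
    A sketch of the handler after the change, assuming the 2.6.21-era proc
    handler signature:

        int min_free_kbytes_sysctl_handler(ctl_table *table, int write,
                struct file *file, void __user *buffer,
                size_t *length, loff_t *ppos)
        {
                proc_dointvec(table, write, file, buffer, length, ppos);
                if (write)      /* reads no longer take any zone lock */
                        setup_per_zone_pages_min();
                return 0;
        }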

    Signed-off-by: Mel Gorman
    Acked-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
    Generally we work under the assumption that the mem_map array is
    contiguous and valid out to a MAX_ORDER_NR_PAGES block of pages, i.e. that if we
    have validated any page within this MAX_ORDER_NR_PAGES block we need not check
    any other. This is not true when CONFIG_HOLES_IN_ZONE is set and we must
    check each and every reference we make from a pfn.

    Add a pfn_valid_within() helper which should be used when scanning pages
    within a MAX_ORDER_NR_PAGES block when we have already checked the validity
    of the block normally with pfn_valid(). This can then be optimised away when
    we do not have holes within a MAX_ORDER_NR_PAGES block of pages.
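
    A sketch of the helper, which compiles away when a MAX_ORDER block cannot
    contain holes:

        #ifdef CONFIG_HOLES_IN_ZONE
        #define pfn_valid_within(pfn) pfn_valid(pfn)
        #else
        #define pfn_valid_within(pfn) (1)   /* block already validated */
        #endif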

    Signed-off-by: Andy Whitcroft
    Acked-by: Mel Gorman
    Acked-by: Bob Picco
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andy Whitcroft
     
  • The nr_cpu_ids value is currently only calculated in smp_init. However, it
    may be needed before (SLUB needs it on kmem_cache_init!) and other kernel
    components may also want to allocate dynamically sized per cpu array before
    smp_init. So move the determination of possible cpus into sched_init()
    where we already loop over all possible cpus early in boot.

    Also initialize both nr_node_ids and nr_cpu_ids with the highest value they
    could take. If we have accidental users before these values are determined
    then the current value of 0 may cause too small per cpu and per node arrays
    to be allocated. If it is set to the maximum possible then we only waste
    some memory for early boot users.
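
    A sketch of those safe defaults (over-allocate for early users, trim once
    the real counts are known):

        int nr_cpu_ids __read_mostly = NR_CPUS;        /* trimmed in sched_init() */
        int nr_node_ids __read_mostly = MAX_NUMNODES;  /* trimmed at zone init */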

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

02 Mar, 2007

1 commit

    Rename PG_checked to PG_owner_priv_1 to reflect its availability as a
    private flag for use by the owner/allocator of the page. In the case of
    pagecache pages (which might be considered to be owned by the mm),
    filesystems may use the flag.

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

21 Feb, 2007

2 commits

  • The alien cache is a per cpu per node array allocated for every slab on the
    system. Currently we size this array for all nodes that the kernel does
    support. For IA64 this is 1024 nodes. So we allocate an array with 1024
    objects even if we only boot a system with 4 nodes.

    This patch uses "nr_node_ids" to determine the number of possible nodes
    supported by a hardware configuration and only allocates an alien cache
    sized for possible nodes.

    The initialization of nr_node_ids occurred too late relative to the bootstrap
    of the slab allocator and so I moved the setup_nr_node_ids() into
    free_area_init_nodes().
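
    A sketch of the sizing change (illustrative):

        /* size for the nodes this hardware can have, not MAX_NUMNODES */
        alien = kzalloc(sizeof(struct array_cache *) * nr_node_ids, GFP_KERNEL);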

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • highest_possible_node_id() is currently used to calculate the last possible
    node id so that the network subsystem can figure out how to size per node
    arrays.

    I think having the ability to determine the maximum amount of nodes in a
    system at runtime is useful but then we should name this entry
    correspondingly; it should return the number of node_ids, and the value
    needs to be set up only once at bootup. The node_possible_map does not
    change after bootup.

    This patch introduces nr_node_ids and replaces the use of
    highest_possible_node_id(). nr_node_ids is calculated on bootup when the
    page allocator's pagesets are initialized.
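
    A sketch of that bootup calculation; since node_possible_map does not
    change after bootup, this runs once:

        static void __init setup_nr_node_ids(void)
        {
                unsigned int node, highest = 0;

                for_each_node_mask(node, node_possible_map)
                        highest = node;         /* the mask iterates in order */
                nr_node_ids = highest + 1;
        }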

    [deweerdt@free.fr: fix oops]
    Signed-off-by: Christoph Lameter
    Cc: Neil Brown
    Cc: Trond Myklebust
    Signed-off-by: Frederik Deweerdt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

12 Feb, 2007

8 commits

  • Make ZONE_DMA optional in core code.

    - ifdef all code for ZONE_DMA and related definitions, following the example
    for ZONE_DMA32 and ZONE_HIGHMEM (see the sketch after this list).

    - Without ZONE_DMA, ZONE_HIGHMEM and ZONE_DMA32 we get to a ZONES_SHIFT of
    0.

    - Modify the VM statistics to work correctly without a DMA zone.

    - Modify slab to not create DMA slabs if there is no ZONE_DMA.
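
    A sketch of the conditional zone list, following the ZONE_DMA32/ZONE_HIGHMEM
    example (reduced for illustration):

        enum zone_type {
        #ifdef CONFIG_ZONE_DMA
                ZONE_DMA,               /* only if the config asks for it */
        #endif
                ZONE_NORMAL,
        #ifdef CONFIG_HIGHMEM
                ZONE_HIGHMEM,
        #endif
                MAX_NR_ZONES
        };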

    [akpm@osdl.org: cleanup]
    [jdike@addtoit.com: build fix]
    [apw@shadowen.org: Simplify calculation of the number of bits we need for ZONES_SHIFT]
    Signed-off-by: Christoph Lameter
    Cc: Andi Kleen
    Cc: "Luck, Tony"
    Cc: Kyle McMartin
    Cc: Matthew Wilcox
    Cc: James Bottomley
    Cc: Paul Mundt
    Signed-off-by: Andy Whitcroft
    Signed-off-by: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • This patchset follows up on the earlier work in Andrew's tree to reduce the
    number of zones. The patches allow going to a minimum of 2 zones. This one
    also allows making ZONE_DMA optional, and therefore the number of zones can
    be reduced to one.

    ZONE_DMA is usually used for ISA DMA devices. There are a number of reasons
    why we would not want to have ZONE_DMA

    1. Some arches do not need ZONE_DMA at all.

    2. With the advent of IOMMUs DMA zones are no longer needed.
    The necessity of DMA zones may drastically be reduced
    in the future. This patchset allows a compilation of
    a kernel without that overhead.

    3. Devices that require ISA DMA are getting rare these days. All
    my systems do not have any need for ISA DMA.

    4. The presence of an additional zone unnecessarily complicates
    VM operations because it must be scanned and balancing
    logic must operate on it.

    5. With only ZONE_NORMAL one can reach the situation where
    we have only one zone. This will allow the unrolling of many
    loops in the VM and allows the optimization of various
    code paths in the VM.

    6. Having only a single zone in a NUMA system results in a
    1-1 correspondence between nodes and zones. Various additional
    optimizations to critical VM paths become possible.

    Many systems today can operate just fine with a single zone. If you look at
    what is in ZONE_DMA then one usually sees that nothing uses it. The DMA slabs
    are empty (Some arches use ZONE_DMA instead of ZONE_NORMAL, then ZONE_NORMAL
    will be empty instead).

    On all of my systems (i386, x86_64, ia64) ZONE_DMA is completely empty. Why
    constantly look at an empty zone in /proc/zoneinfo and empty slab in
    /proc/slabinfo? Non-i386 systems also frequently have no need for ZONE_DMA
    and their zones stay empty.

    The patchset was tested on i386 (UP / SMP), x86_64 (UP, NUMA) and ia64 (NUMA).

    The RFC posted earlier (see
    http://marc.theaimsgroup.com/?l=linux-kernel&m=115231723513008&w=2) had lots
    of #ifdefs in them. An effort has been made to minimize the number of #ifdefs
    and make this as compact as possible. The job was made much easier by the
    ongoing efforts of others to extract common arch specific functionality.

    I have been running this for a while now on my desktop and finally Linux is
    using all my available RAM instead of leaving the 16MB in ZONE_DMA untouched:

    christoph@pentium940:~$ cat /proc/zoneinfo
    Node 0, zone   Normal
      pages free     4435
            min      1448
            low      1810
            high     2172
            active   241786
            inactive 210170
            scanned  0 (a: 0 i: 0)
            spanned  524224
            present  524224
        nr_anon_pages 61680
        nr_mapped    14271
        nr_file_pages 390264
        nr_slab_reclaimable 27564
        nr_slab_unreclaimable 1793
        nr_page_table_pages 449
        nr_dirty     39
        nr_writeback 0
        nr_unstable  0
        nr_bounce    0
      cpu: 0 pcp: 0
                count: 156
                high:  186
                batch: 31
      cpu: 0 pcp: 1
                count: 9
                high:  62
                batch: 15
      vm stats threshold: 20
      cpu: 1 pcp: 0
                count: 177
                high:  186
                batch: 31
      cpu: 1 pcp: 1
                count: 12
                high:  62
                batch: 15
      vm stats threshold: 20
      all_unreclaimable: 0
      prev_priority:     12
      temp_priority:     12
      start_pfn:         0

    This patch:

    In two places in the VM we use ZONE_DMA to refer to the first zone. If
    ZONE_DMA is optional then other zones may be first. So simply replace
    ZONE_DMA with zone 0.

    This also fixes ZONETABLE_PGSHIFT. If we have only a single zone then
    ZONES_PGSHIFT may become 0 because there is no need anymore to encode the zone
    number related to a pgdat. However, we still need a zonetable to index all
    the zones for each node if this is a NUMA system. Therefore define
    ZONETABLE_SHIFT unconditionally as the offset of the ZONE field in page flags.

    [apw@shadowen.org: fix mismerge]
    Acked-by: Christoph Hellwig
    Signed-off-by: Christoph Lameter
    Cc: Andi Kleen
    Cc: "Luck, Tony"
    Cc: Kyle McMartin
    Cc: Matthew Wilcox
    Cc: James Bottomley
    Cc: Paul Mundt
    Signed-off-by: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Values are available via ZVC sums.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Function is unnecessary now. We can use the summing features of the ZVCs to
    get the values we need.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • nr_free_pages is now a simple access to a global variable. Make it a macro
    instead of a function.

    nr_free_pages() now requires vmstat.h to be included. There is one
    occurrence in power management where we need to add the include. Directly
    refer to global_page_state() there to clarify why the #include was added.
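
    A sketch of the macro form, with NR_FREE_PAGES as the ZVC item:

        #define nr_free_pages() global_page_state(NR_FREE_PAGES)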

    [akpm@osdl.org: arm build fix]
    [akpm@osdl.org: sparc64 build fix]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
    This again simplifies some of the VM counter calculations through the use
    of the ZVC consolidated counters.

    [michal.k.k.piotrowski@gmail.com: build fix]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Michal Piotrowski
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • The determination of the dirty ratio to determine writeback behavior is
    currently based on the number of total pages on the system.

    However, not all pages in the system may be dirtied. Thus the ratio is always
    too low and can never reach 100%. The ratio may be particularly skewed if
    large hugepage allocations, slab allocations or device driver buffers make
    large sections of memory not available anymore. In that case we may get into
    a situation in which e.g. the background writeback ratio of 40% cannot be
    reached anymore, which leads to undesired writeback behavior.

    This patchset fixes that issue by determining the ratio based on the actual
    pages that may potentially be dirty. These are the pages on the active and
    the inactive list plus free pages.

    The problem with those counts has so far been that it is expensive to
    calculate these because counts from multiple nodes and multiple zones will
    have to be summed up. This patchset makes these counters ZVC counters. This
    means that a current sum per zone, per node and for the whole system is always
    available via global variables and not expensive anymore to calculate.
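
    A sketch of the "potentially dirtyable" base built from the new ZVC
    globals (highmem corrections omitted; names follow the description):

        static unsigned long determine_dirtyable_memory(void)
        {
                /* each item is a maintained global: no zone loops needed */
                return global_page_state(NR_FREE_PAGES) +
                       global_page_state(NR_ACTIVE) +
                       global_page_state(NR_INACTIVE);
        }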

    The patchset results in some other good side effects:

    - Removal of the various functions that sum up free, active and inactive
    page counts

    - Cleanup of the functions that display information via the proc filesystem.

    This patch:

    The use of a ZVC for nr_inactive and nr_active allows a simplification of some
    counter operations. More ZVC functionality is used for sums etc in the
    following patches.

    [akpm@osdl.org: UP build fix]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • find_min_pfn_for_node() and find_min_pfn_with_active_regions() sort
    early_node_map[] on every call. This is an excessive amount of sorting
    that can be avoided. This patch always searches the whole early_node_map[]
    in find_min_pfn_for_node() instead of returning the first value found. The
    map is then only sorted once when required. Successfully boot tested on a
    number of machines.
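
    A sketch of the sort-free search (names follow the zone-sizing code; the
    body is illustrative):

        static unsigned long __init find_min_pfn_for_node(unsigned long nid)
        {
                unsigned long min_pfn = ULONG_MAX;
                int i;

                /* scan the whole map instead of assuming it is sorted */
                for (i = 0; i < nr_nodemap_entries; i++)
                        if (early_node_map[i].nid == nid)
                                min_pfn = min(min_pfn,
                                              early_node_map[i].start_pfn);
                return min_pfn;
        }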

    [akpm@osdl.org: cleanup]
    Signed-off-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

10 Feb, 2007

1 commit