12 May, 2007

1 commit


06 Feb, 2007

1 commit


08 Dec, 2006

1 commit

  • NUMA node ids are passed as either int or unsigned int almost
    exclusively, yet page_to_nid and zone_to_nid both return unsigned long.
    This is a throwback to when page_to_nid was a #define and was thus
    exposing the real type of the page flags field.

    In addition to fixing up the definitions of page_to_nid and zone_to_nid,
    I audited the users of these functions, identifying the following
    incorrect uses (a small demo follows this entry):

    1) mm/page_alloc.c show_node() -- printk dumping the node id,
    2) include/asm-ia64/pgalloc.h pgtable_quicklist_free() -- comparison
    against numa_node_id() which returns an int from cpu_to_node(), and
    3) mm/mempolicy.c check_pte_range -- used as an index in node_isset,
    which uses bit_set, which in generic code takes an int.

    Signed-off-by: Andy Whitcroft
    Cc: Christoph Lameter
    Cc: "Luck, Tony"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andy Whitcroft
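
    The first incorrect use above is a classic format-specifier mismatch:
    on an LP64 machine, passing an unsigned long to a %d specifier is
    undefined behaviour. Below is a minimal, runnable sketch of the
    before/after accessor shapes; the shift value and names are invented
    for illustration and are not the kernel's real page-flags layout.

        #include <stdio.h>

        /* Hypothetical stand-in for the node field in page->flags
         * (assumes a 64-bit long, as on LP64 systems). */
        #define NODES_SHIFT_DEMO 56

        /* Old shape: leaks the unsigned long type of the flags field. */
        static unsigned long page_to_nid_old(unsigned long flags)
        {
                return flags >> NODES_SHIFT_DEMO;
        }

        /* Fixed shape: node ids are plain ints everywhere else. */
        static int page_to_nid_new(unsigned long flags)
        {
                return (int)(flags >> NODES_SHIFT_DEMO);
        }

        int main(void)
        {
                unsigned long flags = 3UL << NODES_SHIFT_DEMO;

                /* The old accessor needs an explicit cast at every call
                 * site; passing it straight to %d would be undefined on
                 * LP64, since %d expects int, not unsigned long. */
                printf("node %d\n", (int)page_to_nid_old(flags));
                /* The fixed accessor already has the expected type. */
                printf("node %d\n", page_to_nid_new(flags));
                return 0;
        }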
     

26 Apr, 2006

1 commit


12 Nov, 2005

1 commit

  • This patch introduces 4-level page tables to ia64. I have run some
    benchmarks and found nothing interesting: performance has consistently
    fallen within the noise range.

    It also introduces a config option (defaulting to 3 levels). The
    config option prevents combining 4-level page tables with a 64k base
    page size. A sketch of the resulting address split follows this entry.

    Signed-off-by: Robin Holt
    Signed-off-by: Tony Luck

    Robin Holt
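
    To make the level arithmetic concrete, here is a small, runnable
    decomposition of a virtual address into the four table indices. The
    constants are illustrative (16 KiB pages, 8-byte entries, so 11 index
    bits per level) rather than ia64's exact layout: three levels cover
    14 + 3*11 = 47 address bits, and the fourth level adds another 11.

        #include <stdio.h>
        #include <stdint.h>

        /* Illustrative constants: 16 KiB base page, 8-byte entries. */
        #define PAGE_SHIFT 14
        #define PTRS_BITS  (PAGE_SHIFT - 3)   /* 11 index bits per level */
        #define PTE_SHIFT  PAGE_SHIFT
        #define PMD_SHIFT  (PTE_SHIFT + PTRS_BITS)
        #define PUD_SHIFT  (PMD_SHIFT + PTRS_BITS)
        #define PGD_SHIFT  (PUD_SHIFT + PTRS_BITS)
        #define IDX_MASK   ((1ULL << PTRS_BITS) - 1)

        int main(void)
        {
                uint64_t va = 0x123456789000ULL;

                /* Each level consumes PTRS_BITS bits of the address,
                 * working down from the pgd to the page offset. */
                printf("pgd=%llu pud=%llu pmd=%llu pte=%llu off=%llu\n",
                       (unsigned long long)((va >> PGD_SHIFT) & IDX_MASK),
                       (unsigned long long)((va >> PUD_SHIFT) & IDX_MASK),
                       (unsigned long long)((va >> PMD_SHIFT) & IDX_MASK),
                       (unsigned long long)((va >> PTE_SHIFT) & IDX_MASK),
                       (unsigned long long)(va & ((1ULL << PAGE_SHIFT) - 1)));
                return 0;
        }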
     

27 Apr, 2005

1 commit


26 Apr, 2005

2 commits

  • Oops. Should have caught this before I checked it in.

    Signed-off-by: Tony Luck

    Tony Luck
     
  • This patch introduces the use of quicklists for the pgd, pmd, and pte
    levels by combining the alloc and free functions into a common set of
    routines. This greatly simplifies the reading of this header file.

    This patch is simple but necessary for large NUMA configurations.
    It simply ensures that only pages from the local node are added to a
    CPU's quicklist. This prevents pages from being trapped on a remote
    node's quicklist by starting a process, touching a large number of
    pages to fill pmd and pte entries, migrating to another node, and
    then unmapping or exiting. Under those conditions the pages get
    trapped, and if the machine has more than 100 nodes of the same size,
    the calculated pgtable high-water mark will be larger than any single
    node, so page-table cache flushing will never occur.

    I ran lmbench lat_proc fork and lat_proc exec on a zx1 with and without
    this patch and did not notice any change.

    On an sn2 machine there was a slight improvement, possibly due to pages
    from other nodes having been trapped on the test node before the run
    started. I did not investigate further.

    This patch shrinks the quicklist based upon free memory on the node
    instead of the high/low water marks. I have written it to enable
    preemption periodically and to recalculate the amount to shrink every
    time we have freed enough pages that the quicklist size should have
    grown. I rescan the node's zones each pass because other processes
    may be draining node memory at the same time as we are adding. A
    sketch of the local-node rule follows this entry.

    Signed-off-by: Robin Holt
    Signed-off-by: Tony Luck

    Robin Holt
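
    Below is a simplified, self-contained sketch of the local-node-only
    rule described above. All types and helpers here (demo_page,
    quicklist, this_node_id) are invented for illustration; the real
    routines operate on struct page in include/asm-ia64/pgalloc.h.

        #include <stdio.h>
        #include <stdlib.h>

        /* Invented, simplified types: one quicklist per CPU. */
        struct demo_page { int nid; struct demo_page *next; };
        struct quicklist { struct demo_page *head; long count; };

        /* Stand-in for numa_node_id(): pretend we run on node 0. */
        static int this_node_id(void) { return 0; }

        /* Free a page-table page: cache it on the quicklist only if it
         * belongs to the local node; otherwise return it straight to
         * the allocator. This is the rule that stops remote-node pages
         * from being trapped on another node's quicklist. */
        static void quicklist_free(struct quicklist *ql, struct demo_page *pg)
        {
                if (pg->nid == this_node_id()) {
                        pg->next = ql->head;
                        ql->head = pg;
                        ql->count++;
                } else {
                        free(pg);   /* stand-in for free_page() */
                }
        }

        int main(void)
        {
                struct quicklist ql = { NULL, 0 };
                struct demo_page *local = malloc(sizeof(*local));
                struct demo_page *remote = malloc(sizeof(*remote));

                local->nid = 0;
                remote->nid = 1;
                quicklist_free(&ql, local);   /* cached: local node */
                quicklist_free(&ql, remote);  /* freed: remote node */
                printf("quicklist holds %ld page(s)\n", ql.count);
                return 0;
        }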
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds