30 Sep, 2006

1 commit

  • Change the list of memory nodes allowed to tasks in the top (root) cpuset
    to dynamically track which memory nodes are online, using a call to a
    cpuset hook from the memory hotplug code. Make this top 'mems' file
    read-only.

    On systems that have cpusets configured in their kernel, but that aren't
    actively using cpusets (for some distros, this covers the majority of
    systems), all tasks end up in the top cpuset.

    If that system does support memory hotplug, then these tasks cannot make
    use of memory nodes that are added after system boot, because the memory
    nodes are not allowed in the top cpuset. This is a surprising regression
    over earlier kernels that didn't have cpusets enabled.

    One key motivation for this change is to remain consistent with the
    behaviour for the top_cpuset's 'cpus', which is also read-only, and which
    automatically tracks the cpu_online_map.

    This change also has the minor benefit that it fixes a long-standing,
    little-noticed bug in cpusets. The cpuset performance tweak to
    short-circuit the cpuset_zone_allowed() check on systems with just a single
    cpuset (see 'number_of_cpusets' in linux/cpuset.h) meant that simply
    changing the 'mems' of the top_cpuset had no effect, even though the change
    (the write system call) appeared to succeed. With this change, a write to
    the 'mems' file fails with -EACCES, and the 'mems' file stubbornly refuses
    to be changed via user space writes. Thus no one should be misled into
    thinking they've changed the top_cpuset's 'mems' when in effect they
    haven't.

    In order to keep the behaviour of cpusets consistent between systems
    actively making use of them and systems not using them, this patch changes
    the behaviour of the 'mems' file in the top (root) cpuset, making it
    read-only and making it automatically track the value of node_online_map.
    Thus tasks in the top cpuset are automatically allowed to use hot-plugged
    memory nodes as they come online.
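
    As a rough sketch of the mechanism (not the kernel code itself; the hook
    name, types and masks below are simplified stand-ins), the hotplug hook
    amounts to the top cpuset mirroring the online node map whenever it
    changes:

        #include <stdio.h>

        /* Simplified model: node masks as plain bit masks. */
        static unsigned long node_online_map;          /* maintained by hotplug */
        static unsigned long top_cpuset_mems_allowed;  /* what the top cpuset allows */

        /* Hypothetical hook, called from the memory hotplug path whenever a
         * node comes online or goes offline: the top cpuset simply mirrors
         * the online map, so its tasks may always use every online node. */
        static void top_cpuset_track_online_nodes(void)
        {
                top_cpuset_mems_allowed = node_online_map;
        }

        int main(void)
        {
                node_online_map = 0x1;                 /* boot: node 0 online */
                top_cpuset_track_online_nodes();

                node_online_map |= 0x2;                /* hot-add node 1 */
                top_cpuset_track_online_nodes();

                printf("top cpuset mems: 0x%lx\n", top_cpuset_mems_allowed);
                return 0;
        }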

    [akpm@osdl.org: build fix]
    [bunk@stusta.de: build fix]
    Signed-off-by: Paul Jackson
    Signed-off-by: Adrian Bunk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     

24 Mar, 2006

1 commit

  • This patch provides the implementation and cpuset interface for an alternative
    memory allocation policy that can be applied to certain kinds of memory
    allocations, such as the page cache (file system buffers) and some slab caches
    (such as inode caches).

    The policy is called "memory spreading." If enabled, it spreads out these
    kinds of memory allocations over all the nodes allowed to a task, instead of
    preferring to place them on the node where the task is executing.

    All other kinds of allocations, including anonymous pages for a task's stack
    and data regions, are not affected by this policy choice, and continue to be
    allocated preferring the node local to execution, as modified by the NUMA
    mempolicy.

    There are two boolean flag files per cpuset that control where the kernel
    allocates pages for the file system buffers and related in kernel data
    structures. They are called 'memory_spread_page' and 'memory_spread_slab'.

    If the per-cpuset boolean flag file 'memory_spread_page' is set, then the
    kernel will spread the file system buffers (page cache) evenly over all the
    nodes that the faulting task is allowed to use, instead of preferring to put
    those pages on the node where the task is running.

    If the per-cpuset boolean flag file 'memory_spread_slab' is set, then the
    kernel will spread some file system related slab caches, such as those for
    inodes and dentries, evenly over all the nodes that the faulting task is
    allowed to use, instead of preferring to put those pages on the node where
    the task is running.

    The implementation is simple. Setting the cpuset flag 'memory_spread_page'
    or 'memory_spread_slab' turns on the per-process flag PF_SPREAD_PAGE or
    PF_SPREAD_SLAB, respectively, for each task that is in the cpuset or
    subsequently joins that cpuset. In subsequent patches, the page allocation
    calls for the affected page cache and slab caches are modified to perform an
    inline check for these flags, and if set, a call to a new routine
    cpuset_mem_spread_node() returns the node to prefer for the allocation.

    The cpuset_mem_spread_node() routine is also simple. It uses the value of a
    per-task rotor, cpuset_mem_spread_rotor, to select the next node in the
    current task's mems_allowed to prefer for the allocation.
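
    A minimal user-space sketch of that rotor idea (with plain bit masks
    standing in for nodemask_t, and made-up type names, so it is illustrative
    only):

        #include <stdio.h>

        #define MAX_NODES 8

        /* Simplified model of the per-task state involved. */
        struct task_model {
                unsigned long mems_allowed;   /* bit i set => node i allowed */
                int mem_spread_rotor;         /* last node handed out */
        };

        /* Return the next allowed node after the rotor, wrapping around, so
         * successive allocations walk round-robin over the allowed nodes. */
        static int mem_spread_node(struct task_model *t)
        {
                int n = t->mem_spread_rotor;
                int i;

                for (i = 0; i < MAX_NODES; i++) {
                        n = (n + 1) % MAX_NODES;
                        if (t->mems_allowed & (1UL << n))
                                break;
                }
                t->mem_spread_rotor = n;
                return n;
        }

        int main(void)
        {
                struct task_model t = { .mems_allowed = 0xb, .mem_spread_rotor = 0 };
                int i;

                for (i = 0; i < 6; i++)       /* nodes 0, 1 and 3 allowed */
                        printf("%d ", mem_spread_node(&t));
                printf("\n");                 /* prints: 1 3 0 1 3 0 */
                return 0;
        }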

    This policy can provide substantial improvements for jobs that need to place
    thread-local data on the corresponding node, but that also need to access
    large file system data sets that must be spread across the several nodes in
    the job's cpuset in order to fit. Without this patch, especially for jobs
    that might have one thread reading in the data set, the memory allocation
    across the nodes in the job's cpuset can become very uneven.

    A couple of Copyright year ranges are updated as well. And a couple of email
    addresses that can be found in the MAINTAINERS file are removed.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     

15 Jan, 2006

1 commit

  • The problem, reported in:

    http://bugzilla.kernel.org/show_bug.cgi?id=5859

    and by various other email messages and lkml posts is that the cpuset hook
    in the oom (out of memory) code can try to take a cpuset semaphore while
    holding the tasklist_lock (a spinlock).

    One must not sleep while holding a spinlock.

    The fix seems easy enough - move the cpuset semaphore region outside the
    tasklist_lock region.

    This required a few lines of mechanism to implement. The oom code where
    the locking needs to be changed does not have access to the cpuset locks,
    which are internal to kernel/cpuset.c only. So I provided a couple more
    cpuset interface routines, available to the rest of the kernel, which
    simply take and drop the lock needed here (the cpuset callback_sem).
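
    The shape of such an interface is roughly the following (a user-space
    sketch with a pthread mutex standing in for the kernel semaphore; the
    helper names are illustrative):

        #include <pthread.h>

        /* The lock stays private to this file; other code only sees the
         * thin take/drop helpers, so e.g. an oom path can bracket a region
         * with them without knowing about the lock itself. */
        static pthread_mutex_t callback_lock = PTHREAD_MUTEX_INITIALIZER;

        void cpuset_lock(void)
        {
                pthread_mutex_lock(&callback_lock);
        }

        void cpuset_unlock(void)
        {
                pthread_mutex_unlock(&callback_lock);
        }

        int main(void)
        {
                cpuset_lock();
                /* ... region that must not race with cpuset changes ... */
                cpuset_unlock();
                return 0;
        }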

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     

09 Jan, 2006

6 commits

  • Remove a couple more lines of code from the cpuset hooks in the page
    allocation code path.

    There was a check for a NULL cpuset pointer in the routine
    cpuset_update_task_memory_state() that was only needed during system boot,
    in the window after the memory subsystem was initialized but before the
    cpuset subsystem was, to catch a NULL task->cpuset pointer.

    Add a cpuset_init_early() routine, called just before mem_init() in
    init/main.c, that sets up just enough of the init task's cpuset structure to
    render cpuset_update_task_memory_state() calls harmless.
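
    Conceptually (a simplified sketch, not the actual init code; the names and
    fields are made up), the early init just points the boot task at a
    statically allocated cpuset so later dereferences are safe:

        /* Illustrative model of "early" cpuset init. */
        struct cpuset_model {
                int mems_generation;
        };

        struct task_model {
                struct cpuset_model *cpuset;
        };

        static struct cpuset_model top_cpuset_model;
        static struct task_model init_task_model;

        void cpuset_init_early_model(void)
        {
                /* Enough state that a later memory-state update finds a
                 * valid cpuset pointer instead of NULL. */
                init_task_model.cpuset = &top_cpuset_model;
                init_task_model.cpuset->mems_generation = 1;
        }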

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     
  • Easy little optimization hack to avoid actually having to call
    cpuset_zone_allowed() and check mems_allowed, in the main page allocation
    routine, __alloc_pages(). This saves several CPU cycles per page allocation
    on systems not using cpusets.

    A counter is updated each time a cpuset is created or removed, and whenever
    there is only one cpuset in the system, it must be the root cpuset, which
    contains all CPUs and all Memory Nodes. In that case, when the counter is
    one, all allocations are allowed.
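
    In other words, the hook reduces to a counter test in front of the real
    work (a user-space sketch with stand-in names, not the kernel code):

        #include <stdio.h>

        /* Bumped when a cpuset is created, dropped when one is removed. */
        static int number_of_cpusets = 1;

        /* Stand-in for the expensive path that walks the cpuset hierarchy
         * and consults mems_allowed. */
        static int full_zone_allowed_check(int node)
        {
                (void)node;
                return 1;
        }

        static int cpuset_zone_allowed_model(int node)
        {
                /* Only the root cpuset exists, which contains every node,
                 * so the allocation is always allowed. */
                if (number_of_cpusets <= 1)
                        return 1;
                return full_zone_allowed_check(node);
        }

        int main(void)
        {
                printf("%d\n", cpuset_zone_allowed_model(0));
                return 0;
        }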

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     
  • Provide a cpuset_mems_allowed() method, which the sys_migrate_pages() code
    needed, to obtain the mems_allowed vector of a cpuset, and replace the
    workaround in sys_migrate_pages() with a call to this new method.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     
  • The important code paths through alloc_pages_current() and alloc_page_vma(),
    by which most kernel page allocations go, both called
    cpuset_update_current_mems_allowed(), which in turn called refresh_mems().
    Both of these latter two routines took the task lock, got the task's cpuset
    pointer, and checked for an out-of-date cpuset->mems_generation.

    That was a silly duplication of code and a waste of CPU cycles on an
    important code path.

    Consolidated those two routines into a single routine, called
    cpuset_update_task_memory_state(), since it updates more than just
    mems_allowed.

    Changed all callers of either routine to call the new consolidated routine.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     
  • Provide a simple per-cpuset metric of memory pressure, tracking the -rate-
    that the tasks in a cpuset call try_to_free_pages(), the synchronous
    (direct) memory reclaim code.

    This enables batch managers, monitoring jobs running in dedicated cpusets,
    to efficiently detect what level of memory pressure a job is causing.

    This is useful both on tightly managed systems running a wide mix of
    submitted jobs, which may choose to terminate or reprioritize jobs that are
    trying to use more memory than allowed on the nodes assigned them, and with
    tightly coupled, long running, massively parallel scientific computing jobs
    that will dramatically fail to meet required performance goals if they
    start to use more memory than allowed to them.

    This patch just provides a very economical way for the batch manager to
    monitor a cpuset for signs of memory pressure. It's up to the batch
    manager or other user code to decide what to do about it and take action.

    ==> Unless this feature is enabled by writing "1" to the special file
    /dev/cpuset/memory_pressure_enabled, the hook in the rebalance
    code of __alloc_pages() for this metric reduces to simply noticing
    that the cpuset_memory_pressure_enabled flag is zero. So only
    systems that enable this feature will compute the metric.

    Why a per-cpuset, running average:

    Because this meter is per-cpuset, rather than per-task or mm, the
    system load imposed by a batch scheduler monitoring this metric is
    sharply reduced on large systems, because a scan of the tasklist can be
    avoided on each set of queries.

    Because this meter is a running average, instead of an accumulating
    counter, a batch scheduler can detect memory pressure with a single
    read, instead of having to read and accumulate results for a period of
    time.

    Because this meter is per-cpuset rather than per-task or mm, the
    batch scheduler can obtain the key information, memory pressure in a
    cpuset, with a single read, rather than having to query and accumulate
    results over all the (dynamically changing) set of tasks in the cpuset.

    A simple per-cpuset digital filter (requiring a spinlock and 3 words of data
    per cpuset) is kept, and is updated by any task attached to that cpuset
    whenever it enters the synchronous (direct) page reclaim code.

    A per-cpuset file provides an integer number representing the recent
    (half-life of 10 seconds) rate of direct page reclaims caused by the tasks
    in the cpuset, in units of reclaims attempted per second, times 1000.
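
    A back-of-the-envelope model of such a meter (an exponentially decaying
    event counter, written as ordinary user-space C; this is not the kernel's
    filter, just an illustration of the half-life arithmetic):

        #include <math.h>
        #include <stdio.h>

        #define HALF_LIFE 10.0          /* seconds */
        #define LN2       0.693147

        /* Each reclaim event adds 1; the sum decays with a 10 second
         * half-life. For a steady rate r the counter settles near
         * r * HALF_LIFE / LN2, so the read path rescales it back into
         * "reclaims per second, times 1000". */
        struct pressure_meter {
                double value;
                double last;            /* time of last update, seconds */
        };

        static void meter_decay(struct pressure_meter *m, double now)
        {
                m->value *= pow(0.5, (now - m->last) / HALF_LIFE);
                m->last = now;
        }

        static void meter_event(struct pressure_meter *m, double now)
        {
                meter_decay(m, now);    /* called on direct reclaim */
                m->value += 1.0;
        }

        static int meter_read(struct pressure_meter *m, double now)
        {
                meter_decay(m, now);    /* what reading the file reports */
                return (int)(m->value * LN2 / HALF_LIFE * 1000.0);
        }

        int main(void)
        {
                struct pressure_meter m = { 0.0, 0.0 };
                double t;

                for (t = 0.0; t < 60.0; t += 0.5)     /* 2 reclaims/second */
                        meter_event(&m, t);
                printf("%d\n", meter_read(&m, 60.0)); /* roughly 2000 */
                return 0;
        }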

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     
  • Finish converting mm/mempolicy.c from bitmaps to nodemasks. The previous
    conversion had left one routine using bitmaps, since it involved a
    corresponding change to kernel/cpuset.c.

    Fix that interface by replacing it with a simple macro that calls
    nodes_subset(), or, if !CONFIG_CPUSET, returns (1).
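
    The wrapper is roughly of this shape (a sketch with plain bit masks and a
    made-up macro name, since the real one lives in the cpuset headers):

        #include <stdio.h>

        #define CONFIG_CPUSETS_MODEL 1

        /* "Is every node in a also in b?" */
        #define nodes_subset_model(a, b)  (((a) & ~(b)) == 0)

        #ifdef CONFIG_CPUSETS_MODEL
        #define nodes_allowed_model(nodes, mems)  nodes_subset_model((nodes), (mems))
        #else
        #define nodes_allowed_model(nodes, mems)  (1)
        #endif

        int main(void)
        {
                unsigned long want = 0x3, allowed = 0x7;

                printf("%d\n", nodes_allowed_model(want, allowed));  /* 1 */
                return 0;
        }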

    Signed-off-by: Paul Jackson
    Cc: Christoph Lameter
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     

09 Oct, 2005

1 commit

  • - added typedef unsigned int __nocast gfp_t;

    - replaced __nocast uses for gfp flags with gfp_t - it gives exactly
    the same warnings as far as sparse is concerned, doesn't change the
    generated code (from gcc's point of view we replaced unsigned int with
    a typedef), and documents what's going on far better.
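
    By way of illustration (the function name here is made up), a conversion
    looks like replacing

        void *grab_buffer(size_t len, unsigned int __nocast gfp_mask);

    with

        void *grab_buffer(size_t len, gfp_t gfp_mask);

    which sparse checks identically, while the signature now says what the
    argument is.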

    Signed-off-by: Al Viro
    Signed-off-by: Linus Torvalds

    Al Viro
     

08 Sep, 2005

2 commits

  • Now the real motivation for this cpuset mem_exclusive patch series becomes
    clear.

    This patch keeps a task in or under one mem_exclusive cpuset from provoking an
    oom kill of a task under a non-overlapping mem_exclusive cpuset. Since only
    interrupt and GFP_ATOMIC allocations are allowed to escape mem_exclusive
    containment, there is little to gain from oom killing a task under a
    non-overlapping mem_exclusive cpuset, as almost all kernel and user memory
    allocation must come from disjoint memory nodes.

    This patch enables configuring a system so that a runaway job under one
    mem_exclusive cpuset cannot cause the killing of a job in another such cpuset
    that might be using very high compute and memory resources for a prolonged
    time.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     
  • This patch makes use of the previously underutilized cpuset flag
    'mem_exclusive' to provide what amounts to another layer of memory placement
    resolution. With this patch, there are now the following four layers of
    memory placement available:

    1) The whole system (interrupt and GFP_ATOMIC allocations can use this),
    2) The nearest enclosing mem_exclusive cpuset (GFP_KERNEL allocations can use),
    3) The current task's cpuset (GFP_USER allocations constrained to here), and
    4) Specific node placement, using mbind and set_mempolicy.

    These nest: each layer is a subset (the same as, or contained within) of
    the previous one.

    Layer (2) above is new, with this patch. The call used to check whether a
    zone (its node, actually) is in a cpuset (in its mems_allowed, actually) is
    extended to take a gfp_mask argument, and its logic is extended, in the case
    that __GFP_HARDWALL is not set in the flag bits, to look up the cpuset
    hierarchy for the nearest enclosing mem_exclusive cpuset, to determine if
    placement is allowed. The definition of GFP_USER, which used to be identical
    to GFP_KERNEL, is changed to also set the __GFP_HARDWALL bit, in the previous
    cpuset_gfp_hardwall_flag patch.
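
    The decision logic can be sketched in isolation as follows (plain C with
    simplified, made-up types; not the kernel implementation):

        #include <stdio.h>

        #define GFP_HARDWALL_MODEL  0x1   /* stand-in for __GFP_HARDWALL */

        /* Simplified cpuset: allowed nodes as a bit mask, plus parent link. */
        struct cs_model {
                unsigned long mems_allowed;
                int mem_exclusive;
                struct cs_model *parent;
        };

        /* Walk up from cs to the nearest mem_exclusive ancestor (or cs
         * itself); the root acts as the final fallback. */
        static struct cs_model *nearest_exclusive(struct cs_model *cs)
        {
                while (cs->parent && !cs->mem_exclusive)
                        cs = cs->parent;
                return cs;
        }

        /* Hardwall allocations must fit the task's own cpuset; others may
         * use the wider nearest-exclusive ancestor. */
        static int node_allowed(struct cs_model *task_cs, int node,
                                unsigned int gfp_flags)
        {
                struct cs_model *cs = task_cs;

                if (!(gfp_flags & GFP_HARDWALL_MODEL))
                        cs = nearest_exclusive(task_cs);
                return (cs->mems_allowed >> node) & 1;
        }

        int main(void)
        {
                struct cs_model root = { 0xf, 1, NULL };   /* nodes 0-3 */
                struct cs_model job  = { 0x3, 1, &root };  /* nodes 0-1 */
                struct cs_model task = { 0x1, 0, &job };   /* node 0    */

                /* GFP_USER-like (hardwall): node 1 refused. */
                printf("%d\n", node_allowed(&task, 1, GFP_HARDWALL_MODEL));
                /* GFP_KERNEL-like: node 1 allowed via the exclusive parent. */
                printf("%d\n", node_allowed(&task, 1, 0));
                return 0;
        }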

    GFP_ATOMIC and GFP_KERNEL allocations will stay within the current task's
    cpuset, so long as any node therein is not too tight on memory, but will
    escape to the larger layer if need be.

    The intended use is to allow something like a batch manager to handle several
    jobs, each job in its own cpuset, but using common kernel memory for caches
    and such. Swapper and oom_kill activity is also constrained to Layer (2). A
    task in or below one mem_exclusive cpuset should not cause swapping on nodes
    in another non-overlapping mem_exclusive cpuset, nor provoke oom_killing of a
    task in another such cpuset. Heavy use of kernel memory for i/o caching and
    such by one job should not impact the memory available to jobs in other
    non-overlapping mem_exclusive cpusets.

    This patch enables providing hardwall, inescapable cpusets for memory
    allocations of each job, while sharing kernel memory allocations between
    several jobs, in an enclosing mem_exclusive cpuset.

    Like Dinakar's patch earlier to enable administering sched domains using the
    cpu_exclusive flag, this patch also provides a useful meaning to a cpuset flag
    that had previously done nothing much useful other than restrict what cpuset
    configurations were allowed.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     

17 Apr, 2005

2 commits

  • gcc-4 warns with:

        include/linux/cpuset.h:21: warning: type qualifiers ignored on
        function return type

    cpuset_cpus_allowed is declared with a const return type:

        extern const cpumask_t cpuset_cpus_allowed(const struct task_struct *p);

    The first const would have to be __attribute__((const)) to mean anything,
    but the gcc manual explains that:

    "Note that a function that has pointer arguments and examines the data
    pointed to must not be declared const. Likewise, a function that calls a
    non-const function usually must not be const. It does not make sense for
    a const function to return void."

    The following patch removes const from the function declaration.
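
    That is, after the patch the declaration presumably reads simply:

        extern cpumask_t cpuset_cpus_allowed(const struct task_struct *p);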

    Signed-off-by: Benoit Boissinot
    Acked-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Benoit Boissinot
     
  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds