11 Sep, 2005

14 commits

  • for kernel/acct.c:
    - fix typos
    - add kerneldoc for non-static functions

    Signed-off-by: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     
  • Don't pull tasks from a group if that would cause the group's total load
    to drop below its total cpu_power (i.e. cause the group to start going
    idle). A sketch of the check follows this entry.

    Signed-off-by: Suresh Siddha
    Signed-off-by: Nick Piggin
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Siddha, Suresh B
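
    A minimal sketch of the check described above; the helper name and the
    load units are made up for the example and do not come from the
    scheduler code:

    #include <stdio.h>

    /* Refuse to pull task_load out of a group if doing so would drop the
     * group's remaining load below its cpu_power, i.e. leave it partly
     * idle. */
    static int safe_to_pull(unsigned long group_load,
                            unsigned long task_load,
                            unsigned long group_cpu_power)
    {
            return group_load >= task_load + group_cpu_power;
    }

    int main(void)
    {
            /* a group at 1.5x capacity: pulling 0.25x is fine, pulling
             * 0.75x would push it below capacity */
            printf("%d\n", safe_to_pull(1536, 256, 1024));  /* 1 */
            printf("%d\n", safe_to_pull(1536, 768, 1024));  /* 0 */
            return 0;
    }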
     
  • Jack Steiner brought up this issue at my OLS talk.

    Take a scenario where two tasks are pinned to two HT threads in a
    physical package. Idle packages in the system will keep kicking
    migration_thread on the busy package without any success.

    We will run into similar scenarios in the presence of CMP/NUMA.

    Signed-off-by: Suresh Siddha
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Siddha, Suresh B
     
  • In sys_sched_yield(), we cache current->array in the "array" variable, thus
    there's no need to dereference "current" again later.

    Signed-Off-By: Renaud Lienhart
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Renaud Lienhart
     
  • If an idle sibling of an HT queue encounters a busy sibling, then perform
    higher-level load balancing of the non-idle variety.

    Performance of multiprocessor HT systems with low numbers of tasks
    (generally < number of virtual CPUs) can be significantly worse than the
    exact same workloads when running in non-HT mode. The reason is largely
    due to poor scheduling behaviour.

    This patch improves the situation, making the performance gap far less
    significant on one problematic test case (tbench).

    Signed-off-by: Nick Piggin
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • During periodic load balancing, don't hold this runqueue's lock while
    scanning remote runqueues, which can take a non-trivial amount of time,
    especially on very large systems.

    Holding the runqueue lock will only help to stabilise ->nr_running, however
    this doesn't do much to help because tasks being woken will simply get held
    up on the runqueue lock, so ->nr_running would not provide a really
    accurate picture of runqueue load in that case anyway.

    What's more, ->nr_running (and possibly the cpu_load averages) of remote
    runqueues won't be stable anyway, so load balancing is always an inexact
    operation.

    Signed-off-by: Nick Piggin
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Similarly to the earlier change in load_balance(), only lock the runqueue
    in load_balance_newidle() if the busiest queue found has nr_running > 1.
    This will reduce the frequency of expensive remote runqueue lock
    acquisitions in the schedule() path on some workloads (see the sketch
    after this entry).

    Signed-off-by: Nick Piggin
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
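
    The two load-balancing entries above share one pattern: scan candidate
    runqueues without taking their locks, and lock a remote queue only when
    it plausibly has a task to pull, re-checking nr_running under the lock.
    A user-space sketch of that pattern, with pthread spinlocks standing in
    for runqueue locks and the data structures reduced to the essentials
    (locks are assumed to be initialised elsewhere with pthread_spin_init()):

    #include <pthread.h>

    struct runqueue {
            pthread_spinlock_t lock;
            unsigned int nr_running;   /* only approximate when read unlocked */
    };

    /* Lock-free scan: remote nr_running values are inexact anyway, so
     * don't pay for remote locks while merely looking. */
    static struct runqueue *find_busiest(struct runqueue *rq, int n,
                                         struct runqueue *this_rq)
    {
            struct runqueue *busiest = NULL;
            int i;

            for (i = 0; i < n; i++) {
                    if (&rq[i] == this_rq)
                            continue;
                    if (!busiest || rq[i].nr_running > busiest->nr_running)
                            busiest = &rq[i];
            }
            return busiest;
    }

    static void load_balance_sketch(struct runqueue *rq, int n,
                                    struct runqueue *this_rq)
    {
            struct runqueue *busiest = find_busiest(rq, n, this_rq);

            if (!busiest || busiest->nr_running <= 1)
                    return;                 /* nothing worth locking for */

            pthread_spin_lock(&busiest->lock);
            if (busiest->nr_running > 1) {
                    /* pull a task here; the re-check is needed because the
                     * scan above ran without the lock */
            }
            pthread_spin_unlock(&busiest->lock);
    }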
     
  • William Weston reported unusually high scheduling latencies on his x86 HT
    box, on the -RT kernel. I managed to reproduce it on my HT box and the
    latency tracer shows the incident in action:

                    _------=> CPU#
                   / _-----=> irqs-off
                  | / _----=> need-resched
                  || / _---=> hardirq/softirq
                  ||| / _--=> preempt-depth
                  |||| /
                  |||||     delay
        cmd   pid ||||| time  |  caller
           \  /   |||||   \   |  /
    du-2803 3Dnh2 0us : __trace_start_sched_wakeup (try_to_wake_up)
    ..............................................................
    ... we are running on CPU#3, PID 2778 gets woken to CPU#1: ...
    ..............................................................
    du-2803 3Dnh2 0us : __trace_start_sched_wakeup <-2778> (73 1)
    du-2803 3Dnh2 0us : _raw_spin_unlock (try_to_wake_up)
    ................................................
    ... still on CPU#3, we send an IPI to CPU#1: ...
    ................................................
    du-2803 3Dnh1 0us : resched_task (try_to_wake_up)
    du-2803 3Dnh1 1us : smp_send_reschedule (try_to_wake_up)
    du-2803 3Dnh1 1us : send_IPI_mask_bitmask (smp_send_reschedule)
    du-2803 3Dnh1 2us : _raw_spin_unlock_irqrestore (try_to_wake_up)
    ...............................................
    ... 1 usec later, the IPI arrives on CPU#1: ...
    ...............................................
    <idle>-0 1Dnh. 2us : smp_reschedule_interrupt (c0100c5a 0 0)

    So far so good, this is the normal wakeup/preemption mechanism. But here
    comes the scheduler anomaly on CPU#1:

    <idle>-0 1Dnh. 2us : preempt_schedule_irq (need_resched)
    <idle>-0 1Dnh. 2us : preempt_schedule_irq (need_resched)
    <idle>-0 1Dnh. 3us : __schedule (preempt_schedule_irq)
    <idle>-0 1Dnh. 3us : profile_hit (__schedule)
    <idle>-0 1Dnh1 3us : sched_clock (__schedule)
    <idle>-0 1Dnh1 4us : _raw_spin_lock_irq (__schedule)
    <idle>-0 1Dnh1 4us : _raw_spin_lock_irqsave (__schedule)
    <idle>-0 1Dnh2 5us : _raw_spin_unlock (__schedule)
    <idle>-0 1Dnh1 5us : preempt_schedule (__schedule)
    <idle>-0 1Dnh1 6us : _raw_spin_lock (__schedule)
    <idle>-0 1Dnh2 6us : find_next_bit (__schedule)
    <idle>-0 1Dnh2 6us : _raw_spin_lock (__schedule)
    <idle>-0 1Dnh3 7us : find_next_bit (__schedule)
    <idle>-0 1Dnh3 7us : find_next_bit (__schedule)
    <idle>-0 1Dnh3 8us : _raw_spin_unlock (__schedule)
    <idle>-0 1Dnh2 8us : preempt_schedule (__schedule)
    <idle>-0 1Dnh2 8us : find_next_bit (__schedule)
    <idle>-0 1Dnh2 9us : trace_stop_sched_switched (__schedule)
    <idle>-0 1Dnh2 9us : _raw_spin_lock (trace_stop_sched_switched)
    <idle>-0 1Dnh3 10us : trace_stop_sched_switched <-2778> (73 8c)
    <idle>-0 1Dnh3 10us : _raw_spin_unlock (trace_stop_sched_switched)
    <idle>-0 1Dnh1 10us : _raw_spin_unlock (__schedule)
    <idle>-0 1Dnh. 11us : local_irq_enable_noresched (preempt_schedule_irq)
    <idle>-0 1Dnh. 11us < (0)

    we didn't pick up pid 2778! It only gets scheduled much later:

    -2778 1Dnh2 412us : __switch_to (__schedule)
    -2778 1Dnh2 413us : __schedule <-0> (8c 73)
    -2778 1Dnh2 413us : _raw_spin_unlock (__schedule)
    -2778 1Dnh1 413us : trace_stop_sched_switched (__schedule)
    -2778 1Dnh1 414us : _raw_spin_lock (trace_stop_sched_switched)
    -2778 1Dnh2 414us : trace_stop_sched_switched <-2778> (73 1)
    -2778 1Dnh2 414us : _raw_spin_unlock (trace_stop_sched_switched)
    -2778 1Dnh1 415us : trace_stop_sched_switched (__schedule)

    the reason for this anomaly is the following code in dependent_sleeper():

    /*
     * If a user task with lower static priority than the
     * running task on the SMT sibling is trying to schedule,
     * delay it till there is proportionately less timeslice
     * left of the sibling task to prevent a lower priority
     * task from using an unfair proportion of the
     * physical cpu's resources. -ck
     */
    [...]
    if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) /
        100) > task_timeslice(p)))
            ret = 1;

    Note that in contrast to the comment above, we don't actually do the check
    based on static priority, we do the check based on timeslices. But
    timeslices go up and down, and even highprio tasks can randomly have very
    low timeslices (just before their next refill) and can thus be judged as
    'lowprio' by the above piece of code. This condition is clearly buggy.
    The correct test is to check for static_prio _and_ to check for the
    preemption priority. Even on different static priority levels, a
    higher-prio interactive task should not be delayed due to a
    higher-static-prio CPU hog.

    There is a symmetric bug in the 'kick SMT sibling' code of this function as
    well, which can be solved in a similar way.

    The patch below (against the current scheduler queue in -mm) fixes both
    bugs. I have build and boot-tested this on x86 SMT, and nice +20 tasks
    still get properly throttled - so the dependent-sleeper logic is still in
    action.

    btw., these bugs pessimised the SMT scheduler because the 'delay wakeup'
    property was applied too liberally, so this fix is likely a throughput
    improvement as well.

    I separated out a smt_slice() function to make the code easier to read
    (sketched after this entry).

    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
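
    Going by the quoted expression, the separated-out helper presumably
    looks something like the following; this is a sketch with simplified
    parameter types, not necessarily the exact code that was merged:

    /* Fraction of a sibling task's timeslice that the SMT-nice logic
     * compares against, per the expression quoted above. */
    static inline unsigned long smt_slice(unsigned int time_slice,
                                          unsigned int per_cpu_gain)
    {
            return time_slice * (100 - per_cpu_gain) / 100;
    }

    With the fix, the delay decision also takes static_prio into account
    instead of relying on the momentary timeslice values alone.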
     
  • This patch implements a task state bit (TASK_NONINTERACTIVE), which can be
    used by blocking points to mark the task's wait as "non-interactive". This
    does not mean the task will be considered a CPU-hog - the wait will simply
    not have an effect on the waiting task's priority - positive or negative
    alike. Right now only pipe_wait() will make use of it, because it's a
    common source of not-so-interactive waits (kernel compilation jobs, etc.).
    A sketch of the intended usage follows this entry.

    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
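
    A sketch of how a blocking point might use such a bit. The flag values
    and the stub helpers below are invented so the fragment stands alone;
    the real pipe_wait() and task-state constants differ in detail:

    #include <stdio.h>

    #define TASK_RUNNING         0x0000   /* illustrative values */
    #define TASK_INTERRUPTIBLE   0x0001
    #define TASK_NONINTERACTIVE  0x0040

    static int current_state;                        /* stand-in for current->state */
    static void set_current_state(int s) { current_state = s; }
    static void schedule(void) { /* would sleep here */ }

    static void pipe_wait_sketch(void)
    {
            /* sleep, but tell the scheduler not to count this wait as
             * "interactive" when recomputing dynamic priority */
            set_current_state(TASK_INTERRUPTIBLE | TASK_NONINTERACTIVE);
            schedule();
            set_current_state(TASK_RUNNING);
    }

    int main(void)
    {
            pipe_wait_sketch();
            printf("state after wait: %d\n", current_state);
            return 0;
    }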
     
  • whitespace cleanups.

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Add relevant checks to find_idlest_group() and find_idlest_cpu() so that
    they return only groups that contain allowed CPUs, and only allowed CPUs,
    respectively.

    Signed-off-by: M.Baris Demiray
    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    M.Baris Demiray
     
  • The hyperthread-aware nice handling currently puts to sleep any
    non-real-time task when a real-time task is running on its sibling cpu.
    This can lead to prolonged starvation by having the non-real-time task
    pegged to the cpu, with load balancing not pulling that task away.

    Currently we force lower-priority hyperthread tasks to run a percentage
    of the time difference based on timeslice differences, which is
    meaningless when comparing real-time tasks to SCHED_NORMAL tasks. We can
    allow non-real-time tasks to run with real-time tasks on the sibling for
    up to per_cpu_gain% of the time if we use jiffies as a counter (see the
    sketch after this entry).

    Cleanups and micro-optimisations to the relevant code section should make
    it more understandable as well.

    Signed-off-by: Con Kolivas
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Con Kolivas
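
    A sketch of the jiffies-based throttling idea; the 100-jiffy window and
    the function name are illustrative, and the merged code differs:

    /* Let a SCHED_NORMAL task run beside a real-time sibling for roughly
     * per_cpu_gain% of each 100-jiffy window, instead of sleeping it for
     * as long as the RT task runs. */
    static int may_run_beside_rt(unsigned long jiffies_now,
                                 unsigned int per_cpu_gain)
    {
            return (jiffies_now % 100) < per_cpu_gain;
    }

    With a per_cpu_gain of, say, 25, the lower-priority sibling still gets
    about a quarter of the wall-clock time rather than starving.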
     
  • The cpusets-formalize-intermediate-gfp_kernel-containment patch
    has a deadlock problem.

    This patch was part of a set of four patches to make more
    extensive use of the cpuset 'mem_exclusive' attribute to
    manage kernel GFP_KERNEL memory allocations and to constrain
    the out-of-memory (oom) killer.

    A task that is changing cpusets in particular ways on a system
    when it is very short of free memory could double trip over
    the global cpuset_sem semaphore (get the lock and then deadlock
    trying to get it again).

    The second attempt to get cpuset_sem would be in the routine
    cpuset_zone_allowed(). This was discovered by code inspection.
    I cannot reproduce the problem except with an artificially
    hacked kernel and a specialized stress test.

    In real life you cannot hit this unless you are manipulating
    cpusets, and are very unlikely to hit it unless you are rapidly
    modifying cpusets on a memory-tight system. Even then it would
    be a rare occurrence.

    If you did hit it, the task double tripping over cpuset_sem
    would deadlock in the kernel, and any other task also trying
    to manipulate cpusets would deadlock there too, on cpuset_sem.
    Your batch manager would be wedged solid (if it was cpuset
    savvy), but classic Unix shells and utilities would work well
    enough to reboot the system.

    The unusual condition that led to this bug is that unlike most
    semaphores, cpuset_sem _can_ be acquired while in the page
    allocation code, when __alloc_pages() calls cpuset_zone_allowed.
    So it is easy to mistakenly perform the following sequence:
    1) task makes system call to alter a cpuset
    2) take cpuset_sem
    3) try to allocate memory
    4) memory allocator, via cpuset_zone_allowed, tries to take cpuset_sem
    5) deadlock

    The reason that this is not a serious bug for most users
    is that almost all calls to allocate memory don't require
    taking cpuset_sem. Only some code paths off the beaten
    track require taking cpuset_sem -- which is good. Taking
    a global semaphore on the main code path for allocating
    memory would not scale well.

    This patch fixes this deadlock by wrapping the up() and down() calls on
    cpuset_sem in kernel/cpuset.c with code that tracks the nesting depth of
    the current task on that semaphore, and only does the real down() if the
    task doesn't hold the lock already, and only does the real up() if the
    nesting depth (number of unmatched downs) is exactly one. A sketch of
    the idea follows this entry.

    The previously required practice of calling refresh_mems() any time
    that cpuset_sem was acquired and the code executed while holding it
    might try to allocate memory is no longer necessary. Two refresh_mems()
    calls were removed thanks to this. This is a good change, as failing to
    get all the necessary refresh_mems() calls placed was a primary source
    of bugs in this cpuset code. The only remaining call to refresh_mems()
    is made while doing a memory allocation, if certain task memory
    placement data needs to be updated from its cpuset, due to the cpuset
    having been changed behind the task's back.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
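
    A user-space sketch of the nesting-aware wrapper described above, with
    a pthread mutex standing in for cpuset_sem and a thread-local counter
    standing in for the per-task nesting depth (the names in the actual
    patch may differ):

    #include <pthread.h>

    static pthread_mutex_t cpuset_sem = PTHREAD_MUTEX_INITIALIZER;
    static __thread int cpuset_sem_depth;    /* per-"task" nesting depth */

    static void cpuset_down(void)
    {
            if (cpuset_sem_depth == 0)       /* real down() only when not held */
                    pthread_mutex_lock(&cpuset_sem);
            cpuset_sem_depth++;
    }

    static void cpuset_up(void)
    {
            if (cpuset_sem_depth == 1)       /* real up() only at depth one */
                    pthread_mutex_unlock(&cpuset_sem);
            cpuset_sem_depth--;
    }

    A code path that re-enters the allocator's cpuset check while already
    holding the lock then nests instead of deadlocking.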
     
  • This patch (written by me and also containing many suggestions of Arjan van
    de Ven) does a major cleanup of the spinlock code. It does the following
    things:

    - consolidates and enhances the spinlock/rwlock debugging code

    - simplifies the asm/spinlock.h files

    - encapsulates the raw spinlock type and moves generic spinlock
    features (such as ->break_lock) into the generic code.

    - cleans up the spinlock code hierarchy to get rid of the spaghetti.

    Most notably there's now only a single variant of the debugging code,
    located in lib/spinlock_debug.c. (previously we had one SMP debugging
    variant per architecture, plus a separate generic one for UP builds)

    Also, I've enhanced the rwlock debugging facility: it will now track
    write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too.
    All locks have lockup detection now, which will work for both soft and hard
    spin/rwlock lockups.

    The arch-level include files now only contain the minimally necessary
    subset of the spinlock code - all the rest that can be generalized now
    lives in the generic headers:

    include/asm-i386/spinlock_types.h | 16
    include/asm-x86_64/spinlock_types.h | 16

    I have also split up the various spinlock variants into separate files,
    making it easier to see which does what. The new layout is:

    SMP                          |  UP
    -----------------------------|----------------------------------
    asm/spinlock_types_smp.h     |  linux/spinlock_types_up.h
    linux/spinlock_types.h       |  linux/spinlock_types.h
    asm/spinlock_smp.h           |  linux/spinlock_up.h
    linux/spinlock_api_smp.h     |  linux/spinlock_api_up.h
    linux/spinlock.h             |  linux/spinlock.h

    /*
    * here's the role of the various spinlock/rwlock related include files:
    *
    * on SMP builds:
    *
    * asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
    * initializers
    *
    * linux/spinlock_types.h:
    * defines the generic type and initializers
    *
    * asm/spinlock.h: contains the __raw_spin_*()/etc. lowlevel
    * implementations, mostly inline assembly code
    *
    * (also included on UP-debug builds:)
    *
    * linux/spinlock_api_smp.h:
    * contains the prototypes for the _spin_*() APIs.
    *
    * linux/spinlock.h: builds the final spin_*() APIs.
    *
    * on UP builds:
    *
    * linux/spinlock_types_up.h:
    * contains the generic, simplified UP spinlock type.
    * (which is an empty structure on non-debug builds)
    *
    * linux/spinlock_types.h:
    * defines the generic type and initializers
    *
    * linux/spinlock_up.h:
    * contains the __raw_spin_*()/etc. version of UP
    * builds. (which are NOPs on non-debug, non-preempt
    * builds)
    *
    * (included on UP-non-debug builds:)
    *
    * linux/spinlock_api_up.h:
    * builds the _spin_*() APIs.
    *
    * linux/spinlock.h: builds the final spin_*() APIs.
    */

    All SMP and UP architectures are converted by this patch.

    arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
    cross-compilers. m32r, mips, sh and sparc have not been tested yet, but
    should be mostly fine.

    From: Grant Grundler

    Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
    Builds 32-bit SMP kernel (not booted or tested). I did not try to build
    non-SMP kernels. That should be trivial to fix up later if necessary.

    I converted bit ops atomic_hash lock to raw_spinlock_t. Doing so avoids
    some ugly nesting of linux/*.h and asm/*.h files. Those particular locks
    are well tested and contained entirely inside arch specific code. I do NOT
    expect any new issues to arise with them.

    If someone does ever need to use debug/metrics with them, then they will
    need to unravel this hairball between spinlocks, atomic ops, and bit ops
    that exist only because parisc has exactly one atomic instruction: LDCW
    (load and clear word).

    From: "Luck, Tony"

    ia64 fix

    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Grant Grundler
    Cc: Matthew Wilcox
    Signed-off-by: Hirokazu Takata
    Signed-off-by: Mikael Pettersson
    Signed-off-by: Benoit Boissinot
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

10 Sep, 2005

7 commits

  • Patch to eliminate struct files_struct.file_lock spinlock on the reader side
    and use rcu refcounting rcuref_xxx api for the f_count refcounter. The
    updates to the fdtable are done by allocating a new fdtable structure and
    setting files->fdt to point to the new structure. The fdtable structure is
    protected by RCU thereby allowing lock-free lookup. For fd arrays/sets that
    are vmalloced, we use keventd to free them since RCU callbacks can't sleep. A
    global list of fdtables to be freed is not scalable, so we use a per-cpu list.
    If keventd is already handling the current cpu's work, we use a timer to defer
    queueing of that work.

    Since the last publication, this patch has been re-written to avoid using
    explicit memory barriers and to use the rcu_assign_pointer() and
    rcu_dereference() primitives instead. This required that the fd
    information be kept in a separate structure (fdtable) and updated
    atomically (see the sketch after this entry).

    Signed-off-by: Dipankar Sarma
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dipankar Sarma
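
    A kernel-style sketch of the publish/lookup pattern described above.
    The structures are heavily simplified and renamed, and this is not the
    patch's code; only rcu_read_lock()/rcu_read_unlock(), rcu_dereference()
    and rcu_assign_pointer() are the real primitives:

    #include <linux/rcupdate.h>

    struct file;

    struct fdtable_sketch {
            unsigned int max_fds;
            struct file **fd;
    };

    struct files_sketch {
            struct fdtable_sketch *fdt;
    };

    /* Reader: look up an fd without taking files->file_lock. */
    static struct file *lookup_fd_sketch(struct files_sketch *files,
                                         unsigned int fd)
    {
            struct fdtable_sketch *fdt;
            struct file *file = NULL;

            rcu_read_lock();
            fdt = rcu_dereference(files->fdt);   /* pairs with the publish below */
            if (fd < fdt->max_fds)
                    file = fdt->fd[fd];
            rcu_read_unlock();
            return file;
    }

    /* Writer: build a larger table off-line, then publish it with a single
     * pointer update. The old table is freed later via an RCU callback (or
     * keventd for vmalloc()ed arrays), never directly here. */
    static void install_fdtable_sketch(struct files_sketch *files,
                                       struct fdtable_sketch *new_fdt)
    {
            rcu_assign_pointer(files->fdt, new_fdt);
    }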
     
  • In order for the RCU to work, the file table array, sets and their sizes must
    be updated atomically. Instead of ensuring this through too many memory
    barriers, we put the arrays and their sizes in a separate structure. This
    patch takes the first step of putting the file table elements in a separate
    structure, fdtable, that is embedded within files_struct. It also changes
    all the users to refer to the file table using the files_fdtable() macro.
    Subsequent application of RCU becomes easier after this.

    Signed-off-by: Dipankar Sarma
    Signed-Off-By: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dipankar Sarma
     
  • Adds a set of primitives to do reference counting for objects that are looked
    up without locks using RCU.

    Signed-off-by: Ravikiran Thirumalai
    Signed-off-by: Dipankar Sarma
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dipankar Sarma
     
  • This patch fixes a minor problem that the CPUSETS have when files in the
    cpuset filesystem are read after being lseek()-ed beyond the EOF.

    Signed-off-by: KUROSAWA Takahiro
    Acked-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KUROSAWA Takahiro
     
  • For an architecture like ia64, the switch stack structure is fairly large
    (currently 528 bytes). For context-switch-intensive applications, we
    found that a significant amount of cache misses occurs in the switch_to()
    function. The following patch adds a hook in the schedule() function to
    prefetch the switch stack structure as soon as the 'next' task is
    determined. This allows maximum overlap of the prefetched cache lines for
    that structure (see the sketch after this entry).

    Signed-off-by: Ken Chen
    Cc: Ingo Molnar
    Cc: "Luck, Tony"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chen, Kenneth W
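
    The idea can be shown in plain C with GCC's builtin prefetch. The
    structure below merely mimics a ~528-byte switch stack on a 64-bit
    build, and the real patch adds an arch-level hook rather than
    open-coding a loop like this:

    #include <stddef.h>

    struct switch_stack_sketch {
            unsigned long regs[66];      /* 66 * 8 = 528 bytes, as above */
    };

    /* Start pulling the next task's switch stack into the cache as soon as
     * 'next' is known, so the loads overlap with the rest of schedule(). */
    static void prefetch_switch_stack(const struct switch_stack_sketch *ss)
    {
            const char *p = (const char *)ss;
            size_t off;

            for (off = 0; off < sizeof(*ss); off += 64)     /* per cache line */
                    __builtin_prefetch(p + off, 0, 1);      /* read, low locality */
    }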
     
  • Race is as follows. Process A forks process B, both being part of the same
    session. Then, A calls disassociate_ctty while B forks C:

    A                           B
    ====                        ====
                                fork()
                                  copy_signal()
    disassociate_ctty()           ....
                                  attach_pid(p, PIDTYPE_SID, p->signal->session);

    Now, C can have current->signal->tty pointing to a freed tty structure, as
    it hasn't yet been added to the session group (to have its controlling tty
    cleared on the disassociate_ctty() call).

    This has shown up as an oops but could be even more serious. I haven't
    tried to create a test case, but a customer has verified that the patch
    below resolves the issue, which was occurring quite frequently. I'll try
    and post the test case if I can.

    The patch simply checks for a NULL tty *after* it has been attached to the
    proper session group and clears it as necessary. Alternatively, we could
    simply do the tty assignment after the process is added to the proper
    session group.

    Signed-off-by: Jason Baron
    Cc: Roland McGrath
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jason Baron
     
  • An oversight. We don't want to carry the IO scheduler's "we hold exclusive fs
    resources" hint over to the child across fork().

    Acked-by: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Giancarlo Formicuccia
     

08 Sep, 2005

19 commits

  • Len Brown
     
  • Linus Torvalds
     
  • This patch fixes a race condition wherein the system used to hang or
    sometimes crash within minutes when kprobes are inserted on an ISR
    routine and a task routine.

    The fix has been stress-tested on i386, ia64, ppc64 and x86_64. To
    reproduce the problem, insert kprobes on the schedule() and do_IRQ()
    functions and you should see a hang or a system crash.

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Ananth N Mavinakayanahalli
    Acked-by: Prasanna S Panchamukhi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Keshavamurthy Anil S
     
  • There are possible race conditions if probes are placed on routines
    within the kprobes files and on routines used by kprobes. For example, if
    you put a probe on the get_kprobe() routine, the system can hang while
    inserting probes on any routine such as do_fork(): while inserting probes
    on do_fork(), the register_kprobes() routine grabs the kprobes spin lock
    and executes the get_kprobe() routine, and to handle the probe of
    get_kprobe(), kprobes_handler() gets executed and tries to grab the
    kprobes spin lock, and spins forever. This patch avoids such race
    conditions by preventing probes on routines within the kprobes file and
    on routines used by kprobes.

    I have modified the patches as per Andi Kleen's suggestion to move
    kprobes routines and other routines used by kprobes to a separate
    section, .kprobes.text (see the sketch after this entry).

    Also moved the page fault and exception handlers, and the general
    protection fault handler, to the .kprobes.text section.

    These patches have been tested on i386, x86_64 and ppc64 architectures, also
    compiled on ia64 and sparc64 architectures.

    Signed-off-by: Prasanna S Panchamukhi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Prasanna S Panchamukhi
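
    A sketch of the section trick: functions that kprobes itself relies on
    are placed into a dedicated .kprobes.text section, and registration can
    refuse any probe whose address falls inside that section. The attribute
    macro and the linker-symbol check below are written from the description
    above, not copied from the patch:

    /* Tag a function so the linker places it in the protected section. */
    #define __kprobes_sketch  __attribute__((__section__(".kprobes.text")))

    void __kprobes_sketch helper_used_by_kprobes(void)
    {
            /* anything placed here is off-limits to probes */
    }

    /* Registration-time check: the linker script exports the section
     * bounds, and addresses inside them are rejected. */
    extern char __kprobes_text_start[], __kprobes_text_end[];

    static int addr_in_kprobes_section(unsigned long addr)
    {
            return addr >= (unsigned long)__kprobes_text_start &&
                   addr <  (unsigned long)__kprobes_text_end;
    }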
     
  • This patch introduces a kzalloc wrapper (sketched after this entry) and
    converts kernel/ to use it. It saves a little program text.

    Signed-off-by: Pekka Enberg
    Signed-off-by: Adrian Bunk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pekka J Enberg
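
    The wrapper itself is tiny; a sketch, with a user-space stand-in for
    kmalloc() so the snippet compiles on its own:

    #include <stdlib.h>
    #include <string.h>

    typedef unsigned int gfp_t;                      /* stand-in type */

    static void *kmalloc(size_t size, gfp_t flags)   /* stand-in allocator */
    {
            (void)flags;
            return malloc(size);
    }

    /* kzalloc(): allocate and zero in one call, replacing the repeated
     * kmalloc() + memset(..., 0, ...) pattern in callers. */
    static void *kzalloc(size_t size, gfp_t flags)
    {
            void *ret = kmalloc(size, flags);

            if (ret)
                    memset(ret, 0, size);
            return ret;
    }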
     
  • Extract common code used by ptrace_attach() and may_ptrace_attach()
    into a separate function.

    Signed-off-by: Miklos Szeredi
    Cc:
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Miklos Szeredi
     
  • Revert the hack introduced last week.

    Signed-off-by: John Hawkes
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    John Hawkes
     
  • For a NUMA system with multiple CPUs per node, declaring a cpu-exclusive
    cpuset that includes only some, but not all, of the CPUs in a node will mangle
    the sched domain structures.

    Signed-off-by: John Hawkes
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    John Hawkes
     
  • Signed-off-by: John Hawkes
    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    John Hawkes
     
  • Now the real motivation for this cpuset mem_exclusive patch series seems
    trivial.

    This patch keeps a task in or under one mem_exclusive cpuset from provoking an
    oom kill of a task under a non-overlapping mem_exclusive cpuset. Since only
    interrupt and GFP_ATOMIC allocations are allowed to escape mem_exclusive
    containment, there is little to gain from oom killing a task under a
    non-overlapping mem_exclusive cpuset, as almost all kernel and user memory
    allocation must come from disjoint memory nodes.

    This patch enables configuring a system so that a runaway job under one
    mem_exclusive cpuset cannot cause the killing of a job in another such cpuset
    that might be using very high compute and memory resources for a prolonged
    time.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     
  • This patch makes use of the previously underutilized cpuset flag
    'mem_exclusive' to provide what amounts to another layer of memory placement
    resolution. With this patch, there are now the following four layers of
    memory placement available:

    1) The whole system (interrupt and GFP_ATOMIC allocations can use this),
    2) The nearest enclosing mem_exclusive cpuset (GFP_KERNEL allocations can use),
    3) The current task's cpuset (GFP_USER allocations constrained to here), and
    4) Specific node placement, using mbind and set_mempolicy.

    These nest - each layer is a subset (same or within) of the previous.

    Layer (2) above is new, with this patch. The call used to check whether a
    zone (its node, actually) is in a cpuset (in its mems_allowed, actually) is
    extended to take a gfp_mask argument, and its logic is extended, in the case
    that __GFP_HARDWALL is not set in the flag bits, to look up the cpuset
    hierarchy for the nearest enclosing mem_exclusive cpuset, to determine if
    placement is allowed. The definition of GFP_USER, which used to be identical
    to GFP_KERNEL, is changed to also set the __GFP_HARDWALL bit, in the previous
    cpuset_gfp_hardwall_flag patch.

    GFP_ATOMIC and GFP_KERNEL allocations will stay within the current
    task's cpuset, so long as any node therein is not too tight on memory,
    but will escape to the larger layer if need be (see the sketch after
    this entry).

    The intended use is to allow something like a batch manager to handle several
    jobs, each job in its own cpuset, but using common kernel memory for caches
    and such. Swapper and oom_kill activity is also constrained to Layer (2). A
    task in or below one mem_exclusive cpuset should not cause swapping on nodes
    in another non-overlapping mem_exclusive cpuset, nor provoke oom_killing of a
    task in another such cpuset. Heavy use of kernel memory for i/o caching and
    such by one job should not impact the memory available to jobs in other
    non-overlapping mem_exclusive cpusets.

    This patch enables providing hardwall, inescapable cpusets for memory
    allocations of each job, while sharing kernel memory allocations between
    several jobs, in an enclosing mem_exclusive cpuset.

    Like Dinakar's patch earlier to enable administering sched domains using the
    cpu_exclusive flag, this patch also provides a useful meaning to a cpuset flag
    that had previously done nothing much useful other than restrict what cpuset
    configurations were allowed.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
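
    A sketch of the layered decision described above. The two predicate
    callbacks stand in for the real cpuset lookups, the flag value is
    illustrative, and the interrupt/GFP_ATOMIC escape (layer 1) is not
    modelled:

    typedef unsigned int gfp_t;
    #define GFP_HARDWALL_SKETCH  0x40000u    /* illustrative __GFP_HARDWALL bit */

    /* Returns non-zero if an allocation with gfp_mask may take memory from
     * 'node'. in_task_cpuset() stands for "node is in current->mems_allowed",
     * in_nearest_mem_exclusive() for "node is in the nearest enclosing
     * mem_exclusive cpuset". */
    static int node_allowed_sketch(int node, gfp_t gfp_mask,
                                   int (*in_task_cpuset)(int),
                                   int (*in_nearest_mem_exclusive)(int))
    {
            if (in_task_cpuset(node))                /* layer 3 */
                    return 1;
            if (gfp_mask & GFP_HARDWALL_SKETCH)      /* GFP_USER: no escape */
                    return 0;
            /* GFP_KERNEL: may fall back to the nearest enclosing
             * mem_exclusive cpuset (layer 2) */
            return in_nearest_mem_exclusive(node);
    }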
     
  • This patch cleans up the error path of futex_fd() by removing duplicate
    code.

    Signed-off-by: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pekka Enberg
     
  • posix_timer_event() first checks that the thread (SIGEV_THREAD_ID case)
    does not have the PF_EXITING flag, then it calls send_sigqueue(), which
    locks the tasklist. But if the thread exits in between, the kernel will oops
    (->sighand == NULL after __exit_sighand).

    This patch moves the PF_EXITING check into send_sigqueue(); it must be
    done atomically under tasklist_lock. When send_sigqueue() detects an
    exiting thread it returns -1. In that case posix_timer_event will send
    the signal to the thread group.

    Also, this patch fixes task_struct use-after-free in posix_timer_event.

    Signed-off-by: Oleg Nesterov
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • The patch removes a redundant variable `sig' from sys_prctl().

    For some reason, when sys_prctl is called with option == PR_SET_PDEATHSIG
    then the value of arg2 is assigned to an int variable named sig. Then sig
    is tested with valid_signal() and later used to set the value of
    current->pdeath_signal.

    There is no reason to use this intermediate variable since valid_signal()
    takes an unsigned long argument, so it can handle being passed arg2
    directly, and if the call to valid_signal is OK, then we know the value
    of arg2 is in the range zero to _NSIG, so it'll easily fit in a plain int
    and there's no problem assigning it later to current->pdeath_signal
    (which is an int).

    The patch gets rid of the pointless variable `sig'.
    This reduces the size of kernel/sys.o in 2.6.13-rc6-mm1 by 32 bytes on my
    system.

    Patch has been compile tested, boot tested, and just to make damn sure I
    didn't break anything I wrote a quick test app that calls
    prctl(PR_SET_PDEATHSIG ...) with the entire range of values for an
    unsigned long, and it behaves as expected with and without the patch.

    Signed-off-by: Jesper Juhl
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jesper Juhl
     
  • There is a problem in the accounting subsystem: the kernel cannot
    correctly handle files larger than 2GB. The output file containing the
    process accounting data can grow very large if the system is large
    enough and active enough. If the 2GB limit is reached, then the system
    simply stops storing process accounting data.

    Another annoying problem is that once the system reaches this 2GB limit,
    then every process which exits will receive a signal, SIGXFSZ. This signal
    is generated because an attempt was made to write beyond the limit for the
    file descriptor. This signal makes it look like every process has exited
    due to a signal, when in fact, they have not.

    The solution is to add the O_LARGEFILE flag to the list of flags used to
    open the accounting file (see the sketch after this entry). The rest of
    the accounting support is already largefile-safe.

    The changes were tested by constructing a large file (just short of 2GB),
    enabling accounting, and then running enough commands to cause the
    accounting data generated to increase the size of the file to 2GB. Without
    the changes, the file grows to 2GB and the last command run in the test
    script appears to exit due to a signal when it has not. With the changes,
    things work as expected and quietly.

    There are some user level changes required so that it can deal with
    largefiles, but those are being handled separately.

    Signed-off-by: Peter Staubach
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Staubach
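
    The essence of the fix is a single flag at open time. A user-space
    illustration of the difference (the kernel opens the accounting file
    through its own file-open helper, but the flag is the same):

    #define _GNU_SOURCE          /* exposes O_LARGEFILE in glibc headers */
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
            /* Without O_LARGEFILE, the write that crosses the 2GB mark
             * fails and the writer gets SIGXFSZ; with it, the append-mode
             * file may grow past 2GB. */
            int fd = open("pacct", O_WRONLY | O_CREAT | O_APPEND | O_LARGEFILE,
                          0600);

            if (fd < 0)
                    perror("open");
            else
                    printf("opened with O_LARGEFILE, fd %d\n", fd);
            return 0;
    }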
     
  • This patch simplifies the usage of do_notify_parent_cldstop(); it lessens
    the source and .text size slightly, and makes the code (in my opinion) a
    bit more readable.

    I am sending this patch now because I'm afraid Paul will touch
    do_notify_parent_cldstop() really soon, so it's better to clean up first.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • IRQ_PER_CPU is not used by all architectures. This patch introduces the
    macros ARCH_HAS_IRQ_PER_CPU and CHECK_IRQ_PER_CPU() to avoid the
    generation of dead code in __do_IRQ() (see the sketch after this entry).

    ARCH_HAS_IRQ_PER_CPU is defined by architectures using IRQ_PER_CPU in their
    include/asm_ARCH/irq.h file.

    Through grepping the tree I found the following architectures currently use
    IRQ_PER_CPU:

    cris, ia64, ppc, ppc64 and parisc.

    Signed-off-by: Karsten Wiese
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Karsten Wiese
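
    The shape of such a macro pair; these definitions follow the description
    above and may differ in detail from the merged ones:

    /* Architectures that define IRQ_PER_CPU in include/asm-$ARCH/irq.h
     * also define ARCH_HAS_IRQ_PER_CPU there. */
    #ifdef ARCH_HAS_IRQ_PER_CPU
    # define CHECK_IRQ_PER_CPU(status)  ((status) & IRQ_PER_CPU)
    #else
    # define CHECK_IRQ_PER_CPU(status)  0    /* branch constant-folds away */
    #endif

    In __do_IRQ(), a test such as CHECK_IRQ_PER_CPU(status) then compiles
    away entirely on architectures that never set IRQ_PER_CPU, instead of
    generating dead code.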
     
  • With "-W -Wno-unused -Wno-sign-compare" I get the following compile warning:

    CC kernel/workqueue.o
    kernel/workqueue.c: In function `workqueue_cpu_callback':
    kernel/workqueue.c:504: warning: ordered comparison of pointer with integer zero

    On error create_workqueue_thread() returns NULL, not a negative pointer,
    so the following trivial patch suggests itself (illustrated after this
    entry).

    Signed-off-by: Mika Kukkonen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mika Kukkonen
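
    A tiny self-contained reproduction of that warning class; the function
    name is invented:

    #include <stdio.h>
    #include <stdlib.h>

    /* Returns NULL on failure, like create_workqueue_thread() above:
     * NULL, not a negative value. */
    static void *create_something(int fail)
    {
            return fail ? NULL : malloc(16);
    }

    int main(void)
    {
            void *p = create_something(1);

            /* Wrong test, which triggers "ordered comparison of pointer
             * with integer zero":
             *         if (p < 0) ...
             * Correct test: */
            if (p == NULL)
                    fprintf(stderr, "creation failed\n");
            free(p);
            return 0;
    }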
     
  • Change the sequence of operations performed during module loading to flush
    the instruction cache before module parameters are processed. If a module
    has parameters of an unusual type that cannot be handled using the standard
    accessor functions param_set_xxx and param_get_xxx, it has to provide a
    set of accessor functions for this type. This requires module code to be
    executed during parameter processing, which is of course only possible
    after the icache has been flushed.

    Signed-off-by: Thomas Koeller
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Koeller