11 Sep, 2005

22 commits

  • Signed-off-by: Christophe Lucas
    Signed-off-by: Domen Puncer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christophe Lucas
     
  • Signed-off-by: Christophe Lucas
    Signed-off-by: Domen Puncer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christophe Lucas
     
  • misc_register() can fail.

    Signed-off-by: Christophe Lucas
    Signed-off-by: Domen Puncer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christophe Lucas
     
  • With Nick Piggin

    Give some things static scope.

    Signed-off-by: Adrian Bunk
    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adrian Bunk
     
  • Update RCU documentation based on discussions and review of RCU-based tree
    patches. Add an introductory whatisRCU.txt file.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul E. McKenney
     
  • Randy cleaned out the bulk of these stale references to the now long-gone
    Documentation/smp.tex back in 2004. I followed this up with a few more
    sweeps. Somehow, these have managed to sneak back in since.

    I can't seem to figure out a contact point for M32R (no one is listed in
    MAINTAINERS!), but these patches are trivial anyway.

    Signed-off-by: Arthur Othieno
    Acked-by: Russell King
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arthur Othieno
     
  • Don't pull tasks from a group if that would cause the group's total load to
    drop below its total cpu_power (i.e. cause the group to start going idle).

    Signed-off-by: Suresh Siddha
    Signed-off-by: Nick Piggin
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Siddha, Suresh B
     
  • Jack Steiner brought up this issue at my OLS talk.

    Take a scenario where two tasks are pinned to two HT threads in a physical
    package. Idle packages in the system will keep kicking migration_thread on
    the busy package without any success.

    We will run into similar scenarios in the presence of CMP/NUMA.

    Signed-off-by: Suresh Siddha
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Siddha, Suresh B
     
  • In sys_sched_yield(), we cache current->array in the "array" variable, so
    there's no need to dereference "current" again later.
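
    For illustration, a minimal sketch of the pattern (simplified and
    abridged; not the exact kernel code):

    asmlinkage long sys_sched_yield(void)
    {
            runqueue_t *rq = this_rq_lock();
            prio_array_t *array = current->array;   /* dereference cached once */
            prio_array_t *target = rq->expired;

            /* special rule: RT tasks roll over to the active array */
            if (rt_task(current))
                    target = rq->active;

            /* ...all later uses go through "array", not current->array... */
            dequeue_task(current, array);
            enqueue_task(current, target);

            /* ...unlock the runqueue and reschedule (elided)... */
            return 0;
    }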

    Signed-Off-By: Renaud Lienhart
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Renaud Lienhart
     
  • If an idle sibling of an HT queue encounters a busy sibling, then perform
    higher-level load balancing of the non-idle variety.

    Performance of multiprocessor HT systems with low numbers of tasks
    (generally < number of virtual CPUs) can be significantly worse than that
    of the exact same workloads running in non-HT mode. This is largely due
    to poor scheduling behaviour.

    This patch improves the situation, making the performance gap far less
    significant on one problematic test case (tbench).
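
    A rough sketch of the idea (SD_SHARE_CPUPOWER marks HT sibling domains;
    the exact integration into load_balance() is abridged and partly assumed):

    static int load_balance(int this_cpu, runqueue_t *this_rq,
                            struct sched_domain *sd, enum idle_type idle)
    {
            int sd_idle = 0;

            /*
             * An "idle" HT sibling whose partner thread is busy should not
             * be treated as truly idle: fall back to non-idle balancing so
             * higher-level domains spread the load properly.
             */
            if (idle != NOT_IDLE && (sd->flags & SD_SHARE_CPUPOWER))
                    sd_idle = 1;

            /* ...rest of load_balance(), consulting sd_idle (elided)... */
    }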

    Signed-off-by: Nick Piggin
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • During periodic load balancing, don't hold this runqueue's lock while
    scanning remote runqueues, which can take a non trivial amount of time
    especially on very large systems.

    Holding the runqueue lock will only help to stabilise ->nr_running; however,
    this doesn't help much because tasks being woken will simply get held up on
    the runqueue lock, so ->nr_running would not provide a really accurate
    picture of runqueue load in that case anyway.

    What's more, ->nr_running (and possibly the cpu_load averages) of remote
    runqueues won't be stable anyway, so load balancing is always an inexact
    operation.
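
    A sketch of the resulting locking pattern (function names are those of the
    scheduler of that era; signatures and error handling are abridged):

    /* scan for the busiest group/queue without holding this_rq->lock */
    group = find_busiest_group(sd, this_cpu, &imbalance, idle);
    if (!group)
            goto out_balanced;

    busiest = find_busiest_queue(group);
    if (!busiest)
            goto out_balanced;

    /* only take locks once there is actually something worth pulling */
    if (busiest->nr_running > 1) {
            double_lock_balance(this_rq, busiest);
            nr_moved = move_tasks(this_rq, this_cpu, busiest,
                                  imbalance, sd, idle);
            spin_unlock(&busiest->lock);
    }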

    Signed-off-by: Nick Piggin
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Similarly to the earlier change in load_balance, only lock the runqueue in
    load_balance_newidle if the busiest queue found has a nr_running > 1. This
    will reduce the frequency of expensive remote runqueue lock acquisitions in the
    schedule() path on some workloads.

    Signed-off-by: Nick Piggin
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • William Weston reported unusually high scheduling latencies on his x86 HT
    box, on the -RT kernel. I managed to reproduce it on my HT box and the
    latency tracer shows the incident in action:

    _------=> CPU#
    / _-----=> irqs-off
    | / _----=> need-resched
    || / _---=> hardirq/softirq
    ||| / _--=> preempt-depth
    |||| /
    ||||| delay
    cmd pid ||||| time | caller
    \ / ||||| \ | /
    du-2803 3Dnh2 0us : __trace_start_sched_wakeup (try_to_wake_up)
    ..............................................................
    ... we are running on CPU#3, PID 2778 gets woken to CPU#1: ...
    ..............................................................
    du-2803 3Dnh2 0us : __trace_start_sched_wakeup <-2778> (73 1)
    du-2803 3Dnh2 0us : _raw_spin_unlock (try_to_wake_up)
    ................................................
    ... still on CPU#3, we send an IPI to CPU#1: ...
    ................................................
    du-2803 3Dnh1 0us : resched_task (try_to_wake_up)
    du-2803 3Dnh1 1us : smp_send_reschedule (try_to_wake_up)
    du-2803 3Dnh1 1us : send_IPI_mask_bitmask (smp_send_reschedule)
    du-2803 3Dnh1 2us : _raw_spin_unlock_irqrestore (try_to_wake_up)
    ...............................................
    ... 1 usec later, the IPI arrives on CPU#1: ...
    ...............................................
    -0 1Dnh. 2us : smp_reschedule_interrupt (c0100c5a 0 0)

    So far so good; this is the normal wakeup/preemption mechanism. But here
    comes the scheduler anomaly on CPU#1:

    -0 1Dnh. 2us : preempt_schedule_irq (need_resched)
    -0 1Dnh. 2us : preempt_schedule_irq (need_resched)
    -0 1Dnh. 3us : __schedule (preempt_schedule_irq)
    -0 1Dnh. 3us : profile_hit (__schedule)
    -0 1Dnh1 3us : sched_clock (__schedule)
    -0 1Dnh1 4us : _raw_spin_lock_irq (__schedule)
    -0 1Dnh1 4us : _raw_spin_lock_irqsave (__schedule)
    -0 1Dnh2 5us : _raw_spin_unlock (__schedule)
    -0 1Dnh1 5us : preempt_schedule (__schedule)
    -0 1Dnh1 6us : _raw_spin_lock (__schedule)
    -0 1Dnh2 6us : find_next_bit (__schedule)
    -0 1Dnh2 6us : _raw_spin_lock (__schedule)
    -0 1Dnh3 7us : find_next_bit (__schedule)
    -0 1Dnh3 7us : find_next_bit (__schedule)
    -0 1Dnh3 8us : _raw_spin_unlock (__schedule)
    -0 1Dnh2 8us : preempt_schedule (__schedule)
    -0 1Dnh2 8us : find_next_bit (__schedule)
    -0 1Dnh2 9us : trace_stop_sched_switched (__schedule)
    -0 1Dnh2 9us : _raw_spin_lock (trace_stop_sched_switched)
    -0 1Dnh3 10us : trace_stop_sched_switched <-2778> (73 8c)
    -0 1Dnh3 10us : _raw_spin_unlock (trace_stop_sched_switched)
    -0 1Dnh1 10us : _raw_spin_unlock (__schedule)
    -0 1Dnh. 11us : local_irq_enable_noresched (preempt_schedule_irq)
    -0 1Dnh. 11us < (0)

    We didn't pick up pid 2778! It only gets scheduled much later:

    -2778 1Dnh2 412us : __switch_to (__schedule)
    -2778 1Dnh2 413us : __schedule <-0> (8c 73)
    -2778 1Dnh2 413us : _raw_spin_unlock (__schedule)
    -2778 1Dnh1 413us : trace_stop_sched_switched (__schedule)
    -2778 1Dnh1 414us : _raw_spin_lock (trace_stop_sched_switched)
    -2778 1Dnh2 414us : trace_stop_sched_switched <-2778> (73 1)
    -2778 1Dnh2 414us : _raw_spin_unlock (trace_stop_sched_switched)
    -2778 1Dnh1 415us : trace_stop_sched_switched (__schedule)

    The reason for this anomaly is the following code in dependent_sleeper():

    /*
     * If a user task with lower static priority than the
     * running task on the SMT sibling is trying to schedule,
     * delay it till there is proportionately less timeslice
     * left of the sibling task to prevent a lower priority
     * task from using an unfair proportion of the
     * physical cpu's resources. -ck
     */
    [...]
    if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) / 100) >
                    task_timeslice(p)))
            ret = 1;

    Note that in contrast to the comment above, we don't actually do the check
    based on static priority; we do the check based on timeslices. But
    timeslices go up and down, and even highprio tasks can randomly have very
    low timeslices (just before their next refill) and can thus be judged as
    'lowprio' by the above piece of code. This condition is clearly buggy.
    The correct test is to check for static_prio _and_ to check for the
    preemption priority. Even on different static priority levels, a
    higher-prio interactive task should not be delayed due to a
    higher-static-prio CPU hog.

    There is a symmetric bug in the 'kick SMT sibling' code of this function as
    well, which can be solved in a similar way.

    The patch below (against the current scheduler queue in -mm) fixes both
    bugs. I have build- and boot-tested this on x86 SMT, and nice +20 tasks
    still get properly throttled - so the dependent-sleeper logic is still in
    action.

    btw., these bugs pessimised the SMT scheduler because the 'delay wakeup'
    property was applied too liberally, so this fix is likely a throughput
    improvement as well.

    I separated out a smt_slice() function to make the code easier to read.
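
    In sketch form, the helper and the corrected condition look roughly like
    this (reconstructed from the description above; the exact code may differ):

    /*
     * Timeslice the sibling task would effectively "lose" if this task
     * ran on the other SMT thread:
     */
    static inline unsigned long smt_slice(task_t *p, struct sched_domain *sd)
    {
            return p->time_slice * (100 - sd->per_cpu_gain) / 100;
    }

    /* only delay genuinely lower-prio, non-preempting tasks: */
    if (smt_curr->static_prio < p->static_prio &&
        !TASK_PREEMPTS_CURR(p, smt_rq) &&
        smt_slice(smt_curr, sd) > task_timeslice(p))
            ret = 1;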

    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • This patch implements a task state bit (TASK_NONINTERACTIVE), which can be
    used by blocking points to mark the task's wait as "non-interactive". This
    does not mean the task will be considered a CPU-hog - the wait will simply
    not have an effect on the waiting task's priority - positive or negative
    alike. Right now only pipe_wait() will make use of it, because it's a
    common source of not-so-interactive waits (kernel compilation jobs, etc.).
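
    Roughly, the pipe side of the change looks like this (a sketch against the
    fs/pipe.c of the time; PIPE_WAIT/PIPE_SEM are that code's accessor macros):

    void pipe_wait(struct inode *inode)
    {
            DEFINE_WAIT(wait);

            /*
             * Pipes are completely local, so mark the wait non-interactive:
             * it will neither boost nor hurt the sleeper's priority.
             */
            prepare_to_wait(PIPE_WAIT(*inode), &wait,
                            TASK_INTERRUPTIBLE | TASK_NONINTERACTIVE);
            up(PIPE_SEM(*inode));
            schedule();
            finish_wait(PIPE_WAIT(*inode), &wait);
            down(PIPE_SEM(*inode));
    }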

    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • whitespace cleanups.

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Add relevant checks into find_idlest_group() and find_idlest_cpu() so that
    they return only groups that contain allowed CPUs and only allowed CPUs,
    respectively.
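
    In sketch form, the kind of checks this adds (the cpumask helpers are the
    kernel's; the exact surrounding code is an assumption):

    /* in find_idlest_group(): skip groups with no allowed CPUs */
    if (!cpus_intersects(group->cpumask, p->cpus_allowed))
            goto nextgroup;

    /* in find_idlest_cpu(): traverse only the allowed CPUs */
    cpumask_t tmp;
    int i;

    cpus_and(tmp, group->cpumask, p->cpus_allowed);
    for_each_cpu_mask(i, tmp) {
            /* ...pick the least-loaded of the allowed CPUs... */
    }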

    Signed-off-by: M.Baris Demiray
    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    M.Baris Demiray
     
  • The hyperthread-aware nice handling currently puts to sleep any
    non-real-time task when a real-time task is running on its sibling cpu.
    This can lead to prolonged starvation by having the non-real-time task
    pegged to the cpu with load balancing not pulling that task away.

    Currently we force lower-priority hyperthread tasks to run for a percentage
    of the time difference based on timeslice differences, which is meaningless
    when comparing real-time tasks to SCHED_NORMAL tasks. We can allow
    non-real-time tasks to run alongside real-time tasks on the sibling for up
    to per_cpu_gain% of the time if we use jiffies as a counter.

    Cleanups and micro-optimisations to the relevant code section should make
    it more understandable as well.
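
    A sketch of the jiffies-based gating described above (DEF_TIMESLICE,
    smt_curr and sd->per_cpu_gain are the scheduler's existing names; the
    exact form is an assumption):

    if (rt_task(smt_curr)) {
            /*
             * With a real-time task on the sibling, let this SCHED_NORMAL
             * task run only per_cpu_gain% of the time, using jiffies as a
             * free-running counter instead of comparing timeslices.
             */
            if ((jiffies % DEF_TIMESLICE) >
                (sd->per_cpu_gain * DEF_TIMESLICE / 100))
                    ret = 1;
    }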

    Signed-off-by: Con Kolivas
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Con Kolivas
     
  • Add ability to clear statistics.

    Signed-off-by: Paul Fulghum
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Fulghum
     
  • The cpusets-formalize-intermediate-gfp_kernel-containment patch
    has a deadlock problem.

    This patch was part of a set of four patches to make more
    extensive use of the cpuset 'mem_exclusive' attribute to
    manage kernel GFP_KERNEL memory allocations and to constrain
    the out-of-memory (oom) killer.

    A task that is changing cpusets in particular ways on a system
    when it is very short of free memory could double trip over
    the global cpuset_sem semaphore (get the lock and then deadlock
    trying to get it again).

    The second attempt to get cpuset_sem would be in the routine
    cpuset_zone_allowed(). This was discovered by code inspection.
    I cannot reproduce the problem except with an artificially
    hacked kernel and a specialized stress test.

    In real life you cannot hit this unless you are manipulating
    cpusets, and are very unlikely to hit it unless you are rapidly
    modifying cpusets on a memory-tight system. Even then it would
    be a rare occurrence.

    If you did hit it, the task double tripping over cpuset_sem
    would deadlock in the kernel, and any other task also trying
    to manipulate cpusets would deadlock there too, on cpuset_sem.
    Your batch manager would be wedged solid (if it was cpuset
    savvy), but classic Unix shells and utilities would work well
    enough to reboot the system.

    The unusual condition that led to this bug is that unlike most
    semaphores, cpuset_sem _can_ be acquired while in the page
    allocation code, when __alloc_pages() calls cpuset_zone_allowed.
    So it is easy to mistakenly perform the following sequence:
    1) task makes system call to alter a cpuset
    2) take cpuset_sem
    3) try to allocate memory
    4) memory allocator, via cpuset_zone_allowed, tries to take cpuset_sem
    5) deadlock

    The reason that this is not a serious bug for most users
    is that almost all calls to allocate memory don't require
    taking cpuset_sem. Only some code paths off the beaten
    track require taking cpuset_sem -- which is good. Taking
    a global semaphore on the main code path for allocating
    memory would not scale well.

    This patch fixes this deadlock by wrapping the up() and down()
    calls on cpuset_sem in kernel/cpuset.c with code that tracks
    the nesting depth of the current task on that semaphore, and
    only does the real down() if the task doesn't hold the lock
    already, and only does the real up() if the nesting depth
    (number of unmatched downs) is exactly one.
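
    In sketch form, the wrapper amounts to a recursion count on top of the
    existing semaphore (names here are illustrative, not necessarily the
    exact ones used):

    static struct task_struct *cpuset_sem_owner;
    static int cpuset_sem_depth;

    static void cpuset_down(struct semaphore *sem)
    {
            if (cpuset_sem_owner != current) {
                    down(sem);
                    cpuset_sem_owner = current;
            }
            cpuset_sem_depth++;     /* count unmatched downs */
    }

    static void cpuset_up(struct semaphore *sem)
    {
            if (--cpuset_sem_depth == 0) {
                    cpuset_sem_owner = NULL;
                    up(sem);
            }
    }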

    It is no longer necessary to call refresh_mems() whenever
    the cpuset_sem semaphore is acquired and the code executed
    while holding it might try to allocate memory. Two
    refresh_mems() calls were removed thanks to this. This is
    a good change, as failing to place all the necessary
    refresh_mems() calls was a primary source of bugs in this
    cpuset code. The only remaining call to refresh_mems() is
    made while doing a memory allocation, if certain task
    memory placement data needs to be updated from its cpuset,
    due to the cpuset having been changed behind the task's
    back.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     
  • This patch (written by me and also containing many suggestions of Arjan van
    de Ven) does a major cleanup of the spinlock code. It does the following
    things:

    - consolidates and enhances the spinlock/rwlock debugging code

    - simplifies the asm/spinlock.h files

    - encapsulates the raw spinlock type and moves generic spinlock
    features (such as ->break_lock) into the generic code.

    - cleans up the spinlock code hierarchy to get rid of the spaghetti.
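
    The encapsulation mentioned above wraps the arch-supplied raw type in a
    single generic type, roughly as follows (a sketch of the resulting
    linux/spinlock_types.h; the debug fields are an approximation):

    typedef struct {
            raw_spinlock_t raw_lock;        /* from asm/spinlock_types.h */
    #if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
            unsigned int break_lock;        /* generic, no longer per-arch */
    #endif
    #ifdef CONFIG_DEBUG_SPINLOCK
            unsigned int magic, owner_cpu;  /* owner/CPU tracking for debug */
            void *owner;
    #endif
    } spinlock_t;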

    Most notably there's now only a single variant of the debugging code,
    located in lib/spinlock_debug.c. (previously we had one SMP debugging
    variant per architecture, plus a separate generic one for UP builds)

    Also, I've enhanced the rwlock debugging facility; it will now track
    write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too.
    All locks have lockup detection now, which will work for both soft and hard
    spin/rwlock lockups.

    The arch-level include files now only contain the minimally necessary
    subset of the spinlock code - all the rest that can be generalized now
    lives in the generic headers:

    include/asm-i386/spinlock_types.h   |   16
    include/asm-x86_64/spinlock_types.h |   16

    I have also split up the various spinlock variants into separate files,
    making it easier to see which does what. The new layout is:

    SMP                         | UP
    ----------------------------|-----------------------------------
    asm/spinlock_types_smp.h    | linux/spinlock_types_up.h
    linux/spinlock_types.h      | linux/spinlock_types.h
    asm/spinlock_smp.h          | linux/spinlock_up.h
    linux/spinlock_api_smp.h    | linux/spinlock_api_up.h
    linux/spinlock.h            | linux/spinlock.h

    /*
     * here's the role of the various spinlock/rwlock related include files:
     *
     * on SMP builds:
     *
     *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
     *                        initializers
     *
     *  linux/spinlock_types.h:
     *                        defines the generic type and initializers
     *
     *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
     *                        implementations, mostly inline assembly code
     *
     *   (also included on UP-debug builds:)
     *
     *  linux/spinlock_api_smp.h:
     *                        contains the prototypes for the _spin_*() APIs.
     *
     *  linux/spinlock.h:     builds the final spin_*() APIs.
     *
     * on UP builds:
     *
     *  linux/spinlock_type_up.h:
     *                        contains the generic, simplified UP spinlock type.
     *                        (which is an empty structure on non-debug builds)
     *
     *  linux/spinlock_types.h:
     *                        defines the generic type and initializers
     *
     *  linux/spinlock_up.h:
     *                        contains the __raw_spin_*()/etc. version of UP
     *                        builds. (which are NOPs on non-debug, non-preempt
     *                        builds)
     *
     *   (included on UP-non-debug builds:)
     *
     *  linux/spinlock_api_up.h:
     *                        builds the _spin_*() APIs.
     *
     *  linux/spinlock.h:     builds the final spin_*() APIs.
     */

    All SMP and UP architectures are converted by this patch.

    arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
    cross-compilers. m32r, mips, sh and sparc have not been tested yet, but
    should be mostly fine.

    From: Grant Grundler

    Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
    Builds 32-bit SMP kernel (not booted or tested). I did not try to build
    non-SMP kernels. That should be trivial to fix up later if necessary.

    I converted the bit-ops atomic_hash lock to raw_spinlock_t. Doing so avoids
    some ugly nesting of linux/*.h and asm/*.h files. Those particular locks
    are well tested and contained entirely inside arch-specific code. I do NOT
    expect any new issues to arise with them.

    If someone does ever need to use debug/metrics with them, then they will
    need to unravel this hairball between spinlocks, atomic ops, and bit ops
    that exists only because parisc has exactly one atomic instruction: LDCW
    (load and clear word).

    From: "Luck, Tony"

    ia64 fix

    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Grant Grundler
    Cc: Matthew Wilcox
    Signed-off-by: Hirokazu Takata
    Signed-off-by: Mikael Pettersson
    Signed-off-by: Benoit Boissinot
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • pcibus_to_cpumask expands into more than just an initialiser, so gcc
    moans about code before variable declarations.
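
    The usual shape of such a fix, as a sketch (the variable names are
    hypothetical, not the actual driver code):

    /* before: the macro expands into real code, which then sits ahead of
     * the remaining declarations and triggers gcc's mixed-declarations
     * warning */
    cpumask_t mask = pcibus_to_cpumask(bus);
    int len;

    /* after: keep the declarations first, assign afterwards */
    cpumask_t mask;
    int len;

    mask = pcibus_to_cpumask(bus);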

    Signed-off-by: Alan Cox
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alan Cox
     
  • *** Warning: "bit_spin_lock" [fs/ntfs/ntfs.ko] undefined!
    *** Warning: "bit_spin_unlock" [fs/ntfs/ntfs.ko] undefined!

    Cc: Anton Altaparmakov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     

10 Sep, 2005

18 commits