20 Jul, 2007

4 commits

  • Implement the cpu_clock(cpu) interface for kernel-internal use:
    high-speed (but slightly incorrect) per-cpu clock constructed from
    sched_clock().

    This API, unused at the moment, will be used in the future by blktrace,
    by the softlockup-watchdog, by printk and by lockstat. (A rough sketch
    of the idea follows after this list.)

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • nr_moved is not the correct check for triggering the all-pinned logic.
    Fix the all-pinned logic in the case of load_balance_newidle().

    Signed-off-by: Suresh Siddha
    Signed-off-by: Ingo Molnar

    Suresh Siddha
     
  • In the presence of SMT, newly idle balance was never happening for
    multi-core and SMP domains (even when both logical siblings are idle).

    If thread 0 is already idle and thread 1 is about to go idle, the newly
    idle load balance always thinks that one of the threads is not idle and
    skips the newly idle load balance for the multi-core and SMP domains.

    This is because of the idle_cpu() macro, which checks whether the
    current process on a cpu is the idle process. That is not the case for
    the thread doing the load_balance_newidle(): it is still the current
    task on its cpu, even though its runqueue is about to become empty.

    Fix this by using the runqueue's nr_running field instead of idle_cpu().
    Also skip the 'only one idle cpu in the group will be doing load
    balancing' logic during the newly idle case. (An illustration of the
    changed check follows after this list.)

    Signed-off-by: Suresh Siddha
    Signed-off-by: Ingo Molnar

    Suresh Siddha
     
  • Currently most of the per cpu data that is accessed by different cpus
    has a ____cacheline_aligned_in_smp attribute. Move all this data to the
    new per cpu shared data section: .data.percpu.shared_aligned.

    This will separate the percpu data which is referenced frequently by
    other cpus from the local-only percpu data. (A sketch of such a
    definition follows after this list.)

    Signed-off-by: Fenghua Yu
    Acked-by: Suresh Siddha
    Cc: Rusty Russell
    Cc: Christoph Lameter
    Cc: "Luck, Tony"
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fenghua Yu
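
  A rough sketch of the cpu_clock() idea from the first commit above;
  everything here (the struct, its fields, this_cpu_clock_sketch()) is made
  up for illustration and is not the scheduler's actual implementation. The
  point is only that cheap per-cpu state layered on sched_clock() yields a
  fast timestamp that never steps backwards on a given CPU, even though it
  may drift slightly between CPUs -- hence "high-speed but slightly
  incorrect".

      #include <linux/percpu.h>
      #include <linux/irqflags.h>
      #include <linux/sched.h>                /* sched_clock() */

      /* Illustrative per-cpu state; not the scheduler's real bookkeeping. */
      struct cpu_clock_state {
              unsigned long long last_raw;    /* last sched_clock() sample */
              unsigned long long time;        /* monotonic per-cpu time    */
      };
      static DEFINE_PER_CPU(struct cpu_clock_state, cpu_clock_state);

      /* Fast, locally monotonic timestamp for the calling CPU. */
      static unsigned long long this_cpu_clock_sketch(void)
      {
              struct cpu_clock_state *cs;
              unsigned long long raw, now;
              unsigned long flags;

              local_irq_save(flags);          /* only this CPU touches cs  */
              cs = &__get_cpu_var(cpu_clock_state);
              raw = sched_clock();
              if (raw > cs->last_raw)         /* never step backwards      */
                      cs->time += raw - cs->last_raw;
              cs->last_raw = raw;
              now = cs->time;
              local_irq_restore(flags);

              return now;
      }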
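
  For the newly idle balancing fix above, a self-contained illustration of
  why the check matters; 'struct rq' below is a stand-in with only the one
  field the argument needs, and effectively_idle() is a made-up helper, not
  scheduler code. In the newly idle path the balancing CPU is still running
  the task that is about to block, so an idle_cpu()-style "is the idle task
  current?" test reports it as busy; an empty runqueue is the better signal.

      /* Stand-in with only the field the argument needs. */
      struct rq {
              unsigned long nr_running;       /* runnable tasks on the cpu */
      };

      /* running_idle_task models what idle_cpu() reports: current == idle */
      static int effectively_idle(const struct rq *rq, int running_idle_task,
                                  int newly_idle)
      {
              if (newly_idle)
                      return rq->nr_running == 0;  /* queue about to empty */

              return running_idle_task;            /* the old test         */
      }

      /*
       * Thread 1 about to go idle: its runqueue is empty, but the blocking
       * task is still current, so the old test says "not idle":
       *
       *   struct rq t1 = { .nr_running = 0 };
       *   effectively_idle(&t1, 0, 0)  ->  0   (old check: balance skipped)
       *   effectively_idle(&t1, 0, 1)  ->  1   (new check: balance runs)
       */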
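
  For the per cpu section change in the last commit above, a definition
  along these lines is what the new section implies. The section name is
  quoted from the commit message; the macro's name and shape and the
  hot_counter example are assumptions, not the patch itself. Variables that
  other CPUs read often end up grouped in their own cacheline-aligned
  section, so they no longer share cache lines with purely local per cpu
  data (on UP the distinction is pointless, so a UP build would presumably
  fall back to a plain per cpu definition).

      #include <linux/cache.h>        /* ____cacheline_aligned_in_smp */

      /* Assumed shape of an SMP helper for the new shared section. */
      #define DEFINE_PER_CPU_SHARED_ALIGNED(type, name)                   \
              __attribute__((__section__(".data.percpu.shared_aligned"))) \
              __typeof__(type) per_cpu__##name                            \
              ____cacheline_aligned_in_smp

      /* Hypothetical usage: a counter that other cpus poll frequently. */
      static DEFINE_PER_CPU_SHARED_ALIGNED(unsigned long, hot_counter);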
     

18 Jul, 2007

1 commit

  • Currently, the freezer treats all tasks as freezable, except for the kernel
    threads that explicitly set the PF_NOFREEZE flag for themselves. This
    approach is problematic, since it requires every kernel thread to either
    set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't
    care about the freezing of tasks at all.

    It seems better to only require the kernel threads that want to or need to
    be frozen to use some freezer-related code and to remove any
    freezer-related code from the other (nonfreezable) kernel threads, which is
    done in this patch.

    The patch causes all kernel threads to be nonfreezable by default (i.e.
    to have PF_NOFREEZE set by default) and introduces the set_freezable()
    function that should be called by the freezable kernel threads in order
    to unset PF_NOFREEZE. It also makes all of the currently freezable
    kernel threads call set_freezable(), so it shouldn't cause any
    (intentional) change of behaviour. Additionally, it updates the
    documentation to describe the freezing of tasks more accurately. (An
    example of a freezable kernel thread under the new scheme follows
    below.)

    [akpm@linux-foundation.org: build fixes]
    Signed-off-by: Rafael J. Wysocki
    Acked-by: Nigel Cunningham
    Cc: Pavel Machek
    Cc: Oleg Nesterov
    Cc: Gautham R Shenoy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael J. Wysocki
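
  A minimal example of what an opted-in kernel thread looks like under the
  new scheme. The thread itself (example_thread below) is hypothetical;
  set_freezable() and try_to_freeze() are the interfaces the text mentions,
  and the kthread helpers are ordinary kernel API.

      #include <linux/kthread.h>
      #include <linux/freezer.h>
      #include <linux/delay.h>

      /* Hypothetical worker thread that opts back in to the freezer. */
      static int example_thread(void *unused)
      {
              set_freezable();        /* clear the now-default PF_NOFREEZE */

              while (!kthread_should_stop()) {
                      try_to_freeze();        /* park here during suspend  */

                      /* ... the thread's periodic work goes here ... */
                      msleep_interruptible(1000);
              }
              return 0;
      }

  A thread like this would typically be started with kthread_run(); a kernel
  thread that never calls set_freezable() keeps PF_NOFREEZE and is simply
  left alone by the freezer.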
     

16 Jul, 2007

3 commits


14 Jul, 2007

4 commits


10 Jul, 2007

27 commits


24 Jun, 2007

1 commit

  • The intervals of domains that do not have SD_BALANCE_NEWIDLE must be
    considered for the calculation of the time of the next balance. Otherwise
    we may defer rebalancing forever.

    Siddha also spotted that the conversion of the balance interval
    to jiffies is missing. Fix that too.

    From: Srivatsa Vaddagiri

    Also continue the loop if !(sd->flags & SD_LOAD_BALANCE). (A sketch of
    the resulting next_balance bookkeeping follows below.)

    Tested-by: Paul E. McKenney

    It did in fact trigger under all three of mainline, CFS, and -rt (which
    includes CFS) -- see below for a couple of emails from last Friday
    giving results for these three on the AMD box (where it happened) and on
    a single-quad NUMA-Q system (where it did not, at least not with such
    severity).

    Signed-off-by: Christoph Lameter
    Signed-off-by: Ingo Molnar
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
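
  A hedged sketch of the bookkeeping described above; earliest_next_balance()
  is a made-up helper, not the actual rebalance code. It shows the three
  points of the fix: a domain without SD_LOAD_BALANCE is skipped with
  continue rather than ending the walk, the millisecond balance_interval is
  converted to jiffies before use, and every balance-capable domain's
  interval bounds the time of the next balance, so a domain whose balance
  pass is skipped this time (e.g. for lack of SD_BALANCE_NEWIDLE) cannot
  defer rebalancing forever.

      #include <linux/sched.h>        /* struct sched_domain, SD_* flags   */
      #include <linux/jiffies.h>      /* msecs_to_jiffies(), time_after()  */

      /* Walk this cpu's domain hierarchy and find when rebalancing is due. */
      static unsigned long earliest_next_balance(struct sched_domain *sd)
      {
              unsigned long next_balance = jiffies + 60 * HZ;

              for (; sd; sd = sd->parent) {
                      unsigned long interval;

                      if (!(sd->flags & SD_LOAD_BALANCE))
                              continue;       /* skip, but keep walking up */

                      /* balance_interval is in ms; compare in jiffies */
                      interval = msecs_to_jiffies(sd->balance_interval);

                      /*
                       * Every balance-capable domain's interval is taken
                       * into account, even if this pass skips balancing it
                       * (e.g. no SD_BALANCE_NEWIDLE in the newidle path).
                       */
                      if (time_after(next_balance,
                                     sd->last_balance + interval))
                              next_balance = sd->last_balance + interval;
              }

              return next_balance;
      }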