26 Jan, 2008

2 commits


31 Dec, 2007

1 commit

  • Meelis Roos reported these warnings on sparc64:

    CC kernel/sched.o
    In file included from kernel/sched.c:879:
    kernel/sched_debug.c: In function 'nsec_high':
    kernel/sched_debug.c:38: warning: comparison of distinct pointer types lacks a cast

    The debug check in do_div() is over-eager here, because the long long
    is always positive in these places. Mark this by casting them to
    unsigned long long.

    No change in code output:

    text data bss dec hex filename
    51471 6582 376 58429 e43d sched.o.before
    51471 6582 376 58429 e43d sched.o.after

    md5:
    7f7729c111f185bf3ccea4d542abc049 sched.o.before.asm
    7f7729c111f185bf3ccea4d542abc049 sched.o.after.asm

    Signed-off-by: Ingo Molnar

    Ingo Molnar
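    The warning comes from a type-check idiom: comparing the addresses of
    two differently-typed variables makes gcc warn when a macro argument
    is not the expected type. A minimal userspace sketch of the idea (not
    the kernel's actual do_div(); check_is_ull() and nsec_high_sketch()
    are illustrative names), showing how the cast silences the check when
    the value is known to be non-negative:

    ```c
    #include <assert.h>

    /* Sketch of the debug check: if x is not unsigned long long, the
     * pointer comparison below triggers "comparison of distinct pointer
     * types lacks a cast". Uses GNU statement expressions, as the kernel
     * does. */
    #define check_is_ull(x) ({                                    \
            unsigned long long __want;                            \
            __typeof__(x) __got;                                  \
            (void)(&__want == &__got); /* warns if x is signed */ \
            (x);                                                  \
    })

    static unsigned long long nsec_high_sketch(long long nsec)
    {
            /* The fix from the commit: the value is always positive
             * here, so casting to unsigned long long satisfies the
             * check without changing the generated code. */
            return check_is_ull((unsigned long long)nsec) / 1000000ULL;
    }

    int main(void)
    {
            assert(nsec_high_sketch(2500000000LL) == 2500);
            return 0;
    }
    ```

    Passing a plain long long to check_is_ull() would reproduce the
    sparc64 warning; the cast makes both pointer types match.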
     

28 Nov, 2007

1 commit


27 Nov, 2007

1 commit


10 Nov, 2007

1 commit

  • We lost the sched_min_granularity tunable to a clever optimization
    that uses the sched_latency/min_granularity ratio - but the ratio
    is quite unintuitive to users and can also crash the kernel if it
    is set to 0. So reintroduce the min_granularity tunable, while
    keeping the ratio maintained internally.

    No functionality changed.

    [ mingo@elte.hu: some fixlets. ]

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
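    The shape of the fix can be sketched in a few lines: the ratio
    becomes a derived, internal value that is recomputed whenever either
    user-visible tunable changes, with the divide-by-zero guarded. This
    is a hypothetical userspace sketch; the variable names mirror the
    kernel's tunables but update_ratio() is illustrative, not the actual
    kernel code:

    ```c
    #include <assert.h>

    static unsigned int sched_latency_ns = 20000000U;        /* 20 ms */
    static unsigned int sched_min_granularity_ns = 4000000U; /*  4 ms */
    static unsigned int sched_nr_latency;                    /* derived ratio */

    /* Recompute the internal ratio from the two user-visible tunables.
     * A zero granularity must not crash the kernel, so guard it. */
    static void update_ratio(void)
    {
            if (sched_min_granularity_ns)
                    sched_nr_latency = sched_latency_ns / sched_min_granularity_ns;
            else
                    sched_nr_latency = 1;
    }

    int main(void)
    {
            update_ratio();
            assert(sched_nr_latency == 5);

            sched_min_granularity_ns = 0;  /* user writes a bad value */
            update_ratio();
            assert(sched_nr_latency == 1); /* no divide-by-zero */
            return 0;
    }
    ```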
     

25 Oct, 2007

1 commit

  • Lockdep noticed that this lock can also be taken from hardirq context,
    so the unlock path cannot unconditionally re-enable irqs.

    WARNING: at kernel/lockdep.c:2033 trace_hardirqs_on()
    [show_trace_log_lvl+26/48] show_trace_log_lvl+0x1a/0x30
    [show_trace+18/32] show_trace+0x12/0x20
    [dump_stack+22/32] dump_stack+0x16/0x20
    [trace_hardirqs_on+405/416] trace_hardirqs_on+0x195/0x1a0
    [_read_unlock_irq+34/48] _read_unlock_irq+0x22/0x30
    [sched_debug_show+2615/4224] sched_debug_show+0xa37/0x1080
    [show_state_filter+326/368] show_state_filter+0x146/0x170
    [sysrq_handle_showstate+10/16] sysrq_handle_showstate+0xa/0x10
    [__handle_sysrq+123/288] __handle_sysrq+0x7b/0x120
    [handle_sysrq+40/64] handle_sysrq+0x28/0x40
    [kbd_event+1045/1680] kbd_event+0x415/0x690
    [input_pass_event+206/208] input_pass_event+0xce/0xd0
    [input_handle_event+170/928] input_handle_event+0xaa/0x3a0
    [input_event+95/112] input_event+0x5f/0x70
    [atkbd_interrupt+434/1456] atkbd_interrupt+0x1b2/0x5b0
    [serio_interrupt+59/128] serio_interrupt+0x3b/0x80
    [i8042_interrupt+263/576] i8042_interrupt+0x107/0x240
    [handle_IRQ_event+40/96] handle_IRQ_event+0x28/0x60
    [handle_edge_irq+175/320] handle_edge_irq+0xaf/0x140
    [do_IRQ+64/128] do_IRQ+0x40/0x80
    [common_interrupt+46/52] common_interrupt+0x2e/0x34

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
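    The underlying pattern: a lock that may be taken with interrupts
    already disabled must save and restore the interrupt state, rather
    than blindly re-enabling on unlock. A minimal userspace sketch of
    that invariant, with the irq state and lock primitives simulated
    (fake_lock_irqsave()/fake_unlock_irqrestore() are stand-ins, not
    real kernel functions):

    ```c
    #include <assert.h>

    static int irqs_enabled = 1;  /* simulated CPU interrupt state */

    /* Save the current interrupt state, then "disable interrupts". */
    static unsigned long fake_lock_irqsave(void)
    {
            unsigned long flags = irqs_enabled;
            irqs_enabled = 0;
            return flags;
    }

    /* Restore the saved state - do NOT unconditionally re-enable,
     * which is what the _irq (as opposed to _irqsave) variant does. */
    static void fake_unlock_irqrestore(unsigned long flags)
    {
            irqs_enabled = flags;
    }

    int main(void)
    {
            /* Called from "hardirq" context: interrupts already off. */
            irqs_enabled = 0;
            unsigned long flags = fake_lock_irqsave();
            fake_unlock_irqrestore(flags);
            assert(irqs_enabled == 0);  /* caller's state preserved */
            return 0;
    }
    ```

    With the unconditional read_unlock_irq() in the trace above, the
    same sequence would have re-enabled interrupts inside hardirq
    context, which is what lockdep flagged.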
     

19 Oct, 2007

1 commit

  • schedstat is useful in investigating CPU scheduler behavior. Ideally,
    I think it is beneficial to have it on all the time. However, the
    cost of turning it on in a production system is quite high, largely
    due to the number of events it collects and also due to its large
    memory footprint.

    Most of the fields probably don't need to be a full 64 bits on a
    64-bit arch. Rolling over 4 billion events will most likely take a
    long time, and user-space tools can be made to accommodate that. I'm
    proposing that the kernel cut back the width of most of these
    variables on 64-bit systems. (Note: the following patch doesn't
    affect 32-bit systems.)

    Signed-off-by: Ken Chen
    Signed-off-by: Ingo Molnar

    Ken Chen
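    The memory saving is easy to see with two illustrative structs (not
    the kernel's actual schedstat layout): halving the counter width
    halves the per-entity footprint on a 64-bit system.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical stat counters at full 64-bit width ... */
    struct stats_wide {
            uint64_t wait_count, run_count, sleep_count, migrations;
    };

    /* ... and narrowed to 32 bits, accepting occasional wrap-around
     * that userspace tools can detect and compensate for. */
    struct stats_narrow {
            uint32_t wait_count, run_count, sleep_count, migrations;
    };

    int main(void)
    {
            assert(sizeof(struct stats_wide) == 32);
            assert(sizeof(struct stats_narrow) == 16); /* half the footprint */
            return 0;
    }
    ```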
     

15 Oct, 2007

27 commits


05 Sep, 2007

1 commit


23 Aug, 2007

1 commit

  • Construct a more or less wall-clock time out of sched_clock(), by
    using ACPI-idle's existing knowledge about how much time we spent
    idling. This allows the rq clock to work around TSC-stops-in-C2,
    TSC-gets-corrupted-in-C3 type of problems.

    ( Besides the scheduler's statistics, this also benefits blktrace
    and printk timestamps. )

    Furthermore, the precise before-C2/C3-sleep and after-C2/C3-wakeup
    callbacks allow the scheduler to get the most out of the period where
    the CPU has a reliable TSC. This results in slightly more precise
    task statistics.

    The ACPI bits were acked by Len.

    Signed-off-by: Ingo Molnar
    Acked-by: Len Brown

    Ingo Molnar
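    The compensation idea in miniature: a counter that freezes while the
    CPU sleeps can still approximate wall-clock time if the idle driver
    reports how long each sleep lasted. This is a toy userspace sketch
    with made-up numbers, not the kernel's sched_clock() implementation:

    ```c
    #include <assert.h>
    #include <stdint.h>

    static uint64_t tsc_ns;   /* counter that stops in C2/C3 sleep */
    static uint64_t idle_ns;  /* idle time reported by the idle driver */

    /* Wall-clock-ish time: the raw counter plus the idle periods it
     * slept through (the role ACPI-idle plays for sched_clock() here). */
    static uint64_t sched_clock_sketch(void)
    {
            return tsc_ns + idle_ns;
    }

    int main(void)
    {
            tsc_ns = 1000;                    /* ran for 1000 ns */
            assert(sched_clock_sketch() == 1000);

            idle_ns += 500;  /* slept 500 ns in C2; the TSC froze */
            tsc_ns += 200;   /* then ran another 200 ns */
            assert(sched_clock_sketch() == 1700);
            return 0;
    }
    ```

    The before-sleep/after-wakeup callbacks in the commit are what let
    the accounting switch between the two sources at precisely the right
    moments.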
     

11 Aug, 2007

1 commit


09 Aug, 2007

2 commits