12 Oct, 2006

1 commit

  • This likely()/unlikely() profiling is pretty fun. I found a few possible
    problems in sched.c.

    The effect of this patch may not be measurable on its own, but when I did
    measure long ago, nooping out (un)likely cost a couple of % on
    scheduler-heavy benchmarks, so it all adds up.

    Tweak some branch hints (a sketch of the likely()/unlikely() pattern
    follows the list):

    - the 2nd 64-bit word of the bitmask is likely to be populated, because it
    contains the first 28 bits (nearly 3/4) of the normal priorities
    (ratio of 669669:691 ~= 1000:1).

    - it isn't unlikely that a context switch goes to another process: the
    scheduler might be rapidly switching to and from the idle process (ratios
    of 475815:419004 and 471330:423544). Drop the hint and let the hardware
    branch predictor decide.

    - preempt_enable seems to be called very often within a nested
    preempt_disable or with interrupts disabled (ratio of 3567760:87965 ~= 40:1).
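
    To make the mechanism concrete, here is a minimal, self-contained sketch
    (not the kernel's actual sched.c code) of how such hints are written: the
    kernel's likely()/unlikely() macros wrap __builtin_expect(), and
    first_populated_word() below is a hypothetical stand-in for the two-word
    bitmask check described in the first hint.

        #include <stdio.h>

        /* The kernel defines these roughly as follows; the hint only steers
         * the compiler's block layout and static branch prediction, it does
         * not change behaviour. */
        #define likely(x)   __builtin_expect(!!(x), 1)
        #define unlikely(x) __builtin_expect(!!(x), 0)

        /* Hypothetical stand-in for the bitmask check above: the first
         * 64-bit word (mostly realtime priorities) is rarely populated, so
         * the second word, which holds most of the normal priorities, is
         * the likely place to find a set bit. */
        static int first_populated_word(const unsigned long long b[2])
        {
            if (unlikely(b[0]))
                return 0;
            if (likely(b[1]))
                return 1;
            return -1;      /* no runnable task at all */
        }

        int main(void)
        {
            unsigned long long bitmap[2] = { 0, 1ULL << 5 };
            printf("first populated word: %d\n",
                   first_populated_word(bitmap));
            return 0;
        }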

    Signed-off-by: Nick Piggin
    Acked-by: Ingo Molnar
    Cc: Daniel Walker
    Cc: Hua Zhong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

27 Mar, 2006

1 commit

  • This patch introduces the C-language equivalent of the function: int
    sched_find_first_bit(const unsigned long *b);

    In include/asm-generic/bitops/sched.h

    This code is largely copied from include/asm-powerpc/bitops.h.
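
    A minimal sketch, assuming 64-bit longs, of what such a generic C fallback
    could look like; __builtin_ctzl() stands in for the kernel's __ffs()
    helper, and the surrounding program is only for illustration (the actual
    header differs in detail and also handles 32-bit longs).

        #include <stdio.h>

        /* The scheduler's priority bitmap covers 140 priorities, which fit
         * in three 64-bit longs. */
        #define MAX_PRIO        140
        #define BITMAP_LONGS    ((MAX_PRIO + 63) / 64)

        /* Generic C search: scan the words in order and return the index of
         * the first set bit.  __builtin_ctzl() plays the role of __ffs(). */
        static int sched_find_first_bit(const unsigned long *b)
        {
            if (b[0])
                return __builtin_ctzl(b[0]);
            if (b[1])
                return __builtin_ctzl(b[1]) + 64;
            return __builtin_ctzl(b[2]) + 128;
        }

        int main(void)
        {
            /* Bit 100 set: the highest-priority runnable "task". */
            unsigned long bitmap[BITMAP_LONGS] = { 0, 1UL << 36, 0 };
            printf("first set bit: %d\n", sched_find_first_bit(bitmap));
            return 0;
        }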

    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita