07 Jun, 2015

1 commit

  • preempt_schedule_context() is a tracing-safe preemption point, but it is
    only used when CONFIG_CONTEXT_TRACKING=y. Other configs have had tracing
    recursion issues since commit:

    b30f0e3ffedf ("sched/preempt: Optimize preemption operations on __schedule() callers")

    introduced function-based preempt_count_*() ops.

    Let's make it available on all configs and give it a more appropriate
    name for its new position.

    Reported-by: Fengguang Wu
    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1433432349-1021-3-git-send-email-fweisbec@gmail.com
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
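    For reference, a tracing-safe preemption point has roughly the shape
    below. This is an illustrative sketch in the style of the kernel code
    of that era, not the exact implementation: exception_enter()/
    exception_exit() are the context-tracking hooks (stubs when
    CONFIG_CONTEXT_TRACKING=n), and the non-traced __preempt_count_*() ops
    are what keep a preemption tracer from recursing into this path.

        asmlinkage __visible void __sched notrace preempt_schedule_context(void)
        {
                enum ctx_state prev_ctx;

                if (likely(!preemptible()))
                        return;

                do {
                        /* Non-traced count ops: a tracer hooked into the
                         * traced preempt_count_add()/sub() cannot recurse
                         * back into this function. */
                        __preempt_count_add(PREEMPT_ACTIVE);
                        prev_ctx = exception_enter();
                        __schedule();
                        exception_exit(prev_ctx);
                        __preempt_count_sub(PREEMPT_ACTIVE);

                        barrier();
                } while (need_resched());
        }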
     

28 Oct, 2014

1 commit

  • task_preempt_count() is pointless if the preemption counter is per-cpu
    (currently this is x86 only). It is only valid if the task is not
    running, and even in this case the only information it can provide is
    the state of the PREEMPT_ACTIVE bit.

    Change its single caller to check p->on_rq instead, which should be
    equivalent as long as p->state != TASK_RUNNING, and kill this helper.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Steven Rostedt
    Cc: Kirill Tkhai
    Cc: Alexander Graf
    Cc: Andrew Morton
    Cc: Arnd Bergmann
    Cc: Christoph Lameter
    Cc: Linus Torvalds
    Cc: linux-arch@vger.kernel.org
    Link: http://lkml.kernel.org/r/20141008183348.GC17495@redhat.com
    Signed-off-by: Ingo Molnar

    Oleg Nesterov
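    An illustrative sketch of the idea (the helper name below is
    hypothetical and the real call site in kernel/sched/core.c is not
    reproduced here): once the task is known not to be running, "still on
    the runqueue" carries the same information as "preempted with
    PREEMPT_ACTIVE set", so the check no longer needs a per-task copy of
    the preemption counter.

        /* Hypothetical helper, for illustration only. */
        static bool task_was_preempted(struct task_struct *p)
        {
                /* Old form: peek at the task's saved preemption counter,
                 * which is only meaningful while the task is not running:
                 *
                 *      return task_preempt_count(p) & PREEMPT_ACTIVE;
                 */

                /* New form: a task with p->state != TASK_RUNNING that is
                 * still on the runqueue must have been preempted rather
                 * than properly dequeued. */
                return p->on_rq;
        }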
     

11 Dec, 2013

1 commit

  • While hunting a preemption issue with Alexander, Ben noticed that the
    currently generic PREEMPT_NEED_RESCHED code is horribly broken for
    load-store architectures.

    We currently rely on the IPI to fold TIF_NEED_RESCHED into
    PREEMPT_NEED_RESCHED, but when this IPI lands after the preempt-count
    has already been loaded but before it is stored back, the store will
    erase the PREEMPT_NEED_RESCHED change.

    The current preempt-count only works on load-store archs because
    interrupts are assumed to be completely balanced with regard to their
    preempt_count fiddling; the preempt_count load done before the
    interrupt will match the preempt_count state after it, and therefore
    nothing gets lost.

    This patch removes the PREEMPT_NEED_RESCHED usage from generic code and
    pushes it into x86 arch code; the generic code goes back to relying on
    TIF_NEED_RESCHED.

    Boot tested on x86_64 and compile tested on ppc64.

    Reported-by: Benjamin Herrenschmidt
    Reported-and-Tested-by: Alexander Graf
    Signed-off-by: Peter Zijlstra
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/20131128132641.GP10022@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
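    The race is easiest to see as an interleaving. The sketch below is
    illustrative pseudocode, not actual kernel code:

        /*
         * Task context (load-store arch)     Resched IPI on the same CPU
         * ------------------------------     ---------------------------
         * r0 = preempt_count;   // load
         *                                    // fold TIF_NEED_RESCHED by
         *                                    // updating the
         *                                    // PREEMPT_NEED_RESCHED bit
         *                                    // inside preempt_count
         * r0 += 1;              // modify
         * preempt_count = r0;   // store --> writes back the stale value;
         *                                    the folded bit is lost
         *
         * An interrupt that merely increments and then decrements the
         * count is harmless: by the time the store happens, memory again
         * holds the value the earlier load observed. Only the asymmetric
         * PREEMPT_NEED_RESCHED update can be erased, which is why the bit
         * stays in preempt_count only on x86, where the increment or
         * decrement is a single interrupt-atomic instruction on a per-cpu
         * variable.
         */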
     

28 Sep, 2013

1 commit

  • Yuanhan reported a serious throughput regression in his pigz
    benchmark. Using the ftrace patch, I found that several idle
    paths need more TLC before we can switch the generic
    need_resched() over to preempt_need_resched.

    The preemption paths benefit most from preempt_need_resched and
    do indeed use it; all other need_resched() users don't really
    care that much, so reverting need_resched() back to
    tif_need_resched() is the simple and safe solution.

    Reported-by: Yuanhan Liu
    Signed-off-by: Peter Zijlstra
    Cc: Fengguang Wu
    Cc: Huang Ying
    Cc: lkp@linux.intel.com
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/20130927153003.GF15690@laptop.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
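    In effect the revert restores the thread-info based test. A minimal
    sketch of that form (illustrative, not the exact diff):

        static __always_inline bool need_resched(void)
        {
                /* Back to the thread-info flag, i.e.
                 * test_thread_flag(TIF_NEED_RESCHED), rather than the
                 * PREEMPT_NEED_RESCHED bit folded into the preempt count. */
                return unlikely(tif_need_resched());
        }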
     

25 Sep, 2013

4 commits

  • Remove the bloat of the C calling convention from the
    preempt_enable() sites by creating an ASM wrapper that lets us
    do an asm("call ___preempt_schedule") instead.

    calling.h bits by Andi Kleen.

    Suggested-by: Linus Torvalds
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/n/tip-tk7xdi1cvvxewixzke8t8le1@git.kernel.org
    [ Fixed build error. ]
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
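    A sketch of the C-visible side on x86 (illustrative): the
    preempt_enable() fast path emits a bare call to an assembly thunk that
    saves and restores the caller-clobbered registers itself, so the
    inlined site does not have to spill registers for the C calling
    convention.

        #ifdef CONFIG_PREEMPT
        /* The thunk lives in arch assembly (built from the calling.h
         * register save/restore helpers); only its symbol is visible
         * here. */
        extern asmlinkage void ___preempt_schedule(void);
        # define __preempt_schedule() asm ("call ___preempt_schedule")
        #endif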
     
  • Rewrite the preempt_count macros in order to extract the 3 basic
    preempt_count value modifiers:

    __preempt_count_add()
    __preempt_count_sub()

    and the new:

    __preempt_count_dec_and_test()

    And since we're at it anyway, replace the unconventional
    $op_preempt_count names with the more conventional preempt_count_$op.

    Since these basic operators are equivalent to the previous _notrace()
    variants, do away with the _notrace() versions.

    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/n/tip-ewbpdbupy9xpsjhg960zwbv8@git.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
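    The three basic modifiers look roughly as follows in their generic,
    thread-info based form (a sketch, after PREEMPT_NEED_RESCHED was later
    pushed back into x86; per-arch versions such as the x86 per-cpu one
    differ):

        static __always_inline void __preempt_count_add(int val)
        {
                *preempt_count_ptr() += val;
        }

        static __always_inline void __preempt_count_sub(int val)
        {
                *preempt_count_ptr() -= val;
        }

        /* True when the count drops to zero and a reschedule is pending,
         * i.e. when preempt_enable() should call into the scheduler. */
        static __always_inline bool __preempt_count_dec_and_test(void)
        {
                return !--*preempt_count_ptr() && tif_need_resched();
        }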
     
  • We need a few special preempt_count accessors:
    - task_preempt_count() for when we're interested in the preemption
      count of another (non-running) task.
    - init_task_preempt_count() for properly initializing the preemption
      count.
    - init_idle_preempt_count(), a special case of the above for the idle
      threads.

    With these, no generic code ever touches thread_info::preempt_count
    anymore, and architectures can choose to remove it.

    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/n/tip-jf5swrio8l78j37d06fzmo4r@git.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
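    In their generic, thread_info based form the accessors are roughly the
    following (a sketch; an architecture that keeps the counter per-cpu,
    as x86 later did, provides its own versions, and PREEMPT_DISABLED /
    PREEMPT_ENABLED are assumed here to be the usual initial count
    values):

        #define task_preempt_count(p) \
                (task_thread_info(p)->preempt_count)

        #define init_task_preempt_count(p) do { \
                task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
        } while (0)

        #define init_idle_preempt_count(p, cpu) do { \
                task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
        } while (0)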
     
  • In order to prepare for per-arch implementations of preempt_count,
    move the required bits into an asm-generic header and use this for
    all archs.

    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/n/tip-h5j0c1r3e3fk015m30h8f1zx@git.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
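    The asm-generic flavour keeps the count in thread_info and reduces the
    required bits to a handful of trivial accessors, roughly as below (a
    sketch, not the full header); an architecture can then override the
    whole thing by supplying its own asm/preempt.h.

        /* <asm-generic/preempt.h>, in spirit */
        static __always_inline int preempt_count(void)
        {
                return current_thread_info()->preempt_count;
        }

        static __always_inline int *preempt_count_ptr(void)
        {
                return &current_thread_info()->preempt_count;
        }

        static __always_inline void preempt_count_set(int pc)
        {
                *preempt_count_ptr() = pc;
        }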