12 Dec, 2011

2 commits

  • On the irq exit path, tick_nohz_irq_exit()
    may raise a softirq, which leads to the wakeup
    path and to select_task_rq_fair(), which uses RCU
    to iterate the scheduler domains.

    This is an illegal use of RCU because we may be in RCU
    extended quiescent state if we interrupted an RCU-idle
    window in the idle loop:

    [ 132.978883] ===============================
    [ 132.978883] [ INFO: suspicious RCU usage. ]
    [ 132.978883] -------------------------------
    [ 132.978883] kernel/sched_fair.c:1707 suspicious rcu_dereference_check() usage!
    [ 132.978883]
    [ 132.978883] other info that might help us debug this:
    [ 132.978883]
    [ 132.978883]
    [ 132.978883] rcu_scheduler_active = 1, debug_locks = 0
    [ 132.978883] RCU used illegally from extended quiescent state!
    [ 132.978883] 2 locks held by swapper/0:
    [ 132.978883] #0: (&p->pi_lock){-.-.-.}, at: [] try_to_wake_up+0x39/0x2f0
    [ 132.978883] #1: (rcu_read_lock){.+.+..}, at: [] select_task_rq_fair+0x6a/0xec0
    [ 132.978883]
    [ 132.978883] stack backtrace:
    [ 132.978883] Pid: 0, comm: swapper Tainted: G W 3.0.0+ #178
    [ 132.978883] Call Trace:
    [ 132.978883] [] lockdep_rcu_suspicious+0xe6/0x100
    [ 132.978883] [] select_task_rq_fair+0x749/0xec0
    [ 132.978883] [] ? select_task_rq_fair+0x6a/0xec0
    [ 132.978883] [] ? do_raw_spin_lock+0x54/0x150
    [ 132.978883] [] ? trace_hardirqs_on+0xd/0x10
    [ 132.978883] [] try_to_wake_up+0xd3/0x2f0
    [ 132.978883] [] ? ktime_get+0x68/0xf0
    [ 132.978883] [] wake_up_process+0x15/0x20
    [ 132.978883] [] raise_softirq_irqoff+0x65/0x110
    [ 132.978883] [] __hrtimer_start_range_ns+0x415/0x5a0
    [ 132.978883] [] ? do_raw_spin_unlock+0x5e/0xb0
    [ 132.978883] [] hrtimer_start+0x18/0x20
    [ 132.978883] [] tick_nohz_stop_sched_tick+0x393/0x450
    [ 132.978883] [] irq_exit+0xd2/0x100
    [ 132.978883] [] do_IRQ+0x66/0xe0
    [ 132.978883] [] common_interrupt+0x13/0x13
    [ 132.978883] [] ? native_safe_halt+0xb/0x10
    [ 132.978883] [] ? trace_hardirqs_on+0xd/0x10
    [ 132.978883] [] default_idle+0xba/0x370
    [ 132.978883] [] amd_e400_idle+0x5e/0x130
    [ 132.978883] [] cpu_idle+0xb6/0x120
    [ 132.978883] [] rest_init+0xef/0x150
    [ 132.978883] [] ? rest_init+0x52/0x150
    [ 132.978883] [] start_kernel+0x3da/0x3e5
    [ 132.978883] [] x86_64_start_reservations+0x131/0x135
    [ 132.978883] [] x86_64_start_kernel+0x103/0x112

    Fix this by calling rcu_idle_enter() after tick_nohz_irq_exit().
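
    A minimal sketch of the reordering on the irq exit path (simplified;
    the guards in the real irq_exit() are omitted):

    void irq_exit(void)
    {
            /* May raise a softirq and wake a task: RCU must still be usable. */
            tick_nohz_irq_exit();

            /* Enter the RCU extended quiescent state only afterwards. */
            rcu_idle_enter();
    }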

    Signed-off-by: Frederic Weisbecker
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Frederic Weisbecker
     
  • The tick_nohz_stop_sched_tick() function, which tries to delay
    the next timer tick as long as possible, can be called from two
    places:

    - From the idle loop, to start the dyntick idle mode
    - From interrupt exit, if we have interrupted the dyntick
    idle mode, so that we reprogram the next tick event in
    case the irq changed some internal state that requires this
    action.

    There are only a few minor differences between the two cases,
    handled inside that function and driven by the per-cpu ts->inidle
    variable and the inidle parameter. Together they guarantee
    that we only update the dyntick mode on irq exit if we actually
    interrupted the dyntick idle mode, and that we enter the RCU extended
    quiescent state from idle loop entry only.

    Split this function into:

    - tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
    dynticks idle mode unconditionally if it can, and enters into RCU
    extended quiescent state.

    - tick_nohz_irq_exit(), which only updates the dynticks idle mode
    when ts->inidle is set (i.e. if tick_nohz_idle_enter() has been called).

    To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed
    into tick_nohz_idle_exit().

    This simplifies the code and micro-optimizes the irq exit path (no need
    for local_irq_save() there). It also prepares for the split between the
    dyntick and RCU extended quiescent state logic, which we'll need in order
    to further fix illegal uses of RCU in extended quiescent states in the
    idle loop.
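
    The resulting idle-loop usage might look like this (a sketch;
    hw_safe_halt() stands in for the arch idle primitive):

    while (1) {
            tick_nohz_idle_enter();         /* ts->inidle = 1, stop the tick, enter RCU EQS */
            while (!need_resched())
                    hw_safe_halt();         /* hypothetical arch idle instruction */
            tick_nohz_idle_exit();          /* restart the tick, leave RCU EQS */
            schedule();
    }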

    Signed-off-by: Frederic Weisbecker
    Cc: Mike Frysinger
    Cc: Guan Xuetao
    Cc: David Miller
    Cc: Chris Metcalf
    Cc: Hans-Christian Egtvedt
    Cc: Ralf Baechle
    Cc: Paul E. McKenney
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: H. Peter Anvin
    Cc: Russell King
    Cc: Paul Mackerras
    Cc: Heiko Carstens
    Cc: Paul Mundt
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Frederic Weisbecker
     

31 Oct, 2011

1 commit

  • The changed files were only including linux/module.h for the
    EXPORT_SYMBOL infrastructure, and nothing else. Revector them
    onto the isolated export header for faster compile times.

    Nothing to see here but a whole lot of instances of:

    -#include <linux/module.h>
    +#include <linux/export.h>

    This commit is only changing the kernel dir; next targets
    will probably be mm, fs, the arch dirs, etc.

    Signed-off-by: Paul Gortmaker

    Paul Gortmaker
     

21 Jul, 2011

1 commit

  • The rcu_read_unlock_special() function relies on in_irq() to exclude
    scheduler activity from interrupt level. This fails because irq_exit()
    can invoke the scheduler after clearing the preempt_count() bits that
    in_irq() uses to determine that it is at interrupt level. This situation
    can result in failures as follows:

    $task                   IRQ                     SoftIRQ

    rcu_read_lock()

    /* do stuff */

    <preempt> |= UNLOCK_BLOCKED

    rcu_read_unlock()
      --t->rcu_read_lock_nesting

                            irq_enter();
                            /* do stuff, don't use RCU */
                            irq_exit();
                              sub_preempt_count(IRQ_EXIT_OFFSET);
                            invoke_softirq()

                                                    ttwu();
                                                      spin_lock_irq(&pi->lock)
                                                      rcu_read_lock();
                                                      /* do stuff */
                                                      rcu_read_unlock();
                                                        rcu_read_unlock_special()
                                                          rcu_report_exp_rnp()
                                                            ttwu()
                                                              spin_lock_irq(&pi->lock) /* deadlock */

      rcu_read_unlock_special(t);

    Ed can trigger this 'easily' because invoke_softirq() immediately
    does a ttwu() of ksoftirqd/# instead of doing the in-place softirq work
    first, but even without that the above can happen.

    Cure this by also excluding softirqs from the
    rcu_read_unlock_special() handler and ensuring the force_irqthreads
    ksoftirqd/# wakeup is done from full softirq context.
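
    The widened exclusion can be expressed as a hypothetical helper (the
    in_serving_softirq() test is the addition described above):

    static int rcu_unlock_delayed_context(void)
    {
            /* Exclude softirq context as well, not just hardirq context. */
            return in_irq() || in_serving_softirq();
    }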

    [ Alternatively, delaying the ->rcu_read_lock_nesting decrement
    until after the special handling would make the thing more robust
    in the face of interrupts as well. And there is a separate patch
    for that. ]

    Cc: Thomas Gleixner
    Reported-and-tested-by: Ed Tomlinson
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney

    Peter Zijlstra
     

15 Jun, 2011

1 commit

  • Commit a26ac2455ffcf3 (rcu: move TREE_RCU from softirq to kthread)
    introduced a performance regression. In an AIM7 test, this commit degraded
    performance by about 40%.

    The commit runs rcu callbacks in a kthread instead of softirq. We observed
    a high rate of context switching caused by this. Our test system has
    64 CPUs and HZ is 1000, so we saw more than 64k context switches per second
    caused by RCU's per-CPU kthread. A trace showed that most of
    the time the RCU per-CPU kthread doesn't actually handle any callbacks,
    but instead just does a very small amount of work handling grace periods.
    This means that RCU's per-CPU kthreads are making the scheduler do quite
    a bit of work in order to allow a very small amount of RCU-related
    processing to be done.

    Alex Shi's analysis determined that this slowdown is due to lock
    contention within the scheduler. Unfortunately, as Peter Zijlstra points
    out, the scheduler's real-time semantics require global action, which
    means that this contention is inherent in real-time scheduling. (Yes,
    perhaps someone will come up with a workaround -- otherwise, -rt is not
    going to do well on large SMP systems -- but this patch will work around
    this issue in the meantime. And "the meantime" might well be forever.)

    This patch therefore re-introduces softirq processing to RCU, but only
    for core RCU work. RCU callbacks are still executed in kthread context,
    so that only a small amount of RCU work runs in softirq context in the
    common case. This should minimize ksoftirqd execution, allowing us to
    skip boosting of ksoftirqd for CONFIG_RCU_BOOST=y kernels.
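
    A minimal sketch of the resulting dispatch, assuming the RCU_SOFTIRQ
    vector and an invoke_rcu_core() helper:

    static void invoke_rcu_core(void)
    {
            raise_softirq(RCU_SOFTIRQ);     /* core grace-period work via softirq */
    }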

    Signed-off-by: Shaohua Li
    Tested-by: "Alex,Shi"
    Signed-off-by: Paul E. McKenney

    Shaohua Li
     

06 May, 2011

1 commit

  • If RCU priority boosting is to be meaningful, callback invocation must
    be boosted in addition to preempted RCU readers. Otherwise, in the presence
    of CPU-bound real-time threads, the grace period ends, but the callbacks
    don't get invoked. If the callbacks don't get invoked, the associated
    memory doesn't get freed, so the system is still subject to OOM.

    But it is not reasonable to priority-boost RCU_SOFTIRQ, so this commit
    moves the callback invocations to a kthread, which can be boosted easily.
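
    A minimal sketch of the hand-off, using the rcu_cpu_kthread_task
    variable mentioned below (invoke_rcu_cpu_kthread() is an assumed name):

    static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);

    static void invoke_rcu_cpu_kthread(void)
    {
            struct task_struct *t = __this_cpu_read(rcu_cpu_kthread_task);

            if (t)
                    wake_up_process(t);     /* callbacks run in a boostable kthread */
    }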

    Also add comments and properly synchronize all accesses to
    rcu_cpu_kthread_task, as suggested by Lai Jiangshan.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     

31 Mar, 2011

1 commit


23 Mar, 2011

1 commit

  • ksoftirqd, kworker, migration, and pktgend kthreads can be created with
    kthread_create_on_node(), to get proper NUMA affinities for their stack and
    task_struct.
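
    For example, ksoftirqd creation might look like this (a sketch;
    run_ksoftirqd and cpu as used in kernel/softirq.c):

    struct task_struct *p;

    p = kthread_create_on_node(run_ksoftirqd, NULL,
                               cpu_to_node(cpu), "ksoftirqd/%d", cpu);
    if (!IS_ERR(p))
            kthread_bind(p, cpu);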

    Signed-off-by: Eric Dumazet
    Acked-by: David S. Miller
    Reviewed-by: Andi Kleen
    Acked-by: Rusty Russell
    Acked-by: Tejun Heo
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: David Howells
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Dumazet
     

16 Mar, 2011

1 commit

  • * 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (116 commits)
    x86: Enable forced interrupt threading support
    x86: Mark low level interrupts IRQF_NO_THREAD
    x86: Use generic show_interrupts
    x86: ioapic: Avoid redundant lookup of irq_cfg
    x86: ioapic: Use new move_irq functions
    x86: Use the proper accessors in fixup_irqs()
    x86: ioapic: Use irq_data->state
    x86: ioapic: Simplify irq chip and handler setup
    x86: Cleanup the genirq name space
    genirq: Add chip flag to force mask on suspend
    genirq: Add desc->irq_data accessor
    genirq: Add comments to Kconfig switches
    genirq: Fixup fasteoi handler for oneshot mode
    genirq: Provide forced interrupt threading
    sched: Switch wait_task_inactive to schedule_hrtimeout()
    genirq: Add IRQF_NO_THREAD
    genirq: Allow shared oneshot interrupts
    genirq: Prepare the handling of shared oneshot interrupts
    genirq: Make warning in handle_percpu_event useful
    x86: ioapic: Move trigger defines to io_apic.h
    ...

    Fix up trivial(?) conflicts in arch/x86/pci/xen.c due to genirq name
    space changes clashing with the Xen cleanups. The set_irq_msi() had
    moved to xen_bind_pirq_msi_to_irq().

    Linus Torvalds
     

26 Feb, 2011

1 commit

  • Add a commandline parameter "threadirqs" which forces all interrupts except
    those marked IRQF_NO_THREAD to run threaded. That's mostly a debug option to
    allow retrieving better debug data from crashing interrupt handlers. If
    "threadirqs" is not enabled on the kernel command line, then there is no
    impact in the interrupt hotpath.

    Architecture code needs to select CONFIG_IRQ_FORCED_THREADING after
    marking the interrupts which can't be threaded IRQF_NO_THREAD. All
    interrupts which have IRQF_TIMER set are implicitly marked
    IRQF_NO_THREAD. All PER_CPU interrupts are also excluded.
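
    A driver that must never run threaded passes the flag at request time
    (the handler, name and dev below are illustrative):

    ret = request_irq(irq, my_handler, IRQF_NO_THREAD,
                      "my-device", dev);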

    Forced threading of hard interrupts also forces all soft interrupt
    handling into thread context.

    When enabled it might slow down things a bit, but for debugging problems in
    interrupt code it's a reasonable penalty as it does not immediately
    crash and burn the machine when an interrupt handler is buggy.

    Some test results on a Core2Duo machine:

    Cache cold run of:
    # time git grep irq_desc

                non-threaded    threaded
    real        1m18.741s       1m19.061s
    user        0m1.874s        0m1.757s
    sys         0m5.843s        0m5.427s

    # iperf -c server
    non-threaded
    [ 3] 0.0-10.0 sec 1.09 GBytes 933 Mbits/sec
    [ 3] 0.0-10.0 sec 1.09 GBytes 934 Mbits/sec
    [ 3] 0.0-10.0 sec 1.09 GBytes 933 Mbits/sec
    threaded
    [ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
    [ 3] 0.0-10.0 sec 1.09 GBytes 934 Mbits/sec
    [ 3] 0.0-10.0 sec 1.09 GBytes 937 Mbits/sec

    Signed-off-by: Thomas Gleixner
    Cc: Peter Zijlstra
    LKML-Reference:

    Thomas Gleixner
     

09 Feb, 2011

1 commit

  • ksoftirqd() calls do_softirq() which switches stacks on several
    architectures. That makes no sense at all. ksoftirqd's stack is
    sufficient.

    Call __do_softirq() directly.
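
    A heavily simplified sketch of the change (the real loop also manages
    preemption and thread state):

    static int run_ksoftirqd(void *arg)
    {
            while (!kthread_should_stop()) {
                    if (local_softirq_pending())
                            __do_softirq(); /* was do_softirq(), which may switch stacks */
                    schedule();
            }
            return 0;
    }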

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Cc: Benjamin Herrenschmidt
    Cc: Heiko Carstens
    Acked-by: David Miller
    Cc: Paul Mundt
    Reviewed-by: Frank Rowand
    LKML-Reference:

    Thomas Gleixner
     

26 Jan, 2011

1 commit


14 Jan, 2011

1 commit

  • An architecture which needs USE_GENERIC_SMP_HELPERS has to select it,
    rather than leaving the choice to the user, since such architectures
    don't provide their own implementations.

    Also, move on_each_cpu() to kernel/smp.c; it is strange to put it in
    kernel/softirq.c.

    For an architecture which doesn't use USE_GENERIC_SMP_HELPERS, e.g.
    blackfin, only on_each_cpu() is compiled.
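
    The moved helper keeps its calling convention; an illustrative
    (hypothetical) user:

    static void drain_local(void *info)
    {
            /* per-CPU work */
    }

    static void drain_all(void)
    {
            on_each_cpu(drain_local, NULL, 1);      /* wait == 1: wait for all CPUs */
    }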

    Signed-off-by: Amerigo Wang
    Cc: David Howells
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Yinghai Lu
    Cc: Peter Zijlstra
    Cc: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Amerigo Wang
     

08 Jan, 2011

1 commit

  • * 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (30 commits)
    gameport: use this_cpu_read instead of lookup
    x86: udelay: Use this_cpu_read to avoid address calculation
    x86: Use this_cpu_inc_return for nmi counter
    x86: Replace uses of current_cpu_data with this_cpu ops
    x86: Use this_cpu_ops to optimize code
    vmstat: User per cpu atomics to avoid interrupt disable / enable
    irq_work: Use per cpu atomics instead of regular atomics
    cpuops: Use cmpxchg for xchg to avoid lock semantics
    x86: this_cpu_cmpxchg and this_cpu_xchg operations
    percpu: Generic this_cpu_cmpxchg() and this_cpu_xchg support
    percpu,x86: relocate this_cpu_add_return() and friends
    connector: Use this_cpu operations
    xen: Use this_cpu_inc_return
    taskstats: Use this_cpu_ops
    random: Use this_cpu_inc_return
    fs: Use this_cpu_inc_return in buffer.c
    highmem: Use this_cpu_xx_return() operations
    vmstat: Use this_cpu_inc_return for vm statistics
    x86: Support for this_cpu_add, sub, dec, inc_return
    percpu: Generic support for this_cpu_add, sub, dec, inc_return
    ...

    Fixed up conflicts: in arch/x86/kernel/{apic/nmi.c, apic/x2apic_uv_x.c, process.c}
    as per Tejun.

    Linus Torvalds
     

07 Jan, 2011

1 commit


17 Dec, 2010

1 commit

  • __get_cpu_var() can be replaced with this_cpu_read() and will then use a
    single read instruction with implied address calculation to access the
    correct per cpu instance.

    However, the address of a per cpu variable passed to __this_cpu_read()
    cannot be determined (since it's an implied address conversion through
    segment prefixes). Therefore apply this only to uses of __get_cpu_var
    where the address of the variable is not used.
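
    The conversion pattern looks like this (variable name illustrative),
    matching the constraint that only the value, not the address, is used:

    -       if (__get_cpu_var(ksoftirqd))
    +       if (__this_cpu_read(ksoftirqd))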

    Cc: Pekka Enberg
    Cc: Hugh Dickins
    Cc: Thomas Gleixner
    Acked-by: H. Peter Anvin
    Signed-off-by: Christoph Lameter
    Signed-off-by: Tejun Heo

    Christoph Lameter
     

18 Nov, 2010

1 commit


28 Oct, 2010

1 commit

  • …/git/tip/linux-2.6-tip

    * 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (50 commits)
    perf python scripting: Add futex-contention script
    perf python scripting: Fixup cut'n'paste error in sctop script
    perf scripting: Shut up 'perf record' final status
    perf record: Remove newline character from perror() argument
    perf python scripting: Support fedora 11 (audit 1.7.17)
    perf python scripting: Improve the syscalls-by-pid script
    perf python scripting: print the syscall name on sctop
    perf python scripting: Improve the syscalls-counts script
    perf python scripting: Improve the failed-syscalls-by-pid script
    kprobes: Remove redundant text_mutex lock in optimize
    x86/oprofile: Fix uninitialized variable use in debug printk
    tracing: Fix 'faild' -> 'failed' typo
    perf probe: Fix format specified for Dwarf_Off parameter
    perf trace: Fix detection of script extension
    perf trace: Use $PERF_EXEC_PATH in canned report scripts
    perf tools: Document event modifiers
    perf tools: Remove direct slang.h include
    perf_events: Fix for transaction recovery in group_sched_in()
    perf_events: Revert: Fix transaction recovery in group_sched_in()
    perf, x86: Use NUMA aware allocations for PEBS/BTS/DS allocations
    ...

    Linus Torvalds
     

24 Oct, 2010

1 commit


23 Oct, 2010

2 commits

  • Andrew Morton pointed out that almost all sched_setscheduler() callers
    use fixed parameters and can be converted to static. This reduces runtime
    memory use a little.
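
    The resulting caller pattern (fragment; the priority value is
    illustrative):

    static const struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

    sched_setscheduler(p, SCHED_FIFO, &param);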

    Signed-off-by: KOSAKI Motohiro
    Reported-by: Andrew Morton
    Acked-by: James Morris
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    Signed-off-by: Andrew Morton
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    KOSAKI Motohiro
     
  • … 'x86-quirks-for-linus', 'x86-setup-for-linus', 'x86-uv-for-linus' and 'x86-vm86-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

    * 'softirq-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    softirqs: Make wakeup_softirqd static

    * 'x86-debug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, asm: Restore parentheses around one pushl_cfi argument
    x86, asm: Fix ancient-GAS workaround
    x86, asm: Fix CFI macro invocations to deal with shortcomings in gas

    * 'x86-numa-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, numa: Assign CPUs to nodes in round-robin manner on fake NUMA

    * 'x86-quirks-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86: HPET force enable for CX700 / VIA Epia LT

    * 'x86-setup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, setup: Use string copy operation to optimze copy in kernel compression

    * 'x86-uv-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, UV: Use allocated buffer in tlb_uv.c:tunables_read()

    * 'x86-vm86-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, vm86: Fix preemption bug for int1 debug and int3 breakpoint handlers.

    Linus Torvalds
     

22 Oct, 2010

1 commit

  • * 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (96 commits)
    apic, x86: Use BIOS settings for IBS and MCE threshold interrupt LVT offsets
    apic, x86: Check if EILVT APIC registers are available (AMD only)
    x86: ioapic: Call free_irte only if interrupt remapping enabled
    arm: Use ARCH_IRQ_INIT_FLAGS
    genirq, ARM: Fix boot on ARM platforms
    genirq: Fix CONFIG_GENIRQ_NO_DEPRECATED=y build
    x86: Switch sparse_irq allocations to GFP_KERNEL
    genirq: Switch sparse_irq allocator to GFP_KERNEL
    genirq: Make sparse_lock a mutex
    x86: lguest: Use new irq allocator
    genirq: Remove the now unused sparse irq leftovers
    genirq: Sanitize dynamic irq handling
    genirq: Remove arch_init_chip_data()
    x86: xen: Sanitise sparse_irq handling
    x86: Use sane enumeration
    x86: uv: Clean up the direct access to irq_desc
    x86: Make io_apic.c local functions static
    genirq: Remove irq_2_iommu
    x86: Speed up the irq_remapped check in hot pathes
    intr_remap: Simplify the code further
    ...

    Fix up trivial conflicts in arch/x86/Kconfig

    Linus Torvalds
     

21 Oct, 2010

1 commit

  • With the addition of trace_softirq_raise() the softirq tracepoint got
    even more convoluted. Why the tracepoints take two pointers to assign
    an integer is beyond my comprehension.

    But adding an extra case which treats the first pointer as an unsigned
    long when the second pointer is NULL, including the back-and-forth
    type casting, is just horrible.

    Convert the softirq tracepoints to take a single unsigned int argument
    for the softirq vector number and fix the call sites.
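
    The call sites reduce to passing the vector number directly; a
    reconstructed illustration of the pattern (the exact old casts may
    differ):

    -       trace_softirq_raise((struct softirq_action *)(unsigned long)nr, NULL);
    +       trace_softirq_raise(nr);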

    Signed-off-by: Thomas Gleixner
    LKML-Reference:
    Acked-by: Peter Zijlstra
    Acked-by: mathieu.desnoyers@efficios.com
    Cc: Frederic Weisbecker
    Cc: Steven Rostedt

    Thomas Gleixner
     

19 Oct, 2010

3 commits

  • When the CPU is idle, irq_enter() calls tick_check_idle() on the first
    interrupt to signal the interruption of idle. But there is a problem if
    this call is done after __irq_enter(), as all routines in __irq_enter()
    may find stale time because tick_check_idle() has not yet run.

    Specifically, this affects trace calls in __irq_enter() when they use the
    global clock, and also the account_system_vtime() change in this patch,
    which wants to use sched_clock_cpu() to do proper irq timing.

    But tick_check_idle() was moved after __irq_enter() intentionally, to
    prevent unneeded ksoftirqd wakeups, by commit ee5f80a:

    irq: call __irq_enter() before calling the tick_idle_check
    Impact: avoid spurious ksoftirqd wakeups

    Moving tick_check_idle() before __irq_enter() and wrapping it with
    local_bh_enable/disable solves both problems, as sketched below.
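
    A sketch of the resulting irq_enter() ordering (simplified):

    void irq_enter(void)
    {
            int cpu = smp_processor_id();

            rcu_irq_enter();
            if (idle_cpu(cpu) && !in_interrupt()) {
                    local_bh_disable();     /* avoids spurious ksoftirqd wakeups */
                    tick_check_idle(cpu);
                    _local_bh_enable();
            }
            __irq_enter();                  /* now sees up-to-date time */
    }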

    Fixed-by: Yong Zhang
    Signed-off-by: Venkatesh Pallipadi
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Venkatesh Pallipadi
     
  • To account softirq time cleanly in the scheduler, we need to identify
    whether a softirq is invoked in ksoftirqd context or at the tail of a
    hardirq. Add PF_KSOFTIRQD for that purpose.

    As all PF flag bits are currently taken, create space by moving one of the
    infrequently used bits (PF_THREAD_BOUND) down in task_struct to be along
    with some other state fields.
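
    A sketch of the intended use (the accounting hook name below is
    hypothetical):

    /* At ksoftirqd startup: */
    current->flags |= PF_KSOFTIRQD;

    /* Later, when classifying softirq time: */
    if (current->flags & PF_KSOFTIRQD)
            account_ksoftirqd_time();       /* hypothetical accounting hook */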

    Signed-off-by: Venkatesh Pallipadi
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Venkatesh Pallipadi
     
  • Peter Zijlstra found a bug in the way softirq time is accounted in
    VIRT_CPU_ACCOUNTING on this thread:

    http://lkml.indiana.edu/hypermail//linux/kernel/1009.2/01366.html

    The problem is that softirq processing uses local_bh_disable internally.
    There is no way, later in the flow, to differentiate between whether a
    softirq is being processed or whether bh has merely been disabled. So a
    hardirq arriving while bh is disabled results in time being wrongly
    accounted as softirq.

    Looking at the code a bit more, the problem exists in !VIRT_CPU_ACCOUNTING
    as well, since account_system_time() in normal tick-based accounting also
    uses softirq_count(), which will be set even when not in softirq but with
    bh disabled.

    Peter also suggested the solution of using 2*SOFTIRQ_OFFSET as the irq
    count for local_bh_{disable,enable} and just SOFTIRQ_OFFSET while
    processing softirqs. The patch below does that and adds the API
    in_serving_softirq(), which returns whether we are currently processing
    a softirq.
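
    Since bh-disable now adds 2*SOFTIRQ_OFFSET while actual softirq service
    adds SOFTIRQ_OFFSET, the new test reduces to checking the low softirq
    bit; a sketch consistent with the description:

    #define in_serving_softirq()    (softirq_count() & SOFTIRQ_OFFSET)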

    Also change one of the usages of softirq_count in net/sched/cls_cgroup.c
    to in_serving_softirq.

    It looks like many usages of in_softirq really want in_serving_softirq.
    Those changes can be made individually on a case-by-case basis.

    Signed-off-by: Venkatesh Pallipadi
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Venkatesh Pallipadi
     

12 Oct, 2010

2 commits


22 Sep, 2010

1 commit


05 Jun, 2010

1 commit

  • The commit 80b5184cc537718122e036afe7e62d202b70d077 ("kernel/: convert cpu
    notifier to return encapsulate errno value") changed the return value of
    cpu notifier callbacks.

    Those callbacks no longer return NOTIFY_BAD on failures. But there
    are a few callbacks which are called directly at init time and which
    check the return value.

    I forgot to update the BUG_ON() checks in those direct callers in that
    commit; fix them up now, as sketched below.
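
    The direct-caller fix-ups follow this pattern (illustrative; err holds
    the notifier callback's return value):

    -       BUG_ON(err == NOTIFY_BAD);
    +       BUG_ON(err != NOTIFY_OK);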

    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

28 May, 2010

1 commit


11 May, 2010

1 commit

  • The addition of preemptible RCU to treercu resulted in a bit of
    confusion and inefficiency surrounding the handling of context switches
    for RCU-sched and for RCU-preempt. For RCU-sched, a context switch
    is a quiescent state, pure and simple, just like it always has been.
    For RCU-preempt, a context switch is in no way a quiescent state, but
    special handling is required when a task blocks in an RCU read-side
    critical section.

    However, the callout from the scheduler and the outer loop in ksoftirqd
    still call something named rcu_sched_qs(), whose name is no longer
    accurate. Furthermore, when rcu_check_callbacks() notes an RCU-sched
    quiescent state, it ends up unnecessarily (though harmlessly, aside
    from the performance hit) enqueuing the current task if it happens to
    be running in an RCU-preempt read-side critical section. This not only
    increases the maximum latency of scheduler_tick(), it also needlessly
    increases the overhead of the next outermost rcu_read_unlock() invocation.

    This patch addresses this situation by separating the notion of RCU's
    context-switch handling from that of RCU-sched's quiescent states.
    The context-switch handling is covered by rcu_note_context_switch() in
    general and by rcu_preempt_note_context_switch() for preemptible RCU.
    This permits rcu_sched_qs() to handle quiescent states and only quiescent
    states. It also reduces the maximum latency of scheduler_tick(), though
    probably by much less than a microsecond. Finally, it means that tasks
    within preemptible-RCU read-side critical sections avoid incurring the
    overhead of queuing unless there really is a context switch.
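
    The separation can be sketched as follows (simplified; in non-preemptible
    builds the second call would be an empty stub):

    void rcu_note_context_switch(int cpu)
    {
            rcu_sched_qs(cpu);                      /* RCU-sched: a quiescent state */
            rcu_preempt_note_context_switch(cpu);   /* RCU-preempt: blocking bookkeeping */
    }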

    Suggested-by: Lai Jiangshan
    Acked-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney
    Cc: Ingo Molnar
    Cc: Peter Zijlstra

    Paul E. McKenney
     

04 Feb, 2010

1 commit

  • hrtimers callbacks are always done from hardirq context, either the
    jiffy tick interrupt or the hrtimer device interrupt.

    [ there is currently one exception that can still call a hrtimer
    callback from softirq, but even in that case this will still
    work correctly. ]

    Reported-by: Wei Yongjun
    Signed-off-by: Peter Zijlstra
    Cc: Yury Polyanskiy
    Tested-by: Wei Yongjun
    Acked-by: David S. Miller
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Peter Zijlstra
     

15 Dec, 2009

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
    m68k: rename global variable vmalloc_end to m68k_vmalloc_end
    percpu: add missing per_cpu_ptr_to_phys() definition for UP
    percpu: Fix kdump failure if booted with percpu_alloc=page
    percpu: make misc percpu symbols unique
    percpu: make percpu symbols in ia64 unique
    percpu: make percpu symbols in powerpc unique
    percpu: make percpu symbols in x86 unique
    percpu: make percpu symbols in xen unique
    percpu: make percpu symbols in cpufreq unique
    percpu: make percpu symbols in oprofile unique
    percpu: make percpu symbols in tracer unique
    percpu: make percpu symbols under kernel/ and mm/ unique
    percpu: remove some sparse warnings
    percpu: make alloc_percpu() handle array types
    vmalloc: fix use of non-existent percpu variable in put_cpu_var()
    this_cpu: Use this_cpu_xx in trace_functions_graph.c
    this_cpu: Use this_cpu_xx for ftrace
    this_cpu: Use this_cpu_xx in nmi handling
    this_cpu: Use this_cpu operations in RCU
    this_cpu: Use this_cpu ops for VM statistics
    ...

    Fix up trivial (famous last words) global per-cpu naming conflicts in
    arch/x86/kvm/svm.c
    mm/slab.c

    Linus Torvalds
     

02 Nov, 2009

1 commit

  • Currently, rcu_irq_exit() is invoked only for CONFIG_NO_HZ,
    while rcu_irq_enter() is invoked unconditionally. This patch
    moves rcu_irq_exit() out from under CONFIG_NO_HZ so that the
    calls are balanced.

    This patch has no effect on the behavior of the kernel because
    both rcu_irq_enter() and rcu_irq_exit() are empty for
    !CONFIG_NO_HZ, but the code is easier to understand if the calls
    are obviously balanced in all cases.
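
    As the changelog notes, both functions are empty stubs for !CONFIG_NO_HZ,
    so the balancing costs nothing; a sketch:

    #ifndef CONFIG_NO_HZ
    static inline void rcu_irq_enter(void) { }
    static inline void rcu_irq_exit(void) { }
    #endif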

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Lai Jiangshan
     

29 Oct, 2009

1 commit

  • This patch updates percpu related symbols under kernel/ and mm/ such
    that percpu symbols are unique and don't clash with local symbols.
    This serves two purposes of decreasing the possibility of global
    percpu symbol collision and allowing dropping per_cpu__ prefix from
    percpu symbols.

    * kernel/lockdep.c: s/lock_stats/cpu_lock_stats/

    * kernel/sched.c: s/init_rq_rt/init_rt_rq_var/ (any better idea?)
    s/sched_group_cpus/sched_groups/

    * kernel/softirq.c: s/ksoftirqd/run_ksoftirqd/

    * kernel/softlockup.c: s/(*)_timestamp/softlockup_\1_ts/
    s/watchdog_task/softlockup_watchdog/
    s/timestamp/ts/ for local variables

    * kernel/time/timer_stats: s/lookup_lock/tstats_lookup_lock/

    * mm/slab.c: s/reap_work/slab_reap_work/
    s/reap_node/slab_reap_node/

    * mm/vmstat.c: local variable changed to avoid collision with vmstat_work

    Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
    which cause name clashes" patch.

    Signed-off-by: Tejun Heo
    Acked-by: (slab/vmstat) Christoph Lameter
    Reviewed-by: Christoph Lameter
    Cc: Rusty Russell
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Andrew Morton
    Cc: Nick Piggin

    Tejun Heo
     

18 Sep, 2009

1 commit


23 Aug, 2009

1 commit

  • Make RCU-sched, RCU-bh, and RCU-preempt be underlying
    implementations, with "RCU" defined in terms of one of the
    three. Update the outdated rcu_qsctr_inc() names, as these
    functions no longer increment anything.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josht@linux.vnet.ibm.com
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

22 Jul, 2009

1 commit

  • commit ca109491f (hrtimer: removing all ur callback modes) moved all
    hrtimer callbacks into hard interrupt context when high resolution
    timers are active. That breaks code which relied on the assumption
    that the callback happens in softirq context.

    Provide a generic infrastructure which combines tasklets and hrtimers
    together to provide an in-softirq hrtimer experience.
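
    Usage of the combined infrastructure might look like this (a sketch;
    my_timer_fn is a hypothetical enum hrtimer_restart callback):

    struct tasklet_hrtimer th;

    tasklet_hrtimer_init(&th, my_timer_fn,
                         CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    tasklet_hrtimer_start(&th, ktime_set(0, 10 * NSEC_PER_MSEC),
                          HRTIMER_MODE_REL);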

    Signed-off-by: Peter Zijlstra
    Cc: torvalds@linux-foundation.org
    Cc: kaber@trash.net
    Cc: David Miller
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Peter Zijlstra
     

19 Jun, 2009

1 commit

  • Statistics for softirqs don't exist yet; they would be as helpful as
    the statistics for interrupts. This patch introduces counting of the
    number of softirqs, which will be exported in /proc/softirqs.

    When a softirq handler consumes much CPU time,
    /proc/stat looks like the following.

    $ while :; do cat /proc/stat | head -n1 ; sleep 10 ; done
    cpu 88 0 408 739665 583 28 2 0 0
    cpu 450 0 1090 740970 594 28 1294 0 0
                                 ^^^^
                                softirq

    In such a situation,
    /proc/softirqs shows us which softirq handlers are invoked,
    and we can see the rate at which each softirq increases.

    $ cat /proc/softirqs
                    CPU0       CPU1       CPU2       CPU3
          HI           0          0          0          0
       TIMER      462850     462805     462782     462718
      NET_TX           0          0          0        365
      NET_RX        2472          2          2         40
       BLOCK           0          0        381       1164
     TASKLET           0          0          0        224
       SCHED      462654     462689     462698     462427
         RCU        3046       2423       3367       3173

    $ cat /proc/softirqs
                    CPU0       CPU1       CPU2       CPU3
          HI           0          0          0          0
       TIMER      463361     465077     465056     464991
      NET_TX          53          0          1        365
      NET_RX        3757          2          2         40
       BLOCK           0          0        398       1170
     TASKLET           0          0          0        224
       SCHED      463074     464318     464612     463330
         RCU        3505       2948       3947       3673

    When the softirq CPU time is high,
    the rates of increase are as follows:
    TIMER : 220/sec : CPU1-3
    NET_TX : 5/sec : CPU0
    NET_RX : 120/sec : CPU0
    SCHED : 40-200/sec : all CPUs
    RCU : 45-58/sec : all CPUs

    The rates of increase in idle mode are as follows:
    TIMER : 250/sec
    SCHED : 250/sec
    RCU : 2/sec

    It seems many softirqs are invoked for receiving packets and for RCU.
    This helps us check the system.

    Signed-off-by: Keika Kobayashi
    Reviewed-by: Hiroshi Shimamoto
    Reviewed-by: KOSAKI Motohiro
    Cc: Ingo Molnar
    Cc: Eric Dumazet
    Cc: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Keika Kobayashi