05 Sep, 2016
1 commit
-
On some hardware models (e.g. the Dell Studio 1555 laptop) some
hardware-related functions (e.g. SMIs) must be executed on physical CPU 0
only. Instead of open coding such functionality multiple times in the
kernel, add a service function for this purpose. This also makes it
possible to take special measures in virtualized environments such as
Xen.
Signed-off-by: Juergen Gross
Signed-off-by: Peter Zijlstra (Intel)
Cc: Douglas_Warzecha@dell.com
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: akataria@vmware.com
Cc: boris.ostrovsky@oracle.com
Cc: chrisw@sous-sol.org
Cc: david.vrabel@citrix.com
Cc: hpa@zytor.com
Cc: jdelvare@suse.com
Cc: jeremy@goop.org
Cc: linux@roeck-us.net
Cc: pali.rohar@gmail.com
Cc: rusty@rustcorp.com.au
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1472453327-19050-4-git-send-email-jgross@suse.com
Signed-off-by: Ingo Molnar
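A hedged sketch of how a caller might use the service function this commit describes; in mainline it is smp_call_on_cpu(), but the exact signature shown here (cpu, function, argument, physical-CPU flag) and the dell_smi_call() helper are assumptions for illustration only.

    #include <linux/smp.h>

    /* Hypothetical callback that must run on (physical) CPU 0, e.g. to issue an SMI. */
    static int dell_smi_call(void *arg)
    {
        /* ... trigger the SMI using the data behind arg ... */
        return 0;
    }

    static int dell_smi_request(void *data)
    {
        /*
         * Assumed form: smp_call_on_cpu(cpu, func, par, phys).
         * phys = true requests *physical* CPU 0, which allows a
         * hypervisor such as Xen to take special measures.
         */
        return smp_call_on_cpu(0, dell_smi_call, data, true);
    }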
15 Jul, 2016
1 commit
-
Install the callbacks via the state machine. They are installed at runtime so
smpcfd_prepare_cpu() needs to be invoked by the boot CPU.
Signed-off-by: Richard Weinberger
[ Added the dropped CPU dying case back in. ]
Signed-off-by: Richard Cochran
Signed-off-by: Anna-Maria Gleixner
Reviewed-by: Sebastian Andrzej Siewior
Cc: Davidlohr Bueso
Cc: Linus Torvalds
Cc: Mel Gorman
Cc: Oleg Nesterov
Cc: Peter Zijlstra
Cc: Rasmus Villemoes
Cc: Thomas Gleixner
Cc: rt@linutronix.de
Link: http://lkml.kernel.org/r/20160713153337.818376366@linutronix.de
Signed-off-by: Ingo Molnar
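For context, a hedged sketch of the general cpuhp registration pattern these callbacks follow; the real smpcfd entries are wired into the static state table in kernel/cpu.c rather than registered at runtime like this, and the dynamic state and init function below are illustrative assumptions only.

    #include <linux/cpuhotplug.h>

    /* Prototypes as used by the state machine: one callback per CPU event. */
    int smpcfd_prepare_cpu(unsigned int cpu);
    int smpcfd_dead_cpu(unsigned int cpu);

    static int __init smpcfd_example_init(void)
    {
        /*
         * Illustrative only: register prepare/dead callbacks with the
         * hotplug state machine. Because installation happens at runtime,
         * the boot CPU must still be prepared explicitly (the point made
         * in the commit above).
         */
        int ret = cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "smp/csd:prepare",
                                    smpcfd_prepare_cpu, smpcfd_dead_cpu);
        return ret < 0 ? ret : 0;
    }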
21 Apr, 2015
1 commit
-
Yes, it should work, but it's a bad idea. Not only did ARM64 not have
the 16-bit access code (there's a separate patch to add it), it's just
not a good atomic type. Some architectures fundamentally don't do
atomic accesses in them (alpha), and it's not like it saves any space
here anyway because of structure packing issues.
We should normally aim for flags to be "unsigned int" or "unsigned
long". And if space is at a premium, use a single byte (although that
causes problems on alpha again). There might be very special cases
where a 16-bit entity is really wanted, but this is not one of them.
Signed-off-by: Linus Torvalds
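An illustrative example (not from the commit) of the structure-packing point: a 16-bit flags field typically buys nothing once alignment padding is counted.

    #include <linux/types.h>

    /* On a 64-bit architecture both structs occupy 16 bytes. */
    struct with_u16_flags {
        void *data;          /* 8 bytes */
        u16 flags;           /* 2 bytes + 6 bytes of tail padding */
    };

    struct with_uint_flags {
        void *data;          /* 8 bytes */
        unsigned int flags;  /* 4 bytes + 4 bytes of tail padding */
    };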
22 Jan, 2015
1 commit
-
The UP local APIC support can be set up from an early initcall. No need
for horrible hackery in the init code.
Signed-off-by: Thomas Gleixner
Cc: Jiang Liu
Cc: Joerg Roedel
Cc: Tony Luck
Cc: Borislav Petkov
Link: http://lkml.kernel.org/r/20150115211703.827943883@linutronix.de
Signed-off-by: Thomas Gleixner
19 Sep, 2014
1 commit
-
Currently kick_all_cpus_sync() can break non-polling idle CPUs
out of idle through IPI interrupts.
But sometimes we need to break the polling idle CPUs immediately
to reselect a suitable c-state; also, for non-idle CPUs, we should
do nothing when trying to wake them up.
This adds a new function, wake_up_all_idle_cpus(), which takes all CPUs
out of idle, based on wake_up_if_idle().
Signed-off-by: Chuansheng Liu
Signed-off-by: Peter Zijlstra (Intel)
Cc: daniel.lezcano@linaro.org
Cc: rjw@rjwysocki.net
Cc: linux-pm@vger.kernel.org
Cc: changcheng.liu@intel.com
Cc: xiaoming.wang@intel.com
Cc: souvik.k.chakravarty@intel.com
Cc: luto@amacapital.net
Cc: Andrew Morton
Cc: Christoph Hellwig
Cc: Frederic Weisbecker
Cc: Geert Uytterhoeven
Cc: Jan Kara
Cc: Jens Axboe
Cc: Jens Axboe
Cc: Linus Torvalds
Cc: Michal Hocko
Cc: Paul Gortmaker
Cc: Roman Gushchin
Cc: Srivatsa S. Bhat
Link: http://lkml.kernel.org/r/1409815075-4180-2-git-send-email-chuansheng.liu@intel.com
Signed-off-by: Ingo Molnar
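A hedged sketch of the distinction this commit draws: kick_all_cpus_sync() IPIs every CPU, while the new wake_up_all_idle_cpus() disturbs only CPUs that are actually idle. The cpuidle-flavoured caller below is hypothetical.

    #include <linux/smp.h>

    /*
     * Hypothetical policy-change path: after new latency constraints are set,
     * idle CPUs must leave their current c-state and reselect a suitable one.
     */
    static void example_latency_constraint_changed(void)
    {
        /* Wakes only the CPUs that are idle; busy CPUs are left alone. */
        wake_up_all_idle_cpus();
    }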
07 Jun, 2014
1 commit
-
After all architectures were converted to the generic idle framework,
commit d190e8195b90 ("idle: Remove GENERIC_IDLE_LOOP config switch")
removed the last caller of cpu_idle(). The forward declarations in
header files were forgotten.
Signed-off-by: Geert Uytterhoeven
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
25 Feb, 2014
4 commits
-
The name __smp_call_function_single() doesn't tell much about the
properties of this function, especially when compared to
smp_call_function_single().
The comments above the implementation are also misleading. The main
point of this function is not the ability to embed the csd in an
object; that is merely a requirement resulting from the real purpose
of this function, which is to raise an IPI asynchronously.
As such it can be called with interrupts disabled. And this feature
comes at the cost of the caller, who then needs to serialize the
IPIs on this csd.
Let's rename the function and enhance the comments so that they reflect
these properties.
Suggested-by: Christoph Hellwig
Cc: Andrew Morton
Cc: Christoph Hellwig
Cc: Ingo Molnar
Cc: Jan Kara
Cc: Jens Axboe
Signed-off-by: Frederic Weisbecker
Signed-off-by: Jens Axboe -
The main point of calling __smp_call_function_single() is to send
an IPI in a pure asynchronous way. By embedding a csd in an object,
a caller can send the IPI without waiting for a previous one to complete
as is required by smp_call_function_single() for example. As such,
sending this kind of IPI can be safe even when irqs are disabled.
This flexibility comes at the expense of the caller, who then needs to
synchronize the csd lifecycle itself and make sure that IPIs on a
single csd are serialized.
This is how __smp_call_function_single() works when wait = 0, and this
use case is relevant.
Now there doesn't seem to be any use case with wait = 1 that can't be
covered by smp_call_function_single() instead, which is safer. Let's look
at the two possible scenarios:
1) The user calls __smp_call_function_single(wait = 1) on a csd embedded
in an object. It looks like a nice and convenient pattern at first
sight because we can then retrieve the object from the IPI handler easily.
But actually it is a waste of memory space in the object, since the csd
can be allocated from the stack by smp_call_function_single(wait = 1)
and the object can be passed as the IPI argument.
Besides that, embedding the csd in an object is more error prone
because the caller must take care of the serialization of the IPIs
for this csd.
2) The user calls __smp_call_function_single(wait = 1) on a csd that
is allocated on the stack. It's OK, but smp_call_function_single()
can do it as well and it already takes care of the allocation on the
stack. Again, it's simpler and less error prone.
Therefore, using the underscore-prefixed API version with wait = 1
is a bad pattern and a sign that the caller could do something safer and
simpler.
There was a single user of that, which has just been converted.
So let's remove this option to discourage further users.
Cc: Andrew Morton
Cc: Christoph Hellwig
Cc: Ingo Molnar
Cc: Jan Kara
Cc: Jens Axboe
Signed-off-by: Frederic Weisbecker
Signed-off-by: Jens Axboe -
Align __smp_call_function_single() with smp_call_function_single() so
that it also checks whether the requested cpu is still online.
Signed-off-by: Jan Kara
Cc: Andrew Morton
Cc: Christoph Hellwig
Cc: Ingo Molnar
Cc: Jens Axboe
Signed-off-by: Frederic Weisbecker
Signed-off-by: Jens Axboe -
Now that we got rid of all the remaining code which fiddled with csd.list,
let's remove it.
Signed-off-by: Jan Kara
Cc: Andrew Morton
Cc: Christoph Hellwig
Cc: Ingo Molnar
Cc: Jens Axboe
Signed-off-by: Frederic Weisbecker
Signed-off-by: Jens Axboe
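A hedged sketch of the asynchronous-csd pattern the first two entries above describe (the rename produced smp_call_function_single_async() in mainline): the caller embeds a csd in its own object, raises the IPI without waiting, and must not reuse the csd until the previous IPI has completed. The busy flag here is a simplified stand-in for that serialization.

    #include <linux/smp.h>

    struct my_request {
        struct call_single_data csd;  /* csd embedded in the caller's object */
        int cpu;
        bool busy;                    /* simplified per-csd serialization */
    };

    static void my_ipi_handler(void *info)
    {
        struct my_request *req = info;
        /* ... runs on req->cpu in IPI context ... */
        req->busy = false;
    }

    /* May be called with interrupts disabled; never waits. */
    static void my_raise_ipi(struct my_request *req)
    {
        if (req->busy)                /* caller must serialize IPIs on this csd */
            return;
        req->busy = true;
        req->csd.func = my_ipi_handler;
        req->csd.info = req;
        smp_call_function_single_async(req->cpu, &req->csd);
    }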
11 Feb, 2014
1 commit
-
Use what we already do for arch_disable_smp_support() to fix these:
arch/x86/kernel/smpboot.c:1155:6: warning: symbol 'arch_enable_nonboot_cpus_begin' was not declared. Should it be static?
arch/x86/kernel/smpboot.c:1160:6: warning: symbol 'arch_enable_nonboot_cpus_end' was not declared. Should it be static?
kernel/cpu.c:512:13: warning: symbol 'arch_enable_nonboot_cpus_begin' was not declared. Should it be static?
kernel/cpu.c:516:13: warning: symbol 'arch_enable_nonboot_cpus_end' was not declared. Should it be static?
Signed-off-by: Paul Gortmaker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
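A hedged sketch of the kind of fix described: declare the arch hooks in a shared header, as is already done for arch_disable_smp_support(), so both the generic __weak definitions and the x86 overrides see a prototype and sparse stops warning. The exact placement is my assumption.

    /* In a shared header such as include/linux/smp.h (placement assumed): */
    extern void arch_disable_smp_support(void);        /* existing precedent */
    extern void arch_enable_nonboot_cpus_begin(void);
    extern void arch_enable_nonboot_cpus_end(void);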
31 Jan, 2014
1 commit
-
Make smp_call_function_single and friends more efficient by using a
lockless list.
Signed-off-by: Christoph Hellwig
Reviewed-by: Jan Kara
Cc: Jens Axboe
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
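A hedged sketch of what "using a lockless list" means here: pending csd entries are pushed onto a per-CPU llist with llist_add() and drained in one shot in the IPI handler with llist_del_all(), so neither side needs a spinlock. The queue name matches the real code; the surrounding logic is simplified.

    #include <linux/llist.h>
    #include <linux/percpu.h>
    #include <linux/smp.h>

    static DEFINE_PER_CPU(struct llist_head, call_single_queue);

    /* Sender side (simplified): lock-free push; kick the target CPU only for the first entry. */
    static void queue_csd(int cpu, struct call_single_data *csd)
    {
        if (llist_add(&csd->llist, &per_cpu(call_single_queue, cpu)))
            arch_send_call_function_single_ipi(cpu);
    }

    /* IPI handler side (simplified): detach the whole list atomically, then walk it. */
    static void flush_csd_queue(void)
    {
        struct llist_node *entry = llist_del_all(this_cpu_ptr(&call_single_queue));
        struct call_single_data *csd, *next;

        llist_for_each_entry_safe(csd, next, entry, llist)
            csd->func(csd->info);
    }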
15 Nov, 2013
2 commits
-
x86_64 allnoconfig:
kernel/up.c:25: error: redefinition of '__smp_call_function_single'
include/linux/smp.h:154: note: previous definition of '__smp_call_function_single' was here
Cc: Christoph Hellwig
Cc: Christoph Hellwig
Cc: Jan Kara
Cc: Jens Axboe
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
We've switched over every architecture that supports SMP to it, so
remove the now useless config variable.
Signed-off-by: Christoph Hellwig
Cc: Jan Kara
Cc: Jens Axboe
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
25 Sep, 2013
1 commit
-
watchdog_thresh controls how often the nmi perf event counter checks the per-cpu
hrtimer_interrupts counter and blows up if the counter hasn't changed
since the last check. The counter is updated by per-cpu
watchdog_hrtimer hrtimer, which is scheduled with a 2/5 watchdog_thresh
period, guaranteeing that the hrtimer fires twice per main
period. Both the hrtimer and the perf event are started together when the
watchdog is enabled.
So far so good. But what happens when watchdog_thresh is updated from the
sysctl handler?
proc_dowatchdog will set a new sampling period and the hrtimer callback
(watchdog_timer_fn) will use the new value in the next round. The
problem, however, is that nobody tells the perf event that the sampling
period has changed, so it keeps ticking with the period configured when it
was set up.
This might result in an ear-ripping dissonance between the perf and hrtimer
parts if watchdog_thresh is increased. And even worse, it might lead
to KABOOM if the watchdog is configured to panic on such a spurious
lockup.
This patch fixes the issue by updating both the nmi perf event counter and
the hrtimers if the threshold value has changed.
The nmi one is disabled and then reinitialized from scratch. This has
the unpleasant side effect that the allocation of the new event might
theoretically fail, so the hard lockup detector would be disabled for
such cpus. On the other hand, such a memory allocation failure is very
unlikely because the original event is deallocated right before.
It would be much nicer if we just changed the perf event period, but there
doesn't seem to be any API to do that right now. It is also unfortunate
that perf_event_alloc uses GFP_KERNEL allocation unconditionally so we
cannot use on_each_cpu() and do the same thing from the per-cpu context.
The update from the current CPU should be safe because
perf_event_disable removes the event atomically before it clears the
per-cpu watchdog_ev, so it cannot change anything under the running
handler's feet.
The hrtimer is simply restarted (thanks to Don Zickus, who has pointed
this out) if it is queued, because we cannot rely on it firing and adapting
to the new sampling period before a new nmi event triggers (when the
threshold is decreased).
[akpm@linux-foundation.org: the UP version of __smp_call_function_single ended up in the wrong place]
Signed-off-by: Michal Hocko
Acked-by: Don Zickus
Cc: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Fabio Estevam
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
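A hedged sketch of the update path described above; the helper names (watchdog_nmi_disable()/enable(), restart_watchdog_hrtimer()) follow the description but should be read as assumptions about the actual patch, which also had to cope with the UP __smp_call_function_single quirk noted in the akpm remark.

    /* Sketch: re-arm both detectors on one CPU after watchdog_thresh changed. */
    static void update_timers(int cpu)
    {
        /* Tear down and recreate the perf event so it uses the new sample period. */
        watchdog_nmi_disable(cpu);
        /* Restart the per-cpu hrtimer on the target CPU so it picks up the new period. */
        smp_call_function_single(cpu, restart_watchdog_hrtimer, NULL, 1);
        watchdog_nmi_enable(cpu);
    }

    static void update_timers_all_cpus(void)
    {
        int cpu;

        get_online_cpus();
        for_each_online_cpu(cpu)
            update_timers(cpu);
        put_online_cpus();
    }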
12 Sep, 2013
3 commits
-
All of the other non-trivial !SMP versions of functions in smp.h are
out-of-line in up.c. Move on_each_cpu() there as well.
This allows us to get rid of the #include of irqflags.h. The
drawback is that this makes both the x86_64 and i386 defconfig !SMP
kernels about 200 bytes larger each.
Signed-off-by: David Daney
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
As in commit f21afc25f9ed ("smp.h: Use local_irq_{save,restore}() in
!SMP version of on_each_cpu()"), we don't want to enable irqs if they
are not already enabled. There are currently no known problematical
callers of these functions, but since it is a known failure pattern, we
preemptively fix them.
Since they are not trivial functions, make them non-inline by moving
them to up.c. This also means we don't have to fix #include
dependencies for preempt_{disable,enable}.
Signed-off-by: David Daney
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Revert commit c846ef7deba2 ("include/linux/smp.h:on_each_cpu(): switch
back to a macro"). It turns out that the problematic linux/irqflags.h
include was fixed within ia64 and mn10300.
Cc: Geert Uytterhoeven
Cc: David Daney
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
04 Jul, 2013
1 commit
-
Commit f21afc25f9ed ("smp.h: Use local_irq_{save,restore}() in !SMP
version of on_each_cpu()") converted on_each_cpu() to a C function.
This required inclusion of irqflags.h, which broke ia64 and mn10300 (at
least) due to header ordering hell.
Switch on_each_cpu() back to a macro to fix this.
Reported-by: Geert Uytterhoeven
Acked-by: Geert Uytterhoeven
Cc: David Daney
Cc: Ralf Baechle
Cc: Stephen Rothwell
Cc: [3.10.x]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
15 Jun, 2013
1 commit
-
Thanks to commit f91eb62f71b3 ("init: scream bloody murder if interrupts
are enabled too early"), "bloody murder" is now being screamed.
With a MIPS OCTEON config, we use on_each_cpu() in our
irq_chip.irq_bus_sync_unlock() function. This gets called early as a
result of the time_init() call. Because the !SMP version of
on_each_cpu() unconditionally enables irqs, we get:
WARNING: at init/main.c:560 start_kernel+0x250/0x410()
Interrupts were enabled early
CPU: 0 PID: 0 Comm: swapper Not tainted 3.10.0-rc5-Cavium-Octeon+ #801
Call Trace:
show_stack+0x68/0x80
warn_slowpath_common+0x78/0xb0
warn_slowpath_fmt+0x38/0x48
start_kernel+0x250/0x410
Suggested fix: Do what we already do in the SMP version of
on_each_cpu(), and use local_irq_save/local_irq_restore. Because we
need a flags variable, make it a static inline to avoid name space
issues.
[ Change from v1: Convert on_each_cpu to a static inline function, add
a #include to avoid build breakage on some files.
on_each_cpu_mask() and on_each_cpu_cond() suffer the same problem as
on_each_cpu(), but they are not causing !SMP bugs for me, so I will
defer changing them to a less urgent patch. ]
Signed-off-by: David Daney
Cc: Ralf Baechle
Cc: Andrew Morton
Signed-off-by: Linus Torvalds
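A hedged sketch of the suggested fix: the !SMP on_each_cpu() saves and restores the interrupt state around the local call instead of unconditionally enabling interrupts. This mirrors the description above; the final form in mainline may differ in detail.

    /* !SMP version: run func locally without force-enabling interrupts. */
    static inline int on_each_cpu(void (*func)(void *info), void *info, int wait)
    {
        unsigned long flags;

        local_irq_save(flags);
        func(info);
        local_irq_restore(flags);
        return 0;
    }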
01 May, 2013
1 commit
-
The 'priv' field is redundant; we can pass data via 'info'.
Signed-off-by: liguang
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
22 Feb, 2013
1 commit
-
I'm testing a swapout workload on a two-socket Xeon machine. The workload
has 10 threads; each thread sequentially accesses a separate memory
region. TLB flush overhead is very big in the workload. For each page,
page reclaim needs to move it off the active lru list and then unmap it. Both
need a TLB flush. And since this is a multithreaded workload, TLB flushes happen
on 10 CPUs. On x86, TLB flush uses the generic smp_call_function. So this
workload stresses smp_call_function_many heavily.
Without the patch, perf shows:
+ 24.49% [k] generic_smp_call_function_interrupt
- 21.72% [k] _raw_spin_lock
- _raw_spin_lock
+ 79.80% __page_check_address
+ 6.42% generic_smp_call_function_interrupt
+ 3.31% get_swap_page
+ 2.37% free_pcppages_bulk
+ 1.75% handle_pte_fault
+ 1.54% put_super
+ 1.41% grab_super_passive
+ 1.36% __swap_duplicate
+ 0.68% blk_flush_plug_list
+ 0.62% swap_info_get
+ 6.55% [k] flush_tlb_func
+ 6.46% [k] smp_call_function_many
+ 5.09% [k] call_function_interrupt
+ 4.75% [k] default_send_IPI_mask_sequence_phys
+ 2.18% [k] find_next_bit
swapout throughput is around 1300M/s.
With the patch, perf shows:
- 27.23% [k] _raw_spin_lock
- _raw_spin_lock
+ 80.53% __page_check_address
+ 8.39% generic_smp_call_function_single_interrupt
+ 2.44% get_swap_page
+ 1.76% free_pcppages_bulk
+ 1.40% handle_pte_fault
+ 1.15% __swap_duplicate
+ 1.05% put_super
+ 0.98% grab_super_passive
+ 0.86% blk_flush_plug_list
+ 0.57% swap_info_get
+ 8.25% [k] default_send_IPI_mask_sequence_phys
+ 7.55% [k] call_function_interrupt
+ 7.47% [k] smp_call_function_many
+ 7.25% [k] flush_tlb_func
+ 3.81% [k] _raw_spin_lock_irqsave
+ 3.78% [k] generic_smp_call_function_single_interrupt
swapout throughput is around 1400M/s. So there is around a 7%
improvement, and total cpu utilization doesn't change.
Without the patch, cfd_data is shared by all CPUs.
generic_smp_call_function_interrupt does read/write cfd_data several times
which will create a lot of cache ping-pong. With the patch, the data
becomes per-cpu. The ping-pong is avoided. And from the perf data, this
doesn't make the call_single_queue lock contended.
The next step is to remove generic_smp_call_function_interrupt() from arch
code.
Signed-off-by: Shaohua Li
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Steven Rostedt
Cc: Jens Axboe
Cc: Linus Torvalds
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
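A hedged sketch of the core of the change: the call-function data moves from one globally shared instance to per-CPU storage, so each sending CPU touches only its own cache lines and the cross-CPU cache ping-pong disappears. Field names are simplified.

    #include <linux/smp.h>
    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    struct call_function_data {
        struct call_single_data __percpu *csd;  /* one csd per target CPU */
        cpumask_var_t cpumask;                  /* CPUs this sender is targeting */
    };

    /* Previously a single shared instance; now one per sending CPU. */
    static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data);

    static struct call_function_data *this_cpu_cfd(void)
    {
        return this_cpu_ptr(&cfd_data);
    }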
05 Jun, 2012
2 commits
-
No users.
Signed-off-by: Thomas Gleixner
Cc: Srivatsa S. Bhat
Cc: Rusty Russell -
There are no users of those APIs anymore, so just remove them.
Signed-off-by: Yong Zhang
Cc: ralf@linux-mips.org
Cc: sshtylyov@mvista.com
Cc: david.daney@cavium.com
Cc: nikunj@linux.vnet.ibm.com
Cc: paulmck@linux.vnet.ibm.com
Cc: axboe@kernel.dk
Cc: Andrew Morton
Link: http://lkml.kernel.org/r/1338275765-3217-11-git-send-email-yong.zhang0@gmail.com
Acked-by: Srivatsa S. Bhat
Acked-by: Peter Zijlstra
Signed-off-by: Thomas Gleixner
08 May, 2012
1 commit
-
Will replace the misnamed cpu_idle_wait() function, which is copied a
gazillion times all over arch/*
Signed-off-by: Thomas Gleixner
Acked-by: Peter Zijlstra
Link: http://lkml.kernel.org/r/20120507175652.049316594@linutronix.de
26 Apr, 2012
1 commit
-
Preparatory patch to make the idle thread allocation for secondary
cpus generic.
Signed-off-by: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Rusty Russell
Cc: Paul E. McKenney
Cc: Srivatsa S. Bhat
Cc: Matt Turner
Cc: Russell King
Cc: Mike Frysinger
Cc: Jesper Nilsson
Cc: Richard Kuo
Cc: Tony Luck
Cc: Hirokazu Takata
Cc: Ralf Baechle
Cc: David Howells
Cc: James E.J. Bottomley
Cc: Benjamin Herrenschmidt
Cc: Martin Schwidefsky
Cc: Paul Mundt
Cc: David S. Miller
Cc: Chris Metcalf
Cc: Richard Weinberger
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/20120420124556.964170564@linutronix.de
29 Mar, 2012
2 commits
-
Add the on_each_cpu_cond() function that wraps on_each_cpu_mask() and
calculates the cpumask of cpus to IPI by calling a function supplied as a
parameter in order to determine whether to IPI each specific cpu.
The function works around allocation failure of the cpumask variable in
the CONFIG_CPUMASK_OFFSTACK=y case by iterating over the cpus, sending one IPI at
a time via smp_call_function_single().
The function is useful since it allows separating the specific code that
decides in each case whether to IPI a specific cpu for a specific request
from the common boilerplate code of creating the mask, handling
failures, etc.
[akpm@linux-foundation.org: s/gfpflags/gfp_flags/]
[akpm@linux-foundation.org: avoid double-evaluation of `info' (per Michal), parenthesise evaluation of `cond_func']
[akpm@linux-foundation.org: s/CPU/CPUs, use all 80 cols in comment]
Signed-off-by: Gilad Ben-Yossef
Cc: Chris Metcalf
Cc: Christoph Lameter
Acked-by: Peter Zijlstra
Cc: Frederic Weisbecker
Cc: Russell King
Cc: Pekka Enberg
Cc: Matt Mackall
Cc: Sasha Levin
Cc: Rik van Riel
Cc: Andi Kleen
Cc: Alexander Viro
Cc: Avi Kivity
Acked-by: Michal Nazarewicz
Cc: Kosaki Motohiro
Cc: Milton Miller
Reviewed-by: "Srivatsa S. Bhat"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
We have lots of infrastructure in place to partition multi-core systems
such that we have a group of CPUs that are dedicated to a specific task:
cgroups, scheduler and interrupt affinity, and the cpuisol= boot parameter.
Still, kernel code will at times interrupt all CPUs in the system via IPIs
for various needs. These IPIs are useful and cannot be avoided
altogether, but in certain cases it is possible to interrupt only specific
CPUs that have useful work to do and not the entire system.
This patch set, inspired by discussions with Peter Zijlstra and Frederic
Weisbecker when testing the nohz task patch set, is a first stab at trying
to explore doing this by locating the places where such global IPI calls
are being made and turning the global IPI into an IPI for a specific group
of CPUs. The purpose of the patch set is to get feedback if this is the
right way to go for dealing with this issue and indeed, if the issue is
even worth dealing with at all. Based on the feedback from this patch set
I plan to offer further patches that address similar issues in other code
paths.
This patch creates an on_each_cpu_mask() and on_each_cpu_cond()
infrastructure API (the former derived from existing arch specific
versions in Tile and Arm) and uses them to turn several global IPI
invocations into per-CPU-group invocations.
Core kernel:
on_each_cpu_mask() calls a function on processors specified by cpumask,
which may or may not include the local processor.
You must not call this function with disabled interrupts, from a
hardware interrupt handler, or from a bottom half handler.
arch/arm:
Note that the generic version is a little different from the Arm one:
1. It has the mask as first parameter
2. It calls the function on the calling CPU with interrupts disabled,
but this should be OK since the function is called on the other CPUs
with interrupts disabled anyway.
arch/tile:
The API is the same as the tile-private one, but the generic version
also calls the function on the calling CPU with interrupts disabled in the UP
case. This is OK since the function is called on the other CPUs
with interrupts disabled.
Signed-off-by: Gilad Ben-Yossef
Reviewed-by: Christoph Lameter
Acked-by: Chris Metcalf
Acked-by: Peter Zijlstra
Cc: Frederic Weisbecker
Cc: Russell King
Cc: Pekka Enberg
Cc: Matt Mackall
Cc: Rik van Riel
Cc: Andi Kleen
Cc: Sasha Levin
Cc: Mel Gorman
Cc: Alexander Viro
Cc: Avi Kivity
Acked-by: Michal Nazarewicz
Cc: Kosaki Motohiro
Cc: Milton Miller
Cc: Russell King
Acked-by: Peter Zijlstra
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
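A hedged usage sketch of the two helpers these entries describe; the parameter ordering follows the original submission as I recall it (in particular on_each_cpu_cond() took a gfp_flags argument for the off-stack cpumask), and the flush-style callback and predicate are hypothetical.

    #include <linux/smp.h>
    #include <linux/cpumask.h>
    #include <linux/gfp.h>

    static void flush_something(void *info)
    {
        /* per-CPU work; runs with interrupts disabled in IPI context */
    }

    /* Hypothetical predicate: only CPUs that actually have work to do get an IPI. */
    static bool cpu_needs_flush(int cpu, void *info)
    {
        return cpu_online(cpu);  /* stand-in condition */
    }

    static void example(const struct cpumask *mask, void *info)
    {
        /* IPI only the CPUs in 'mask' (which may include the local CPU). */
        on_each_cpu_mask(mask, flush_something, info, true);

        /* IPI only the CPUs for which cpu_needs_flush() returns true. */
        on_each_cpu_cond(cpu_needs_flush, flush_something, info, true, GFP_ATOMIC);
    }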
17 Jun, 2011
1 commit
-
There is a problem that kdump (the 2nd kernel) sometimes hangs up due
to a pending IPI from the 1st kernel. A kernel panic occurs because the IPI
comes before call_single_queue is initialized.
To fix the crash, rename init_call_single_data() to call_function_init()
and call it in start_kernel() so that call_single_queue can be
initialized before enabling interrupts.
The details of the crash are:
(1) 2nd kernel boots up
(2) A pending IPI from 1st kernel comes when irqs are first enabled
in start_kernel().
(3) Kernel tries to handle the interrupt, but call_single_queue
is not initialized yet at this point. As a result, in the
generic_smp_call_function_single_interrupt(), NULL pointer
dereference occurs when list_replace_init() tries to access
&q->list.next.
Therefore this patch changes the name of init_call_single_data()
to call_function_init() and calls it before local_irq_enable()
in start_kernel().
Signed-off-by: Takao Indoh
Reviewed-by: WANG Cong
Acked-by: Neil Horman
Acked-by: Vivek Goyal
Acked-by: Peter Zijlstra
Cc: Milton Miller
Cc: Jens Axboe
Cc: Paul E. McKenney
Cc: kexec@lists.infradead.org
Link: http://lkml.kernel.org/r/D6CBEE2F420741indou.takao@jp.fujitsu.com
Signed-off-by: Ingo Molnar
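A hedged sketch (not the literal init/main.c contents) of the ordering this fix establishes in start_kernel(): the call_single_queue lists are initialized before interrupts are enabled, so a stale IPI left over from the crashed first kernel finds an initialized queue instead of a NULL pointer.

    asmlinkage void __init start_kernel(void)
    {
        /* ... early setup ... */

        call_function_init();   /* was init_call_single_data(); initializes call_single_queue */

        /* ... */

        local_irq_enable();     /* a pending IPI from the 1st kernel may fire from here on */

        /* ... rest of boot ... */
    }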
26 May, 2011
1 commit
-
Now that powerpc has removed its use of MSG_ALL_BUT_SELF and MSG_ALL
all these MSG_ flags are unused.
Signed-off-by: Milton Miller
Signed-off-by: Benjamin Herrenschmidt
23 Mar, 2011
2 commits
-
Commit 34db18a054c6 ("smp: move smp setup functions to kernel/smp.c")
causes this build error on s390 because of a missing init.h include:
CC arch/s390/kernel/asm-offsets.s
In file included from /home2/heicarst/linux-2.6/arch/s390/include/asm/spinlock.h:14:0,
from include/linux/spinlock.h:87,
from include/linux/seqlock.h:29,
from include/linux/time.h:8,
from include/linux/timex.h:56,
from include/linux/sched.h:57,
from arch/s390/kernel/asm-offsets.c:10:
include/linux/smp.h:117:20: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'setup_nr_cpu_ids'
include/linux/smp.h:118:20: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'smp_init'
Fix it by adding the include statement.
Signed-off-by: Heiko Carstens
Acked-by: WANG Cong
Signed-off-by: Linus Torvalds -
Move setup_nr_cpu_ids(), smp_init() and some other SMP boot parameter
setup functions from init/main.c to kernel/smp.c, saving some #ifdef
CONFIG_SMP.
Signed-off-by: WANG Cong
Cc: Rakib Mullick
Cc: David Howells
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Arnd Bergmann
Cc: Akinobu Mita
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
28 Oct, 2010
1 commit
-
Typedef the pointer to the function to be called by smp_call_function() and
friends:
typedef void (*smp_call_func_t)(void *info);
as it is used in a fair number of places.
Signed-off-by: David Howells
cc: linux-arch@vger.kernel.org
07 Mar, 2010
1 commit
-
smp: Fix documentation.
Fix documentation in include/linux/smp.h: smp_processor_id()
Signed-off-by: Rakib Mullick
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
18 Nov, 2009
1 commit
-
Andrew points out that acpi-cpufreq uses cpumask_any, when it really
would prefer to use the same CPU if possible (to avoid an IPI). In
general, this seems a good idea to offer.
[ tglx: Documented selection preference and inlined the UP case to
avoid the copy of smp_call_function_single() and the extra
EXPORT ]
Signed-off-by: Rusty Russell
Cc: Ingo Molnar
Cc: Venkatesh Pallipadi
Cc: Len Brown
Cc: Zhao Yakui
Cc: Dave Jones
Cc: Thomas Gleixner
Cc: Mike Galbraith
Cc: "Zhang, Yanmin"
Signed-off-by: Andrew Morton
Signed-off-by: Thomas Gleixner
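In mainline this became smp_call_function_any(); a hedged usage sketch follows, with a hypothetical cpufreq-style caller. The helper prefers the current CPU when it is in the mask (no IPI at all) and, as I recall, otherwise tries a nearby CPU before falling back to any CPU in the mask.

    #include <linux/smp.h>
    #include <linux/cpumask.h>

    static void read_freq_regs(void *info)
    {
        /* hypothetical: read the frequency registers for the policy behind info */
    }

    static int example_read(const struct cpumask *policy_cpus, void *info)
    {
        /* Avoids an IPI entirely if the current CPU is in policy_cpus. */
        return smp_call_function_any(policy_cpus, read_freq_regs, info, 1);
    }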
24 Sep, 2009
1 commit
-
Everyone is now using smp_call_function_many().
Signed-off-by: Rusty Russell
17 Jun, 2009
1 commit
-
put_cpu_no_resched() is an optimization of put_cpu() which unfortunately
can cause high latencies.
The nfs iostats code uses put_cpu_no_resched() in a code sequence where a
reschedule request caused by an interrupt between the get_cpu() and the
put_cpu_no_resched() can delay the reschedule for at least HZ.
The other users of put_cpu_no_resched() optimize correctly in interrupt
code, but there is no real harm in using the put_cpu() function, which is
an alias for preempt_enable(). The extra check of the preempt count is
not as critical as the potential of missing a reschedule.
Debugged in the preempt-rt tree and verified in mainline.
Impact: remove a high latency source
[akpm@linux-foundation.org: build fix]
Signed-off-by: Thomas Gleixner
Acked-by: Ingo Molnar
Cc: Tony Luck
Cc: Trond Myklebust
Cc: "J. Bruce Fields"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
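A hedged illustration of the pattern in question: per-CPU accounting bracketed by get_cpu()/put_cpu(). Since put_cpu() is effectively preempt_enable(), any reschedule request that arrived in between is honored immediately, which is exactly what put_cpu_no_resched() could delay.

    #include <linux/smp.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, my_io_count);  /* hypothetical counter */

    static void account_io(void)
    {
        int cpu = get_cpu();            /* disables preemption, returns this CPU's id */

        per_cpu(my_io_count, cpu)++;
        put_cpu();                      /* re-enables preemption; reschedules if needed */
    }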
13 Mar, 2009
2 commits
-
Impact: cleanup, no code changed
Remove an ugly #ifdef CONFIG_SMP from panic(), by providing
an smp_send_stop() wrapper on UP too.
LKML-Reference:
Signed-off-by: Ingo Molnar
25 Feb, 2009
1 commit
-
Oleg noticed that we don't strictly need CSD_FLAG_WAIT, rework
the code so that we can use CSD_FLAG_LOCK for both purposes.
Signed-off-by: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Linus Torvalds
Cc: Nick Piggin
Cc: Jens Axboe
Cc: "Paul E. McKenney"
Cc: Rusty Russell
Signed-off-by: Ingo Molnar