14 Nov, 2011

1 commit

  • It appears that stop_machine_text_poke() wants to be called on all CPUs,
    like it's done from text_poke_smp(). Fix text_poke_smp_batch() to do
    this.

    Signed-off-by: Rabin Vincent
    Acked-by: Masami Hiramatsu
    Signed-off-by: Peter Zijlstra
    Cc: Mathieu Desnoyers
    Cc: Jason Baron
    Link: http://lkml.kernel.org/r/1319702072-32676-1-git-send-email-rabin@rab.in
    Signed-off-by: Ingo Molnar

    Rabin Vincent
     

15 Jul, 2011

1 commit


14 Jul, 2011

1 commit


20 May, 2011

1 commit

  • * 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, cpu: Fix detection of Celeron Covington stepping A1 and B0
    Documentation, ABI: Update L3 cache index disable text
    x86, AMD, cacheinfo: Fix L3 cache index disable checks
    x86, AMD, cacheinfo: Fix fallout caused by max3 conversion
    x86, cpu: Change NOP selection for certain Intel CPUs
    x86, cpu: Clean up and unify the NOP selection infrastructure
    x86, percpu: Use ASM_NOP4 instead of hardcoding P6_NOP4
    x86, cpu: Move AMD Elan Kconfig under "Processor family"

    Fix up trivial conflicts in alternative handling (commit dc326fca2b64
    "x86, cpu: Clean up and unify the NOP selection infrastructure" removed
    some hacky 5-byte instruction stuff, while commit d430d3d7e646 "jump
    label: Introduce static_branch() interface" renamed HAVE_JUMP_LABEL to
    CONFIG_JUMP_LABEL in the code that went away)

    Linus Torvalds
     

19 May, 2011

1 commit


18 May, 2011

1 commit

  • Some string operation functions may be patched twice; e.g. on enhanced REP
    MOVSB/STOSB processors, memcpy is first patched with the fast-string
    alternative function and then patched again with the enhanced REP
    MOVSB/STOSB alternative function.

    Add a comment documenting the order in which alternatives are applied, to
    warn anyone who might be tempted to change that order.
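
    A minimal sketch of why the order matters, with simplified types
    assumed from arch/x86/kernel/alternative.c: entries are applied in
    the order they sit in the alternatives table, so when a CPU has both
    features, the later entry's replacement is the one that remains.

    struct alt_instr {
            u8 *instr;              /* original instruction */
            u8 *replacement;
            u8  cpuid;              /* feature bit to test */
            u8  replacementlen;
    };

    static void apply_alternatives_sketch(struct alt_instr *start,
                                          struct alt_instr *end)
    {
            struct alt_instr *a;

            for (a = start; a < end; a++) {
                    if (!boot_cpu_has(a->cpuid))
                            continue;
                    /* e.g. memcpy: first the fast-string entry, then,
                     * if the CPU also has ERMS, the REP MOVSB/STOSB
                     * entry on top of it */
                    text_poke_early(a->instr, a->replacement,
                                    a->replacementlen);
            }
    }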

    [ Documentation-only patch ]

    Signed-off-by: Fenghua Yu
    Link: http://lkml.kernel.org/r/1305671358-14478-4-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin

    Fenghua Yu
     

19 Apr, 2011

2 commits

  • Due to a decoder implementation quirk, some specific Intel CPUs
    actually perform better with the "k8_nops" than with the
    SDM-recommended NOPs. For runtime-selected NOPs, if we detect one of
    those specific CPUs, use the k8_nops instead of the ones we would
    normally use.

    Signed-off-by: H. Peter Anvin
    Cc: Tejun Heo
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Jason Baron
    Link: http://lkml.kernel.org/r/1303166160-10315-4-git-send-email-hpa@linux.intel.com

    H. Peter Anvin
     
  • Clean up and unify the NOP selection infrastructure:

    - Make the atomic 5-byte NOP a part of the selection system.
    - Pick NOPs once during early boot and then be done with it.
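
    A sketch of the boot-time selection, also reflecting the Intel
    decoder quirk from the previous commit; the names follow
    arch/x86/kernel/alternative.c of this era, but the exact conditions
    (and the cpu_prefers_k8_nops() helper) are assumptions:

    const unsigned char * const *ideal_nops = intel_nops;

    void __init arch_init_ideal_nops(void)
    {
            switch (boot_cpu_data.x86_vendor) {
            case X86_VENDOR_INTEL:
                    if (cpu_prefers_k8_nops())      /* hypothetical check */
                            ideal_nops = k8_nops;
                    else if (boot_cpu_has(X86_FEATURE_NOPL))
                            ideal_nops = p6_nops;
                    break;
            default:
    #ifdef CONFIG_X86_64
                    ideal_nops = k8_nops;
    #endif
                    break;
            }
    }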

    Signed-off-by: H. Peter Anvin
    Cc: Tejun Heo
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Jason Baron
    Link: http://lkml.kernel.org/r/1303166160-10315-3-git-send-email-hpa@linux.intel.com

    H. Peter Anvin
     

05 Apr, 2011

1 commit

  • Introduce:

    static __always_inline bool static_branch(struct jump_label_key *key);

    instead of the old JUMP_LABEL(key, label) macro.

    In this way, jump labels become really easy to use:

    Define:

    struct jump_label_key jump_key;

    Can be used as:

    if (static_branch(&jump_key))
            do unlikely code

    enable/disable via:

    jump_label_inc(&jump_key);
    jump_label_dec(&jump_key);

    that's it!
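
    Put together as a self-contained sketch (names as in this patch;
    tracing_key, do_trace() and set_tracing() are hypothetical):

    #include <linux/jump_label.h>

    static struct jump_label_key tracing_key;

    extern void do_trace(void);             /* hypothetical slow path */

    void handle_event(void)
    {
            /* compiles to a patched no-op while the key is disabled */
            if (static_branch(&tracing_key))
                    do_trace();
    }

    void set_tracing(bool on)
    {
            if (on)
                    jump_label_inc(&tracing_key);
            else
                    jump_label_dec(&tracing_key);
    }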

    For the jump labels disabled case, the static_branch() becomes an
    atomic_read(), and jump_label_inc()/dec() are simply atomic_inc(),
    atomic_dec() operations. We show testing results for this change below.

    Thanks to H. Peter Anvin for suggesting the 'static_branch()' construct.

    Since we now require a 'struct jump_label_key *key', we can store a pointer into
    the jump table addresses. In this way, we can enable/disable jump labels, in
    basically constant time. This change allows us to completely remove the previous
    hashtable scheme. Thanks to Peter Zijlstra for this re-write.

    Testing:

    I ran a series of 'tbench 20' runs 5 times (with reboots) for 3
    configurations, where tracepoints were disabled.

    jump label configured in
    avg: 815.6

    jump label *not* configured in (using atomic reads)
    avg: 800.1

    jump label *not* configured in (regular reads)
    avg: 803.4

    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Jason Baron
    Suggested-by: H. Peter Anvin
    Tested-by: David Daney
    Acked-by: Ralf Baechle
    Acked-by: David S. Miller
    Acked-by: Mathieu Desnoyers
    Signed-off-by: Steven Rostedt

    Jason Baron
     

18 Mar, 2011

1 commit


15 Mar, 2011

1 commit

  • The Intel Architecture Software Developer's Manual, section 7.1.3, specifies
    that a core-serializing instruction such as "cpuid" should be executed on
    _each_ core before the new instruction is made visible.

    Failure to do so can lead to unspecified behavior (the Intel XMC errata
    include General Protection Fault in the list), so we should avoid this at
    all cost.

    This problem can affect modified code executed by interrupt handlers after
    interrupts are re-enabled at the end of stop_machine, because no core-serializing
    instruction is executed between the code modification and the moment interrupts
    are re-enabled.

    Because stop_machine_text_poke performs the text modification from the first CPU
    decrementing stop_machine_first, modified code executed in thread context is
    also affected by this problem. To explain why, we have to split the CPUs in two
    categories: the CPU that initiates the text modification (calls text_poke_smp)
    and all the others. The scheduler, executed on all other CPUs after
    stop_machine, issues an "iret" core serializing instruction, and therefore
    handles core serialization for all these CPUs. However, the text modification
    initiator can continue its execution on the same thread and access the modified
    text without any scheduler call. Given that the CPU that initiates the code
    modification is not guaranteed to be the one actually performing the code
    modification, it falls into the XMC errata.

    Q: Isn't this executed from an IPI handler, which will return with IRET (a
    serializing instruction) anyway?
    A: No, now stop_machine uses per-cpu workqueue, so that handler will be
    executed from worker threads. There is no iret anymore.
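
    A sketch of the resulting shape of stop_machine_text_poke(), in a
    simplified single-site form with assumed names: every CPU now runs
    the callback; the first to arrive patches, the rest wait, and all of
    them then execute a core-serializing instruction via sync_core().

    static atomic_t stop_machine_first;
    static int wrote_text;

    struct text_poke_params { void *addr; const void *opcode; size_t len; };

    static int stop_machine_text_poke(void *data)
    {
            struct text_poke_params *tpp = data;

            if (atomic_dec_and_test(&stop_machine_first)) {
                    text_poke(tpp->addr, tpp->opcode, tpp->len);
                    smp_wmb();              /* publish the new text ... */
                    wrote_text = 1;         /* ... before signalling completion */
            } else {
                    while (!wrote_text)
                            cpu_relax();
                    smp_mb();               /* order reads of the new text */
            }
            sync_core();    /* core serialization (cpuid) on _each_ CPU */
            return 0;
    }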

    Signed-off-by: Mathieu Desnoyers
    LKML-Reference:
    Reviewed-by: Masami Hiramatsu
    Cc:
    Cc: Arjan van de Ven
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: Andrew Morton
    Cc: Andi Kleen
    Cc: Frederic Weisbecker
    Signed-off-by: H. Peter Anvin

    Mathieu Desnoyers
     

12 Feb, 2011

1 commit

  • Fix this deadlock - we are already holding the mutex:

    =======================================================
    [ INFO: possible circular locking dependency detected ]
    2.6.38-rc4-test+ #1
    -------------------------------------------------------
    bash/1850 is trying to acquire lock:
    (text_mutex){+.+.+.}, at: [] return_to_handler+0x0/0x2f

    but task is already holding lock:
    (smp_alt){+.+...}, at: [] return_to_handler+0x0/0x2f

    which lock already depends on the new lock.

    the existing dependency chain (in reverse order) is:

    -> #2 (smp_alt){+.+...}:
    [] lock_acquire+0xcd/0xf8
    [] __mutex_lock_common+0x4c/0x339
    [] mutex_lock_nested+0x3e/0x43
    [] alternatives_smp_switch+0x77/0x1d8
    [] do_boot_cpu+0xd7/0x762
    [] native_cpu_up+0xe6/0x16a
    [] _cpu_up+0x9d/0xee
    [] cpu_up+0xd3/0xe7
    [] kernel_init+0xe8/0x20a
    [] kernel_thread_helper+0x4/0x10

    -> #1 (cpu_hotplug.lock){+.+.+.}:
    [] lock_acquire+0xcd/0xf8
    [] __mutex_lock_common+0x4c/0x339
    [] mutex_lock_nested+0x3e/0x43
    [] get_online_cpus+0x41/0x55
    [] stop_machine+0x1e/0x3e
    [] text_poke_smp_batch+0x3a/0x3c
    [] arch_optimize_kprobes+0x10d/0x11c
    [] kprobe_optimizer+0x152/0x222
    [] process_one_work+0x1d3/0x335
    [] worker_thread+0x104/0x1a4
    [] kthread+0x9d/0xa5
    [] kernel_thread_helper+0x4/0x10

    -> #0 (text_mutex){+.+.+.}:

    other info that might help us debug this:

    6 locks held by bash/1850:
    #0: (&buffer->mutex){+.+.+.}, at: [] return_to_handler+0x0/0x2f
    #1: (s_active#75){.+.+.+}, at: [] return_to_handler+0x0/0x2f
    #2: (x86_cpu_hotplug_driver_mutex){+.+.+.}, at: [] return_to_handler+0x0/0x2f
    #3: (cpu_add_remove_lock){+.+.+.}, at: [] return_to_handler+0x0/0x2f
    #4: (cpu_hotplug.lock){+.+.+.}, at: [] return_to_handler+0x0/0x2f
    #5: (smp_alt){+.+...}, at: [] return_to_handler+0x0/0x2f

    stack backtrace:
    Pid: 1850, comm: bash Not tainted 2.6.38-rc4-test+ #1
    Call Trace:

    [] print_circular_bug+0xa8/0xb7
    [] mutex_lock_nested+0x3e/0x43
    [] alternatives_smp_unlock+0x3d/0x93
    [] alternatives_smp_switch+0x198/0x1d8
    [] native_cpu_die+0x65/0x95
    [] _cpu_down+0x13e/0x202
    [] sysfs_write_file+0x108/0x144
    [] vfs_write+0xac/0xff
    [] sys_write+0x4a/0x6e

    Reported-by: Steven Rostedt
    Tested-by: Steven Rostedt
    Signed-off-by: Peter Zijlstra
    Cc: mathieu.desnoyers@efficios.com
    Cc: rusty@rustcorp.com.au
    Cc: ananth@in.ibm.com
    Cc: masami.hiramatsu.pt@hitachi.com
    Cc: fweisbec@gmail.com
    Cc: jbeulich@novell.com
    Cc: jbaron@redhat.com
    Cc: mhiramat@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

07 Jan, 2011

1 commit

  • …-linus', 'x86-paravirt-for-linus', 'core-locking-for-linus' and 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

    * 'x86-alternatives-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, suspend: Avoid unnecessary smp alternatives switch during suspend/resume

    * 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86-64, asm: Use fxsaveq/fxrestorq in more places

    * 'x86-hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, hwmon: Add core threshold notification to therm_throt.c

    * 'x86-paravirt-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, paravirt: Use native_halt on a halt, not native_safe_halt

    * 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    locking, lockdep: Convert sprintf_symbol to %pS

    * 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    irq: Better struct irqaction layout

    Linus Torvalds
     

14 Dec, 2010

1 commit

  • During suspend we disable all the non-boot CPUs, and during resume we
    bring them all back again, so there is no need to do
    alternatives_smp_switch() in between.

    On my Core 2 based laptop, this speeds up the suspend path by 15 msec and
    the resume path by 5 msec (the suspend/resume speed-up difference can be
    attributed to the different P-states the CPU is in during suspend/resume).

    Signed-off-by: Suresh Siddha
    LKML-Reference:
    Cc: Rafael J. Wysocki
    Signed-off-by: H. Peter Anvin

    Suresh Siddha
     

07 Dec, 2010

1 commit

  • Introduce text_poke_smp_batch(). This function modifies several
    text areas with one stop_machine() on SMP. Because calling
    stop_machine() is a heavy operation, it is better to aggregate
    text_poke requests.

    ( Note: I've talked with Rusty about this interface, and
    he would prefer not to expand the stop_machine() interface, since
    it is not meant for generic use. )
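
    A usage sketch, assuming the text_poke_param layout this series adds
    to <asm/alternative.h> (site1/site2 and the opcodes are hypothetical):

    struct text_poke_param params[] = {
            { .addr = site1, .opcode = insn1, .len = 5 },
            { .addr = site2, .opcode = insn2, .len = 5 },
    };

    /* several code areas patched under a single stop_machine() */
    text_poke_smp_batch(params, ARRAY_SIZE(params));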

    Signed-off-by: Masami Hiramatsu
    Acked-by: Steven Rostedt
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: Jan Beulich
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

31 Oct, 2010

1 commit

  • …nel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

    * 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    jump label: Add work around to i386 gcc asm goto bug
    x86, ftrace: Use safe noops, drop trap test
    jump_label: Fix unaligned traps on sparc.
    jump label: Make arch_jump_label_text_poke_early() optional
    jump label: Fix error with preempt disable holding mutex
    oprofile: Remove deprecated use of flush_scheduled_work()
    oprofile: Fix the hang while taking the cpu offline
    jump label: Fix deadlock b/w jump_label_mutex vs. text_mutex
    jump label: Fix module __init section race

    * 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86: Check irq_remapped instead of remapping_enabled in destroy_irq()

    Linus Torvalds
     

30 Oct, 2010

2 commits

  • Currently, text_poke_smp() passes a NULL as the third argument to
    __stop_machine(), which will only run stop_machine_text_poke()
    on 1 cpu. Change NULL -> cpu_online_mask, as stop_machine_text_poke()
    is intended to be run on all cpus.

    I actually didn't notice any problems with stop_machine_text_poke()
    only being called on 1 cpu, but found this via code inspection.
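
    The change in sketch form (tpp being the parameter block already
    passed through; for __stop_machine(), NULL means "any one online
    CPU", while cpu_online_mask runs the callback everywhere):

    /* before: stop_machine_text_poke() runs on a single CPU */
    __stop_machine(stop_machine_text_poke, (void *)&tpp, NULL);

    /* after: it runs on all online CPUs, as intended */
    __stop_machine(stop_machine_text_poke, (void *)&tpp, cpu_online_mask);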

    Signed-off-by: Jason Baron
    LKML-Reference:
    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Signed-off-by: H. Peter Anvin

    Jason Baron
     
  • Always use a safe 5-byte noop sequence. Drop the trap test, since it
    is known to return false negatives on some virtualization platforms on
    32 bits. The resulting code is both simpler and safer.

    Cc: Daniel Drake
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Signed-off-by: H. Peter Anvin
    Signed-off-by: Steven Rostedt

    H. Peter Anvin
     

14 Oct, 2010

1 commit

  • Use __stop_machine() in text_poke_smp(): the caller must call
    get_online_cpus() before text_poke_smp(), but stop_machine()
    would take it again, which we don't need.
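
    A sketch of the calling convention this assumes (the caller, e.g.
    the kprobes optimizer, already holds the CPU hotplug lock):

    get_online_cpus();                      /* caller takes the hotplug lock */
    mutex_lock(&text_mutex);
    text_poke_smp(addr, opcode, len);       /* so this uses __stop_machine() */
    mutex_unlock(&text_mutex);
    put_online_cpus();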

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Ananth N Mavinakayanahalli
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    Cc: Mathieu Desnoyers
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

23 Sep, 2010

1 commit

  • Base patch to implement 'jump labeling'. Based on a new 'asm goto' inline
    assembly gcc mechanism, we can now branch to labels from an 'asm goto'
    statement. This allows us to create a 'no-op' fastpath, which can subsequently
    be patched with a jump to the slowpath code. This is useful for code which
    might be rarely used, but which we'd like to be able to call, if needed.
    Tracepoints are the current use case these are being implemented for.
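
    A simplified sketch of the x86 side (the section name and operand
    details are assumptions): place a 5-byte no-op at the branch site
    and record <site, target, key> in a jump table so the no-op can
    later be patched into "jmp label".

    #define JUMP_LABEL(key, label)                                  \
            do {                                                    \
                    asm goto("1:\n\t"                               \
                             ".byte 0x0f,0x1f,0x44,0x00,0x00\n\t"   \
                             ".pushsection __jump_table, \"a\"\n\t" \
                             _ASM_PTR "1b, %l[" #label "], %c0\n\t" \
                             ".popsection\n\t"                      \
                             : : "i" (key) : : label);              \
            } while (0)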

    Acked-by: David S. Miller
    Signed-off-by: Jason Baron
    LKML-Reference:

    [ cleaned up some formatting ]

    Signed-off-by: Steven Rostedt

    Jason Baron
     

21 Sep, 2010

2 commits


14 Jul, 2010

1 commit


30 Apr, 2010

1 commit


29 Apr, 2010

1 commit


30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming availability. As this
    conversion needs to touch a large number of source files, the
    following script is used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.
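
    For instance, a hypothetical .c file that only calls kmalloc() ends
    up with the direct include instead of relying on the implicit chain:

    #include <linux/slab.h>         /* kmalloc()/kfree(), included directly */

    static int *make_counter(void)
    {
            return kmalloc(sizeof(int), GFP_KERNEL);
    }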

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, while for others adding it to an
    implementation .h or embedding .c file was more appropriate. This
    step added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers which should be easily discoverable on most builds of the
    specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

06 Mar, 2010

1 commit

  • …nel/git/tip/linux-2.6-tip

    * 'perf-probes-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86: Issue at least one memory barrier in stop_machine_text_poke()
    perf probe: Correct probe syntax on command line help
    perf probe: Add lazy line matching support
    perf probe: Show more lines after last line
    perf probe: Check function address range strictly in line finder
    perf probe: Use libdw callback routines
    perf probe: Use elfutils-libdw for analyzing debuginfo
    perf probe: Rename probe finder functions
    perf probe: Fix bugs in line range finder
    perf probe: Update perf probe document
    perf probe: Do not show --line option without dwarf support
    kprobes: Add documents of jump optimization
    kprobes/x86: Support kprobes jump optimization on x86
    x86: Add text_poke_smp for SMP cross modifying code
    kprobes/x86: Cleanup save/restore registers
    kprobes/x86: Boost probes when reentering
    kprobes: Jump optimization sysctl interface
    kprobes: Introduce kprobes jump optimization
    kprobes: Introduce generic insn_slot framework
    kprobes/x86: Cleanup RELATIVEJUMP_INSTRUCTION to RELATIVEJUMP_OPCODE

    Linus Torvalds
     

04 Mar, 2010

1 commit


01 Mar, 2010

1 commit

  • * 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86: Mark atomic irq ops raw for 32bit legacy
    x86: Merge show_regs()
    x86: Macroise x86 cache descriptors
    x86-32: clean up rwsem inline asm statements
    x86: Merge asm/atomic_{32,64}.h
    x86: Sync asm/atomic_32.h and asm/atomic_64.h
    x86: Split atomic64_t functions into separate headers
    x86-64: Modify memcpy()/memset() alternatives mechanism
    x86-64: Modify copy_user_generic() alternatives mechanism
    x86: Lift restriction on the location of FIX_BTMAP_*
    x86, core: Optimize hweight32()

    Linus Torvalds
     

26 Feb, 2010

2 commits

  • The current lock prefix UP/SMP alternative code doesn't allow
    LOCK_PREFIX to be used in alternatives code.

    This patch solves the problem by adding a new LOCK_PREFIX_ALTERNATIVE_PATCH
    macro that only records the lock prefix location but does not emit
    the prefix.

    The user of this macro can then start any alternative sequence with
    "lock" and have it UP/SMP patched.

    To make this work, the UP/SMP alternative code is changed to do the
    lock/DS prefix switching only if the byte actually contains a lock or
    DS prefix.

    Thus, if an alternative without the "lock" is selected, it will now do
    nothing instead of clobbering the code.
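
    A sketch of the mechanism, using the macro name from this message
    (the section and label details are assumptions): record the address
    of the prefix byte in .smp_locks without emitting the prefix itself,
    so the UP/SMP switcher can patch the "lock" the user writes.

    #define LOCK_PREFIX_ALTERNATIVE_PATCH           \
            ".section .smp_locks, \"a\"\n"          \
            _ASM_ALIGN "\n"                         \
            _ASM_PTR "671f\n"                       \
            ".previous\n"                           \
            "671:"

    static inline void atomic_inc_sketch(atomic_t *v)
    {
            /* starts with an explicit, patchable "lock" prefix */
            asm volatile(LOCK_PREFIX_ALTERNATIVE_PATCH "lock; incl %0"
                         : "+m" (v->counter));
    }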

    Changes in v2:
    - Naming change
    - Change label to not conflict with alternatives

    Signed-off-by: Luca Barbieri
    LKML-Reference:
    Signed-off-by: H. Peter Anvin

    Luca Barbieri
     
  • Add a generic text_poke_smp for SMP which uses stop_machine()
    to synchronize code modification.
    This stop_machine() method is officially described in section 7.1.3,
    "Handling Self- and Cross-Modifying Code", of Intel's Software
    Developer's Manual, Volume 3A.

    Since stop_machine() can't protect code against NMI/MCE, this
    function cannot modify those handlers. Also, this function is
    basically for modifying a single multibyte instruction; for
    modifying multiple instructions, we need additional special
    trap & detour code.
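
    A minimal sketch of the interface in its original, single-site shape
    (simplified types; later commits above batch it and tighten the
    cross-CPU serialization):

    struct text_poke_params { void *addr; const void *opcode; size_t len; };

    static int stop_machine_text_poke(void *data)
    {
            struct text_poke_params *tpp = data;

            text_poke(tpp->addr, tpp->opcode, tpp->len);
            return 0;
    }

    /* patch one multibyte instruction while all other CPUs are parked
     * inside stop_machine(); not usable against NMI/MCE handlers */
    void *text_poke_smp(void *addr, const void *opcode, size_t len)
    {
            struct text_poke_params tpp = { addr, opcode, len };

            stop_machine(stop_machine_text_poke, &tpp, NULL);
            return addr;
    }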

    This code originally comes from the stop_machine() version of the
    immediate values patchset. Thanks Jason and Mathieu!

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Mathieu Desnoyers
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Srikar Dronamraju
    Cc: Christoph Hellwig
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Anders Kaseorg
    Cc: Tim Abbott
    Cc: Andi Kleen
    Cc: Jason Baron
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

08 Feb, 2010

1 commit

  • Fixes these warnings:

    arch/x86/kernel/alternative.c: In function 'alternatives_text_reserved':
    arch/x86/kernel/alternative.c:402: warning: comparison of distinct pointer types lacks a cast
    arch/x86/kernel/alternative.c:402: warning: comparison of distinct pointer types lacks a cast
    arch/x86/kernel/alternative.c:405: warning: comparison of distinct pointer types lacks a cast
    arch/x86/kernel/alternative.c:405: warning: comparison of distinct pointer types lacks a cast

    Caused by:

    2cfa197: ftrace/alternatives: Introducing *_text_reserved functions

    Changes in v2:
    - Use local variables to compare, instead of type casts.
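
    A hypothetical illustration of the fix style, not the exact kernel
    code: holding both ranges in locals of a single pointer type makes
    the comparisons involve identical types, so no casts are needed.

    static int text_overlaps_sketch(const void *start, const void *end,
                                    const u8 *locks, const u8 *locks_end)
    {
            const void *wr_start = locks;   /* one common type ... */
            const void *wr_end   = locks_end;

            return start <= wr_end && end >= wr_start;  /* ... no casts */
    }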

    Reported-by: Ingo Molnar
    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

04 Feb, 2010

1 commit

  • Introduce *_text_reserved functions for checking whether a text
    address range is partially reserved. This patch provides
    checking routines for x86 SMP alternatives and dynamic ftrace.
    Since both subsystems modify fixed pieces of kernel text, they
    should reserve and protect those ranges from other dynamic text
    modifiers, such as kprobes.

    This will also be extended when other subsystems that modify
    fixed pieces of kernel text are introduced. Dynamic text modifiers
    should avoid those ranges.
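
    A sketch of how a dynamic patcher such as kprobes might use these
    checks before touching a text range (function names from this
    commit; can_patch() is hypothetical):

    static int can_patch(void *addr, size_t len)
    {
            void *end = addr + len;

            if (alternatives_text_reserved(addr, end) ||
                ftrace_text_reserved(addr, end))
                    return -EBUSY;  /* owned by another text modifier */
            return 0;
    }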

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Steven Rostedt
    Cc: przemyslaw@pawelczyk.it
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Mathieu Desnoyers
    Cc: Jason Baron
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

30 Dec, 2009

1 commit

  • In order to avoid unnecessary chains of branches, rather than
    implementing copy_user_generic() as a function consisting of
    just a single (possibly patched) branch, instead properly deal
    with patching call instructions in the alternative instructions
    framework, and move the patching into the callers.
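
    A sketch of the core fixup, with assumed field names, as it would
    sit in the apply_alternatives() loop: a relative call's rel32
    operand is encoded against its own address, so copying the
    replacement to the patch site requires re-biasing it.

    memcpy(insnbuf, a->replacement, a->replacementlen);
    if (insnbuf[0] == 0xe8)         /* opcode of "call rel32" */
            *(s32 *)(insnbuf + 1) += a->replacement - a->instr;
    text_poke_early(a->instr, insnbuf, a->replacementlen);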

    As a follow-on, one could also introduce something like
    __EXPORT_SYMBOL_ALT() to avoid patching call sites in modules.

    Signed-off-by: Jan Beulich
    Cc: Nick Piggin
    Cc: Linus Torvalds
    Cc: Andrew Morton
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Jan Beulich
     

14 Sep, 2009

1 commit

  • * 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (22 commits)
    x86: Fix code patching for paravirt-alternatives on 486
    x86, msr: change msr-reg.o to obj-y, and export its symbols
    x86: Use hard_smp_processor_id() to get apic id for AMD K8 cpus
    x86, sched: Workaround broken sched domain creation for AMD Magny-Cours
    x86, mcheck: Use correct cpumask for shared bank4
    x86, cacheinfo: Fixup L3 cache information for AMD multi-node processors
    x86: Fix CPU llc_shared_map information for AMD Magny-Cours
    x86, msr: Fix msr-reg.S compilation with gas 2.16.1, on 32-bit too
    x86: Move kernel_fpu_using to irq_fpu_usable in asm/i387.h
    x86, msr: fix msr-reg.S compilation with gas 2.16.1
    x86, msr: Export the register-setting MSR functions via /dev/*/msr
    x86, msr: Create _on_cpu helpers for {rw,wr}msr_safe_regs()
    x86, msr: Have the _safe MSR functions return -EIO, not -EFAULT
    x86, msr: CFI annotations, cleanups for msr-reg.S
    x86, asm: Make _ASM_EXTABLE() usable from assembly code
    x86, asm: Add 32-bit versions of the combined CFI macros
    x86, AMD: Disable wrongly set X86_FEATURE_LAHF_LM CPUID bit
    x86, msr: Rewrite AMD rd/wrmsr variants
    x86, msr: Add rd/wrmsr interfaces with preset registers
    x86: add specific support for Intel Atom architecture
    ...

    Linus Torvalds
     

11 Sep, 2009

1 commit

  • As reported in and
    , kernels with paravirt-alternatives
    enabled crash in text_poke_early() on at least some 486-class
    processors.

    The problem is that text_poke_early() itself uses inline functions
    affected by paravirt-alternatives and so will modify instructions that
    have already been prefetched. Pentium and later processors will
    invalidate the prefetched instructions in this case, but 486-class
    processors do not.

    Change sync_core() to limit prefetching on 486-class (and 386-class)
    processors, and move the call to sync_core() above the call to the
    modifiable local_irq_restore().
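
    A sketch of the resulting sync_core(), close to the shape of the
    actual fix: 386/486-class CPUs do not speculate, so a jmp suffices
    to flush the prefetch queue; later CPUs serialize with cpuid.

    static inline void sync_core(void)
    {
            int tmp;

    #if defined(CONFIG_M386) || defined(CONFIG_M486)
            if (boot_cpu_data.x86 < 5)
                    /* no speculation: jmp is a prefetch barrier */
                    asm volatile("jmp 1f\n1:\n" ::: "memory");
            else
    #endif
                    /* cpuid is a barrier to speculative execution */
                    asm volatile("cpuid"
                                 : "=a" (tmp) : "0" (1)
                                 : "ebx", "ecx", "edx", "memory");
    }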

    Signed-off-by: Ben Hutchings
    LKML-Reference:
    Signed-off-by: H. Peter Anvin

    Ben Hutchings
     

22 Aug, 2009

1 commit

  • Some of the NOPs tables aren't used on 64-bit, quite some code and
    data is needed post-init for module loading only, and a couple of
    functions aren't used outside that file (i.e. they can be static,
    and don't need to be exported).

    The change to __INITDATA/__INITRODATA is needed to avoid an assembler
    warning.

    Signed-off-by: Jan Beulich
    LKML-Reference:
    Acked-by: Sam Ravnborg
    Signed-off-by: H. Peter Anvin

    Jan Beulich
     

10 Mar, 2009

1 commit


06 Mar, 2009

2 commits