13 Jan, 2012

2 commits

  • Move CMPXCHG_DOUBLE and rename it to HAVE_CMPXCHG_DOUBLE so architectures
    can simply select the option if it is supported.

    Signed-off-by: Heiko Carstens
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
     
  • Move CMPXCHG_LOCAL and rename it to HAVE_CMPXCHG_LOCAL so architectures
    can simply select the option if it is supported.

    Signed-off-by: Heiko Carstens
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
     

26 Jun, 2011

1 commit

  • A simple implementation that only supports the word size and does not
    have a fallback mode (would require a spinlock).

    Add 32 and 64 bit support for cmpxchg_double. cmpxchg_double uses
    the cmpxchg8b or cmpxchg16b instruction on x86 processors to compare
    and swap 2 machine words. This allows lockless algorithms to move more
    context information through critical sections.

    Set a flag CONFIG_CMPXCHG_DOUBLE to signal that support for double word
    cmpxchg detection has been built into the kernel. Note that each subsystem
    using cmpxchg_double has to implement a fallback mechanism as long as
    we offer support for processors that do not implement cmpxchg_double.

    Reviewed-by: H. Peter Anvin
    Cc: Tejun Heo
    Cc: Pekka Enberg
    Signed-off-by: Christoph Lameter
    Link: http://lkml.kernel.org/r/20110601172614.173427964@linux.com
    Signed-off-by: H. Peter Anvin

    Christoph Lameter
     

09 Apr, 2011

1 commit

  • Currently the option resides under X86_EXTENDED_PLATFORM due to historical
    nonstandard A20M# handling. However that is no longer the case and so Elan can
    be treated as part of the standard processor choice Kconfig option.

    Signed-off-by: Ian Campbell
    Link: http://lkml.kernel.org/r/1302245177.31620.47.camel@localhost.localdomain
    Cc: H. Peter Anvin
    Signed-off-by: H. Peter Anvin

    Ian Campbell
     

18 Mar, 2011

2 commits


17 Mar, 2011

1 commit


09 Mar, 2011

1 commit


21 Jan, 2011

1 commit

    The meaning of CONFIG_EMBEDDED has long since become obsolete; the option
    is used to configure any non-standard kernel, with a much larger scope than
    small devices alone.

    This patch renames the option to CONFIG_EXPERT in init/Kconfig and fixes
    references to the option throughout the kernel. A new CONFIG_EMBEDDED
    option is added that automatically selects CONFIG_EXPERT when enabled and
    can be used in the future to isolate options that should only be
    considered for embedded systems (RISC architectures, SLOB, etc).

    Calling the option "EXPERT" more accurately represents its intention: only
    expert users who understand the impact of the configuration changes they
    are making should enable it.

    Reviewed-by: Ingo Molnar
    Acked-by: David Woodhouse
    Signed-off-by: David Rientjes
    Cc: Greg KH
    Cc: "David S. Miller"
    Cc: Jens Axboe
    Cc: Arnd Bergmann
    Cc: Robin Holt
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

18 Dec, 2010

1 commit

  • Provide support as far as the hardware capabilities of the x86 cpus
    allow.

    Define CONFIG_CMPXCHG_LOCAL in Kconfig.cpu to allow core code to test for
    fast cpuops implementations.

    V1->V2:
    - Take out the definition for this_cpu_cmpxchg_8 and move it into
      a separate patch.

    tj:
    - Reordered ops to better follow this_cpu_* organization.
    - Renamed macro temp variables similarly to their existing
      neighbours.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Tejun Heo

    Christoph Lameter
     

18 May, 2010

1 commit

  • * 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, fpu: Use static_cpu_has() to implement use_xsave()
    x86: Add new static_cpu_has() function using alternatives
    x86, fpu: Use the proper asm constraint in use_xsave()
    x86, fpu: Unbreak FPU emulation
    x86: Introduce 'struct fpu' and related API
    x86: Eliminate TS_XSAVE
    x86-32: Don't set ignore_fpu_irq in simd exception
    x86: Merge kernel_math_error() into math_error()
    x86: Merge simd_math_error() into math_error()
    x86-32: Rework cache flush denied handler

    Fix trivial conflict in arch/x86/kernel/process.c

    Linus Torvalds
     

04 May, 2010

1 commit

  • The cache flush denied error is an erratum on some AMD 486 clones. If an invd
    instruction is executed in userspace, the processor calls exception 19 (13 hex)
    instead of #GP (13 decimal). On cpus where XMM is not supported, redirect
    exception 19 to do_general_protection(). Also, remove die_if_kernel(), since
    this was the last user.

    Signed-off-by: Brian Gerst
    LKML-Reference:
    Signed-off-by: H. Peter Anvin

    Brian Gerst
     

26 Mar, 2010

1 commit

  • Support for the PMU's BTS features has been upstreamed in
    v2.6.32, but we still have the old and disabled ptrace-BTS,
    as Linus noticed it not so long ago.

    It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
    regard for other uses (perf) and doesn't provide the flexibility
    needed for perf either.

    Its only users are ptrace-block-step and ptrace-bts; ptrace-bts was
    never used, and ptrace-block-step can be implemented using a much
    simpler approach.

    So axe all 3000 lines of it. That includes the *locked_memory*()
    APIs in mm/mlock.c as well.

    Reported-by: Linus Torvalds
    Signed-off-by: Peter Zijlstra
    Cc: Roland McGrath
    Cc: Oleg Nesterov
    Cc: Markus Metzger
    Cc: Steven Rostedt
    Cc: Andrew Morton
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

01 Mar, 2010

1 commit


14 Jan, 2010

1 commit

    This one is much faster than the spinlock based fallback rwsem code,
    with certain artificial benchmarks having shown 300%+ improvement on
    threaded page faults etc.

    Again, note the 32767-thread limit here. So this really does need that
    whole "make rwsem_count_t be 64-bit and fix the BIAS values to match"
    extension on top of it, but that is conceptually a totally independent
    issue.

    NOT TESTED! The original patch that this all was based on was tested by
    KAMEZAWA Hiroyuki, but maybe I screwed up something when I created the
    cleaned-up series, so caveat emptor..

    Also note that it _may_ be a good idea to mark some more registers
    clobbered on x86-64 in the inline asms instead of saving/restoring them.
    They are inline functions, but they are only used in places where there
    are not a lot of live registers _anyway_, so doing for example the
    clobbers of %r8-%r11 in the asm wouldn't make the fast-path code any
    worse, and would make the slow-path code smaller.

    (Not that the slow-path really matters to that degree. Saving a few
    unnecessary registers is the _least_ of our problems when we hit the slow
    path. The instruction/cycle counting really only matters in the fast
    path).

    Signed-off-by: Linus Torvalds
    LKML-Reference:
    Signed-off-by: H. Peter Anvin

    Linus Torvalds
     

06 Jan, 2010

1 commit

  • This reverts commit ae1b22f6e46c03cede7cea234d0bf2253b4261cf.

    As Linus said in 982d007a6ee: "There was something really messy about
    cmpxchg8b and clone CPU's, so if you enable it on other CPUs later, do it
    carefully."

    This breaks lguest for those configs, but we can fix that by emulating
    if we have to.

    Fixes: http://bugzilla.kernel.org/show_bug.cgi?id=14884
    Signed-off-by: Rusty Russell
    Cc: stable@kernel.org
    Signed-off-by: Linus Torvalds

    Rusty Russell
     

09 Dec, 2009

1 commit

  • * 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (36 commits)
    x86, mm: Correct the implementation of is_untracked_pat_range()
    x86/pat: Trivial: don't create debugfs for memtype if pat is disabled
    x86, mtrr: Fix sorting of mtrr after subtracting
    x86: Move find_smp_config() earlier and avoid bootmem usage
    x86, platform: Change is_untracked_pat_range() to bool; cleanup init
    x86: Change is_ISA_range() into an inline function
    x86, mm: is_untracked_pat_range() takes a normal semiclosed range
    x86, mm: Call is_untracked_pat_range() rather than is_ISA_range()
    x86: UV SGI: Don't track GRU space in PAT
    x86: SGI UV: Fix BAU initialization
    x86, numa: Use near(er) online node instead of roundrobin for NUMA
    x86, numa, bootmem: Only free bootmem on NUMA failure path
    x86: Change crash kernel to reserve via reserve_early()
    x86: Eliminate redundant/contradicting cache line size config options
    x86: When cleaning MTRRs, do not fold WP into UC
    x86: remove "extern" from function prototypes in
    x86, mm: Report state of NX protections during boot
    x86, mm: Clean up and simplify NX enablement
    x86, pageattr: Make set_memory_(x|nx) aware of NX support
    x86, sleep: Always save the value of EFER
    ...

    Fix up conflicts (added both iommu_shutdown and is_untracked_pat_range
    to 'struct x86_platform_ops') in
    arch/x86/include/asm/x86_init.h
    arch/x86/kernel/x86_init.c

    Linus Torvalds
     

06 Dec, 2009

1 commit


19 Nov, 2009

1 commit

  • Rather than having X86_L1_CACHE_BYTES and X86_L1_CACHE_SHIFT
    (with inconsistent defaults), just having the latter suffices as
    the former can be easily calculated from it.

    To be consistent, also change X86_INTERNODE_CACHE_BYTES to
    X86_INTERNODE_CACHE_SHIFT, and set it to 7 (128 bytes) for NUMA
    to account for last level cache line size (which here matters
    more than L1 cache line size).

    Finally, make sure the default value for X86_L1_CACHE_SHIFT,
    when X86_GENERIC is selected, is being seen before that for the
    individual CPU model options (other than on x86-64, where
    GENERIC_CPU is part of the choice construct, X86_GENERIC is a
    separate option on ix86).

    Signed-off-by: Jan Beulich
    Acked-by: Ravikiran Thirumalai
    Acked-by: Nick Piggin
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Jan Beulich
     

26 Oct, 2009

1 commit

  • Commit 79e1dd05d1a22 "x86: Provide an alternative() based
    cmpxchg64()" broke lguest, even on systems which have cmpxchg8b
    support. The emulation code gets used until alternatives get
    run, but it contains native instructions, not their paravirt
    alternatives.

    The simplest fix is to turn this code off except for 386 and 486
    builds.

    Reported-by: Johannes Stezenbach
    Signed-off-by: Rusty Russell
    Acked-by: H. Peter Anvin
    Cc: lguest@ozlabs.org
    Cc: Arjan van de Ven
    Cc: Jeremy Fitzhardinge
    Cc: Linus Torvalds
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Rusty Russell
     

03 Oct, 2009

1 commit


01 Oct, 2009

1 commit

    Try to avoid the 'alternative()' code when we can statically
    determine that cmpxchg8b is fine. We already have
    CONFIG_X86_CMPXCHG64 for that (enabled by PAE support), and we could
    easily also enable it for some of the CPU cases.

    Note, this patch only adds CMPXCHG8B for the obvious Intel CPU's,
    not for others. (There was something really messy about cmpxchg8b
    and clone CPU's, so if you enable it on other CPUs later, do it
    carefully.)

    If we avoid that asm-alternative thing when we can assume the
    instruction exists, we'll generate less support crud, and we'll
    avoid the whole issue with that extra 'nop' for padding instruction
    sizes etc.

    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Linus Torvalds
     

23 Aug, 2009

1 commit


16 Apr, 2009

1 commit

  • Oleg Nesterov found a couple of races in the ptrace-bts code
    and fixes are queued up for it but they did not get ready in time
    for the merge window. We'll merge them in v2.6.31 - until then
    mark the feature as CONFIG_BROKEN. There's no user-space yet
    making use of this so it's not a big issue.

    Cc:
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

14 Mar, 2009

1 commit

    There should be no difference, except:

    * the 64bit variant now also initializes the padlock unit.
    * ->c_early_init() is executed again from ->c_init().
    * the 64bit fixups made it into the 32bit path.

    Signed-off-by: Sebastian Andrzej Siewior
    Cc: herbert@gondor.apana.org.au
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Sebastian Andrzej Siewior
     

06 Feb, 2009

2 commits


05 Feb, 2009

1 commit


21 Jan, 2009

1 commit

  • Fix:

    arch/x86/mm/tlb.c:47: error: ‘CONFIG_X86_INTERNODE_CACHE_BYTES’ undeclared here (not in a function)

    The CONFIG_X86_INTERNODE_CACHE_BYTES symbol is only defined on 64-bit,
    because vsmp support is 64-bit only. Define it on 32-bit too - where it
    will always be equal to X86_L1_CACHE_BYTES.

    Also move the default of X86_L1_CACHE_BYTES (which is separate from the
    more commonly used L1_CACHE_SHIFT kconfig symbol) from 128 bytes to
    64 bytes.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

14 Jan, 2009

1 commit

    Right now the generic cacheline size is 128 bytes - that is wasteful
    when structures are aligned, as all modern x86 CPUs have an (effective)
    cacheline size of 64 bytes.

    It was set to 128 bytes due to some cacheline aliasing problems on
    older P4 systems, but those are many years old and we don't optimize
    for them anymore. (They'll still get the 128 bytes cacheline size if
    the kernel is specifically built for Pentium 4.)

    Signed-off-by: Ingo Molnar
    Acked-by: Arjan van de Ven

    Ingo Molnar
     

06 Jan, 2009

1 commit


26 Nov, 2008

1 commit

  • Impact: add new ftrace plugin

    A prototype for a BTS ftrace plug-in.

    The tracer collects branch trace in a cyclic buffer for each cpu.

    The tracer is not configurable and the trace for each snapshot is
    appended when doing cat /debug/tracing/trace.

    This is a proof of concept that will be extended with future patches
    to become a (hopefully) useful tool.

    Signed-off-by: Markus Metzger
    Signed-off-by: Ingo Molnar

    Markus Metzger
     

28 Oct, 2008

1 commit


13 Oct, 2008

2 commits


12 Oct, 2008

2 commits


10 Sep, 2008

3 commits