17 Dec, 2010

1 commit

  • Use this_cpu_ops to reduce code size and simplify things in various
    places (a before/after sketch follows this entry).

    V3->V4:
    Move the instance of this_cpu_inc_return to a later patchset so that
    this patch can be applied without infrastructure changes.

    Cc: Jeremy Fitzhardinge
    Acked-by: H. Peter Anvin
    Signed-off-by: Christoph Lameter
    Signed-off-by: Tejun Heo

    Christoph Lameter
     
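    As an illustration of the kind of simplification this_cpu_ops enable,
    here is a minimal before/after sketch; it is not taken from the patch
    itself, and the counter name is assumed for illustration:

    #include <linux/percpu.h>

    DEFINE_PER_CPU(unsigned int, event_count);

    /* Before: open-coded preemption handling around the access. */
    static void count_event_old(void)
    {
            unsigned int *p = &get_cpu_var(event_count); /* preempt off */
            (*p)++;
            put_cpu_var(event_count);                    /* preempt on */
    }

    /* After: this_cpu_inc() is a single preempt-safe operation, which
     * on x86 compiles to one %gs-prefixed incl instruction. */
    static void count_event_new(void)
    {
            this_cpu_inc(event_count);
    }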

04 Feb, 2009

1 commit

  • Impact: Fix race condition

    xen_mc_batch() has a small preemption race: it takes the address of
    a percpu variable immediately before disabling interrupts, thereby
    leaving a small window in which we may migrate to another CPU and
    save the flags in the wrong percpu variable. Disable interrupts
    before saving the old flags into the percpu variable (see the
    sketch below).

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: H. Peter Anvin

    Jeremy Fitzhardinge
     
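    The shape of the race and of the fix can be sketched as follows; the
    per-cpu variable name xen_mc_irq_flags is assumed for illustration,
    and the code is a sketch of the pattern, not the verbatim patch:

    /* Racy: &__get_cpu_var(xen_mc_irq_flags) is evaluated while
     * preemption is still enabled, so the task may migrate before
     * local_irq_save() writes the flags, storing them into another
     * CPU's slot. */
    local_irq_save(__get_cpu_var(xen_mc_irq_flags));

    /* Fixed: disable interrupts first, then store the saved flags into
     * what is now guaranteed to be the local CPU's slot. */
    unsigned long flags;

    local_irq_save(flags);
    __get_cpu_var(xen_mc_irq_flags) = flags;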

16 Jan, 2009

1 commit

  • This is an optimization and a cleanup; it adds the following new
    generic percpu methods:

    percpu_read()
    percpu_write()
    percpu_add()
    percpu_sub()
    percpu_and()
    percpu_or()
    percpu_xor()

    and implements support for them on x86 (other architectures fall
    back to a default implementation); a sketch of the x86 accessor
    follows this entry.

    The advantage is that, for example, to read a local percpu variable,
    instead of this sequence:

    return __get_cpu_var(var);

    ffffffff8102ca2b: 48 8b 14 fd 80 09 74 mov -0x7e8bf680(,%rdi,8),%rdx
    ffffffff8102ca32: 81
    ffffffff8102ca33: 48 c7 c0 d8 59 00 00 mov $0x59d8,%rax
    ffffffff8102ca3a: 48 8b 04 10 mov (%rax,%rdx,1),%rax

    We can get a single instruction by using the optimized variants:

    return percpu_read(var);

    ffffffff8102ca3f: 65 48 8b 05 91 8f fd mov %gs:0x7efd8f91(%rip),%rax

    I also cleaned up the x86-specific APIs and made the x86 code use
    these new generic percpu primitives.

    tj: * fixed generic percpu_sub() definition as Roel Kluin pointed out
    * added percpu_and() for completeness's sake
    * made generic percpu ops atomic against preemption

    Signed-off-by: Ingo Molnar
    Signed-off-by: Tejun Heo

    Ingo Molnar
     
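    A minimal sketch of how such a single-instruction accessor can be
    built on x86-64, where the per-cpu base lives in the %gs segment;
    the kernel's real macro additionally dispatches on the operand size,
    so this is illustrative only:

    /* Read a per-cpu variable with one mov through the %gs segment
     * base, matching the single-instruction disassembly above. */
    #define percpu_read_sketch(var)                 \
    ({                                              \
            typeof(var) ret__;                      \
            asm("mov %%gs:%1, %0"                   \
                : "=r" (ret__)                      \
                : "m" (var));                       \
            ret__;                                  \
    })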

25 Jun, 2008

1 commit

  • Some Xen hypercalls accept an array of operations to work on. In
    general this is because it's more efficient for the hypercall to do
    the work all at once rather than as separate hypercalls (even
    batched as a multicall).

    This patch adds a mechanism (xen_mc_extend_args()) to allocate more
    argument space for the last-issued multicall, in order to extend its
    argument list (see the sketch below).

    The user of this mechanism is xen/mmu.c, which uses it to extend the
    args array of mmu_update. This is particularly valuable when doing
    the update for a large mprotect, which goes via
    ptep_modify_prot_commit(), but it also manages to batch updates to
    pgd/pmds as well.

    Signed-off-by: Jeremy Fitzhardinge
    Acked-by: Linus Torvalds
    Acked-by: Hugh Dickins
    Signed-off-by: Ingo Molnar

    Jeremy Fitzhardinge
     
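    The core of such a mechanism can be sketched as follows, assuming a
    per-cpu multicall buffer with an entries[] array and a shared args[]
    area; the structure and field names are illustrative rather than the
    kernel's exact ones:

    /* Hand out extra argument space for the last-queued multicall, or
     * return an empty result so the caller issues a fresh call. */
    struct multicall_space xen_mc_extend_args(unsigned long op, size_t size)
    {
            struct mc_buffer *b = &__get_cpu_var(mc_buffer);
            struct multicall_space ret = { NULL, NULL };

            /* Only the last-issued call can be extended, and only if it
             * is the same operation; otherwise the extra arguments would
             * not belong to the entry being extended. */
            if (b->mcidx == 0 || b->entries[b->mcidx - 1].op != op)
                    return ret;

            /* There must also be room left in the shared argument area. */
            if (b->argidx + size > MC_ARGS)
                    return ret;

            ret.mc = &b->entries[b->mcidx - 1];
            ret.args = &b->args[b->argidx];
            b->argidx += size;
            return ret;
    }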

17 Oct, 2007

2 commits

  • This adds a mechanism to register a callback function to be called
    once a batch of hypercalls has been issued (see the sketch below).
    It is typically used to unlock things which must remain locked until
    the hypercall has taken place.

    [ Stable folks: pre-req for 2.6.23 bugfix "xen: deal with stale cr3
    values when unpinning pagetables" ]

    Signed-off-by: Jeremy Fitzhardinge
    Cc: Stable Kernel

    Jeremy Fitzhardinge
     
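    In outline, registering and running such callbacks might look like
    the sketch below; the buffer layout and field names are assumed for
    illustration:

    struct callback {
            void (*fn)(void *);
            void *data;
    };

    /* Queue a callback to run after the current batch is flushed. */
    void xen_mc_callback(void (*fn)(void *), void *data)
    {
            struct mc_buffer *b = &__get_cpu_var(mc_buffer);
            struct callback *cb;

            /* If the callback array is full, flush the pending batch
             * first; that runs the earlier callbacks and makes room. */
            if (b->cbidx == MC_BATCH)
                    xen_mc_flush();

            cb = &b->callbacks[b->cbidx++];
            cb->fn = fn;
            cb->data = data;
    }

    /* At the end of the flush, once the hypercalls have been issued,
     * each registered callback is invoked exactly once:
     *
     *         for (i = 0; i < b->cbidx; i++)
     *                 b->callbacks[i].fn(b->callbacks[i].data);
     *         b->cbidx = 0;
     */
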
  • Currently, the set_lazy_mode pv_op is overloaded with 5 functions:
    1. enter lazy cpu mode
    2. leave lazy cpu mode
    3. enter lazy mmu mode
    4. leave lazy mmu mode
    5. flush pending batched operations

    This complicates each paravirt backend, since it needs to deal with
    all the possible state transitions, handling flushing, etc. In
    particular, flushing is quite distinct from the other 4 functions, and
    seems to just cause complication.

    This patch removes the set_lazy_mode operation and adds "enter" and
    "leave" lazy mode operations on mmu_ops and cpu_ops. All the logic
    associated with entering and leaving lazy states is now in common
    code (basically BUG_ONs to make sure that no mode is current when
    entering a lazy mode, and that the mode being left is the current
    one). Also, flushing is handled in a common way, by simply leaving
    and re-entering the lazy mode; a sketch of the common logic follows
    this entry.

    The result is that the Xen, lguest and VMI lazy mode implementations
    are much simpler.

    Signed-off-by: Jeremy Fitzhardinge
    Cc: Andi Kleen
    Cc: Zach Amsden
    Cc: Rusty Russell
    Cc: Avi Kivity
    Cc: Anthony Liguori
    Cc: "Glauber de Oliveira Costa"
    Cc: Jun Nakajima

    Jeremy Fitzhardinge
     
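    The common enter/leave logic described above can be sketched roughly
    as follows; the enum and per-cpu variable names are assumed for
    illustration:

    enum paravirt_lazy_mode {
            PARAVIRT_LAZY_NONE,
            PARAVIRT_LAZY_MMU,
            PARAVIRT_LAZY_CPU,
    };

    static DEFINE_PER_CPU(enum paravirt_lazy_mode, paravirt_lazy_mode);

    /* Entering: no mode may already be active on this CPU. */
    void paravirt_enter_lazy(enum paravirt_lazy_mode mode)
    {
            BUG_ON(__get_cpu_var(paravirt_lazy_mode) != PARAVIRT_LAZY_NONE);
            __get_cpu_var(paravirt_lazy_mode) = mode;
    }

    /* Leaving: the mode being left must be the one that is active. */
    void paravirt_leave_lazy(enum paravirt_lazy_mode mode)
    {
            BUG_ON(__get_cpu_var(paravirt_lazy_mode) != mode);
            __get_cpu_var(paravirt_lazy_mode) = PARAVIRT_LAZY_NONE;
    }

    /* Flushing pending operations is then just leaving and immediately
     * re-entering the currently active lazy mode. */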

11 Oct, 2007

1 commit