15 Feb, 2009

5 commits

  • kvm->slots_lock is outer to kvm->lock, so take slots_lock
    in kvm_vm_ioctl_assign_device() before taking kvm->lock,
    rather than taking it in kvm_iommu_map_memslots().

    Cc: stable@kernel.org
    Signed-off-by: Mark McLoughlin
    Acked-by: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Mark McLoughlin
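
    A minimal sketch of the locking order this fix establishes. The lock types
    (rwsem slots_lock outside, mutex kvm->lock inside) follow the kvm code of
    this period; the body is elided and the function is illustrative only.

        static int assign_device_lock_order_sketch(struct kvm *kvm)
        {
            int r = 0;

            down_read(&kvm->slots_lock);   /* outer lock, protects memslots      */
            mutex_lock(&kvm->lock);        /* inner lock, protects assigned devs */

            /* set up the assigned device and IOMMU mappings here, so that
             * kvm_iommu_map_memslots() no longer takes slots_lock itself */

            mutex_unlock(&kvm->lock);
            up_read(&kvm->slots_lock);
            return r;
        }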
     
  • Missing buckets and wrong parameter for free_irq()

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
     
  • In the past, kvm_get_kvm() and kvm_put_kvm() were called in the assigned
    device's irq handler and in interrupt_work, in order to prevent
    cancel_work_sync() in kvm_free_assigned_irq() from seeing an illegal state
    while waiting for interrupt_work to finish. But this was tricky and still
    had two problems:

    1. A bug ignored two conditions under which cancel_work_sync() returns
    true, resulting in an additional kvm_put_kvm().

    2. If the interrupt type is MSI, there is a window between
    cancel_work_sync() and free_irq() in which the interrupt can be injected
    again.

    This patch discards the reference counting used for the irq handler and
    interrupt_work, and ensures a legal state by moving the free function to
    the very beginning of kvm_destroy_vm(). It fixes the second bug by
    disabling the irq before cancel_work_sync(), which may result in a nested
    disable of the irq, but that is fine since we are about to free it.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
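
    A minimal sketch of the teardown ordering described above, assuming the
    assigned-device struct carries host_irq and interrupt_work fields as in
    the kvm code of this period; illustrative only.

        static void free_assigned_irq_sketch(struct kvm_assigned_dev_kernel *dev)
        {
            /* Keep the interrupt from being raised again; a nested disable is
             * harmless because the line is freed immediately afterwards. */
            disable_irq(dev->host_irq);

            /* Now the work item cannot be re-queued by the handler. */
            cancel_work_sync(&dev->interrupt_work);

            free_irq(dev->host_irq, dev);
        }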
     
  • kvm_arch_sync_events is introduced to quiet down all other events that may
    happen concurrently with the VM destroy process, such as the IRQ handler
    and the work struct for assigned devices.

    Because kvm_arch_sync_events is called at the very beginning of
    kvm_destroy_vm(), the state of KVM at that point is still legal and
    provides an environment in which to quiet down those other events.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
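
    A sketch of where the new hook sits; everything after the first call is
    the pre-existing teardown, elided here.

        static void kvm_destroy_vm_sketch(struct kvm *kvm)
        {
            kvm_arch_sync_events(kvm);   /* quiesce IRQ handlers and work
                                          * structs while the VM state is
                                          * still legal                    */

            /* ... existing teardown: remove from vm_list, free assigned
             *     devices, free memslots, arch destroy ...               */
        }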
     
  • The destructor for huge pages uses the backing inode for adjusting
    hugetlbfs accounting.

    Hugepage mappings are destroyed by exit_mmap, after
    mmu_notifier_release, so there are no notifications through
    unmap_hugepage_range at this point.

    The hugetlbfs inode can be freed with pages backed by it referenced
    by the shadow. When the shadow releases its reference, the huge page
    destructor will access a now freed inode.

    Implement the release operation for kvm mmu notifiers to release page
    refs before the hugetlbfs inode is gone.

    Signed-off-by: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Marcelo Tosatti
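
    A sketch of the release hook; kvm_arch_flush_shadow() is the contemporary
    helper that drops the shadow's page references, but the wiring shown here
    is illustrative.

        static void kvm_mmu_notifier_release(struct mmu_notifier *mn,
                                             struct mm_struct *mm)
        {
            struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);

            /* Drop shadow page refs now, while the hugetlbfs inode backing
             * the guest memory is still alive. */
            kvm_arch_flush_shadow(kvm);
        }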
     

03 Jan, 2009

6 commits


31 Dec, 2008

29 commits

  • If an assigned device shares a guest irq with an emulated
    device then we currently interpret an ack generated by the
    emulated device as originating from the assigned device
    leading to e.g. "Unbalanced enable for IRQ 4347" from the
    enable_irq() in kvm_assigned_dev_ack_irq().

    The fix is fairly simple - don't enable the physical device
    irq unless it was previously disabled.

    Of course, this can still lead to a situation where a
    non-assigned device ACK can cause the physical device irq to
    be reenabled before the device was serviced. However, being
    level sensitive, the interrupt will merely be regenerated.

    Signed-off-by: Mark McLoughlin
    Signed-off-by: Avi Kivity

    Mark McLoughlin
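
    A sketch of the bookkeeping this fix needs: the hard-irq handler records
    that it disabled the host line, and the ack notifier only re-enables it in
    that case. The host_irq_disabled flag and the struct name are assumptions
    modelled on the kvm code of this period.

        static void assigned_dev_intr_sketch(struct kvm_assigned_dev_kernel *dev)
        {
            disable_irq_nosync(dev->host_irq);
            dev->host_irq_disabled = true;   /* remember that *we* disabled it */
        }

        static void assigned_dev_ack_sketch(struct kvm_assigned_dev_kernel *dev)
        {
            /* An ack coming from an emulated device sharing the guest irq
             * no longer re-enables a line we never disabled. */
            if (dev->host_irq_disabled) {
                dev->host_irq_disabled = false;
                enable_irq(dev->host_irq);
            }
        }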
     
  • Signed-off-by: Avi Kivity

    Avi Kivity
     
  • Userspace might need to act differently.

    Signed-off-by: Avi Kivity

    Avi Kivity
     
  • This changes cpus_hardware_enabled from a cpumask_t to a cpumask_var_t:
    equivalent for CONFIG_CPUMASK_OFFSTACK=n, otherwise dynamically allocated.

    Signed-off-by: Rusty Russell
    Signed-off-by: Avi Kivity

    Rusty Russell
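
    A sketch of the pattern for a module-scope mask like cpus_hardware_enabled:
    with CONFIG_CPUMASK_OFFSTACK=n the alloc/free calls compile away, otherwise
    the storage is heap-allocated. The init/exit function names are
    illustrative.

        static cpumask_var_t cpus_hardware_enabled;

        static int hardware_mask_init_sketch(void)
        {
            if (!alloc_cpumask_var(&cpus_hardware_enabled, GFP_KERNEL))
                return -ENOMEM;             /* only possible with OFFSTACK=y */
            cpumask_clear(cpus_hardware_enabled);
            return 0;
        }

        static void hardware_mask_exit_sketch(void)
        {
            free_cpumask_var(cpus_hardware_enabled);
        }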
     
  • We're getting rid of on-stack cpumasks for large NR_CPUS.

    1) Use cpumask_var_t/alloc_cpumask_var.
    2) smp_call_function_mask -> smp_call_function_many
    3) cpus_clear, cpus_empty, cpu_set -> cpumask_clear, cpumask_empty,
    cpumask_set_cpu.

    This actually generates slightly smaller code than the old one with
    CONFIG_CPUMASK_OFFSTACK=n. (gcc knows that cpus cannot be NULL in
    that case, where cpumask_var_t is cpumask_t[1]).

    Signed-off-by: Rusty Russell
    Signed-off-by: Avi Kivity

    Rusty Russell
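
    A sketch of the old-to-new idioms listed above, kept independent of the
    kvm internals; the function itself is illustrative.

        static void ipi_others_sketch(void (*func)(void *info), void *info)
        {
            int cpu, me;
            cpumask_var_t cpus;

            if (!alloc_cpumask_var(&cpus, GFP_KERNEL))  /* was an on-stack cpumask_t */
                return;

            cpumask_clear(cpus);                         /* was cpus_clear()          */
            me = get_cpu();
            for_each_online_cpu(cpu)
                if (cpu != me)
                    cpumask_set_cpu(cpu, cpus);          /* was cpu_set()             */

            if (!cpumask_empty(cpus))                    /* was cpus_empty()          */
                smp_call_function_many(cpus, func, info, 1);
                                                         /* was smp_call_function_mask() */
            put_cpu();
            free_cpumask_var(cpus);
        }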
     
  • Avi said:
    > Wow, code duplication from Rusty. Things must be bad.

    Something about glass houses comes to mind. But instead, a patch.

    Signed-off-by: Rusty Russell
    Signed-off-by: Avi Kivity

    Rusty Russell
     
  • There is a race between a "close of the file descriptors" and module
    unload in the kvm module.

    You can easily trigger this problem by applying this debug patch:
    >--- kvm.orig/virt/kvm/kvm_main.c
    >+++ kvm/virt/kvm/kvm_main.c
    >@@ -648,10 +648,14 @@ void kvm_free_physmem(struct kvm *kvm)
    > kvm_free_physmem_slot(&kvm->memslots[i], NULL);
    > }
    >
    >+#include <linux/delay.h>
    > static void kvm_destroy_vm(struct kvm *kvm)
    > {
    > struct mm_struct *mm = kvm->mm;
    >
    >+ printk("off1\n");
    >+ msleep(5000);
    >+ printk("off2\n");
    > spin_lock(&kvm_lock);
    > list_del(&kvm->vm_list);
    > spin_unlock(&kvm_lock);

    and killing the userspace, followed by an rmmod.

    The problem is that kvm_destroy_vm can run while the module count
    is 0. That means, you can remove the module while kvm_destroy_vm
    is running. But kvm_destroy_vm is part of the module text. This
    causes a kernel oops. The race exists without the msleep but is much
    harder to trigger.

    This patch requires the fix for anon_inodes (anon_inodes: use fops->owner
    for module refcount).
    With this patch, we can set the owner of all anonymous KVM inodes file
    operations. The VFS will then control the KVM module refcount as long as there
    is an open file. kvm_destroy_vm will be called by the release function of the
    last closed file - before the VFS drops the module refcount.

    Signed-off-by: Christian Borntraeger
    Signed-off-by: Avi Kivity

    Christian Borntraeger
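
    A sketch of the owner wiring; the function and variable names mirror the
    kvm module but are shown illustratively. With fops->owner set, the VFS
    takes and drops the module reference for every open file, so rmmod can no
    longer race with kvm_destroy_vm().

        static int kvm_vm_release_sketch(struct inode *inode, struct file *filp)
        {
            /* kvm_destroy_vm() runs here, before the VFS drops the module
             * reference it holds via filp->f_op->owner. */
            return 0;
        }

        static struct file_operations kvm_vm_fops_sketch = {
            .release = kvm_vm_release_sketch,
        };

        void kvm_set_fops_owner_sketch(struct module *module)
        {
            /* The actual fix sets .owner on the /dev/kvm, VM and VCPU fops. */
            kvm_vm_fops_sketch.owner = module;
        }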
     
  • Right now, KVM does not remove a slot when we do a
    register ioctl for size 0 (which would be the expected behaviour).

    Instead, we only mark it as empty, but keep all bitmaps
    and allocated data structures present. This completely
    prevents us from reusing that same slot again
    for mapping a different piece of memory.

    In this patch, we destroy rmaps, and vfree() the
    pointers that used to hold the dirty bitmap, rmap
    and lpage_info structures.

    Signed-off-by: Glauber Costa
    Signed-off-by: Avi Kivity

    Glauber Costa
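
    A sketch of what "destroy" means here, using the per-slot fields of this
    era's struct kvm_memory_slot (rmap, dirty_bitmap, lpage_info); the helper
    name is illustrative.

        static void free_memslot_sketch(struct kvm_memory_slot *slot)
        {
            vfree(slot->rmap);
            vfree(slot->dirty_bitmap);
            vfree(slot->lpage_info);

            slot->rmap = NULL;
            slot->dirty_bitmap = NULL;
            slot->lpage_info = NULL;
            slot->npages = 0;
        }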
     
  • Split out the logic corresponding to undoing assign_irq() and
    clean it up a bit.

    Signed-off-by: Mark McLoughlin
    Signed-off-by: Avi Kivity

    Mark McLoughlin
     
  • Make sure kvm_request_irq_source_id() never returns
    KVM_USERSPACE_IRQ_SOURCE_ID.

    Likewise, check that kvm_free_irq_source_id() never accepts
    KVM_USERSPACE_IRQ_SOURCE_ID.

    Signed-off-by: Mark McLoughlin
    Signed-off-by: Avi Kivity

    Mark McLoughlin
     
  • Set assigned_dev->irq_source_id to -1 so that we can avoid freeing
    a source ID which we never allocated.

    Signed-off-by: Mark McLoughlin
    Signed-off-by: Avi Kivity

    Mark McLoughlin
     
  • We never pass a NULL notifier pointer here, but we may well
    pass a notifier struct which hasn't previously been
    registered.

    Guard against this by using hlist_del_init() which will
    not do anything if the node hasn't been added to the list
    and, when removing the node, will ensure that a subsequent
    call to hlist_del_init() will be fine too.

    Fixes an oops seen when an assigned device is freed before
    an IRQ is assigned to it.

    Signed-off-by: Mark McLoughlin
    Signed-off-by: Avi Kivity

    Mark McLoughlin
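
    A sketch of why hlist_del_init() is the right call here, with a
    hypothetical notifier struct standing in for kvm_irq_ack_notifier.

        struct ack_notifier_sketch {
            struct hlist_node link;     /* zero-initialised with the device */
        };

        static void unregister_ack_notifier_sketch(struct ack_notifier_sketch *n)
        {
            /* No-op if the node was never hlist_add_head()'d (pprev is NULL),
             * and re-initialises it so a second unregister is also safe. */
            hlist_del_init(&n->link);
        }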
     
  • We will obviously never pass a NULL struct kvm_irq_ack_notifier* to
    these functions. They are always embedded in the assigned device
    structure, so the assertions add nothing.

    The irqchip_in_kernel() assertion is very out of place - clearly
    this little abstraction needs to know nothing about the upper
    layer details.

    Signed-off-by: Mark McLoughlin
    Signed-off-by: Avi Kivity

    Mark McLoughlin
     
  • Impact: make global function static

    virt/kvm/kvm_main.c:85:6: warning: symbol 'kvm_rebooting' was not declared. Should it be static?

    Signed-off-by: Hannes Eder
    Signed-off-by: Avi Kivity

    Hannes Eder
     
  • Add marker_synchronize_unregister() before module unloading.
    This prevents possible trace calls into unloaded module text.

    Signed-off-by: Wu Fengguang
    Signed-off-by: Avi Kivity

    Wu Fengguang
     
  • Now we use MSI as the default, and translate MSI to INTx when the guest
    needs INTx rather than MSI. For legacy devices, we provide support for a
    non-shared host IRQ.

    Provide a parameter, msi2intx, for this method. The value is true by
    default on the x86 architecture.

    We can't guarantee that this mode works with every device, but it works
    for most of the ones we tested. If your device runs into trouble with this
    mode, you can try setting the msi2intx module parameter to 0. If the
    device is OK with msi2intx=0, then please report it to the KVM mailing
    list or to me. We may prepare a blacklist of devices that can't work in
    this mode.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
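
    A sketch of the parameter declaration and how a user would override it;
    the exact declaration and description string in the module may differ.

        static bool msi2intx = true;
        module_param(msi2intx, bool, S_IRUGO);
        MODULE_PARM_DESC(msi2intx,
                         "Use MSI on the host and translate to INTx for the guest");

    Loading the kvm module with, say, msi2intx=0 would fall back to the old
    behaviour for a problematic device.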
     
  • We enable guest MSI and host MSI support in this patch. Userspace that
    wants to enable MSI should set KVM_DEV_IRQ_ASSIGN_ENABLE_MSI in the
    assigned_irq's flags. The function returns -ENOTTY if it can't enable MSI,
    and userspace shouldn't set the MSI enable bit when KVM_ASSIGN_IRQ returns
    -ENOTTY for KVM_DEV_IRQ_ASSIGN_ENABLE_MSI.

    Userspace can tell whether MSI device assignment is supported from
    #ifdef KVM_CAP_DEVICE_MSI.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
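
    A userspace-side sketch of the negotiation described above; the struct,
    ioctl and flag names follow the uapi named in the entry, while the
    surrounding function is illustrative.

        #include <errno.h>
        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        static int assign_irq_sketch(int vm_fd, struct kvm_assigned_irq *irq)
        {
        #ifdef KVM_CAP_DEVICE_MSI
            irq->flags |= KVM_DEV_IRQ_ASSIGN_ENABLE_MSI;
            if (ioctl(vm_fd, KVM_ASSIGN_IRQ, irq) == 0)
                return 0;
            if (errno != ENOTTY)
                return -1;
            /* MSI could not be enabled for this device: retry without it. */
            irq->flags &= ~KVM_DEV_IRQ_ASSIGN_ENABLE_MSI;
        #endif
            return ioctl(vm_fd, KVM_ASSIGN_IRQ, irq);
        }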
     
  • The function is used to dispatch MSI to lapic according to MSI message
    address and message data.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
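
    A sketch of the fields the dispatcher has to pull out of the MSI message
    before it can pick a destination lapic; the bit layout is the standard x86
    MSI format, the function itself is illustrative.

        static void dispatch_msi_sketch(u32 address_lo, u32 data)
        {
            u8 dest_id   = (address_lo >> 12) & 0xff; /* APIC destination ID       */
            u8 dest_mode = (address_lo >> 2)  & 0x1;  /* 0 = physical, 1 = logical */
            u8 vector    = data & 0xff;               /* vector to inject          */
            u8 delivery  = (data >> 8)  & 0x7;        /* fixed, lowest-prio, ...   */
            u8 trig_mode = (data >> 15) & 0x1;        /* 0 = edge, 1 = level       */

            /* ... find the lapic(s) matching dest_id/dest_mode and inject
             *     vector with the given delivery and trigger mode ...     */
        }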
     
  • It would be used for MSI in device assignment, for MSI dispatch.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
     
  • Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
     
  • Separate the guest irq type from the host irq type, so that we can support
    the guest using INTx with the host using MSI (but not the opposite
    combination).

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
     
  • Separate the INTx enabling code into an independent function, so that we
    can add the MSI enabling code easily.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
     
  • Separate the common device assignment code from the INTx-specific part,
    preparing for later refactoring.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
     
  • Commit 7fd49de9773fdcb7b75e823b21c1c5dc1e218c14 "KVM: ensure that memslot
    userspace addresses are page-aligned" broke kernel-space-allocated memory
    slots, for which userspace_addr is invalid.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
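
    A sketch combining the original alignment check (see the entry below) with
    the guard this fix adds, so that kernel-allocated slots, whose
    userspace_addr is meaningless, are not rejected. The helper name is
    illustrative of set-memory-region-time validation.

        static int check_memslot_addr_sketch(struct kvm_userspace_memory_region *mem,
                                             int user_alloc)
        {
            /* Only a userspace-provided address has to be page-aligned. */
            if (user_alloc && (mem->userspace_addr & (PAGE_SIZE - 1)))
                return -EINVAL;
            return 0;
        }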
     
  • Bad page translation and silent guest failure ensue if the userspace address is
    not page-aligned. I hit this problem using large (host) pages with qemu,
    because qemu currently has a hardcoded 4096-byte alignment for guest memory
    allocations.

    Signed-off-by: Hollis Blanchard
    Signed-off-by: Avi Kivity

    Hollis Blanchard
     
  • Some areas of the kvm x86 mmu are using a gfn offset inside a slot without
    unaliasing the gfn first. This patch makes sure that the gfn is unaliased,
    and adds gfn_to_memslot_unaliased() to avoid recalculating the unaliasing
    when we already have an unaliased gfn.

    Signed-off-by: Izik Eidus
    Acked-by: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Izik Eidus
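
    Roughly how the split ends up looking: the existing lookup becomes a thin
    wrapper that unaliases first, and callers that already hold an unaliased
    gfn use the new helper directly. Illustrative sketch only.

        struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
        {
            gfn = unalias_gfn(kvm, gfn);             /* resolve any alias once */
            return gfn_to_memslot_unaliased(kvm, gfn);
        }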
     
  • Ideally, every assigned device should be in a clean state before and after
    assignment, so that the device's former state won't affect later work.
    Some devices provide a mechanism named Function Level Reset (FLR), which
    is defined in the PCI/PCIe specification. We should execute it before and
    after device assignment.

    (Sadly, the feature is new, and most devices on the market today don't
    support it. We are considering using a D0/D3hot transition to emulate it
    later, but that is not as elegant and reliable as FLR itself.)

    [Update: Reminded by Xiantao, execute FLR after we ensure that the device
    can be assigned to the guest.]

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
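
    A sketch of where the reset fits into assignment; pci_reset_function() is
    the kernel helper for resetting a single function, and the surrounding
    code is illustrative.

        static int reset_assigned_device_sketch(struct pci_dev *pdev)
        {
            /* Called once the device has been accepted for assignment,
             * and again when it is handed back to the host. */
            return pci_reset_function(pdev);
        }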
     
  • Also remove an unnecessary parameter from the unregister irq ack notifier.

    Signed-off-by: Sheng Yang
    Signed-off-by: Avi Kivity

    Sheng Yang
     
  • Kick the NMI receiving VCPU in case the triggering caller runs in a
    different context.

    Signed-off-by: Jan Kiszka
    Signed-off-by: Avi Kivity

    Jan Kiszka