21 Apr, 2010

1 commit

  • I got this dmesg because srcu_read_lock() is missing in
    kvm_mmu_notifier_release().

    ===================================================
    [ INFO: suspicious rcu_dereference_check() usage. ]
    ---------------------------------------------------
    arch/x86/kvm/x86.h:72 invoked rcu_dereference_check() without protection!

    other info that might help us debug this:

    rcu_scheduler_active = 1, debug_locks = 0
    2 locks held by qemu-system-x86/3100:
    #0: (rcu_read_lock){.+.+..}, at: [] __mmu_notifier_release+0x38/0xdf
    #1: (&(&kvm->mmu_lock)->rlock){+.+...}, at: [] kvm_mmu_zap_all+0x21/0x5e [kvm]

    stack backtrace:
    Pid: 3100, comm: qemu-system-x86 Not tainted 2.6.34-rc3-22949-gbc8a97a-dirty #2
    Call Trace:
    [] lockdep_rcu_dereference+0xaa/0xb3
    [] unalias_gfn+0x56/0xab [kvm]
    [] gfn_to_memslot+0x16/0x25 [kvm]
    [] gfn_to_rmap+0x17/0x6e [kvm]
    [] rmap_remove+0xa0/0x19d [kvm]
    [] kvm_mmu_zap_page+0x109/0x34d [kvm]
    [] kvm_mmu_zap_all+0x35/0x5e [kvm]
    [] kvm_arch_flush_shadow+0x16/0x22 [kvm]
    [] kvm_mmu_notifier_release+0x15/0x17 [kvm]
    [] __mmu_notifier_release+0x88/0xdf
    [] ? __mmu_notifier_release+0x38/0xdf
    [] ? exit_mm+0xe0/0x115
    [] exit_mmap+0x2c/0x17e
    [] mmput+0x2d/0xd4
    [] exit_mm+0x108/0x115
    [...]

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Avi Kivity

    Lai Jiangshan
     

20 Apr, 2010

1 commit

  • Int is not long enough to store the size of a dirty bitmap.

    This patch fixes this problem with the introduction of a wrapper
    function to calculate the sizes of dirty bitmaps.

    Note: in mark_page_dirty(), we have to consider the fact that
    __set_bit() takes the offset as int, not long.

    Signed-off-by: Takuya Yoshikawa
    Signed-off-by: Marcelo Tosatti

    Takuya Yoshikawa
     

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    The percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include those
    headers directly instead of assuming their availability. As this
    conversion needs to touch a large number of source files, the following
    script is used as the basis of the conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following:

    * Scan files for gfp and slab usages and update includes so that
    only the necessary includes are there, i.e. gfp.h if only gfp is used,
    slab.h if slab is used.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition while adding it to implementation .h or
    embedding .c file was more appropriate for others. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as a bisection point.

    Given that I had only a couple of failures from the build tests in
    step 7, I'm fairly confident about the coverage of this conversion patch.
    If there is breakage, it's likely to be something in one of the arch
    headers, which should be easily discoverable on most builds of the
    specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

01 Mar, 2010

23 commits


25 Jan, 2010

3 commits

  • KVM didn't clear the irqfd counter on deassign; as a result, we could get
    a spurious interrupt when the irqfd is assigned back. This leads to poor
    performance and, in theory, a guest crash.

    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: Avi Kivity

    Michael S. Tsirkin
     
  • Otherwise memory beyond irq_states[16] might be accessed.

    Noticed by Juan Quintela.

    Cc: stable@kernel.org
    Signed-off-by: Marcelo Tosatti
    Acked-by: Juan Quintela
    Signed-off-by: Avi Kivity

    Marcelo Tosatti
     
  • Looks like repeatedly binding same fd to multiple gsi's with irqfd can
    use up a ton of kernel memory for irqfd structures.

    A simple fix is to allow each fd to only trigger one gsi: triggering a
    storm of interrupts in guest is likely useless anyway, and we can do it
    by binding a single gsi to many interrupts if we really want to.

    Cc: stable@kernel.org
    Signed-off-by: Michael S. Tsirkin
    Acked-by: Gregory Haskins
    Signed-off-by: Avi Kivity

    Michael S. Tsirkin
     

27 Dec, 2009

2 commits


23 Dec, 2009

1 commit

  • It seems a couple of places, such as arch/ia64/kernel/perfmon.c and
    drivers/infiniband/core/uverbs_main.c, could use anon_inode_getfile()
    instead of a private pseudo-fs + alloc_file(), if only there were a way
    to get a read-only file. So provide this by having anon_inode_getfile()
    create a read-only file if we pass O_RDONLY in flags.

    Signed-off-by: Roland Dreier
    Signed-off-by: Al Viro

    Roland Dreier
     

09 Dec, 2009

1 commit


03 Dec, 2009

7 commits

  • Usually userspace will freeze the guest so we can inspect it, but some
    internal state is not available. Add extra data to internal error
    reporting so we can expose it to the debugger. Extra data is specific
    to the suberror.

    Signed-off-by: Avi Kivity

    Avi Kivity
     
  • Otherwise kvm might attempt to dereference a NULL pointer.

    Signed-off-by: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Marcelo Tosatti
     
  • With big-endian userspace, we can't quite figure out if a pointer
    is 32 bit (shifted >> 32) or 64 bit when we read a 64-bit pointer.

    This is what happens with dirty logging. To get the pointer interpreted
    correctly, we thus need Arnd's patch to implement a compat layer for
    the ioctl:

    A better way to do this is to add a separate compat_ioctl() method that
    converts this for you.

    Based on initial patch from Arnd Bergmann.

    Signed-off-by: Arnd Bergmann
    Signed-off-by: Alexander Graf
    Signed-off-by: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Arnd Bergmann
     
  • find_first_zero_bit works with bit numbers, not bytes.

    Fixes

    https://sourceforge.net/tracker/?func=detail&aid=2847560&group_id=180599&atid=893831

    Reported-by: "Xu, Jiajun"
    Cc: stable@kernel.org
    Signed-off-by: Marcelo Tosatti

    Marcelo Tosatti
     
  • Introduce kvm_vcpu_on_spin, to be used by VMX/SVM to yield processing
    once the CPU detects pause-based looping.

    Signed-off-by: "Zhai, Edwin"
    Signed-off-by: Marcelo Tosatti

    Zhai, Edwin
     
  • Stanse found 2 lock imbalances in kvm_request_irq_source_id and
    kvm_free_irq_source_id: they fail to unlock kvm->irq_lock on their fail
    paths.

    Fix that by adding unlock labels at the end of the functions and jumping
    there from the fail paths.

    Signed-off-by: Jiri Slaby
    Cc: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Jiri Slaby
     
  • X86 CPUs need to have some magic happening to enable the virtualization
    extensions on them. This magic can result in unpleasant results for
    users, like blocking other VMMs from working (vmx) or using invalid TLB
    entries (svm).

    Currently KVM activates virtualization when the respective kernel module
    is loaded. This blocks us from autoloading KVM modules without breaking
    other VMMs.

    To circumvent this problem at least a bit, this patch introduces
    on-demand activation of virtualization: instead, virtualization is
    enabled on creation of the first virtual machine and disabled on
    destruction of the last one.

    So using this, KVM can be easily autoloaded, while keeping other
    hypervisors usable.

    Signed-off-by: Alexander Graf
    Signed-off-by: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Alexander Graf