25 Oct, 2010

1 commit

  • * 'kvm-updates/2.6.37' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (321 commits)
    KVM: Drop CONFIG_DMAR dependency around kvm_iommu_map_pages
    KVM: Fix signature of kvm_iommu_map_pages stub
    KVM: MCE: Send SRAR SIGBUS directly
    KVM: MCE: Add MCG_SER_P into KVM_MCE_CAP_SUPPORTED
    KVM: fix typo in copyright notice
    KVM: Disable interrupts around get_kernel_ns()
    KVM: MMU: Avoid sign extension in mmu_alloc_direct_roots() pae root address
    KVM: MMU: move access code parsing to FNAME(walk_addr) function
    KVM: MMU: audit: check whether have unsync sps after root sync
    KVM: MMU: audit: introduce audit_printk to cleanup audit code
    KVM: MMU: audit: unregister audit tracepoints before module unloaded
    KVM: MMU: audit: fix vcpu's spte walking
    KVM: MMU: set access bit for direct mapping
    KVM: MMU: cleanup for error mask set while walk guest page table
    KVM: MMU: update 'root_hpa' out of loop in PAE shadow path
    KVM: x86 emulator: Eliminate compilation warning in x86_decode_insn()
    KVM: x86: Fix constant type in kvm_get_time_scale
    KVM: VMX: Add AX to list of registers clobbered by guest switch
    KVM guest: Move a printk that's using the clock before it's ready
    KVM: x86: TSC catchup mode
    ...

    Linus Torvalds
     

24 Oct, 2010

7 commits

  • We also have to call kvm_iommu_map_pages for CONFIG_AMD_IOMMU, so drop
    the dependency on the Intel IOMMU; kvm_iommu_map_pages will be a nop anyway
    if CONFIG_IOMMU_API is not defined (the stub arrangement is sketched below).

    KVM-Stable-Tag.
    Signed-off-by: Jan Kiszka
    Signed-off-by: Marcelo Tosatti

    Jan Kiszka
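
    A minimal sketch of the stub arrangement this relies on (illustrative; the
    exact prototypes in include/linux/kvm_host.h may differ). Without
    CONFIG_IOMMU_API the call compiles away to a no-op, so no Intel- or
    AMD-specific dependency is needed:

        #ifdef CONFIG_IOMMU_API
        int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot);
        #else
        static inline int kvm_iommu_map_pages(struct kvm *kvm,
                                              struct kvm_memory_slot *slot)
        {
                return 0;       /* no IOMMU API: mapping is a no-op */
        }
        #endif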
     
  • Fix typo in copyright notice.

    Signed-off-by: Nicolas Kaiser
    Signed-off-by: Marcelo Tosatti

    Nicolas Kaiser
     
  • It doesn't really matter, but if we spin, we should spin in a more relaxed
    manner. This way, if something goes wrong at least it won't contribute to
    global warming.

    Signed-off-by: Avi Kivity
    Signed-off-by: Marcelo Tosatti

    Avi Kivity
     
  • There is a bug in this function: we call gfn_to_pfn() and kvm_mmu_gva_to_gpa_read() in
    atomic context (kvm_mmu_audit() is called under the protection of the mmu_lock spinlock).

    This patch fixes it by:
    - introducing gfn_to_pfn_atomic() instead of gfn_to_pfn()
    - getting the mapped gfn from kvm_mmu_page_get_gfn()

    It also adds a 'notrap' pte check for unsync/direct sps. (The atomic-context
    constraint is sketched below.)

    Signed-off-by: Xiao Guangrong
    Signed-off-by: Avi Kivity

    Xiao Guangrong
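
    A minimal sketch of the atomic-context constraint (illustrative, not the
    actual audit code): gfn_to_pfn() may fault pages in and therefore sleep,
    which is illegal under a spinlock, while gfn_to_pfn_atomic() must never
    sleep:

        spin_lock(&kvm->mmu_lock);
        pfn = gfn_to_pfn_atomic(kvm, gfn);      /* safe: never sleeps */
        /* ... audit the spte against pfn ... */
        spin_unlock(&kvm->mmu_lock);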
     
  • Introduce this function to get the pages of consecutive gfns; it reduces
    get_user_pages() overhead and is used by a later patch.

    Signed-off-by: Xiao Guangrong
    Signed-off-by: Marcelo Tosatti

    Xiao Guangrong
     
  • Introduce hva_to_pfn_atomic(); it is the fast path and can be used in atomic
    context. A later patch will use it. (The fast-path idea is sketched below.)

    Signed-off-by: Xiao Guangrong
    Signed-off-by: Marcelo Tosatti

    Xiao Guangrong
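
    A sketch of the fast-path idea behind these two helpers, assuming they
    build on __get_user_pages_fast(), which walks page tables locklessly and
    never sleeps; the fallback sentinel here is hypothetical:

        struct page *page;

        if (__get_user_pages_fast(addr, 1, 1, &page) == 1)
                return page_to_pfn(page);       /* usable in atomic context */
        return fault_pfn;                       /* hypothetical sentinel: retry outside atomic context */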
     
  • If there are active VCPUs which are marked as belonging to
    a particular hardware CPU, request a clock sync for them when
    enabling hardware; the TSC could be desynchronized on a newly
    arriving CPU, and we need to recompute guests' system time
    relative to boot after a suspend event.

    This covers both cases.

    Note that it is acceptable to take the spinlock, as either
    no other tasks will be running and no locks held (BSP after
    resume), or other tasks will be guaranteed to drop the lock
    relatively quickly (AP on CPU_STARTING).

    Given that we now get clock synchronization requests for VCPUs
    which are starting up (or restarting), it is tempting to
    remove the arch/x86/kvm/x86.c CPU hot-notifiers at this
    time; however, it is not correct to do so. They are
    required for systems with a non-constant TSC, as the frequency
    may not be known immediately after the processor has started,
    until the cpufreq driver has had a chance to run and query
    the chipset.

    Updated: implement better locking semantics for hardware_enable

    Removed the hack of dropping and retaking the lock by adding the
    semantic that we always hold kvm_lock when hardware_enable is
    called. The one place that doesn't need to worry about it is
    resume: when resuming a frozen CPU, the spinlock won't be taken.
    (A rough sketch of the clock-sync request follows below.)

    Signed-off-by: Zachary Amsden
    Signed-off-by: Marcelo Tosatti

    Zachary Amsden
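
    A rough sketch of the request side described above (names approximate;
    KVM_REQ_CLOCK_UPDATE and the exact iteration are assumptions here):

        kvm_for_each_vcpu(i, vcpu, kvm) {
                if (vcpu->cpu == smp_processor_id())    /* VCPU bound to this CPU */
                        kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
        }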
     

23 Oct, 2010

1 commit

  • * 'llseek' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/bkl:
    vfs: make no_llseek the default
    vfs: don't use BKL in default_llseek
    llseek: automatically add .llseek fop
    libfs: use generic_file_llseek for simple_attr
    mac80211: disallow seeks in minstrel debug code
    lirc: make chardev nonseekable
    viotape: use noop_llseek
    raw: use explicit llseek file operations
    ibmasmfs: use generic_file_llseek
    spufs: use llseek in all file operations
    arm/omap: use generic_file_llseek in iommu_debug
    lkdtm: use generic_file_llseek in debugfs
    net/wireless: use generic_file_llseek in debugfs
    drm: use noop_llseek

    Linus Torvalds
     

15 Oct, 2010

1 commit

  • All file_operations should get a .llseek operation so we can make
    nonseekable_open the default for future file operations without a
    .llseek pointer.

    The three cases that we can automatically detect are no_llseek, seq_lseek
    and default_llseek. For cases where we can automatically prove that
    the file offset is always ignored, we use noop_llseek, which maintains
    the current behavior of not returning an error from a seek.

    New drivers should normally not use noop_llseek but instead use no_llseek
    and call nonseekable_open at open time (illustrated below). Existing drivers
    can be converted to do the same when the maintainer knows for certain that
    no user code relies on calling seek on the device file.
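
    As an illustration of that recommended pattern (hypothetical driver; the
    mydrv_* names are invented here):

        static int mydrv_open(struct inode *inode, struct file *file)
        {
                return nonseekable_open(inode, file);   /* mark the file non-seekable */
        }

        static const struct file_operations mydrv_fops = {
                .owner  = THIS_MODULE,
                .open   = mydrv_open,
                .read   = mydrv_read,                   /* hypothetical read handler */
                .llseek = no_llseek,                    /* lseek() fails with -ESPIPE */
        };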

    The generated code is often incorrectly indented and right now contains
    comments that clarify for each added line why a specific variant was
    chosen. In the version that gets submitted upstream, the comments will
    be gone and I will manually fix the indentation, because there does not
    seem to be a way to do that using coccinelle.

    Some amount of new code is currently sitting in linux-next that should get
    the same modifications, which I will do at the end of the merge window.

    Many thanks to Julia Lawall for helping me learn to write a semantic
    patch that does all this.

    ===== begin semantic patch =====
    // This adds an llseek= method to all file operations,
    // as a preparation for making no_llseek the default.
    //
    // The rules are
    // - use no_llseek explicitly if we do nonseekable_open
    // - use seq_lseek for sequential files
    // - use default_llseek if we know we access f_pos
    // - use noop_llseek if we know we don't access f_pos,
    // but we still want to allow users to call lseek
    //
    @ open1 exists @
    identifier nested_open;
    @@
    nested_open(...)
    {
    <+...
    nonseekable_open(...)
    ...+>
    }

    @ open exists @
    identifier open_f;
    identifier i, f;
    identifier open1.nested_open;
    @@
    int open_f(struct inode *i, struct file *f)
    {
    <+...
    (
    nonseekable_open(...)
    |
    nested_open(...)
    )
    ...+>
    }

    @ read disable optional_qualifier exists @
    identifier read_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    expression E;
    identifier func;
    @@
    ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
    {
    <+...
    (
       *off = E
    |
       *off += E
    |
       func(..., off, ...)
    |
       E = *off
    )
    ...+>
    }

    @ read_no_fpos disable optional_qualifier exists @
    identifier read_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    @@
    ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
    {
    ... when != off
    }

    @ write @
    identifier write_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    expression E;
    identifier func;
    @@
    ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
    {
    <+...
    (
       *off = E
    |
       *off += E
    |
       func(..., off, ...)
    |
       E = *off
    )
    ...+>
    }

    @ write_no_fpos @
    identifier write_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    @@
    ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
    {
    ... when != off
    }

    @ fops0 @
    identifier fops;
    @@
    struct file_operations fops = {
    ...
    };

    @ has_llseek depends on fops0 @
    identifier fops0.fops;
    identifier llseek_f;
    @@
    struct file_operations fops = {
    ...
    .llseek = llseek_f,
    ...
    };

    @ has_read depends on fops0 @
    identifier fops0.fops;
    identifier read_f;
    @@
    struct file_operations fops = {
    ...
    .read = read_f,
    ...
    };

    @ has_write depends on fops0 @
    identifier fops0.fops;
    identifier write_f;
    @@
    struct file_operations fops = {
    ...
    .write = write_f,
    ...
    };

    @ has_open depends on fops0 @
    identifier fops0.fops;
    identifier open_f;
    @@
    struct file_operations fops = {
    ...
    .open = open_f,
    ...
    };

    // use no_llseek if we call nonseekable_open
    ////////////////////////////////////////////
    @ nonseekable1 depends on !has_llseek && has_open @
    identifier fops0.fops;
    identifier nso ~= "nonseekable_open";
    @@
    struct file_operations fops = {
    ... .open = nso, ...
    +.llseek = no_llseek, /* nonseekable */
    };

    @ nonseekable2 depends on !has_llseek @
    identifier fops0.fops;
    identifier open.open_f;
    @@
    struct file_operations fops = {
    ... .open = open_f, ...
    +.llseek = no_llseek, /* open uses nonseekable */
    };

    // use seq_lseek for sequential files
    /////////////////////////////////////
    @ seq depends on !has_llseek @
    identifier fops0.fops;
    identifier sr ~= "seq_read";
    @@
    struct file_operations fops = {
    ... .read = sr, ...
    +.llseek = seq_lseek, /* we have seq_read */
    };

    // use default_llseek if there is a readdir
    ///////////////////////////////////////////
    @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier readdir_e;
    @@
    // any other fop is used that changes pos
    struct file_operations fops = {
    ... .readdir = readdir_e, ...
    +.llseek = default_llseek, /* readdir is present */
    };

    // use default_llseek if at least one of read/write touches f_pos
    /////////////////////////////////////////////////////////////////
    @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier read.read_f;
    @@
    // read fops use offset
    struct file_operations fops = {
    ... .read = read_f, ...
    +.llseek = default_llseek, /* read accesses f_pos */
    };

    @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier write.write_f;
    @@
    // write fops use offset
    struct file_operations fops = {
    ... .write = write_f, ...
    + .llseek = default_llseek, /* write accesses f_pos */
    };

    // Use noop_llseek if neither read nor write accesses f_pos
    ///////////////////////////////////////////////////////////

    @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier read_no_fpos.read_f;
    identifier write_no_fpos.write_f;
    @@
    // neither read nor write uses f_pos
    struct file_operations fops = {
    ...
    .write = write_f,
    .read = read_f,
    ...
    +.llseek = noop_llseek, /* read and write both use no f_pos */
    };

    @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier write_no_fpos.write_f;
    @@
    struct file_operations fops = {
    ... .write = write_f, ...
    +.llseek = noop_llseek, /* write uses no f_pos */
    };

    @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier read_no_fpos.read_f;
    @@
    struct file_operations fops = {
    ... .read = read_f, ...
    +.llseek = noop_llseek, /* read uses no f_pos */
    };

    @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    @@
    struct file_operations fops = {
    ...
    +.llseek = noop_llseek, /* no read or write fn */
    };
    ===== End semantic patch =====

    Signed-off-by: Arnd Bergmann
    Cc: Julia Lawall
    Cc: Christoph Hellwig

    Arnd Bergmann
     

23 Sep, 2010

2 commits

  • When we reboot, we disable vmx extensions, as otherwise INIT gets blocked.
    If a task on another cpu hits a vmx instruction, it will fault if vmx is
    disabled. We trap that to avoid a nasty oops and spin until the reboot
    completes.

    The problem is that we spin with interrupts disabled. This blocks
    smp_send_stop() from running, and the reboot process halts.

    Fix by enabling interrupts before spinning (sketched below).

    KVM-Stable-Tag.
    Signed-off-by: Avi Kivity
    Signed-off-by: Marcelo Tosatti

    Avi Kivity
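
    A minimal sketch of the ordering the fix establishes (illustrative only):

        local_irq_enable();     /* let smp_send_stop()'s IPI through */
        for (;;)
                cpu_relax();    /* spin until the reboot takes this CPU down */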
     
  • I think I see the following (theoretical) race:

    During irqfd assignment, we drop the irqfds lock before we
    schedule the inject work. Therefore, a deassign running
    on another CPU could cause shutdown and flush to run
    before inject, causing a use after free in inject.

    A simple fix is to schedule inject under the lock (sketched below).

    Signed-off-by: Michael S. Tsirkin
    Acked-by: Gregory Haskins
    Signed-off-by: Marcelo Tosatti

    Michael S. Tsirkin
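
    A sketch of the fix's shape, loosely following virt/kvm/eventfd.c (field
    names approximate):

        spin_lock_irq(&kvm->irqfds.lock);
        list_add_tail(&irqfd->list, &kvm->irqfds.items);
        if (events & POLLIN)                    /* eventfd already signalled */
                schedule_work(&irqfd->inject);  /* queued before the lock drops */
        spin_unlock_irq(&kvm->irqfds.lock);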
     

10 Sep, 2010

1 commit

  • The CPU_STARTING callback was added upstream with the intention
    of being used for KVM, specifically for the hardware enablement
    that must be done before we can run in hardware virt. It had
    bugs on the x86_64 architecture at the time, where it was called
    after CPU_ONLINE. The arches have since merged and the bug is
    gone.

    It might be noted that other features should probably start making
    use of this callback; microcode updates in particular, which
    might fix important errata, would be best applied before
    beginning to run user tasks.

    Signed-off-by: Zachary Amsden
    Signed-off-by: Marcelo Tosatti

    Zachary Amsden
     

01 Aug, 2010

15 commits

  • This patch converts unnecessary divide and modulo operations
    in the KVM large page related code into logical operations.
    This allows converting gfn_t to u64 without breaking 32-bit
    builds (see the sketch below).

    Signed-off-by: Joerg Roedel
    Signed-off-by: Marcelo Tosatti

    Joerg Roedel
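
    The underlying identity: large-page sizes are powers of two, and for
    unsigned values division and modulo by a power of two reduce to a shift
    and a mask, which 32-bit builds can apply to a 64-bit gfn without libgcc
    division helpers (macro names approximate):

        idx = gfn / KVM_PAGES_PER_HPAGE(level);         /* 64-bit division on 32-bit hosts */
        off = gfn % KVM_PAGES_PER_HPAGE(level);

        /* becomes */

        idx = gfn >> KVM_HPAGE_GFN_SHIFT(level);        /* shift */
        off = gfn & (KVM_PAGES_PER_HPAGE(level) - 1);   /* mask */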
     
  • This patch fixes the following warning; the shape of the fix is sketched
    after this entry.

    ===================================================
    [ INFO: suspicious rcu_dereference_check() usage. ]
    ---------------------------------------------------
    include/linux/kvm_host.h:259 invoked rcu_dereference_check() without
    protection!

    other info that might help us debug this:

    rcu_scheduler_active = 1, debug_locks = 0
    no locks held by qemu-system-x86/29679.

    stack backtrace:
    Pid: 29679, comm: qemu-system-x86 Not tainted 2.6.35-rc3+ #200
    Call Trace:
    [] lockdep_rcu_dereference+0xa8/0xb1
    [] kvm_iommu_unmap_memslots+0xc9/0xde [kvm]
    [] kvm_iommu_unmap_guest+0x40/0x4e [kvm]
    [] kvm_arch_destroy_vm+0x1a/0x186 [kvm]
    [] kvm_put_kvm+0x110/0x167 [kvm]
    [] kvm_vcpu_release+0x18/0x1c [kvm]
    [] fput+0x22a/0x3a0
    [] filp_close+0xb4/0xcd
    [] put_files_struct+0x1b7/0x36b
    [] ? put_files_struct+0x48/0x36b
    [] ? do_raw_spin_unlock+0x118/0x160
    [] exit_files+0x6d/0x75
    [] do_exit+0x47d/0xc60
    [] ? _raw_spin_unlock_irq+0x30/0x36
    [] do_group_exit+0xcf/0x134
    [] get_signal_to_deliver+0x732/0x81d
    [] ? cpu_clock+0x4e/0x60
    [] do_notify_resume+0x117/0xc43
    [] ? trace_hardirqs_on+0xd/0xf
    [] ? sys_rt_sigtimedwait+0x2b5/0x3bf
    [] ? trace_hardirqs_off_thunk+0x3a/0x3c
    [] ? sysret_signal+0x5/0x3d
    [] int_signal+0x12/0x17

    Signed-off-by: Sheng Yang
    Signed-off-by: Marcelo Tosatti

    Sheng Yang
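
    The shape of the fix (illustrative): dereference kvm->memslots only inside
    an SRCU read-side critical section, so the lockdep check can see the
    protection:

        int idx = srcu_read_lock(&kvm->srcu);
        struct kvm_memslots *slots = srcu_dereference(kvm->memslots, &kvm->srcu);
        /* ... unmap the IOMMU pages of each slot ... */
        srcu_read_unlock(&kvm->srcu, idx);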
     
  • is_hwpoison_address() accesses the page table, so the caller must hold
    current->mm->mmap_sem in read mode. Fix its usage in kvm's hva_to_pfn()
    accordingly (sketched below).

    Also comment is_hwpoison_address() to remind other users.

    Reported-by: Avi Kivity
    Signed-off-by: Huang Ying
    Signed-off-by: Avi Kivity

    Huang Ying
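
    Illustratively, the corrected calling convention looks like this (the
    surrounding hva_to_pfn() logic is elided):

        down_read(&current->mm->mmap_sem);      /* is_hwpoison_address() walks page tables */
        hwpoison = is_hwpoison_address(addr);
        up_read(&current->mm->mmap_sem);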
     
  • May be used for distinguishing between internal and user slots, or for sorting
    slots in size order.

    Signed-off-by: Avi Kivity

    Avi Kivity
     
  • Makes it a little more readable and hackable.

    Signed-off-by: Avi Kivity

    Avi Kivity
     
  • As advertised in feature-removal-schedule.txt. Equivalent support is provided
    by overlapping memory regions.

    Signed-off-by: Avi Kivity

    Avi Kivity
     
  • Otherwise we might try to deliver a timer interrupt to a cpu that
    can't possibly handle it.

    Signed-off-by: Chris Lalancette
    Signed-off-by: Marcelo Tosatti

    Chris Lalancette
     
  • No real bugs in this one.

    Signed-off-by: Andi Kleen
    Signed-off-by: Avi Kivity

    Andi Kleen
     
  • When the user passes in a NULL mask, pass this on from the ioctl
    handler.

    Found by gcc 4.6's new warnings.

    Signed-off-by: Andi Kleen
    Signed-off-by: Avi Kivity

    Andi Kleen
     
  • The type of '*new.rmap' is not 'struct page *'; fix it.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Marcelo Tosatti

    Lai Jiangshan
     
  • Signed-off-by: Avi Kivity

    Avi Kivity
     
  • Now that all arch specific ioctls have centralized locking, it is easy to
    move it to the central dispatcher.

    Signed-off-by: Avi Kivity

    Avi Kivity
     
  • All vcpu ioctls need to be locked, so instead of locking each one specifically
    we lock at the generic dispatcher (sketched below).

    This patch only updates generic ioctls and leaves arch specific ioctls alone.

    Signed-off-by: Avi Kivity

    Avi Kivity
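
    A rough sketch of dispatcher-level locking (the dispatch helper is
    hypothetical; the real code switches on the ioctl number in place):

        static long kvm_vcpu_ioctl(struct file *filp, unsigned int ioctl,
                                   unsigned long arg)
        {
                struct kvm_vcpu *vcpu = filp->private_data;
                long r;

                vcpu_load(vcpu);        /* one lock point covers every vcpu ioctl */
                r = kvm_vcpu_ioctl_dispatch(vcpu, ioctl, arg);
                vcpu_put(vcpu);
                return r;
        }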
     
  • Remove this check in an effort to allow kvm guests to run without
    root privileges. This capability check doesn't seem to add any
    security since the device needs to have already been added via the
    assign device ioctl and the io actually occurs through the pci
    sysfs interface.

    Signed-off-by: Alex Williamson
    Signed-off-by: Marcelo Tosatti

    Alex Williamson
     
  • In common cases, a guest SRAO MCE will cause the corresponding poisoned page
    to be unmapped and SIGBUS to be sent to QEMU-KVM, and QEMU-KVM will then
    relay the MCE to the guest OS.

    But it is reported that if the poisoned page is accessed in the guest
    after unmapping and before the MCE is relayed to the guest OS, userspace
    will be killed.

    The reason is as follows. Because the poisoned page has been unmapped,
    guest access causes a guest exit and kvm_mmu_page_fault() is
    called. kvm_mmu_page_fault() cannot get the poisoned page for the fault
    address, so kernel and user space MMIO processing are tried in turn. In
    user MMIO processing, the poisoned page is accessed again, and userspace
    is killed by force_sig_info().

    To fix the bug, kvm_mmu_page_fault() now sends the HWPOISON signal to
    QEMU-KVM directly and does not try kernel and user space MMIO processing
    for the poisoned page (sketched below).

    [xiao: fix warning introduced by avi]

    Reported-by: Max Asbock
    Signed-off-by: Huang Ying
    Signed-off-by: Xiao Guangrong
    Signed-off-by: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Huang Ying
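
    The shape of the fix in the page-fault path (function names as of kernels
    of that era, approximate):

        if (is_hwpoison_pfn(pfn)) {
                kvm_send_hwpoison_signal(kvm, gfn);     /* SIGBUS (BUS_MCEERR_AR) to the task */
                return 0;                               /* skip kernel and user MMIO handling */
        }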
     

09 Jun, 2010

1 commit

  • This is obviously a left-over from the old interface, which took the
    size. Apparently a mostly harmless issue with the current iommu_unmap
    implementation.

    Signed-off-by: Jan Kiszka
    Acked-by: Joerg Roedel
    Signed-off-by: Avi Kivity

    Jan Kiszka
     

22 May, 2010

1 commit

  • * 'kvm-updates/2.6.35' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (269 commits)
    KVM: x86: Add missing locking to arch specific vcpu ioctls
    KVM: PPC: Add missing vcpu_load()/vcpu_put() in vcpu ioctls
    KVM: MMU: Segregate shadow pages with different cr0.wp
    KVM: x86: Check LMA bit before set_efer
    KVM: Don't allow lmsw to clear cr0.pe
    KVM: Add cpuid.txt file
    KVM: x86: Tell the guest we'll warn it about tsc stability
    x86, paravirt: don't compute pvclock adjustments if we trust the tsc
    x86: KVM guest: Try using new kvm clock msrs
    KVM: x86: export paravirtual cpuid flags in KVM_GET_SUPPORTED_CPUID
    KVM: x86: add new KVMCLOCK cpuid feature
    KVM: x86: change msr numbers for kvmclock
    x86, paravirt: Add a global synchronization point for pvclock
    x86, paravirt: Enable pvclock flags in vcpu_time_info structure
    KVM: x86: Inject #GP with the right rip on efer writes
    KVM: SVM: Don't allow nested guest to VMMCALL into host
    KVM: x86: Fix exception reinjection forced to true
    KVM: Fix wallclock version writing race
    KVM: MMU: Don't read pdptrs with mmu spinlock held in mmu_alloc_roots
    KVM: VMX: enable VMXON check with SMX enabled (Intel TXT)
    ...

    Linus Torvalds
     

18 May, 2010

1 commit

  • …/git/tip/linux-2.6-tip

    * 'core-iommu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86/amd-iommu: Add amd_iommu=off command line option
    iommu-api: Remove iommu_{un}map_range functions
    x86/amd-iommu: Implement ->{un}map callbacks for iommu-api
    x86/amd-iommu: Make amd_iommu_iova_to_phys aware of multiple page sizes
    x86/amd-iommu: Make iommu_unmap_page and fetch_pte aware of page sizes
    x86/amd-iommu: Make iommu_map_page and alloc_pte aware of page sizes
    kvm: Change kvm_iommu_map_pages to map large pages
    VT-d: Change {un}map_range functions to implement {un}map interface
    iommu-api: Add ->{un}map callbacks to iommu_ops
    iommu-api: Add iommu_map and iommu_unmap functions
    iommu-api: Rename ->{un}map function pointers to ->{un}map_range

    Linus Torvalds
     

17 May, 2010

3 commits

  • As Avi pointed out, testing the bit first in mark_page_dirty() was important
    in the days of shadow paging, but EPT and NPT have since become
    common and the chance of faulting a page more than once per iteration is
    small. So let's remove the test to avoid the extra access (see below).

    Signed-off-by: Takuya Yoshikawa
    Signed-off-by: Avi Kivity

    Takuya Yoshikawa
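
    The change amounts to dropping the read-before-write (bitmap helper names
    approximate):

        /* before: avoid dirtying the cacheline if the bit is already set */
        if (!test_bit(rel_gfn, memslot->dirty_bitmap))
                set_bit(rel_gfn, memslot->dirty_bitmap);

        /* after: under EPT/NPT the extra read rarely pays off */
        set_bit(rel_gfn, memslot->dirty_bitmap);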
     
  • On CPU_UP_CANCELED, hardware_enable() has not been called on the CPU
    which is going up, because raw_notifier_call_chain(CPU_ONLINE)
    has not been called for this cpu.

    Drop the handling for CPU_UP_CANCELED.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Avi Kivity

    Lai Jiangshan
     
  • The RCU/SRCU APIs have already changed to support proving RCU usage.

    I got the following dmesg with PROVE_RCU=y because we used the incorrect API.
    This patch converts rcu_dereference() to srcu_dereference() or a family API;
    the corrected idiom appears after this entry.

    ===================================================
    [ INFO: suspicious rcu_dereference_check() usage. ]
    ---------------------------------------------------
    arch/x86/kvm/mmu.c:3020 invoked rcu_dereference_check() without protection!

    other info that might help us debug this:

    rcu_scheduler_active = 1, debug_locks = 0
    2 locks held by qemu-system-x86/8550:
    #0: (&kvm->slots_lock){+.+.+.}, at: [] kvm_set_memory_region+0x29/0x50 [kvm]
    #1: (&(&kvm->mmu_lock)->rlock){+.+...}, at: [] kvm_arch_commit_memory_region+0xa6/0xe2 [kvm]

    stack backtrace:
    Pid: 8550, comm: qemu-system-x86 Not tainted 2.6.34-rc4-tip-01028-g939eab1 #27
    Call Trace:
    [] lockdep_rcu_dereference+0xaa/0xb3
    [] kvm_mmu_calculate_mmu_pages+0x44/0x7d [kvm]
    [] kvm_arch_commit_memory_region+0xb7/0xe2 [kvm]
    [] __kvm_set_memory_region+0x636/0x6e2 [kvm]
    [] kvm_set_memory_region+0x37/0x50 [kvm]
    [] vmx_set_tss_addr+0x46/0x5a [kvm_intel]
    [] kvm_arch_vm_ioctl+0x17a/0xcf8 [kvm]
    [] ? unlock_page+0x27/0x2c
    [] ? __do_fault+0x3a9/0x3e1
    [] kvm_vm_ioctl+0x364/0x38d [kvm]
    [] ? up_read+0x23/0x3d
    [] vfs_ioctl+0x32/0xa6
    [] do_vfs_ioctl+0x495/0x4db
    [] ? fget_light+0xc2/0x241
    [] ? do_sys_open+0x104/0x116
    [] ? retint_swapgs+0xe/0x13
    [] sys_ioctl+0x47/0x6a
    [] system_call_fastpath+0x16/0x1b

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Avi Kivity

    Lai Jiangshan
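
    The corrected idiom, per the SRCU API:

        struct kvm_memslots *slots = srcu_dereference(kvm->memslots, &kvm->srcu);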