Commit 32f6daad4651a748a58a3ab6da0611862175722f

Authored by Alex Williamson
Committed by Marcelo Tosatti
1 parent f19a0c2c2e

KVM: unmap pages from the iommu when slots are removed

We've been adding new mappings, but not destroying old mappings.
This can lead to a page leak as pages are pinned using
get_user_pages, but only unpinned with put_page if they still
exist in the memslots list on vm shutdown.  A memslot that is
destroyed while an iommu domain is enabled for the guest will
therefore result in an elevated page reference count that is
never cleared.

Additionally, without this fix, the iommu is only programmed
with the first translation for a gpa.  This can result in
peer-to-peer errors if a mapping is destroyed and replaced by a
new mapping at the same gpa as the iommu will still be pointing
to the original, pinned memory address.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Showing 3 changed files with 15 additions and 3 deletions

include/linux/kvm_host.h
@@ -596,6 +596,7 @@
 
 #ifdef CONFIG_IOMMU_API
 int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot);
+void kvm_iommu_unmap_pages(struct kvm *kvm, struct kvm_memory_slot *slot);
 int kvm_iommu_map_guest(struct kvm *kvm);
 int kvm_iommu_unmap_guest(struct kvm *kvm);
 int kvm_assign_device(struct kvm *kvm,
@@ -607,6 +608,11 @@
 				      struct kvm_memory_slot *slot)
 {
 	return 0;
+}
+
+static inline void kvm_iommu_unmap_pages(struct kvm *kvm,
+					 struct kvm_memory_slot *slot)
+{
 }
 
 static inline int kvm_iommu_map_guest(struct kvm *kvm)
virt/kvm/iommu.c
@@ -310,6 +310,11 @@
 	}
 }
 
+void kvm_iommu_unmap_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+	kvm_iommu_put_pages(kvm, slot->base_gfn, slot->npages);
+}
+
 static int kvm_iommu_unmap_memslots(struct kvm *kvm)
 {
 	int idx;
@@ -320,7 +325,7 @@
 	slots = kvm_memslots(kvm);
 
 	kvm_for_each_memslot(memslot, slots)
-		kvm_iommu_put_pages(kvm, memslot->base_gfn, memslot->npages);
+		kvm_iommu_unmap_pages(kvm, memslot);
 
 	srcu_read_unlock(&kvm->srcu, idx);
 
virt/kvm/kvm_main.c
@@ -808,12 +808,13 @@
 	if (r)
 		goto out_free;
 
-	/* map the pages in iommu page table */
+	/* map/unmap the pages in iommu page table */
 	if (npages) {
 		r = kvm_iommu_map_pages(kvm, &new);
 		if (r)
 			goto out_free;
-	}
+	} else
+		kvm_iommu_unmap_pages(kvm, &old);
 
 	r = -ENOMEM;
 	slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),