Commit a355aa54f1d25dff83c0feef8863d83a76988fdb
Committed by: Avi Kivity
1 parent: 342d3db763
Exists in: master and in 20 other branches
KVM: Add barriers to allow mmu_notifier_retry to be used locklessly
This adds an smp_wmb in kvm_mmu_notifier_invalidate_range_end() and an
smp_rmb in mmu_notifier_retry() so that mmu_notifier_retry() will give
the correct answer when called without kvm->mmu_lock being held.
PowerPC Book3S HV KVM wants to use a bitlock per guest page rather than
a single global spinlock in order to improve the scalability of updates
to the guest MMU hashed page table, and so needs this.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Showing 2 changed files with 12 additions and 8 deletions
include/linux/kvm_host.h
@@ -702,12 +702,16 @@
 	if (unlikely(vcpu->kvm->mmu_notifier_count))
 		return 1;
 	/*
-	 * Both reads happen under the mmu_lock and both values are
-	 * modified under mmu_lock, so there's no need of smb_rmb()
-	 * here in between, otherwise mmu_notifier_count should be
-	 * read before mmu_notifier_seq, see
-	 * mmu_notifier_invalidate_range_end write side.
+	 * Ensure the read of mmu_notifier_count happens before the read
+	 * of mmu_notifier_seq.  This interacts with the smp_wmb() in
+	 * mmu_notifier_invalidate_range_end to make sure that the caller
+	 * either sees the old (non-zero) value of mmu_notifier_count or
+	 * the new (incremented) value of mmu_notifier_seq.
+	 * PowerPC Book3s HV KVM calls this under a per-page lock
+	 * rather than under kvm->mmu_lock, for scalability, so
+	 * can't rely on kvm->mmu_lock to keep things ordered.
 	 */
+	smp_rmb();
 	if (vcpu->kvm->mmu_notifier_seq != mmu_seq)
 		return 1;
 	return 0;
virt/kvm/kvm_main.c
@@ -357,11 +357,11 @@
 	 * been freed.
 	 */
 	kvm->mmu_notifier_seq++;
+	smp_wmb();
 	/*
 	 * The above sequence increase must be visible before the
-	 * below count decrease but both values are read by the kvm
-	 * page fault under mmu_lock spinlock so we don't need to add
-	 * a smb_wmb() here in between the two.
+	 * below count decrease, which is ensured by the smp_wmb above
+	 * in conjunction with the smp_rmb in mmu_notifier_retry().
 	 */
 	kvm->mmu_notifier_count--;
 	spin_unlock(&kvm->mmu_lock);