LKML Archive on lore.kernel.org
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Jim Mattson <jmattson@google.com>,
linux-kernel@vger.kernel.org, Wanpeng Li <wanpengli@tencent.com>,
Borislav Petkov <bp@alien8.de>, Joerg Roedel <joro@8bytes.org>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
"H. Peter Anvin" <hpa@zytor.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Sean Christopherson <seanjc@google.com>,
x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
Maxim Levitsky <mlevitsk@redhat.com>
Subject: [PATCH v4 04/16] KVM: x86/mmu: bump mmu notifier count in kvm_zap_gfn_range
Date: Tue, 10 Aug 2021 23:52:39 +0300 [thread overview]
Message-ID: <20210810205251.424103-5-mlevitsk@redhat.com> (raw)
In-Reply-To: <20210810205251.424103-1-mlevitsk@redhat.com>
Together with the previous patch, this ensures that kvm_zap_gfn_range()
does not race with a page fault running on another vCPU, and instead
makes that page fault code retry.
This is based on a patch suggested by Sean Christopherson:
https://lkml.org/lkml/2021/7/22/1025
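For reference, the page fault path already rechecks the notifier state
under mmu_lock before installing SPTEs, so bumping the count here is what
forces a concurrent fault to retry. Roughly the following (a simplified
sketch of the existing mmu_notifier_retry_hva() check in
include/linux/kvm_host.h, not part of this patch; details may differ
slightly in this tree):

	/*
	 * Sketch: the fault path samples mmu_notifier_seq before faulting
	 * in the pfn, then, while holding mmu_lock, bails out and retries
	 * if an invalidation is in progress for the faulting hva or has
	 * completed since the sample was taken.
	 */
	static inline int mmu_notifier_retry_hva(struct kvm *kvm,
						 unsigned long mmu_seq,
						 unsigned long hva)
	{
		lockdep_assert_held(&kvm->mmu_lock);

		/*
		 * kvm_zap_gfn_range() bumping mmu_notifier_count (and
		 * widening the tracked range) makes this check fire for
		 * faults that race with the zap.
		 */
		if (unlikely(kvm->mmu_notifier_count) &&
		    hva >= kvm->mmu_notifier_range_start &&
		    hva < kvm->mmu_notifier_range_end)
			return 1;
		if (kvm->mmu_notifier_seq != mmu_seq)
			return 1;
		return 0;
	}
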
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 4 ++++
include/linux/kvm_host.h | 5 +++++
virt/kvm/kvm_main.c | 7 +++++--
3 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d4e22a9635a9..abaf8e661c61 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5660,6 +5660,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
write_lock(&kvm->mmu_lock);
+ kvm_inc_notifier_count(kvm, gfn_start, gfn_end);
+
if (kvm_memslots_have_rmaps(kvm)) {
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
slots = __kvm_memslots(kvm, i);
@@ -5695,6 +5697,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
if (flush)
kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
+ kvm_dec_notifier_count(kvm, gfn_start, gfn_end);
+
write_unlock(&kvm->mmu_lock);
}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f50bfcf225f0..4e43843fe0d7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -991,6 +991,11 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
#endif
+void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
+ unsigned long end);
+void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
+ unsigned long end);
+
long kvm_arch_dev_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg);
long kvm_arch_vcpu_ioctl(struct file *filp,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a438a7a3774a..46f55e860b8b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -610,7 +610,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
}
-static void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
+void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
unsigned long end)
{
/*
@@ -638,6 +638,7 @@ static void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
max(kvm->mmu_notifier_range_end, end);
}
}
+EXPORT_SYMBOL_GPL(kvm_inc_notifier_count);
static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
const struct mmu_notifier_range *range)
@@ -672,7 +673,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
return 0;
}
-static void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
+void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
unsigned long end)
{
/*
@@ -689,6 +690,8 @@ static void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
*/
kvm->mmu_notifier_count--;
}
+EXPORT_SYMBOL_GPL(kvm_dec_notifier_count);
+
static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
const struct mmu_notifier_range *range)
--
2.26.3