LKML Archive on lore.kernel.org
From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: kvm@vger.kernel.org, Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>, Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Nitesh Narayan Lal <nitesh@redhat.com>, Lai Jiangshan <jiangshanlai@gmail.com>,
	Maxim Levitsky <mlevitsk@redhat.com>, Eduardo Habkost <ehabkost@redhat.com>,
	linux-kernel@vger.kernel.org
Subject: [PATCH v4 4/8] KVM: Optimize kvm_make_vcpus_request_mask() a bit
Date: Fri, 27 Aug 2021 11:25:12 +0200
Message-ID: <20210827092516.1027264-5-vkuznets@redhat.com>
In-Reply-To: <20210827092516.1027264-1-vkuznets@redhat.com>

Iterating over set bits in 'vcpu_bitmap' should be faster than going
through all vCPUs, especially when just a few bits are set.

Drop the kvm_make_vcpus_request_mask() call from
kvm_make_all_cpus_request_except() to avoid handling the special case
when 'vcpu_bitmap' is NULL; move the code to
kvm_make_all_cpus_request_except() itself.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 virt/kvm/kvm_main.c | 88 +++++++++++++++++++++++++++------------------
 1 file changed, 53 insertions(+), 35 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2082aceffbf6..e32ba210025f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -261,50 +261,57 @@ static inline bool kvm_kick_many_cpus(cpumask_var_t tmp, bool wait)
 	return true;
 }
 
+static void kvm_make_vcpu_request(struct kvm *kvm, struct kvm_vcpu *vcpu,
+				  unsigned int req, cpumask_var_t tmp,
+				  int current_cpu)
+{
+	int cpu = vcpu->cpu;
+
+	kvm_make_request(req, vcpu);
+
+	if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
+		return;
+
+	/*
+	 * tmp can be "unavailable" if cpumasks are allocated off stack as
+	 * allocation of the mask is deliberately not fatal and is handled by
+	 * falling back to kicking all online CPUs.
+	 */
+	if (!cpumask_available(tmp))
+		return;
+
+	/*
+	 * Note, the vCPU could get migrated to a different pCPU at any point
+	 * after kvm_request_needs_ipi(), which could result in sending an IPI
+	 * to the previous pCPU. But, that's OK because the purpose of the IPI
+	 * is to ensure the vCPU returns to OUTSIDE_GUEST_MODE, which is
+	 * satisfied if the vCPU migrates. Entering READING_SHADOW_PAGE_TABLES
+	 * after this point is also OK, as the requirement is only that KVM wait
+	 * for vCPUs that were reading SPTEs _before_ any changes were
+	 * finalized. See kvm_vcpu_kick() for more details on handling requests.
+	 */
+	if (kvm_request_needs_ipi(vcpu, req)) {
+		cpu = READ_ONCE(vcpu->cpu);
+		if (cpu != -1 && cpu != current_cpu)
+			__cpumask_set_cpu(cpu, tmp);
+	}
+}
+
 bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
 				 struct kvm_vcpu *except,
 				 unsigned long *vcpu_bitmap, cpumask_var_t tmp)
 {
-	int i, cpu, me;
 	struct kvm_vcpu *vcpu;
+	int i, me;
 	bool called;
 
 	me = get_cpu();
 
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if ((vcpu_bitmap && !test_bit(i, vcpu_bitmap)) ||
-		    vcpu == except)
-			continue;
-
-		kvm_make_request(req, vcpu);
-
-		if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
+	for_each_set_bit(i, vcpu_bitmap, KVM_MAX_VCPUS) {
+		vcpu = kvm_get_vcpu(kvm, i);
+		if (!vcpu || vcpu == except)
 			continue;
-
-		/*
-		 * tmp can be "unavailable" if cpumasks are allocated off stack
-		 * as allocation of the mask is deliberately not fatal and is
-		 * handled by falling back to kicking all online CPUs.
-		 */
-		if (!cpumask_available(tmp))
-			continue;
-
-		/*
-		 * Note, the vCPU could get migrated to a different pCPU at any
-		 * point after kvm_request_needs_ipi(), which could result in
-		 * sending an IPI to the previous pCPU. But, that's ok because
-		 * the purpose of the IPI is to ensure the vCPU returns to
-		 * OUTSIDE_GUEST_MODE, which is satisfied if the vCPU migrates.
-		 * Entering READING_SHADOW_PAGE_TABLES after this point is also
-		 * ok, as the requirement is only that KVM wait for vCPUs that
-		 * were reading SPTEs _before_ any changes were finalized. See
-		 * kvm_vcpu_kick() for more details on handling requests.
-		 */
-		if (kvm_request_needs_ipi(vcpu, req)) {
-			cpu = READ_ONCE(vcpu->cpu);
-			if (cpu != -1 && cpu != me)
-				__cpumask_set_cpu(cpu, tmp);
-		}
+		kvm_make_vcpu_request(kvm, vcpu, req, tmp, me);
 	}
 
 	called = kvm_kick_many_cpus(tmp, !!(req & KVM_REQUEST_WAIT));
@@ -316,12 +323,23 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
 bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req,
 				      struct kvm_vcpu *except)
 {
+	struct kvm_vcpu *vcpu;
 	cpumask_var_t cpus;
 	bool called;
+	int i, me;
 
 	zalloc_cpumask_var(&cpus, GFP_ATOMIC);
 
-	called = kvm_make_vcpus_request_mask(kvm, req, except, NULL, cpus);
+	me = get_cpu();
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (vcpu == except)
+			continue;
+		kvm_make_vcpu_request(kvm, vcpu, req, cpus, me);
+	}
+
+	called = kvm_kick_many_cpus(cpus, !!(req & KVM_REQUEST_WAIT));
+	put_cpu();
 
 	free_cpumask_var(cpus);
 	return called;
-- 
2.31.1
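To illustrate the commit message's point, here is a minimal userspace sketch (an editor's illustration, not part of the patch): it contrasts scanning every possible vCPU index against walking only the set bits of the request bitmap, which is what the kernel's for_each_set_bit() does in the hunk above. MAX_VCPUS, the example bitmap, and the GCC/Clang builtin __builtin_ctzll() standing in for the kernel helper are all assumptions made so the example compiles on its own.

/*
 * Editor's sketch, not KVM code: compare a full scan of all possible
 * vCPU indices with a walk over only the set bits of the bitmap.
 */
#include <stdio.h>

#define MAX_VCPUS 64		/* stand-in for KVM_MAX_VCPUS */

int main(void)
{
	/* Hypothetical mask: only vCPUs 3 and 42 have a pending request. */
	unsigned long long vcpu_bitmap = (1ULL << 3) | (1ULL << 42);
	int i;

	/* Old pattern: test every index, even though most bits are clear. */
	for (i = 0; i < MAX_VCPUS; i++)
		if (vcpu_bitmap & (1ULL << i))
			printf("full scan:    kick vCPU %d\n", i);

	/* New pattern: jump straight from one set bit to the next. */
	for (unsigned long long m = vcpu_bitmap; m; m &= m - 1)
		printf("set-bit walk: kick vCPU %d\n", __builtin_ctzll(m));

	return 0;
}

With only two bits set, the second loop visits two indices instead of 64, and the gap grows as the maximum vCPU count grows while the number of targeted vCPUs stays small.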
Thread overview (16+ messages):

2021-08-27  9:25 [PATCH v4 0/8] KVM: Various fixes and improvements around kicking vCPUs - Vitaly Kuznetsov
2021-08-27  9:25 ` [PATCH v4 1/8] KVM: Clean up benign vcpu->cpu data races when kicking vCPUs - Vitaly Kuznetsov
2021-08-27  9:25 ` [PATCH v4 2/8] KVM: KVM: Use cpumask_available() to check for NULL cpumask when kicking vCPUs - Vitaly Kuznetsov
2021-08-27  9:25 ` [PATCH v4 3/8] KVM: x86: hyper-v: Avoid calling kvm_make_vcpus_request_mask() with vcpu_mask==NULL - Vitaly Kuznetsov
2021-09-02 20:57   ` Sean Christopherson
2021-08-27  9:25 ` [PATCH v4 4/8] KVM: Optimize kvm_make_vcpus_request_mask() a bit - Vitaly Kuznetsov [this message]
2021-09-02 21:00   ` Sean Christopherson
2021-08-27  9:25 ` [PATCH v4 5/8] KVM: Drop 'except' parameter from kvm_make_vcpus_request_mask() - Vitaly Kuznetsov
2021-09-02 21:00   ` Sean Christopherson
2021-08-27  9:25 ` [PATCH v4 6/8] KVM: x86: Fix stack-out-of-bounds memory access from ioapic_write_indirect() - Vitaly Kuznetsov
2021-08-27  9:25 ` [PATCH v4 7/8] KVM: Pre-allocate cpumasks for kvm_make_all_cpus_request_except() - Vitaly Kuznetsov
2021-09-02 21:08   ` Sean Christopherson
2021-09-03  7:20     ` Vitaly Kuznetsov
2021-09-03 14:54       ` Sean Christopherson
2021-08-27  9:25 ` [PATCH v4 8/8] KVM: Make kvm_make_vcpus_request_mask() use pre-allocated cpu_kick_mask - Vitaly Kuznetsov
2021-09-02 21:19   ` Sean Christopherson