From: Maxim Levitsky <mlevitsk@redhat.com>
To: Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] KVM: x86: Query vcpu->vcpu_idx directly and drop its accessor
Date: Sun, 12 Sep 2021 13:49:28 +0300
Message-ID: <30f1a856342bc0a45f92558923f9dc22ba453a8b.camel@redhat.com>
In-Reply-To: <20210910183220.2397812-2-seanjc@google.com>
On Fri, 2021-09-10 at 11:32 -0700, Sean Christopherson wrote:
> Read vcpu->vcpu_idx directly instead of bouncing through the one-line
> wrapper, kvm_vcpu_get_idx(), and drop the wrapper. The wrapper is a
> remnant of the original implementation and serves no purpose; remove it
> before it gains more users.
>
> Back when kvm_vcpu_get_idx() was added by commit 497d72d80a78 ("KVM: Add
> kvm_vcpu_get_idx to get vcpu index in kvm->vcpus"), the implementation
> was more than just a simple wrapper as vcpu->vcpu_idx did not exist and
> retrieving the index meant walking over the vCPU array to find the given
> vCPU.
>
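(For context, if I recall correctly the original helper added by 497d72d80a78 really did have to search the vCPU array, something along these lines -- quoting from memory rather than the actual commit:

static inline int kvm_vcpu_get_idx(struct kvm_vcpu *vcpu)
{
	struct kvm_vcpu *tmp;
	int idx;

	/* No vcpu_idx field existed yet: linearly scan kvm->vcpus. */
	kvm_for_each_vcpu(idx, tmp, vcpu->kvm)
		if (tmp == vcpu)
			return idx;
	BUG();
}

so back then the wrapper genuinely earned its keep.)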
> When vcpu_idx was introduced by commit 8750e72a79dd ("KVM: remember
> position in kvm->vcpus array"), the helper was left behind, likely to
> avoid extra thrash (but even then there were only two users, the original
> arm usage having been removed at some point in the past).
>
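(And for completeness: since 8750e72a79dd the index is simply recorded at creation time, roughly like the below in kvm_vm_ioctl_create_vcpu() -- again paraphrasing from memory, not the exact commit:

	/* The kvm->vcpus slot about to be filled is the vCPU's index. */
	vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
	kvm->vcpus[vcpu->vcpu_idx] = vcpu;

so reading vcpu->vcpu_idx directly is all that's needed today.)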
> No functional change intended.
>
> Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/hyperv.c    | 7 +++----
>  arch/x86/kvm/hyperv.h    | 2 +-
>  include/linux/kvm_host.h | 5 -----
>  3 files changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index fe4a02715266..04dbc001f4fc 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -939,7 +939,7 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
>  	for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++)
>  		stimer_init(&hv_vcpu->stimer[i], i);
> 
> -	hv_vcpu->vp_index = kvm_vcpu_get_idx(vcpu);
> +	hv_vcpu->vp_index = vcpu->vcpu_idx;
> 
>  	return 0;
>  }
> @@ -1444,7 +1444,6 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
>  	switch (msr) {
>  	case HV_X64_MSR_VP_INDEX: {
>  		struct kvm_hv *hv = to_kvm_hv(vcpu->kvm);
> -		int vcpu_idx = kvm_vcpu_get_idx(vcpu);
>  		u32 new_vp_index = (u32)data;
> 
>  		if (!host || new_vp_index >= KVM_MAX_VCPUS)
> @@ -1459,9 +1458,9 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
>  		 * VP index is changing, adjust num_mismatched_vp_indexes if
>  		 * it now matches or no longer matches vcpu_idx.
>  		 */
> -		if (hv_vcpu->vp_index == vcpu_idx)
> +		if (hv_vcpu->vp_index == vcpu->vcpu_idx)
>  			atomic_inc(&hv->num_mismatched_vp_indexes);
> -		else if (new_vp_index == vcpu_idx)
> +		else if (new_vp_index == vcpu->vcpu_idx)
>  			atomic_dec(&hv->num_mismatched_vp_indexes);
> 
>  		hv_vcpu->vp_index = new_vp_index;
> diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
> index 730da8537d05..ed1c4e546d04 100644
> --- a/arch/x86/kvm/hyperv.h
> +++ b/arch/x86/kvm/hyperv.h
> @@ -83,7 +83,7 @@ static inline u32 kvm_hv_get_vpindex(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> 
> -	return hv_vcpu ? hv_vcpu->vp_index : kvm_vcpu_get_idx(vcpu);
> +	return hv_vcpu ? hv_vcpu->vp_index : vcpu->vcpu_idx;
>  }
> 
>  int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host);
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index e4d712e9f760..31071ad821e2 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -721,11 +721,6 @@ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
>  	return NULL;
>  }
> 
> -static inline int kvm_vcpu_get_idx(struct kvm_vcpu *vcpu)
> -{
> -	return vcpu->vcpu_idx;
> -}
> -
>  #define kvm_for_each_memslot(memslot, slots) \
>  	for (memslot = &slots->memslots[0]; \
>  	     memslot < slots->memslots + slots->used_slots; memslot++) \
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Best regards,
Maxim Levitsky
Thread overview: 10+ messages
2021-09-10 18:32 [PATCH 0/2] KVM: x86: vcpu_idx related cleanups Sean Christopherson
2021-09-10 18:32 ` [PATCH 1/2] KVM: x86: Query vcpu->vcpu_idx directly and drop its accessor Sean Christopherson
2021-09-12 10:49 ` Maxim Levitsky [this message]
2021-09-13 7:02 ` Vitaly Kuznetsov
2021-09-10 18:32 ` [PATCH 2/2] KVM: x86: Identify vCPU0 by its vcpu_idx instead of walking vCPUs array Sean Christopherson
2021-09-10 21:46 ` Jim Mattson
2021-09-12 10:52 ` Maxim Levitsky
2021-09-13 7:07 ` Vitaly Kuznetsov
2021-09-20 14:57 ` Sean Christopherson
2021-09-22 7:41 ` Paolo Bonzini