LKML Archive on lore.kernel.org
From: Wanpeng Li <kernellwp@gmail.com>
To: Nadav Amit <nadav.amit@gmail.com>
Cc: "Sean Christopherson" <sean.j.christopherson@intel.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>, kvm <kvm@vger.kernel.org>,
	"Paolo Bonzini" <pbonzini@redhat.com>
Subject: Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
Date: Fri, 28 Jun 2019 17:18:06 +0800
Message-ID: <CANRm+CyKx4uMkAe7maTg8nwBvkA7SCi2CnSwrghDs1nPuWNg5A@mail.gmail.com>
In-Reply-To: <CANRm+CwnShNXmDi7yCZNc=oWrmFO7BTQ-MHxd1f5LRV8+YMJEg@mail.gmail.com>

On Fri, 28 Jun 2019 at 17:12, Wanpeng Li <kernellwp@gmail.com> wrote:
>
> On Wed, 12 Jun 2019 at 09:37, Nadav Amit <nadav.amit@gmail.com> wrote:
> >
> > > On Jun 11, 2019, at 6:18 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> > >
> > > On Wed, 12 Jun 2019 at 00:57, Nadav Amit <nadav.amit@gmail.com> wrote:
> > >>> On Jun 11, 2019, at 3:02 AM, Wanpeng Li <kernellwp@gmail.com> wrote:
> > >>>
> > >>> On Tue, 11 Jun 2019 at 09:48, Nadav Amit <nadav.amit@gmail.com> wrote:
> > >>>>> On Jun 10, 2019, at 6:45 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> > >>>>>
> > >>>>> On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
> > >>>>> <sean.j.christopherson@intel.com> wrote:
> > >>>>>> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
> > >>>>>>> 2019-05-30 09:05+0800, Wanpeng Li:
> > >>>>>>>> The idea comes from Xen: when sending a call-function IPI-many to
> > >>>>>>>> vCPUs, yield if any of the IPI target vCPUs was preempted. A 17%
> > >>>>>>>> performance increase in the ebizzy benchmark can be observed in an
> > >>>>>>>> over-subscribed environment. (Tested with kvm-pv-tlb disabled,
> > >>>>>>>> using the TLB flush call-function IPI-many, since call-function is
> > >>>>>>>> not easy to trigger from a userspace workload.)
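> > >>>>>>>>
> > >>>>>>>> (A minimal sketch of the guest-side change, abridged from the
> > >>>>>>>> posted patches; the hypercall number and helper names follow the
> > >>>>>>>> series:)
> > >>>>>>>>
> > >>>>>>>> static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
> > >>>>>>>> {
> > >>>>>>>> 	int cpu;
> > >>>>>>>>
> > >>>>>>>> 	native_send_call_func_ipi(mask);
> > >>>>>>>>
> > >>>>>>>> 	/* Yield to one preempted IPI target so it can run and
> > >>>>>>>> 	 * handle the IPI instead of us busy-waiting on it. */
> > >>>>>>>> 	for_each_cpu(cpu, mask) {
> > >>>>>>>> 		if (vcpu_is_preempted(cpu)) {
> > >>>>>>>> 			kvm_hypercall1(KVM_HC_SCHED_YIELD,
> > >>>>>>>> 				       per_cpu(x86_cpu_to_apicid, cpu));
> > >>>>>>>> 			break;
> > >>>>>>>> 		}
> > >>>>>>>> 	}
> > >>>>>>>> }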
> > >>>>>>>
> > >>>>>>> Have you checked if we could gain performance by having the yield as an
> > >>>>>>> extension to our PV IPI call?
> > >>>>>>>
> > >>>>>>> It would allow us to skip the VM entry/exit overhead on the caller.
> > >>>>>>> (The benefit of that might be negligible and it also poses a
> > >>>>>>> complication when splitting the target mask into several PV IPI
> > >>>>>>> hypercalls.)
> > >>>>>>
> > >>>>>> Tangentially related to splitting PV IPI hypercalls, are there any major
> > >>>>>> hurdles to supporting shorthand?  Not having to generate the mask for
> > >>>>>> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy way to
> > >>>>>> shave cycles for affected flows.
> > >>>>>
> > >>>>> I'm not sure why shorthand is not used for native x2apic mode.
> > >>>>
> > >>>> Why do you say so? native_send_call_func_ipi() checks whether the
> > >>>> allbutself shorthand should be used and uses it if so (even though the
> > >>>> check could be more efficient - I’m looking at that code right now…)
> > >>>
> > >>> Please look further into the apic/x2apic drivers. Only apic_flat
> > >>> writes APIC_DEST_ALLBUT/APIC_DEST_ALLINC to the ICR.
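> > >>>
> > >>> (For illustration, a hedged sketch of the contrast rather than the
> > >>> exact driver code; __x2apic_send_IPI_dest() is the driver-internal
> > >>> ICR-write helper:)
> > >>>
> > >>> /* apic_flat: a single shorthand ICR write covers all-but-self. */
> > >>> static void flat_allbutself_sketch(int vector)
> > >>> {
> > >>> 	__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector,
> > >>> 				    apic->dest_logical);
> > >>> }
> > >>>
> > >>> /* x2apic: no shorthand; one ICR write per destination CPU, which
> > >>>  * is exactly the mask-generation cost Sean mentioned above. */
> > >>> static void x2apic_allbutself_sketch(int vector)
> > >>> {
> > >>> 	unsigned int cpu;
> > >>>
> > >>> 	for_each_online_cpu(cpu) {
> > >>> 		if (cpu == smp_processor_id())
> > >>> 			continue;
> > >>> 		__x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, cpu),
> > >>> 				       vector, APIC_DEST_PHYSICAL);
> > >>> 	}
> > >>> }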
> > >>
> > >> Indeed - I was sure from the name that it did this correctly. That’s stupid.
> > >>
> > >> I’ll add it to the patch-set I am working on (TLB shootdown improvements),
> > >> if you don’t mind.
> > >
> > > The shorthands were originally avoided for CPU hotplug safety:
> > > https://lwn.net/Articles/138365/
> > > https://lwn.net/Articles/138368/
> > > I'm not sure native shorthand support would be acceptable, so I will
> > > experiment with my kvm_send_ipi_allbutself() and kvm_send_ipi_all(). :)
> >
> > Yes, I saw these threads before. But I think the test in
> > native_send_call_func_ipi() should take care of it.
>
> Good news: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/log/?h=WIP.x86/ipi
> Thomas, who is also the author of the hotplug state machine, is now
> introducing shorthand support to the native kernel. I will add the
> support to kvm_send_ipi_allbutself() and kvm_send_ipi_all() after his
> work is complete.

Hmm, we should fall back to the native shorthands when they are supported.
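
Something along these lines, perhaps (untested sketch; the
apic_use_ipi_shorthand static key is taken from Thomas's WIP branch and
may still change):

/* Prefer the native shorthand once the rework lands; otherwise keep
 * the PV mask-based path. */
static void kvm_send_ipi_allbutself(int vector)
{
	if (static_branch_likely(&apic_use_ipi_shorthand)) {
		apic->send_IPI_allbutself(vector);
		return;
	}
	kvm_send_ipi_mask_allbutself(cpu_online_mask, vector);
}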

Regards,
Wanpeng Li


Thread overview: 18+ messages
2019-05-30  1:05 Wanpeng Li
2019-05-30  1:05 ` [PATCH v3 1/3] KVM: X86: " Wanpeng Li
2019-05-30  1:05 ` [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall Wanpeng Li
2019-06-10 14:17   ` Radim Krčmář
2019-06-11  8:47     ` Wanpeng Li
2019-05-30  1:05 ` [PATCH v3 3/3] KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest Wanpeng Li
2019-06-10  5:58 ` [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
2019-06-10 14:34 ` Radim Krčmář
2019-06-11  1:11   ` Sean Christopherson
2019-06-11  1:45     ` Wanpeng Li
2019-06-11  1:48       ` Nadav Amit
2019-06-11 10:02         ` Wanpeng Li
2019-06-11 16:57           ` Nadav Amit
2019-06-12  1:18             ` Wanpeng Li
2019-06-12  1:37               ` Nadav Amit
2019-06-28  9:12                 ` Wanpeng Li
2019-06-28  9:18                   ` Wanpeng Li [this message]
2019-06-11 10:26   ` Wanpeng Li
