LKML Archive on lore.kernel.org
From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
To: Will Deacon <will@kernel.org>
Cc: "linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"maz@kernel.org" <maz@kernel.org>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"james.morse@arm.com" <james.morse@arm.com>,
	"julien.thierry.kdev@gmail.com" <julien.thierry.kdev@gmail.com>,
	"suzuki.poulose@arm.com" <suzuki.poulose@arm.com>,
	"jean-philippe@linaro.org" <jean-philippe@linaro.org>,
	"Alexandru.Elisei@arm.com" <Alexandru.Elisei@arm.com>,
	"qperret@google.com" <qperret@google.com>,
	Linuxarm <linuxarm@huawei.com>
Subject: RE: [PATCH v3 4/4] KVM: arm64: Clear active_vmids on vCPU schedule out
Date: Fri, 6 Aug 2021 12:24:36 +0000
Message-ID: <b2146ea5db47485f8410a4c1ab0c15fe@huawei.com>
In-Reply-To: <20210803153036.GA31125@willie-the-truck>



> -----Original Message-----
> From: Shameerali Kolothum Thodi
> Sent: 03 August 2021 16:57
> To: 'Will Deacon' <will@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org; kvmarm@lists.cs.columbia.edu;
> linux-kernel@vger.kernel.org; maz@kernel.org; catalin.marinas@arm.com;
> james.morse@arm.com; julien.thierry.kdev@gmail.com;
> suzuki.poulose@arm.com; jean-philippe@linaro.org;
> Alexandru.Elisei@arm.com; qperret@google.com; Linuxarm
> <linuxarm@huawei.com>
> Subject: RE: [PATCH v3 4/4] KVM: arm64: Clear active_vmids on vCPU
> schedule out
> 
> 
> 
> > -----Original Message-----
> > From: Will Deacon [mailto:will@kernel.org]
> > Sent: 03 August 2021 16:31
> > To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> > Cc: linux-arm-kernel@lists.infradead.org; kvmarm@lists.cs.columbia.edu;
> > linux-kernel@vger.kernel.org; maz@kernel.org; catalin.marinas@arm.com;
> > james.morse@arm.com; julien.thierry.kdev@gmail.com;
> > suzuki.poulose@arm.com; jean-philippe@linaro.org;
> > Alexandru.Elisei@arm.com; qperret@google.com; Linuxarm
> > <linuxarm@huawei.com>
> > Subject: Re: [PATCH v3 4/4] KVM: arm64: Clear active_vmids on vCPU
> > schedule out
> >
> > On Tue, Aug 03, 2021 at 12:55:25PM +0000, Shameerali Kolothum Thodi
> > wrote:
> > > > > diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
> > > > > index 5584e84aed95..5fd51f5445c1 100644
> > > > > --- a/arch/arm64/kvm/vmid.c
> > > > > +++ b/arch/arm64/kvm/vmid.c
> > > > > @@ -116,6 +116,12 @@ static u64 new_vmid(struct kvm_vmid
> > > > *kvm_vmid)
> > > > >  	return idx2vmid(vmid) | generation;
> > > > >  }
> > > > >
> > > > > +/* Call with preemption disabled */
> > > > > +void kvm_arm_vmid_clear_active(void)
> > > > > +{
> > > > > +	atomic64_set(this_cpu_ptr(&active_vmids), 0);
> > > > > +}
> > > >
> > > > I think this is very broken, as it will force everybody to take the
> > > > slow-path when they see an active_vmid of 0.
> > >
> > > Yes. I have seen that happening in my test setup.
> >
> > Why didn't you say so?!
> 
> Sorry. I thought of getting some performance numbers with and
> without this patch and measuring the impact, but didn't quite get time
> to finish it yet.
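
For reference, this is roughly the fast path in question, as a simplified
sketch only (helper names such as vmid_gen_match() and cpu_vmid_lock are
assumed here, not copied verbatim from the patch). It shows why a per-CPU
active_vmids entry that has been cleared to 0 can never take the lock-free
path on the next schedule-in:

	/* Simplified sketch, not the exact code from the series. */
	void kvm_arm_vmid_update_sketch(struct kvm_vmid *kvm_vmid)
	{
		unsigned long flags;
		u64 vmid, old_active;

		vmid = atomic64_read(&kvm_vmid->id);
		old_active = atomic64_read(this_cpu_ptr(&active_vmids));

		/*
		 * Fast path: the per-CPU active VMID is non-zero, the
		 * vCPU's VMID belongs to the current generation, and we
		 * manage to re-install it atomically. Clearing
		 * active_vmids to 0 on schedule-out makes the first
		 * check fail, so every subsequent schedule-in falls
		 * through to the slow path and contends on the lock.
		 */
		if (old_active && vmid_gen_match(vmid) &&
		    atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
					     old_active, vmid) == old_active)
			return;

		/* Slow path: serialise against rollover, (re)allocate. */
		raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
		/* ... new_vmid()/rollover handling elided ... */
		raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
	}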

These are some test numbers with and without this patch, run on two
different test setups.


a) Test Setup 1
-----------------------

Platform: HiSilicon D06 with 128 CPUs, VMID bits = 16
Run 128 VMs concurrently, each with 2 vCPUs. Each guest executes hackbench
5 times before exiting.

Measurements are the average of 10 runs.

Image : 5.14-rc3
---------------------------
  Time(s)       44.43813888
  No. of exits    145,348,264

Image: 5.14-rc3 + vmid-v3
----------------------------------------
  Time(s)        46.59789034
  No. of exits     133,587,307

%diff against 5.14-rc3
  Time: 4.8% more
  Exits: 8% less 

Image: 5.14-rc3 + vmid-v3 + without active_vmids clear
---------------------------------------------------------------------------
  Time(s)         44.5031782
  No. of exits      144,443,188

%diff against 5.14-rc3
  Time: 0.15% more
  Exits: 2.42% less

b) Test Setup 2
-----------------------

Platform: HiSilicon D06, kernel with maxcpus set to 8 and VMID bits set to 4.
Run 40 VMs concurrently, each with 2 vCPUs. Each guest executes hackbench
5 times before exiting. (With only 4 VMID bits there are just 16 VMIDs
available, so this setup forces frequent rollovers.)

Measurements are the average of 10 runs.

Image : 5.14-rc3-vmid4bit
------------------------------------
  Time(s)        46.19963266
  No. of exits     23,699,546

Image: 5.14-rc3-vmid4bit + vmid-v3
---------------------------------------------------
  Time(s)          45.83307736
  No. of exits      23,260,203

%diff against 5.14-rc3-vmid4bit
  Time: 0.8% less
  Exits: 1.85% less 

Image: 5.14-rc3-vmid4bit + vmid-v3 + without active_vmids clear
-----------------------------------------------------------------------------------------
  Time(s)           44.5031782
  No. of exits        144,443,188

%diff against 5.14-rc3-vmid4bit
  Time: 1.05% less
  Exits: 2.06% less

As expected, the active_vmids clear on schedule out is not helping.
But without this patch, the numbers seem to be better than the
vanilla kernel when we force the setup (cpus=8, vmid=4 bits)
to perform rollover.

Please let me know your thoughts.

Thanks,
Shameer

> 
> >
> > > > It also doesn't solve the issue I mentioned before, as an active_vmid of 0
> > > > means that the reserved vmid is preserved.
> > > >
> > > > Needs more thought...
> > >
> > > How about we clear all the active_vmids in kvm_arch_free_vm() if it
> > > matches the kvm_vmid->id ? But we may have to hold the lock
> > > there
> >
> > I think we have to be really careful not to run into the "suspended
> > animation" problem described in ae120d9edfe9 ("ARM: 7767/1: let the ASID
> > allocator handle suspended animation") if we go down this road.
> 
> 
> Ok. I will go through that.
> 
> > Maybe something along the lines of:
> >
> > ROLLOVER
> >
> >   * Take lock
> >   * Inc generation
> >     => This will force everybody down the slow path
> >   * Record active VMIDs
> >   * Broadcast TLBI
> >     => Only active VMIDs can be dirty
> >     => Reserve active VMIDs and mark as allocated
> >
> > VCPU SCHED IN
> >
> >   * Set active VMID
> >   * Check generation
> >   * If mismatch then:
> >         * Take lock
> >         * Try to match a reserved VMID
> >         * If no reserved VMID, allocate new
> >
> > VCPU SCHED OUT
> >
> >   * Clear active VMID
> >
> > but I'm not daft enough to think I got it right first time. I think it
> > needs both implementing *and* modelling in TLA+ before we merge it!
> >
> 
> Ok. I need some time to digest the above first :).
> 
> On another note, how serious do you think the problem of extra
> reservation of the VMID space is? Just wondering if we can skip this
> patch for now or not.
> 
> Thanks,
> Shameer
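
P.S. Purely as a starting point for digesting the ROLLOVER / SCHED IN /
SCHED OUT scheme outlined above (and certainly not the implemented or
TLA+-checked version), a skeletal transliteration could look roughly like
this. All helper names here (reserve_vmid(), reserved_vmid_or_new(),
vmid_generation, VMID_FIRST_VERSION) are made up for illustration only:

	/* Illustrative skeleton of the proposed scheme only. */

	/* ROLLOVER: caller holds cpu_vmid_lock. */
	static void vmid_rollover(void)
	{
		int cpu;

		/* Bump the generation: every fast path now mismatches
		 * and drops into the slow path. */
		atomic64_add(VMID_FIRST_VERSION, &vmid_generation);

		/* Record the VMIDs currently live on some CPU and
		 * reserve them as allocated. */
		for_each_possible_cpu(cpu)
			reserve_vmid(atomic64_read(per_cpu_ptr(&active_vmids, cpu)));

		/* Only the active VMIDs can have dirty TLB entries. */
		flush_tlb_all();	/* broadcast TLBI */
	}

	/* VCPU SCHED IN */
	void vmid_sched_in(struct kvm_vmid *kvm_vmid)
	{
		u64 vmid = atomic64_read(&kvm_vmid->id);

		atomic64_set(this_cpu_ptr(&active_vmids), vmid);
		if (vmid_gen_match(vmid))
			return;

		raw_spin_lock(&cpu_vmid_lock);
		/* Reuse a VMID reserved at rollover if this guest had
		 * one, otherwise allocate a new one. */
		vmid = reserved_vmid_or_new(kvm_vmid);
		atomic64_set(this_cpu_ptr(&active_vmids), vmid);
		raw_spin_unlock(&cpu_vmid_lock);
	}

	/* VCPU SCHED OUT */
	void vmid_sched_out(void)
	{
		atomic64_set(this_cpu_ptr(&active_vmids), 0);
	}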
