LKML Archive on lore.kernel.org
From: Peter Xu <peterx@redhat.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	peterx@redhat.com, Maxim Levitsky <mlevitsk@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: [PATCH v3 0/7] KVM: X86: Some light optimizations on rmap logic
Date: Fri, 30 Jul 2021 18:04:48 -0400	[thread overview]
Message-ID: <20210730220455.26054-1-peterx@redhat.com> (raw)

The major change in v3 is to address comments from Sean.

I retested the two relevant patches and the numbers changed slightly, so I
updated the numbers in the two optimization patches to reflect that.  In the
latest measurement the 3->15 slots change accounts for more of the speedup.
Summary:

        Vanilla:      473.90 (+-5.93%)
        3->15 slots:  366.10 (+-4.94%)
        Add counter:  351.00 (+-3.70%)

All the numbers are also updated in the commit messages.

To apply the series on top of kvm/queue, the following patches should be
replaced by the corresponding patches in this v3:

        KVM: X86: MMU: Tune PTE_LIST_EXT to be bigger
        KVM: X86: Optimize pte_list_desc with per-array counter
        KVM: X86: Optimize zapping rmap

The 1st one-liner patch needs to be replaced because its commit message is
updated with the new numbers, so that all the numbers stay aligned; the 2nd
and 3rd patches address Sean's comments and carry the new numbers as well.

I didn't repost the first two patches because they're already in kvm/queue
and they would be identical in content.  Please have a look, thanks.

v2: https://lore.kernel.org/kvm/20210625153214.43106-1-peterx@redhat.com/
v1: https://lore.kernel.org/kvm/20210624181356.10235-1-peterx@redhat.com/

-- original cover letter --

Everything started from patch 1, which introduced a new statistic to track the
"max rmap entry count per vm".  At that point I was just curious how many rmap
entries a guest normally has, and the answer surprised me a bit.

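As a rough illustration of what that statistic boils down to (the field and
helper names here are hypothetical, not taken from the patch), the update is
just remembering the largest rmap list the VM has ever seen:

---8<---
#include <linux/kvm_host.h>

/* Sketch only: "max_mmu_rmap_size" is an assumed per-vm stat field. */
static void kvm_update_max_rmap_size(struct kvm *kvm, unsigned int rmap_count)
{
	if (rmap_count > kvm->stat.max_mmu_rmap_size)
		kvm->stat.max_mmu_rmap_size = rmap_count;
}
---8<---
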
For TDP mappings it's all fine, as the rmap count of a page is mostly either 0
or 1 depending on whether it has been faulted in.  It turns out that with EPT=N
a huge number of pages can have tens or hundreds of rmap entries even for an
idle guest.  That observation led to the rest of the series.

To better understand how many of those pages there are, I did patches 2-6,
which introduce the idea of per-arch per-vm debugfs nodes and add a debug file
with rmap statistics; this is similar to kvm_arch_create_vcpu_debugfs() but for
the vm rather than the vcpu.

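The shape of the hook, as a minimal sketch (the hook name mirrors the existing
kvm_arch_create_vcpu_debugfs(); the exact name, mode, and fops below are
assumptions rather than quotes from the patches):

---8<---
#include <linux/debugfs.h>
#include <linux/kvm_host.h>

/* virt/kvm/kvm_main.c side: weak default, so arches without per-vm files
 * need to do nothing. */
int __weak kvm_arch_create_vm_debugfs(struct kvm *kvm)
{
	return 0;
}

/* arch/x86/kvm/x86.c side (sketch): hang the rmap stat file off the common
 * per-vm dentry; "mmu_rmaps_stat_fops" is an assumed fops name. */
int kvm_arch_create_vm_debugfs(struct kvm *kvm)
{
	debugfs_create_file("mmu_rmaps_stat", 0644, kvm->debugfs_dentry,
			    kvm, &mmu_rmaps_stat_fops);
	return 0;
}
---8<---
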
I also noticed that this should be the clean approach, as other arches already
create their own per-vm debugfs nodes in scattered places:

---8<---
*** arch/arm64/kvm/vgic/vgic-debug.c:
vgic_debug_init[274]           debugfs_create_file("vgic-state", 0444, kvm->debugfs_dentry, kvm,

*** arch/powerpc/kvm/book3s_64_mmu_hv.c:
kvmppc_mmu_debugfs_init[2115]  debugfs_create_file("htab", 0400, kvm->arch.debugfs_dir, kvm,

*** arch/powerpc/kvm/book3s_64_mmu_radix.c:
kvmhv_radix_debugfs_init[1434] debugfs_create_file("radix", 0400, kvm->arch.debugfs_dir, kvm,

*** arch/powerpc/kvm/book3s_hv.c:
debugfs_vcpu_init[2395]        debugfs_create_file("timings", 0444, vcpu->arch.debugfs_dir, vcpu,

*** arch/powerpc/kvm/book3s_xics.c:
xics_debugfs_init[1027]        xics->dentry = debugfs_create_file(name, 0444, powerpc_debugfs_root,

*** arch/powerpc/kvm/book3s_xive.c:
xive_debugfs_init[2236]        xive->dentry = debugfs_create_file(name, S_IRUGO, powerpc_debugfs_root,

*** arch/powerpc/kvm/timing.c:
kvmppc_create_vcpu_debugfs[214] debugfs_file = debugfs_create_file(dbg_fname, 0666, kvm_debugfs_dir,
---8<---

PPC even has its own per-vm dir for that.  If patches 2-6 can be accepted, the
next thing to consider is merging all these usages under the same existing
per-vm dentry via the newly introduced per-arch hooks.

The last 3 patches (patches 7-9) are a few optimizations of the existing rmap
logic.  The major test case I used is rmap_fork [1]; it is admittedly not the
ideal one to show their effect, since the test covers both rmap_add and
rmap_remove, while I don't have a good idea for optimizing rmap_remove without
changing the array structure or adding much overhead (e.g. sorting the array,
or replacing the array list with some tree-like structure).  However, it
already shows some benefit with these changes, so I'm posting them.

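To make the "3->15 slots" and "per-array counter" changes more concrete, here
is a sketch of the data structure behind the rmap list; the field names, the
exact PTE_LIST_EXT value, and the pte_list_count() helper below are
reconstructions from the patch titles and summary numbers, not copied from the
patches:

---8<---
#include <linux/types.h>

/* As in arch/x86/include/asm/kvm_host.h: the rmap head either stores one
 * spte pointer directly, or (with bit 0 set) a pointer to a desc chain. */
struct kvm_rmap_head {
	unsigned long val;
};

#define PTE_LIST_EXT 15			/* grown from 3 slots per desc */

struct pte_list_desc {
	struct pte_list_desc *more;	/* next array in the chain */
	u64 spte_count;			/* valid entries in sptes[] below */
	u64 *sptes[PTE_LIST_EXT];
};

/* With the per-array counter, counting entries only walks the desc chain
 * instead of scanning every slot of every desc. */
static unsigned int pte_list_count(struct kvm_rmap_head *rmap_head)
{
	struct pte_list_desc *desc;
	unsigned int count = 0;

	if (!rmap_head->val)
		return 0;
	if (!(rmap_head->val & 1))
		return 1;		/* single spte, no desc chain */

	desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
	while (desc) {
		count += desc->spte_count;
		desc = desc->more;
	}
	return count;
}
---8<---

The counter also lets the add path append at a known index instead of
searching each array for a free slot, which is likely where most of the
rmap_add speedup comes from.
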
Applying patches 7-8 brings an overall 38% perf boost when forking 500 children
with the test I used.  I didn't run a perf test on patch 9.  More details are
in the commit logs.

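For context, a fork-heavy workload of this kind can be approximated by a small
program like the one below; it is a simplified stand-in for illustration only,
not the actual rmap_fork test from [1].  When such a workload runs inside an
EPT=N guest, each child's shadowed page tables map the same guest pages again,
which is what grows the rmap lists.

---8<---
/* Simplified fork-storm timing sketch; NOT the rmap_fork test from [1]. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int nr = argc > 1 ? atoi(argv[1]) : 500;
	size_t size = 64UL << 20;		/* memory each child inherits */
	char *buf = malloc(size);
	struct timeval start, end;
	int i;

	if (!buf)
		return 1;
	memset(buf, 1, size);			/* fault the pages in */

	gettimeofday(&start, NULL);
	for (i = 0; i < nr; i++) {
		pid_t pid = fork();

		if (pid == 0) {
			memset(buf, 2, size);	/* dirty the COW pages */
			_exit(0);
		}
		waitpid(pid, NULL, 0);
	}
	gettimeofday(&end, NULL);

	printf("forked %d children in %.2f ms\n", nr,
	       (end.tv_sec - start.tv_sec) * 1000.0 +
	       (end.tv_usec - start.tv_usec) / 1000.0);
	free(buf);
	return 0;
}
---8<---
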
Please review, thanks.

[1] https://github.com/xzpeter/clibs/commit/825436f825453de2ea5aaee4bdb1c92281efe5b3

Peter Xu (7):
  KVM: Allow to have arch-specific per-vm debugfs files
  KVM: X86: Introduce pte_list_count() helper
  KVM: X86: Introduce kvm_mmu_slot_lpages() helpers
  KVM: X86: Introduce mmu_rmaps_stat per-vm debugfs file
  KVM: X86: MMU: Tune PTE_LIST_EXT to be bigger
  KVM: X86: Optimize pte_list_desc with per-array counter
  KVM: X86: Optimize zapping rmap

 arch/x86/kvm/mmu/mmu.c          |  98 +++++++++++++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |   1 +
 arch/x86/kvm/x86.c              | 130 +++++++++++++++++++++++++++++++-
 include/linux/kvm_host.h        |   1 +
 virt/kvm/kvm_main.c             |  20 ++++-
 5 files changed, 221 insertions(+), 29 deletions(-)

-- 
2.31.1




Thread overview: 13+ messages
2021-07-30 22:04 Peter Xu [this message]
2021-07-30 22:04 ` [PATCH v3 1/7] KVM: Allow to have arch-specific per-vm debugfs files Peter Xu
2021-08-03 11:15   ` Greg KH
2021-08-03 19:25     ` Peter Xu
2021-07-30 22:04 ` [PATCH v3 2/7] KVM: X86: Introduce pte_list_count() helper Peter Xu
2021-07-30 22:04 ` [PATCH v3 3/7] KVM: X86: Introduce kvm_mmu_slot_lpages() helpers Peter Xu
2021-07-30 22:04 ` [PATCH v3 4/7] KVM: X86: Introduce mmu_rmaps_stat per-vm debugfs file Peter Xu
2021-08-02 15:25   ` Paolo Bonzini
2021-08-03 19:14     ` Peter Xu
2021-08-05 18:19     ` Sean Christopherson
2021-07-30 22:04 ` [PATCH v3 5/7] KVM: X86: MMU: Tune PTE_LIST_EXT to be bigger Peter Xu
2021-07-30 22:06 ` [PATCH v3 6/7] KVM: X86: Optimize pte_list_desc with per-array counter Peter Xu
2021-07-30 22:06 ` [PATCH v3 7/7] KVM: X86: Optimize zapping rmap Peter Xu
