Subject: Re: [PATCH 2/7] KVM: X86: Synchronize the shadow pagetable before link it
From: Lai Jiangshan <laijs@linux.alibaba.com>
To: Sean Christopherson
Cc: Lai Jiangshan, linux-kernel@vger.kernel.org, Paolo Bonzini,
    Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
    "H. Peter Anvin", Marcelo Tosatti, Avi Kivity, kvm@vger.kernel.org
Date: Sat, 4 Sep 2021 00:25:27 +0800
Message-ID: <7067bec0-8a15-1a18-481e-e2ea79575dcf@linux.alibaba.com>
References: <20210824075524.3354-1-jiangshanlai@gmail.com>
 <20210824075524.3354-3-jiangshanlai@gmail.com>

On 2021/9/4 00:06, Sean Christopherson wrote:
>
> trace_get_page:
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 50ade6450ace..2ff123ec0d64 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -704,6 +704,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
>  		access = gw->pt_access[it.level - 2];
>  		sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
>  				      it.level-1, false, access);
> +		if (sp->unsync_children &&
> +		    mmu_sync_children(vcpu, sp, false))
> +			return RET_PF_RETRY;

This is like my first (unsent) fix: just return RET_PF_RETRY when
mmu_sync_children() has to break out.

But then I thought it would be better to retry the fetch directly,
rather than bounce the fault back to the guest, when the conditions are
still valid/unchanged, so that the guest page-table walk and GUP() do
not have to be redone.  (The code above does not check all the
conditions, such as a pending interrupt event; we could add that too.)

I think it is a good design to allow breaking mmu_lock while the MMU is
handling heavy work.

>  	}
>
>  	/*
> --
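
To make the "retry the fetch directly" idea concrete, a minimal sketch is
below. It assumes a hypothetical helper fetch_still_valid() (not existing
code) that would re-check whatever conditions matter (obsolete sp, changed
guest walk, pending events) after mmu_sync_children() had to drop and
re-take mmu_lock; only then does the fault fall back to RET_PF_RETRY:

	/*
	 * Sketch only: if mmu_sync_children() had to break mmu_lock,
	 * re-check the conditions and retry the link locally instead of
	 * returning RET_PF_RETRY, so the guest page-table walk and GUP()
	 * are not redone when nothing relevant has changed.
	 *
	 * fetch_still_valid() is a hypothetical helper used for
	 * illustration, not an existing KVM function.
	 */
again:
	sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
			      it.level - 1, false, access);
	if (sp->unsync_children &&
	    mmu_sync_children(vcpu, sp, false)) {
		if (!fetch_still_valid(vcpu, gw, sp))
			return RET_PF_RETRY;
		goto again;
	}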