LKML Archive on lore.kernel.org
* KVM: add support for SVM Nested Paging
@ 2008-02-07 12:47 Joerg Roedel
  2008-02-07 12:47 ` [PATCH 1/8] SVM: move feature detection to hardware setup code Joerg Roedel
                   ` (8 more replies)
  0 siblings, 9 replies; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 12:47 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, linux-kernel

Hi,

here is the improved patchset which adds support for the Nested Paging
feature of the AMD Barcelona and Phenom processors to KVM. The patch set
was successfully install- and runtime-tested with various guest
operating systems (64-bit, 32-bit legacy and 32-bit PAE Linux, as well
as 64-bit and 32-bit versions of Windows).

Compared to the previous post of these patches, they were extended with
support for KVM on 32-bit PAE hosts. Live migration is also implemented
and was tested in various situations (migration between 32-bit and
64-bit hosts, non-NPT to/from NPT migration, NPT to NPT migration, all
with various kinds of guests, including Windows). After all these tests
I am pretty sure that these patches don't introduce any regression to
KVM, so the NPT feature is enabled by default.

Some new performance numbers are also available. For this test I started
an Ubuntu 7.10 guest with 4 VCPUs and 3 GB of RAM on a dual-socket
Opteron box and ran kernbench -M in it, with a 2.6.24 kernel as the
source tree. The test was done once with traditional Shadow Paging and
once with Nested Paging. The tests ran on the text console with GDM
stopped. Here are the results:

-----------------------------------------------------------------------
Average Half load -j 3 Run:
-----------------+-------------------------+---------------------------
                 | Nested Paging           | Shadow Paging
-----------------+-------------------------+---------------------------
                 | Time    (std deviation) | Time    (std deviation)
-----------------+-------------------------+---------------------------
Elapsed Time     |   147.574 (0.642168)    |   211.73  (0.802963)
User Time        |   382.036 (1.14742)     |   416.254 (1.91732)
System Time      |    48.29  (0.261151)    |   221.608 (0.706307)
Percent CPU      |   291     (0.707107)    |   300.8   (0.447214)
Context Switches | 33605.4   (148.788)     | 34901     (434.318)
Sleeps           | 47351     (98.2395)     | 47017.8   (378.582)
-----------------+-------------------------+---------------------------
Average Half load -j 16 Run:
-----------------+-------------------------+---------------------------
                 | Nested Paging           | Shadow Paging
-----------------+-------------------------+---------------------------
                 | Time    (std deviation) | Time    (std deviation)
-----------------+-------------------------+---------------------------
Elapsed Time     |   120.156 (0.337757)    |   179.162 (1.49999)
User Time        |   385.318 (3.57206)     |   421.624 (5.82785)
System Time      |    48.959 (0.811411)    |   230.8   (9.70806)
Percent CPU      |   327.6   (38.5982)     |   336.2   (37.3655)
Context Switches | 42435.5   (9312.09)     | 46190.7   (11909.6)
Sleeps           | 50297.5   (3111.73)     | 50688     (3882.49)
-----------------+-------------------------+---------------------------

In other words, Nested Paging gave a 30% performance improvement in the
'make -j 3' run and a 33% improvement in the 'make -j 16' run for this
benchmark.
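As a sanity check, the quoted percentages can be recomputed from the
"Elapsed Time" rows of the tables above. This is a standalone sketch,
not part of the patch set:

```c
/* Recompute the improvement figures from the "Elapsed Time" rows of the
 * kernbench tables: improvement = (1 - npt_time / shadow_time) * 100. */
static double improvement(double npt_time, double shadow_time)
{
	return (1.0 - npt_time / shadow_time) * 100.0;
}

/* improvement(147.574, 211.73)  -> ~30.3  ('make -j 3'  run)
 * improvement(120.156, 179.162) -> ~32.9  ('make -j 16' run) */
```

Both values round to the 30% and 33% figures quoted above.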

I think these patches are ready for merging into the KVM tree. Please
consider them for inclusion.

Joerg Roedel





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/8] SVM: move feature detection to hardware setup code
  2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
@ 2008-02-07 12:47 ` Joerg Roedel
  2008-02-07 12:47 ` [PATCH 2/8] SVM: add detection of Nested Paging feature Joerg Roedel
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 12:47 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Joerg Roedel

By moving the SVM feature detection from the per-cpu hardware enable
code to the hardware setup code, it runs only once. As an additional
advantage, the feature check is now available earlier in the module
setup process.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/svm.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index d1c7fcb..c0aaa85 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -302,7 +302,6 @@ static void svm_hardware_enable(void *garbage)
 	svm_data->asid_generation = 1;
 	svm_data->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
 	svm_data->next_asid = svm_data->max_asid + 1;
-	svm_features = cpuid_edx(SVM_CPUID_FUNC);
 
 	asm volatile ("sgdt %0" : "=m"(gdt_descr));
 	gdt = (struct desc_struct *)gdt_descr.address;
@@ -408,6 +407,9 @@ static __init int svm_hardware_setup(void)
 		if (r)
 			goto err_2;
 	}
+
+	svm_features = cpuid_edx(SVM_CPUID_FUNC);
+
 	return 0;
 
 err_2:
-- 
1.5.3.7





* [PATCH 2/8] SVM: add detection of Nested Paging feature
  2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
  2008-02-07 12:47 ` [PATCH 1/8] SVM: move feature detection to hardware setup code Joerg Roedel
@ 2008-02-07 12:47 ` Joerg Roedel
  2008-02-07 12:47 ` [PATCH 3/8] SVM: add module parameter to disable Nested Paging Joerg Roedel
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 12:47 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Joerg Roedel

Let SVM detect whether the Nested Paging feature is available on the
hardware. It is kept disabled here to keep this patch series bisectable.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/svm.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c0aaa85..b037b27 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -47,6 +47,8 @@ MODULE_LICENSE("GPL");
 #define SVM_FEATURE_LBRV (1 << 1)
 #define SVM_DEATURE_SVML (1 << 2)
 
+static bool npt_enabled = false;
+
 static void kvm_reput_irq(struct vcpu_svm *svm);
 
 static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
@@ -410,6 +412,12 @@ static __init int svm_hardware_setup(void)
 
 	svm_features = cpuid_edx(SVM_CPUID_FUNC);
 
+	if (!svm_has(SVM_FEATURE_NPT))
+		npt_enabled = false;
+
+	if (npt_enabled)
+		printk(KERN_INFO "kvm: Nested Paging enabled\n");
+
 	return 0;
 
 err_2:
-- 
1.5.3.7





* [PATCH 3/8] SVM: add module parameter to disable Nested Paging
  2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
  2008-02-07 12:47 ` [PATCH 1/8] SVM: move feature detection to hardware setup code Joerg Roedel
  2008-02-07 12:47 ` [PATCH 2/8] SVM: add detection of Nested Paging feature Joerg Roedel
@ 2008-02-07 12:47 ` Joerg Roedel
  2008-02-07 12:47 ` [PATCH 4/8] X86: export information about NPT to generic x86 code Joerg Roedel
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 12:47 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Joerg Roedel

This patch adds a module parameter to disable the use of the Nested
Paging feature even if it is available in hardware. Nested Paging can be
disabled by passing npt=0 to the kvm_amd module.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/svm.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b037b27..8173ba6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -48,6 +48,9 @@ MODULE_LICENSE("GPL");
 #define SVM_DEATURE_SVML (1 << 2)
 
 static bool npt_enabled = false;
+static int npt = 1;
+
+module_param(npt, int, S_IRUGO);
 
 static void kvm_reput_irq(struct vcpu_svm *svm);
 
@@ -415,6 +418,11 @@ static __init int svm_hardware_setup(void)
 	if (!svm_has(SVM_FEATURE_NPT))
 		npt_enabled = false;
 
+	if (npt_enabled && !npt) {
+		printk(KERN_INFO "kvm: Nested Paging disabled\n");
+		npt_enabled = false;
+	}
+
 	if (npt_enabled)
 		printk(KERN_INFO "kvm: Nested Paging enabled\n");
 
-- 
1.5.3.7





* [PATCH 4/8] X86: export information about NPT to generic x86 code
  2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
                   ` (2 preceding siblings ...)
  2008-02-07 12:47 ` [PATCH 3/8] SVM: add module parameter to disable Nested Paging Joerg Roedel
@ 2008-02-07 12:47 ` Joerg Roedel
  2008-02-07 12:47 ` [PATCH 5/8] MMU: make the __nonpaging_map function generic Joerg Roedel
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 12:47 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Joerg Roedel

The generic x86 code has to know whether the specific implementation
uses Nested Paging. In the generic code, Nested Paging is called Two
Dimensional Paging (TDP) to avoid confusion with (future) TDP
implementations of other vendors. This patch exports the availability of
TDP to the generic x86 code.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/mmu.c         |   15 +++++++++++++++
 arch/x86/kvm/svm.c         |    4 +++-
 include/asm-x86/kvm_host.h |    2 ++
 3 files changed, 20 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 635e70c..3477395 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -32,6 +32,15 @@
 #include <asm/cmpxchg.h>
 #include <asm/io.h>
 
+/*
+ * When setting this variable to true it enables Two-Dimensional-Paging
+ * where the hardware walks 2 page tables:
+ * 1. the guest-virtual to guest-physical
+ * 2. while doing 1. it walks guest-physical to host-physical
+ * If the hardware supports that we don't need to do shadow paging.
+ */
+static bool tdp_enabled = false;
+
 #undef MMU_DEBUG
 
 #undef AUDIT
@@ -1562,6 +1571,12 @@ out:
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
+void kvm_enable_tdp(void)
+{
+	tdp_enabled = true;
+}
+EXPORT_SYMBOL_GPL(kvm_enable_tdp);
+
 static void free_mmu_pages(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu_page *sp;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8173ba6..f400499 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -423,8 +423,10 @@ static __init int svm_hardware_setup(void)
 		npt_enabled = false;
 	}
 
-	if (npt_enabled)
+	if (npt_enabled) {
 		printk(KERN_INFO "kvm: Nested Paging enabled\n");
+		kvm_enable_tdp();
+	}
 
 	return 0;
 
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index 67ae307..7661da0 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -491,6 +491,8 @@ int kvm_fix_hypercall(struct kvm_vcpu *vcpu);
 
 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t gva, u32 error_code);
 
+void kvm_enable_tdp(void);
+
 int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
 int complete_pio(struct kvm_vcpu *vcpu);
 
-- 
1.5.3.7





* [PATCH 5/8] MMU: make the __nonpaging_map function generic
  2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
                   ` (3 preceding siblings ...)
  2008-02-07 12:47 ` [PATCH 4/8] X86: export information about NPT to generic x86 code Joerg Roedel
@ 2008-02-07 12:47 ` Joerg Roedel
  2008-02-07 12:47 ` [PATCH 6/8] X86: export the load_pdptrs() function to modules Joerg Roedel
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 12:47 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Joerg Roedel

The mapping function for the nonpaging case in the softmmu does
basically the same as what is required for Nested Paging. Make this
function generic so it can be used for both.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/mmu.c |    7 +++----
 1 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3477395..5e76963 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -965,10 +965,9 @@ static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
 {
 }
 
-static int __nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, int write,
-			   gfn_t gfn, struct page *page)
+static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
+			   gfn_t gfn, struct page *page, int level)
 {
-	int level = PT32E_ROOT_LEVEL;
 	hpa_t table_addr = vcpu->arch.mmu.root_hpa;
 	int pt_write = 0;
 
@@ -1026,7 +1025,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, int write, gfn_t gfn)
 
 	spin_lock(&vcpu->kvm->mmu_lock);
 	kvm_mmu_free_some_pages(vcpu);
-	r = __nonpaging_map(vcpu, v, write, gfn, page);
+	r = __direct_map(vcpu, v, write, gfn, page, PT32E_ROOT_LEVEL);
 	spin_unlock(&vcpu->kvm->mmu_lock);
 
 	up_read(&current->mm->mmap_sem);
-- 
1.5.3.7





* [PATCH 6/8] X86: export the load_pdptrs() function to modules
  2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
                   ` (4 preceding siblings ...)
  2008-02-07 12:47 ` [PATCH 5/8] MMU: make the __nonpaging_map function generic Joerg Roedel
@ 2008-02-07 12:47 ` Joerg Roedel
  2008-02-07 12:47 ` [PATCH 7/8] MMU: add TDP support to the KVM MMU Joerg Roedel
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 12:47 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Joerg Roedel

The load_pdptrs() function is required in the SVM module for NPT support.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/x86.c         |    1 +
 include/asm-x86/kvm_host.h |    2 ++
 2 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8f94a0b..31cdf09 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -202,6 +202,7 @@ out:
 
 	return ret;
 }
+EXPORT_SYMBOL_GPL(load_pdptrs);
 
 static bool pdptrs_changed(struct kvm_vcpu *vcpu)
 {
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index 7661da0..38f29fa 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -410,6 +410,8 @@ void kvm_mmu_zap_all(struct kvm *kvm);
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
 
+int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
+
 enum emulation_result {
 	EMULATE_DONE,       /* no further processing */
 	EMULATE_DO_MMIO,      /* kvm_run filled with mmio request */
-- 
1.5.3.7





* [PATCH 7/8] MMU: add TDP support to the KVM MMU
  2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
                   ` (5 preceding siblings ...)
  2008-02-07 12:47 ` [PATCH 6/8] X86: export the load_pdptrs() function to modules Joerg Roedel
@ 2008-02-07 12:47 ` Joerg Roedel
  2008-02-07 13:27   ` [kvm-devel] " Izik Eidus
  2008-02-07 12:47 ` [PATCH 8/8] SVM: add support for Nested Paging Joerg Roedel
  2008-02-10 12:03 ` [kvm-devel] KVM: add support for SVM " Avi Kivity
  8 siblings, 1 reply; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 12:47 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Joerg Roedel

This patch contains the changes to the KVM MMU necessary for support of the
Nested Paging feature in AMD Barcelona and Phenom Processors.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/mmu.c |   79 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu.h |    6 ++++
 2 files changed, 82 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5e76963..5304d55 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1081,6 +1081,7 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
 	int i;
 	gfn_t root_gfn;
 	struct kvm_mmu_page *sp;
+	int metaphysical = 0;
 
 	root_gfn = vcpu->arch.cr3 >> PAGE_SHIFT;
 
@@ -1089,14 +1090,20 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
 		hpa_t root = vcpu->arch.mmu.root_hpa;
 
 		ASSERT(!VALID_PAGE(root));
+		if (tdp_enabled)
+			metaphysical = 1;
 		sp = kvm_mmu_get_page(vcpu, root_gfn, 0,
-				      PT64_ROOT_LEVEL, 0, ACC_ALL, NULL, NULL);
+				      PT64_ROOT_LEVEL, metaphysical,
+				      ACC_ALL, NULL, NULL);
 		root = __pa(sp->spt);
 		++sp->root_count;
 		vcpu->arch.mmu.root_hpa = root;
 		return;
 	}
 #endif
+	metaphysical = !is_paging(vcpu);
+	if (tdp_enabled)
+		metaphysical = 1;
 	for (i = 0; i < 4; ++i) {
 		hpa_t root = vcpu->arch.mmu.pae_root[i];
 
@@ -1110,7 +1117,7 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
 		} else if (vcpu->arch.mmu.root_level == 0)
 			root_gfn = 0;
 		sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
-				      PT32_ROOT_LEVEL, !is_paging(vcpu),
+				      PT32_ROOT_LEVEL, metaphysical,
 				      ACC_ALL, NULL, NULL);
 		root = __pa(sp->spt);
 		++sp->root_count;
@@ -1144,6 +1151,36 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, gva_t gva,
 			     error_code & PFERR_WRITE_MASK, gfn);
 }
 
+static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa,
+				u32 error_code)
+{
+	struct page *page;
+	int r;
+
+	ASSERT(vcpu);
+	ASSERT(VALID_PAGE(vcpu->arch.mmu.root_hpa));
+
+	r = mmu_topup_memory_caches(vcpu);
+	if (r)
+		return r;
+
+	down_read(&current->mm->mmap_sem);
+	page = gfn_to_page(vcpu->kvm, gpa >> PAGE_SHIFT);
+	if (is_error_page(page)) {
+		kvm_release_page_clean(page);
+		up_read(&current->mm->mmap_sem);
+		return 1;
+	}
+	spin_lock(&vcpu->kvm->mmu_lock);
+	kvm_mmu_free_some_pages(vcpu);
+	r = __direct_map(vcpu, gpa, error_code & PFERR_WRITE_MASK,
+			 gpa >> PAGE_SHIFT, page, TDP_ROOT_LEVEL);
+	spin_unlock(&vcpu->kvm->mmu_lock);
+	up_read(&current->mm->mmap_sem);
+
+	return r;
+}
+
 static void nonpaging_free(struct kvm_vcpu *vcpu)
 {
 	mmu_free_roots(vcpu);
@@ -1237,7 +1274,35 @@ static int paging32E_init_context(struct kvm_vcpu *vcpu)
 	return paging64_init_context_common(vcpu, PT32E_ROOT_LEVEL);
 }
 
-static int init_kvm_mmu(struct kvm_vcpu *vcpu)
+static int init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mmu *context = &vcpu->arch.mmu;
+
+	context->new_cr3 = nonpaging_new_cr3;
+	context->page_fault = tdp_page_fault;
+	context->free = nonpaging_free;
+	context->prefetch_page = nonpaging_prefetch_page;
+	context->shadow_root_level = TDP_ROOT_LEVEL;
+	context->root_hpa = INVALID_PAGE;
+
+	if (!is_paging(vcpu)) {
+		context->gva_to_gpa = nonpaging_gva_to_gpa;
+		context->root_level = 0;
+	} else if (is_long_mode(vcpu)) {
+		context->gva_to_gpa = paging64_gva_to_gpa;
+		context->root_level = PT64_ROOT_LEVEL;
+	} else if (is_pae(vcpu)) {
+		context->gva_to_gpa = paging64_gva_to_gpa;
+		context->root_level = PT32E_ROOT_LEVEL;
+	} else {
+		context->gva_to_gpa = paging32_gva_to_gpa;
+		context->root_level = PT32_ROOT_LEVEL;
+	}
+
+	return 0;
+}
+
+static int init_kvm_softmmu(struct kvm_vcpu *vcpu)
 {
 	ASSERT(vcpu);
 	ASSERT(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
@@ -1252,6 +1317,14 @@ static int init_kvm_mmu(struct kvm_vcpu *vcpu)
 		return paging32_init_context(vcpu);
 }
 
+static int init_kvm_mmu(struct kvm_vcpu *vcpu)
+{
+	if (tdp_enabled)
+		return init_kvm_tdp_mmu(vcpu);
+	else
+		return init_kvm_softmmu(vcpu);
+}
+
 static void destroy_kvm_mmu(struct kvm_vcpu *vcpu)
 {
 	ASSERT(vcpu);
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 1fce19e..e64e9f5 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -3,6 +3,12 @@
 
 #include <linux/kvm_host.h>
 
+#ifdef CONFIG_X86_64
+#define TDP_ROOT_LEVEL PT64_ROOT_LEVEL
+#else
+#define TDP_ROOT_LEVEL PT32E_ROOT_LEVEL
+#endif
+
 static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
 {
 	if (unlikely(vcpu->kvm->arch.n_free_mmu_pages < KVM_MIN_FREE_MMU_PAGES))
-- 
1.5.3.7





* [PATCH 8/8] SVM: add support for Nested Paging
  2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
                   ` (6 preceding siblings ...)
  2008-02-07 12:47 ` [PATCH 7/8] MMU: add TDP support to the KVM MMU Joerg Roedel
@ 2008-02-07 12:47 ` Joerg Roedel
  2008-02-10 12:03 ` [kvm-devel] KVM: add support for SVM " Avi Kivity
  8 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 12:47 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Joerg Roedel

This patch contains the SVM architecture dependent changes for KVM to enable
support for the Nested Paging feature of AMD Barcelona and Phenom processors.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/svm.c |   72 ++++++++++++++++++++++++++++++++++++++++++++++++---
 1 files changed, 67 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f400499..9b9d838 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -47,7 +47,12 @@ MODULE_LICENSE("GPL");
 #define SVM_FEATURE_LBRV (1 << 1)
 #define SVM_DEATURE_SVML (1 << 2)
 
+/* enable NPT for AMD64 and X86 with PAE */
+#if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
+static bool npt_enabled = true;
+#else
 static bool npt_enabled = false;
+#endif
 static int npt = 1;
 
 module_param(npt, int, S_IRUGO);
@@ -187,7 +192,7 @@ static inline void flush_guest_tlb(struct kvm_vcpu *vcpu)
 
 static void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
-	if (!(efer & EFER_LMA))
+	if (!npt_enabled && !(efer & EFER_LMA))
 		efer &= ~EFER_LME;
 
 	to_svm(vcpu)->vmcb->save.efer = efer | MSR_EFER_SVME_MASK;
@@ -570,6 +575,22 @@ static void init_vmcb(struct vmcb *vmcb)
 	save->cr0 = 0x00000010 | X86_CR0_PG | X86_CR0_WP;
 	save->cr4 = X86_CR4_PAE;
 	/* rdx = ?? */
+
+	if (npt_enabled) {
+		/* Setup VMCB for Nested Paging */
+		control->nested_ctl = 1;
+		control->intercept_exceptions &= ~(1 << PF_VECTOR);
+		control->intercept_cr_read &= ~(INTERCEPT_CR0_MASK|
+						INTERCEPT_CR3_MASK);
+		control->intercept_cr_write &= ~(INTERCEPT_CR0_MASK|
+						 INTERCEPT_CR3_MASK);
+		save->g_pat = 0x0007040600070406ULL;
+		/* enable caching because the QEMU Bios doesn't enable it */
+		save->cr0 = X86_CR0_ET;
+		save->cr3 = 0;
+		save->cr4 = 0;
+	}
+
 }
 
 static int svm_vcpu_reset(struct kvm_vcpu *vcpu)
@@ -804,6 +825,9 @@ static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 		}
 	}
 #endif
+	if (npt_enabled)
+		goto set;
+
 	if ((vcpu->arch.cr0 & X86_CR0_TS) && !(cr0 & X86_CR0_TS)) {
 		svm->vmcb->control.intercept_exceptions &= ~(1 << NM_VECTOR);
 		vcpu->fpu_active = 1;
@@ -811,18 +835,26 @@ static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 
 	vcpu->arch.cr0 = cr0;
 	cr0 |= X86_CR0_PG | X86_CR0_WP;
-	cr0 &= ~(X86_CR0_CD | X86_CR0_NW);
 	if (!vcpu->fpu_active) {
 		svm->vmcb->control.intercept_exceptions |= (1 << NM_VECTOR);
 		cr0 |= X86_CR0_TS;
 	}
+set:
+	/*
+	 * re-enable caching here because the QEMU bios
+	 * does not do it - this results in some delay at
+	 * reboot
+	 */
+	cr0 &= ~(X86_CR0_CD | X86_CR0_NW);
 	svm->vmcb->save.cr0 = cr0;
 }
 
 static void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
        vcpu->arch.cr4 = cr4;
-       to_svm(vcpu)->vmcb->save.cr4 = cr4 | X86_CR4_PAE;
+       if (!npt_enabled)
+	       cr4 |= X86_CR4_PAE;
+       to_svm(vcpu)->vmcb->save.cr4 = cr4;
 }
 
 static void svm_set_segment(struct kvm_vcpu *vcpu,
@@ -1288,14 +1320,34 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm,
 	[SVM_EXIT_WBINVD]                       = emulate_on_interception,
 	[SVM_EXIT_MONITOR]			= invalid_op_interception,
 	[SVM_EXIT_MWAIT]			= invalid_op_interception,
+	[SVM_EXIT_NPF]				= pf_interception,
 };
 
-
 static int handle_exit(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u32 exit_code = svm->vmcb->control.exit_code;
 
+	if (npt_enabled) {
+		int mmu_reload = 0;
+		if ((vcpu->arch.cr0 ^ svm->vmcb->save.cr0) & X86_CR0_PG) {
+			svm_set_cr0(vcpu, svm->vmcb->save.cr0);
+			mmu_reload = 1;
+		}
+		vcpu->arch.cr0 = svm->vmcb->save.cr0;
+		vcpu->arch.cr3 = svm->vmcb->save.cr3;
+		if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) {
+			if (!load_pdptrs(vcpu, vcpu->arch.cr3)) {
+				kvm_inject_gp(vcpu, 0);
+				return 1;
+			}
+		}
+		if (mmu_reload) {
+			kvm_mmu_reset_context(vcpu);
+			kvm_mmu_load(vcpu);
+		}
+	}
+
 	kvm_reput_irq(svm);
 
 	if (svm->vmcb->control.exit_code == SVM_EXIT_ERR) {
@@ -1306,7 +1358,8 @@ static int handle_exit(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	}
 
 	if (is_external_interrupt(svm->vmcb->control.exit_int_info) &&
-	    exit_code != SVM_EXIT_EXCP_BASE + PF_VECTOR)
+	    exit_code != SVM_EXIT_EXCP_BASE + PF_VECTOR &&
+	    exit_code != SVM_EXIT_NPF)
 		printk(KERN_ERR "%s: unexpected exit_ini_info 0x%x "
 		       "exit_code 0x%x\n",
 		       __FUNCTION__, svm->vmcb->control.exit_int_info,
@@ -1497,6 +1550,9 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 	svm->host_dr6 = read_dr6();
 	svm->host_dr7 = read_dr7();
 	svm->vmcb->save.cr2 = vcpu->arch.cr2;
+	/* required for live migration with NPT */
+	if (npt_enabled)
+		svm->vmcb->save.cr3 = vcpu->arch.cr3;
 
 	if (svm->vmcb->save.dr7 & 0xff) {
 		write_dr7(0);
@@ -1640,6 +1696,12 @@ static void svm_set_cr3(struct kvm_vcpu *vcpu, unsigned long root)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	if (npt_enabled) {
+		svm->vmcb->control.nested_cr3 = root;
+		force_new_asid(vcpu);
+		return;
+	}
+
 	svm->vmcb->save.cr3 = root;
 	force_new_asid(vcpu);
 
-- 
1.5.3.7





* Re: [kvm-devel] [PATCH 7/8] MMU: add TDP support to the KVM MMU
  2008-02-07 12:47 ` [PATCH 7/8] MMU: add TDP support to the KVM MMU Joerg Roedel
@ 2008-02-07 13:27   ` Izik Eidus
  2008-02-07 13:50     ` Joerg Roedel
  0 siblings, 1 reply; 14+ messages in thread
From: Izik Eidus @ 2008-02-07 13:27 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Avi Kivity, kvm-devel, linux-kernel

Joerg Roedel wrote:
> This patch contains the changes to the KVM MMU necessary for support of the
> Nested Paging feature in AMD Barcelona and Phenom Processors.
>   

good patch, it looks like things will be very fixable with it

> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> ---
>  arch/x86/kvm/mmu.c |   79 ++++++++++++++++++++++++++++++++++++++++++++++++++--
>  arch/x86/kvm/mmu.h |    6 ++++
>  2 files changed, 82 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 5e76963..5304d55 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1081,6 +1081,7 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
>  	int i;
>  	gfn_t root_gfn;
>  	struct kvm_mmu_page *sp;
> +	int metaphysical = 0;
>  
>  	root_gfn = vcpu->arch.cr3 >> PAGE_SHIFT;
>  
> @@ -1089,14 +1090,20 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
>  		hpa_t root = vcpu->arch.mmu.root_hpa;
>  
>  		ASSERT(!VALID_PAGE(root));
> +		if (tdp_enabled)
> +			metaphysical = 1;
>  		sp = kvm_mmu_get_page(vcpu, root_gfn, 0,
> -				      PT64_ROOT_LEVEL, 0, ACC_ALL, NULL, NULL);
> +				      PT64_ROOT_LEVEL, metaphysical,
> +				      ACC_ALL, NULL, NULL);
>  		root = __pa(sp->spt);
>  		++sp->root_count;
>  		vcpu->arch.mmu.root_hpa = root;
>  		return;
>  	}
>  #endif
> +	metaphysical = !is_paging(vcpu);
> +	if (tdp_enabled)
> +		metaphysical = 1;
>  	for (i = 0; i < 4; ++i) {
>  		hpa_t root = vcpu->arch.mmu.pae_root[i];
>  
> @@ -1110,7 +1117,7 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
>  		} else if (vcpu->arch.mmu.root_level == 0)
>  			root_gfn = 0;
>  		sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
> -				      PT32_ROOT_LEVEL, !is_paging(vcpu),
> +				      PT32_ROOT_LEVEL, metaphysical,
>  				      ACC_ALL, NULL, NULL);
>  		root = __pa(sp->spt);
>  		++sp->root_count;
> @@ -1144,6 +1151,36 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, gva_t gva,
>  			     error_code & PFERR_WRITE_MASK, gfn);
>  }
>  
> +static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa,
> +				u32 error_code)
>   

you probably mean gpa_t ?

> +{
> +	struct page *page;
> +	int r;
> +
> +	ASSERT(vcpu);
> +	ASSERT(VALID_PAGE(vcpu->arch.mmu.root_hpa));
> +
> +	r = mmu_topup_memory_caches(vcpu);
> +	if (r)
> +		return r;
> +
> +	down_read(&current->mm->mmap_sem);
> +	page = gfn_to_page(vcpu->kvm, gpa >> PAGE_SHIFT);
> +	if (is_error_page(page)) {
> +		kvm_release_page_clean(page);
> +		up_read(&current->mm->mmap_sem);
> +		return 1;
> +	}
>   

I don't know if it's worth checking it here;
in the worst case we will map the error page and the host will be safe

> +	spin_lock(&vcpu->kvm->mmu_lock);
> +	kvm_mmu_free_some_pages(vcpu);
> +	r = __direct_map(vcpu, gpa, error_code & PFERR_WRITE_MASK,
> +			 gpa >> PAGE_SHIFT, page, TDP_ROOT_LEVEL);
> +	spin_unlock(&vcpu->kvm->mmu_lock);
> +	up_read(&current->mm->mmap_sem);
> +
> +	return r;
> +}
> +
>  static void nonpaging_free(struct kvm_vcpu *vcpu)
>  {
>  	mmu_free_roots(vcpu);
> @@ -1237,7 +1274,35 @@ static int paging32E_init_context(struct kvm_vcpu *vcpu)
>  	return paging64_init_context_common(vcpu, PT32E_ROOT_LEVEL);
>  }
>  
> -static int init_kvm_mmu(struct kvm_vcpu *vcpu)
> +static int init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_mmu *context = &vcpu->arch.mmu;
> +
> +	context->new_cr3 = nonpaging_new_cr3;
> +	context->page_fault = tdp_page_fault;
> +	context->free = nonpaging_free;
> +	context->prefetch_page = nonpaging_prefetch_page;
> +	context->shadow_root_level = TDP_ROOT_LEVEL;
> +	context->root_hpa = INVALID_PAGE;
> +
> +	if (!is_paging(vcpu)) {
> +		context->gva_to_gpa = nonpaging_gva_to_gpa;
> +		context->root_level = 0;
> +	} else if (is_long_mode(vcpu)) {
> +		context->gva_to_gpa = paging64_gva_to_gpa;
> +		context->root_level = PT64_ROOT_LEVEL;
> +	} else if (is_pae(vcpu)) {
> +		context->gva_to_gpa = paging64_gva_to_gpa;
> +		context->root_level = PT32E_ROOT_LEVEL;
> +	} else {
> +		context->gva_to_gpa = paging32_gva_to_gpa;
> +		context->root_level = PT32_ROOT_LEVEL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int init_kvm_softmmu(struct kvm_vcpu *vcpu)
>  {
>  	ASSERT(vcpu);
>  	ASSERT(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
> @@ -1252,6 +1317,14 @@ static int init_kvm_mmu(struct kvm_vcpu *vcpu)
>  		return paging32_init_context(vcpu);
>  }
>  
> +static int init_kvm_mmu(struct kvm_vcpu *vcpu)
> +{
> +	if (tdp_enabled)
> +		return init_kvm_tdp_mmu(vcpu);
> +	else
> +		return init_kvm_softmmu(vcpu);
> +}
> +
>  static void destroy_kvm_mmu(struct kvm_vcpu *vcpu)
>  {
>  	ASSERT(vcpu);
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 1fce19e..e64e9f5 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -3,6 +3,12 @@
>  
>  #include <linux/kvm_host.h>
>  
> +#ifdef CONFIG_X86_64
> +#define TDP_ROOT_LEVEL PT64_ROOT_LEVEL
> +#else
> +#define TDP_ROOT_LEVEL PT32E_ROOT_LEVEL
> +#endif
> +
>  static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
>  {
>  	if (unlikely(vcpu->kvm->arch.n_free_mmu_pages < KVM_MIN_FREE_MMU_PAGES))
>   


-- 
woof.



* Re: [kvm-devel] [PATCH 7/8] MMU: add TDP support to the KVM MMU
  2008-02-07 13:27   ` [kvm-devel] " Izik Eidus
@ 2008-02-07 13:50     ` Joerg Roedel
  2008-02-07 14:04       ` Izik Eidus
  0 siblings, 1 reply; 14+ messages in thread
From: Joerg Roedel @ 2008-02-07 13:50 UTC (permalink / raw)
  To: Izik Eidus; +Cc: Avi Kivity, kvm-devel, linux-kernel

On Thu, Feb 07, 2008 at 03:27:19PM +0200, Izik Eidus wrote:
> Joerg Roedel wrote:
> >This patch contains the changes to the KVM MMU necessary for support of the
> >Nested Paging feature in AMD Barcelona and Phenom Processors.
> >  
> 
> good patch, it looks like things will be very fixable with it
> 
> >Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> >---
> > arch/x86/kvm/mmu.c |   79 ++++++++++++++++++++++++++++++++++++++++++++++++++--
> > arch/x86/kvm/mmu.h |    6 ++++
> > 2 files changed, 82 insertions(+), 3 deletions(-)
> >
> >diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> >index 5e76963..5304d55 100644
> >--- a/arch/x86/kvm/mmu.c
> >+++ b/arch/x86/kvm/mmu.c
> >@@ -1081,6 +1081,7 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
> > 	int i;
> > 	gfn_t root_gfn;
> > 	struct kvm_mmu_page *sp;
> >+	int metaphysical = 0;
> >  	root_gfn = vcpu->arch.cr3 >> PAGE_SHIFT;
> > @@ -1089,14 +1090,20 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
> > 		hpa_t root = vcpu->arch.mmu.root_hpa;
> >  		ASSERT(!VALID_PAGE(root));
> >+		if (tdp_enabled)
> >+			metaphysical = 1;
> > 		sp = kvm_mmu_get_page(vcpu, root_gfn, 0,
> >-				      PT64_ROOT_LEVEL, 0, ACC_ALL, NULL, NULL);
> >+				      PT64_ROOT_LEVEL, metaphysical,
> >+				      ACC_ALL, NULL, NULL);
> > 		root = __pa(sp->spt);
> > 		++sp->root_count;
> > 		vcpu->arch.mmu.root_hpa = root;
> > 		return;
> > 	}
> > #endif
> >+	metaphysical = !is_paging(vcpu);
> >+	if (tdp_enabled)
> >+		metaphysical = 1;
> > 	for (i = 0; i < 4; ++i) {
> > 		hpa_t root = vcpu->arch.mmu.pae_root[i];
> > @@ -1110,7 +1117,7 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
> > 		} else if (vcpu->arch.mmu.root_level == 0)
> > 			root_gfn = 0;
> > 		sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
> >-				      PT32_ROOT_LEVEL, !is_paging(vcpu),
> >+				      PT32_ROOT_LEVEL, metaphysical,
> > 				      ACC_ALL, NULL, NULL);
> > 		root = __pa(sp->spt);
> > 		++sp->root_count;
> >@@ -1144,6 +1151,36 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, gva_t gva,
> > 			     error_code & PFERR_WRITE_MASK, gfn);
> > }
> > +static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa,
> >+				u32 error_code)
> >  
> 
> you probably mean gpa_t ?

Yes. But the function is assigned to a function pointer, and the type of
that pointer expects a gva_t there. So I named the parameter gpa to
document that a guest physical address is meant there.
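
The naming trick Joerg describes can be sketched as a standalone C
fragment (the type and struct names here are simplified stand-ins for
the real KVM definitions in arch/x86/include/asm/kvm_host.h):

```c
#include <assert.h>

/* Hypothetical, simplified types -- the real kvm typedefs live in
 * the kernel headers. */
typedef unsigned long gva_t;	/* guest virtual address */

/* The page_fault hook is declared to take a gva_t, so every
 * implementation must match that signature -- including
 * tdp_page_fault, which actually receives a guest *physical*
 * address. Naming the parameter "gpa" documents the mismatch
 * without breaking the function-pointer type. */
struct mmu_ops {
	int (*page_fault)(void *vcpu, gva_t addr, unsigned int error_code);
};

static int tdp_page_fault(void *vcpu, gva_t gpa, unsigned int error_code)
{
	/* 'gpa' holds a guest physical address despite the gva_t type */
	(void)vcpu; (void)gpa; (void)error_code;
	return 0;
}
```

Changing the parameter to gpa_t instead would break the assignment to
the shared page_fault pointer, which is why only the name changes.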

> >+{
> >+	struct page *page;
> >+	int r;
> >+
> >+	ASSERT(vcpu);
> >+	ASSERT(VALID_PAGE(vcpu->arch.mmu.root_hpa));
> >+
> >+	r = mmu_topup_memory_caches(vcpu);
> >+	if (r)
> >+		return r;
> >+
> >+	down_read(&current->mm->mmap_sem);
> >+	page = gfn_to_page(vcpu->kvm, gpa >> PAGE_SHIFT);
> >+	if (is_error_page(page)) {
> >+		kvm_release_page_clean(page);
> >+		up_read(&current->mm->mmap_sem);
> >+		return 1;
> >+	}
> >  
> 
> I don't know if it is worth checking it here;
> in the worst case we will map the error page and the host will be safe.

Looking at the nonpaging_map function, this is the right place to check
for the error page.

Joerg Roedel

-- 
           |           AMD Saxony Limited Liability Company & Co. KG
 Operating |         Wilschdorfer Landstr. 101, 01109 Dresden, Germany
 System    |                  Register Court Dresden: HRA 4896
 Research  |              General Partner authorized to represent:
 Center    |             AMD Saxony LLC (Wilmington, Delaware, US)
           | General Manager of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy




* Re: [kvm-devel] [PATCH 7/8] MMU: add TDP support to the KVM MMU
  2008-02-07 13:50     ` Joerg Roedel
@ 2008-02-07 14:04       ` Izik Eidus
  0 siblings, 0 replies; 14+ messages in thread
From: Izik Eidus @ 2008-02-07 14:04 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Avi Kivity, kvm-devel, linux-kernel

Joerg Roedel wrote:
> On Thu, Feb 07, 2008 at 03:27:19PM +0200, Izik Eidus wrote:
>   
>> Joerg Roedel wrote:
>>     
>>> This patch contains the changes to the KVM MMU necessary for support of the
>>> Nested Paging feature in AMD Barcelona and Phenom Processors.
>>>  
>>>       
>> good patch, it looks like things will be very fixable with it
>>
>>     
>>> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
>>> ---
>>> arch/x86/kvm/mmu.c |   79 ++++++++++++++++++++++++++++++++++++++++++++++++++--
>>> arch/x86/kvm/mmu.h |    6 ++++
>>> 2 files changed, 82 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 5e76963..5304d55 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -1081,6 +1081,7 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
>>> 	int i;
>>> 	gfn_t root_gfn;
>>> 	struct kvm_mmu_page *sp;
>>> +	int metaphysical = 0;
>>>  	root_gfn = vcpu->arch.cr3 >> PAGE_SHIFT;
>>> @@ -1089,14 +1090,20 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
>>> 		hpa_t root = vcpu->arch.mmu.root_hpa;
>>>  		ASSERT(!VALID_PAGE(root));
>>> +		if (tdp_enabled)
>>> +			metaphysical = 1;
>>> 		sp = kvm_mmu_get_page(vcpu, root_gfn, 0,
>>> -				      PT64_ROOT_LEVEL, 0, ACC_ALL, NULL, NULL);
>>> +				      PT64_ROOT_LEVEL, metaphysical,
>>> +				      ACC_ALL, NULL, NULL);
>>> 		root = __pa(sp->spt);
>>> 		++sp->root_count;
>>> 		vcpu->arch.mmu.root_hpa = root;
>>> 		return;
>>> 	}
>>> #endif
>>> +	metaphysical = !is_paging(vcpu);
>>> +	if (tdp_enabled)
>>> +		metaphysical = 1;
>>> 	for (i = 0; i < 4; ++i) {
>>> 		hpa_t root = vcpu->arch.mmu.pae_root[i];
>>> @@ -1110,7 +1117,7 @@ static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
>>> 		} else if (vcpu->arch.mmu.root_level == 0)
>>> 			root_gfn = 0;
>>> 		sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
>>> -				      PT32_ROOT_LEVEL, !is_paging(vcpu),
>>> +				      PT32_ROOT_LEVEL, metaphysical,
>>> 				      ACC_ALL, NULL, NULL);
>>> 		root = __pa(sp->spt);
>>> 		++sp->root_count;
>>> @@ -1144,6 +1151,36 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, gva_t gva,
>>> 			     error_code & PFERR_WRITE_MASK, gfn);
>>> }
>>> +static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa,
>>> +				u32 error_code)
>>>  
>>>       
>> you probably mean gpa_t ?
>>     
>
> Yes. But the function is assigned to a function pointer. And the type of
> that pointer expects gva_t there. So I named the parameter gpa to
> describe that a guest physical address is meant there.
>
>   
>>> +{
>>> +	struct page *page;
>>> +	int r;
>>> +
>>> +	ASSERT(vcpu);
>>> +	ASSERT(VALID_PAGE(vcpu->arch.mmu.root_hpa));
>>> +
>>> +	r = mmu_topup_memory_caches(vcpu);
>>> +	if (r)
>>> +		return r;
>>> +
>>> +	down_read(&current->mm->mmap_sem);
>>> +	page = gfn_to_page(vcpu->kvm, gpa >> PAGE_SHIFT);
>>> +	if (is_error_page(page)) {
>>> +		kvm_release_page_clean(page);
>>> +		up_read(&current->mm->mmap_sem);
>>> +		return 1;
>>> +	}
>>>  
>>>       
>> I don't know if it is worth checking it here;
>> in the worst case we will map the error page and the host will be safe.
>>     
>
> Looking at the nonpaging_map function, this is the right place to check
> for the error page.
>   

Thinking about it again, you are right (for some reason I was thinking
about old KVM code that has already been replaced).
The is_error_page check should be here.

> Joerg Roedel
>
>   


-- 
woof.



* Re: [kvm-devel] KVM: add support for SVM Nested Paging
  2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
                   ` (7 preceding siblings ...)
  2008-02-07 12:47 ` [PATCH 8/8] SVM: add support for Nested Paging Joerg Roedel
@ 2008-02-10 12:03 ` Avi Kivity
  8 siblings, 0 replies; 14+ messages in thread
From: Avi Kivity @ 2008-02-10 12:03 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: kvm-devel, linux-kernel

Joerg Roedel wrote:
> Hi,
>
> here is the improved patchset which adds support for the Nested Paging
> feature of the AMD Barcelona and Phenom processors to KVM. The patch set
> was successfully install- and runtime-tested with various guest
> operating systems (64 bit, 32 bit legacy and 32 bit PAE Linux,
> Windows 64 bit and 32 bit versions too).
>
>   

Applied all, thanks.


-- 
error compiling committee.c: too many arguments to function



* [PATCH 1/8] SVM: move feature detection to hardware setup code
  2008-01-25 20:53 [PATCH][RFC] SVM: Add Support for Nested Paging in AMD Fam16 CPUs Joerg Roedel
@ 2008-01-25 20:53 ` Joerg Roedel
  0 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2008-01-25 20:53 UTC (permalink / raw)
  To: Avi Kivity, kvm-devel, linux-kernel; +Cc: Joerg Roedel

By moving the SVM feature detection from the per-CPU enable code to the
hardware setup code, it runs only once. As an additional advantage, the
feature check is now available earlier in the module setup process.
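
The effect of the move can be pictured as a standalone sketch (the
fake_cpuid_edx() helper and the feature value below are stand-ins for
the kernel's cpuid_edx(SVM_CPUID_FUNC); the real code reads the SVM
feature word from CPUID leaf 0x8000000A):

```c
#include <assert.h>

/* Stand-in for cpuid_edx(SVM_CPUID_FUNC); counts its own calls so we
 * can observe how often the feature word is actually read. */
static int cpuid_reads;
static unsigned int fake_cpuid_edx(void)
{
	++cpuid_reads;
	return 0x1; /* pretend the NPT feature bit is set */
}

static unsigned int svm_features;

/* After the patch: the feature word is read once at module setup... */
static void svm_hardware_setup(void)
{
	svm_features = fake_cpuid_edx();
}

/* ...so the per-CPU enable path no longer touches CPUID at all.
 * Before the patch, the cpuid read lived in this function and ran
 * once per CPU. */
static void svm_hardware_enable(void)
{
	/* per-CPU ASID and GDT setup would go here */
}
```

Since svm_hardware_setup() runs before any vcpu is created, the cached
svm_features value is also available to earlier module-init checks.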

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/svm.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7bdbe16..0c58527 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -302,7 +302,6 @@ static void svm_hardware_enable(void *garbage)
 	svm_data->asid_generation = 1;
 	svm_data->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
 	svm_data->next_asid = svm_data->max_asid + 1;
-	svm_features = cpuid_edx(SVM_CPUID_FUNC);
 
 	asm volatile ("sgdt %0" : "=m"(gdt_descr));
 	gdt = (struct desc_struct *)gdt_descr.address;
@@ -408,6 +407,9 @@ static __init int svm_hardware_setup(void)
 		if (r)
 			goto err_2;
 	}
+
+	svm_features = cpuid_edx(SVM_CPUID_FUNC);
+
 	return 0;
 
 err_2:
-- 
1.5.3.7





end of thread, other threads:[~2008-02-10 12:03 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-02-07 12:47 KVM: add support for SVM Nested Paging Joerg Roedel
2008-02-07 12:47 ` [PATCH 1/8] SVM: move feature detection to hardware setup code Joerg Roedel
2008-02-07 12:47 ` [PATCH 2/8] SVM: add detection of Nested Paging feature Joerg Roedel
2008-02-07 12:47 ` [PATCH 3/8] SVM: add module parameter to disable Nested Paging Joerg Roedel
2008-02-07 12:47 ` [PATCH 4/8] X86: export information about NPT to generic x86 code Joerg Roedel
2008-02-07 12:47 ` [PATCH 5/8] MMU: make the __nonpaging_map function generic Joerg Roedel
2008-02-07 12:47 ` [PATCH 6/8] X86: export the load_pdptrs() function to modules Joerg Roedel
2008-02-07 12:47 ` [PATCH 7/8] MMU: add TDP support to the KVM MMU Joerg Roedel
2008-02-07 13:27   ` [kvm-devel] " Izik Eidus
2008-02-07 13:50     ` Joerg Roedel
2008-02-07 14:04       ` Izik Eidus
2008-02-07 12:47 ` [PATCH 8/8] SVM: add support for Nested Paging Joerg Roedel
2008-02-10 12:03 ` [kvm-devel] KVM: add support for SVM " Avi Kivity
  -- strict thread matches above, loose matches on Subject: below --
2008-01-25 20:53 [PATCH][RFC] SVM: Add Support for Nested Paging in AMD Fam16 CPUs Joerg Roedel
2008-01-25 20:53 ` [PATCH 1/8] SVM: move feature detection to hardware setup code Joerg Roedel
