LKML Archive on lore.kernel.org
* [PATCH v3 0/5] mtrr, mm, x86: Enhance MTRR checks for huge I/O mapping
@ 2015-03-13 21:33 Toshi Kani
  2015-03-13 21:33 ` [PATCH v3 1/5] mm, x86: Document return values of mapping funcs Toshi Kani
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Toshi Kani @ 2015-03-13 21:33 UTC (permalink / raw)
  To: akpm, hpa, tglx, mingo
  Cc: linux-mm, x86, linux-kernel, dave.hansen, Elliott, pebolle

This patchset enhances MTRR checks for the kernel huge I/O mapping,
which was enabled by the patchset below:
  https://lkml.org/lkml/2015/3/3/589

The following functional changes are made in patch 5/5.
 - Allow pud_set_huge() and pmd_set_huge() to create a huge page
   mapping to a range covered by a single MTRR entry of any memory
   type.
 - Log a pr_warn() message when a specified PMD map range spans more
   than a single MTRR entry.  Drivers should make a mapping request
   aligned to a single MTRR entry when the range is covered by MTRRs.

Patch 1/5 addresses other review comments on the mapping funcs for
better code readability.  Patches 2/5 - 4/5 are bug fixes and cleanups
to mtrr_type_lookup().

The patchset is based on the -mm tree.
---
v3:
 - Add patch 3/5 to fix a bug in MTRR state checks.
 - Update patch 4/5 to create separate functions for the fixed and
   variable entries. (Ingo Molnar)

v2:
 - Update change logs and comments per review comments.
   (Ingo Molnar)
 - Add patch 3/4 to clean up mtrr_type_lookup(). (Ingo Molnar)

---
Toshi Kani (5):
 1/5 mm, x86: Document return values of mapping funcs
 2/5 mtrr, x86: Fix MTRR lookup to handle inclusive entry
 3/5 mtrr, x86: Fix MTRR state checks in mtrr_type_lookup()
 4/5 mtrr, x86: Clean up mtrr_type_lookup()
 5/5 mtrr, mm, x86: Enhance MTRR checks for KVA huge page mapping

---
 arch/x86/Kconfig                   |   2 +-
 arch/x86/include/asm/mtrr.h        |   5 +-
 arch/x86/include/uapi/asm/mtrr.h   |   4 +
 arch/x86/kernel/cpu/mtrr/generic.c | 181 ++++++++++++++++++++++++-------------
 arch/x86/mm/pat.c                  |   4 +-
 arch/x86/mm/pgtable.c              |  53 ++++++++---
 6 files changed, 165 insertions(+), 84 deletions(-)

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v3 1/5] mm, x86: Document return values of mapping funcs
  2015-03-13 21:33 [PATCH v3 0/5] mtrr, mm, x86: Enhance MTRR checks for huge I/O mapping Toshi Kani
@ 2015-03-13 21:33 ` Toshi Kani
  2015-03-13 21:33 ` [PATCH v3 2/5] mtrr, x86: Fix MTRR lookup to handle inclusive entry Toshi Kani
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Toshi Kani @ 2015-03-13 21:33 UTC (permalink / raw)
  To: akpm, hpa, tglx, mingo
  Cc: linux-mm, x86, linux-kernel, dave.hansen, Elliott, pebolle, Toshi Kani

Document the return values of the KVA mapping functions,
pud_set_huge(), pmd_set_huge(), pud_clear_huge() and
pmd_clear_huge().

Simplify the conditions to select HAVE_ARCH_HUGE_VMAP
in the Kconfig, since X86_PAE depends on X86_32.

There is no functional change in this patch.
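The Kconfig condition change relies on the dependency stated above: X86_PAE
can only be enabled on X86_32, so dropping the explicit X86_32 test does not
change which configurations select HAVE_ARCH_HUGE_VMAP.  A throwaway
equivalence sketch (plain C, not kernel code; the function names are
invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Old condition: X86_64 || (X86_32 && X86_PAE) */
static bool huge_vmap_old(bool x86_64, bool x86_32, bool x86_pae)
{
	return x86_64 || (x86_32 && x86_pae);
}

/* New condition: X86_64 || X86_PAE.  Equivalent to the old one for
 * every reachable config, because Kconfig makes X86_PAE depend on
 * X86_32 (x86_pae implies x86_32). */
static bool huge_vmap_new(bool x86_64, bool x86_32, bool x86_pae)
{
	(void)x86_32;		/* no longer consulted */
	return x86_64 || x86_pae;
}
```

The two differ only for the config (x86_pae && !x86_32), which Kconfig
cannot produce.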

Signed-off-by: Toshi Kani <toshi.kani@hp.com>
---
 arch/x86/Kconfig      |    2 +-
 arch/x86/mm/pgtable.c |   36 ++++++++++++++++++++++++++++--------
 2 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index c22526d..3c79a47 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -99,7 +99,7 @@ config X86
 	select IRQ_FORCED_THREADING
 	select HAVE_BPF_JIT if X86_64
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
-	select HAVE_ARCH_HUGE_VMAP if X86_64 || (X86_32 && X86_PAE)
+	select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
 	select ARCH_HAS_SG_CHAIN
 	select CLKEVT_I8253
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 0b97d2c..4891fa1 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -563,14 +563,19 @@ void native_set_fixmap(enum fixed_addresses idx, phys_addr_t phys,
 }
 
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+/**
+ * pud_set_huge - setup kernel PUD mapping
+ *
+ * MTRR can override PAT memory types with 4KB granularity.  Therefore,
+ * it does not set up a huge page when the range is covered by a non-WB
+ * type of MTRR.  0xFF indicates that MTRRs are disabled.
+ *
+ * Return 1 on success, and 0 when no PUD was set.
+ */
 int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
 {
 	u8 mtrr;
 
-	/*
-	 * Do not use a huge page when the range is covered by non-WB type
-	 * of MTRRs.
-	 */
 	mtrr = mtrr_type_lookup(addr, addr + PUD_SIZE);
 	if ((mtrr != MTRR_TYPE_WRBACK) && (mtrr != 0xFF))
 		return 0;
@@ -584,14 +589,19 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
 	return 1;
 }
 
+/**
+ * pmd_set_huge - setup kernel PMD mapping
+ *
+ * MTRR can override PAT memory types with 4KB granularity.  Therefore,
+ * it does not set up a huge page when the range is covered by a non-WB
+ * type of MTRR.  0xFF indicates that MTRRs are disabled.
+ *
+ * Return 1 on success, and 0 when no PMD was set.
+ */
 int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
 {
 	u8 mtrr;
 
-	/*
-	 * Do not use a huge page when the range is covered by non-WB type
-	 * of MTRRs.
-	 */
 	mtrr = mtrr_type_lookup(addr, addr + PMD_SIZE);
 	if ((mtrr != MTRR_TYPE_WRBACK) && (mtrr != 0xFF))
 		return 0;
@@ -605,6 +615,11 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
 	return 1;
 }
 
+/**
+ * pud_clear_huge - clear kernel PUD mapping when it is set
+ *
+ * Return 1 on success, and 0 when no PUD map was found.
+ */
 int pud_clear_huge(pud_t *pud)
 {
 	if (pud_large(*pud)) {
@@ -615,6 +630,11 @@ int pud_clear_huge(pud_t *pud)
 	return 0;
 }
 
+/**
+ * pmd_clear_huge - clear kernel PMD mapping when it is set
+ *
+ * Return 1 on success, and 0 when no PMD map was found.
+ */
 int pmd_clear_huge(pmd_t *pmd)
 {
 	if (pmd_large(*pmd)) {


* [PATCH v3 2/5] mtrr, x86: Fix MTRR lookup to handle inclusive entry
  2015-03-13 21:33 [PATCH v3 0/5] mtrr, mm, x86: Enhance MTRR checks for huge I/O mapping Toshi Kani
  2015-03-13 21:33 ` [PATCH v3 1/5] mm, x86: Document return values of mapping funcs Toshi Kani
@ 2015-03-13 21:33 ` Toshi Kani
  2015-03-16  7:49   ` Ingo Molnar
  2015-03-13 21:33 ` [PATCH v3 3/5] mtrr, x86: Fix MTRR state checks in mtrr_type_lookup() Toshi Kani
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Toshi Kani @ 2015-03-13 21:33 UTC (permalink / raw)
  To: akpm, hpa, tglx, mingo
  Cc: linux-mm, x86, linux-kernel, dave.hansen, Elliott, pebolle, Toshi Kani

When an MTRR entry is inclusive to a requested range, i.e.
the start and end of the request are not within the MTRR
entry range but the range contains the MTRR entry entirely,
__mtrr_type_lookup() ignores such a case because both
start_state and end_state are set to zero.

This patch fixes the issue by adding a new flag, 'inclusive',
to detect the case.  This case is then handled in the same
way as (!start_state && end_state).
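The three-way overlap classification can be sketched outside the kernel as
plain C (the struct and function names here are made up for illustration;
the bit tests mirror the patched __mtrr_type_lookup() logic):

```c
#include <assert.h>
#include <stdint.h>

struct overlap {
	int start_state;	/* request start falls inside the entry */
	int end_state;		/* request end falls inside the entry */
	int inclusive;		/* entry lies wholly inside the request */
};

/* Classify how [start, end] (end inclusive) overlaps an MTRR entry
 * described by base/mask. */
static struct overlap classify(uint64_t start, uint64_t end,
			       uint64_t base, uint64_t mask)
{
	struct overlap o;

	o.start_state = ((start & mask) == (base & mask));
	o.end_state   = ((end & mask) == (base & mask));
	o.inclusive   = (start < base) && (end > base);
	return o;
}
```

For example, with a 1MB entry at base 0x100000 (mask ~0xFFFFF), a request
for 0x0-0x2FFFFF sets neither start_state nor end_state, so before this fix
the entry was silently skipped; the new 'inclusive' flag catches it.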

Signed-off-by: Toshi Kani <toshi.kani@hp.com>
---
 arch/x86/kernel/cpu/mtrr/generic.c |   17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 7d74f7b..a82e370 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -154,7 +154,7 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
 
 	prev_match = 0xFF;
 	for (i = 0; i < num_var_ranges; ++i) {
-		unsigned short start_state, end_state;
+		unsigned short start_state, end_state, inclusive;
 
 		if (!(mtrr_state.var_ranges[i].mask_lo & (1 << 11)))
 			continue;
@@ -166,15 +166,16 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
 
 		start_state = ((start & mask) == (base & mask));
 		end_state = ((end & mask) == (base & mask));
+		inclusive = ((start < base) && (end > base));
 
-		if (start_state != end_state) {
+		if ((start_state != end_state) || inclusive) {
 			/*
 			 * We have start:end spanning across an MTRR.
-			 * We split the region into
-			 * either
-			 * (start:mtrr_end) (mtrr_end:end)
-			 * or
-			 * (start:mtrr_start) (mtrr_start:end)
+			 * We split the region into either
+			 * - start_state:1
+			 *     (start:mtrr_end) (mtrr_end:end)
+			 * - end_state:1 or inclusive:1
+			 *     (start:mtrr_start) (mtrr_start:end)
 			 * depending on kind of overlap.
 			 * Return the type for first region and a pointer to
 			 * the start of second region so that caller will
@@ -195,7 +196,7 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
 			*repeat = 1;
 		}
 
-		if ((start & mask) != (base & mask))
+		if (!start_state)
 			continue;
 
 		curr_match = mtrr_state.var_ranges[i].base_lo & 0xff;


* [PATCH v3 3/5] mtrr, x86: Fix MTRR state checks in mtrr_type_lookup()
  2015-03-13 21:33 [PATCH v3 0/5] mtrr, mm, x86: Enhance MTRR checks for huge I/O mapping Toshi Kani
  2015-03-13 21:33 ` [PATCH v3 1/5] mm, x86: Document return values of mapping funcs Toshi Kani
  2015-03-13 21:33 ` [PATCH v3 2/5] mtrr, x86: Fix MTRR lookup to handle inclusive entry Toshi Kani
@ 2015-03-13 21:33 ` Toshi Kani
  2015-03-16  7:51   ` Ingo Molnar
  2015-03-13 21:33 ` [PATCH v3 4/5] mtrr, x86: Clean up mtrr_type_lookup() Toshi Kani
  2015-03-13 21:33 ` [PATCH v3 5/5] mtrr, mm, x86: Enhance MTRR checks for KVA huge page mapping Toshi Kani
  4 siblings, 1 reply; 13+ messages in thread
From: Toshi Kani @ 2015-03-13 21:33 UTC (permalink / raw)
  To: akpm, hpa, tglx, mingo
  Cc: linux-mm, x86, linux-kernel, dave.hansen, Elliott, pebolle, Toshi Kani

'mtrr_state.enabled' contains FE (fixed MTRRs enabled) and
E (MTRRs enabled) flags in MSR_MTRRdefType.  Intel SDM,
section 11.11.2.1, defines these flags as follows:
 - All MTRRs are disabled when the E flag is clear.
   The FE flag has no effect when the E flag is clear.
 - The default type is enabled when the E flag is set.
 - MTRR variable ranges are enabled when the E flag is set.
 - MTRR fixed ranges are enabled when both E and FE flags
   are set.

MTRR state checks in __mtrr_type_lookup() do not follow the
SDM definitions.  Therefore, this patch fixes the MTRR state
checks according to the SDM.  This patch defines the flags
in mtrr_state.enabled as follows.  print_mtrr_state() is also
updated.
 - FE flag: MTRR_STATE_MTRR_FIXED_ENABLED
 - E  flag: MTRR_STATE_MTRR_ENABLED

Lastly, this patch fixes the 'else if (start < 0x1000000)' check,
which covers a fixed range but has an extra zero in the
address, by changing it to an unconditional 'else'.
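The SDM rules above reduce to two small predicates.  A sketch for
illustration (the bit definitions match this patch; the helper functions
are not kernel code):

```c
#include <assert.h>

/* Bit fields for 'enabled' in struct mtrr_state_type, as defined by
 * this patch (FE and E flags of MSR_MTRRdefType). */
#define MTRR_STATE_MTRR_FIXED_ENABLED	0x01
#define MTRR_STATE_MTRR_ENABLED		0x02

/* Variable ranges (and the default type) need only the E flag. */
static int var_mtrrs_enabled(unsigned char enabled)
{
	return (enabled & MTRR_STATE_MTRR_ENABLED) != 0;
}

/* Fixed ranges need both E and FE; FE alone has no effect. */
static int fixed_mtrrs_enabled(unsigned char enabled)
{
	return (enabled & MTRR_STATE_MTRR_ENABLED) &&
	       (enabled & MTRR_STATE_MTRR_FIXED_ENABLED);
}
```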

Signed-off-by: Toshi Kani <toshi.kani@hp.com>
---
 arch/x86/include/uapi/asm/mtrr.h   |    4 ++++
 arch/x86/kernel/cpu/mtrr/generic.c |   17 +++++++++--------
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/uapi/asm/mtrr.h b/arch/x86/include/uapi/asm/mtrr.h
index d0acb65..66ba88d 100644
--- a/arch/x86/include/uapi/asm/mtrr.h
+++ b/arch/x86/include/uapi/asm/mtrr.h
@@ -88,6 +88,10 @@ struct mtrr_state_type {
 	mtrr_type def_type;
 };
 
+/* Bit fields for enabled in struct mtrr_state_type */
+#define MTRR_STATE_MTRR_FIXED_ENABLED	0x01
+#define MTRR_STATE_MTRR_ENABLED		0x02
+
 #define MTRRphysBase_MSR(reg) (0x200 + 2 * (reg))
 #define MTRRphysMask_MSR(reg) (0x200 + 2 * (reg) + 1)
 
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index a82e370..4bff6db 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -119,14 +119,16 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
 	if (!mtrr_state_set)
 		return 0xFF;
 
-	if (!mtrr_state.enabled)
+	if (!(mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED))
 		return 0xFF;
 
 	/* Make end inclusive end, instead of exclusive */
 	end--;
 
 	/* Look in fixed ranges. Just return the type as per start */
-	if (mtrr_state.have_fixed && (start < 0x100000)) {
+	if ((start < 0x100000) &&
+	    (mtrr_state.have_fixed) &&
+	    (mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED)) {
 		int idx;
 
 		if (start < 0x80000) {
@@ -137,7 +139,7 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
 			idx = 1 * 8;
 			idx += ((start - 0x80000) >> 14);
 			return mtrr_state.fixed_ranges[idx];
-		} else if (start < 0x1000000) {
+		} else {
 			idx = 3 * 8;
 			idx += ((start - 0xC0000) >> 12);
 			return mtrr_state.fixed_ranges[idx];
@@ -149,9 +151,6 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
 	 * Look of multiple ranges matching this address and pick type
 	 * as per MTRR precedence
 	 */
-	if (!(mtrr_state.enabled & 2))
-		return mtrr_state.def_type;
-
 	prev_match = 0xFF;
 	for (i = 0; i < num_var_ranges; ++i) {
 		unsigned short start_state, end_state, inclusive;
@@ -348,7 +347,9 @@ static void __init print_mtrr_state(void)
 		 mtrr_attrib_to_str(mtrr_state.def_type));
 	if (mtrr_state.have_fixed) {
 		pr_debug("MTRR fixed ranges %sabled:\n",
-			 mtrr_state.enabled & 1 ? "en" : "dis");
+			((mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED) &&
+			 (mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED)) ?
+			 "en" : "dis");
 		print_fixed(0x00000, 0x10000, mtrr_state.fixed_ranges + 0);
 		for (i = 0; i < 2; ++i)
 			print_fixed(0x80000 + i * 0x20000, 0x04000,
@@ -361,7 +362,7 @@ static void __init print_mtrr_state(void)
 		print_fixed_last();
 	}
 	pr_debug("MTRR variable ranges %sabled:\n",
-		 mtrr_state.enabled & 2 ? "en" : "dis");
+		 mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED ? "en" : "dis");
 	high_width = (__ffs64(size_or_mask) - (32 - PAGE_SHIFT) + 3) / 4;
 
 	for (i = 0; i < num_var_ranges; ++i) {


* [PATCH v3 4/5] mtrr, x86: Clean up mtrr_type_lookup()
  2015-03-13 21:33 [PATCH v3 0/5] mtrr, mm, x86: Enhance MTRR checks for huge I/O mapping Toshi Kani
                   ` (2 preceding siblings ...)
  2015-03-13 21:33 ` [PATCH v3 3/5] mtrr, x86: Fix MTRR state checks in mtrr_type_lookup() Toshi Kani
@ 2015-03-13 21:33 ` Toshi Kani
  2015-03-16  7:58   ` Ingo Molnar
  2015-03-13 21:33 ` [PATCH v3 5/5] mtrr, mm, x86: Enhance MTRR checks for KVA huge page mapping Toshi Kani
  4 siblings, 1 reply; 13+ messages in thread
From: Toshi Kani @ 2015-03-13 21:33 UTC (permalink / raw)
  To: akpm, hpa, tglx, mingo
  Cc: linux-mm, x86, linux-kernel, dave.hansen, Elliott, pebolle, Toshi Kani

MTRRs contain fixed and variable entries.  mtrr_type_lookup()
may repeatedly call __mtrr_type_lookup() to handle a request
that overlaps with variable entries.  However,
__mtrr_type_lookup() also handles the fixed entries, which
do not have to be repeated.  Therefore, this patch creates
separate functions, mtrr_type_lookup_fixed() and
mtrr_type_lookup_variable(), to handle the fixed and variable
ranges respectively.

The patch also updates the function headers to clarify the
return values and output argument.  It updates comments to
clarify that the repeating is necessary to handle overlaps
with the default type, since overlaps with multiple entries
alone can be handled without such repeating.

There is no functional change in this patch.
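The fixed-range index arithmetic that moves into mtrr_type_lookup_fixed()
can be sketched on its own (a standalone illustration, not the kernel
function): 8 x 64KB entries cover 0x0-0x7FFFF, 16 x 16KB entries cover
0x80000-0xBFFFF, and 64 x 4KB entries cover 0xC0000-0xFFFFF, 88 entries
in all.

```c
#include <assert.h>
#include <stdint.h>

/* Map a physical address below 1MB to its fixed_ranges[] index,
 * mirroring the arithmetic in mtrr_type_lookup_fixed(). */
static int fixed_range_index(uint64_t start)
{
	if (start >= 0x100000)
		return -1;			/* not a fixed range */
	if (start < 0x80000)			/* 8 x 64KB ranges */
		return start >> 16;
	if (start < 0xC0000)			/* 16 x 16KB ranges */
		return 1 * 8 + ((start - 0x80000) >> 14);
	return 3 * 8 + ((start - 0xC0000) >> 12); /* 64 x 4KB ranges */
}
```

Since the answer depends only on 'start', the fixed lookup never needs the
repeat machinery, which is why the split into separate functions works.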

Signed-off-by: Toshi Kani <toshi.kani@hp.com>
---
 arch/x86/kernel/cpu/mtrr/generic.c |  132 ++++++++++++++++++++++--------------
 1 file changed, 81 insertions(+), 51 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 4bff6db..271b19c 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -102,55 +102,64 @@ static int check_type_overlap(u8 *prev, u8 *curr)
 	return 0;
 }
 
-/*
- * Error/Semi-error returns:
- * 0xFF - when MTRR is not enabled
- * *repeat == 1 implies [start:end] spanned across MTRR range and type returned
- *		corresponds only to [start:*partial_end].
- *		Caller has to lookup again for [*partial_end:end].
+/**
+ * mtrr_type_lookup_fixed - look up memory type in MTRR fixed entries
+ *
+ * Return Values:
+ * memory type - Matched memory type
+ * 0xFF - Unmatched or fixed entries are disabled
  */
-static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
+static u8 mtrr_type_lookup_fixed(u64 start, u64 end)
+{
+	int idx;
+
+	if (start >= 0x100000)
+		return 0xFF;
+
+	if (!(mtrr_state.have_fixed) ||
+	    !(mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED))
+		return 0xFF;
+
+	if (start < 0x80000) {		/* 0x0 - 0x7FFFF */
+		idx = 0;
+		idx += (start >> 16);
+		return mtrr_state.fixed_ranges[idx];
+
+	} else if (start < 0xC0000) {	/* 0x80000 - 0xBFFFF */
+		idx = 1 * 8;
+		idx += ((start - 0x80000) >> 14);
+		return mtrr_state.fixed_ranges[idx];
+	}
+
+	/* 0xC0000 - 0xFFFFF */
+	idx = 3 * 8;
+	idx += ((start - 0xC0000) >> 12);
+	return mtrr_state.fixed_ranges[idx];
+}
+
+/**
+ * mtrr_type_lookup_variable - look up memory type in MTRR variable entries
+ *
+ * Return Value:
+ * memory type - Matched memory type or the default memory type (unmatched)
+ *
+ * Output Argument:
+ * repeat - Set to 1 when [start:end] spanned across MTRR range and type
+ *	    returned corresponds only to [start:*partial_end].  Caller has
+ *	    to lookup again for [*partial_end:end].
+ */
+static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
+				    int *repeat)
 {
 	int i;
 	u64 base, mask;
 	u8 prev_match, curr_match;
 
 	*repeat = 0;
-	if (!mtrr_state_set)
-		return 0xFF;
-
-	if (!(mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED))
-		return 0xFF;
 
 	/* Make end inclusive end, instead of exclusive */
 	end--;
 
-	/* Look in fixed ranges. Just return the type as per start */
-	if ((start < 0x100000) &&
-	    (mtrr_state.have_fixed) &&
-	    (mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED)) {
-		int idx;
-
-		if (start < 0x80000) {
-			idx = 0;
-			idx += (start >> 16);
-			return mtrr_state.fixed_ranges[idx];
-		} else if (start < 0xC0000) {
-			idx = 1 * 8;
-			idx += ((start - 0x80000) >> 14);
-			return mtrr_state.fixed_ranges[idx];
-		} else {
-			idx = 3 * 8;
-			idx += ((start - 0xC0000) >> 12);
-			return mtrr_state.fixed_ranges[idx];
-		}
-	}
-
-	/*
-	 * Look in variable ranges
-	 * Look of multiple ranges matching this address and pick type
-	 * as per MTRR precedence
-	 */
 	prev_match = 0xFF;
 	for (i = 0; i < num_var_ranges; ++i) {
 		unsigned short start_state, end_state, inclusive;
@@ -179,7 +188,8 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
 			 * Return the type for first region and a pointer to
 			 * the start of second region so that caller will
 			 * lookup again on the second region.
-			 * Note: This way we handle multiple overlaps as well.
+			 * Note: This way we handle overlaps with multiple
+			 * entries and the default type properly.
 			 */
 			if (start_state)
 				*partial_end = base + get_mtrr_size(mask);
@@ -208,21 +218,18 @@ static u8 __mtrr_type_lookup(u64 start, u64 end, u64 *partial_end, int *repeat)
 			return curr_match;
 	}
 
-	if (mtrr_tom2) {
-		if (start >= (1ULL<<32) && (end < mtrr_tom2))
-			return MTRR_TYPE_WRBACK;
-	}
-
 	if (prev_match != 0xFF)
 		return prev_match;
 
 	return mtrr_state.def_type;
 }
 
-/*
- * Returns the effective MTRR type for the region
- * Error return:
- * 0xFF - when MTRR is not enabled
+/**
+ * mtrr_type_lookup - look up memory type in MTRR
+ *
+ * Return Values:
+ * memory type - The effective MTRR type for the region
+ * 0xFF - MTRR is disabled
  */
 u8 mtrr_type_lookup(u64 start, u64 end)
 {
@@ -230,22 +237,45 @@ u8 mtrr_type_lookup(u64 start, u64 end)
 	int repeat;
 	u64 partial_end;
 
-	type = __mtrr_type_lookup(start, end, &partial_end, &repeat);
+	if (!mtrr_state_set)
+		return 0xFF;
+
+	if (!(mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED))
+		return 0xFF;
+
+	/*
+	 * Look up the fixed ranges first, which take priority over
+	 * the variable ranges.
+	 */
+	type = mtrr_type_lookup_fixed(start, end);
+	if (type != 0xFF)
+		return type;
+
+	/*
+	 * Look up the variable ranges.  Look for multiple ranges matching
+	 * this address and pick type as per MTRR precedence.
+	 */
+	type = mtrr_type_lookup_variable(start, end, &partial_end, &repeat);
 
 	/*
 	 * Common path is with repeat = 0.
 	 * However, we can have cases where [start:end] spans across some
-	 * MTRR range. Do repeated lookups for that case here.
+	 * MTRR ranges and/or the default type.  Do repeated lookups for
+	 * that case here.
 	 */
 	while (repeat) {
 		prev_type = type;
 		start = partial_end;
-		type = __mtrr_type_lookup(start, end, &partial_end, &repeat);
+		type = mtrr_type_lookup_variable(start, end, &partial_end,
+						 &repeat);
 
 		if (check_type_overlap(&prev_type, &type))
 			return type;
 	}
 
+	if (mtrr_tom2 && (start >= (1ULL<<32)) && (end < mtrr_tom2))
+		return MTRR_TYPE_WRBACK;
+
 	return type;
 }
 


* [PATCH v3 5/5] mtrr, mm, x86: Enhance MTRR checks for KVA huge page mapping
  2015-03-13 21:33 [PATCH v3 0/5] mtrr, mm, x86: Enhance MTRR checks for huge I/O mapping Toshi Kani
                   ` (3 preceding siblings ...)
  2015-03-13 21:33 ` [PATCH v3 4/5] mtrr, x86: Clean up mtrr_type_lookup() Toshi Kani
@ 2015-03-13 21:33 ` Toshi Kani
  4 siblings, 0 replies; 13+ messages in thread
From: Toshi Kani @ 2015-03-13 21:33 UTC (permalink / raw)
  To: akpm, hpa, tglx, mingo
  Cc: linux-mm, x86, linux-kernel, dave.hansen, Elliott, pebolle, Toshi Kani

This patch adds an additional output argument, 'uniform', to
mtrr_type_lookup(), which is set to 1 when a given range is
covered uniformly by MTRRs, i.e. the range is fully covered
by a single MTRR entry or the default type.

pud_set_huge() and pmd_set_huge() are changed to check the
new 'uniform' flag to see if it is safe to create a huge page
mapping to the range.  This allows them to create a huge page
mapping to a range covered by a single MTRR entry of any
memory type.  It also detects a non-optimal request properly.
They continue to check with the WB type since the WB type has
no effect even if a request spans multiple MTRR entries.

pmd_set_huge() logs a warning message for a non-optimal request
so that driver writers will be aware of such cases.  Drivers
should make a mapping request aligned to a single MTRR entry
when the range is covered by MTRRs.
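The resulting go/no-go decision in pud_set_huge()/pmd_set_huge() boils down
to a two-input check, sketched here (MTRR_TYPE_WRBACK is 6 per the MTRR
type definitions; the helper name is made up for illustration):

```c
#include <assert.h>

#define MTRR_TYPE_WRBACK 6

/* A huge page is allowed when MTRR covers the range uniformly (a single
 * entry or the default type of any memory type), or when the type is WB,
 * since a WB MTRR never overrides the PAT type even across entries. */
static int huge_page_ok(unsigned char type, unsigned char uniform)
{
	return uniform || (type == MTRR_TYPE_WRBACK);
}
```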

Signed-off-by: Toshi Kani <toshi.kani@hp.com>
---
 arch/x86/include/asm/mtrr.h        |    5 +++--
 arch/x86/kernel/cpu/mtrr/generic.c |   35 +++++++++++++++++++++++++++--------
 arch/x86/mm/pat.c                  |    4 ++--
 arch/x86/mm/pgtable.c              |   25 +++++++++++++++----------
 4 files changed, 47 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index f768f62..5b4d467 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -31,7 +31,7 @@
  * arch_phys_wc_add and arch_phys_wc_del.
  */
 # ifdef CONFIG_MTRR
-extern u8 mtrr_type_lookup(u64 addr, u64 end);
+extern u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform);
 extern void mtrr_save_fixed_ranges(void *);
 extern void mtrr_save_state(void);
 extern int mtrr_add(unsigned long base, unsigned long size,
@@ -50,11 +50,12 @@ extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 extern int amd_special_default_mtrr(void);
 extern int phys_wc_to_mtrr_index(int handle);
 #  else
-static inline u8 mtrr_type_lookup(u64 addr, u64 end)
+static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
 {
 	/*
 	 * Return no-MTRRs:
 	 */
+	*uniform = 1;
 	return 0xff;
 }
 #define mtrr_save_fixed_ranges(arg) do {} while (0)
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 271b19c..6d6a540 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -143,19 +143,22 @@ static u8 mtrr_type_lookup_fixed(u64 start, u64 end)
  * Return Value:
  * memory type - Matched memory type or the default memory type (unmatched)
  *
- * Output Argument:
+ * Output Arguments:
  * repeat - Set to 1 when [start:end] spanned across MTRR range and type
  *	    returned corresponds only to [start:*partial_end].  Caller has
  *	    to lookup again for [*partial_end:end].
+ * uniform - Set to 1 when MTRR covers the region uniformly, i.e. the region
+ *	     is fully covered by a single MTRR entry or the default type.
  */
 static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
-				    int *repeat)
+				    int *repeat, u8 *uniform)
 {
 	int i;
 	u64 base, mask;
 	u8 prev_match, curr_match;
 
 	*repeat = 0;
+	*uniform = 1;
 
 	/* Make end inclusive end, instead of exclusive */
 	end--;
@@ -203,6 +206,7 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
 
 			end = *partial_end - 1; /* end is inclusive */
 			*repeat = 1;
+			*uniform = 0;
 		}
 
 		if (!start_state)
@@ -214,6 +218,7 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
 			continue;
 		}
 
+		*uniform = 0;
 		if (check_type_overlap(&prev_match, &curr_match))
 			return curr_match;
 	}
@@ -230,13 +235,19 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
  * Return Values:
  * memory type - The effective MTRR type for the region
  * 0xFF - MTRR is disabled
+ *
+ * Output Argument:
+ * uniform - Set to 1 when MTRR covers the region uniformly, i.e. the region
+ *	     is fully covered by a single MTRR entry or the default type.
  */
-u8 mtrr_type_lookup(u64 start, u64 end)
+u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform)
 {
-	u8 type, prev_type;
+	u8 type, prev_type, is_uniform, dummy;
 	int repeat;
 	u64 partial_end;
 
+	*uniform = 1;
+
 	if (!mtrr_state_set)
 		return 0xFF;
 
@@ -248,14 +259,17 @@ u8 mtrr_type_lookup(u64 start, u64 end)
 	 * the variable ranges.
 	 */
 	type = mtrr_type_lookup_fixed(start, end);
-	if (type != 0xFF)
+	if (type != 0xFF) {
+		*uniform = 0;
 		return type;
+	}
 
 	/*
 	 * Look up the variable ranges.  Look for multiple ranges matching
 	 * this address and pick type as per MTRR precedence.
 	 */
-	type = mtrr_type_lookup_variable(start, end, &partial_end, &repeat);
+	type = mtrr_type_lookup_variable(start, end, &partial_end,
+					 &repeat, &is_uniform);
 
 	/*
 	 * Common path is with repeat = 0.
@@ -266,16 +280,21 @@ u8 mtrr_type_lookup(u64 start, u64 end)
 	while (repeat) {
 		prev_type = type;
 		start = partial_end;
+		is_uniform = 0;
+
 		type = mtrr_type_lookup_variable(start, end, &partial_end,
-						 &repeat);
+						 &repeat, &dummy);
 
-		if (check_type_overlap(&prev_type, &type))
+		if (check_type_overlap(&prev_type, &type)) {
+			*uniform = 0;
 			return type;
+		}
 	}
 
 	if (mtrr_tom2 && (start >= (1ULL<<32)) && (end < mtrr_tom2))
 		return MTRR_TYPE_WRBACK;
 
+	*uniform = is_uniform;
 	return type;
 }
 
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 35af677..372ad42 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -267,9 +267,9 @@ static unsigned long pat_x_mtrr_type(u64 start, u64 end,
 	 * request is for WB.
 	 */
 	if (req_type == _PAGE_CACHE_MODE_WB) {
-		u8 mtrr_type;
+		u8 mtrr_type, uniform;
 
-		mtrr_type = mtrr_type_lookup(start, end);
+		mtrr_type = mtrr_type_lookup(start, end, &uniform);
 		if (mtrr_type != MTRR_TYPE_WRBACK)
 			return _PAGE_CACHE_MODE_UC_MINUS;
 
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 4891fa1..3d6edea 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -567,17 +567,18 @@ void native_set_fixmap(enum fixed_addresses idx, phys_addr_t phys,
  * pud_set_huge - setup kernel PUD mapping
  *
  * MTRR can override PAT memory types with 4KB granularity.  Therefore,
- * it does not set up a huge page when the range is covered by a non-WB
- * type of MTRR.  0xFF indicates that MTRRs are disabled.
+ * it only sets up a huge page when the range is mapped uniformly by MTRR
+ * (i.e. the range is fully covered by a single MTRR entry or the default
+ * type) or the MTRR memory type is WB.
  *
  * Return 1 on success, and 0 when no PUD was set.
  */
 int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
 {
-	u8 mtrr;
+	u8 mtrr, uniform;
 
-	mtrr = mtrr_type_lookup(addr, addr + PUD_SIZE);
-	if ((mtrr != MTRR_TYPE_WRBACK) && (mtrr != 0xFF))
+	mtrr = mtrr_type_lookup(addr, addr + PUD_SIZE, &uniform);
+	if ((!uniform) && (mtrr != MTRR_TYPE_WRBACK))
 		return 0;
 
 	prot = pgprot_4k_2_large(prot);
@@ -593,18 +594,22 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
  * pmd_set_huge - setup kernel PMD mapping
  *
  * MTRR can override PAT memory types with 4KB granularity.  Therefore,
- * it does not set up a huge page when the range is covered by a non-WB
- * type of MTRR.  0xFF indicates that MTRRs are disabled.
+ * it only sets up a huge page when the range is mapped uniformly by MTRR
+ * (i.e. the range is fully covered by a single MTRR entry or the default
+ * type) or the MTRR memory type is WB.
  *
  * Return 1 on success, and 0 when no PMD was set.
  */
 int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
 {
-	u8 mtrr;
+	u8 mtrr, uniform;
 
-	mtrr = mtrr_type_lookup(addr, addr + PMD_SIZE);
-	if ((mtrr != MTRR_TYPE_WRBACK) && (mtrr != 0xFF))
+	mtrr = mtrr_type_lookup(addr, addr + PMD_SIZE, &uniform);
+	if ((!uniform) && (mtrr != MTRR_TYPE_WRBACK)) {
+		pr_warn("pmd_set_huge: requesting [mem %#010llx-%#010llx], which spans more than a single MTRR entry\n",
+				addr, addr + PMD_SIZE);
 		return 0;
+	}
 
 	prot = pgprot_4k_2_large(prot);
 


* Re: [PATCH v3 2/5] mtrr, x86: Fix MTRR lookup to handle inclusive entry
  2015-03-13 21:33 ` [PATCH v3 2/5] mtrr, x86: Fix MTRR lookup to handle inclusive entry Toshi Kani
@ 2015-03-16  7:49   ` Ingo Molnar
  2015-03-16 21:03     ` Kani, Toshimitsu
  0 siblings, 1 reply; 13+ messages in thread
From: Ingo Molnar @ 2015-03-16  7:49 UTC (permalink / raw)
  To: Toshi Kani
  Cc: akpm, hpa, tglx, mingo, linux-mm, x86, linux-kernel, dave.hansen,
	Elliott, pebolle


* Toshi Kani <toshi.kani@hp.com> wrote:

> When an MTRR entry is inclusive to a requested range, i.e.
> the start and end of the request are not within the MTRR
> entry range but the range contains the MTRR entry entirely,
> __mtrr_type_lookup() ignores such a case because both
> start_state and end_state are set to zero.
> 
> This patch fixes the issue by adding a new flag, 'inclusive',
> to detect the case.  This case is then handled in the same
> way as (!start_state && end_state).

It would be nice to discuss the high level effects of this fix in the 
changelog: i.e. what (presumably bad thing) happened before the 
change, what will happen after the change? What did users experience 
before the patch, and what will users experience after the patch?

Thanks,

	Ingo


* Re: [PATCH v3 3/5] mtrr, x86: Fix MTRR state checks in mtrr_type_lookup()
  2015-03-13 21:33 ` [PATCH v3 3/5] mtrr, x86: Fix MTRR state checks in mtrr_type_lookup() Toshi Kani
@ 2015-03-16  7:51   ` Ingo Molnar
  2015-03-16 21:08     ` Kani, Toshimitsu
  0 siblings, 1 reply; 13+ messages in thread
From: Ingo Molnar @ 2015-03-16  7:51 UTC (permalink / raw)
  To: Toshi Kani
  Cc: akpm, hpa, tglx, mingo, linux-mm, x86, linux-kernel, dave.hansen,
	Elliott, pebolle


* Toshi Kani <toshi.kani@hp.com> wrote:

> 'mtrr_state.enabled' contains FE (fixed MTRRs enabled) and
> E (MTRRs enabled) flags in MSR_MTRRdefType.  Intel SDM,
> section 11.11.2.1, defines these flags as follows:
>  - All MTRRs are disabled when the E flag is clear.
>    The FE flag has no effect when the E flag is clear.
>  - The default type is enabled when the E flag is set.
>  - MTRR variable ranges are enabled when the E flag is set.
>  - MTRR fixed ranges are enabled when both E and FE flags
>    are set.
> 
> MTRR state checks in __mtrr_type_lookup() do not follow the
> SDM definitions.  Therefore, this patch fixes the MTRR state
> checks according to the SDM.  This patch defines the flags
> in mtrr_state.enabled as follows.  print_mtrr_state() is also
> updated.
>  - FE flag: MTRR_STATE_MTRR_FIXED_ENABLED
>  - E  flag: MTRR_STATE_MTRR_ENABLED
> 
> Lastly, this patch fixes the 'else if (start < 0x1000000)',
> which checks a fixed range but has an extra-zero in the
> address, to 'else' with no condition.
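The SDM rules quoted above reduce to roughly this decision logic (a sketch; the flag bit values shown are assumptions for illustration, not necessarily the kernel's):

```c
#include <stdbool.h>
#include <stdint.h>

/* Flag names from the patch; assumed bit positions after the FE and E
 * bits of MSR_MTRRdefType are shifted down into mtrr_state.enabled. */
#define MTRR_STATE_MTRR_FIXED_ENABLED	0x01	/* FE */
#define MTRR_STATE_MTRR_ENABLED		0x02	/* E  */

/* SDM 11.11.2.1: E clear disables all MTRRs (FE has no effect then);
 * variable ranges need E; fixed ranges need both E and FE. */
static bool variable_ranges_active(uint8_t enabled)
{
	return enabled & MTRR_STATE_MTRR_ENABLED;
}

static bool fixed_ranges_active(uint8_t enabled)
{
	return (enabled & MTRR_STATE_MTRR_ENABLED) &&
	       (enabled & MTRR_STATE_MTRR_FIXED_ENABLED);
}
```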

Firstly, this does multiple bug fixes in a single patch, which is a 
no-no: please split it up into separate patches.

Secondly, please also outline the differences between the old code and 
the new code - don't just list the SDM logic and state that we are 
updating to it.

Thanks,

	Ingo


* Re: [PATCH v3 4/5] mtrr, x86: Clean up mtrr_type_lookup()
  2015-03-13 21:33 ` [PATCH v3 4/5] mtrr, x86: Clean up mtrr_type_lookup() Toshi Kani
@ 2015-03-16  7:58   ` Ingo Molnar
  2015-03-16 21:24     ` Kani, Toshimitsu
  0 siblings, 1 reply; 13+ messages in thread
From: Ingo Molnar @ 2015-03-16  7:58 UTC (permalink / raw)
  To: Toshi Kani
  Cc: akpm, hpa, tglx, mingo, linux-mm, x86, linux-kernel, dave.hansen,
	Elliott, pebolle


* Toshi Kani <toshi.kani@hp.com> wrote:

> MTRRs contain fixed and variable entries.  mtrr_type_lookup()
> may repeatedly call __mtrr_type_lookup() to handle a request
> that overlaps with variable entries.  However,
> __mtrr_type_lookup() also handles the fixed entries, which
> do not have to be repeated.  Therefore, this patch creates
> separate functions, mtrr_type_lookup_fixed() and
> mtrr_type_lookup_variable(), to handle the fixed and variable
> ranges respectively.
> 
> The patch also updates the function headers to clarify the
> return values and output argument.  It updates comments to
> clarify that the repeating is necessary to handle overlaps
> with the default type, since overlaps with multiple entries
> alone can be handled without such repeating.
> 
> There is no functional change in this patch.
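The repeat logic described above can be modeled with a small sketch (the names and the toy single-step lookup below are hypothetical, chosen only to show why the caller must iterate):

```c
#include <stdint.h>

#define MTRR_TYPE_UNCACHABLE	0
#define MTRR_TYPE_WRBACK	6
#define MTRR_TYPE_INVALID	0xFF	/* hypothetical name for the 0xFF sentinel */

/* Toy single-step lookup: models one write-back MTRR covering
 * [0x0, 0x100000) on top of an uncachable default type.  It returns
 * the type of the longest uniform prefix of [start, end) and reports
 * in *partial_end how far it got. */
static uint8_t lookup_one(uint64_t start, uint64_t end, uint64_t *partial_end)
{
	const uint64_t limit = 0x100000;

	if (start < limit) {
		*partial_end = (end < limit) ? end : limit;
		return MTRR_TYPE_WRBACK;
	}
	*partial_end = end;
	return MTRR_TYPE_UNCACHABLE;	/* default type */
}

/* The repeat loop: keep looking up sub-ranges until the whole request
 * is covered; if two sub-ranges disagree, the range has no single
 * effective type. */
static uint8_t lookup_whole(uint64_t start, uint64_t end)
{
	uint64_t partial_end;
	uint8_t type = lookup_one(start, end, &partial_end);

	while (partial_end < end) {
		uint8_t t = lookup_one(partial_end, end, &partial_end);
		if (t != type)
			return MTRR_TYPE_INVALID;
	}
	return type;
}
```

Since the fixed ranges cover one contiguous, fully-described region below 1MB, only the variable-range half needs this iteration, which is what motivates the split.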

Nice cleanup!

I also suggest adding a small table to the comments before the 
function, that lists the fixed purpose MTRRs and their address ranges 
- to make it more obvious what the magic hexadecimal constants within 
the code are doing.
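A sketch of what such a table, and the index math behind the magic constants, might look like, based on the SDM's fixed-range layout (the helper name is hypothetical):

```c
#include <stdint.h>

/* Fixed-range MTRRs (Intel SDM Vol. 3, Section 11.11.2.2):
 *   MTRRfix64K_00000                 0x00000-0x7FFFF   8 x 64K
 *   MTRRfix16K_80000/_A0000          0x80000-0xBFFFF  16 x 16K
 *   MTRRfix4K_C0000 .. MTRRfix4K_F8000
 *                                    0xC0000-0xFFFFF  64 x 4K
 * Together: 88 sub-ranges covering the first 1MB, hence the
 * 0x100000 cutoff in the code.
 */
static int fixed_mtrr_index(uint64_t addr)
{
	if (addr < 0x80000)		/* 64K sub-ranges: 0..7   */
		return addr >> 16;
	if (addr < 0xC0000)		/* 16K sub-ranges: 8..23  */
		return 8 + ((addr - 0x80000) >> 14);
	if (addr < 0x100000)		/* 4K sub-ranges: 24..87  */
		return 24 + ((addr - 0xC0000) >> 12);
	return -1;			/* above 1MB: not fixed-range */
}
```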

> +static u8 mtrr_type_lookup_fixed(u64 start, u64 end)
> +{
> +	int idx;
> +
> +	if (start >= 0x100000)
> +		return 0xFF;

Btw., as a separate cleanup patch, we should probably also change 
'0xFF' (which is sometimes written as 0xff) to be some sufficiently 
named constant, and explain its usage somewhere?

> +	if (!(mtrr_state.have_fixed) ||
> +	    !(mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED))

Btw., can MTRR_STATE_MTRR_FIXED_ENABLED ever be set in 
mtrr_state.enabled, without mtrr_state.have_fixed being set?

AFAICS get_mtrr_state() will only ever fill in mtrr_state with fixed 
MTRRs if mtrr_state.have_fixed != 0 - but I might be mis-reading the 
(rather convoluted) flow of code ...

Thanks,

	Ingo


* Re: [PATCH v3 2/5] mtrr, x86: Fix MTRR lookup to handle inclusive entry
  2015-03-16  7:49   ` Ingo Molnar
@ 2015-03-16 21:03     ` Kani, Toshimitsu
  0 siblings, 0 replies; 13+ messages in thread
From: Kani, Toshimitsu @ 2015-03-16 21:03 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: akpm, hpa, tglx, mingo, linux-mm, x86, linux-kernel, dave.hansen,
	Elliott, Robert (Server Storage),
	pebolle

> On Mar 16, 2015, at 3:50 AM, Ingo Molnar <mingo@kernel.org> wrote:
> 
> 
> * Toshi Kani <toshi.kani@hp.com> wrote:
> 
>> When an MTRR entry is inclusive to a requested range, i.e.
>> the start and end of the request are not within the MTRR
>> entry range but the range contains the MTRR entry entirely,
>> __mtrr_type_lookup() ignores such a case because both
>> start_state and end_state are set to zero.
>> 
>> This patch fixes the issue by adding a new flag, 'inclusive',
>> to detect the case.  This case is then handled in the same
>> way as (!start_state && end_state).
> 
> It would be nice to discuss the high level effects of this fix in the 
> changelog: i.e. what (presumably bad thing) happened before the 
> change, what will happen after the change? What did users experience 
> before the patch, and what will users experience after the patch?

The original code uses this function to track memory attributes of
ioremap'd ranges in order to avoid any aliasing.  So, ignoring MTRR
entries leads to a tracked memory attribute that differs from the
effective memory attribute.  I will document more details in the
next version.

I will update the patchset next week.

Thanks,
-Toshi


* Re: [PATCH v3 3/5] mtrr, x86: Fix MTRR state checks in mtrr_type_lookup()
  2015-03-16  7:51   ` Ingo Molnar
@ 2015-03-16 21:08     ` Kani, Toshimitsu
  0 siblings, 0 replies; 13+ messages in thread
From: Kani, Toshimitsu @ 2015-03-16 21:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: akpm, hpa, tglx, mingo, linux-mm, x86, linux-kernel, dave.hansen,
	Elliott, Robert (Server Storage),
	pebolle

> On Mar 16, 2015, at 3:51 AM, Ingo Molnar <mingo@kernel.org> wrote:
> 
> 
> * Toshi Kani <toshi.kani@hp.com> wrote:
> 
>> 'mtrr_state.enabled' contains FE (fixed MTRRs enabled) and
>> E (MTRRs enabled) flags in MSR_MTRRdefType.  Intel SDM,
>> section 11.11.2.1, defines these flags as follows:
>> - All MTRRs are disabled when the E flag is clear.
>>   The FE flag has no effect when the E flag is clear.
>> - The default type is enabled when the E flag is set.
>> - MTRR variable ranges are enabled when the E flag is set.
>> - MTRR fixed ranges are enabled when both E and FE flags
>>   are set.
>> 
>> MTRR state checks in __mtrr_type_lookup() do not follow the
>> SDM definitions.  Therefore, this patch fixes the MTRR state
>> checks according to the SDM.  This patch defines the flags
>> in mtrr_state.enabled as follows.  print_mtrr_state() is also
>> updated.
>> - FE flag: MTRR_STATE_MTRR_FIXED_ENABLED
>> - E  flag: MTRR_STATE_MTRR_ENABLED
>> 
>> Lastly, this patch fixes the 'else if (start < 0x1000000)',
>> which checks a fixed range but has an extra-zero in the
>> address, to 'else' with no condition.
> 
> Firstly, this does multiple bug fixes in a single patch, which is a 
> no-no: please split it up into separate patches.

Right.  I will split into two patches.

> Secondly, please also outline the differences between the old code and 
> the new code - don't just list the SDM logic and state that we are 
> updating to it.

Yes, I will update the patch log accordingly.

Thanks,
-Toshi



* Re: [PATCH v3 4/5] mtrr, x86: Clean up mtrr_type_lookup()
  2015-03-16  7:58   ` Ingo Molnar
@ 2015-03-16 21:24     ` Kani, Toshimitsu
  2015-03-23 19:27       ` Toshi Kani
  0 siblings, 1 reply; 13+ messages in thread
From: Kani, Toshimitsu @ 2015-03-16 21:24 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: akpm, hpa, tglx, mingo, linux-mm, x86, linux-kernel, dave.hansen,
	Elliott, Robert (Server Storage),
	pebolle

> On Mar 16, 2015, at 3:58 AM, Ingo Molnar <mingo@kernel.org> wrote:
> 
> 
> * Toshi Kani <toshi.kani@hp.com> wrote:
> 
>> MTRRs contain fixed and variable entries.  mtrr_type_lookup()
>> may repeatedly call __mtrr_type_lookup() to handle a request
>> that overlaps with variable entries.  However,
>> __mtrr_type_lookup() also handles the fixed entries, which
>> do not have to be repeated.  Therefore, this patch creates
>> separate functions, mtrr_type_lookup_fixed() and
>> mtrr_type_lookup_variable(), to handle the fixed and variable
>> ranges respectively.
>> 
>> The patch also updates the function headers to clarify the
>> return values and output argument.  It updates comments to
>> clarify that the repeating is necessary to handle overlaps
>> with the default type, since overlaps with multiple entries
>> alone can be handled without such repeating.
>> 
>> There is no functional change in this patch.
> 
> Nice cleanup!
> 
> I also suggest adding a small table to the comments before the 
> function, that lists the fixed purpose MTRRs and their address ranges 
> - to make it more obvious what the magic hexadecimal constants within 
> the code are doing.

Yes, I will add a table to describe the fixed entries.

>> +static u8 mtrr_type_lookup_fixed(u64 start, u64 end)
>> +{
>> +    int idx;
>> +
>> +    if (start >= 0x100000)
>> +        return 0xFF;
> 
> Btw., as a separate cleanup patch, we should probably also change 
> '0xFF' (which is sometimes written as 0xff) to be some sufficiently 
> named constant, and explain its usage somewhere?

Sounds good.  I will add a separate patch to do so.

>> +    if (!(mtrr_state.have_fixed) ||
>> +        !(mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED))
> 
> Btw., can MTRR_STATE_MTRR_FIXED_ENABLED ever be set in 
> mtrr_state.enabled, without mtrr_state.have_fixed being set?

Yes, I believe the architecture allows the fixed entries to be
disabled while MTRRs are enabled.  I expect that most systems
implement the fixed entries, though.

> AFAICS get_mtrr_state() will only ever fill in mtrr_state with fixed 
> MTRRs if mtrr_state.have_fixed != 0 - but I might be mis-reading the 
> (rather convoluted) flow of code ...

I will check the code next week.

Thanks,
-Toshi


* Re: [PATCH v3 4/5] mtrr, x86: Clean up mtrr_type_lookup()
  2015-03-16 21:24     ` Kani, Toshimitsu
@ 2015-03-23 19:27       ` Toshi Kani
  0 siblings, 0 replies; 13+ messages in thread
From: Toshi Kani @ 2015-03-23 19:27 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: akpm, hpa, tglx, mingo, linux-mm, x86, linux-kernel, dave.hansen,
	Elliott, Robert (Server Storage),
	pebolle

On Mon, 2015-03-16 at 21:24 +0000, Kani, Toshimitsu wrote:
> > On Mar 16, 2015, at 3:58 AM, Ingo Molnar <mingo@kernel.org> wrote:
> >> * Toshi Kani <toshi.kani@hp.com> wrote:
 :
> 
> >> +    if (!(mtrr_state.have_fixed) ||
> >> +        !(mtrr_state.enabled & MTRR_STATE_MTRR_FIXED_ENABLED))
> > 
> > Btw., can MTRR_STATE_MTRR_FIXED_ENABLED ever be set in 
> > mtrr_state.enabled, without mtrr_state.have_fixed being set?
> 
> Yes, I believe the architecture allows the fixed entries to be
> disabled while MTRRs are enabled.  I expect that most systems
> implement the fixed entries, though.

Sorry, I noticed I had mis-read your question before...

No, MTRR_STATE_MTRR_FIXED_ENABLED may not be set without
mtrr_state.have_fixed being set.  mtrr_state.have_fixed indicates
whether the CPU supports MTRR fixed ranges, so they can only be
enabled when the CPU has them.

> > AFAICS get_mtrr_state() will only ever fill in mtrr_state with fixed 
> > MTRRs if mtrr_state.have_fixed != 0 - but I might be mis-reading the 
> > (rather convoluted) flow of code ...
> 
> I will check the code next week.

Yes, you are right that get_mtrr_state() only fills in
mtrr_state.fixed_ranges[] when mtrr_state.have_fixed is set.  This is
because the MSRs containing the fixed ranges are only valid when this
flag is set. 

Thanks,
-Toshi

end of thread, other threads:[~2015-03-23 19:45 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-13 21:33 [PATCH v3 0/5] mtrr, mm, x86: Enhance MTRR checks for huge I/O mapping Toshi Kani
2015-03-13 21:33 ` [PATCH v3 1/5] mm, x86: Document return values of mapping funcs Toshi Kani
2015-03-13 21:33 ` [PATCH v3 2/5] mtrr, x86: Fix MTRR lookup to handle inclusive entry Toshi Kani
2015-03-16  7:49   ` Ingo Molnar
2015-03-16 21:03     ` Kani, Toshimitsu
2015-03-13 21:33 ` [PATCH v3 3/5] mtrr, x86: Fix MTRR state checks in mtrr_type_lookup() Toshi Kani
2015-03-16  7:51   ` Ingo Molnar
2015-03-16 21:08     ` Kani, Toshimitsu
2015-03-13 21:33 ` [PATCH v3 4/5] mtrr, x86: Clean up mtrr_type_lookup() Toshi Kani
2015-03-16  7:58   ` Ingo Molnar
2015-03-16 21:24     ` Kani, Toshimitsu
2015-03-23 19:27       ` Toshi Kani
2015-03-13 21:33 ` [PATCH v3 5/5] mtrr, mm, x86: Enhance MTRR checks for KVA huge page mapping Toshi Kani
