LKML Archive on lore.kernel.org
From: Ingo Molnar <mingo@kernel.org>
To: Yury Norov <yury.norov@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
"H. Peter Anvin" <hpa@zytor.com>, Andi Kleen <ak@linux.intel.com>,
Michal Hocko <mhocko@suse.com>,
Josh Poimboeuf <jpoimboe@redhat.com>,
Dave Hansen <dave.hansen@intel.com>,
Yury Norov <ynorov@marvell.com>,
x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
Date: Thu, 9 May 2019 11:01:31 +0200 [thread overview]
Message-ID: <20190509090131.GA130570@gmail.com> (raw)
In-Reply-To: <20190508204411.13452-1-ynorov@marvell.com>
* Yury Norov <yury.norov@gmail.com> wrote:
> __VIRTUAL_MASK_SHIFT is defined twice to the same value in
> arch/x86/include/asm/page_32_types.h. Fix it.
>
> Signed-off-by: Yury Norov <ynorov@marvell.com>
> ---
> arch/x86/include/asm/page_32_types.h | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
> index 0d5c739eebd7..9bfac5c80d89 100644
> --- a/arch/x86/include/asm/page_32_types.h
> +++ b/arch/x86/include/asm/page_32_types.h
> @@ -28,6 +28,8 @@
> #define MCE_STACK 0
> #define N_EXCEPTION_STACKS 1
>
> +#define __VIRTUAL_MASK_SHIFT 32
> +
> #ifdef CONFIG_X86_PAE
> /*
> * This is beyond the 44 bit limit imposed by the 32bit long pfns,
> @@ -36,11 +38,8 @@
> * The real limit is still 44 bits.
> */
> #define __PHYSICAL_MASK_SHIFT 52
> -#define __VIRTUAL_MASK_SHIFT 32
> -
> #else /* !CONFIG_X86_PAE */
> #define __PHYSICAL_MASK_SHIFT 32
> -#define __VIRTUAL_MASK_SHIFT 32
> #endif /* CONFIG_X86_PAE */
I think it's clearer to see them defined where the physical mask shift is
defined.
How about the patch below? It does away with the weird formatting and
cleans up both the comments and the style of the definition:
/*
* 52 bits on PAE is beyond the 44-bit limit imposed by the
* 32-bit long PFNs, but we need the full mask to make sure
* inverted PROT_NONE entries have all the host bits set
* in a guest. The real limit is still 44 bits.
*/
#ifdef CONFIG_X86_PAE
# define __PHYSICAL_MASK_SHIFT 52
# define __VIRTUAL_MASK_SHIFT 32
#else
# define __PHYSICAL_MASK_SHIFT 32
# define __VIRTUAL_MASK_SHIFT 32
#endif
?
Thanks,
Ingo
===============>
From: Ingo Molnar <mingo@kernel.org>
Date: Thu, 9 May 2019 10:59:44 +0200
Subject: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/x86/include/asm/page_32_types.h | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
index 565ad755c785..009e96d4b6d4 100644
--- a/arch/x86/include/asm/page_32_types.h
+++ b/arch/x86/include/asm/page_32_types.h
@@ -26,20 +26,19 @@
#define N_EXCEPTION_STACKS 1
-#ifdef CONFIG_X86_PAE
/*
- * This is beyond the 44 bit limit imposed by the 32bit long pfns,
- * but we need the full mask to make sure inverted PROT_NONE
- * entries have all the host bits set in a guest.
- * The real limit is still 44 bits.
+ * 52 bits on PAE is beyond the 44-bit limit imposed by the
+ * 32-bit long PFNs, but we need the full mask to make sure
+ * inverted PROT_NONE entries have all the host bits set
+ * in a guest. The real limit is still 44 bits.
*/
-#define __PHYSICAL_MASK_SHIFT 52
-#define __VIRTUAL_MASK_SHIFT 32
-
-#else /* !CONFIG_X86_PAE */
-#define __PHYSICAL_MASK_SHIFT 32
-#define __VIRTUAL_MASK_SHIFT 32
-#endif /* CONFIG_X86_PAE */
+#ifdef CONFIG_X86_PAE
+# define __PHYSICAL_MASK_SHIFT 52
+# define __VIRTUAL_MASK_SHIFT 32
+#else
+# define __PHYSICAL_MASK_SHIFT 32
+# define __VIRTUAL_MASK_SHIFT 32
+#endif
/*
* Kernel image size is limited to 512 MB (see in arch/x86/kernel/head_32.S)
Thread overview: 4+ messages
2019-05-08 20:44 [PATCH] x86: fix double definition for __VIRTUAL_MASK_SHIFT Yury Norov
2019-05-09 9:01 ` Ingo Molnar [this message]
2019-05-09 17:22 ` [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit Yury Norov
2019-05-09 19:20 ` Ingo Molnar