LKML Archive on lore.kernel.org
* [PATCH] x86: fix double definition for __VIRTUAL_MASK_SHIFT
@ 2019-05-08 20:44 Yury Norov
2019-05-09 9:01 ` [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit Ingo Molnar
0 siblings, 1 reply; 4+ messages in thread
From: Yury Norov @ 2019-05-08 20:44 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
Andi Kleen, Michal Hocko, Josh Poimboeuf, Dave Hansen
Cc: Yury Norov, x86, linux-kernel
__VIRTUAL_MASK_SHIFT is defined twice to the same value in
arch/x86/include/asm/page_32_types.h. Fix it.
Signed-off-by: Yury Norov <ynorov@marvell.com>
---
arch/x86/include/asm/page_32_types.h | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
index 0d5c739eebd7..9bfac5c80d89 100644
--- a/arch/x86/include/asm/page_32_types.h
+++ b/arch/x86/include/asm/page_32_types.h
@@ -28,6 +28,8 @@
#define MCE_STACK 0
#define N_EXCEPTION_STACKS 1
+#define __VIRTUAL_MASK_SHIFT 32
+
#ifdef CONFIG_X86_PAE
/*
* This is beyond the 44 bit limit imposed by the 32bit long pfns,
@@ -36,11 +38,8 @@
* The real limit is still 44 bits.
*/
#define __PHYSICAL_MASK_SHIFT 52
-#define __VIRTUAL_MASK_SHIFT 32
-
#else /* !CONFIG_X86_PAE */
#define __PHYSICAL_MASK_SHIFT 32
-#define __VIRTUAL_MASK_SHIFT 32
#endif /* CONFIG_X86_PAE */
/*
--
2.17.1
* [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
2019-05-08 20:44 [PATCH] x86: fix double definition for __VIRTUAL_MASK_SHIFT Yury Norov
@ 2019-05-09 9:01 ` Ingo Molnar
2019-05-09 17:22 ` Yury Norov
0 siblings, 1 reply; 4+ messages in thread
From: Ingo Molnar @ 2019-05-09 9:01 UTC (permalink / raw)
To: Yury Norov
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
Andi Kleen, Michal Hocko, Josh Poimboeuf, Dave Hansen,
Yury Norov, x86, linux-kernel
* Yury Norov <yury.norov@gmail.com> wrote:
> __VIRTUAL_MASK_SHIFT is defined twice to the same value in
> arch/x86/include/asm/page_32_types.h. Fix it.
>
> Signed-off-by: Yury Norov <ynorov@marvell.com>
> ---
> arch/x86/include/asm/page_32_types.h | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
> index 0d5c739eebd7..9bfac5c80d89 100644
> --- a/arch/x86/include/asm/page_32_types.h
> +++ b/arch/x86/include/asm/page_32_types.h
> @@ -28,6 +28,8 @@
> #define MCE_STACK 0
> #define N_EXCEPTION_STACKS 1
>
> +#define __VIRTUAL_MASK_SHIFT 32
> +
> #ifdef CONFIG_X86_PAE
> /*
> * This is beyond the 44 bit limit imposed by the 32bit long pfns,
> @@ -36,11 +38,8 @@
> * The real limit is still 44 bits.
> */
> #define __PHYSICAL_MASK_SHIFT 52
> -#define __VIRTUAL_MASK_SHIFT 32
> -
> #else /* !CONFIG_X86_PAE */
> #define __PHYSICAL_MASK_SHIFT 32
> -#define __VIRTUAL_MASK_SHIFT 32
> #endif /* CONFIG_X86_PAE */
I think it's clearer to see them defined where the physical mask shift is
defined.
How about the patch below? It does away with the weird formatting and
cleans up both the comments and the style of the definition:
/*
* 52 bits on PAE is beyond the 44-bit limit imposed by the
* 32-bit long PFNs, but we need the full mask to make sure
* inverted PROT_NONE entries have all the host bits set
* in a guest. The real limit is still 44 bits.
*/
#ifdef CONFIG_X86_PAE
# define __PHYSICAL_MASK_SHIFT 52
# define __VIRTUAL_MASK_SHIFT 32
#else
# define __PHYSICAL_MASK_SHIFT 32
# define __VIRTUAL_MASK_SHIFT 32
#endif
?
Thanks,
Ingo
===============>
From: Ingo Molnar <mingo@kernel.org>
Date: Thu, 9 May 2019 10:59:44 +0200
Subject: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/x86/include/asm/page_32_types.h | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
index 565ad755c785..009e96d4b6d4 100644
--- a/arch/x86/include/asm/page_32_types.h
+++ b/arch/x86/include/asm/page_32_types.h
@@ -26,20 +26,19 @@
#define N_EXCEPTION_STACKS 1
-#ifdef CONFIG_X86_PAE
/*
- * This is beyond the 44 bit limit imposed by the 32bit long pfns,
- * but we need the full mask to make sure inverted PROT_NONE
- * entries have all the host bits set in a guest.
- * The real limit is still 44 bits.
+ * 52 bits on PAE is beyond the 44-bit limit imposed by the
+ * 32-bit long PFNs, but we need the full mask to make sure
+ * inverted PROT_NONE entries have all the host bits set
+ * in a guest. The real limit is still 44 bits.
*/
-#define __PHYSICAL_MASK_SHIFT 52
-#define __VIRTUAL_MASK_SHIFT 32
-
-#else /* !CONFIG_X86_PAE */
-#define __PHYSICAL_MASK_SHIFT 32
-#define __VIRTUAL_MASK_SHIFT 32
-#endif /* CONFIG_X86_PAE */
+#ifdef CONFIG_X86_PAE
+# define __PHYSICAL_MASK_SHIFT 52
+# define __VIRTUAL_MASK_SHIFT 32
+#else
+# define __PHYSICAL_MASK_SHIFT 32
+# define __VIRTUAL_MASK_SHIFT 32
+#endif
/*
* Kernel image size is limited to 512 MB (see in arch/x86/kernel/head_32.S)
* Re: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
2019-05-09 9:01 ` [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit Ingo Molnar
@ 2019-05-09 17:22 ` Yury Norov
2019-05-09 19:20 ` Ingo Molnar
0 siblings, 1 reply; 4+ messages in thread
From: Yury Norov @ 2019-05-09 17:22 UTC (permalink / raw)
To: Ingo Molnar
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
Andi Kleen, Michal Hocko, Josh Poimboeuf, Dave Hansen,
Yury Norov, x86, linux-kernel
On Thu, May 09, 2019 at 11:01:31AM +0200, Ingo Molnar wrote:
>
> * Yury Norov <yury.norov@gmail.com> wrote:
>
> > __VIRTUAL_MASK_SHIFT is defined twice to the same value in
> > arch/x86/include/asm/page_32_types.h. Fix it.
> >
> > Signed-off-by: Yury Norov <ynorov@marvell.com>
> > ---
> > arch/x86/include/asm/page_32_types.h | 5 ++---
> > 1 file changed, 2 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
> > index 0d5c739eebd7..9bfac5c80d89 100644
> > --- a/arch/x86/include/asm/page_32_types.h
> > +++ b/arch/x86/include/asm/page_32_types.h
> > @@ -28,6 +28,8 @@
> > #define MCE_STACK 0
> > #define N_EXCEPTION_STACKS 1
> >
> > +#define __VIRTUAL_MASK_SHIFT 32
> > +
> > #ifdef CONFIG_X86_PAE
> > /*
> > * This is beyond the 44 bit limit imposed by the 32bit long pfns,
> > @@ -36,11 +38,8 @@
> > * The real limit is still 44 bits.
> > */
> > #define __PHYSICAL_MASK_SHIFT 52
> > -#define __VIRTUAL_MASK_SHIFT 32
> > -
> > #else /* !CONFIG_X86_PAE */
> > #define __PHYSICAL_MASK_SHIFT 32
> > -#define __VIRTUAL_MASK_SHIFT 32
> > #endif /* CONFIG_X86_PAE */
>
> I think it's clearer to see them defined where the physical mask shift is
> defined.
>
> How about the patch below? It does away with the weird formatting and
> cleans up both the comments and the style of the definition:
>
> /*
> * 52 bits on PAE is beyond the 44-bit limit imposed by the
> * 32-bit long PFNs, but we need the full mask to make sure
> * inverted PROT_NONE entries have all the host bits set
> * in a guest. The real limit is still 44 bits.
> */
> #ifdef CONFIG_X86_PAE
> # define __PHYSICAL_MASK_SHIFT 52
> # define __VIRTUAL_MASK_SHIFT 32
> #else
> # define __PHYSICAL_MASK_SHIFT 32
> # define __VIRTUAL_MASK_SHIFT 32
> #endif
>
> ?
My main concern was about the double definition. It pretty much looks
like a bug. But if it's intentional, I'm OK. In the patch below, could
you please add a note to the comment saying that __VIRTUAL_MASK_SHIFT
is defined twice intentionally?
Thanks,
Yury
> Thanks,
>
> Ingo
>
> ===============>
> From: Ingo Molnar <mingo@kernel.org>
> Date: Thu, 9 May 2019 10:59:44 +0200
> Subject: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
> arch/x86/include/asm/page_32_types.h | 23 +++++++++++------------
> 1 file changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
> index 565ad755c785..009e96d4b6d4 100644
> --- a/arch/x86/include/asm/page_32_types.h
> +++ b/arch/x86/include/asm/page_32_types.h
> @@ -26,20 +26,19 @@
>
> #define N_EXCEPTION_STACKS 1
>
> -#ifdef CONFIG_X86_PAE
> /*
> - * This is beyond the 44 bit limit imposed by the 32bit long pfns,
> - * but we need the full mask to make sure inverted PROT_NONE
> - * entries have all the host bits set in a guest.
> - * The real limit is still 44 bits.
> + * 52 bits on PAE is beyond the 44-bit limit imposed by the
> + * 32-bit long PFNs, but we need the full mask to make sure
> + * inverted PROT_NONE entries have all the host bits set
> + * in a guest. The real limit is still 44 bits.
> */
> -#define __PHYSICAL_MASK_SHIFT 52
> -#define __VIRTUAL_MASK_SHIFT 32
> -
> -#else /* !CONFIG_X86_PAE */
> -#define __PHYSICAL_MASK_SHIFT 32
> -#define __VIRTUAL_MASK_SHIFT 32
> -#endif /* CONFIG_X86_PAE */
> +#ifdef CONFIG_X86_PAE
> +# define __PHYSICAL_MASK_SHIFT 52
> +# define __VIRTUAL_MASK_SHIFT 32
> +#else
> +# define __PHYSICAL_MASK_SHIFT 32
> +# define __VIRTUAL_MASK_SHIFT 32
> +#endif
>
> /*
> * Kernel image size is limited to 512 MB (see in arch/x86/kernel/head_32.S)
* Re: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
2019-05-09 17:22 ` Yury Norov
@ 2019-05-09 19:20 ` Ingo Molnar
0 siblings, 0 replies; 4+ messages in thread
From: Ingo Molnar @ 2019-05-09 19:20 UTC (permalink / raw)
To: Yury Norov
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
Andi Kleen, Michal Hocko, Josh Poimboeuf, Dave Hansen,
Yury Norov, x86, linux-kernel
* Yury Norov <yury.norov@gmail.com> wrote:
> On Thu, May 09, 2019 at 11:01:31AM +0200, Ingo Molnar wrote:
> >
> > * Yury Norov <yury.norov@gmail.com> wrote:
> >
> > > __VIRTUAL_MASK_SHIFT is defined twice to the same value in
> > > arch/x86/include/asm/page_32_types.h. Fix it.
> > >
> > > Signed-off-by: Yury Norov <ynorov@marvell.com>
> > > ---
> > > arch/x86/include/asm/page_32_types.h | 5 ++---
> > > 1 file changed, 2 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
> > > index 0d5c739eebd7..9bfac5c80d89 100644
> > > --- a/arch/x86/include/asm/page_32_types.h
> > > +++ b/arch/x86/include/asm/page_32_types.h
> > > @@ -28,6 +28,8 @@
> > > #define MCE_STACK 0
> > > #define N_EXCEPTION_STACKS 1
> > >
> > > +#define __VIRTUAL_MASK_SHIFT 32
> > > +
> > > #ifdef CONFIG_X86_PAE
> > > /*
> > > * This is beyond the 44 bit limit imposed by the 32bit long pfns,
> > > @@ -36,11 +38,8 @@
> > > * The real limit is still 44 bits.
> > > */
> > > #define __PHYSICAL_MASK_SHIFT 52
> > > -#define __VIRTUAL_MASK_SHIFT 32
> > > -
> > > #else /* !CONFIG_X86_PAE */
> > > #define __PHYSICAL_MASK_SHIFT 32
> > > -#define __VIRTUAL_MASK_SHIFT 32
> > > #endif /* CONFIG_X86_PAE */
> >
> > I think it's clearer to see them defined where the physical mask shift is
> > defined.
> >
> > How about the patch below? It does away with the weird formatting and
> > cleans up both the comments and the style of the definition:
> >
> > /*
> > * 52 bits on PAE is beyond the 44-bit limit imposed by the
> > * 32-bit long PFNs, but we need the full mask to make sure
> > * inverted PROT_NONE entries have all the host bits set
> > * in a guest. The real limit is still 44 bits.
> > */
> > #ifdef CONFIG_X86_PAE
> > # define __PHYSICAL_MASK_SHIFT 52
> > # define __VIRTUAL_MASK_SHIFT 32
> > #else
> > # define __PHYSICAL_MASK_SHIFT 32
> > # define __VIRTUAL_MASK_SHIFT 32
> > #endif
> >
> > ?
>
> My main concern was about the double definition. It pretty much looks
> like a bug. But if it's intentional, I'm OK. In the patch below, could
> you please add a note to the comment saying that __VIRTUAL_MASK_SHIFT
> is defined twice intentionally?
It's not defined "twice"; it has a value set in each of the PAE and
non-PAE branches - just like __PHYSICAL_MASK_SHIFT.
__VIRTUAL_MASK_SHIFT happens to have the same value in both branches, but
that's OK and pretty standard and happens in other headers too.
Thanks,
Ingo