LKML Archive on lore.kernel.org
* Re: [PATCH 1/2] MSR: Add support for safe variants
@ 2007-03-26 11:57 Mikael Pettersson
2007-03-26 12:09 ` Jean Delvare
0 siblings, 1 reply; 7+ messages in thread
From: Mikael Pettersson @ 2007-03-26 11:57 UTC (permalink / raw)
To: akpm, khali; +Cc: adobriyan, davej, linux-kernel, r.marek
On Mon, 26 Mar 2007 13:29:37 +0200, Jean Delvare wrote:
> * * * * * Updated patch * * * * *
>
> From: Rudolf Marek <r.marek@assembler.cz>
>
> Add safe (exception handled) variants of rdmsr_on_cpu and wrmsr_on_cpu.
> You should use these when the target MSR may not actually exist, as
> doing so could trigger an exception which the regular functions do not
> handle. The safe variants are slower, though.
>
> The upcoming coretemp hardware monitoring driver will need this.
Maybe I'm in the minority here, but I for one strongly believe
that any attempt to access an MSR "which might not be there" is
inherently wrong. It implies that your HW detection is incomplete,
which in combination with MSR accesses means that you may end up
accessing MSRs that aren't at all what you think they should be.
Who supplies these imprecise MSR definitions anyway?
Intel manuals? ACPI?
/Mikael
* Re: [PATCH 1/2] MSR: Add support for safe variants
2007-03-26 11:57 [PATCH 1/2] MSR: Add support for safe variants Mikael Pettersson
@ 2007-03-26 12:09 ` Jean Delvare
2007-03-28 21:04 ` Rudolf Marek
0 siblings, 1 reply; 7+ messages in thread
From: Jean Delvare @ 2007-03-26 12:09 UTC (permalink / raw)
To: Mikael Pettersson
Cc: Andrew Morton, Alexey Dobriyan, Dave Jones, linux-kernel, Rudolf Marek
Hi Mikael,
On Mon, 26 Mar 2007 13:57:29 +0200 (MEST), Mikael Pettersson wrote:
> On Mon, 26 Mar 2007 13:29:37 +0200, Jean Delvare wrote:
> > * * * * * Updated patch * * * * *
> >
> > From: Rudolf Marek <r.marek@assembler.cz>
> >
> > Add safe (exception handled) variants of rdmsr_on_cpu and wrmsr_on_cpu.
> > You should use these when the target MSR may not actually exist, as
> > doing so could trigger an exception which the regular functions do not
> > handle. The safe variants are slower, though.
> >
> > The upcoming coretemp hardware monitoring driver will need this.
>
> Maybe I'm in the minority here, but I for one strongly believe
> that any attempt to access an MSR "which might not be there" is
> inherently wrong. It implies that your HW detection is incomplete,
> which in combination with MSR accesses means that you may end up
> accessing MSRs that aren't at all what you think they should be.
Hopefully CPU manufacturers are not that stupid and don't implement
MSRs that use the same number but do different things in CPU models
which are otherwise similar enough for one driver to attempt to handle
them both. But of course it's probably only a matter of time before I am
proven wrong...
I agree with you that accessing an MSR which might not be there should
be avoided where possible and only done as a last resort. But until
technical documentation is perfectly correct for all CPUs out there,
there will always be cases where we need to do that.
> Who supplies these imprecise MSR definitions anyway?
> Intel manuals? ACPI?
Intel. Rudolf will know the details better.
--
Jean Delvare
* Re: [PATCH 1/2] MSR: Add support for safe variants
2007-03-26 12:09 ` Jean Delvare
@ 2007-03-28 21:04 ` Rudolf Marek
0 siblings, 0 replies; 7+ messages in thread
From: Rudolf Marek @ 2007-03-28 21:04 UTC (permalink / raw)
To: Mikael Pettersson
Cc: Jean Delvare, Andrew Morton, Alexey Dobriyan, Dave Jones, linux-kernel
Hello Mikael,
> Maybe I'm in the minority here, but I for one strongly believe
> that any attempt to access an MSR "which might not be there" is
> inherently wrong. It implies that your HW detection is incomplete,
> which in combination with MSR accesses means that you may end up
> accessing MSRs that aren't at all what you think they should be.
Well, I have some info from Intel, but the register is simply not always there.
So instead of oopsing the kernel, handling the exception is much better.
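(Purely as an illustration of the difference for a caller - the two calls below
are alternatives, and MSR_MIGHT_BE_MISSING is a made-up placeholder, not a real
register name:)

	u32 lo, hi;

	/* The plain accessor assumes the MSR exists; a #GP fault on a
	 * missing register is not handled and the kernel oopses: */
	rdmsr(MSR_MIGHT_BE_MISSING, lo, hi);

	/* The _safe variant catches the fault and returns an error, so a
	 * driver can simply refuse to load: */
	if (rdmsr_safe(MSR_MIGHT_BE_MISSING, &lo, &hi))
		return -ENODEV;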
For further details, please check:
http://softwarecommunity.intel.com/isn/Community/en-US/forums/thread/30229027.aspx
(last page). I may have more info in the future.
> Who supplies these imprecise MSR definitions anyway?
Yes, Intel; please check the thread. Sometimes it's similar to a catch-22 ;)
Rudolf
* [PATCH 1/2] MSR: Add support for safe variants
@ 2007-03-25 12:18 Jean Delvare
2007-03-25 22:22 ` Andrew Morton
0 siblings, 1 reply; 7+ messages in thread
From: Jean Delvare @ 2007-03-25 12:18 UTC (permalink / raw)
To: LKML, Andrew Morton; +Cc: Rudolf Marek, Alexey Dobriyan, Dave Jones
From: Rudolf Marek <r.marek@assembler.cz>
Add support for _safe (exception handled) variants of rdmsr_on_cpu
and wrmsr_on_cpu. This is needed for the upcoming coretemp hardware
monitoring driver, which might step into non-existing (poorly
documented) MSRs.
Signed-off-by: Rudolf Marek <r.marek@assembler.cz>
Cc: Alexey Dobriyan <adobriyan@openvz.org>
Cc: Dave Jones <davej@redhat.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
---
Andrew, can you please pick this patch? I don't want to keep it in my
hwmon tree, as it might be useful to others as well.
arch/i386/lib/msr-on-cpu.c | 71 ++++++++++++++++++++++++++++++++++++++++----
include/asm-i386/msr.h | 12 +++++++
include/asm-x86_64/msr.h | 11 +++++++
3 files changed, 87 insertions(+), 7 deletions(-)
--- linux-2.6.21-rc4.orig/arch/i386/lib/msr-on-cpu.c 2007-02-28 09:48:18.000000000 +0100
+++ linux-2.6.21-rc4/arch/i386/lib/msr-on-cpu.c 2007-03-24 20:50:05.000000000 +0100
@@ -6,6 +6,7 @@
struct msr_info {
u32 msr_no;
u32 l, h;
+ int err;
};
static void __rdmsr_on_cpu(void *info)
@@ -15,20 +16,38 @@ static void __rdmsr_on_cpu(void *info)
rdmsr(rv->msr_no, rv->l, rv->h);
}
-void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+static void __rdmsr_safe_on_cpu(void *info)
{
+ struct msr_info *rv = info;
+
+ rv->err = rdmsr_safe(rv->msr_no, &rv->l, &rv->h);
+}
+
+static int _rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h, int safe)
+{
+ int err = 0;
preempt_disable();
if (smp_processor_id() == cpu)
- rdmsr(msr_no, *l, *h);
+ if (safe)
+ err = rdmsr_safe(msr_no, l, h);
+ else
+ rdmsr(msr_no, *l, *h);
else {
struct msr_info rv;
rv.msr_no = msr_no;
- smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 0, 1);
+ if (safe) {
+ smp_call_function_single(cpu, __rdmsr_safe_on_cpu,
+ &rv, 0, 1);
+ err = rv.err;
+ } else {
+ smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 0, 1);
+ }
*l = rv.l;
*h = rv.h;
}
preempt_enable();
+ return err;
}
static void __wrmsr_on_cpu(void *info)
@@ -38,21 +57,61 @@ static void __wrmsr_on_cpu(void *info)
wrmsr(rv->msr_no, rv->l, rv->h);
}
-void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+static void __wrmsr_safe_on_cpu(void *info)
{
+ struct msr_info *rv = info;
+
+ rv->err = wrmsr_safe(rv->msr_no, rv->l, rv->h);
+}
+
+static int _wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h, int safe)
+{
+ int err = 0;
preempt_disable();
if (smp_processor_id() == cpu)
- wrmsr(msr_no, l, h);
+ if (safe)
+ err = wrmsr_safe(msr_no, l, h);
+ else
+ wrmsr(msr_no, l, h);
else {
struct msr_info rv;
rv.msr_no = msr_no;
rv.l = l;
rv.h = h;
- smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 0, 1);
+ if (safe) {
+ smp_call_function_single(cpu, __wrmsr_safe_on_cpu,
+ &rv, 0, 1);
+ err = rv.err;
+ } else {
+ smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 0, 1);
+ }
}
preempt_enable();
+ return err;
+}
+
+void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+{
+ _wrmsr_on_cpu(cpu, msr_no, l, h, 0);
+}
+
+void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+{
+ _rdmsr_on_cpu(cpu, msr_no, l, h, 0);
+}
+
+int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+{
+ return _wrmsr_on_cpu(cpu, msr_no, l, h, 1);
+}
+
+int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+{
+ return _rdmsr_on_cpu(cpu, msr_no, l, h, 1);
}
EXPORT_SYMBOL(rdmsr_on_cpu);
EXPORT_SYMBOL(wrmsr_on_cpu);
+EXPORT_SYMBOL(rdmsr_safe_on_cpu);
+EXPORT_SYMBOL(wrmsr_safe_on_cpu);
--- linux-2.6.21-rc4.orig/include/asm-i386/msr.h 2007-02-28 09:48:21.000000000 +0100
+++ linux-2.6.21-rc4/include/asm-i386/msr.h 2007-03-24 20:50:05.000000000 +0100
@@ -4,7 +4,7 @@
#ifdef CONFIG_PARAVIRT
#include <asm/paravirt.h>
#else
-
+#include <linux/errno.h>
/*
* Access to machine-specific registers (available on 586 and better only)
* Note: the rd* operations modify the parameters directly (without using
@@ -86,6 +86,8 @@ static inline void wrmsrl (unsigned long
#ifdef CONFIG_SMP
void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
+int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
#else /* CONFIG_SMP */
static inline void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
{
@@ -95,6 +97,14 @@ static inline void wrmsr_on_cpu(unsigned
{
wrmsr(msr_no, l, h);
}
+static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+{
+ return rdmsr_safe(msr_no, l, h);
+}
+static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+{
+ return wrmsr_safe(msr_no, l, h);
+}
#endif /* CONFIG_SMP */
/* symbolic names for some interesting MSRs */
--- linux-2.6.21-rc4.orig/include/asm-x86_64/msr.h 2007-02-28 09:48:21.000000000 +0100
+++ linux-2.6.21-rc4/include/asm-x86_64/msr.h 2007-03-24 20:50:05.000000000 +0100
@@ -2,6 +2,7 @@
#define X86_64_MSR_H 1
#ifndef __ASSEMBLY__
+#include <linux/errno.h>
/*
* Access to machine-specific registers (available on 586 and better only)
* Note: the rd* operations modify the parameters directly (without using
@@ -163,6 +164,8 @@ static inline unsigned int cpuid_edx(uns
#ifdef CONFIG_SMP
void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
+int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
#else /* CONFIG_SMP */
static inline void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
{
@@ -172,6 +175,14 @@ static inline void wrmsr_on_cpu(unsigned
{
wrmsr(msr_no, l, h);
}
+static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+{
+ return rdmsr_safe(msr_no, l, h);
+}
+static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+{
+ return wrmsr_safe(msr_no, l, h);
+}
#endif /* CONFIG_SMP */
#endif
--
Jean Delvare
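(For illustration only: a driver such as coretemp could use the new cross-CPU
interface roughly like this. MSR_SOME_THERMAL_REG, the cpu variable and the
error path are invented for the example and are not part of the patch.)

	u32 eax, edx;
	int err;

	/* Read a possibly absent MSR on the CPU the hwmon device sits on. */
	err = rdmsr_safe_on_cpu(cpu, MSR_SOME_THERMAL_REG, &eax, &edx);
	if (err) {
		printk(KERN_WARNING "MSR 0x%x not available on CPU %u\n",
		       MSR_SOME_THERMAL_REG, cpu);
		return -ENODEV;
	}
	/* ... decode eax/edx ... */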
* Re: [PATCH 1/2] MSR: Add support for safe variants
2007-03-25 12:18 Jean Delvare
@ 2007-03-25 22:22 ` Andrew Morton
2007-03-26 11:29 ` Jean Delvare
0 siblings, 1 reply; 7+ messages in thread
From: Andrew Morton @ 2007-03-25 22:22 UTC (permalink / raw)
To: Jean Delvare; +Cc: LKML, Rudolf Marek, Alexey Dobriyan, Dave Jones
On Sun, 25 Mar 2007 14:18:23 +0200 Jean Delvare <khali@linux-fr.org> wrote:
> Add support for _safe (exception handled) variants of rdmsr_on_cpu
> and wrmsr_on_cpu. This is needed for the upcoming coretemp hardware
> monitoring driver, which might step into non-existing (poorly
> documented) MSRs.
Crappy changelog. What do these functions do? What is "safe" about them?
This is described in neither the changelog nor the code comments, but it
should be described in both.
Also, Andi presently has several MSR patches queued (most prominently
ftp://ftp.firstfloor.org/pub/ak/x86_64/quilt-current/patches/rwmsr-on-cpu)
but I'm not including them in -mm due to rejects. If I get a properly
changelogged and commented version of this patch I can merge it, but it
might get wrecked again when the x86_64 tree gets sorted out.
* Re: [PATCH 1/2] MSR: Add support for safe variants
2007-03-25 22:22 ` Andrew Morton
@ 2007-03-26 11:29 ` Jean Delvare
2007-03-26 11:55 ` Andrew Morton
0 siblings, 1 reply; 7+ messages in thread
From: Jean Delvare @ 2007-03-26 11:29 UTC (permalink / raw)
To: Andrew Morton; +Cc: LKML, Rudolf Marek, Alexey Dobriyan, Dave Jones
Hi Andrew,
On Sun, 25 Mar 2007 14:22:15 -0800, Andrew Morton wrote:
> On Sun, 25 Mar 2007 14:18:23 +0200 Jean Delvare <khali@linux-fr.org> wrote:
>
> > Add support for _safe (exception handled) variants of rdmsr_on_cpu
> > and wrmsr_on_cpu. This is needed for the upcoming coretemp hardware
> > monitoring driver, which might step into non-existing (poorly
> > documented) MSRs.
>
> Crappy changelog. What do these functions do? What is "safe" about them?
> This is described in neither the changelog nor the code comments, but it
> should be described in both.
This is needed when you attempt to access an MSR which might not
actually exist, which is exactly what the changelog says... The core safe msr
functions (in include/asm/msr.h) also have a comment that says: "with
exception handling".
So I don't think this is as crappy as you say. But I still updated the
changelog and added a comment in the code; I hope you like it better this
way.
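(For readers unfamiliar with the "exception handling" mentioned above: the
_safe accessors wrap the rdmsr/wrmsr instruction with an exception-table
fixup, so a #GP fault on a non-existent MSR becomes an error return instead
of an oops. A simplified sketch of the pattern follows; the actual macro in
include/asm-i386/msr.h differs in detail.)

	static inline int rdmsr_safe_sketch(u32 msr, u32 *lo, u32 *hi)
	{
		int err;

		asm volatile("2: rdmsr ; xorl %0,%0\n"	/* success: err = 0 */
			     "1:\n\t"
			     ".section .fixup,\"ax\"\n\t"
			     "3: movl %4,%0 ; jmp 1b\n\t"	/* #GP: err = -EFAULT */
			     ".previous\n\t"
			     ".section __ex_table,\"a\"\n\t"
			     "   .align 4\n\t"
			     "   .long 2b,3b\n\t"	/* a fault at 2b jumps to 3b */
			     ".previous"
			     : "=r" (err), "=a" (*lo), "=d" (*hi)
			     : "c" (msr), "i" (-EFAULT));
		return err;
	}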
> Also, Andi presently has several MSR patches queued (most prominently
> ftp://ftp.firstfloor.org/pub/ak/x86_64/quilt-current/patches/rwmsr-on-cpu)
> but I'm not including them in -mm due to rejects. If I get a properly
> changelogged and commented version of this patch I can merge it, but it
> might get wrecked again when the x86_64 tree gets sorted out.
That patch you point me to is _already_ in Linus' tree:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=b077ffb3b767c3efb44d00b998385a9cb127255c
So I'm not surprised you get rejects... Andi seemingly hasn't updated his
tree in over a month.
The patch from Rudolf Marek which I am posting here builds on top of
what is already in Linus' tree. Taking it in your tree should not cause
any problem.
Thanks.
* * * * * Updated patch * * * * *
From: Rudolf Marek <r.marek@assembler.cz>
Add safe (exception handled) variants of rdmsr_on_cpu and wrmsr_on_cpu.
You should use these when the target MSR may not actually exist, as
doing so could trigger an exception which the regular functions do not
handle. The safe variants are slower, though.
The upcoming coretemp hardware monitoring driver will need this.
Signed-off-by: Rudolf Marek <r.marek@assembler.cz>
Cc: Alexey Dobriyan <adobriyan@openvz.org>
Cc: Dave Jones <davej@redhat.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
---
arch/i386/lib/msr-on-cpu.c | 73 ++++++++++++++++++++++++++++++++++++++++----
include/asm-i386/msr.h | 12 +++++++
include/asm-x86_64/msr.h | 11 +++++++
3 files changed, 89 insertions(+), 7 deletions(-)
--- linux-2.6.21-rc4.orig/arch/i386/lib/msr-on-cpu.c 2007-03-25 14:31:37.000000000 +0200
+++ linux-2.6.21-rc4/arch/i386/lib/msr-on-cpu.c 2007-03-26 13:11:26.000000000 +0200
@@ -6,6 +6,7 @@
struct msr_info {
u32 msr_no;
u32 l, h;
+ int err;
};
static void __rdmsr_on_cpu(void *info)
@@ -15,20 +16,38 @@ static void __rdmsr_on_cpu(void *info)
rdmsr(rv->msr_no, rv->l, rv->h);
}
-void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+static void __rdmsr_safe_on_cpu(void *info)
{
+ struct msr_info *rv = info;
+
+ rv->err = rdmsr_safe(rv->msr_no, &rv->l, &rv->h);
+}
+
+static int _rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h, int safe)
+{
+ int err = 0;
preempt_disable();
if (smp_processor_id() == cpu)
- rdmsr(msr_no, *l, *h);
+ if (safe)
+ err = rdmsr_safe(msr_no, l, h);
+ else
+ rdmsr(msr_no, *l, *h);
else {
struct msr_info rv;
rv.msr_no = msr_no;
- smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 0, 1);
+ if (safe) {
+ smp_call_function_single(cpu, __rdmsr_safe_on_cpu,
+ &rv, 0, 1);
+ err = rv.err;
+ } else {
+ smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 0, 1);
+ }
*l = rv.l;
*h = rv.h;
}
preempt_enable();
+ return err;
}
static void __wrmsr_on_cpu(void *info)
@@ -38,21 +57,63 @@ static void __wrmsr_on_cpu(void *info)
wrmsr(rv->msr_no, rv->l, rv->h);
}
-void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+static void __wrmsr_safe_on_cpu(void *info)
{
+ struct msr_info *rv = info;
+
+ rv->err = wrmsr_safe(rv->msr_no, rv->l, rv->h);
+}
+
+static int _wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h, int safe)
+{
+ int err = 0;
preempt_disable();
if (smp_processor_id() == cpu)
- wrmsr(msr_no, l, h);
+ if (safe)
+ err = wrmsr_safe(msr_no, l, h);
+ else
+ wrmsr(msr_no, l, h);
else {
struct msr_info rv;
rv.msr_no = msr_no;
rv.l = l;
rv.h = h;
- smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 0, 1);
+ if (safe) {
+ smp_call_function_single(cpu, __wrmsr_safe_on_cpu,
+ &rv, 0, 1);
+ err = rv.err;
+ } else {
+ smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 0, 1);
+ }
}
preempt_enable();
+ return err;
+}
+
+void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+{
+ _wrmsr_on_cpu(cpu, msr_no, l, h, 0);
+}
+
+void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+{
+ _rdmsr_on_cpu(cpu, msr_no, l, h, 0);
+}
+
+/* These "safe" variants are slower and should be used when the target MSR
+ may not actually exist. */
+int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+{
+ return _wrmsr_on_cpu(cpu, msr_no, l, h, 1);
+}
+
+int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+{
+ return _rdmsr_on_cpu(cpu, msr_no, l, h, 1);
}
EXPORT_SYMBOL(rdmsr_on_cpu);
EXPORT_SYMBOL(wrmsr_on_cpu);
+EXPORT_SYMBOL(rdmsr_safe_on_cpu);
+EXPORT_SYMBOL(wrmsr_safe_on_cpu);
--- linux-2.6.21-rc4.orig/include/asm-i386/msr.h 2007-03-25 14:31:37.000000000 +0200
+++ linux-2.6.21-rc4/include/asm-i386/msr.h 2007-03-26 07:52:18.000000000 +0200
@@ -4,7 +4,7 @@
#ifdef CONFIG_PARAVIRT
#include <asm/paravirt.h>
#else
-
+#include <linux/errno.h>
/*
* Access to machine-specific registers (available on 586 and better only)
* Note: the rd* operations modify the parameters directly (without using
@@ -86,6 +86,8 @@ static inline void wrmsrl (unsigned long
#ifdef CONFIG_SMP
void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
+int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
#else /* CONFIG_SMP */
static inline void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
{
@@ -95,6 +97,14 @@ static inline void wrmsr_on_cpu(unsigned
{
wrmsr(msr_no, l, h);
}
+static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+{
+ return rdmsr_safe(msr_no, l, h);
+}
+static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+{
+ return wrmsr_safe(msr_no, l, h);
+}
#endif /* CONFIG_SMP */
/* symbolic names for some interesting MSRs */
--- linux-2.6.21-rc4.orig/include/asm-x86_64/msr.h 2007-03-25 14:31:37.000000000 +0200
+++ linux-2.6.21-rc4/include/asm-x86_64/msr.h 2007-03-26 07:52:19.000000000 +0200
@@ -2,6 +2,7 @@
#define X86_64_MSR_H 1
#ifndef __ASSEMBLY__
+#include <linux/errno.h>
/*
* Access to machine-specific registers (available on 586 and better only)
* Note: the rd* operations modify the parameters directly (without using
@@ -163,6 +164,8 @@ static inline unsigned int cpuid_edx(uns
#ifdef CONFIG_SMP
void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
+int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
#else /* CONFIG_SMP */
static inline void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
{
@@ -172,6 +175,14 @@ static inline void wrmsr_on_cpu(unsigned
{
wrmsr(msr_no, l, h);
}
+static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+{
+ return rdmsr_safe(msr_no, l, h);
+}
+static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+{
+ return wrmsr_safe(msr_no, l, h);
+}
#endif /* CONFIG_SMP */
#endif
--
Jean Delvare
* Re: [PATCH 1/2] MSR: Add support for safe variants
2007-03-26 11:29 ` Jean Delvare
@ 2007-03-26 11:55 ` Andrew Morton
0 siblings, 0 replies; 7+ messages in thread
From: Andrew Morton @ 2007-03-26 11:55 UTC (permalink / raw)
To: Jean Delvare; +Cc: LKML, Rudolf Marek, Alexey Dobriyan, Dave Jones
On Mon, 26 Mar 2007 13:29:37 +0200 Jean Delvare <khali@linux-fr.org> wrote:
> The patch from Rudolf Marek which I am posting here builds on top of
> what is already in Linus' tree. Taking it in your tree should not cause
> any problem.
OK, thanks - I'll add this, then I'll un-revert the patch which uses it.