LKML Archive on lore.kernel.org
* [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Hi Ingo,
Here is the patchset you requested. I did not port the marker reintegration
to your sched-devel tree, though, because many changes have happened since you
did the original work.
It applies on top of the latest sched-devel.git.
You will notice that I implemented what we discussed yesterday: using nops and
a jump for the heavily optimized version of markers. Comments are welcome.
Running this with my ~120 LTTng markers on x86_32 detects 97% of the sites;
the remaining 4 out of 120 had to fall back on the standard immediate values
because they had been manipulated by gcc optimizations. The sched-devel.git
port has been tested on x86_32. Patches before the port are tested on x86_32
and x86_64.
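For readers who were not in that discussion, here is a rough, purely
illustrative sketch of what a jump-optimized marker site boils down to, using
the DEFINE_IMV()/imv_read() API introduced later in this series. The macro and
helper names below are made up for this example; the real definitions live in
the immediate-values-jump and markers-use-imv-jump patches at the end of the
series. The point is that the enabled/disabled test compiles down to an
instruction sequence (nops or a jump, or at worst a load immediate) that is
patched in place at runtime, so the marker fast path never reads an "enabled"
flag from memory:

	/* Illustrative only -- not code from this series. */
	DEFINE_IMV(char, my_marker_enabled);		/* hypothetical flag */

	#define MY_TRACE_MARK(fmt, args...)				\
		do {							\
			/* patched nops/jump or load immediate */	\
			if (unlikely(imv_read(my_marker_enabled)))	\
				my_trace_call(fmt, ##args);		\
		} while (0)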
Note that some folding of the immediate values patches could eventually be
required. At that point, add-all-cpus-option-to-stop-machine-run.patch would
become useless.
The series order is the following:
make-marker_debug-static.patch # in -mm
x86-nmi-safe-int3-and-page-fault.patch
check-for-breakpoint-in-text-poke-to-eliminate-bug-on.patch
#Kprobes mutex cleanup
kprobes-use-mutex-for-insn-pages.patch
kprobes-dont-use-kprobes-mutex-in-arch-code.patch
kprobes-declare-kprobes-mutex-static.patch
#Text Edit Lock (depends on Enhance DEBUG_RODATA and kprobes mutex cleanup)
text-edit-lock-architecture-independent-code.patch
text-edit-lock-kprobes-architecture-independent-support.patch
#
#Immediate Values
add-all-cpus-option-to-stop-machine-run.patch
immediate-values-architecture-independent-code.patch
immediate-values-kconfig-menu-in-embedded.patch
immediate-values-x86-optimization.patch
add-text-poke-and-sync-core-to-powerpc.patch
immediate-values-powerpc-optimization.patch
immediate-values-documentation.patch
immediate-values-support-init.patch
#
scheduler-profiling-use-immediate-values.patch
#
markers-remove-extra-format-argument.patch
markers-define-non-optimized-marker.patch
#
immediate-values-move-kprobes-x86-restore-interrupt-to-kdebug-h.patch
add-discard-section-to-x86.patch
immediate-values-x86-optimization-nmi-mce-support.patch
immediate-values-powerpc-optimization-nmi-mce-support.patch
immediate-values-use-arch-nmi-mce-support.patch
linux-kernel-markers-immediate-values.patch
#
immediate-values-jump.patch
markers-use-imv-jump.patch
Mathieu
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 01/27] From: Adrian Bunk <bunk@kernel.org>
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel; +Cc: Adrian Bunk, Mathieu Desnoyers, Andrew Morton
[-- Attachment #1: make-marker_debug-static.patch --]
[-- Type: text/plain, Size: 931 bytes --]
With the needlessly global marker_debug made static, gcc can optimize the
unused code away.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
kernel/marker.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -puN kernel/marker.c~make-marker_debug-static kernel/marker.c
--- a/kernel/marker.c~make-marker_debug-static
+++ a/kernel/marker.c
@@ -28,7 +28,7 @@ extern struct marker __start___markers[]
extern struct marker __stop___markers[];
/* Set to 1 to enable marker debug output */
-const int marker_debug;
+static const int marker_debug;
/*
* markers_mutex nests inside module_mutex. Markers mutex protects the builtin
_
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 02/27] x86 NMI-safe INT3 and Page Fault
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Andi Kleen, akpm, H. Peter Anvin,
Jeremy Fitzhardinge, Steven Rostedt, Frank Ch. Eigler
[-- Attachment #1: x86-nmi-safe-int3-and-page-fault.patch --]
[-- Type: text/plain, Size: 13939 bytes --]
Implements an alternative to iret, built from popf and a return, so trap and
exception handlers can return to the NMI handler without issuing iret. iret
would cause NMIs to be re-enabled prematurely. x86_32 uses popf and a far
return. x86_64 has to copy the return instruction pointer to the top of the
previous stack, issue a popf, load the previous esp and issue a near return
(ret).
This allows placing immediate values (and therefore optimized trace_marks) in
NMI code, since returning from a breakpoint becomes valid. Accessing vmalloc'd
memory, which allows executing module code or touching vmapped or vmalloc'd
areas from NMI context, also becomes valid. This is very useful for tracers
like LTTng.
This patch makes all faults, traps and exceptions safe to take from NMI
context, *except* single-stepping, which requires iret to restore the TF (trap
flag) and jump to the return address in a single instruction. Sorry, no
kprobes support in NMI handlers because of this limitation: we cannot
single-step an NMI handler, because iret must set the TF flag and return to
the instruction to single-step in one instruction, and this cannot be emulated
with popf/lret, because the lret itself would be single-stepped. This
limitation does not apply to immediate values because they do not use
single-stepping. The code detects whether the TF flag is set and, if so, uses
the iret path for single-stepping, even though it reactivates NMIs
prematurely.
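As a side note, the hardirq.h hunk below also adds NMI nesting accounting and
an in_nmi() test. A minimal, hypothetical usage sketch (the helper and mutex
names are made up; only in_nmi() comes from this patch):

	static DEFINE_MUTEX(my_data_mutex);

	/* Helper that may be reached both from normal and NMI context. */
	static void my_fault_helper(void)
	{
		if (in_nmi())
			return;		/* nested over an NMI: must not sleep */
		mutex_lock(&my_data_mutex);
		/* ... slow path ... */
		mutex_unlock(&my_data_mutex);
	}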
alpha and avr32 use bit 30 of the preempt count for PREEMPT_ACTIVE
(0x40000000), which this patch reuses for NMI nesting accounting; their
PREEMPT_ACTIVE is therefore moved to bit 28 (0x10000000).
TODO: test the alpha and avr32 active count modification.
Tested on x86_32 (tests implemented in a separate patch):
- Instrumented the return path to export the EIP, CS and EFLAGS values when it
is taken, so we know the return path code has been executed.
- trace_mark, using immediate values, with a 10ms delay and the breakpoint
activated. Runs well through the return path.
- Tested vmalloc faults in the NMI handler by placing a non-optimized marker
in the NMI handler (so no breakpoint is executed) and connecting a probe which
touches every page of a 20MB vmalloc'd buffer. It executes through the return
path without problems.
- Tested with and without preemption.
Tested on x86_64:
- Instrumented the return path to export the EIP, CS and EFLAGS values when it
is taken, so we know the return path code has been executed.
- trace_mark, using immediate values, with a 10ms delay and the breakpoint
activated. Runs well through the return path.
Still to test on x86_64:
- Test without preemption.
- Test vmalloc faults.
- Test on Intel 64-bit CPUs.
"This way lies madness. Don't go there."
- Andi
Changelog since v1:
- x86_64 fixes.
Changelog since v2:
- paravirt support.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <andi@firstfloor.org>
CC: akpm@osdl.org
CC: mingo@elte.hu
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: Jeremy Fitzhardinge <jeremy@goop.org>
CC: Steven Rostedt <rostedt@goodmis.org>
CC: "Frank Ch. Eigler" <fche@redhat.com>
---
arch/x86/kernel/entry_32.S | 25 +++++++++++++++-
arch/x86/kernel/entry_64.S | 31 ++++++++++++++++++++
include/asm-alpha/thread_info.h | 2 -
include/asm-avr32/thread_info.h | 2 -
include/asm-x86/irqflags.h | 61 ++++++++++++++++++++++++++++++++++++++++
include/asm-x86/paravirt.h | 2 +
include/linux/hardirq.h | 24 ++++++++++++++-
7 files changed, 142 insertions(+), 5 deletions(-)
Index: linux-2.6-sched-devel/include/linux/hardirq.h
===================================================================
--- linux-2.6-sched-devel.orig/include/linux/hardirq.h 2008-04-16 12:29:24.000000000 -0400
+++ linux-2.6-sched-devel/include/linux/hardirq.h 2008-04-16 12:29:42.000000000 -0400
@@ -22,10 +22,13 @@
* PREEMPT_MASK: 0x000000ff
* SOFTIRQ_MASK: 0x0000ff00
* HARDIRQ_MASK: 0x0fff0000
+ * HARDNMI_MASK: 0x40000000
*/
#define PREEMPT_BITS 8
#define SOFTIRQ_BITS 8
+#define HARDNMI_BITS 1
+
#ifndef HARDIRQ_BITS
#define HARDIRQ_BITS 12
@@ -45,16 +48,19 @@
#define PREEMPT_SHIFT 0
#define SOFTIRQ_SHIFT (PREEMPT_SHIFT + PREEMPT_BITS)
#define HARDIRQ_SHIFT (SOFTIRQ_SHIFT + SOFTIRQ_BITS)
+#define HARDNMI_SHIFT (30)
#define __IRQ_MASK(x) ((1UL << (x))-1)
#define PREEMPT_MASK (__IRQ_MASK(PREEMPT_BITS) << PREEMPT_SHIFT)
#define SOFTIRQ_MASK (__IRQ_MASK(SOFTIRQ_BITS) << SOFTIRQ_SHIFT)
#define HARDIRQ_MASK (__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT)
+#define HARDNMI_MASK (__IRQ_MASK(HARDNMI_BITS) << HARDNMI_SHIFT)
#define PREEMPT_OFFSET (1UL << PREEMPT_SHIFT)
#define SOFTIRQ_OFFSET (1UL << SOFTIRQ_SHIFT)
#define HARDIRQ_OFFSET (1UL << HARDIRQ_SHIFT)
+#define HARDNMI_OFFSET (1UL << HARDNMI_SHIFT)
#if PREEMPT_ACTIVE < (1 << (HARDIRQ_SHIFT + HARDIRQ_BITS))
#error PREEMPT_ACTIVE is too low!
@@ -63,6 +69,7 @@
#define hardirq_count() (preempt_count() & HARDIRQ_MASK)
#define softirq_count() (preempt_count() & SOFTIRQ_MASK)
#define irq_count() (preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK))
+#define hardnmi_count() (preempt_count() & HARDNMI_MASK)
/*
* Are we doing bottom half or hardware interrupt processing?
@@ -71,6 +78,7 @@
#define in_irq() (hardirq_count())
#define in_softirq() (softirq_count())
#define in_interrupt() (irq_count())
+#define in_nmi() (hardnmi_count())
/*
* Are we running in atomic context? WARNING: this macro cannot
@@ -159,7 +167,19 @@ extern void irq_enter(void);
*/
extern void irq_exit(void);
-#define nmi_enter() do { lockdep_off(); __irq_enter(); } while (0)
-#define nmi_exit() do { __irq_exit(); lockdep_on(); } while (0)
+#define nmi_enter() \
+ do { \
+ lockdep_off(); \
+ BUG_ON(hardnmi_count()); \
+ add_preempt_count(HARDNMI_OFFSET); \
+ __irq_enter(); \
+ } while (0)
+
+#define nmi_exit() \
+ do { \
+ __irq_exit(); \
+ sub_preempt_count(HARDNMI_OFFSET); \
+ lockdep_on(); \
+ } while (0)
#endif /* LINUX_HARDIRQ_H */
Index: linux-2.6-sched-devel/arch/x86/kernel/entry_32.S
===================================================================
--- linux-2.6-sched-devel.orig/arch/x86/kernel/entry_32.S 2008-04-16 12:29:25.000000000 -0400
+++ linux-2.6-sched-devel/arch/x86/kernel/entry_32.S 2008-04-16 12:29:42.000000000 -0400
@@ -72,7 +72,6 @@
#define preempt_stop(clobbers) DISABLE_INTERRUPTS(clobbers); TRACE_IRQS_OFF
#else
#define preempt_stop(clobbers)
-#define resume_kernel restore_nocheck
#endif
.macro TRACE_IRQS_IRET
@@ -258,6 +257,8 @@ END(ret_from_exception)
#ifdef CONFIG_PREEMPT
ENTRY(resume_kernel)
DISABLE_INTERRUPTS(CLBR_ANY)
+ testl $0x40000000,TI_preempt_count(%ebp) # nested over NMI ?
+ jnz return_to_nmi
cmpl $0,TI_preempt_count(%ebp) # non-zero preempt_count ?
jnz restore_nocheck
need_resched:
@@ -269,6 +270,12 @@ need_resched:
call preempt_schedule_irq
jmp need_resched
END(resume_kernel)
+#else
+ENTRY(resume_kernel)
+ testl $0x40000000,TI_preempt_count(%ebp) # nested over NMI ?
+ jnz return_to_nmi
+ jmp restore_nocheck
+END(resume_kernel)
#endif
CFI_ENDPROC
@@ -408,6 +415,22 @@ restore_nocheck_notrace:
CFI_ADJUST_CFA_OFFSET -4
irq_return:
INTERRUPT_RETURN
+return_to_nmi:
+ testl $X86_EFLAGS_TF, PT_EFLAGS(%esp)
+ jnz restore_nocheck /*
+ * If single-stepping an NMI handler,
+ * use the normal iret path instead of
+ * the popf/lret because lret would be
+ * single-stepped. It should not
+ * happen : it will reactivate NMIs
+ * prematurely.
+ */
+ TRACE_IRQS_IRET
+ RESTORE_REGS
+ addl $4, %esp # skip orig_eax/error_code
+ CFI_ADJUST_CFA_OFFSET -4
+ INTERRUPT_RETURN_NMI_SAFE
+
.section .fixup,"ax"
ENTRY(iret_exc)
pushl $0 # no error code
Index: linux-2.6-sched-devel/arch/x86/kernel/entry_64.S
===================================================================
--- linux-2.6-sched-devel.orig/arch/x86/kernel/entry_64.S 2008-04-16 12:29:25.000000000 -0400
+++ linux-2.6-sched-devel/arch/x86/kernel/entry_64.S 2008-04-16 12:29:42.000000000 -0400
@@ -681,12 +681,27 @@ retint_restore_args: /* return to kernel
* The iretq could re-enable interrupts:
*/
TRACE_IRQS_IRETQ
+ testl $0x40000000,threadinfo_preempt_count(%rcx) /* Nested over NMI ? */
+ jnz return_to_nmi
restore_args:
RESTORE_ARGS 0,8,0
irq_return:
INTERRUPT_RETURN
+return_to_nmi: /*
+ * If single-stepping an NMI handler,
+ * use the normal iret path instead of
+ * the popf/lret because lret would be
+ * single-stepped. It should not
+ * happen : it will reactivate NMIs
+ * prematurely.
+ */
+ bt $8,EFLAGS-ARGOFFSET(%rsp) /* trap flag? */
+ jc restore_args
+ RESTORE_ARGS 0,8,0
+ INTERRUPT_RETURN_NMI_SAFE
+
.section __ex_table, "a"
.quad irq_return, bad_iret
.previous
@@ -902,6 +917,10 @@ END(spurious_interrupt)
.macro paranoidexit trace=1
/* ebx: no swapgs flag */
paranoid_exit\trace:
+ GET_THREAD_INFO(%rcx)
+ testl $0x40000000,threadinfo_preempt_count(%rcx) /* Nested over NMI ? */
+ jnz paranoid_return_to_nmi\trace
+paranoid_exit_no_nmi\trace:
testl %ebx,%ebx /* swapgs needed? */
jnz paranoid_restore\trace
testl $3,CS(%rsp)
@@ -914,6 +933,18 @@ paranoid_swapgs\trace:
paranoid_restore\trace:
RESTORE_ALL 8
jmp irq_return
+paranoid_return_to_nmi\trace: /*
+ * If single-stepping an NMI handler,
+ * use the normal iret path instead of
+ * the popf/lret because lret would be
+ * single-stepped. It should not
+ * happen : it will reactivate NMIs
+ * prematurely.
+ */
+ bt $8,EFLAGS-0(%rsp) /* trap flag? */
+ jc paranoid_exit_no_nmi\trace
+ RESTORE_ALL 8
+ INTERRUPT_RETURN_NMI_SAFE
paranoid_userspace\trace:
GET_THREAD_INFO(%rcx)
movl threadinfo_flags(%rcx),%ebx
Index: linux-2.6-sched-devel/include/asm-x86/irqflags.h
===================================================================
--- linux-2.6-sched-devel.orig/include/asm-x86/irqflags.h 2008-04-16 12:29:24.000000000 -0400
+++ linux-2.6-sched-devel/include/asm-x86/irqflags.h 2008-04-16 12:29:42.000000000 -0400
@@ -112,12 +112,73 @@ static inline unsigned long __raw_local_
#ifdef CONFIG_X86_64
#define INTERRUPT_RETURN iretq
+
+/*
+ * Only returns from a trap or exception to a NMI context (intra-privilege
+ * level near return) to the same SS and CS segments. Should be used
+ * upon trap or exception return when nested over a NMI context so no iret is
+ * issued. It takes care of modifying the eflags, rsp and returning to the
+ * previous function.
+ *
+ * The stack, at that point, looks like :
+ *
+ * 0(rsp) RIP
+ * 8(rsp) CS
+ * 16(rsp) EFLAGS
+ * 24(rsp) RSP
+ * 32(rsp) SS
+ *
+ * Upon execution :
+ * Copy EIP to the top of the return stack
+ * Update top of return stack address
+ * Pop eflags into the eflags register
+ * Make the return stack current
+ * Near return (popping the return address from the return stack)
+ */
+#define INTERRUPT_RETURN_NMI_SAFE pushq %rax; \
+ pushq %rbx; \
+ movq 40(%rsp), %rax; \
+ movq 16(%rsp), %rbx; \
+ subq $8, %rax; \
+ movq %rbx, (%rax); \
+ movq %rax, 40(%rsp); \
+ popq %rbx; \
+ popq %rax; \
+ addq $16, %rsp; \
+ popfq; \
+ movq (%rsp), %rsp; \
+ ret; \
+
#define ENABLE_INTERRUPTS_SYSCALL_RET \
movq %gs:pda_oldrsp, %rsp; \
swapgs; \
sysretq;
#else
#define INTERRUPT_RETURN iret
+
+/*
+ * Protected mode only, no V8086. Implies that protected mode must
+ * be entered before NMIs or MCEs are enabled. Only returns from a trap or
+ * exception to a NMI context (intra-privilege level far return). Should be used
+ * upon trap or exception return when nested over a NMI context so no iret is
+ * issued.
+ *
+ * The stack, at that point, looks like :
+ *
+ * 0(esp) EIP
+ * 4(esp) CS
+ * 8(esp) EFLAGS
+ *
+ * Upon execution :
+ * Copy the stack eflags to top of stack
+ * Pop eflags into the eflags register
+ * Far return: pop EIP and CS into their register, and additionally pop EFLAGS.
+ */
+#define INTERRUPT_RETURN_NMI_SAFE pushl 8(%esp); \
+ popfl; \
+ .byte 0xCA; \
+ .word 4;
+
#define ENABLE_INTERRUPTS_SYSCALL_RET sti; sysexit
#define GET_CR0_INTO_EAX movl %cr0, %eax
#endif
Index: linux-2.6-sched-devel/include/asm-alpha/thread_info.h
===================================================================
--- linux-2.6-sched-devel.orig/include/asm-alpha/thread_info.h 2008-04-16 12:29:24.000000000 -0400
+++ linux-2.6-sched-devel/include/asm-alpha/thread_info.h 2008-04-16 12:29:42.000000000 -0400
@@ -57,7 +57,7 @@ register struct thread_info *__current_t
#endif /* __ASSEMBLY__ */
-#define PREEMPT_ACTIVE 0x40000000
+#define PREEMPT_ACTIVE 0x10000000
/*
* Thread information flags:
Index: linux-2.6-sched-devel/include/asm-avr32/thread_info.h
===================================================================
--- linux-2.6-sched-devel.orig/include/asm-avr32/thread_info.h 2008-04-16 12:29:24.000000000 -0400
+++ linux-2.6-sched-devel/include/asm-avr32/thread_info.h 2008-04-16 12:29:42.000000000 -0400
@@ -70,7 +70,7 @@ static inline struct thread_info *curren
#endif /* !__ASSEMBLY__ */
-#define PREEMPT_ACTIVE 0x40000000
+#define PREEMPT_ACTIVE 0x10000000
/*
* Thread information flags
Index: linux-2.6-sched-devel/include/asm-x86/paravirt.h
===================================================================
--- linux-2.6-sched-devel.orig/include/asm-x86/paravirt.h 2008-04-16 11:07:24.000000000 -0400
+++ linux-2.6-sched-devel/include/asm-x86/paravirt.h 2008-04-16 12:29:42.000000000 -0400
@@ -1385,6 +1385,8 @@ static inline unsigned long __raw_local_
PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_iret), CLBR_NONE, \
jmp *%cs:pv_cpu_ops+PV_CPU_iret)
+#define INTERRUPT_RETURN_NMI_SAFE INTERRUPT_RETURN
+
#define DISABLE_INTERRUPTS(clobbers) \
PARA_SITE(PARA_PATCH(pv_irq_ops, PV_IRQ_irq_disable), clobbers, \
PV_SAVE_REGS; \
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 03/27] Check for breakpoint in text_poke to eliminate bug_on
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel; +Cc: Mathieu Desnoyers
[-- Attachment #1: check-for-breakpoint-in-text-poke-to-eliminate-bug-on.patch --]
[-- Type: text/plain, Size: 3304 bytes --]
It is OK to modify an instruction non-atomically (with multiple memory
accesses to a large and/or non-aligned instruction) *if and only if* we have
inserted a breakpoint at the beginning of the instruction.
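For illustration, the sequence this rule enables might look like the sketch
below. This is a hypothetical caller, not part of the patch: only text_poke()
and the 0xcc breakpoint opcode come from this series, and the int3 handler
that parks concurrent hitters is assumed to exist elsewhere:

	/* Sketch: rewrite a multi-byte instruction at addr with new_insn. */
	static void example_poke_insn(void *addr, const unsigned char *new_insn,
				      size_t len)
	{
		static const unsigned char int3 = 0xcc;	/* breakpoint opcode */

		text_poke(addr, &int3, 1);	/* 1. atomically arm the breakpoint */
		/* concurrent executions now trap into the (assumed) int3 handler */
		text_poke((char *)addr + 1, new_insn + 1, len - 1); /* 2. tail */
		text_poke(addr, new_insn, 1);	/* 3. atomically restore byte 0 */
	}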
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
---
arch/x86/kernel/alternative.c | 49 ++++++++++++++++++++++++------------------
1 file changed, 29 insertions(+), 20 deletions(-)
Index: linux-2.6-sched-devel/arch/x86/kernel/alternative.c
===================================================================
--- linux-2.6-sched-devel.orig/arch/x86/kernel/alternative.c 2008-04-16 17:17:59.000000000 -0400
+++ linux-2.6-sched-devel/arch/x86/kernel/alternative.c 2008-04-16 17:19:53.000000000 -0400
@@ -15,6 +15,7 @@
#include <asm/io.h>
#define MAX_PATCH_LEN (255-1)
+#define BREAKPOINT_INSTRUCTION 0xcc
#ifdef CONFIG_HOTPLUG_CPU
static int smp_alt_once;
@@ -505,37 +506,45 @@ void *text_poke_early(void *addr, const
* It means the size must be writable atomically and the address must be aligned
* in a way that permits an atomic write. It also makes sure we fit on a single
* page.
+ *
+ * It's ok to modify an instruction non-atomically (multiple memory accesses to
+ * a large and/or non aligned instruction) *if and only if* we have inserted a
+ * breakpoint at the beginning of the instruction and we are modifying the rest
+ * of the instruction.
*/
void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
{
unsigned long flags;
char *vaddr;
int nr_pages = 2;
+ struct page *pages[2];
+ int i;
- BUG_ON(len > sizeof(long));
- BUG_ON((((long)addr + len - 1) & ~(sizeof(long) - 1))
- - ((long)addr & ~(sizeof(long) - 1)));
- if (kernel_text_address((unsigned long)addr)) {
- struct page *pages[2] = { virt_to_page(addr),
- virt_to_page(addr + PAGE_SIZE) };
- if (!pages[1])
- nr_pages = 1;
- vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
- BUG_ON(!vaddr);
- local_irq_save(flags);
- memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
- local_irq_restore(flags);
- vunmap(vaddr);
+ if (*((uint8_t *)addr - 1) != BREAKPOINT_INSTRUCTION) {
+ BUG_ON(len > sizeof(long));
+ BUG_ON((((long)addr + len - 1) & ~(sizeof(long) - 1))
+ - ((long)addr & ~(sizeof(long) - 1)));
+ }
+ if (!core_kernel_text((unsigned long)addr)) {
+ pages[0] = vmalloc_to_page(addr);
+ pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
} else {
- /*
- * modules are in vmalloc'ed memory, always writable.
- */
- local_irq_save(flags);
- memcpy(addr, opcode, len);
- local_irq_restore(flags);
+ pages[0] = virt_to_page(addr);
+ pages[1] = virt_to_page(addr + PAGE_SIZE);
}
+ BUG_ON(!pages[0]);
+ if (!pages[1])
+ nr_pages = 1;
+ vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
+ BUG_ON(!vaddr);
+ local_irq_save(flags);
+ memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
+ local_irq_restore(flags);
+ vunmap(vaddr);
sync_core();
/* Could also do a CLFLUSH here to speed up CPU recovery; but
that causes hangs on some VIA CPUs. */
+ for (i = 0; i < len; i++)
+ BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
return addr;
}
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 04/27] Kprobes - use a mutex to protect the instruction pages list.
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Ananth N Mavinakayanahalli, Masami Hiramatsu,
hch, anil.s.keshavamurthy, davem
[-- Attachment #1: kprobes-use-mutex-for-insn-pages.patch --]
[-- Type: text/plain, Size: 3650 bytes --]
Protect the instruction pages list with a dedicated insn pages mutex, taken in
get_insn_slot() and free_insn_slot(). This makes sure that architectures that
do not need to call arch_remove_kprobe() do not take an unneeded kprobe mutex.
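For context, the typical arch-side callers that this mutex now serializes look
roughly like the sketch below (simplified, hypothetical versions of the
existing arch code; error handling and instruction fixup omitted):

	static int example_arch_prepare_kprobe(struct kprobe *p)
	{
		p->ainsn.insn = get_insn_slot();	/* takes kprobe_insn_mutex */
		if (!p->ainsn.insn)
			return -ENOMEM;
		/* ... copy and fix up the original instruction ... */
		return 0;
	}

	static void example_arch_remove_kprobe(struct kprobe *p)
	{
		free_insn_slot(p->ainsn.insn, 0);	/* also takes the mutex */
	}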
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
CC: hch@infradead.org
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
---
kernel/kprobes.c | 27 +++++++++++++++++++++------
1 file changed, 21 insertions(+), 6 deletions(-)
Index: linux-2.6-lttng/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/kernel/kprobes.c 2007-08-27 11:48:56.000000000 -0400
+++ linux-2.6-lttng/kernel/kprobes.c 2007-08-27 11:48:58.000000000 -0400
@@ -95,6 +95,10 @@ enum kprobe_slot_state {
SLOT_USED = 2,
};
+/*
+ * Protects the kprobe_insn_pages list. Can nest into kprobe_mutex.
+ */
+static DEFINE_MUTEX(kprobe_insn_mutex);
static struct hlist_head kprobe_insn_pages;
static int kprobe_garbage_slots;
static int collect_garbage_slots(void);
@@ -131,7 +135,9 @@ kprobe_opcode_t __kprobes *get_insn_slot
{
struct kprobe_insn_page *kip;
struct hlist_node *pos;
+ kprobe_opcode_t *ret;
+ mutex_lock(&kprobe_insn_mutex);
retry:
hlist_for_each_entry(kip, pos, &kprobe_insn_pages, hlist) {
if (kip->nused < INSNS_PER_PAGE) {
@@ -140,7 +146,8 @@ kprobe_opcode_t __kprobes *get_insn_slot
if (kip->slot_used[i] == SLOT_CLEAN) {
kip->slot_used[i] = SLOT_USED;
kip->nused++;
- return kip->insns + (i * MAX_INSN_SIZE);
+ ret = kip->insns + (i * MAX_INSN_SIZE);
+ goto end;
}
}
/* Surprise! No unused slots. Fix kip->nused. */
@@ -154,8 +161,10 @@ kprobe_opcode_t __kprobes *get_insn_slot
}
/* All out of space. Need to allocate a new page. Use slot 0. */
kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_KERNEL);
- if (!kip)
- return NULL;
+ if (!kip) {
+ ret = NULL;
+ goto end;
+ }
/*
* Use module_alloc so this page is within +/- 2GB of where the
@@ -165,7 +174,8 @@ kprobe_opcode_t __kprobes *get_insn_slot
kip->insns = module_alloc(PAGE_SIZE);
if (!kip->insns) {
kfree(kip);
- return NULL;
+ ret = NULL;
+ goto end;
}
INIT_HLIST_NODE(&kip->hlist);
hlist_add_head(&kip->hlist, &kprobe_insn_pages);
@@ -173,7 +183,10 @@ kprobe_opcode_t __kprobes *get_insn_slot
kip->slot_used[0] = SLOT_USED;
kip->nused = 1;
kip->ngarbage = 0;
- return kip->insns;
+ ret = kip->insns;
+end:
+ mutex_unlock(&kprobe_insn_mutex);
+ return ret;
}
/* Return 1 if all garbages are collected, otherwise 0. */
@@ -207,7 +220,7 @@ static int __kprobes collect_garbage_slo
struct kprobe_insn_page *kip;
struct hlist_node *pos, *next;
- /* Ensure no-one is preepmted on the garbages */
+ /* Ensure no-one is preempted on the garbages */
if (check_safety() != 0)
return -EAGAIN;
@@ -231,6 +244,7 @@ void __kprobes free_insn_slot(kprobe_opc
struct kprobe_insn_page *kip;
struct hlist_node *pos;
+ mutex_lock(&kprobe_insn_mutex);
hlist_for_each_entry(kip, pos, &kprobe_insn_pages, hlist) {
if (kip->insns <= slot &&
slot < kip->insns + (INSNS_PER_PAGE * MAX_INSN_SIZE)) {
@@ -247,6 +261,7 @@ void __kprobes free_insn_slot(kprobe_opc
if (dirty && ++kprobe_garbage_slots > INSNS_PER_PAGE)
collect_garbage_slots();
+ mutex_unlock(&kprobe_insn_mutex);
}
#endif
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 05/27] Kprobes - do not use kprobes mutex in arch code
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Ananth N Mavinakayanahalli, Masami Hiramatsu,
anil.s.keshavamurthy, davem
[-- Attachment #1: kprobes-dont-use-kprobes-mutex-in-arch-code.patch --]
[-- Type: text/plain, Size: 4151 bytes --]
Remove the kprobe_mutex declaration from kprobes.h, since it does not belong
there. Also remove all use of this mutex in the architecture-specific code,
replacing it with proper mutex lock/unlock in the architecture-agnostic code.
Changelog:
- Remove the unnecessary kprobe_mutex around arch_remove_kprobe().
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
---
arch/ia64/kernel/kprobes.c | 2 --
arch/powerpc/kernel/kprobes.c | 2 --
arch/s390/kernel/kprobes.c | 2 --
arch/x86/kernel/kprobes.c | 2 --
include/linux/kprobes.h | 2 --
kernel/kprobes.c | 2 ++
6 files changed, 2 insertions(+), 10 deletions(-)
Index: linux-2.6-lttng/include/linux/kprobes.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/kprobes.h 2008-04-08 11:59:57.000000000 -0400
+++ linux-2.6-lttng/include/linux/kprobes.h 2008-04-08 12:01:39.000000000 -0400
@@ -35,7 +35,6 @@
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/rcupdate.h>
-#include <linux/mutex.h>
#ifdef CONFIG_KPROBES
#include <asm/kprobes.h>
@@ -195,7 +194,6 @@ static inline int init_test_probes(void)
#endif /* CONFIG_KPROBES_SANITY_TEST */
extern spinlock_t kretprobe_lock;
-extern struct mutex kprobe_mutex;
extern int arch_prepare_kprobe(struct kprobe *p);
extern void arch_arm_kprobe(struct kprobe *p);
extern void arch_disarm_kprobe(struct kprobe *p);
Index: linux-2.6-lttng/arch/x86/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/x86/kernel/kprobes.c 2008-04-08 11:59:57.000000000 -0400
+++ linux-2.6-lttng/arch/x86/kernel/kprobes.c 2008-04-08 12:01:39.000000000 -0400
@@ -376,9 +376,7 @@ void __kprobes arch_disarm_kprobe(struct
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
- mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn, (p->ainsn.boostable == 1));
- mutex_unlock(&kprobe_mutex);
}
static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
Index: linux-2.6-lttng/arch/ia64/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/ia64/kernel/kprobes.c 2008-04-08 11:59:57.000000000 -0400
+++ linux-2.6-lttng/arch/ia64/kernel/kprobes.c 2008-04-08 12:01:39.000000000 -0400
@@ -583,9 +583,7 @@ void __kprobes arch_disarm_kprobe(struct
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
- mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn, 0);
- mutex_unlock(&kprobe_mutex);
}
/*
* We are resuming execution after a single step fault, so the pt_regs
Index: linux-2.6-lttng/arch/powerpc/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/powerpc/kernel/kprobes.c 2008-04-08 11:59:57.000000000 -0400
+++ linux-2.6-lttng/arch/powerpc/kernel/kprobes.c 2008-04-08 12:01:39.000000000 -0400
@@ -88,9 +88,7 @@ void __kprobes arch_disarm_kprobe(struct
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
- mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn, 0);
- mutex_unlock(&kprobe_mutex);
}
static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
Index: linux-2.6-lttng/arch/s390/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/arch/s390/kernel/kprobes.c 2008-04-08 11:59:57.000000000 -0400
+++ linux-2.6-lttng/arch/s390/kernel/kprobes.c 2008-04-08 12:01:39.000000000 -0400
@@ -220,9 +220,7 @@ void __kprobes arch_disarm_kprobe(struct
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
- mutex_lock(&kprobe_mutex);
free_insn_slot(p->ainsn.insn, 0);
- mutex_unlock(&kprobe_mutex);
}
static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 06/27] Kprobes - declare kprobe_mutex static
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Ananth N Mavinakayanahalli, Masami Hiramatsu,
hch, anil.s.keshavamurthy, davem
[-- Attachment #1: kprobes-declare-kprobes-mutex-static.patch --]
[-- Type: text/plain, Size: 1254 bytes --]
Since it will not be used by other kernel objects, it makes sense to declare it
static.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
CC: hch@infradead.org
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
---
kernel/kprobes.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6-lttng/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/kernel/kprobes.c 2007-08-19 09:09:15.000000000 -0400
+++ linux-2.6-lttng/kernel/kprobes.c 2007-08-19 17:18:07.000000000 -0400
@@ -68,7 +68,7 @@ static struct hlist_head kretprobe_inst_
/* NOTE: change this value only with kprobe_mutex held */
static bool kprobe_enabled;
-DEFINE_MUTEX(kprobe_mutex); /* Protects kprobe_table */
+static DEFINE_MUTEX(kprobe_mutex); /* Protects kprobe_table */
DEFINE_SPINLOCK(kretprobe_lock); /* Protects kretprobe_inst_table */
static DEFINE_PER_CPU(struct kprobe *, kprobe_instance) = NULL;
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 07/27] Text Edit Lock - Architecture Independent Code
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel; +Cc: Mathieu Desnoyers, Andi Kleen
[-- Attachment #1: text-edit-lock-architecture-independent-code.patch --]
[-- Type: text/plain, Size: 3170 bytes --]
This is an architecture-independent synchronization around kernel text
modifications, implemented with a global mutex.
A mutex has been chosen so that kprobes, the main user of this, can sleep
during memory allocation between the memory read of the instructions it must
replace and the memory write of the breakpoint.
Another user of this interface: immediate values.
Paravirt and alternative patching are always done while SMP is inactive, so
there is no need to take the lock there.
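A minimal usage sketch of the new interface (hypothetical caller;
kernel_text_lock()/kernel_text_unlock() are from this patch and text_poke() is
the x86 patching primitive touched earlier in the series):

	/* Sketch: serialize a code-patching site against other text editors. */
	static void example_patch_site(void *addr, const void *opcode, size_t len)
	{
		kernel_text_lock();		/* may sleep */
		text_poke(addr, opcode, len);
		kernel_text_unlock();
	}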
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <andi@firstfloor.org>
CC: Ingo Molnar <mingo@elte.hu>
---
include/linux/memory.h | 7 +++++++
mm/memory.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 41 insertions(+)
Index: linux-2.6-lttng/include/linux/memory.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/memory.h 2008-04-08 12:01:44.000000000 -0400
+++ linux-2.6-lttng/include/linux/memory.h 2008-04-08 12:01:56.000000000 -0400
@@ -93,4 +93,11 @@ extern int memory_notify(unsigned long v
#define hotplug_memory_notifier(fn, pri) do { } while (0)
#endif
+/*
+ * Take and release the kernel text modification lock, used for code patching.
+ * Users of this lock can sleep.
+ */
+extern void kernel_text_lock(void);
+extern void kernel_text_unlock(void);
+
#endif /* _LINUX_MEMORY_H_ */
Index: linux-2.6-lttng/mm/memory.c
===================================================================
--- linux-2.6-lttng.orig/mm/memory.c 2008-04-08 12:01:44.000000000 -0400
+++ linux-2.6-lttng/mm/memory.c 2008-04-08 12:01:56.000000000 -0400
@@ -51,6 +51,8 @@
#include <linux/init.h>
#include <linux/writeback.h>
#include <linux/memcontrol.h>
+#include <linux/kprobes.h>
+#include <linux/mutex.h>
#include <asm/pgalloc.h>
#include <asm/uaccess.h>
@@ -96,6 +98,12 @@ int randomize_va_space __read_mostly =
2;
#endif
+/*
+ * mutex protecting text section modification (dynamic code patching).
+ * some users need to sleep (allocating memory...) while they hold this lock.
+ */
+static DEFINE_MUTEX(text_mutex);
+
static int __init disable_randmaps(char *s)
{
randomize_va_space = 0;
@@ -2737,3 +2745,29 @@ void print_vma_addr(char *prefix, unsign
}
up_read(¤t->mm->mmap_sem);
}
+
+/**
+ * kernel_text_lock - Take the kernel text modification lock
+ *
+ * Insures mutual write exclusion of kernel and modules text live text
+ * modification. Should be used for code patching.
+ * Users of this lock can sleep.
+ */
+void __kprobes kernel_text_lock(void)
+{
+ mutex_lock(&text_mutex);
+}
+EXPORT_SYMBOL_GPL(kernel_text_lock);
+
+/**
+ * kernel_text_unlock - Release the kernel text modification lock
+ *
+ * Insures mutual write exclusion of kernel and modules text live text
+ * modification. Should be used for code patching.
+ * Users of this lock can sleep.
+ */
+void __kprobes kernel_text_unlock(void)
+{
+ mutex_unlock(&text_mutex);
+}
+EXPORT_SYMBOL_GPL(kernel_text_unlock);
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 08/27] Text Edit Lock - kprobes architecture independent support
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Ananth N Mavinakayanahalli,
anil.s.keshavamurthy, davem, Roel Kluin
[-- Attachment #1: text-edit-lock-kprobes-architecture-independent-support.patch --]
[-- Type: text/plain, Size: 3077 bytes --]
Use the mutual exclusion provided by the text edit lock in the kprobes code.
This allows other subsystems to manipulate the kernel code coherently.
Changelog:
- Move the kernel_text_lock()/kernel_text_unlock() calls out of the for loops.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
CC: ananth@in.ibm.com
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
CC: Roel Kluin <12o3l@tiscali.nl>
---
kernel/kprobes.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
Index: linux-2.6-lttng/kernel/kprobes.c
===================================================================
--- linux-2.6-lttng.orig/kernel/kprobes.c 2008-04-09 10:52:51.000000000 -0400
+++ linux-2.6-lttng/kernel/kprobes.c 2008-04-09 10:52:57.000000000 -0400
@@ -43,6 +43,7 @@
#include <linux/seq_file.h>
#include <linux/debugfs.h>
#include <linux/kdebug.h>
+#include <linux/memory.h>
#include <asm-generic/sections.h>
#include <asm/cacheflush.h>
@@ -577,9 +578,10 @@ static int __kprobes __register_kprobe(s
goto out;
}
+ kernel_text_lock();
ret = arch_prepare_kprobe(p);
if (ret)
- goto out;
+ goto out_unlock_text;
INIT_HLIST_NODE(&p->hlist);
hlist_add_head_rcu(&p->hlist,
@@ -587,7 +589,8 @@ static int __kprobes __register_kprobe(s
if (kprobe_enabled)
arch_arm_kprobe(p);
-
+out_unlock_text:
+ kernel_text_unlock();
out:
mutex_unlock(&kprobe_mutex);
@@ -630,8 +633,11 @@ valid_p:
* enabled - otherwise, the breakpoint would already have
* been removed. We save on flushing icache.
*/
- if (kprobe_enabled)
+ if (kprobe_enabled) {
+ kernel_text_lock();
arch_disarm_kprobe(p);
+ kernel_text_unlock();
+ }
hlist_del_rcu(&old_p->hlist);
cleanup_p = 1;
} else {
@@ -729,7 +735,6 @@ static int __kprobes pre_handler_kretpro
}
arch_prepare_kretprobe(ri, regs);
-
/* XXX(hch): why is there no hlist_move_head? */
hlist_del(&ri->uflist);
hlist_add_head(&ri->uflist, &ri->rp->used_instances);
@@ -951,11 +956,13 @@ static void __kprobes enable_all_kprobes
if (kprobe_enabled)
goto already_enabled;
+ kernel_text_lock();
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
head = &kprobe_table[i];
hlist_for_each_entry_rcu(p, node, head, hlist)
arch_arm_kprobe(p);
}
+ kernel_text_unlock();
kprobe_enabled = true;
printk(KERN_INFO "Kprobes globally enabled\n");
@@ -980,6 +987,7 @@ static void __kprobes disable_all_kprobe
kprobe_enabled = false;
printk(KERN_INFO "Kprobes globally disabled\n");
+ kernel_text_lock();
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
head = &kprobe_table[i];
hlist_for_each_entry_rcu(p, node, head, hlist) {
@@ -987,6 +995,7 @@ static void __kprobes disable_all_kprobe
arch_disarm_kprobe(p);
}
}
+ kernel_text_unlock();
mutex_unlock(&kprobe_mutex);
/* Allow all currently running kprobes to complete */
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 09/27] Add all cpus option to stop machine run
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Jason Baron, Mathieu Desnoyers, Rusty Russell, Adrian Bunk,
Andi Kleen, Christoph Hellwig, akpm
[-- Attachment #1: add-all-cpus-option-to-stop-machine-run.patch --]
[-- Type: text/plain, Size: 4368 bytes --]
Allow stop_machine_run() to call a function on all cpus. Calling
stop_machine_run() with 'ALL_CPUS' as the cpu argument invokes this new
behavior. stop_machine_run() proceeds as normal until the calling cpu has
invoked 'fn'. Then we tell all the other cpus to call 'fn'.
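A hypothetical usage sketch (the callback name and body are made up;
stop_machine_run() and ALL_CPUS are from this patch):

	/* Runs on every cpu, with interrupts disabled, machine stopped. */
	static int example_patch_fn(void *data)
	{
		/* e.g. apply a code or per-cpu state update locally */
		return 0;
	}

	static void example_update_all_cpus(void)
	{
		/* runs first on the calling cpu, then concurrently on the rest */
		stop_machine_run(example_patch_fn, NULL, ALL_CPUS);
	}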
Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Adrian Bunk <bunk@stusta.de>
CC: Andi Kleen <andi@firstfloor.org>
CC: Christoph Hellwig <hch@infradead.org>
CC: mingo@elte.hu
CC: akpm@osdl.org
---
include/linux/stop_machine.h | 8 +++++++-
kernel/stop_machine.c | 31 ++++++++++++++++++++++++-------
2 files changed, 31 insertions(+), 8 deletions(-)
Index: linux-2.6-sched-devel/include/linux/stop_machine.h
===================================================================
--- linux-2.6-sched-devel.orig/include/linux/stop_machine.h 2008-04-16 11:07:24.000000000 -0400
+++ linux-2.6-sched-devel/include/linux/stop_machine.h 2008-04-16 11:13:48.000000000 -0400
@@ -8,11 +8,17 @@
#include <asm/system.h>
#if defined(CONFIG_STOP_MACHINE) && defined(CONFIG_SMP)
+
+#define ALL_CPUS ~0U
+
/**
* stop_machine_run: freeze the machine on all CPUs and run this function
* @fn: the function to run
* @data: the data ptr for the @fn()
- * @cpu: the cpu to run @fn() on (or any, if @cpu == NR_CPUS.
+ * @cpu: if @cpu == n, run @fn() on cpu n
+ * if @cpu == NR_CPUS, run @fn() on any cpu
+ * if @cpu == ALL_CPUS, run @fn() first on the calling cpu, and then
+ * concurrently on all the other cpus
*
* Description: This causes a thread to be scheduled on every other cpu,
* each of which disables interrupts, and finally interrupts are disabled
Index: linux-2.6-sched-devel/kernel/stop_machine.c
===================================================================
--- linux-2.6-sched-devel.orig/kernel/stop_machine.c 2008-04-16 11:07:24.000000000 -0400
+++ linux-2.6-sched-devel/kernel/stop_machine.c 2008-04-16 11:13:48.000000000 -0400
@@ -23,9 +23,17 @@ enum stopmachine_state {
STOPMACHINE_WAIT,
STOPMACHINE_PREPARE,
STOPMACHINE_DISABLE_IRQ,
+ STOPMACHINE_RUN,
STOPMACHINE_EXIT,
};
+struct stop_machine_data {
+ int (*fn)(void *);
+ void *data;
+ struct completion done;
+ int run_all;
+} smdata;
+
static enum stopmachine_state stopmachine_state;
static unsigned int stopmachine_num_threads;
static atomic_t stopmachine_thread_ack;
@@ -34,6 +42,7 @@ static int stopmachine(void *cpu)
{
int irqs_disabled = 0;
int prepared = 0;
+ int ran = 0;
set_cpus_allowed_ptr(current, &cpumask_of_cpu((int)(long)cpu));
@@ -58,6 +67,11 @@ static int stopmachine(void *cpu)
prepared = 1;
smp_mb(); /* Must read state first. */
atomic_inc(&stopmachine_thread_ack);
+ } else if (stopmachine_state == STOPMACHINE_RUN && !ran) {
+ smdata.fn(smdata.data);
+ ran = 1;
+ smp_mb(); /* Must read state first. */
+ atomic_inc(&stopmachine_thread_ack);
}
/* Yield in first stage: migration threads need to
* help our sisters onto their CPUs. */
@@ -135,12 +149,10 @@ static void restart_machine(void)
preempt_enable_no_resched();
}
-struct stop_machine_data
+static void run_other_cpus(void)
{
- int (*fn)(void *);
- void *data;
- struct completion done;
-};
+ stopmachine_set_state(STOPMACHINE_RUN);
+}
static int do_stop(void *_smdata)
{
@@ -150,6 +162,8 @@ static int do_stop(void *_smdata)
ret = stop_machine();
if (ret == 0) {
ret = smdata->fn(smdata->data);
+ if (smdata->run_all)
+ run_other_cpus();
restart_machine();
}
@@ -173,14 +187,17 @@ struct task_struct *__stop_machine_run(i
struct stop_machine_data smdata;
struct task_struct *p;
+ mutex_lock(&stopmachine_mutex);
+
smdata.fn = fn;
smdata.data = data;
+ smdata.run_all = (cpu == ALL_CPUS) ? 1 : 0;
init_completion(&smdata.done);
- mutex_lock(&stopmachine_mutex);
+ smp_wmb(); /* make sure other cpus see smdata updates */
/* If they don't care which CPU fn runs on, bind to any online one. */
- if (cpu == NR_CPUS)
+ if (cpu == NR_CPUS || cpu == ALL_CPUS)
cpu = raw_smp_processor_id();
p = kthread_create(do_stop, &smdata, "kstopmachine");
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
* [RFC patch 10/27] Immediate Values - Architecture Independent Code
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Jason Baron, Rusty Russell, Adrian Bunk,
Andi Kleen, Christoph Hellwig, akpm
[-- Attachment #1: immediate-values-architecture-independent-code.patch --]
[-- Type: text/plain, Size: 18602 bytes --]
Immediate values are used as read-mostly variables that are rarely updated.
They use code patching to modify the values inscribed in the instruction
stream. This provides a way to save precious cache lines that would otherwise
have to be used by these variables.
There is a generic _imv_read() version, which uses standard global variables,
and optimized per-architecture imv_read() implementations, which use a load
immediate to remove a data cache hit. When the immediate values functionality
is disabled in the kernel, it falls back to global variables.
It adds a new rodata section "__imv" holding the pointers to the enable
values. The immediate values activation functions sit in kernel/immediate.c.
Immediate values refer to the memory address of a previously declared integer.
This integer holds the information about the state of the associated immediate
values and must be accessed through the API found in linux/immediate.h.
At module load time, each immediate value is checked to see if it must be
enabled. That is the case if the variable it refers to is exported from
another module and already enabled.
In the early stages of start_kernel(), the immediate values are updated to
reflect the state of the variables they refer to.
* Why should this be merged *
It improves performance on heavy memory I/O workloads.
An interesting result shows the potential of this infrastructure by measuring
the slowdown a simple system call such as getppid() suffers when it is used
under heavy user-space cache thrashing:
Random-walk L1 and L2 thrashing surrounding a getppid() call:
(note: in this test, do_syscall_trace was taken at each system call; see
Documentation/immediate.txt in these patches for details)
- No memory pressure: getppid() takes 1573 cycles
- With memory pressure: getppid() takes 15589 cycles
We therefore have a slowdown of 10 times just to get the kernel variables from
memory. Another test on the same architecture (Intel P4) measured the memory
latency to be 559 cycles. Therefore, each cache line removed from the hot path
would improve the syscall time by 3.5% under these conditions.
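A minimal usage sketch of the API described above (the variable and function
names are made up; DEFINE_IMV(), imv_read() and imv_set() come from
linux/immediate.h in this patch):

	extern void my_slow_path(void);		/* hypothetical */

	/* Declare a read-mostly flag backed by an immediate value. */
	DEFINE_IMV(char, my_feature_enabled) = 0;

	void my_hot_path(void)
	{
		/* becomes a patched load immediate when CONFIG_IMMEDIATE=y */
		if (unlikely(imv_read(my_feature_enabled)))
			my_slow_path();
	}

	void my_enable_feature(void)
	{
		/* updates every imv_read(my_feature_enabled) site */
		imv_set(my_feature_enabled, 1);
	}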
Changelog:
- Section __imv is already SHF_ALLOC.
- Because of the wonders of ELF, section 0 has sh_addr and sh_size 0, so the
if (immediateindex) check is unnecessary here.
- Remove module_mutex usage: depend on functions implemented in module.c for
that.
- Do not update a tainted module's immediate values.
- Remove the imv_*_t types, add DECLARE_IMV() and DEFINE_IMV().
- imv_read(&var) becomes imv_read(var) because of this.
- Add a new EXPORT_IMV_SYMBOL(_GPL).
- Remove imv_if(). Use if (unlikely(imv_read(var))) instead.
- Wait until we have gcc support before we add the imv_if macro, since its
form may have to change.
- Don't declare the __imv section in vmlinux.lds.h, just put the content in
the rodata section.
- Simplify the interface: remove imv_set_early, keep track of kernel boot
status internally.
- Remove the ALIGN(8) before the __imv section. It is packed now.
- Use an IPI busy-loop on each CPU with interrupts disabled as a simple,
architecture-agnostic, update mechanism.
- Use imv_* instead of immediate_*.
- When updating immediate values, we cannot rely on smp_call_function(),
because synchronizing cpus using IPIs leads to deadlocks: process A held a
read lock on tasklist_lock, then process B called apply_imv_update(). Process
A received the IPI and began executing ipi_busy_loop(). Then process C took a
write_lock_irq on tasklist_lock before receiving the IPI. Thus process A holds
up process C, and C can't get the IPI because interrupts are disabled. Solve
this problem by using the new 'ALL_CPUS' parameter to stop_machine_run(),
which runs a function on all cpus after they are busy looping and have
disabled irqs. Since this is done in a new process context, we don't have to
worry about interrupted spinlocks. It is also fewer lines of code. Has
survived 24+ hours of testing...
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Jason Baron <jbaron@redhat.com>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Adrian Bunk <bunk@stusta.de>
CC: Andi Kleen <andi@firstfloor.org>
CC: Christoph Hellwig <hch@infradead.org>
CC: mingo@elte.hu
CC: akpm@osdl.org
---
include/asm-generic/vmlinux.lds.h | 3
include/linux/immediate.h | 94 +++++++++++++++++++++++
include/linux/module.h | 16 ++++
init/main.c | 8 ++
kernel/Makefile | 1
kernel/immediate.c | 149 ++++++++++++++++++++++++++++++++++++++
kernel/module.c | 50 ++++++++++++
7 files changed, 320 insertions(+), 1 deletion(-)
Index: linux-2.6-sched-devel/include/linux/immediate.h
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-sched-devel/include/linux/immediate.h 2008-04-16 11:14:29.000000000 -0400
@@ -0,0 +1,94 @@
+#ifndef _LINUX_IMMEDIATE_H
+#define _LINUX_IMMEDIATE_H
+
+/*
+ * Immediate values, can be updated at runtime and save cache lines.
+ *
+ * (C) Copyright 2007 Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
+ *
+ * This file is released under the GPLv2.
+ * See the file COPYING for more details.
+ */
+
+#ifdef CONFIG_IMMEDIATE
+
+struct __imv {
+ unsigned long var; /* Pointer to the identifier variable of the
+ * immediate value
+ */
+ unsigned long imv; /*
+ * Pointer to the memory location of the
+ * immediate value within the instruction.
+ */
+ unsigned char size; /* Type size. */
+} __attribute__ ((packed));
+
+#include <asm/immediate.h>
+
+/**
+ * imv_set - set immediate variable (with locking)
+ * @name: immediate value name
+ * @i: required value
+ *
+ * Sets the value of @name, taking the module_mutex if required by
+ * the architecture.
+ */
+#define imv_set(name, i) \
+ do { \
+ name##__imv = (i); \
+ core_imv_update(); \
+ module_imv_update(); \
+ } while (0)
+
+/*
+ * Internal update functions.
+ */
+extern void core_imv_update(void);
+extern void imv_update_range(const struct __imv *begin,
+ const struct __imv *end);
+
+#else
+
+/*
+ * Generic immediate values: a simple, standard, memory load.
+ */
+
+/**
+ * imv_read - read immediate variable
+ * @name: immediate value name
+ *
+ * Reads the value of @name.
+ */
+#define imv_read(name) _imv_read(name)
+
+/**
+ * imv_set - set immediate variable (with locking)
+ * @name: immediate value name
+ * @i: required value
+ *
+ * Sets the value of @name, taking the module_mutex if required by
+ * the architecture.
+ */
+#define imv_set(name, i) (name##__imv = (i))
+
+static inline void core_imv_update(void) { }
+static inline void module_imv_update(void) { }
+
+#endif
+
+#define DECLARE_IMV(type, name) extern __typeof__(type) name##__imv
+#define DEFINE_IMV(type, name) __typeof__(type) name##__imv
+
+#define EXPORT_IMV_SYMBOL(name) EXPORT_SYMBOL(name##__imv)
+#define EXPORT_IMV_SYMBOL_GPL(name) EXPORT_SYMBOL_GPL(name##__imv)
+
+/**
+ * _imv_read - Read immediate value with standard memory load.
+ * @name: immediate value name
+ *
+ * Force a data read of the immediate value instead of the immediate value
+ * based mechanism. Useful for __init and __exit section data read.
+ */
+#define _imv_read(name) (name##__imv)
+
+#endif
Index: linux-2.6-sched-devel/include/linux/module.h
===================================================================
--- linux-2.6-sched-devel.orig/include/linux/module.h 2008-04-16 11:07:24.000000000 -0400
+++ linux-2.6-sched-devel/include/linux/module.h 2008-04-16 11:14:29.000000000 -0400
@@ -15,6 +15,7 @@
#include <linux/stringify.h>
#include <linux/kobject.h>
#include <linux/moduleparam.h>
+#include <linux/immediate.h>
#include <linux/marker.h>
#include <asm/local.h>
@@ -355,6 +356,10 @@ struct module
/* The command line arguments (may be mangled). People like
keeping pointers to this stuff */
char *args;
+#ifdef CONFIG_IMMEDIATE
+ const struct __imv *immediate;
+ unsigned int num_immediate;
+#endif
#ifdef CONFIG_MARKERS
struct marker *markers;
unsigned int num_markers;
@@ -467,6 +472,9 @@ extern void print_modules(void);
extern void module_update_markers(void);
+extern void _module_imv_update(void);
+extern void module_imv_update(void);
+
#else /* !CONFIG_MODULES... */
#define EXPORT_SYMBOL(sym)
#define EXPORT_SYMBOL_GPL(sym)
@@ -571,6 +579,14 @@ static inline void module_update_markers
{
}
+static inline void _module_imv_update(void)
+{
+}
+
+static inline void module_imv_update(void)
+{
+}
+
#endif /* CONFIG_MODULES */
struct device_driver;
Index: linux-2.6-sched-devel/kernel/module.c
===================================================================
--- linux-2.6-sched-devel.orig/kernel/module.c 2008-04-16 11:10:44.000000000 -0400
+++ linux-2.6-sched-devel/kernel/module.c 2008-04-16 11:15:32.000000000 -0400
@@ -33,6 +33,7 @@
#include <linux/cpu.h>
#include <linux/moduleparam.h>
#include <linux/errno.h>
+#include <linux/immediate.h>
#include <linux/err.h>
#include <linux/vermagic.h>
#include <linux/notifier.h>
@@ -1716,6 +1717,7 @@ static struct module *load_module(void _
unsigned int unusedcrcindex;
unsigned int unusedgplindex;
unsigned int unusedgplcrcindex;
+ unsigned int immediateindex;
unsigned int markersindex;
unsigned int markersstringsindex;
struct module *mod;
@@ -1814,6 +1816,7 @@ static struct module *load_module(void _
#ifdef ARCH_UNWIND_SECTION_NAME
unwindex = find_sec(hdr, sechdrs, secstrings, ARCH_UNWIND_SECTION_NAME);
#endif
+ immediateindex = find_sec(hdr, sechdrs, secstrings, "__imv");
/* Don't keep modinfo section */
sechdrs[infoindex].sh_flags &= ~(unsigned long)SHF_ALLOC;
@@ -1972,6 +1975,11 @@ static struct module *load_module(void _
mod->gpl_future_syms = (void *)sechdrs[gplfutureindex].sh_addr;
if (gplfuturecrcindex)
mod->gpl_future_crcs = (void *)sechdrs[gplfuturecrcindex].sh_addr;
+#ifdef CONFIG_IMMEDIATE
+ mod->immediate = (void *)sechdrs[immediateindex].sh_addr;
+ mod->num_immediate =
+ sechdrs[immediateindex].sh_size / sizeof(*mod->immediate);
+#endif
mod->unused_syms = (void *)sechdrs[unusedindex].sh_addr;
if (unusedcrcindex)
@@ -2039,11 +2047,16 @@ static struct module *load_module(void _
add_kallsyms(mod, sechdrs, symindex, strindex, secstrings);
+ if (!mod->taints) {
#ifdef CONFIG_MARKERS
- if (!mod->taints)
marker_update_probe_range(mod->markers,
mod->markers + mod->num_markers);
#endif
+#ifdef CONFIG_IMMEDIATE
+ imv_update_range(mod->immediate,
+ mod->immediate + mod->num_immediate);
+#endif
+ }
err = module_finalize(hdr, sechdrs, mod);
if (err < 0)
goto cleanup;
@@ -2589,3 +2602,38 @@ void module_update_markers(void)
mutex_unlock(&module_mutex);
}
#endif
+
+#ifdef CONFIG_IMMEDIATE
+/**
+ * _module_imv_update - update immediate values in all loaded modules
+ *
+ * Iterate over the loaded modules to update their immediate values.
+ * module_mutex must be held by the caller.
+ */
+void _module_imv_update(void)
+{
+ struct module *mod;
+
+ list_for_each_entry(mod, &modules, list) {
+ if (mod->taints)
+ continue;
+ imv_update_range(mod->immediate,
+ mod->immediate + mod->num_immediate);
+ }
+}
+EXPORT_SYMBOL_GPL(_module_imv_update);
+
+/**
+ * module_imv_update - update immediate values in all loaded modules
+ *
+ * Iterate over the loaded modules to update their immediate values.
+ * Takes module_mutex.
+ */
+void module_imv_update(void)
+{
+ mutex_lock(&module_mutex);
+ _module_imv_update();
+ mutex_unlock(&module_mutex);
+}
+EXPORT_SYMBOL_GPL(module_imv_update);
+#endif
Index: linux-2.6-sched-devel/kernel/immediate.c
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-sched-devel/kernel/immediate.c 2008-04-16 11:14:29.000000000 -0400
@@ -0,0 +1,149 @@
+/*
+ * Copyright (C) 2007 Mathieu Desnoyers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/immediate.h>
+#include <linux/memory.h>
+#include <linux/cpu.h>
+#include <linux/stop_machine.h>
+
+#include <asm/cacheflush.h>
+
+/*
+ * Kernel ready to execute the SMP update that may depend on trap and ipi.
+ */
+static int imv_early_boot_complete;
+static int wrote_text;
+
+extern const struct __imv __start___imv[];
+extern const struct __imv __stop___imv[];
+
+static int stop_machine_imv_update(void *imv_ptr)
+{
+ struct __imv *imv = imv_ptr;
+
+ if (!wrote_text) {
+ text_poke((void *)imv->imv, (void *)imv->var, imv->size);
+ wrote_text = 1;
+ smp_wmb(); /* make sure other cpus see that this has run */
+ } else
+ sync_core();
+
+ flush_icache_range(imv->imv, imv->imv + imv->size);
+
+ return 0;
+}
+
+/*
+ * imv_mutex nests inside module_mutex. imv_mutex protects builtin
+ * immediates and module immediates.
+ */
+static DEFINE_MUTEX(imv_mutex);
+
+
+/**
+ * apply_imv_update - update one immediate value
+ * @imv: pointer of type const struct __imv to update
+ *
+ * Update one immediate value. Must be called with imv_mutex held.
+ * It makes sure no CPU is executing the modified code by having all CPUs
+ * busy-loop with interrupts disabled.
+ * It does _not_ protect against NMI and MCE (could be a problem with Intel's
+ * errata if we use immediate values in their code path).
+ */
+static int apply_imv_update(const struct __imv *imv)
+{
+ /*
+ * If the variable and the instruction have the same value, there is
+ * nothing to do.
+ */
+ switch (imv->size) {
+ case 1: if (*(uint8_t *)imv->imv
+ == *(uint8_t *)imv->var)
+ return 0;
+ break;
+ case 2: if (*(uint16_t *)imv->imv
+ == *(uint16_t *)imv->var)
+ return 0;
+ break;
+ case 4: if (*(uint32_t *)imv->imv
+ == *(uint32_t *)imv->var)
+ return 0;
+ break;
+ case 8: if (*(uint64_t *)imv->imv
+ == *(uint64_t *)imv->var)
+ return 0;
+ break;
+ default:return -EINVAL;
+ }
+
+ if (imv_early_boot_complete) {
+ kernel_text_lock();
+ wrote_text = 0;
+ stop_machine_run(stop_machine_imv_update, (void *)imv,
+ ALL_CPUS);
+ kernel_text_unlock();
+ } else
+ text_poke_early((void *)imv->imv, (void *)imv->var,
+ imv->size);
+ return 0;
+}
+
+/**
+ * imv_update_range - Update immediate values in a range
+ * @begin: pointer to the beginning of the range
+ * @end: pointer to the end of the range
+ *
+ * Updates a range of immediates.
+ */
+void imv_update_range(const struct __imv *begin,
+ const struct __imv *end)
+{
+ const struct __imv *iter;
+ int ret;
+ for (iter = begin; iter < end; iter++) {
+ mutex_lock(&imv_mutex);
+ ret = apply_imv_update(iter);
+ if (imv_early_boot_complete && ret)
+ printk(KERN_WARNING
+ "Invalid immediate value. "
+ "Variable at %p, "
+ "instruction at %p, size %hu\n",
+ (void *)iter->var,
+ (void *)iter->imv, iter->size);
+ mutex_unlock(&imv_mutex);
+ }
+}
+EXPORT_SYMBOL_GPL(imv_update_range);
+
+/**
+ * core_imv_update - update all immediate values in the core kernel
+ *
+ * Iterate over the core kernel's __imv table to update the immediate values.
+ */
+void core_imv_update(void)
+{
+ /* Core kernel imvs */
+ imv_update_range(__start___imv, __stop___imv);
+}
+EXPORT_SYMBOL_GPL(core_imv_update);
+
+void __init imv_init_complete(void)
+{
+ imv_early_boot_complete = 1;
+}
Index: linux-2.6-sched-devel/init/main.c
===================================================================
--- linux-2.6-sched-devel.orig/init/main.c 2008-04-16 11:07:24.000000000 -0400
+++ linux-2.6-sched-devel/init/main.c 2008-04-16 11:15:51.000000000 -0400
@@ -60,6 +60,7 @@
#include <linux/sched.h>
#include <linux/signal.h>
#include <linux/kmemcheck.h>
+#include <linux/immediate.h>
#include <asm/io.h>
#include <asm/bugs.h>
@@ -103,6 +104,11 @@ static inline void mark_rodata_ro(void)
#ifdef CONFIG_TC
extern void tc_init(void);
#endif
+#ifdef CONFIG_IMMEDIATE
+extern void imv_init_complete(void);
+#else
+static inline void imv_init_complete(void) { }
+#endif
enum system_states system_state;
EXPORT_SYMBOL(system_state);
@@ -547,6 +553,7 @@ asmlinkage void __init start_kernel(void
boot_init_stack_canary();
cgroup_init_early();
+ core_imv_update();
local_irq_disable();
early_boot_irqs_off();
@@ -671,6 +678,7 @@ asmlinkage void __init start_kernel(void
cpuset_init();
taskstats_init_early();
delayacct_init();
+ imv_init_complete();
check_bugs();
Index: linux-2.6-sched-devel/kernel/Makefile
===================================================================
--- linux-2.6-sched-devel.orig/kernel/Makefile 2008-04-16 11:07:24.000000000 -0400
+++ linux-2.6-sched-devel/kernel/Makefile 2008-04-16 11:14:29.000000000 -0400
@@ -75,6 +75,7 @@ obj-$(CONFIG_RELAY) += relay.o
obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o
+obj-$(CONFIG_IMMEDIATE) += immediate.o
obj-$(CONFIG_MARKERS) += marker.o
obj-$(CONFIG_LATENCYTOP) += latencytop.o
obj-$(CONFIG_FTRACE) += trace/
Index: linux-2.6-sched-devel/include/asm-generic/vmlinux.lds.h
===================================================================
--- linux-2.6-sched-devel.orig/include/asm-generic/vmlinux.lds.h 2008-04-16 11:07:23.000000000 -0400
+++ linux-2.6-sched-devel/include/asm-generic/vmlinux.lds.h 2008-04-16 11:14:29.000000000 -0400
@@ -61,6 +61,9 @@
*(.rodata) *(.rodata.*) \
*(__vermagic) /* Kernel version magic */ \
*(__markers_strings) /* Markers: strings */ \
+ VMLINUX_SYMBOL(__start___imv) = .; \
+ *(__imv) /* Immediate values: pointers */ \
+ VMLINUX_SYMBOL(__stop___imv) = .; \
} \
\
.rodata1 : AT(ADDR(.rodata1) - LOAD_OFFSET) { \
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 11/27] Immediate Values - Kconfig menu in EMBEDDED
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (9 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 10/27] Immediate Values - Architecture Independent Code Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 12/27] Immediate Values - x86 Optimization Mathieu Desnoyers
` (15 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Rusty Russell, Adrian Bunk, Andi Kleen,
Christoph Hellwig, akpm
[-- Attachment #1: immediate-values-kconfig-menu-in-embedded.patch --]
[-- Type: text/plain, Size: 2476 bytes --]
Immediate values provide a way to use dynamic code patching to update variables
sitting within the instruction stream. This saves the cache lines normally used
by static read-mostly variables. Enable it by default, but let users disable it
through the EMBEDDED menu via the "Immediate value optimization" entry.
Note: Since embedded systems developers using read-only memory really should
have the option to disable immediate values, I chose to leave this option in
the EMBEDDED menu. Also, "CONFIG_IMMEDIATE" makes sense because we want to
compile out all the immediate values code when we decide not to use the
optimization at all (it removes otherwise unused code).
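For illustration, here is a minimal usage sketch (my example, not part of the
patch; do_tracing() is a made-up callee) showing that code written against the
imv_* API builds identically with CONFIG_IMMEDIATE=y or =n; when the option is
disabled, imv_read() falls back to a plain memory read:

#include <linux/immediate.h>

DEFINE_IMV(char, tracing_on) = 0;

static void do_tracing(void) { }

static void hot_path(void)
{
        /* Patched immediate with CONFIG_IMMEDIATE=y, memory load otherwise. */
        if (unlikely(imv_read(tracing_on)))
                do_tracing();
}

static void control_path(int enable)
{
        /* Updates the variable and patches every imv_read() site. */
        imv_set(tracing_on, enable);
}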
Changelog:
- Change ARCH_SUPPORTS_IMMEDIATE for HAS_IMMEDIATE
- Turn DISABLE_IMMEDIATE into positive logic
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Adrian Bunk <bunk@stusta.de>
CC: Andi Kleen <andi@firstfloor.org>
CC: Christoph Hellwig <hch@infradead.org>
CC: mingo@elte.hu
CC: akpm@osdl.org
---
init/Kconfig | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
Index: linux-2.6-lttng/init/Kconfig
===================================================================
--- linux-2.6-lttng.orig/init/Kconfig 2008-04-10 15:59:46.000000000 -0400
+++ linux-2.6-lttng/init/Kconfig 2008-04-14 19:51:54.000000000 -0400
@@ -758,6 +758,24 @@ config PROC_PAGE_MONITOR
/proc/kpagecount, and /proc/kpageflags. Disabling these
interfaces will reduce the size of the kernel by approximately 4kb.
+config HAVE_IMMEDIATE
+ def_bool n
+
+config IMMEDIATE
+ default y
+ depends on HAVE_IMMEDIATE
+ bool "Immediate value optimization" if EMBEDDED
+ help
+ Immediate values are used as read-mostly variables that are rarely
+ updated. They use code patching to modify the values inscribed in the
+ instruction stream. It provides a way to save precious cache lines
+ that would otherwise have to be used by these variables. They can be
+ disabled through the EMBEDDED menu.
+
+ It consumes slightly more memory and modifies the instruction stream
+ each time any specially-marked variable is updated. It should be
+ disabled for embedded systems with read-only text.
+
endmenu # General setup
config SLABINFO
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 12/27] Immediate Values - x86 Optimization
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (10 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 11/27] Immediate Values - Kconfig menu in EMBEDDED Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 13/27] Add text_poke and sync_core to powerpc Mathieu Desnoyers
` (14 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Andi Kleen, H. Peter Anvin, Chuck Ebbert,
Christoph Hellwig, Jeremy Fitzhardinge, Thomas Gleixner,
Ingo Molnar, Rusty Russell, Adrian Bunk, akpm
[-- Attachment #1: immediate-values-x86-optimization.patch --]
[-- Type: text/plain, Size: 4877 bytes --]
x86 optimization of the immediate values: use a mov instruction with code
patching of its immediate operand to set/unset the value that populates the
register acting as the variable source.
Note : a movb needs to get its value from a "=q" constraint.
Quoting "H. Peter Anvin" <hpa@zytor.com>
Using =r for single-byte values is incorrect for 32-bit code -- that would
permit %spl, %bpl, %sil, %dil which are illegal in 32-bit mode.
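As a rough illustration of the mechanism (my sketch, not code from this patch;
it reuses struct __imv and text_poke() as introduced earlier in the series, and
the byte values are only an example):

/*
 * For a one-byte value, imv_read() emits for instance:
 *
 *         b0 00        mov    $0x0,%al
 *     3:
 *
 * and records in the __imv section the address of the 0x00 operand byte
 * ((3f) - 1), the address of the backing variable and the operand size.
 * Updating the value then amounts to poking that operand in place:
 */
static void __maybe_unused illustrate_imv_update(const struct __imv *entry)
{
        text_poke((void *)entry->imv, (void *)entry->var, entry->size);
}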
Changelog:
- Use text_poke_early with cr0 WP save/restore to patch the bypass. We are doing
non atomic writes to a code region only touched by us (nobody can execute it
since we are protected by the imv_mutex).
- Put imv_set and _imv_set in the architecture independent header.
- Use $0 instead of %2 with (0) operand.
- Add x86_64 support, ready for i386+x86_64 -> x86 merge.
- Use asm-x86/asm.h.
- Bugfix : the 8-byte (64-bit) immediate value was declared as "4 bytes" in the
immediate structure.
- Change the immediate.c update code to support variable length opcodes.
- Vastly simplified, using a busy looping IPI with interrupts disabled.
Does not protect against NMI nor MCE.
- Pack the __imv section. Use smallest types required for size (char).
- Use imv_* instead of immediate_*.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <ak@muc.de>
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: Chuck Ebbert <cebbert@redhat.com>
CC: Christoph Hellwig <hch@infradead.org>
CC: Jeremy Fitzhardinge <jeremy@goop.org>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Adrian Bunk <bunk@stusta.de>
CC: akpm@osdl.org
---
arch/x86/Kconfig | 1
include/asm-x86/immediate.h | 77 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 78 insertions(+)
Index: linux-2.6-sched-devel/include/asm-x86/immediate.h
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-sched-devel/include/asm-x86/immediate.h 2008-04-16 11:16:32.000000000 -0400
@@ -0,0 +1,77 @@
+#ifndef _ASM_X86_IMMEDIATE_H
+#define _ASM_X86_IMMEDIATE_H
+
+/*
+ * Immediate values. x86 architecture optimizations.
+ *
+ * (C) Copyright 2006 Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
+ *
+ * This file is released under the GPLv2.
+ * See the file COPYING for more details.
+ */
+
+#include <asm/asm.h>
+
+/**
+ * imv_read - read immediate variable
+ * @name: immediate value name
+ *
+ * Reads the value of @name.
+ * Optimized version of the immediate.
+ * Do not use in __init and __exit functions. Use _imv_read() instead.
+ * If size is bigger than the architecture long size, fall back on a memory
+ * read.
+ *
+ * Make sure to populate the initial static 64-bit opcode with a value
+ * that will generate an instruction with an 8-byte immediate value (not the
+ * REX.W prefixed one that loads a sign-extended 32-bit immediate value into a
+ * r64 register).
+ */
+#define imv_read(name) \
+ ({ \
+ __typeof__(name##__imv) value; \
+ BUILD_BUG_ON(sizeof(value) > 8); \
+ switch (sizeof(value)) { \
+ case 1: \
+ asm(".section __imv,\"a\",@progbits\n\t" \
+ _ASM_PTR "%c1, (3f)-%c2\n\t" \
+ ".byte %c2\n\t" \
+ ".previous\n\t" \
+ "mov $0,%0\n\t" \
+ "3:\n\t" \
+ : "=q" (value) \
+ : "i" (&name##__imv), \
+ "i" (sizeof(value))); \
+ break; \
+ case 2: \
+ case 4: \
+ asm(".section __imv,\"a\",@progbits\n\t" \
+ _ASM_PTR "%c1, (3f)-%c2\n\t" \
+ ".byte %c2\n\t" \
+ ".previous\n\t" \
+ "mov $0,%0\n\t" \
+ "3:\n\t" \
+ : "=r" (value) \
+ : "i" (&name##__imv), \
+ "i" (sizeof(value))); \
+ break; \
+ case 8: \
+ if (sizeof(long) < 8) { \
+ value = name##__imv; \
+ break; \
+ } \
+ asm(".section __imv,\"a\",@progbits\n\t" \
+ _ASM_PTR "%c1, (3f)-%c2\n\t" \
+ ".byte %c2\n\t" \
+ ".previous\n\t" \
+ "mov $0xFEFEFEFE01010101,%0\n\t" \
+ "3:\n\t" \
+ : "=r" (value) \
+ : "i" (&name##__imv), \
+ "i" (sizeof(value))); \
+ break; \
+ }; \
+ value; \
+ })
+
+#endif /* _ASM_X86_IMMEDIATE_H */
Index: linux-2.6-sched-devel/arch/x86/Kconfig
===================================================================
--- linux-2.6-sched-devel.orig/arch/x86/Kconfig 2008-04-16 11:07:19.000000000 -0400
+++ linux-2.6-sched-devel/arch/x86/Kconfig 2008-04-16 11:16:50.000000000 -0400
@@ -25,6 +25,7 @@ config X86
select HAVE_KRETPROBES
select HAVE_KVM if ((X86_32 && !X86_VOYAGER && !X86_VISWS && !X86_NUMAQ) || X86_64)
select HAVE_ARCH_KGDB
+ select HAVE_IMMEDIATE
config GENERIC_LOCKBREAK
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 13/27] Add text_poke and sync_core to powerpc
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (11 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 12/27] Immediate Values - x86 Optimization Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 14/27] Immediate Values - Powerpc Optimization Mathieu Desnoyers
` (13 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Rusty Russell, Christoph Hellwig,
Paul Mackerras, Adrian Bunk, Andi Kleen, akpm
[-- Attachment #1: add-text-poke-and-sync-core-to-powerpc.patch --]
[-- Type: text/plain, Size: 1461 bytes --]
- Needed on architectures where we must surround live instruction modification
with "WP flag disable".
- Turns into a memcpy on powerpc since there is no WP flag activated for
instruction pages (yet..).
- Add empty sync_core to powerpc so it can be used in architecture independent
code.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Christoph Hellwig <hch@infradead.org>
CC: Paul Mackerras <paulus@samba.org>
CC: Adrian Bunk <bunk@stusta.de>
CC: Andi Kleen <andi@firstfloor.org>
CC: mingo@elte.hu
CC: akpm@osdl.org
---
include/asm-powerpc/cacheflush.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
Index: linux-2.6-lttng/include/asm-powerpc/cacheflush.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-powerpc/cacheflush.h 2007-11-19 12:05:50.000000000 -0500
+++ linux-2.6-lttng/include/asm-powerpc/cacheflush.h 2007-11-19 13:27:36.000000000 -0500
@@ -63,7 +63,9 @@ extern void flush_dcache_phys_range(unsi
#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
memcpy(dst, src, len)
-
+#define text_poke memcpy
+#define text_poke_early text_poke
+#define sync_core()
#ifdef CONFIG_DEBUG_PAGEALLOC
/* internal debugging function */
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 14/27] Immediate Values - Powerpc Optimization
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (12 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 13/27] Add text_poke and sync_core to powerpc Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 15/27] Immediate Values - Documentation Mathieu Desnoyers
` (12 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Rusty Russell, Christoph Hellwig,
Paul Mackerras, Adrian Bunk, Andi Kleen, akpm
[-- Attachment #1: immediate-values-powerpc-optimization.patch --]
[-- Type: text/plain, Size: 3122 bytes --]
PowerPC optimization of the immediate values which uses a li instruction,
patched with an immediate value.
Changelog:
- Put imv_set and _imv_set in the architecture independent header.
- Pack the __imv section. Use smallest types required for size (char).
- Remove architecture specific update code : now handled by architecture
agnostic code.
- Use imv_* instead of immediate_*.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Christoph Hellwig <hch@infradead.org>
CC: Paul Mackerras <paulus@samba.org>
CC: Adrian Bunk <bunk@stusta.de>
CC: Andi Kleen <andi@firstfloor.org>
CC: mingo@elte.hu
CC: akpm@osdl.org
---
arch/powerpc/Kconfig | 1
include/asm-powerpc/immediate.h | 55 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+)
Index: linux-2.6-lttng/include/asm-powerpc/immediate.h
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-lttng/include/asm-powerpc/immediate.h 2008-03-06 08:55:54.000000000 -0500
@@ -0,0 +1,55 @@
+#ifndef _ASM_POWERPC_IMMEDIATE_H
+#define _ASM_POWERPC_IMMEDIATE_H
+
+/*
+ * Immediate values. PowerPC architecture optimizations.
+ *
+ * (C) Copyright 2006 Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
+ *
+ * This file is released under the GPLv2.
+ * See the file COPYING for more details.
+ */
+
+#include <asm/asm-compat.h>
+
+/**
+ * imv_read - read immediate variable
+ * @name: immediate value name
+ *
+ * Reads the value of @name.
+ * Optimized version of the immediate.
+ * Do not use in __init and __exit functions. Use _imv_read() instead.
+ */
+#define imv_read(name) \
+ ({ \
+ __typeof__(name##__imv) value; \
+ BUILD_BUG_ON(sizeof(value) > 8); \
+ switch (sizeof(value)) { \
+ case 1: \
+ asm(".section __imv,\"a\",@progbits\n\t" \
+ PPC_LONG "%c1, ((1f)-1)\n\t" \
+ ".byte 1\n\t" \
+ ".previous\n\t" \
+ "li %0,0\n\t" \
+ "1:\n\t" \
+ : "=r" (value) \
+ : "i" (&name##__imv)); \
+ break; \
+ case 2: \
+ asm(".section __imv,\"a\",@progbits\n\t" \
+ PPC_LONG "%c1, ((1f)-2)\n\t" \
+ ".byte 2\n\t" \
+ ".previous\n\t" \
+ "li %0,0\n\t" \
+ "1:\n\t" \
+ : "=r" (value) \
+ : "i" (&name##__imv)); \
+ break; \
+ case 4: \
+ case 8: value = name##__imv; \
+ break; \
+ }; \
+ value; \
+ })
+
+#endif /* _ASM_POWERPC_IMMEDIATE_H */
Index: linux-2.6-lttng/arch/powerpc/Kconfig
===================================================================
--- linux-2.6-lttng.orig/arch/powerpc/Kconfig 2008-03-06 08:45:31.000000000 -0500
+++ linux-2.6-lttng/arch/powerpc/Kconfig 2008-03-06 08:56:14.000000000 -0500
@@ -91,6 +91,7 @@ config PPC
select HAVE_OPROFILE
select HAVE_KPROBES
select HAVE_KRETPROBES
+ select HAVE_IMMEDIATE
config EARLY_PRINTK
bool
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 15/27] Immediate Values - Documentation
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (13 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 14/27] Immediate Values - Powerpc Optimization Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-17 9:52 ` KOSAKI Motohiro
2008-04-16 21:34 ` [RFC patch 16/27] Immediate Values Support init Mathieu Desnoyers
` (11 subsequent siblings)
26 siblings, 1 reply; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Rusty Russell, Adrian Bunk, Andi Kleen,
Christoph Hellwig, akpm
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: immediate-values-documentation.patch --]
[-- Type: text/plain, Size: 9015 bytes --]
Changelog:
- Remove imv_set_early (removed from API).
- Use imv_* instead of immediate_*.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Adrian Bunk <bunk@stusta.de>
CC: Andi Kleen <andi@firstfloor.org>
CC: Christoph Hellwig <hch@infradead.org>
CC: mingo@elte.hu
CC: akpm@osdl.org
---
Documentation/immediate.txt | 221 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 221 insertions(+)
Index: linux-2.6-lttng/Documentation/immediate.txt
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-lttng/Documentation/immediate.txt 2008-02-01 07:42:01.000000000 -0500
@@ -0,0 +1,221 @@
+ Using the Immediate Values
+
+ Mathieu Desnoyers
+
+
+This document introduces Immediate Values and their use.
+
+
+* Purpose of immediate values
+
+Immediate values are variables compiled directly into the kernel's instruction
+stream. They are meant to be rarely updated but read often. Using immediate
+values for these variables saves cache lines.
+
+This infrastructure is specialized in supporting dynamic patching of the values
+in the instruction stream when multiple CPUs are running without disturbing the
+normal system behavior.
+
+Compiling code meant to be rarely enabled at runtime can be done using
+if (unlikely(imv_read(var))) as condition surrounding the code. The
+smallest data type required for the test (an 8 bits char) is preferred, since
+some architectures, such as powerpc, only allow up to 16 bits immediate values.
+
+
+* Usage
+
+In order to use the "immediate" macros, you should include linux/immediate.h.
+
+#include <linux/immediate.h>
+
+DEFINE_IMV(char, this_immediate);
+EXPORT_IMV_SYMBOL(this_immediate);
+
+
+And use, in the body of a function:
+
+Use imv_set(this_immediate, value) to set the immediate value.
+
+Use imv_read(this_immediate) to read the immediate value.
+
+The immediate mechanism supports inserting multiple instances of the same
+immediate. Immediate values can be put in inline functions, inlined static
+functions, and unrolled loops.
+
+If you have to read the immediate values from a function declared as __init or
+__exit, you should explicitly use _imv_read(), which will fall back on a
+global variable read. Failing to do so will leave a reference to the __init
+section after it is freed (it would generate a modpost warning).
+
+You can choose to set an initial static value to the immediate by using, for
+instance:
+
+DEFINE_IMV(long, myptr) = 10;
+
+
+* Optimization for a given architecture
+
+One can implement optimized immediate values for a given architecture by
+replacing asm-$ARCH/immediate.h.
+
+
+* Performance improvement
+
+
+ * Memory hit for a data-based branch
+
+Here are the results on a 3GHz Pentium 4:
+
+number of tests: 100
+number of branches per test: 100000
+memory hit cycles per iteration (mean): 636.611
+L1 cache hit cycles per iteration (mean): 89.6413
+instruction stream based test, cycles per iteration (mean): 85.3438
+Just getting the pointer from a modulo on a pseudo-random value, doing
+ nothing with it, cycles per iteration (mean): 77.5044
+
+So:
+Base case: 77.50 cycles
+instruction stream based test: +7.8394 cycles
+L1 cache hit based test: +12.1369 cycles
+Memory load based test: +559.1066 cycles
+
+So let's say we have a ping flood coming at
+(14014 packets transmitted, 14014 received, 0% packet loss, time 1826ms)
+7674 packets per second. If we put 2 markers for irq entry/exit, it
+brings us to 15348 markers sites executed per second.
+
+(15348 exec/s) * (559 cycles/exec) / (3G cycles/s) = 0.0029
+We therefore have a 0.29% slowdown just on this case.
+
+Compared to this, the instruction stream based test will cause a
+slowdown of:
+
+(15348 exec/s) * (7.84 cycles/exec) / (3G cycles/s) = 0.00004
+For a 0.004% slowdown.
+
+If we plan to use this for memory allocation, spinlock, and all sorts of
+very high event rate tracing, we can assume it will execute 10 to 100
+times more sites per second, which brings us to 0.4% slowdown with the
+instruction stream based test compared to 29% slowdown with the memory
+load based test on a system with high memory pressure.
+
+
+
+ * Markers impact under heavy memory load
+
+Running a kernel with my LTTng instrumentation set, in a test that
+generates memory pressure (from userspace) by thrashing L1 and L2 caches
+between calls to getppid() (note: syscall_trace is active and calls
+a marker upon syscall entry and syscall exit; markers are disarmed).
+This test is done in user-space, so there are some delays due to IRQs
+coming and to the scheduler. (UP 2.6.22-rc6-mm1 kernel, task with -20
+nice level)
+
+My first set of results: Linear cache thrashing turned out not to be
+very interesting, because it seems like the linearity of the memset on a
+full array is somehow detected and it does not "really" thrash the
+caches.
+
+Now the most interesting result: Random walk L1 and L2 thrashing
+surrounding a getppid() call.
+
+- Markers compiled out (but syscall_trace execution forced)
+number of tests: 10000
+No memory pressure
+Reading timestamps takes 108.033 cycles
+getppid: 1681.4 cycles
+With memory pressure
+Reading timestamps takes 102.938 cycles
+getppid: 15691.6 cycles
+
+
+- With the immediate values based markers:
+number of tests: 10000
+No memory pressure
+Reading timestamps takes 108.006 cycles
+getppid: 1681.84 cycles
+With memory pressure
+Reading timestamps takes 100.291 cycles
+getppid: 11793 cycles
+
+
+- With global variables based markers:
+number of tests: 10000
+No memory pressure
+Reading timestamps takes 107.999 cycles
+getppid: 1669.06 cycles
+With memory pressure
+Reading timestamps takes 102.839 cycles
+getppid: 12535 cycles
+
+The result is quite interesting in that the kernel is slower without
+markers than with markers. I explain it by the fact that the data
+accessed is not laid out in the cache lines in the same manner when the
+markers are compiled in or out. It seems that compiling in the markers
+happens to align the function's data better in this case.
+
+But since the interesting comparison is between the immediate values and
+global variables based markers, and because they share the same memory
+layout, except for the movl being replaced by a movz, we see that the
+global variable based markers (2 markers) add 742 cycles to each system
+call (syscall entry and exit are traced and memory locations for both
+global variables lie on the same cache line).
+
+
+- Test redone with less iterations, but with error estimates
+
+10 runs of 100 iterations each: Tests done on a 3GHz P4. Here I run getppid with
+syscall trace inactive, comparing the case with memory pressure and without
+memory pressure. (sorry, my system is not set up to execute syscall_trace this
+time, but it will make the point anyway).
+
+No memory pressure
+Reading timestamps: 150.92 cycles, std dev. 1.01 cycles
+getppid: 1462.09 cycles, std dev. 18.87 cycles
+
+With memory pressure
+Reading timestamps: 578.22 cycles, std dev. 269.51 cycles
+getppid: 17113.33 cycles, std dev. 1655.92 cycles
+
+
+Now for memory read timing: (10 runs, branches per test: 100000)
+Memory read based branch:
+ 644.09 cycles, std dev. 11.39 cycles
+L1 cache hit based branch:
+ 88.16 cycles, std dev. 1.35 cycles
+
+
+So, now that we have the raw results, let's calculate:
+
+Memory read:
+644.09±11.39 - 88.16±1.35 = 555.93±11.46 cycles
+
+Getppid without memory pressure:
+1462.09±18.87 - 150.92±1.01 = 1311.17±18.90 cycles
+
+Getppid with memory pressure:
+17113.33±1655.92 - 578.22±269.51 = 16535.11±1677.71 cycles
+
+Therefore, if we add 2 markers not based on immediate values to the getppid
+code, which would add 2 memory reads, we would add
+2 * 555.93±11.46 = 1111.86±22.92 cycles
+
+Therefore,
+
+1111.86±22.92 / 16535.11±1677.71 = 0.0672
+ relative error: sqrt(((22.92/1111.86)^2)+((1677.71/16535.11)^2))
+ = 0.1035
+ absolute error: 0.1035 * 0.0672 = 0.0070
+
+Therefore: 0.0672±0.0070 * 100% = 6.72±0.70 %
+
+We can therefore affirm that adding 2 markers to getppid, on a system with high
+memory pressure, would have a performance hit of at least 6.0% on the system
+call time, all within the uncertainty limits of these tests. The same applies to
+other kernel code paths. The smaller those code paths are, the higher the
+impact ratio will be.
+
+Therefore, not only is it interesting to use the immediate values to dynamically
+activate dormant code such as the markers, but I think it should also be
+considered as a replacement for many of the "read-mostly" static variables.
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 16/27] Immediate Values Support init
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (14 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 15/27] Immediate Values - Documentation Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-19 11:04 ` KOSAKI Motohiro
2008-04-16 21:34 ` [RFC patch 17/27] Scheduler Profiling - Use Immediate Values Mathieu Desnoyers
` (10 subsequent siblings)
26 siblings, 1 reply; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Rusty Russell, Frank Ch. Eigler
[-- Attachment #1: immediate-values-support-init.patch --]
[-- Type: text/plain, Size: 9860 bytes --]
Support placing immediate values in init code.
We need to put the immediate values in a RW data section so we can edit them
before the init sections are unloaded.
This code puts NULL pointers in lieu of the original pointers referencing init
code before the init sections are freed, both in the core kernel and in modules.
TODO : support __exit section.
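As a hedged usage sketch (mine, not from the patch; feature_on and the messages
are made up), the rule after this change becomes: imv_read() is fine in __init
code, while __exit code must still use the plain-memory fallback:

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/immediate.h>

DECLARE_IMV(char, feature_on);          /* defined elsewhere with DEFINE_IMV() */

static int __init my_init(void)
{
        /* OK: the __imv entry is dropped when init memory is freed. */
        if (imv_read(feature_on))
                printk(KERN_INFO "feature enabled at boot\n");
        return 0;
}

static void __exit my_exit(void)
{
        /* __exit is not handled yet: use the plain variable read. */
        if (_imv_read(feature_on))
                printk(KERN_INFO "feature still enabled\n");
}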
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: "Frank Ch. Eigler" <fche@redhat.com>
---
Documentation/immediate.txt | 8 ++++----
include/asm-generic/vmlinux.lds.h | 8 ++++----
include/asm-powerpc/immediate.h | 4 ++--
include/asm-x86/immediate.h | 6 +++---
include/linux/immediate.h | 7 ++++++-
include/linux/module.h | 2 +-
init/main.c | 1 +
kernel/immediate.c | 31 +++++++++++++++++++++++++++++--
kernel/module.c | 2 ++
9 files changed, 52 insertions(+), 17 deletions(-)
Index: linux-2.6-lttng/kernel/immediate.c
===================================================================
--- linux-2.6-lttng.orig/kernel/immediate.c 2008-04-16 11:24:03.000000000 -0400
+++ linux-2.6-lttng/kernel/immediate.c 2008-04-16 11:24:25.000000000 -0400
@@ -22,6 +22,7 @@
#include <linux/cpu.h>
#include <linux/stop_machine.h>
+#include <asm/sections.h>
#include <asm/cacheflush.h>
/*
@@ -30,8 +31,8 @@
static int imv_early_boot_complete;
static int wrote_text;
-extern const struct __imv __start___imv[];
-extern const struct __imv __stop___imv[];
+extern struct __imv __start___imv[];
+extern struct __imv __stop___imv[];
static int stop_machine_imv_update(void *imv_ptr)
{
@@ -118,6 +119,8 @@ void imv_update_range(const struct __imv
int ret;
for (iter = begin; iter < end; iter++) {
mutex_lock(&imv_mutex);
+ if (!iter->imv) /* Skip removed __init immediate values */
+ goto skip;
ret = apply_imv_update(iter);
if (imv_early_boot_complete && ret)
printk(KERN_WARNING
@@ -126,6 +129,7 @@ void imv_update_range(const struct __imv
"instruction at %p, size %hu\n",
(void *)iter->imv,
(void *)iter->var, iter->size);
+skip:
mutex_unlock(&imv_mutex);
}
}
@@ -143,6 +147,29 @@ void core_imv_update(void)
}
EXPORT_SYMBOL_GPL(core_imv_update);
+/**
+ * imv_unref - deactivate immediate value references in a code range
+ *
+ * Deactivate any immediate value reference pointing into the code region in the
+ * range start to start + size.
+ */
+void imv_unref(struct __imv *begin, struct __imv *end, void *start,
+ unsigned long size)
+{
+ struct __imv *iter;
+
+ for (iter = begin; iter < end; iter++)
+ if (iter->imv >= (unsigned long)start
+ && iter->imv < (unsigned long)start + size)
+ iter->imv = 0UL;
+}
+
+void imv_unref_core_init(void)
+{
+ imv_unref(__start___imv, __stop___imv, __init_begin,
+ (unsigned long)__init_end - (unsigned long)__init_begin);
+}
+
void __init imv_init_complete(void)
{
imv_early_boot_complete = 1;
Index: linux-2.6-lttng/kernel/module.c
===================================================================
--- linux-2.6-lttng.orig/kernel/module.c 2008-04-16 11:24:03.000000000 -0400
+++ linux-2.6-lttng/kernel/module.c 2008-04-16 11:24:25.000000000 -0400
@@ -2208,6 +2208,8 @@ sys_init_module(void __user *umod,
/* Drop initial reference. */
module_put(mod);
unwind_remove_table(mod->unwind_info, 1);
+ imv_unref(mod->immediate, mod->immediate + mod->num_immediate,
+ mod->module_init, mod->init_size);
module_free(mod, mod->module_init);
mod->module_init = NULL;
mod->init_size = 0;
Index: linux-2.6-lttng/include/linux/module.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/module.h 2008-04-16 11:24:03.000000000 -0400
+++ linux-2.6-lttng/include/linux/module.h 2008-04-16 11:24:25.000000000 -0400
@@ -357,7 +357,7 @@ struct module
keeping pointers to this stuff */
char *args;
#ifdef CONFIG_IMMEDIATE
- const struct __imv *immediate;
+ struct __imv *immediate;
unsigned int num_immediate;
#endif
#ifdef CONFIG_MARKERS
Index: linux-2.6-lttng/include/asm-generic/vmlinux.lds.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-generic/vmlinux.lds.h 2008-04-16 11:24:03.000000000 -0400
+++ linux-2.6-lttng/include/asm-generic/vmlinux.lds.h 2008-04-16 11:24:25.000000000 -0400
@@ -52,7 +52,10 @@
. = ALIGN(8); \
VMLINUX_SYMBOL(__start___markers) = .; \
*(__markers) \
- VMLINUX_SYMBOL(__stop___markers) = .;
+ VMLINUX_SYMBOL(__stop___markers) = .; \
+ VMLINUX_SYMBOL(__start___imv) = .; \
+ *(__imv) /* Immediate values: pointers */ \
+ VMLINUX_SYMBOL(__stop___imv) = .;
#define RO_DATA(align) \
. = ALIGN((align)); \
@@ -61,9 +64,6 @@
*(.rodata) *(.rodata.*) \
*(__vermagic) /* Kernel version magic */ \
*(__markers_strings) /* Markers: strings */ \
- VMLINUX_SYMBOL(__start___imv) = .; \
- *(__imv) /* Immediate values: pointers */ \
- VMLINUX_SYMBOL(__stop___imv) = .; \
} \
\
.rodata1 : AT(ADDR(.rodata1) - LOAD_OFFSET) { \
Index: linux-2.6-lttng/include/linux/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/immediate.h 2008-04-16 11:24:03.000000000 -0400
+++ linux-2.6-lttng/include/linux/immediate.h 2008-04-16 11:24:25.000000000 -0400
@@ -46,6 +46,9 @@ struct __imv {
extern void core_imv_update(void);
extern void imv_update_range(const struct __imv *begin,
const struct __imv *end);
+extern void imv_unref_core_init(void);
+extern void imv_unref(struct __imv *begin, struct __imv *end, void *start,
+ unsigned long size);
#else
@@ -73,7 +76,9 @@ extern void imv_update_range(const struc
static inline void core_imv_update(void) { }
static inline void module_imv_update(void) { }
-
+static inline void imv_unref_core_init(void) { }
+static inline void imv_unref(struct __imv *begin, struct __imv *end,
+ void *start, unsigned long size) { }
#endif
#define DECLARE_IMV(type, name) extern __typeof__(type) name##__imv
Index: linux-2.6-lttng/init/main.c
===================================================================
--- linux-2.6-lttng.orig/init/main.c 2008-04-16 11:24:03.000000000 -0400
+++ linux-2.6-lttng/init/main.c 2008-04-16 11:24:25.000000000 -0400
@@ -776,6 +776,7 @@ static void run_init_process(char *init_
*/
static int noinline init_post(void)
{
+ imv_unref_core_init();
free_initmem();
unlock_kernel();
mark_rodata_ro();
Index: linux-2.6-lttng/include/asm-x86/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-x86/immediate.h 2008-04-16 11:24:03.000000000 -0400
+++ linux-2.6-lttng/include/asm-x86/immediate.h 2008-04-16 11:24:25.000000000 -0400
@@ -33,7 +33,7 @@
BUILD_BUG_ON(sizeof(value) > 8); \
switch (sizeof(value)) { \
case 1: \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
".byte %c2\n\t" \
".previous\n\t" \
@@ -45,7 +45,7 @@
break; \
case 2: \
case 4: \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
".byte %c2\n\t" \
".previous\n\t" \
@@ -60,7 +60,7 @@
value = name##__imv; \
break; \
} \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
".byte %c2\n\t" \
".previous\n\t" \
Index: linux-2.6-lttng/include/asm-powerpc/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-powerpc/immediate.h 2008-04-16 11:24:03.000000000 -0400
+++ linux-2.6-lttng/include/asm-powerpc/immediate.h 2008-04-16 11:24:25.000000000 -0400
@@ -26,7 +26,7 @@
BUILD_BUG_ON(sizeof(value) > 8); \
switch (sizeof(value)) { \
case 1: \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
PPC_LONG "%c1, ((1f)-1)\n\t" \
".byte 1\n\t" \
".previous\n\t" \
@@ -36,7 +36,7 @@
: "i" (&name##__imv)); \
break; \
case 2: \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
PPC_LONG "%c1, ((1f)-2)\n\t" \
".byte 2\n\t" \
".previous\n\t" \
Index: linux-2.6-lttng/Documentation/immediate.txt
===================================================================
--- linux-2.6-lttng.orig/Documentation/immediate.txt 2008-04-16 11:24:30.000000000 -0400
+++ linux-2.6-lttng/Documentation/immediate.txt 2008-04-16 11:24:45.000000000 -0400
@@ -42,10 +42,10 @@ The immediate mechanism supports inserti
immediate. Immediate values can be put in inline functions, inlined static
functions, and unrolled loops.
-If you have to read the immediate values from a function declared as __init or
-__exit, you should explicitly use _imv_read(), which will fall back on a
-global variable read. Failing to do so will leave a reference to the __init
-section after it is freed (it would generate a modpost warning).
+If you have to read the immediate values from a function declared as __exit, you
+should explicitly use _imv_read(), which will fall back on a global variable
+read. Failing to do so will leave a reference to the __exit section in a kernel
+built without module unload support. imv_read() in the __init section is supported.
You can choose to set an initial static value to the immediate by using, for
instance:
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 17/27] Scheduler Profiling - Use Immediate Values
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (15 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 16/27] Immediate Values Support init Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 18/27] Markers - remove extra format argument Mathieu Desnoyers
` (9 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Rusty Russell, Adrian Bunk, Andi Kleen,
Christoph Hellwig, akpm
[-- Attachment #1: scheduler-profiling-use-immediate-values.patch --]
[-- Type: text/plain, Size: 6060 bytes --]
Use immediate values, which have a lower d-cache footprint in their optimized
version, as the condition for the scheduler profiling call.
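As a hedged sketch of the conversion pattern this patch applies to prof_on
(my_flag, MY_MODE and slow_path() are made up for illustration):

#include <linux/immediate.h>

#define MY_MODE 1                       /* made-up mode value */
static void slow_path(void) { }         /* made-up slow path */

/*
 * Before such a conversion, a plain flag costs a d-cache line on every test:
 *
 *      int my_flag __read_mostly;
 *      ...
 *      if (unlikely(my_flag == MY_MODE))
 *              slow_path();
 *
 * After the conversion, the value is read from the instruction stream:
 */
DEFINE_IMV(char, my_flag) __read_mostly;

static void test_path(void)
{
        if (unlikely(imv_read(my_flag) == MY_MODE))
                slow_path();
}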
Changelog :
- Use imv_* instead of immediate_*.
- Follow the white rabbit : kvm_main.c which becomes x86.c.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Adrian Bunk <bunk@stusta.de>
CC: Andi Kleen <andi@firstfloor.org>
CC: Christoph Hellwig <hch@infradead.org>
CC: mingo@elte.hu
CC: akpm@osdl.org
---
arch/x86/kvm/x86.c | 2 +-
include/linux/profile.h | 5 +++--
kernel/profile.c | 22 +++++++++++-----------
kernel/sched_fair.c | 5 +----
4 files changed, 16 insertions(+), 18 deletions(-)
Index: linux-2.6-sched-devel/kernel/profile.c
===================================================================
--- linux-2.6-sched-devel.orig/kernel/profile.c 2008-04-16 11:07:24.000000000 -0400
+++ linux-2.6-sched-devel/kernel/profile.c 2008-04-16 11:17:00.000000000 -0400
@@ -41,8 +41,8 @@ static int (*timer_hook)(struct pt_regs
static atomic_t *prof_buffer;
static unsigned long prof_len, prof_shift;
-int prof_on __read_mostly;
-EXPORT_SYMBOL_GPL(prof_on);
+DEFINE_IMV(char, prof_on) __read_mostly;
+EXPORT_IMV_SYMBOL_GPL(prof_on);
static cpumask_t prof_cpu_mask = CPU_MASK_ALL;
#ifdef CONFIG_SMP
@@ -60,7 +60,7 @@ static int __init profile_setup(char *st
if (!strncmp(str, sleepstr, strlen(sleepstr))) {
#ifdef CONFIG_SCHEDSTATS
- prof_on = SLEEP_PROFILING;
+ imv_set(prof_on, SLEEP_PROFILING);
if (str[strlen(sleepstr)] == ',')
str += strlen(sleepstr) + 1;
if (get_option(&str, &par))
@@ -73,7 +73,7 @@ static int __init profile_setup(char *st
"kernel sleep profiling requires CONFIG_SCHEDSTATS\n");
#endif /* CONFIG_SCHEDSTATS */
} else if (!strncmp(str, schedstr, strlen(schedstr))) {
- prof_on = SCHED_PROFILING;
+ imv_set(prof_on, SCHED_PROFILING);
if (str[strlen(schedstr)] == ',')
str += strlen(schedstr) + 1;
if (get_option(&str, &par))
@@ -82,7 +82,7 @@ static int __init profile_setup(char *st
"kernel schedule profiling enabled (shift: %ld)\n",
prof_shift);
} else if (!strncmp(str, kvmstr, strlen(kvmstr))) {
- prof_on = KVM_PROFILING;
+ imv_set(prof_on, KVM_PROFILING);
if (str[strlen(kvmstr)] == ',')
str += strlen(kvmstr) + 1;
if (get_option(&str, &par))
@@ -92,7 +92,7 @@ static int __init profile_setup(char *st
prof_shift);
} else if (get_option(&str, &par)) {
prof_shift = par;
- prof_on = CPU_PROFILING;
+ imv_set(prof_on, CPU_PROFILING);
printk(KERN_INFO "kernel profiling enabled (shift: %ld)\n",
prof_shift);
}
@@ -103,7 +103,7 @@ __setup("profile=", profile_setup);
void __init profile_init(void)
{
- if (!prof_on)
+ if (!_imv_read(prof_on))
return;
/* only text is profiled */
@@ -290,7 +290,7 @@ void profile_hits(int type, void *__pc,
int i, j, cpu;
struct profile_hit *hits;
- if (prof_on != type || !prof_buffer)
+ if (!prof_buffer)
return;
pc = min((pc - (unsigned long)_stext) >> prof_shift, prof_len - 1);
i = primary = (pc & (NR_PROFILE_GRP - 1)) << PROFILE_GRPSHIFT;
@@ -400,7 +400,7 @@ void profile_hits(int type, void *__pc,
{
unsigned long pc;
- if (prof_on != type || !prof_buffer)
+ if (!prof_buffer)
return;
pc = ((unsigned long)__pc - (unsigned long)_stext) >> prof_shift;
atomic_add(nr_hits, &prof_buffer[min(pc, prof_len - 1)]);
@@ -557,7 +557,7 @@ static int __init create_hash_tables(voi
}
return 0;
out_cleanup:
- prof_on = 0;
+ imv_set(prof_on, 0);
smp_mb();
on_each_cpu(profile_nop, NULL, 0, 1);
for_each_online_cpu(cpu) {
@@ -584,7 +584,7 @@ static int __init create_proc_profile(vo
{
struct proc_dir_entry *entry;
- if (!prof_on)
+ if (!_imv_read(prof_on))
return 0;
if (create_hash_tables())
return -1;
Index: linux-2.6-sched-devel/include/linux/profile.h
===================================================================
--- linux-2.6-sched-devel.orig/include/linux/profile.h 2008-04-16 11:07:24.000000000 -0400
+++ linux-2.6-sched-devel/include/linux/profile.h 2008-04-16 11:17:00.000000000 -0400
@@ -7,10 +7,11 @@
#include <linux/init.h>
#include <linux/cpumask.h>
#include <linux/cache.h>
+#include <linux/immediate.h>
#include <asm/errno.h>
-extern int prof_on __read_mostly;
+DECLARE_IMV(char, prof_on) __read_mostly;
#define CPU_PROFILING 1
#define SCHED_PROFILING 2
@@ -38,7 +39,7 @@ static inline void profile_hit(int type,
/*
* Speedup for the common (no profiling enabled) case:
*/
- if (unlikely(prof_on == type))
+ if (unlikely(imv_read(prof_on) == type))
profile_hits(type, ip, 1);
}
Index: linux-2.6-sched-devel/kernel/sched_fair.c
===================================================================
--- linux-2.6-sched-devel.orig/kernel/sched_fair.c 2008-04-16 11:07:24.000000000 -0400
+++ linux-2.6-sched-devel/kernel/sched_fair.c 2008-04-16 11:17:00.000000000 -0400
@@ -455,11 +455,8 @@ static void enqueue_sleeper(struct cfs_r
* get a milliseconds-range estimation of the amount of
* time that the task spent sleeping:
*/
- if (unlikely(prof_on == SLEEP_PROFILING)) {
-
- profile_hits(SLEEP_PROFILING, (void *)get_wchan(tsk),
+ profile_hits(SLEEP_PROFILING, (void *)get_wchan(task_of(se)),
delta >> 20);
- }
account_scheduler_latency(tsk, delta >> 10, 0);
}
#endif
Index: linux-2.6-sched-devel/arch/x86/kvm/x86.c
===================================================================
--- linux-2.6-sched-devel.orig/arch/x86/kvm/x86.c 2008-04-16 11:07:19.000000000 -0400
+++ linux-2.6-sched-devel/arch/x86/kvm/x86.c 2008-04-16 11:17:00.000000000 -0400
@@ -2604,7 +2604,7 @@ again:
/*
* Profile KVM exit RIPs:
*/
- if (unlikely(prof_on == KVM_PROFILING)) {
+ if (unlikely(imv_read(prof_on) == KVM_PROFILING)) {
kvm_x86_ops->cache_regs(vcpu);
profile_hit(KVM_PROFILING, (void *)vcpu->arch.rip);
}
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 18/27] Markers - remove extra format argument
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (16 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 17/27] Scheduler Profiling - Use Immediate Values Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 19/27] Markers - define non optimized marker Mathieu Desnoyers
` (8 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel; +Cc: Mathieu Desnoyers, Denys Vlasenko
[-- Attachment #1: markers-remove-extra-format-argument.patch --]
[-- Type: text/plain, Size: 7731 bytes --]
Denys Vlasenko <vda.linux@googlemail.com> :
> Not in this patch, but I noticed:
>
> #define __trace_mark(name, call_private, format, args...) \
> do { \
> static const char __mstrtab_##name[] \
> __attribute__((section("__markers_strings"))) \
> = #name "\0" format; \
> static struct marker __mark_##name \
> __attribute__((section("__markers"), aligned(8))) = \
> { __mstrtab_##name, &__mstrtab_##name[sizeof(#name)], \
> 0, 0, marker_probe_cb, \
> { __mark_empty_function, NULL}, NULL }; \
> __mark_check_format(format, ## args); \
> if (unlikely(__mark_##name.state)) { \
> (*__mark_##name.call) \
> (&__mark_##name, call_private, \
> format, ## args); \
> } \
> } while (0)
>
> In this call:
>
> (*__mark_##name.call) \
> (&__mark_##name, call_private, \
> format, ## args); \
>
> you make gcc allocate duplicate format string. You can use
> &__mstrtab_##name[sizeof(#name)] instead since it holds the same string,
> or drop ", format," above and "const char *fmt" from here:
>
> void (*call)(const struct marker *mdata, /* Probe wrapper */
> void *call_private, const char *fmt, ...);
>
> since mdata->format is the same and all callees which need it can take it there.
Very good point. I actually thought about dropping it, since it would
remove an unnecessary argument from the stack. And actually, since I now
have the marker_probe_cb sitting between the marker site and the
callbacks, there is no API change required. Thanks :)
Mathieu
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Denys Vlasenko <vda.linux@googlemail.com>
---
include/linux/marker.h | 11 +++++------
kernel/marker.c | 30 ++++++++++++++----------------
2 files changed, 19 insertions(+), 22 deletions(-)
Index: linux-2.6-lttng/include/linux/marker.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/marker.h 2008-03-27 20:51:34.000000000 -0400
+++ linux-2.6-lttng/include/linux/marker.h 2008-03-27 20:54:55.000000000 -0400
@@ -44,8 +44,8 @@ struct marker {
*/
char state; /* Marker state. */
char ptype; /* probe type : 0 : single, 1 : multi */
- void (*call)(const struct marker *mdata, /* Probe wrapper */
- void *call_private, const char *fmt, ...);
+ /* Probe wrapper */
+ void (*call)(const struct marker *mdata, void *call_private, ...);
struct marker_probe_closure single;
struct marker_probe_closure *multi;
} __attribute__((aligned(8)));
@@ -72,8 +72,7 @@ struct marker {
__mark_check_format(format, ## args); \
if (unlikely(__mark_##name.state)) { \
(*__mark_##name.call) \
- (&__mark_##name, call_private, \
- format, ## args); \
+ (&__mark_##name, call_private, ## args);\
} \
} while (0)
@@ -117,9 +116,9 @@ static inline void __printf(1, 2) ___mar
extern marker_probe_func __mark_empty_function;
extern void marker_probe_cb(const struct marker *mdata,
- void *call_private, const char *fmt, ...);
+ void *call_private, ...);
extern void marker_probe_cb_noarg(const struct marker *mdata,
- void *call_private, const char *fmt, ...);
+ void *call_private, ...);
/*
* Connect a probe to a marker.
Index: linux-2.6-lttng/kernel/marker.c
===================================================================
--- linux-2.6-lttng.orig/kernel/marker.c 2008-03-27 20:52:09.000000000 -0400
+++ linux-2.6-lttng/kernel/marker.c 2008-03-27 20:56:13.000000000 -0400
@@ -54,8 +54,8 @@ static DEFINE_MUTEX(markers_mutex);
struct marker_entry {
struct hlist_node hlist;
char *format;
- void (*call)(const struct marker *mdata, /* Probe wrapper */
- void *call_private, const char *fmt, ...);
+ /* Probe wrapper */
+ void (*call)(const struct marker *mdata, void *call_private, ...);
struct marker_probe_closure single;
struct marker_probe_closure *multi;
int refcount; /* Number of times armed. 0 if disarmed. */
@@ -90,15 +90,13 @@ EXPORT_SYMBOL_GPL(__mark_empty_function)
* marker_probe_cb Callback that prepares the variable argument list for probes.
* @mdata: pointer of type struct marker
* @call_private: caller site private data
- * @fmt: format string
* @...: Variable argument list.
*
* Since we do not use "typical" pointer based RCU in the 1 argument case, we
* need to put a full smp_rmb() in this branch. This is why we do not use
* rcu_dereference() for the pointer read.
*/
-void marker_probe_cb(const struct marker *mdata, void *call_private,
- const char *fmt, ...)
+void marker_probe_cb(const struct marker *mdata, void *call_private, ...)
{
va_list args;
char ptype;
@@ -119,8 +117,9 @@ void marker_probe_cb(const struct marker
/* Must read the ptr before private data. They are not data
* dependant, so we put an explicit smp_rmb() here. */
smp_rmb();
- va_start(args, fmt);
- func(mdata->single.probe_private, call_private, fmt, &args);
+ va_start(args, call_private);
+ func(mdata->single.probe_private, call_private, mdata->format,
+ &args);
va_end(args);
} else {
struct marker_probe_closure *multi;
@@ -135,9 +134,9 @@ void marker_probe_cb(const struct marker
smp_read_barrier_depends();
multi = mdata->multi;
for (i = 0; multi[i].func; i++) {
- va_start(args, fmt);
- multi[i].func(multi[i].probe_private, call_private, fmt,
- &args);
+ va_start(args, call_private);
+ multi[i].func(multi[i].probe_private, call_private,
+ mdata->format, &args);
va_end(args);
}
}
@@ -149,13 +148,11 @@ EXPORT_SYMBOL_GPL(marker_probe_cb);
* marker_probe_cb Callback that does not prepare the variable argument list.
* @mdata: pointer of type struct marker
* @call_private: caller site private data
- * @fmt: format string
* @...: Variable argument list.
*
* Should be connected to markers "MARK_NOARGS".
*/
-void marker_probe_cb_noarg(const struct marker *mdata,
- void *call_private, const char *fmt, ...)
+void marker_probe_cb_noarg(const struct marker *mdata, void *call_private, ...)
{
va_list args; /* not initialized */
char ptype;
@@ -171,7 +168,8 @@ void marker_probe_cb_noarg(const struct
/* Must read the ptr before private data. They are not data
* dependant, so we put an explicit smp_rmb() here. */
smp_rmb();
- func(mdata->single.probe_private, call_private, fmt, &args);
+ func(mdata->single.probe_private, call_private, mdata->format,
+ &args);
} else {
struct marker_probe_closure *multi;
int i;
@@ -185,8 +183,8 @@ void marker_probe_cb_noarg(const struct
smp_read_barrier_depends();
multi = mdata->multi;
for (i = 0; multi[i].func; i++)
- multi[i].func(multi[i].probe_private, call_private, fmt,
- &args);
+ multi[i].func(multi[i].probe_private, call_private,
+ mdata->format, &args);
}
preempt_enable();
}
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 19/27] Markers - define non optimized marker
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (17 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 18/27] Markers - remove extra format argument Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 20/27] Immediate Values - Move Kprobes x86 restore_interrupt to kdebug.h Mathieu Desnoyers
` (7 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel; +Cc: Mathieu Desnoyers
[-- Attachment #1: markers-define-non-optimized-marker.patch --]
[-- Type: text/plain, Size: 3184 bytes --]
To support the forthcoming "immediate values" marker optimization, we need a
way to declare markers in the few code paths that cannot use instruction
modification based enabling. This is the case for printk(), some traps and
eventually lockdep instrumentation.
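For example (the event names and arguments below are made up, not part of this
patch), an instrumentation site picks one of the two flavours:

/* normal kernel code: optimized, code-patching based enabling */
trace_mark(sched_wakeup, "pid %d", current->pid);

/* printk, traps, lockdep, __init/__exit code: plain memory read */
_trace_mark(printk_entry, MARK_NOARGS);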
Changelog :
- Fix reversed boolean logic of "generic".
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
---
include/linux/marker.h | 29 ++++++++++++++++++++++++-----
1 file changed, 24 insertions(+), 5 deletions(-)
Index: linux-2.6-lttng/include/linux/marker.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/marker.h 2008-03-27 20:47:44.000000000 -0400
+++ linux-2.6-lttng/include/linux/marker.h 2008-03-27 20:49:04.000000000 -0400
@@ -58,8 +58,12 @@ struct marker {
* Make sure the alignment of the structure in the __markers section will
* not add unwanted padding between the beginning of the section and the
* structure. Force alignment to the same alignment as the section start.
+ *
+ * The "generic" argument controls which marker enabling mechanism must be used.
+ * If generic is true, a variable read is used.
+ * If generic is false, immediate values are used.
*/
-#define __trace_mark(name, call_private, format, args...) \
+#define __trace_mark(generic, name, call_private, format, args...) \
do { \
static const char __mstrtab_##name[] \
__attribute__((section("__markers_strings"))) \
@@ -79,7 +83,7 @@ struct marker {
extern void marker_update_probe_range(struct marker *begin,
struct marker *end);
#else /* !CONFIG_MARKERS */
-#define __trace_mark(name, call_private, format, args...) \
+#define __trace_mark(generic, name, call_private, format, args...) \
__mark_check_format(format, ## args)
static inline void marker_update_probe_range(struct marker *begin,
struct marker *end)
@@ -87,15 +91,30 @@ static inline void marker_update_probe_r
#endif /* CONFIG_MARKERS */
/**
- * trace_mark - Marker
+ * trace_mark - Marker using code patching
* @name: marker name, not quoted.
* @format: format string
* @args...: variable argument list
*
- * Places a marker.
+ * Places a marker using optimized code patching technique (imv_read())
+ * to be enabled when immediate values are present.
*/
#define trace_mark(name, format, args...) \
- __trace_mark(name, NULL, format, ## args)
+ __trace_mark(0, name, NULL, format, ## args)
+
+/**
+ * _trace_mark - Marker using variable read
+ * @name: marker name, not quoted.
+ * @format: format string
+ * @args...: variable argument list
+ *
+ * Places a marker using a standard memory read (_imv_read()) to be
+ * enabled. Should be used for markers in code paths where instruction
+ * modification based enabling is not welcome. (__init and __exit functions,
+ * lockdep, some traps, printk).
+ */
+#define _trace_mark(name, format, args...) \
+ __trace_mark(1, name, NULL, format, ## args)
/**
* MARK_NOARGS - Format string for a marker with no argument.
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 20/27] Immediate Values - Move Kprobes x86 restore_interrupt to kdebug.h
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (18 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 19/27] Markers - define non optimized marker Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 21/27] Add __discard section to x86 Mathieu Desnoyers
` (6 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Ananth N Mavinakayanahalli, Christoph Hellwig,
anil.s.keshavamurthy, davem, Thomas Gleixner, Ingo Molnar,
H. Peter Anvin
[-- Attachment #1: immediate-values-move-kprobes-x86-restore-interrupt-to-kdebug-h.patch --]
[-- Type: text/plain, Size: 2516 bytes --]
Since the breakpoint handler is useful both to kprobes and to immediate values,
it makes sense to make the required restore_interrupts() helper available
through asm-x86/kdebug.h.
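A caller such as the int3 handler then only needs kdebug.h. A sketch mirroring
the traps_32.c code touched later in this series (slightly simplified):

void __kprobes do_int3(struct pt_regs *regs, long error_code)
{
	if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP)
			== NOTIFY_STOP)
		return;
	/* int3 is an interrupt gate, so IF was cleared on entry */
	restore_interrupts(regs);
	do_trap(3, SIGTRAP, "int3", 1, regs, error_code, NULL);
}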
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
CC: Christoph Hellwig <hch@infradead.org>
CC: anil.s.keshavamurthy@intel.com
CC: davem@davemloft.net
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
CC: H. Peter Anvin <hpa@zytor.com>
---
include/asm-x86/kdebug.h | 12 ++++++++++++
include/asm-x86/kprobes.h | 9 ---------
2 files changed, 12 insertions(+), 9 deletions(-)
Index: linux-2.6-lttng/include/asm-x86/kdebug.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-x86/kdebug.h 2008-03-25 08:56:54.000000000 -0400
+++ linux-2.6-lttng/include/asm-x86/kdebug.h 2008-03-25 09:00:17.000000000 -0400
@@ -3,6 +3,9 @@
#include <linux/notifier.h>
+#include <linux/ptrace.h>
+#include <asm/system.h>
+
struct pt_regs;
/* Grossly misnamed. */
@@ -34,4 +37,13 @@ extern void show_regs(struct pt_regs *re
extern unsigned long oops_begin(void);
extern void oops_end(unsigned long, struct pt_regs *, int signr);
+/* trap3/1 are intr gates for kprobes. So, restore the status of IF,
+ * if necessary, before executing the original int3/1 (trap) handler.
+ */
+static inline void restore_interrupts(struct pt_regs *regs)
+{
+ if (regs->flags & X86_EFLAGS_IF)
+ local_irq_enable();
+}
+
#endif
Index: linux-2.6-lttng/include/asm-x86/kprobes.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-x86/kprobes.h 2008-03-25 08:56:54.000000000 -0400
+++ linux-2.6-lttng/include/asm-x86/kprobes.h 2008-03-25 09:00:17.000000000 -0400
@@ -82,15 +82,6 @@ struct kprobe_ctlblk {
struct prev_kprobe prev_kprobe;
};
-/* trap3/1 are intr gates for kprobes. So, restore the status of IF,
- * if necessary, before executing the original int3/1 (trap) handler.
- */
-static inline void restore_interrupts(struct pt_regs *regs)
-{
- if (regs->flags & X86_EFLAGS_IF)
- local_irq_enable();
-}
-
extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
extern int kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data);
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 21/27] Add __discard section to x86
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (19 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 20/27] Immediate Values - Move Kprobes x86 restore_interrupt to kdebug.h Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 22/27] Immediate Values - x86 Optimization NMI and MCE support Mathieu Desnoyers
` (5 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, H. Peter Anvin, Andi Kleen, Chuck Ebbert,
Christoph Hellwig, Jeremy Fitzhardinge, Thomas Gleixner,
Ingo Molnar
[-- Attachment #1: add-discard-section-to-x86.patch --]
[-- Type: text/plain, Size: 1812 bytes --]
Add a __discard section to the linker script. Code emitted in this section will
not be put in the vmlinux file. This is useful when we have to calculate the
size of an instruction before actually emitting it (for alignment purposes, for
instance). It is used by the immediate values.
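For illustration, the pattern looks like this (a simplified sketch of the
imv_read() asm introduced in the x86 optimization patch of this series):

unsigned char value;

asm(".section __discard,\"\",@progbits\n\t"
    "1:\n\t"
    "mov $0,%0\n\t"		/* emitted only to be measured */
    "2:\n\t"
    ".previous\n\t"		/* the copy above never reaches vmlinux */
    ".section __imv,\"aw\",@progbits\n\t"
    ".byte (2b-1b)\n\t"		/* record the instruction size */
    ".previous\n\t"
    "mov $0,%0\n\t"		/* the instruction actually kept */
    : "=r" (value));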
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: H. Peter Anvin <hpa@zytor.com>
CC: Andi Kleen <ak@muc.de>
CC: Chuck Ebbert <cebbert@redhat.com>
CC: Christoph Hellwig <hch@infradead.org>
CC: Jeremy Fitzhardinge <jeremy@goop.org>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
---
arch/x86/kernel/vmlinux_32.lds.S | 1 +
arch/x86/kernel/vmlinux_64.lds.S | 1 +
2 files changed, 2 insertions(+)
Index: linux-2.6-sched-devel/arch/x86/kernel/vmlinux_32.lds.S
===================================================================
--- linux-2.6-sched-devel.orig/arch/x86/kernel/vmlinux_32.lds.S 2008-04-16 11:07:19.000000000 -0400
+++ linux-2.6-sched-devel/arch/x86/kernel/vmlinux_32.lds.S 2008-04-16 11:17:04.000000000 -0400
@@ -213,6 +213,7 @@ SECTIONS
/* Sections to be discarded */
/DISCARD/ : {
*(.exitcall.exit)
+ *(__discard)
}
STABS_DEBUG
Index: linux-2.6-sched-devel/arch/x86/kernel/vmlinux_64.lds.S
===================================================================
--- linux-2.6-sched-devel.orig/arch/x86/kernel/vmlinux_64.lds.S 2008-04-16 11:07:19.000000000 -0400
+++ linux-2.6-sched-devel/arch/x86/kernel/vmlinux_64.lds.S 2008-04-16 11:17:04.000000000 -0400
@@ -246,6 +246,7 @@ SECTIONS
/DISCARD/ : {
*(.exitcall.exit)
*(.eh_frame)
+ *(__discard)
}
STABS_DEBUG
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 22/27] Immediate Values - x86 Optimization NMI and MCE support
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (20 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 21/27] Add __discard section to x86 Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI " Mathieu Desnoyers
` (4 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Andi Kleen, H. Peter Anvin, Chuck Ebbert,
Christoph Hellwig, Jeremy Fitzhardinge, Thomas Gleixner,
Ingo Molnar
[-- Attachment #1: immediate-values-x86-optimization-nmi-mce-support.patch --]
[-- Type: text/plain, Size: 17320 bytes --]
x86 optimization of the immediate values: it uses a movl with code patching to
set/unset the value that populates the register used as the variable source.
It uses a breakpoint to bypass the instruction being changed, which lessens the
interrupt latency of the operation and protects against NMIs and MCEs.
- More reentrant immediate value: uses a breakpoint. It needs to know the
instruction's first byte. This is why we keep the "instruction size"
variable, so we can also support REX-prefixed instructions.
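In short, the live-update sequence implemented by arch_imv_update() below is
(a sketch only; page remapping of the bypass and error handling are omitted,
and the my_* name is a placeholder):

static void my_imv_live_update(unsigned long insn, unsigned char opcode_size,
			       const struct __imv *imv)
{
	unsigned char bp = BREAKPOINT_INSTRUCTION;

	/* 1) divert every CPU hitting the site to the nop-padded bypass */
	text_poke((void *)insn, &bp, 1);
	wmb();
	/* 2) serialize all CPUs so no stale prefetched copy survives */
	on_each_cpu(imv_synchronize_core, NULL, 1, 1);
	/* 3) patch the immediate operand behind the breakpoint */
	text_poke((void *)(insn + opcode_size), (void *)imv->var, imv->size);
	wmb();
	/* 4) restore the original first opcode byte (copy kept in the bypass) */
	text_poke((void *)insn, (unsigned char *)bypass_eip, 1);
	/* 5) wait for in-flight int3 handlers before unregistering the notifier */
	synchronize_sched();
}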
Changelog:
- Change the immediate.c update code to support variable-length opcodes.
- Use text_poke_early with cr0 WP save/restore to patch the bypass. We are doing
non-atomic writes to a code region only touched by us (nobody can execute it
since we are protected by the imv_mutex).
- Add x86_64 support, ready for i386+x86_64 -> x86 merge.
- Use asm-x86/asm.h.
- Use imv_* instead of immediate_*.
- Use kernel_wp_disable/enable instead of save/restore.
- Fix the 1-byte immediate value so it declares its instruction size.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Andi Kleen <ak@muc.de>
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: Chuck Ebbert <cebbert@redhat.com>
CC: Christoph Hellwig <hch@infradead.org>
CC: Jeremy Fitzhardinge <jeremy@goop.org>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
---
arch/x86/kernel/Makefile | 1
arch/x86/kernel/immediate.c | 291 ++++++++++++++++++++++++++++++++++++++++++++
arch/x86/kernel/traps_32.c | 9 -
include/asm-x86/immediate.h | 48 ++++++-
4 files changed, 339 insertions(+), 10 deletions(-)
Index: linux-2.6-sched-devel/include/asm-x86/immediate.h
===================================================================
--- linux-2.6-sched-devel.orig/include/asm-x86/immediate.h 2008-04-16 12:01:00.000000000 -0400
+++ linux-2.6-sched-devel/include/asm-x86/immediate.h 2008-04-16 12:01:01.000000000 -0400
@@ -12,6 +12,18 @@
#include <asm/asm.h>
+struct __imv {
+ unsigned long var; /* Pointer to the identifier variable of the
+ * immediate value
+ */
+ unsigned long imv; /*
+ * Pointer to the memory location of the
+ * immediate value within the instruction.
+ */
+ unsigned char size; /* Type size. */
+ unsigned char insn_size;/* Instruction size. */
+} __attribute__ ((packed));
+
/**
* imv_read - read immediate variable
* @name: immediate value name
@@ -26,6 +38,11 @@
* what will generate an instruction with 8 bytes immediate value (not the REX.W
* prefixed one that loads a sign extended 32 bits immediate value in a r64
* register).
+ *
+ * Create the instruction in a discarded section to calculate its size. This is
+ * how we can align the beginning of the instruction on an address that will
+ * permit atomic modification of the immediate value without knowing the size of
+ * the opcode used by the compiler. The operand size is known in advance.
*/
#define imv_read(name) \
({ \
@@ -33,9 +50,14 @@
BUILD_BUG_ON(sizeof(value) > 8); \
switch (sizeof(value)) { \
case 1: \
- asm(".section __imv,\"aw\",@progbits\n\t" \
+ asm(".section __discard,\"\",@progbits\n\t" \
+ "1:\n\t" \
+ "mov $0,%0\n\t" \
+ "2:\n\t" \
+ ".previous\n\t" \
+ ".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
- ".byte %c2\n\t" \
+ ".byte %c2, (2b-1b)\n\t" \
".previous\n\t" \
"mov $0,%0\n\t" \
"3:\n\t" \
@@ -45,10 +67,16 @@
break; \
case 2: \
case 4: \
- asm(".section __imv,\"aw\",@progbits\n\t" \
+ asm(".section __discard,\"\",@progbits\n\t" \
+ "1:\n\t" \
+ "mov $0,%0\n\t" \
+ "2:\n\t" \
+ ".previous\n\t" \
+ ".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
- ".byte %c2\n\t" \
+ ".byte %c2, (2b-1b)\n\t" \
".previous\n\t" \
+ ".org . + ((-.-(2b-1b)) & (%c2-1)), 0x90\n\t" \
"mov $0,%0\n\t" \
"3:\n\t" \
: "=r" (value) \
@@ -60,10 +88,16 @@
value = name##__imv; \
break; \
} \
- asm(".section __imv,\"aw\",@progbits\n\t" \
+ asm(".section __discard,\"\",@progbits\n\t" \
+ "1:\n\t" \
+ "mov $0xFEFEFEFE01010101,%0\n\t" \
+ "2:\n\t" \
+ ".previous\n\t" \
+ ".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
- ".byte %c2\n\t" \
+ ".byte %c2, (2b-1b)\n\t" \
".previous\n\t" \
+ ".org . + ((-.-(2b-1b)) & (%c2-1)), 0x90\n\t" \
"mov $0xFEFEFEFE01010101,%0\n\t" \
"3:\n\t" \
: "=r" (value) \
@@ -74,4 +108,6 @@
value; \
})
+extern int arch_imv_update(const struct __imv *imv, int early);
+
#endif /* _ASM_X86_IMMEDIATE_H */
Index: linux-2.6-sched-devel/arch/x86/kernel/traps_32.c
===================================================================
--- linux-2.6-sched-devel.orig/arch/x86/kernel/traps_32.c 2008-04-16 12:01:00.000000000 -0400
+++ linux-2.6-sched-devel/arch/x86/kernel/traps_32.c 2008-04-16 12:01:01.000000000 -0400
@@ -592,7 +592,7 @@ void do_##name(struct pt_regs *regs, lon
}
DO_VM86_ERROR_INFO(0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->ip)
-#ifndef CONFIG_KPROBES
+#if !defined(CONFIG_KPROBES) && !defined(CONFIG_IMMEDIATE)
DO_VM86_ERROR(3, SIGTRAP, "int3", int3)
#endif
DO_VM86_ERROR(4, SIGSEGV, "overflow", overflow)
@@ -857,7 +857,7 @@ void restart_nmi(void)
acpi_nmi_enable();
}
-#ifdef CONFIG_KPROBES
+#if defined(CONFIG_KPROBES) || defined(CONFIG_IMMEDIATE)
void __kprobes do_int3(struct pt_regs *regs, long error_code)
{
trace_hardirqs_fixup();
@@ -866,9 +866,10 @@ void __kprobes do_int3(struct pt_regs *r
== NOTIFY_STOP)
return;
/*
- * This is an interrupt gate, because kprobes wants interrupts
- * disabled. Normal trap handlers don't.
+ * This is an interrupt gate, because kprobes and immediate values wants
+ * interrupts disabled. Normal trap handlers don't.
*/
+
restore_interrupts(regs);
do_trap(3, SIGTRAP, "int3", 1, regs, error_code, NULL);
Index: linux-2.6-sched-devel/arch/x86/kernel/Makefile
===================================================================
--- linux-2.6-sched-devel.orig/arch/x86/kernel/Makefile 2008-04-16 12:01:00.000000000 -0400
+++ linux-2.6-sched-devel/arch/x86/kernel/Makefile 2008-04-16 12:01:01.000000000 -0400
@@ -68,6 +68,7 @@ obj-y += vsmp_64.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_MODULES) += module_$(BITS).o
obj-$(CONFIG_ACPI_SRAT) += srat_32.o
+obj-$(CONFIG_IMMEDIATE) += immediate.o
obj-$(CONFIG_EFI) += efi.o efi_$(BITS).o efi_stub_$(BITS).o
obj-$(CONFIG_DOUBLEFAULT) += doublefault_32.o
obj-$(CONFIG_KGDB) += kgdb.o
Index: linux-2.6-sched-devel/arch/x86/kernel/immediate.c
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-sched-devel/arch/x86/kernel/immediate.c 2008-04-16 12:01:20.000000000 -0400
@@ -0,0 +1,291 @@
+/*
+ * Immediate Value - x86 architecture specific code.
+ *
+ * Rationale
+ *
+ * Required because of :
+ * - Erratum 49 fix for Intel PIII.
+ * - Still present on newer processors : Intel Core 2 Duo Processor for Intel
+ * Centrino Duo Processor Technology Specification Update, AH33.
+ * Unsynchronized Cross-Modifying Code Operations Can Cause Unexpected
+ * Instruction Execution Results.
+ *
+ * Permits immediate value modification by XMC with correct serialization.
+ *
+ * Reentrant for NMI and trap handler instrumentation. Permits XMC to a
+ * location that has preemption enabled because it involves no temporary or
+ * reused data structure.
+ *
+ * Quoting Richard J Moore, source of the information motivating this
+ * implementation which differs from the one proposed by Intel which is not
+ * suitable for kernel context (does not support NMI and would require disabling
+ * interrupts on every CPU for a long period) :
+ *
+ * "There is another issue to consider when looking into using probes other
+ * then int3:
+ *
+ * Intel erratum 54 - Unsynchronized Cross-modifying code - refers to the
+ * practice of modifying code on one processor where another has prefetched
+ * the unmodified version of the code. Intel states that unpredictable general
+ * protection faults may result if a synchronizing instruction (iret, int,
+ * int3, cpuid, etc ) is not executed on the second processor before it
+ * executes the pre-fetched out-of-date copy of the instruction.
+ *
+ * When we became aware of this I had a long discussion with Intel's
+ * microarchitecture guys. It turns out that the reason for this erratum
+ * (which incidentally Intel does not intend to fix) is because the trace
+ * cache - the stream of micro-ops resulting from instruction interpretation -
+ * cannot be guaranteed to be valid. Reading between the lines I assume this
+ * issue arises because of optimization done in the trace cache, where it is
+ * no longer possible to identify the original instruction boundaries. If the
+ * CPU discoverers that the trace cache has been invalidated because of
+ * unsynchronized cross-modification then instruction execution will be
+ * aborted with a GPF. Further discussion with Intel revealed that replacing
+ * the first opcode byte with an int3 would not be subject to this erratum.
+ *
+ * So, is cmpxchg reliable? One has to guarantee more than mere atomicity."
+ *
+ * Overall design
+ *
+ * The algorithm proposed by Intel applies not so well in kernel context: it
+ * would imply disabling interrupts and looping on every CPUs while modifying
+ * the code and would not support instrumentation of code called from interrupt
+ * sources that cannot be disabled.
+ *
+ * Therefore, we use a different algorithm to respect Intel's erratum (see the
+ * quoted discussion above). We make sure that no CPU sees an out-of-date copy
+ * of a pre-fetched instruction by 1 - using a breakpoint, which skips the
+ * instruction that is going to be modified, 2 - issuing an IPI to every CPU to
+ * execute a sync_core(), to make sure that even when the breakpoint is removed,
+ * no cpu could possibly still have the out-of-date copy of the instruction,
+ * modify the now unused 2nd byte of the instruction, and then put back the
+ * original 1st byte of the instruction.
+ *
+ * It has exactly the same intent as the algorithm proposed by Intel, but
+ * it has less side-effects, scales better and supports NMI, SMI and MCE.
+ *
+ * Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
+ */
+
+#include <linux/preempt.h>
+#include <linux/smp.h>
+#include <linux/notifier.h>
+#include <linux/module.h>
+#include <linux/immediate.h>
+#include <linux/kdebug.h>
+#include <linux/rcupdate.h>
+#include <linux/kprobes.h>
+#include <linux/io.h>
+
+#include <asm/cacheflush.h>
+
+#define BREAKPOINT_INSTRUCTION 0xcc
+#define BREAKPOINT_INS_LEN 1
+#define NR_NOPS 10
+
+static unsigned long target_after_int3; /* EIP of the target after the int3 */
+static unsigned long bypass_eip; /* EIP of the bypass. */
+static unsigned long bypass_after_int3; /* EIP after the end-of-bypass int3 */
+static unsigned long after_imv; /*
+ * EIP where to resume after the
+ * single-stepping.
+ */
+
+/*
+ * Internal bypass used during value update. The bypass is skipped by the
+ * function in which it is inserted.
+ * No need to be aligned because we exclude readers from the site during
+ * update.
+ * Layout is:
+ * (10x nop) int3
+ * (maximum size is 2 bytes opcode + 8 bytes immediate value for long on x86_64)
+ * The nops are the target replaced by the instruction to single-step.
+ * Align on 16 bytes to make sure the nops fit within a single page so remapping
+ * it can be done easily.
+ */
+static inline void _imv_bypass(unsigned long *bypassaddr,
+ unsigned long *breaknextaddr)
+{
+ asm volatile("jmp 2f;\n\t"
+ ".align 16;\n\t"
+ "0:\n\t"
+ ".space 10, 0x90;\n\t"
+ "1:\n\t"
+ "int3;\n\t"
+ "2:\n\t"
+ "mov $(0b),%0;\n\t"
+ "mov $((1b)+1),%1;\n\t"
+ : "=r" (*bypassaddr),
+ "=r" (*breaknextaddr));
+}
+
+static void imv_synchronize_core(void *info)
+{
+ sync_core(); /* use cpuid to stop speculative execution */
+}
+
+/*
+ * The eip value points right after the breakpoint instruction, in the second
+ * byte of the movl.
+ * Disable preemption in the bypass to make sure no thread will be preempted in
+ * it. We can then use synchronize_sched() to make sure every bypass users have
+ * ended.
+ */
+static int imv_notifier(struct notifier_block *nb,
+ unsigned long val, void *data)
+{
+ enum die_val die_val = (enum die_val) val;
+ struct die_args *args = data;
+
+ if (!args->regs || user_mode_vm(args->regs))
+ return NOTIFY_DONE;
+
+ if (die_val == DIE_INT3) {
+ if (args->regs->ip == target_after_int3) {
+ preempt_disable();
+ args->regs->ip = bypass_eip;
+ return NOTIFY_STOP;
+ } else if (args->regs->ip == bypass_after_int3) {
+ args->regs->ip = after_imv;
+ preempt_enable();
+ return NOTIFY_STOP;
+ }
+ }
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block imv_notify = {
+ .notifier_call = imv_notifier,
+ .priority = 0x7fffffff, /* we need to be notified first */
+};
+
+/**
+ * arch_imv_update - update one immediate value
+ * @imv: pointer of type const struct __imv to update
+ * @early: early boot (1) or normal (0)
+ *
+ * Update one immediate value. Must be called with imv_mutex held.
+ */
+__kprobes int arch_imv_update(const struct __imv *imv, int early)
+{
+ int ret;
+ unsigned char opcode_size = imv->insn_size - imv->size;
+ unsigned long insn = imv->imv - opcode_size;
+ unsigned long len;
+ char *vaddr;
+ struct page *pages[1];
+
+#ifdef CONFIG_KPROBES
+ /*
+ * Fail if a kprobe has been set on this instruction.
+ * (TODO: we could eventually do better and modify all the (possibly
+ * nested) kprobes for this site if kprobes had an API for this.
+ */
+ if (unlikely(!early
+ && *(unsigned char *)insn == BREAKPOINT_INSTRUCTION)) {
+ printk(KERN_WARNING "Immediate value in conflict with kprobe. "
+ "Variable at %p, "
+ "instruction at %p, size %hu\n",
+ (void *)imv->imv,
+ (void *)imv->var, imv->size);
+ return -EBUSY;
+ }
+#endif
+
+ /*
+ * If the variable and the instruction have the same value, there is
+ * nothing to do.
+ */
+ switch (imv->size) {
+ case 1: if (*(uint8_t *)imv->imv
+ == *(uint8_t *)imv->var)
+ return 0;
+ break;
+ case 2: if (*(uint16_t *)imv->imv
+ == *(uint16_t *)imv->var)
+ return 0;
+ break;
+ case 4: if (*(uint32_t *)imv->imv
+ == *(uint32_t *)imv->var)
+ return 0;
+ break;
+#ifdef CONFIG_X86_64
+ case 8: if (*(uint64_t *)imv->imv
+ == *(uint64_t *)imv->var)
+ return 0;
+ break;
+#endif
+ default:return -EINVAL;
+ }
+
+ if (!early) {
+ /* bypass is 10 bytes long for x86_64 long */
+ WARN_ON(imv->insn_size > 10);
+ _imv_bypass(&bypass_eip, &bypass_after_int3);
+
+ after_imv = imv->imv + imv->size;
+
+ /*
+ * Using the _early variants because nobody is executing the
+ * bypass code while we patch it. It is protected by the
+ * imv_mutex. Since we modify the instructions non atomically
+ * (for nops), we have to use the _early variant.
+ * We must however deal with RO pages.
+ * Use a single page : 10 bytes are aligned on 16 bytes
+ * boundaries.
+ */
+ pages[0] = virt_to_page((void *)bypass_eip);
+ vaddr = vmap(pages, 1, VM_MAP, PAGE_KERNEL);
+ BUG_ON(!vaddr);
+ text_poke_early(&vaddr[bypass_eip & ~PAGE_MASK],
+ (void *)insn, imv->insn_size);
+ /*
+ * Fill the rest with nops.
+ */
+ len = NR_NOPS - imv->insn_size;
+ add_nops((void *)
+ &vaddr[(bypass_eip & ~PAGE_MASK) + imv->insn_size],
+ len);
+ vunmap(vaddr);
+
+ target_after_int3 = insn + BREAKPOINT_INS_LEN;
+ /* register_die_notifier has memory barriers */
+ register_die_notifier(&imv_notify);
+ /* The breakpoint will single-step the bypass */
+ text_poke((void *)insn,
+ ((unsigned char[]){BREAKPOINT_INSTRUCTION}), 1);
+ /*
+ * Make sure the breakpoint is set before we continue (visible
+ * to other CPUs and interrupts).
+ */
+ wmb();
+ /*
+ * Execute serializing instruction on each CPU.
+ */
+ ret = on_each_cpu(imv_synchronize_core, NULL, 1, 1);
+ BUG_ON(ret != 0);
+
+ text_poke((void *)(insn + opcode_size), (void *)imv->var,
+ imv->size);
+ /*
+ * Make sure the value can be seen from other CPUs and
+ * interrupts.
+ */
+ wmb();
+ text_poke((void *)insn, (unsigned char *)bypass_eip, 1);
+ /*
+ * Wait for all int3 handlers to end (interrupts are disabled in
+ * int3). This CPU is clearly not in a int3 handler, because
+ * int3 handler is not preemptible and there cannot be any more
+ * int3 handler called for this site, because we placed the
+ * original instruction back. synchronize_sched has memory
+ * barriers.
+ */
+ synchronize_sched();
+ unregister_die_notifier(&imv_notify);
+ /* unregister_die_notifier has memory barriers */
+ } else
+ text_poke_early((void *)imv->imv, (void *)imv->var,
+ imv->size);
+ return 0;
+}
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI MCE support
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (21 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 22/27] Immediate Values - x86 Optimization NMI and MCE support Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 23:09 ` Paul Mackerras
2008-04-16 21:34 ` [RFC patch 24/27] Immediate Values Use Arch NMI and MCE Support Mathieu Desnoyers
` (3 subsequent siblings)
26 siblings, 1 reply; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel
Cc: Mathieu Desnoyers, Rusty Russell, Christoph Hellwig, Paul Mackerras
[-- Attachment #1: immediate-values-powerpc-optimization-nmi-mce-support.patch --]
[-- Type: text/plain, Size: 4966 bytes --]
Use an atomic update for immediate values.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Christoph Hellwig <hch@infradead.org>
CC: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kernel/Makefile | 1
arch/powerpc/kernel/immediate.c | 73 ++++++++++++++++++++++++++++++++++++++++
include/asm-powerpc/immediate.h | 18 +++++++++
3 files changed, 92 insertions(+)
Index: linux-2.6-lttng/arch/powerpc/kernel/immediate.c
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-lttng/arch/powerpc/kernel/immediate.c 2008-03-03 10:23:54.000000000 -0500
@@ -0,0 +1,73 @@
+/*
+ * Powerpc optimized immediate values enabling/disabling.
+ *
+ * Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
+ */
+
+#include <linux/module.h>
+#include <linux/immediate.h>
+#include <linux/string.h>
+#include <linux/kprobes.h>
+#include <asm/cacheflush.h>
+#include <asm/page.h>
+
+#define LI_OPCODE_LEN 2
+
+/**
+ * arch_imv_update - update one immediate value
+ * @imv: pointer of type const struct __imv to update
+ * @early: early boot (1), normal (0)
+ *
+ * Update one immediate value. Must be called with imv_mutex held.
+ */
+int arch_imv_update(const struct __imv *imv, int early)
+{
+#ifdef CONFIG_KPROBES
+ kprobe_opcode_t *insn;
+ /*
+ * Fail if a kprobe has been set on this instruction.
+ * (TODO: we could eventually do better and modify all the (possibly
+ * nested) kprobes for this site if kprobes had an API for this.
+ */
+ switch (imv->size) {
+ case 1: /* The uint8_t points to the 3rd byte of the
+ * instruction */
+ insn = (void *)(imv->imv - 1 - LI_OPCODE_LEN);
+ break;
+ case 2: insn = (void *)(imv->imv - LI_OPCODE_LEN);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (unlikely(!early && *insn == BREAKPOINT_INSTRUCTION)) {
+ printk(KERN_WARNING "Immediate value in conflict with kprobe. "
+ "Variable at %p, "
+ "instruction at %p, size %lu\n",
+ (void *)imv->imv,
+ (void *)imv->var, imv->size);
+ return -EBUSY;
+ }
+#endif
+
+ /*
+ * If the variable and the instruction have the same value, there is
+ * nothing to do.
+ */
+ switch (imv->size) {
+ case 1: if (*(uint8_t *)imv->imv
+ == *(uint8_t *)imv->var)
+ return 0;
+ break;
+ case 2: if (*(uint16_t *)imv->imv
+ == *(uint16_t *)imv->var)
+ return 0;
+ break;
+ default:return -EINVAL;
+ }
+ memcpy((void *)imv->imv, (void *)imv->var,
+ imv->size);
+ flush_icache_range(imv->imv,
+ imv->imv + imv->size);
+ return 0;
+}
Index: linux-2.6-lttng/include/asm-powerpc/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-powerpc/immediate.h 2008-03-03 10:23:54.000000000 -0500
+++ linux-2.6-lttng/include/asm-powerpc/immediate.h 2008-03-03 10:23:54.000000000 -0500
@@ -12,6 +12,16 @@
#include <asm/asm-compat.h>
+struct __imv {
+ unsigned long var; /* Identifier variable of the immediate value */
+ unsigned long imv; /*
+ * Pointer to the memory location that holds
+ * the immediate value within the load immediate
+ * instruction.
+ */
+ unsigned char size; /* Type size. */
+} __attribute__ ((packed));
+
/**
* imv_read - read immediate variable
* @name: immediate value name
@@ -19,6 +29,11 @@
* Reads the value of @name.
* Optimized version of the immediate.
* Do not use in __init and __exit functions. Use _imv_read() instead.
+ * Makes sure the 2 bytes update will be atomic by aligning the immediate
+ * value. Use a normal memory read for the 4 bytes immediate because there is no
+ * way to atomically update it without using a seqlock read side, which would
+ * cost more in term of total i-cache and d-cache space than a simple memory
+ * read.
*/
#define imv_read(name) \
({ \
@@ -40,6 +55,7 @@
PPC_LONG "%c1, ((1f)-2)\n\t" \
".byte 2\n\t" \
".previous\n\t" \
+ ".align 2\n\t" \
"li %0,0\n\t" \
"1:\n\t" \
: "=r" (value) \
@@ -52,4 +68,6 @@
value; \
})
+extern int arch_imv_update(const struct __imv *imv, int early);
+
#endif /* _ASM_POWERPC_IMMEDIATE_H */
Index: linux-2.6-lttng/arch/powerpc/kernel/Makefile
===================================================================
--- linux-2.6-lttng.orig/arch/powerpc/kernel/Makefile 2008-03-03 09:51:27.000000000 -0500
+++ linux-2.6-lttng/arch/powerpc/kernel/Makefile 2008-03-03 10:23:54.000000000 -0500
@@ -45,6 +45,7 @@ obj-$(CONFIG_HIBERNATION) += swsusp.o su
obj64-$(CONFIG_HIBERNATION) += swsusp_asm64.o
obj-$(CONFIG_MODULES) += module_$(CONFIG_WORD_SIZE).o
obj-$(CONFIG_44x) += cpu_setup_44x.o
+obj-$(CONFIG_IMMEDIATE) += immediate.o
ifeq ($(CONFIG_PPC_MERGE),y)
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 24/27] Immediate Values Use Arch NMI and MCE Support
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (22 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI " Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 25/27] Linux Kernel Markers - Use Immediate Values Mathieu Desnoyers
` (2 subsequent siblings)
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel; +Cc: Mathieu Desnoyers
[-- Attachment #1: immediate-values-use-arch-nmi-mce-support.patch --]
[-- Type: text/plain, Size: 4216 bytes --]
Remove the architecture-agnostic update code, now replaced by
architecture-specific atomic instruction updates.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
---
include/linux/immediate.h | 11 ------
kernel/immediate.c | 73 +---------------------------------------------
2 files changed, 3 insertions(+), 81 deletions(-)
Index: linux-2.6-lttng/kernel/immediate.c
===================================================================
--- linux-2.6-lttng.orig/kernel/immediate.c 2008-04-11 09:41:33.000000000 -0400
+++ linux-2.6-lttng/kernel/immediate.c 2008-04-14 18:48:05.000000000 -0400
@@ -19,92 +19,23 @@
#include <linux/mutex.h>
#include <linux/immediate.h>
#include <linux/memory.h>
-#include <linux/cpu.h>
-#include <linux/stop_machine.h>
#include <asm/sections.h>
-#include <asm/cacheflush.h>
/*
* Kernel ready to execute the SMP update that may depend on trap and ipi.
*/
static int imv_early_boot_complete;
-static int wrote_text;
extern struct __imv __start___imv[];
extern struct __imv __stop___imv[];
-static int stop_machine_imv_update(void *imv_ptr)
-{
- struct __imv *imv = imv_ptr;
-
- if (!wrote_text) {
- text_poke((void *)imv->imv, (void *)imv->var, imv->size);
- wrote_text = 1;
- smp_wmb(); /* make sure other cpus see that this has run */
- } else
- sync_core();
-
- flush_icache_range(imv->imv, imv->imv + imv->size);
-
- return 0;
-}
-
/*
* imv_mutex nests inside module_mutex. imv_mutex protects builtin
* immediates and module immediates.
*/
static DEFINE_MUTEX(imv_mutex);
-
-/**
- * apply_imv_update - update one immediate value
- * @imv: pointer of type const struct __imv to update
- *
- * Update one immediate value. Must be called with imv_mutex held.
- * It makes sure all CPUs are not executing the modified code by having them
- * busy looping with interrupts disabled.
- * It does _not_ protect against NMI and MCE (could be a problem with Intel's
- * errata if we use immediate values in their code path).
- */
-static int apply_imv_update(const struct __imv *imv)
-{
- /*
- * If the variable and the instruction have the same value, there is
- * nothing to do.
- */
- switch (imv->size) {
- case 1: if (*(uint8_t *)imv->imv
- == *(uint8_t *)imv->var)
- return 0;
- break;
- case 2: if (*(uint16_t *)imv->imv
- == *(uint16_t *)imv->var)
- return 0;
- break;
- case 4: if (*(uint32_t *)imv->imv
- == *(uint32_t *)imv->var)
- return 0;
- break;
- case 8: if (*(uint64_t *)imv->imv
- == *(uint64_t *)imv->var)
- return 0;
- break;
- default:return -EINVAL;
- }
-
- if (imv_early_boot_complete) {
- kernel_text_lock();
- wrote_text = 0;
- stop_machine_run(stop_machine_imv_update, (void *)imv,
- ALL_CPUS);
- kernel_text_unlock();
- } else
- text_poke_early((void *)imv->imv, (void *)imv->var,
- imv->size);
- return 0;
-}
-
/**
* imv_update_range - Update immediate values in a range
* @begin: pointer to the beginning of the range
@@ -121,7 +52,9 @@ void imv_update_range(const struct __imv
mutex_lock(&imv_mutex);
if (!iter->imv) /* Skip removed __init immediate values */
goto skip;
- ret = apply_imv_update(iter);
+ kernel_text_lock();
+ ret = arch_imv_update(iter, !imv_early_boot_complete);
+ kernel_text_unlock();
if (imv_early_boot_complete && ret)
printk(KERN_WARNING
"Invalid immediate value. "
Index: linux-2.6-lttng/include/linux/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/immediate.h 2008-04-11 09:36:58.000000000 -0400
+++ linux-2.6-lttng/include/linux/immediate.h 2008-04-14 18:46:47.000000000 -0400
@@ -12,17 +12,6 @@
#ifdef CONFIG_IMMEDIATE
-struct __imv {
- unsigned long var; /* Pointer to the identifier variable of the
- * immediate value
- */
- unsigned long imv; /*
- * Pointer to the memory location of the
- * immediate value within the instruction.
- */
- unsigned char size; /* Type size. */
-} __attribute__ ((packed));
-
#include <asm/immediate.h>
/**
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 25/27] Linux Kernel Markers - Use Immediate Values
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (23 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 24/27] Immediate Values Use Arch NMI and MCE Support Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 26/27] Immediate Values - Jump Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 27/27] Markers use imv jump Mathieu Desnoyers
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel; +Cc: Mathieu Desnoyers
[-- Attachment #1: linux-kernel-markers-immediate-values.patch --]
[-- Type: text/plain, Size: 5749 bytes --]
Make markers use immediate values.
Changelog :
- Use imv_* instead of immediate_*.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
---
Documentation/markers.txt | 17 +++++++++++++----
include/linux/marker.h | 16 ++++++++++++----
kernel/marker.c | 8 ++++++--
kernel/module.c | 1 +
4 files changed, 32 insertions(+), 10 deletions(-)
Index: linux-2.6-sched-devel/include/linux/marker.h
===================================================================
--- linux-2.6-sched-devel.orig/include/linux/marker.h 2008-04-16 11:30:51.000000000 -0400
+++ linux-2.6-sched-devel/include/linux/marker.h 2008-04-16 11:30:55.000000000 -0400
@@ -12,6 +12,7 @@
* See the file COPYING for more details.
*/
+#include <linux/immediate.h>
#include <linux/types.h>
struct module;
@@ -42,7 +43,7 @@ struct marker {
const char *format; /* Marker format string, describing the
* variable argument list.
*/
- char state; /* Marker state. */
+ DEFINE_IMV(char, state);/* Immediate value state. */
char ptype; /* probe type : 0 : single, 1 : multi */
/* Probe wrapper */
void (*call)(const struct marker *mdata, void *call_private, ...);
@@ -74,9 +75,16 @@ struct marker {
0, 0, marker_probe_cb, \
{ __mark_empty_function, NULL}, NULL }; \
__mark_check_format(format, ## args); \
- if (unlikely(__mark_##name.state)) { \
- (*__mark_##name.call) \
- (&__mark_##name, call_private, ## args);\
+ if (!generic) { \
+ if (unlikely(imv_read(__mark_##name.state))) \
+ (*__mark_##name.call) \
+ (&__mark_##name, call_private, \
+ ## args); \
+ } else { \
+ if (unlikely(_imv_read(__mark_##name.state))) \
+ (*__mark_##name.call) \
+ (&__mark_##name, call_private, \
+ ## args); \
} \
} while (0)
Index: linux-2.6-sched-devel/kernel/marker.c
===================================================================
--- linux-2.6-sched-devel.orig/kernel/marker.c 2008-04-16 11:30:51.000000000 -0400
+++ linux-2.6-sched-devel/kernel/marker.c 2008-04-16 11:30:51.000000000 -0400
@@ -23,6 +23,7 @@
#include <linux/rcupdate.h>
#include <linux/marker.h>
#include <linux/err.h>
+#include <linux/immediate.h>
extern struct marker __start___markers[];
extern struct marker __stop___markers[];
@@ -542,7 +543,7 @@ static int set_marker(struct marker_entr
*/
smp_wmb();
elem->ptype = (*entry)->ptype;
- elem->state = active;
+ elem->state__imv = active;
return 0;
}
@@ -556,7 +557,7 @@ static int set_marker(struct marker_entr
static void disable_marker(struct marker *elem)
{
/* leave "call" as is. It is known statically. */
- elem->state = 0;
+ elem->state__imv = 0;
elem->single.func = __mark_empty_function;
/* Update the function before setting the ptype */
smp_wmb();
@@ -620,6 +621,9 @@ static void marker_update_probes(void)
marker_update_probe_range(__start___markers, __stop___markers);
/* Markers in modules. */
module_update_markers();
+ /* Update immediate values */
+ core_imv_update();
+ module_imv_update();
}
/**
Index: linux-2.6-sched-devel/Documentation/markers.txt
===================================================================
--- linux-2.6-sched-devel.orig/Documentation/markers.txt 2008-04-16 11:30:32.000000000 -0400
+++ linux-2.6-sched-devel/Documentation/markers.txt 2008-04-16 11:30:51.000000000 -0400
@@ -15,10 +15,12 @@ provide at runtime. A marker can be "on"
(no probe is attached). When a marker is "off" it has no effect, except for
adding a tiny time penalty (checking a condition for a branch) and space
penalty (adding a few bytes for the function call at the end of the
-instrumented function and adds a data structure in a separate section). When a
-marker is "on", the function you provide is called each time the marker is
-executed, in the execution context of the caller. When the function provided
-ends its execution, it returns to the caller (continuing from the marker site).
+instrumented function and adds a data structure in a separate section). The
+immediate values are used to minimize the impact on data cache, encoding the
+condition in the instruction stream. When a marker is "on", the function you
+provide is called each time the marker is executed, in the execution context of
+the caller. When the function provided ends its execution, it returns to the
+caller (continuing from the marker site).
You can put markers at important locations in the code. Markers are
lightweight hooks that can pass an arbitrary number of parameters,
@@ -69,6 +71,13 @@ a printk warning which identifies the in
"Format mismatch for probe probe_name (format), marker (format)"
+* Optimization for a given architecture
+
+To force use of a non-optimized version of the markers, _trace_mark() should be
+used. It takes the same parameters as the normal markers, but it does not use
+the immediate values based on code patching.
+
+
* Probe / marker example
See the example provided in samples/markers/src
Index: linux-2.6-sched-devel/kernel/module.c
===================================================================
--- linux-2.6-sched-devel.orig/kernel/module.c 2008-04-16 11:30:51.000000000 -0400
+++ linux-2.6-sched-devel/kernel/module.c 2008-04-16 11:30:51.000000000 -0400
@@ -2053,6 +2053,7 @@ static struct module *load_module(void _
mod->markers + mod->num_markers);
#endif
#ifdef CONFIG_IMMEDIATE
+ /* Immediate values must be updated after markers */
imv_update_range(mod->immediate,
mod->immediate + mod->num_immediate);
#endif
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 26/27] Immediate Values - Jump
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (24 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 25/27] Linux Kernel Markers - Use Immediate Values Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
2008-04-19 11:41 ` KOSAKI Motohiro
2008-04-16 21:34 ` [RFC patch 27/27] Markers use imv jump Mathieu Desnoyers
26 siblings, 1 reply; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel; +Cc: Mathieu Desnoyers
[-- Attachment #1: immediate-values-jump.patch --]
[-- Type: text/plain, Size: 18684 bytes --]
Adds a new imv_cond() macro that declares a byte read meant to be embedded in
unlikely(imv_cond(var)), so the kernel can dynamically detect patterns such as
mov, test, jne or mov, test, je and patch them with nops and a jump.
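Intended use (a sketch; mark_foo_state is a made-up immediate variable name):

DEFINE_IMV(char, mark_foo_state);	/* the enable flag */

static void foo_hot_path(void)
{
	/*
	 * gcc emits movb/testb/jne (or je) for this test; arch_imv_update()
	 * later replaces it with nops plus an unconditional jmp whose target
	 * depends on the current value of the variable.
	 */
	if (unlikely(imv_cond(mark_foo_state))) {
		/* unlikely slow path, e.g. call the probe */
	}
}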
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
---
arch/x86/kernel/immediate.c | 381 ++++++++++++++++++++++++++++++++--------
include/asm-powerpc/immediate.h | 2
include/asm-x86/immediate.h | 34 +++
include/linux/immediate.h | 11 -
kernel/immediate.c | 6
5 files changed, 359 insertions(+), 75 deletions(-)
Index: linux-2.6-lttng/include/asm-x86/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-x86/immediate.h 2008-04-16 14:04:47.000000000 -0400
+++ linux-2.6-lttng/include/asm-x86/immediate.h 2008-04-16 14:19:13.000000000 -0400
@@ -20,6 +20,7 @@ struct __imv {
* Pointer to the memory location of the
* immediate value within the instruction.
*/
+ int jmp_off; /* offset for jump target */
unsigned char size; /* Type size. */
unsigned char insn_size;/* Instruction size. */
} __attribute__ ((packed));
@@ -57,6 +58,7 @@ struct __imv {
".previous\n\t" \
".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
+ ".int 0\n\t" \
".byte %c2, (2b-1b)\n\t" \
".previous\n\t" \
"mov $0,%0\n\t" \
@@ -74,6 +76,7 @@ struct __imv {
".previous\n\t" \
".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
+ ".int 0\n\t" \
".byte %c2, (2b-1b)\n\t" \
".previous\n\t" \
".org . + ((-.-(2b-1b)) & (%c2-1)), 0x90\n\t" \
@@ -95,6 +98,7 @@ struct __imv {
".previous\n\t" \
".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
+ ".int 0\n\t" \
".byte %c2, (2b-1b)\n\t" \
".previous\n\t" \
".org . + ((-.-(2b-1b)) & (%c2-1)), 0x90\n\t" \
@@ -108,6 +112,34 @@ struct __imv {
value; \
})
-extern int arch_imv_update(const struct __imv *imv, int early);
+/*
+ * Uses %al.
+ * size is 0.
+ * Use in if (unlikely(imv_cond(var)))
+ * Given a char as argument.
+ */
+#define imv_cond(name) \
+ ({ \
+ __typeof__(name##__imv) value; \
+ BUILD_BUG_ON(sizeof(value) > 1); \
+ asm (".section __discard,\"\",@progbits\n\t" \
+ "1:\n\t" \
+ "mov $0,%0\n\t" \
+ "2:\n\t" \
+ ".previous\n\t" \
+ ".section __imv,\"aw\",@progbits\n\t" \
+ _ASM_PTR "%c1, (3f)-1\n\t" \
+ ".int 0\n\t" \
+ ".byte %c2, (2b-1b)\n\t" \
+ ".previous\n\t" \
+ "mov $0,%0\n\t" \
+ "3:\n\t" \
+ : "=a" (value) \
+ : "i" (&name##__imv), \
+ "i" (0)); \
+ value; \
+ })
+
+extern int arch_imv_update(struct __imv *imv, int early);
#endif /* _ASM_X86_IMMEDIATE_H */
Index: linux-2.6-lttng/arch/x86/kernel/immediate.c
===================================================================
--- linux-2.6-lttng.orig/arch/x86/kernel/immediate.c 2008-04-16 14:04:47.000000000 -0400
+++ linux-2.6-lttng/arch/x86/kernel/immediate.c 2008-04-16 14:06:17.000000000 -0400
@@ -80,13 +80,19 @@
#include <asm/cacheflush.h>
#define BREAKPOINT_INSTRUCTION 0xcc
+#define JMP_REL8 0xeb
+#define JMP_REL32 0xe9
+#define INSN_NOP1 0x90
+#define INSN_NOP2 0x89, 0xf6
#define BREAKPOINT_INS_LEN 1
#define NR_NOPS 10
+/*#define DEBUG_IMMEDIATE 1*/
+
static unsigned long target_after_int3; /* EIP of the target after the int3 */
static unsigned long bypass_eip; /* EIP of the bypass. */
static unsigned long bypass_after_int3; /* EIP after the end-of-bypass int3 */
-static unsigned long after_imv; /*
+static unsigned long after_imv; /*
* EIP where to resume after the
* single-stepping.
*/
@@ -142,6 +148,25 @@ static int imv_notifier(struct notifier_
if (die_val == DIE_INT3) {
if (args->regs->ip == target_after_int3) {
+ /* deal with non-relocatable jmp instructions */
+ switch (*(uint8_t *)bypass_eip) {
+ case JMP_REL8: /* eb cb jmp rel8 */
+ args->regs->ip +=
+ *(signed char *)(bypass_eip + 1) + 1;
+ return NOTIFY_STOP;
+ case JMP_REL32: /* e9 cw jmp rel16 (valid on ia32) */
+ /* e9 cd jmp rel32 */
+ args->regs->ip +=
+ *(int *)(bypass_eip + 1) + 4;
+ return NOTIFY_STOP;
+ case INSN_NOP1:
+ /* deal with insertion of nop + jmp_rel32 */
+ if (*((uint8_t *)bypass_eip + 1) == JMP_REL32) {
+ args->regs->ip +=
+ *(int *)(bypass_eip + 2) + 5;
+ return NOTIFY_STOP;
+ }
+ }
preempt_disable();
args->regs->ip = bypass_eip;
return NOTIFY_STOP;
@@ -159,71 +184,107 @@ static struct notifier_block imv_notify
.priority = 0x7fffffff, /* we need to be notified first */
};
-/**
- * arch_imv_update - update one immediate value
- * @imv: pointer of type const struct __imv to update
- * @early: early boot (1) or normal (0)
- *
- * Update one immediate value. Must be called with imv_mutex held.
+/*
+ * returns -1 if not found
+ * return 0 if found.
*/
-__kprobes int arch_imv_update(const struct __imv *imv, int early)
+static inline int detect_mov_test_jne(uint8_t *addr, uint8_t **opcode,
+ uint8_t **jmp_offset, int *offset_len)
{
- int ret;
- unsigned char opcode_size = imv->insn_size - imv->size;
- unsigned long insn = imv->imv - opcode_size;
- unsigned long len;
- char *vaddr;
- struct page *pages[1];
-
-#ifdef CONFIG_KPROBES
- /*
- * Fail if a kprobe has been set on this instruction.
- * (TODO: we could eventually do better and modify all the (possibly
- * nested) kprobes for this site if kprobes had an API for this.
- */
- if (unlikely(!early
- && *(unsigned char *)insn == BREAKPOINT_INSTRUCTION)) {
- printk(KERN_WARNING "Immediate value in conflict with kprobe. "
- "Variable at %p, "
- "instruction at %p, size %hu\n",
- (void *)imv->imv,
- (void *)imv->var, imv->size);
- return -EBUSY;
- }
-#endif
-
- /*
- * If the variable and the instruction have the same value, there is
- * nothing to do.
- */
- switch (imv->size) {
- case 1: if (*(uint8_t *)imv->imv
- == *(uint8_t *)imv->var)
- return 0;
- break;
- case 2: if (*(uint16_t *)imv->imv
- == *(uint16_t *)imv->var)
- return 0;
- break;
- case 4: if (*(uint32_t *)imv->imv
- == *(uint32_t *)imv->var)
+ printk(KERN_DEBUG "Trying at %p %hx %hx %hx %hx %hx %hx\n",
+ addr, addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
+ /* b0 cb movb cb,%al */
+ if (addr[0] != 0xb0)
+ return -1;
+ /* 84 c0 test %al,%al */
+ if (addr[2] != 0x84 || addr[3] != 0xc0)
+ return -1;
+ printk(KERN_DEBUG "Found test %%al,%%al at %p\n", addr + 2);
+ switch (addr[4]) {
+ case 0x75: /* 75 cb jne rel8 */
+ printk(KERN_DEBUG "Found jne rel8 at %p\n", addr + 4);
+ *opcode = addr + 4;
+ *jmp_offset = addr + 5;
+ *offset_len = 1;
+ return 0;
+ case 0x0f:
+ switch (addr[5]) {
+ case 0x85: /* 0F 85 cw jne rel16 (valid on ia32) */
+ /* 0F 85 cd jne rel32 */
+ printk(KERN_DEBUG "Found jne rel16/32 at %p\n",
+ addr + 5);
+ *opcode = addr + 4;
+ *jmp_offset = addr + 6;
+ *offset_len = 4;
return 0;
+ default:
+ return -1;
+ }
break;
-#ifdef CONFIG_X86_64
- case 8: if (*(uint64_t *)imv->imv
- == *(uint64_t *)imv->var)
+ default: return -1;
+ }
+}
+
+/*
+ * returns -1 if not found
+ * return 0 if found.
+ */
+static inline int detect_mov_test_je(uint8_t *addr, uint8_t **opcode,
+ uint8_t **jmp_offset, int *offset_len)
+{
+ /* b0 cb movb cb,%al */
+ if (addr[0] != 0xb0)
+ return -1;
+ /* 84 c0 test %al,%al */
+ if (addr[2] != 0x84 || addr[3] != 0xc0)
+ return -1;
+ printk(KERN_DEBUG "Found test %%al,%%al at %p\n", addr + 2);
+ switch (addr[4]) {
+ case 0x74: /* 74 cb je rel8 */
+ printk(KERN_DEBUG "Found je rel8 at %p\n", addr + 4);
+ *opcode = addr + 4;
+ *jmp_offset = addr + 5;
+ *offset_len = 1;
+ return 0;
+ case 0x0f:
+ switch (addr[5]) {
+ case 0x84: /* 0F 84 cw je rel16 (valid on ia32) */
+ /* 0F 84 cd je rel32 */
+ printk(KERN_DEBUG "Found je rel16/32 at %p\n",
+ addr + 5);
+ *opcode = addr + 4;
+ *jmp_offset = addr + 6;
+ *offset_len = 4;
return 0;
+ default:
+ return -1;
+ }
break;
-#endif
- default:return -EINVAL;
+ default: return -1;
}
+}
+
+static int static_early;
- if (!early) {
- /* bypass is 10 bytes long for x86_64 long */
- WARN_ON(imv->insn_size > 10);
- _imv_bypass(&bypass_eip, &bypass_after_int3);
+/*
+ * Marked noinline because we prefer to have only one _imv_bypass. Not that it
+ * is required, but there is no need to edit two bypasses.
+ */
+static noinline int replace_instruction_safe(uint8_t *addr, uint8_t *newcode,
+ int size)
+{
+ char *vaddr;
+ struct page *pages[1];
+ int len;
+ int ret;
+
+ /* bypass is 10 bytes long for x86_64 long */
+ WARN_ON(size > 10);
+
+ _imv_bypass(&bypass_eip, &bypass_after_int3);
- after_imv = imv->imv + imv->size;
+ if (!static_early) {
+ after_imv = (unsigned long)addr + size;
/*
* Using the _early variants because nobody is executing the
@@ -238,22 +299,23 @@ __kprobes int arch_imv_update(const stru
vaddr = vmap(pages, 1, VM_MAP, PAGE_KERNEL);
BUG_ON(!vaddr);
text_poke_early(&vaddr[bypass_eip & ~PAGE_MASK],
- (void *)insn, imv->insn_size);
+ (void *)addr, size);
/*
* Fill the rest with nops.
*/
- len = NR_NOPS - imv->insn_size;
+ len = NR_NOPS - size;
add_nops((void *)
- &vaddr[(bypass_eip & ~PAGE_MASK) + imv->insn_size],
+ &vaddr[(bypass_eip & ~PAGE_MASK) + size],
len);
vunmap(vaddr);
- target_after_int3 = insn + BREAKPOINT_INS_LEN;
+ target_after_int3 = (unsigned long)addr + BREAKPOINT_INS_LEN;
/* register_die_notifier has memory barriers */
register_die_notifier(&imv_notify);
- /* The breakpoint will single-step the bypass */
- text_poke((void *)insn,
- ((unsigned char[]){BREAKPOINT_INSTRUCTION}), 1);
+ /* The breakpoint will execute the bypass */
+ text_poke((void *)addr,
+ ((unsigned char[]){BREAKPOINT_INSTRUCTION}),
+ BREAKPOINT_INS_LEN);
/*
* Make sure the breakpoint is set before we continue (visible
* to other CPUs and interrupts).
@@ -265,14 +327,18 @@ __kprobes int arch_imv_update(const stru
ret = on_each_cpu(imv_synchronize_core, NULL, 1, 1);
BUG_ON(ret != 0);
- text_poke((void *)(insn + opcode_size), (void *)imv->var,
- imv->size);
+ text_poke((void *)(addr + BREAKPOINT_INS_LEN),
+ &newcode[BREAKPOINT_INS_LEN],
+ size - BREAKPOINT_INS_LEN);
/*
* Make sure the value can be seen from other CPUs and
* interrupts.
*/
wmb();
- text_poke((void *)insn, (unsigned char *)bypass_eip, 1);
+#ifdef DEBUG_IMMEDIATE
+ mdelay(10); /* lets the breakpoint for a while */
+#endif
+ text_poke(addr, newcode, BREAKPOINT_INS_LEN);
/*
* Wait for all int3 handlers to end (interrupts are disabled in
* int3). This CPU is clearly not in a int3 handler, because
@@ -285,7 +351,184 @@ __kprobes int arch_imv_update(const stru
unregister_die_notifier(&imv_notify);
/* unregister_die_notifier has memory barriers */
} else
- text_poke_early((void *)imv->imv, (void *)imv->var,
- imv->size);
+ text_poke_early(addr, newcode, size);
+ return 0;
+}
+
+static int patch_jump_target(struct __imv *imv)
+{
+ uint8_t *opcode, *jmp_offset;
+ int offset_len;
+ int mov_test_j_found = 0;
+
+ if(!detect_mov_test_jne((uint8_t *)imv->imv - 1,
+ &opcode, &jmp_offset, &offset_len)) {
+ imv->insn_size = 1; /* positive logic */
+ mov_test_j_found = 1;
+ } else if(!detect_mov_test_je((uint8_t *)imv->imv - 1,
+ &opcode, &jmp_offset, &offset_len)) {
+ imv->insn_size = 0; /* negative logic */
+ mov_test_j_found = 1;
+ }
+
+ if (mov_test_j_found) {
+ int logicvar = imv->insn_size ? imv->var : !imv->var;
+ int newoff;
+
+ if (offset_len == 1) {
+ imv->jmp_off = *(signed char *)jmp_offset;
+ /* replace with JMP_REL8 opcode. */
+ replace_instruction_safe(opcode,
+ ((unsigned char[]){ JMP_REL8,
+ (logicvar ? (signed char)imv->jmp_off : 0) }),
+ 2);
+ } else {
+ /* replace with nop and JMP_REL16/32 opcode.
+ * It's ok to shrink an instruction, never ok to
+ * grow it afterward. */
+ imv->jmp_off = *(int *)jmp_offset;
+ newoff = logicvar ? (int)imv->jmp_off : 0;
+ replace_instruction_safe(opcode,
+ ((unsigned char[]){ INSN_NOP1, JMP_REL32,
+ ((unsigned char *)&newoff)[0],
+ ((unsigned char *)&newoff)[1],
+ ((unsigned char *)&newoff)[2],
+ ((unsigned char *)&newoff)[3] }),
+ 6);
+ }
+ /* now we can get rid of the movb */
+ replace_instruction_safe((uint8_t *)imv->imv - 1,
+ ((unsigned char[]){ INSN_NOP2 }),
+ 2);
+ /* now we can get rid of the testb */
+ replace_instruction_safe((uint8_t *)imv->imv + 1,
+ ((unsigned char[]){ INSN_NOP2 }),
+ 2);
+ /* remember opcode + 1 to enable the JMP_REL patching */
+ if (offset_len == 1)
+ imv->imv = (unsigned long)opcode + 1;
+ else
+ imv->imv = (unsigned long)opcode + 2; /* skip nop */
+ return 0;
+
+ }
+
+ if (*((uint8_t *)imv->imv - 1) == JMP_REL8) {
+ int logicvar = imv->insn_size ? imv->var : !imv->var;
+
+ printk(KERN_DEBUG "Found JMP_REL8 at %p\n",
+ ((uint8_t *)imv->imv - 1));
+ replace_instruction_safe((uint8_t *)imv->imv - 1,
+ ((unsigned char[]){ JMP_REL8,
+ (logicvar ? (signed char)imv->jmp_off : 0) }),
+ 2);
+ return 0;
+ }
+
+ if (*((uint8_t *)imv->imv - 1) == JMP_REL32) {
+ int logicvar = imv->insn_size ? imv->var : !imv->var;
+ int newoff = logicvar ? (int)imv->jmp_off : 0;
+
+ printk(KERN_DEBUG "Found JMP_REL32 at %p, update with %x\n",
+ ((uint8_t *)imv->imv - 1), newoff);
+ replace_instruction_safe((uint8_t *)imv->imv - 1,
+ ((unsigned char[]){ JMP_REL32,
+ ((unsigned char *)&newoff)[0],
+ ((unsigned char *)&newoff)[1],
+ ((unsigned char *)&newoff)[2],
+ ((unsigned char *)&newoff)[3] }),
+ 5);
+ return 0;
+ }
+
+ /* Nothing known found. */
+ return -1;
+}
+
+/**
+ * arch_imv_update - update one immediate value
+ * @imv: pointer of type const struct __imv to update
+ * @early: early boot (1) or normal (0)
+ *
+ * Update one immediate value. Must be called with imv_mutex held.
+ */
+__kprobes int arch_imv_update(struct __imv *imv, int early)
+{
+ int ret;
+ uint8_t buf[10];
+ unsigned long insn, opcode_size;
+
+ static_early = early;
+
+ /*
+ * If imv_cond is encountered, try to patch it with
+ * patch_jump_target. Continue with normal immediate values if the area
+ * surrounding the instruction is not as expected.
+ */
+ if (imv->size == 0) {
+ ret = patch_jump_target(imv);
+ if (ret) {
+#ifdef DEBUG_IMMEDIATE
+ static int nr_fail;
+ printk("Jump target fallback at %lX, nr fail %d\n",
+ imv->imv, ++nr_fail);
+#endif
+ imv->size = 1;
+ } else {
+#ifdef DEBUG_IMMEDIATE
+ static int nr_success;
+ printk("Jump target at %lX, nr success %d\n",
+ imv->imv, ++nr_success);
+#endif
+ return 0;
+ }
+ }
+
+ opcode_size = imv->insn_size - imv->size;
+ insn = imv->imv - opcode_size;
+
+#ifdef CONFIG_KPROBES
+ /*
+ * Fail if a kprobe has been set on this instruction.
+ * (TODO: we could eventually do better and modify all the (possibly
+ * nested) kprobes for this site if kprobes had an API for this.)
+ */
+ if (unlikely(!early
+ && *(unsigned char *)insn == BREAKPOINT_INSTRUCTION)) {
+ printk(KERN_WARNING "Immediate value in conflict with kprobe. "
+ "Variable at %p, "
+ "instruction at %p, size %hu\n",
+ (void *)imv->var,
+ (void *)imv->imv, imv->size);
+ return -EBUSY;
+ }
+#endif
+
+ /*
+ * If the variable and the instruction have the same value, there is
+ * nothing to do.
+ */
+ switch (imv->size) {
+ case 1: if (*(uint8_t *)imv->imv == *(uint8_t *)imv->var)
+ return 0;
+ break;
+ case 2: if (*(uint16_t *)imv->imv == *(uint16_t *)imv->var)
+ return 0;
+ break;
+ case 4: if (*(uint32_t *)imv->imv == *(uint32_t *)imv->var)
+ return 0;
+ break;
+#ifdef CONFIG_X86_64
+ case 8: if (*(uint64_t *)imv->imv == *(uint64_t *)imv->var)
+ return 0;
+ break;
+#endif
+ default:return -EINVAL;
+ }
+
+ memcpy(buf, (uint8_t *)insn, opcode_size);
+ memcpy(&buf[opcode_size], (void *)imv->var, imv->size);
+ replace_instruction_safe((uint8_t *)insn, buf, imv->insn_size);
+
return 0;
}
Index: linux-2.6-lttng/include/linux/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/immediate.h 2008-04-16 14:04:47.000000000 -0400
+++ linux-2.6-lttng/include/linux/immediate.h 2008-04-16 14:04:48.000000000 -0400
@@ -33,8 +33,7 @@
* Internal update functions.
*/
extern void core_imv_update(void);
-extern void imv_update_range(const struct __imv *begin,
- const struct __imv *end);
+extern void imv_update_range(struct __imv *begin, struct __imv *end);
extern void imv_unref_core_init(void);
extern void imv_unref(struct __imv *begin, struct __imv *end, void *start,
unsigned long size);
@@ -54,6 +53,14 @@ extern void imv_unref(struct __imv *begi
#define imv_read(name) _imv_read(name)
/**
+ * imv_cond - read immediate variable use as condition for if()
+ * @name: immediate value name
+ *
+ * Reads the value of @name.
+ */
+#define imv_cond _imv_read(name)
+
+/**
* imv_set - set immediate variable (with locking)
* @name: immediate value name
* @i: required value
Index: linux-2.6-lttng/kernel/immediate.c
===================================================================
--- linux-2.6-lttng.orig/kernel/immediate.c 2008-04-16 14:04:47.000000000 -0400
+++ linux-2.6-lttng/kernel/immediate.c 2008-04-16 14:04:48.000000000 -0400
@@ -43,10 +43,10 @@ static DEFINE_MUTEX(imv_mutex);
*
* Updates a range of immediates.
*/
-void imv_update_range(const struct __imv *begin,
- const struct __imv *end)
+void imv_update_range(struct __imv *begin,
+ struct __imv *end)
{
- const struct __imv *iter;
+ struct __imv *iter;
int ret;
for (iter = begin; iter < end; iter++) {
mutex_lock(&imv_mutex);
Index: linux-2.6-lttng/include/asm-powerpc/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-powerpc/immediate.h 2008-04-16 14:04:47.000000000 -0400
+++ linux-2.6-lttng/include/asm-powerpc/immediate.h 2008-04-16 14:04:48.000000000 -0400
@@ -68,6 +68,8 @@ struct __imv {
value; \
})
+#define imv_cond(name) imv_read(name)
+
extern int arch_imv_update(const struct __imv *imv, int early);
#endif /* _ASM_POWERPC_IMMEDIATE_H */
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* [RFC patch 27/27] Markers use imv jump
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
` (25 preceding siblings ...)
2008-04-16 21:34 ` [RFC patch 26/27] Immediate Values - Jump Mathieu Desnoyers
@ 2008-04-16 21:34 ` Mathieu Desnoyers
26 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 21:34 UTC (permalink / raw)
To: Ingo Molnar, linux-kernel; +Cc: Mathieu Desnoyers
[-- Attachment #1: markers-use-imv-jump.patch --]
[-- Type: text/plain, Size: 1017 bytes --]
Let markers use the heavily optimized imv_cond() version of immediate values.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
---
include/linux/marker.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6-lttng/include/linux/marker.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/marker.h 2008-04-16 00:16:52.000000000 -0400
+++ linux-2.6-lttng/include/linux/marker.h 2008-04-16 00:17:12.000000000 -0400
@@ -76,7 +76,7 @@ struct marker {
{ __mark_empty_function, NULL}, NULL }; \
__mark_check_format(format, ## args); \
if (!generic) { \
- if (unlikely(imv_read(__mark_##name.state))) \
+ if (unlikely(imv_cond(__mark_##name.state))) \
(*__mark_##name.call) \
(&__mark_##name, call_private, \
## args); \
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI MCE support
2008-04-16 21:34 ` [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI " Mathieu Desnoyers
@ 2008-04-16 23:09 ` Paul Mackerras
2008-04-16 23:33 ` Mathieu Desnoyers
0 siblings, 1 reply; 43+ messages in thread
From: Paul Mackerras @ 2008-04-16 23:09 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: Ingo Molnar, linux-kernel, Rusty Russell, Christoph Hellwig
Mathieu Desnoyers writes:
> Use an atomic update for immediate values.
What is meant by an "atomic" update in this context? AFAICS you are
using memcpy, which is not in any way guaranteed to be atomic.
Paul.
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI MCE support
2008-04-16 23:09 ` Paul Mackerras
@ 2008-04-16 23:33 ` Mathieu Desnoyers
2008-04-17 0:35 ` Paul Mackerras
0 siblings, 1 reply; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-16 23:33 UTC (permalink / raw)
To: Paul Mackerras
Cc: Ingo Molnar, linux-kernel, Rusty Russell, Christoph Hellwig
* Paul Mackerras (paulus@samba.org) wrote:
> Mathieu Desnoyers writes:
>
> > Use an atomic update for immediate values.
>
> What is meant by an "atomic" update in this context? AFAICS you are
> using memcpy, which is not in any way guaranteed to be atomic.
>
> Paul.
I expect memcpy to perform the copy in one memory access, given I put a
.align 2
before the 2 bytes instruction. It makes sure the instruction modified
fits in a single, aligned, memory write.
Or maybe am I expecting too much from memcpy ?
Mathieu
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI MCE support
2008-04-16 23:33 ` Mathieu Desnoyers
@ 2008-04-17 0:35 ` Paul Mackerras
2008-04-17 1:24 ` Mathieu Desnoyers
0 siblings, 1 reply; 43+ messages in thread
From: Paul Mackerras @ 2008-04-17 0:35 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: Ingo Molnar, linux-kernel, Rusty Russell, Christoph Hellwig
Mathieu Desnoyers writes:
> * Paul Mackerras (paulus@samba.org) wrote:
> > Mathieu Desnoyers writes:
> >
> > > Use an atomic update for immediate values.
> >
> > What is meant by an "atomic" update in this context? AFAICS you are
> > using memcpy, which is not in any way guaranteed to be atomic.
> >
> > Paul.
>
> I expect memcpy to perform the copy in one memory access, given I put a
>
> .align 2
>
> before the 2 bytes instruction. It makes sure the instruction modified
> fits in a single, aligned, memory write.
My original question was in the context of the powerpc architecture,
where instructions are always 4 bytes long and aligned. So that's not
an issue.
> Or maybe am I expecting too much from memcpy ?
I don't think memcpy gives you any such guarantees. It would be quite
within its rights to say "it's only a few bytes, I'll do it byte by
byte".
If you really want it to be atomic (which I agree is probably a good
idea), I think the best way to do it is to use an asm to generate a
sth (store halfword) instruction to the immediate field (instruction
address + 2). That's on powerpc of course; I don't know what you
would do on other architectures.
Paul.
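[Editorial illustration only -- not part of the posted patches, and the helper
name is made up.  A sketch of the explicit halfword store suggested above,
assuming the 16-bit immediate field sits at instruction address + 2:

/*
 * Hypothetical sketch: store the new 16-bit immediate directly into the
 * immediate field of the "li" instruction (instruction address + 2).
 * The "=m" constraint lets gcc pick the D(rA) addressing form, and
 * "sth" writes the low 16 bits of the source register in a single
 * memory access.  The caller would still need to flush the icache for
 * the modified instruction afterwards.
 */
static inline void imv_sth(void *insn, unsigned short val)
{
	unsigned short *imm = (unsigned short *)((char *)insn + 2);

	asm volatile("sth %1,%0" : "=m" (*imm) : "r" (val));
}
]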
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI MCE support
2008-04-17 0:35 ` Paul Mackerras
@ 2008-04-17 1:24 ` Mathieu Desnoyers
2008-04-19 23:40 ` Paul E. McKenney
0 siblings, 1 reply; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-17 1:24 UTC (permalink / raw)
To: Paul Mackerras, Paul E. McKenney
Cc: Ingo Molnar, linux-kernel, Rusty Russell, Christoph Hellwig
* Paul Mackerras (paulus@samba.org) wrote:
> Mathieu Desnoyers writes:
>
> > * Paul Mackerras (paulus@samba.org) wrote:
> > > Mathieu Desnoyers writes:
> > >
> > > > Use an atomic update for immediate values.
> > >
> > > What is meant by an "atomic" update in this context? AFAICS you are
> > > using memcpy, which is not in any way guaranteed to be atomic.
> > >
> > > Paul.
> >
> > I expect memcpy to perform the copy in one memory access, given I put a
> >
> > .align 2
> >
> > before the 2 bytes instruction. It makes sure the instruction modified
> > fits in a single, aligned, memory write.
>
> My original question was in the context of the powerpc architecture,
> where instructions are always 4 bytes long and aligned. So that's not
> an issue.
>
Sorry, I meant 4 byte instruction with 2 bytes immediate value, but we
both understand it would be a memory write aligned on 2 bytes since we
only change the immediate value.
> > Or maybe am I expecting too much from memcpy ?
>
> I don't think memcpy gives you any such guarantees. It would be quite
> within its rights to say "it's only a few bytes, I'll do it byte by
> byte".
>
> If you really want it to be atomic (which I agree is probably a good
> idea), I think the best way to do it is to use an asm to generate a
> sth (store halfword) instruction to the immediate field (instruction
> address + 2). That's on powerpc of course; I don't know what you
> would do on other architectures.
>
A simple
*(uint16_t* )destptr = newvalue;
seems to generate the "sth" instruction.
Do you see any reason why the compiler could choose a different, non
atomic assembler primitive ?
quoting Documentation/RCU/whatisRCU.txt :
"In contrast, RCU-based updaters typically take advantage of the fact
that writes to single aligned pointers are atomic on modern CPUs"
Paul E. McKenney could say if I am wrong if I assume that any object
smaller or equal to the architecture pointer size, aligned on a multiple
of its own size, will be read or written atomically.
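As a minimal sketch of that idea (illustrative only, not the patch itself; the
helper name is hypothetical):

/*
 * Hypothetical helper: update a 2-byte immediate field with one aligned
 * store.  The WARN_ON documents the alignment assumption; on powerpc gcc
 * is observed to compile the assignment to a single "sth", as noted
 * above.  The caller still has to flush the icache for the modified
 * instruction.
 */
static inline void imv_update_u16(uint16_t *imm_field, uint16_t newval)
{
	WARN_ON((unsigned long)imm_field & 1);	/* must be 2-byte aligned */
	*imm_field = newval;			/* single aligned halfword store */
}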
Therefore, I would suggest the following replacement patch :
Immediate Values - Powerpc Optimization NMI MCE support
Use an atomic update for immediate values.
- Changelog :
Use a direct assignment instead of memcpy to be sure the update is
atomic.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Christoph Hellwig <hch@infradead.org>
CC: Paul Mackerras <paulus@samba.org>
---
arch/powerpc/kernel/Makefile | 1
arch/powerpc/kernel/immediate.c | 70 ++++++++++++++++++++++++++++++++++++++++
include/asm-powerpc/immediate.h | 18 ++++++++++
3 files changed, 89 insertions(+)
Index: linux-2.6-lttng/arch/powerpc/kernel/immediate.c
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-lttng/arch/powerpc/kernel/immediate.c 2008-04-16 21:22:29.000000000 -0400
@@ -0,0 +1,70 @@
+/*
+ * Powerpc optimized immediate values enabling/disabling.
+ *
+ * Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
+ */
+
+#include <linux/module.h>
+#include <linux/immediate.h>
+#include <linux/string.h>
+#include <linux/kprobes.h>
+#include <asm/cacheflush.h>
+#include <asm/page.h>
+
+#define LI_OPCODE_LEN 2
+
+/**
+ * arch_imv_update - update one immediate value
+ * @imv: pointer of type const struct __imv to update
+ * @early: early boot (1), normal (0)
+ *
+ * Update one immediate value. Must be called with imv_mutex held.
+ */
+int arch_imv_update(const struct __imv *imv, int early)
+{
+#ifdef CONFIG_KPROBES
+ kprobe_opcode_t *insn;
+ /*
+ * Fail if a kprobe has been set on this instruction.
+ * (TODO: we could eventually do better and modify all the (possibly
+ * nested) kprobes for this site if kprobes had an API for this.)
+ */
+ switch (imv->size) {
+ case 1: /* The uint8_t points to the 3rd byte of the
+ * instruction */
+ insn = (void *)(imv->imv - 1 - LI_OPCODE_LEN);
+ break;
+ case 2: insn = (void *)(imv->imv - LI_OPCODE_LEN);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (unlikely(!early && *insn == BREAKPOINT_INSTRUCTION)) {
+ printk(KERN_WARNING "Immediate value in conflict with kprobe. "
+ "Variable at %p, "
+ "instruction at %p, size %lu\n",
+ (void *)imv->var,
+ (void *)imv->imv, imv->size);
+ return -EBUSY;
+ }
+#endif
+
+ /*
+ * If the variable and the instruction have the same value, there is
+ * nothing to do.
+ */
+ switch (imv->size) {
+ case 1: if (*(uint8_t *)imv->imv == *(uint8_t *)imv->var)
+ return 0;
+ *(uint8_t *)imv->imv = *(uint8_t *)imv->var;
+ break;
+ case 2: if (*(uint16_t *)imv->imv == *(uint16_t *)imv->var)
+ return 0;
+ *(uint16_t *)imv->imv = *(uint16_t *)imv->var;
+ break;
+ default:return -EINVAL;
+ }
+ flush_icache_range(imv->imv, imv->imv + imv->size);
+ return 0;
+}
Index: linux-2.6-lttng/include/asm-powerpc/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-powerpc/immediate.h 2008-04-16 12:25:42.000000000 -0400
+++ linux-2.6-lttng/include/asm-powerpc/immediate.h 2008-04-16 20:49:48.000000000 -0400
@@ -12,6 +12,16 @@
#include <asm/asm-compat.h>
+struct __imv {
+ unsigned long var; /* Identifier variable of the immediate value */
+ unsigned long imv; /*
+ * Pointer to the memory location that holds
+ * the immediate value within the load immediate
+ * instruction.
+ */
+ unsigned char size; /* Type size. */
+} __attribute__ ((packed));
+
/**
* imv_read - read immediate variable
* @name: immediate value name
@@ -19,6 +29,11 @@
* Reads the value of @name.
* Optimized version of the immediate.
* Do not use in __init and __exit functions. Use _imv_read() instead.
+ * Makes sure the 2-byte update will be atomic by aligning the immediate
+ * value. Use a normal memory read for the 4-byte immediate because there is no
+ * way to atomically update it without using a seqlock read side, which would
+ * cost more in terms of total i-cache and d-cache space than a simple memory
+ * read.
*/
#define imv_read(name) \
({ \
@@ -40,6 +55,7 @@
PPC_LONG "%c1, ((1f)-2)\n\t" \
".byte 2\n\t" \
".previous\n\t" \
+ ".align 2\n\t" \
"li %0,0\n\t" \
"1:\n\t" \
: "=r" (value) \
@@ -52,4 +68,6 @@
value; \
})
+extern int arch_imv_update(const struct __imv *imv, int early);
+
#endif /* _ASM_POWERPC_IMMEDIATE_H */
Index: linux-2.6-lttng/arch/powerpc/kernel/Makefile
===================================================================
--- linux-2.6-lttng.orig/arch/powerpc/kernel/Makefile 2008-04-16 12:23:07.000000000 -0400
+++ linux-2.6-lttng/arch/powerpc/kernel/Makefile 2008-04-16 12:25:44.000000000 -0400
@@ -45,6 +45,7 @@ obj-$(CONFIG_HIBERNATION) += swsusp.o su
obj64-$(CONFIG_HIBERNATION) += swsusp_asm64.o
obj-$(CONFIG_MODULES) += module_$(CONFIG_WORD_SIZE).o
obj-$(CONFIG_44x) += cpu_setup_44x.o
+obj-$(CONFIG_IMMEDIATE) += immediate.o
ifeq ($(CONFIG_PPC_MERGE),y)
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 15/27] Immediate Values - Documentation
2008-04-16 21:34 ` [RFC patch 15/27] Immediate Values - Documentation Mathieu Desnoyers
@ 2008-04-17 9:52 ` KOSAKI Motohiro
2008-04-17 10:36 ` Adrian Bunk
2008-04-17 12:17 ` [RFC patch 15/27] Immediate Values - Documentation (updated) Mathieu Desnoyers
0 siblings, 2 replies; 43+ messages in thread
From: KOSAKI Motohiro @ 2008-04-17 9:52 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: kosaki.motohiro, Ingo Molnar, linux-kernel, Rusty Russell,
Adrian Bunk, Andi Kleen, Christoph Hellwig, akpm
> +Memory read:
> +644.09±11.39 - 88.16±1.35 = 555.93±11.46 cycles
> +
> +Getppid without memory pressure:
> +1462.09±18.87 - 150.92±1.01 = 1311.17±18.90 cycles
> +
> +Getppid with memory pressure:
> +17113.33±1655.92 - 578.22±269.51 = 16535.11±1677.71 cycles
> +
> +Therefore, if we add 2 markers not based on immediate values to the getppid
> +code, which would add 2 memory reads, we would add
> +2 * 555.93±12.74 = 1111.86±25.48 cycles
sorry..
non ascii character is harmful for some language region user.
Couldn't you write ascii only?
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 15/27] Immediate Values - Documentation
2008-04-17 9:52 ` KOSAKI Motohiro
@ 2008-04-17 10:36 ` Adrian Bunk
2008-04-17 12:56 ` Mathieu Desnoyers
2008-04-17 12:17 ` [RFC patch 15/27] Immediate Values - Documentation (updated) Mathieu Desnoyers
1 sibling, 1 reply; 43+ messages in thread
From: Adrian Bunk @ 2008-04-17 10:36 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: Mathieu Desnoyers, linux-kernel, Rusty Russell, Andi Kleen,
Christoph Hellwig, akpm
On Thu, Apr 17, 2008 at 06:52:54PM +0900, KOSAKI Motohiro wrote:
> > +Memory read:
> > +644.09±11.39 - 88.16±1.35 = 555.93±11.46 cycles
> > +
> > +Getppid without memory pressure:
> > +1462.09±18.87 - 150.92±1.01 = 1311.17±18.90 cycles
> > +
> > +Getppid with memory pressure:
> > +17113.33±1655.92 - 578.22±269.51 = 16535.11±1677.71 cycles
> > +
> > +Therefore, if we add 2 markers not based on immediate values to the getppid
> > +code, which would add 2 memory reads, we would add
> > +2 * 555.93±12.74 = 1111.86±25.48 cycles
>
> sorry..
>
> non ascii character is harmful for some language region user.
> Couldn't you write ascii only?
The kernel is UTF-8, and this shouldn't be a problem.
The problem seems to be that Mathieu's emails don't contain a header
indicating the charset.
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 15/27] Immediate Values - Documentation (updated)
2008-04-17 9:52 ` KOSAKI Motohiro
2008-04-17 10:36 ` Adrian Bunk
@ 2008-04-17 12:17 ` Mathieu Desnoyers
2008-04-18 2:27 ` KOSAKI Motohiro
1 sibling, 1 reply; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-17 12:17 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: Ingo Molnar, linux-kernel, Rusty Russell, Adrian Bunk,
Andi Kleen, Christoph Hellwig, akpm
Changelog:
- Remove imv_set_early (removed from API).
- Use imv_* instead of immediate_*.
- Remove non ascii characters.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Adrian Bunk <bunk@stusta.de>
CC: Andi Kleen <andi@firstfloor.org>
CC: Christoph Hellwig <hch@infradead.org>
CC: mingo@elte.hu
CC: akpm@osdl.org
CC: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
Documentation/immediate.txt | 221 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 221 insertions(+)
Index: linux-2.6-lttng/Documentation/immediate.txt
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-lttng/Documentation/immediate.txt 2008-04-17 08:15:55.000000000 -0400
@@ -0,0 +1,221 @@
+ Using the Immediate Values
+
+ Mathieu Desnoyers
+
+
+This document introduces Immediate Values and their use.
+
+
+* Purpose of immediate values
+
+An immediate value is used to compile into the kernel variables that sit within
+the instruction stream. They are meant to be rarely updated but read often.
+Using immediate values for these variables will save cache lines.
+
+This infrastructure is specialized in supporting dynamic patching of the values
+in the instruction stream when multiple CPUs are running without disturbing the
+normal system behavior.
+
+Compiling code meant to be rarely enabled at runtime can be done using
+if (unlikely(imv_read(var))) as the condition surrounding the code. The
+smallest data type required for the test (an 8-bit char) is preferred, since
+some architectures, such as powerpc, only allow up to 16-bit immediate values.
+
+
+* Usage
+
+In order to use the "immediate" macros, you should include linux/immediate.h.
+
+#include <linux/immediate.h>
+
+DEFINE_IMV(char, this_immediate);
+EXPORT_IMV_SYMBOL(this_immediate);
+
+
+And use, in the body of a function:
+
+Use imv_set(this_immediate) to set the immediate value.
+
+Use imv_read(this_immediate) to read the immediate value.
+
+The immediate mechanism supports inserting multiple instances of the same
+immediate. Immediate values can be put in inline functions, inlined static
+functions, and unrolled loops.
+
+If you have to read the immediate values from a function declared as __init or
+__exit, you should explicitly use _imv_read(), which will fall back on a
+global variable read. Failing to do so will leave a reference to the __init
+section after it is freed (it would generate a modpost warning).
+
+You can choose to set an initial static value to the immediate by using, for
+instance:
+
+DEFINE_IMV(long, myptr) = 10;
+
+
+* Optimization for a given architecture
+
+One can implement optimized immediate values for a given architecture by
+replacing asm-$ARCH/immediate.h.
+
+
+* Performance improvement
+
+
+ * Memory hit for a data-based branch
+
+Here are the results on a 3GHz Pentium 4:
+
+number of tests: 100
+number of branches per test: 100000
+memory hit cycles per iteration (mean): 636.611
+L1 cache hit cycles per iteration (mean): 89.6413
+instruction stream based test, cycles per iteration (mean): 85.3438
+Just getting the pointer from a modulo on a pseudo-random value, doing
+ nothing with it, cycles per iteration (mean): 77.5044
+
+So:
+Base case: 77.50 cycles
+instruction stream based test: +7.8394 cycles
+L1 cache hit based test: +12.1369 cycles
+Memory load based test: +559.1066 cycles
+
+So let's say we have a ping flood coming at
+(14014 packets transmitted, 14014 received, 0% packet loss, time 1826ms)
+7674 packets per second. If we put 2 markers for irq entry/exit, it
+brings us to 15348 markers sites executed per second.
+
+(15348 exec/s) * (559 cycles/exec) / (3G cycles/s) = 0.0029
+We therefore have a 0.29% slowdown just on this case.
+
+Compared to this, the instruction stream based test will cause a
+slowdown of:
+
+(15348 exec/s) * (7.84 cycles/exec) / (3G cycles/s) = 0.00004
+For a 0.004% slowdown.
+
+If we plan to use this for memory allocation, spinlock, and all sorts of
+very high event rate tracing, we can assume it will execute 10 to 100
+times more sites per second, which brings us to 0.4% slowdown with the
+instruction stream based test compared to 29% slowdown with the memory
+load based test on a system with high memory pressure.
+
+
+
+ * Markers impact under heavy memory load
+
+Running a kernel with my LTTng instrumentation set, in a test that
+generates memory pressure (from userspace) by trashing L1 and L2 caches
+between calls to getppid() (note: syscall_trace is active and calls
+a marker upon syscall entry and syscall exit; markers are disarmed).
+This test is done in user-space, so there are some delays due to IRQs
+coming and to the scheduler. (UP 2.6.22-rc6-mm1 kernel, task with -20
+nice level)
+
+My first set of results, linear cache trashing, turned out not to be
+very interesting, because it seems like the linearity of the memset on a
+full array is somehow detected and it does not "really" trash the
+caches.
+
+Now the most interesting result: Random walk L1 and L2 trashing
+surrounding a getppid() call.
+
+- Markers compiled out (but syscall_trace execution forced)
+number of tests: 10000
+No memory pressure
+Reading timestamps takes 108.033 cycles
+getppid: 1681.4 cycles
+With memory pressure
+Reading timestamps takes 102.938 cycles
+getppid: 15691.6 cycles
+
+
+- With the immediate values based markers:
+number of tests: 10000
+No memory pressure
+Reading timestamps takes 108.006 cycles
+getppid: 1681.84 cycles
+With memory pressure
+Reading timestamps takes 100.291 cycles
+getppid: 11793 cycles
+
+
+- With global variables based markers:
+number of tests: 10000
+No memory pressure
+Reading timestamps takes 107.999 cycles
+getppid: 1669.06 cycles
+With memory pressure
+Reading timestamps takes 102.839 cycles
+getppid: 12535 cycles
+
+The result is quite interesting in that the kernel is slower without
+markers than with markers. I explain it by the fact that the data
+accessed is not laid out in the same manner in the cache lines when the
+markers are compiled in or out. It seems that it aligns the function's
+data better to compile-in the markers in this case.
+
+But since the interesting comparison is between the immediate values and
+global variables based markers, and because they share the same memory
+layout, except for the movl being replaced by a movz, we see that the
+global variable based markers (2 markers) add 742 cycles to each system
+call (syscall entry and exit are traced and memory locations for both
+global variables lie on the same cache line).
+
+
+- Test redone with less iterations, but with error estimates
+
+10 runs of 100 iterations each: Tests done on a 3GHz P4. Here I run getppid with
+syscall trace inactive, comparing the case with memory pressure and without
+memory pressure. (sorry, my system is not setup to execute syscall_trace this
+time, but it will make the point anyway).
+
+No memory pressure
+Reading timestamps: 150.92 cycles, std dev. 1.01 cycles
+getppid: 1462.09 cycles, std dev. 18.87 cycles
+
+With memory pressure
+Reading timestamps: 578.22 cycles, std dev. 269.51 cycles
+getppid: 17113.33 cycles, std dev. 1655.92 cycles
+
+
+Now for memory read timing: (10 runs, branches per test: 100000)
+Memory read based branch:
+ 644.09 cycles, std dev. 11.39 cycles
+L1 cache hit based branch:
+ 88.16 cycles, std dev. 1.35 cycles
+
+
+So, now that we have the raw results, let's calculate:
+
+Memory read:
+644.09 +/- 11.39 - 88.16 +/- 1.35 = 555.93 +/- 11.46 cycles
+
+Getppid without memory pressure:
+1462.09 +/- 18.87 - 150.92 +/- 1.01 = 1311.17 +/- 18.90 cycles
+
+Getppid with memory pressure:
+17113.33 +/- 1655.92 - 578.22 +/- 269.51 = 16535.11 +/- 1677.71 cycles
+
+Therefore, if we add 2 markers not based on immediate values to the getppid
+code, which would add 2 memory reads, we would add
+2 * 555.93 +/- 12.74 = 1111.86 +/- 25.48 cycles
+
+Therefore,
+
+1111.86 +/- 25.48 / 16535.11 +/- 1677.71 = 0.0672
+ relative error: sqrt(((25.48/1111.86)^2)+((1677.71/16535.11)^2))
+ = 0.1040
+ absolute error: 0.1040 * 0.0672 = 0.0070
+
+Therefore: 0.0672 +/- 0.0070 * 100% = 6.72 +/- 0.70 %
+
+We can therefore affirm that adding 2 markers to getppid, on a system with high
+memory pressure, would have a performance hit of at least 6.0% on the system
+call time, all within the uncertainty limits of these tests. The same applies to
+other kernel code paths. The smaller those code paths are, the higher the
+impact ratio will be.
+
+Therefore, not only is it interesting to use the immediate values to dynamically
+activate dormant code such as the markers, but I think it should also be
+considered as a replacement for many of the "read-mostly" static variables.
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 15/27] Immediate Values - Documentation
2008-04-17 10:36 ` Adrian Bunk
@ 2008-04-17 12:56 ` Mathieu Desnoyers
0 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-17 12:56 UTC (permalink / raw)
To: Adrian Bunk
Cc: KOSAKI Motohiro, linux-kernel, Rusty Russell, Andi Kleen,
Christoph Hellwig, akpm
* Adrian Bunk (bunk@kernel.org) wrote:
> On Thu, Apr 17, 2008 at 06:52:54PM +0900, KOSAKI Motohiro wrote:
> > > +Memory read:
> > > +644.09±11.39 - 88.16±1.35 = 555.93±11.46 cycles
> > > +
> > > +Getppid without memory pressure:
> > > +1462.09±18.87 - 150.92±1.01 = 1311.17±18.90 cycles
> > > +
> > > +Getppid with memory pressure:
> > > +17113.33±1655.92 - 578.22±269.51 = 16535.11±1677.71 cycles
> > > +
> > > +Therefore, if we add 2 markers not based on immediate values to the getppid
> > > +code, which would add 2 memory reads, we would add
> > > +2 * 555.93±12.74 = 1111.86±25.48 cycles
> >
> > sorry..
> >
> > non ascii character is harmful for some language region user.
> > Couldn't you write ascii only?
>
> The kernel is UTF-8, and this shouldn't be a problem.
>
> The problem seems to be that Mathieu's emails don't contain a header
> indicating the charset.
>
I removed the UTF-8 characters for now. Someday I should fix my
quilt+exit4 setup, when I find the time. :)
Thanks,
Mathieu
> cu
> Adrian
>
> --
>
> "Is there not promise of rain?" Ling Tan asked suddenly out
> of the darkness. There had been need of rain for many days.
> "Only a promise," Lao Er said.
> Pearl S. Buck - Dragon Seed
>
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 15/27] Immediate Values - Documentation (updated)
2008-04-17 12:17 ` [RFC patch 15/27] Immediate Values - Documentation (updated) Mathieu Desnoyers
@ 2008-04-18 2:27 ` KOSAKI Motohiro
0 siblings, 0 replies; 43+ messages in thread
From: KOSAKI Motohiro @ 2008-04-18 2:27 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: kosaki.motohiro, Ingo Molnar, linux-kernel, Rusty Russell,
Adrian Bunk, Andi Kleen, Christoph Hellwig, akpm
> Changelog:
> - Remove imv_set_early (removed from API).
> - Use imv_* instead of immediate_*.
> - Remove non ascii characters.
Looks good to me.
Thanks!
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 16/27] Immediate Values Support init
2008-04-16 21:34 ` [RFC patch 16/27] Immediate Values Support init Mathieu Desnoyers
@ 2008-04-19 11:04 ` KOSAKI Motohiro
2008-04-19 13:24 ` Mathieu Desnoyers
0 siblings, 1 reply; 43+ messages in thread
From: KOSAKI Motohiro @ 2008-04-19 11:04 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: kosaki.motohiro, Ingo Molnar, linux-kernel, Rusty Russell,
Frank Ch. Eigler
> #else
>
> @@ -73,7 +76,9 @@ extern void imv_update_range(const struc
>
> static inline void core_imv_update(void) { }
> static inline void module_imv_update(void) { }
> -
> +static inline void imv_unref_core_init(void) { }
> +static inline void imv_unref_init(struct __imv *begin, struct __imv *end,
> + void *init, unsigned long init_size) { }
> #endif
err.
When CONFIG_IMMEDIATE is turned off, "struct __imv" is not defined,
which causes the following warnings.
include/linux/immediate.h:81: warning: 'struct __imv' declared inside parameter list
include/linux/immediate.h:81: warning: its scope is only this definition or declaration, \
which is probably not what you want
and
> +extern void imv_unref(struct __imv *begin, struct __imv *end, void *start,
> + unsigned long size);
>
> #else
>
> (snip)
> +static inline void imv_unref_init(struct __imv *begin, struct __imv *end,
> + void *init, unsigned long init_size) { }
> #endif
If CONFIG_IMMEDIATE is on, imv_unref() is declared,
but if CONFIG_IMMEDIATE is off, imv_unref_init() is declared instead of imv_unref(),
which causes the following error.
CC kernel/module.o
kernel/module.c: In function 'sys_init_module':
kernel/module.c:2211: error: implicit declaration of function 'imv_unref'
kernel/module.c:2211: error: 'struct module' has no member named 'immediate'
kernel/module.c:2211: error: 'struct module' has no member named 'immediate'
kernel/module.c:2211: error: 'struct module' has no member named 'num_immediate'
make[1]: *** [kernel/module.o] Error 1
and,
in kernel/module.c#sys_init_module(),
immediate member of struct module is used though CONFIG_IMMEDIATE is off.
> imv_unref(mod->immediate, mod->immediate + mod->num_immediate,
> mod->module_init, mod->init_size);
This causes the following error.
CC kernel/module.o
kernel/module.c: In function 'sys_init_module':
kernel/module.c:2211: error: 'struct module' has no member named 'immediate'
kernel/module.c:2211: error: 'struct module' has no member named 'immediate'
kernel/module.c:2211: error: 'struct module' has no member named 'num_immediate'
make[1]: *** [kernel/module.o] Error 1
The patch below fixes these.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
include/linux/immediate.h | 8 ++++++--
include/linux/module.h | 21 +++++++++++++++++++++
kernel/module.c | 3 ++-
3 files changed, 29 insertions(+), 3 deletions(-)
Index: b/include/linux/immediate.h
===================================================================
--- a/include/linux/immediate.h 2008-04-19 19:53:03.000000000 +0900
+++ b/include/linux/immediate.h 2008-04-19 20:04:58.000000000 +0900
@@ -56,6 +56,10 @@ extern void imv_unref(struct __imv *begi
* Generic immediate values: a simple, standard, memory load.
*/
+/* empty declaration to avoid a warning */
+struct __imv {
+};
+
/**
* imv_read - read immediate variable
* @name: immediate value name
@@ -77,8 +81,8 @@ extern void imv_unref(struct __imv *begi
static inline void core_imv_update(void) { }
static inline void module_imv_update(void) { }
static inline void imv_unref_core_init(void) { }
-static inline void imv_unref_init(struct __imv *begin, struct __imv *end,
- void *init, unsigned long init_size) { }
+static inline void imv_unref(struct __imv *begin, struct __imv *end,
+ void *start, unsigned long size) { }
#endif
#define DECLARE_IMV(type, name) extern __typeof__(type) name##__imv
Index: b/include/linux/module.h
===================================================================
--- a/include/linux/module.h 2008-04-19 19:53:03.000000000 +0900
+++ b/include/linux/module.h 2008-04-19 20:22:14.000000000 +0900
@@ -634,4 +634,25 @@ static inline void module_remove_modinfo
#define __MODULE_STRING(x) __stringify(x)
+#ifdef CONFIG_IMMEDIATE
+static inline struct __imv* mod_immediate_address(struct module* mod)
+{
+ return mod->immediate;
+}
+static inline unsigned int mod_num_immediate(struct module* mod)
+{
+ return mod->num_immediate;
+}
+#else
+static inline struct __imv* mod_immediate_address(struct module* mod)
+{
+ return NULL;
+}
+static inline unsigned int mod_num_immediate(struct module* mod)
+{
+ return 0;
+}
+#endif
+
+
#endif /* _LINUX_MODULE_H */
Index: b/kernel/module.c
===================================================================
--- a/kernel/module.c 2008-04-19 19:53:03.000000000 +0900
+++ b/kernel/module.c 2008-04-19 20:23:51.000000000 +0900
@@ -2208,7 +2208,8 @@ sys_init_module(void __user *umod,
/* Drop initial reference. */
module_put(mod);
unwind_remove_table(mod->unwind_info, 1);
- imv_unref(mod->immediate, mod->immediate + mod->num_immediate,
+ imv_unref(mod_immediate_address(mod),
+ mod_immediate_address(mod) + mod_num_immediate(mod),
mod->module_init, mod->init_size);
module_free(mod, mod->module_init);
mod->module_init = NULL;
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 26/27] Immediate Values - Jump
2008-04-16 21:34 ` [RFC patch 26/27] Immediate Values - Jump Mathieu Desnoyers
@ 2008-04-19 11:41 ` KOSAKI Motohiro
2008-04-19 13:25 ` Mathieu Desnoyers
0 siblings, 1 reply; 43+ messages in thread
From: KOSAKI Motohiro @ 2008-04-19 11:41 UTC (permalink / raw)
To: Mathieu Desnoyers; +Cc: kosaki.motohiro, Ingo Molnar, linux-kernel
> Index: linux-2.6-lttng/include/linux/immediate.h
> ===================================================================
> --- linux-2.6-lttng.orig/include/linux/immediate.h 2008-04-16 14:04:47.000000000 -0400
> +++ linux-2.6-lttng/include/linux/immediate.h 2008-04-16 14:04:48.000000000 -0400
> @@ -33,8 +33,7 @@
> * Internal update functions.
> */
> extern void core_imv_update(void);
> -extern void imv_update_range(const struct __imv *begin,
> - const struct __imv *end);
> +extern void imv_update_range(struct __imv *begin, struct __imv *end);
> extern void imv_unref_core_init(void);
> extern void imv_unref(struct __imv *begin, struct __imv *end, void *start,
> unsigned long size);
> @@ -54,6 +53,14 @@ extern void imv_unref(struct __imv *begi
> #define imv_read(name) _imv_read(name)
>
> /**
> + * imv_cond - read immediate variable use as condition for if()
> + * @name: immediate value name
> + *
> + * Reads the value of @name.
> + */
> +#define imv_cond _imv_read(name)
> +
> +/**
err, missing name argument.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
include/linux/immediate.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: b/include/linux/immediate.h
===================================================================
--- a/include/linux/immediate.h 2008-04-19 20:57:19.000000000 +0900
+++ b/include/linux/immediate.h 2008-04-19 21:09:01.000000000 +0900
@@ -62,7 +62,7 @@ struct __imv {
*
* Reads the value of @name.
*/
-#define imv_cond _imv_read(name)
+#define imv_cond(name) _imv_read(name)
/**
* imv_set - set immediate variable (with locking)
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 16/27] Immediate Values Support init
2008-04-19 11:04 ` KOSAKI Motohiro
@ 2008-04-19 13:24 ` Mathieu Desnoyers
2008-04-19 14:06 ` KOSAKI Motohiro
0 siblings, 1 reply; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-19 13:24 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: Ingo Molnar, linux-kernel, Rusty Russell, Frank Ch. Eigler
* KOSAKI Motohiro (kosaki.motohiro@jp.fujitsu.com) wrote:
> > #else
> >
> > @@ -73,7 +76,9 @@ extern void imv_update_range(const struc
> >
> > static inline void core_imv_update(void) { }
> > static inline void module_imv_update(void) { }
> > -
> > +static inline void imv_unref_core_init(void) { }
> > +static inline void imv_unref_init(struct __imv *begin, struct __imv *end,
> > + void *init, unsigned long init_size) { }
> > #endif
>
> err.
> When CONFIG_IMMEDIATE is turned off, "struct __imv" is not defined,
> which causes the following warnings.
>
> include/linux/immediate.h:81: warning: 'struct __imv' declared inside parameter list
> include/linux/immediate.h:81: warning: its scope is only this definition or declaration, \
> which is probably not what you want
>
>
> and
>
> > +extern void imv_unref(struct __imv *begin, struct __imv *end, void *start,
> > + unsigned long size);
> >
> > #else
> >
> > (snip)
> > +static inline void imv_unref_init(struct __imv *begin, struct __imv *end,
> > + void *init, unsigned long init_size) { }
> > #endif
>
> If CONFIG_IMMEDIATE is on, imv_unref() is declared,
> but if CONFIG_IMMEDIATE is off, imv_unref_init() is declared instead of imv_unref(),
> which causes the following error.
>
>
> CC kernel/module.o
> kernel/module.c: In function 'sys_init_module':
> kernel/module.c:2211: error: implicit declaration of function 'imv_unref'
> kernel/module.c:2211: error: 'struct module' has no member named 'immediate'
> kernel/module.c:2211: error: 'struct module' has no member named 'immediate'
> kernel/module.c:2211: error: 'struct module' has no member named 'num_immediate'
> make[1]: *** [kernel/module.o] Error 1
>
>
> and,
>
> in kernel/module.c#sys_init_module(),
> immediate member of struct module is used though CONFIG_IMMEDIATE is off.
>
> > imv_unref(mod->immediate, mod->immediate + mod->num_immediate,
> > mod->module_init, mod->init_size);
>
> This causes the following error.
>
> CC kernel/module.o
> kernel/module.c: In function 'sys_init_module':
> kernel/module.c:2211: error: 'struct module' has no member named 'immediate'
> kernel/module.c:2211: error: 'struct module' has no member named 'immediate'
> kernel/module.c:2211: error: 'struct module' has no member named 'num_immediate'
> make[1]: *** [kernel/module.o] Error 1
>
>
> The patch below fixes these.
>
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
>
> ---
> include/linux/immediate.h | 8 ++++++--
> include/linux/module.h | 21 +++++++++++++++++++++
> kernel/module.c | 3 ++-
> 3 files changed, 29 insertions(+), 3 deletions(-)
>
> Index: b/include/linux/immediate.h
> ===================================================================
> --- a/include/linux/immediate.h 2008-04-19 19:53:03.000000000 +0900
> +++ b/include/linux/immediate.h 2008-04-19 20:04:58.000000000 +0900
> @@ -56,6 +56,10 @@ extern void imv_unref(struct __imv *begi
> * Generic immediate values: a simple, standard, memory load.
> */
>
> +/* empty declaration to avoid a warning */
> +struct __imv {
> +};
> +
I prefer to add an ifdef CONFIG_IMMEDIATE to module.c to follow what I
have already done previously. Defining this empty structure is a bit
odd. Here is the updated patch.
Thanks for testing/reporting this.
Mathieu
Immediate Values Support init
Supports placing immediate values in init code
We need to put the immediate values in RW data section so we can edit them
before init section unload.
This code puts NULL pointers in lieu of original pointer referencing init code
before the init sections are freed, both in the core kernel and in modules.
TODO : support __exit section.
Changelog:
- Fix !CONFIG_IMMEDIATE
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: "Frank Ch. Eigler" <fche@redhat.com>
CC: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
Documentation/immediate.txt | 8 ++++----
include/asm-generic/vmlinux.lds.h | 8 ++++----
include/asm-powerpc/immediate.h | 4 ++--
include/asm-x86/immediate.h | 6 +++---
include/linux/immediate.h | 4 ++++
include/linux/module.h | 2 +-
init/main.c | 1 +
kernel/immediate.c | 31 +++++++++++++++++++++++++++++--
kernel/module.c | 4 ++++
9 files changed, 52 insertions(+), 16 deletions(-)
Index: linux-2.6-lttng/kernel/immediate.c
===================================================================
--- linux-2.6-lttng.orig/kernel/immediate.c 2008-04-19 09:10:20.000000000 -0400
+++ linux-2.6-lttng/kernel/immediate.c 2008-04-19 09:20:53.000000000 -0400
@@ -22,6 +22,7 @@
#include <linux/cpu.h>
#include <linux/stop_machine.h>
+#include <asm/sections.h>
#include <asm/cacheflush.h>
/*
@@ -30,8 +31,8 @@
static int imv_early_boot_complete;
static int wrote_text;
-extern const struct __imv __start___imv[];
-extern const struct __imv __stop___imv[];
+extern struct __imv __start___imv[];
+extern struct __imv __stop___imv[];
static int stop_machine_imv_update(void *imv_ptr)
{
@@ -118,6 +119,8 @@ void imv_update_range(const struct __imv
int ret;
for (iter = begin; iter < end; iter++) {
mutex_lock(&imv_mutex);
+ if (!iter->imv) /* Skip removed __init immediate values */
+ goto skip;
ret = apply_imv_update(iter);
if (imv_early_boot_complete && ret)
printk(KERN_WARNING
@@ -126,6 +129,7 @@ void imv_update_range(const struct __imv
"instruction at %p, size %hu\n",
(void *)iter->imv,
(void *)iter->var, iter->size);
+skip:
mutex_unlock(&imv_mutex);
}
}
@@ -143,6 +147,29 @@ void core_imv_update(void)
}
EXPORT_SYMBOL_GPL(core_imv_update);
+/**
+ * imv_unref
+ *
+ * Deactivate any immediate value reference pointing into the code region in the
+ * range start to start + size.
+ */
+void imv_unref(struct __imv *begin, struct __imv *end, void *start,
+ unsigned long size)
+{
+ struct __imv *iter;
+
+ for (iter = begin; iter < end; iter++)
+ if (iter->imv >= (unsigned long)start
+ && iter->imv < (unsigned long)start + size)
+ iter->imv = 0UL;
+}
+
+void imv_unref_core_init(void)
+{
+ imv_unref(__start___imv, __stop___imv, __init_begin,
+ (unsigned long)__init_end - (unsigned long)__init_begin);
+}
+
void __init imv_init_complete(void)
{
imv_early_boot_complete = 1;
Index: linux-2.6-lttng/kernel/module.c
===================================================================
--- linux-2.6-lttng.orig/kernel/module.c 2008-04-19 09:10:20.000000000 -0400
+++ linux-2.6-lttng/kernel/module.c 2008-04-19 09:20:55.000000000 -0400
@@ -2208,6 +2208,10 @@ sys_init_module(void __user *umod,
/* Drop initial reference. */
module_put(mod);
unwind_remove_table(mod->unwind_info, 1);
+#ifdef CONFIG_IMMEDIATE
+ imv_unref(mod->immediate, mod->immediate + mod->num_immediate,
+ mod->module_init, mod->init_size);
+#endif
module_free(mod, mod->module_init);
mod->module_init = NULL;
mod->init_size = 0;
Index: linux-2.6-lttng/include/linux/module.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/module.h 2008-04-19 09:10:20.000000000 -0400
+++ linux-2.6-lttng/include/linux/module.h 2008-04-19 09:20:46.000000000 -0400
@@ -357,7 +357,7 @@ struct module
keeping pointers to this stuff */
char *args;
#ifdef CONFIG_IMMEDIATE
- const struct __imv *immediate;
+ struct __imv *immediate;
unsigned int num_immediate;
#endif
#ifdef CONFIG_MARKERS
Index: linux-2.6-lttng/include/asm-generic/vmlinux.lds.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-generic/vmlinux.lds.h 2008-04-19 09:10:20.000000000 -0400
+++ linux-2.6-lttng/include/asm-generic/vmlinux.lds.h 2008-04-19 09:10:20.000000000 -0400
@@ -52,7 +52,10 @@
. = ALIGN(8); \
VMLINUX_SYMBOL(__start___markers) = .; \
*(__markers) \
- VMLINUX_SYMBOL(__stop___markers) = .;
+ VMLINUX_SYMBOL(__stop___markers) = .; \
+ VMLINUX_SYMBOL(__start___imv) = .; \
+ *(__imv) /* Immediate values: pointers */ \
+ VMLINUX_SYMBOL(__stop___imv) = .;
#define RO_DATA(align) \
. = ALIGN((align)); \
@@ -61,9 +64,6 @@
*(.rodata) *(.rodata.*) \
*(__vermagic) /* Kernel version magic */ \
*(__markers_strings) /* Markers: strings */ \
- VMLINUX_SYMBOL(__start___imv) = .; \
- *(__imv) /* Immediate values: pointers */ \
- VMLINUX_SYMBOL(__stop___imv) = .; \
} \
\
.rodata1 : AT(ADDR(.rodata1) - LOAD_OFFSET) { \
Index: linux-2.6-lttng/include/linux/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/immediate.h 2008-04-19 09:10:20.000000000 -0400
+++ linux-2.6-lttng/include/linux/immediate.h 2008-04-19 09:21:34.000000000 -0400
@@ -46,6 +46,9 @@ struct __imv {
extern void core_imv_update(void);
extern void imv_update_range(const struct __imv *begin,
const struct __imv *end);
+extern void imv_unref_core_init(void);
+extern void imv_unref(struct __imv *begin, struct __imv *end, void *start,
+ unsigned long size);
#else
@@ -73,6 +76,7 @@ extern void imv_update_range(const struc
static inline void core_imv_update(void) { }
static inline void module_imv_update(void) { }
+static inline void imv_unref_core_init(void) { }
#endif
Index: linux-2.6-lttng/init/main.c
===================================================================
--- linux-2.6-lttng.orig/init/main.c 2008-04-19 09:10:20.000000000 -0400
+++ linux-2.6-lttng/init/main.c 2008-04-19 09:10:20.000000000 -0400
@@ -776,6 +776,7 @@ static void run_init_process(char *init_
*/
static int noinline init_post(void)
{
+ imv_unref_core_init();
free_initmem();
unlock_kernel();
mark_rodata_ro();
Index: linux-2.6-lttng/include/asm-x86/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-x86/immediate.h 2008-04-19 09:10:20.000000000 -0400
+++ linux-2.6-lttng/include/asm-x86/immediate.h 2008-04-19 09:20:54.000000000 -0400
@@ -33,7 +33,7 @@
BUILD_BUG_ON(sizeof(value) > 8); \
switch (sizeof(value)) { \
case 1: \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
".byte %c2\n\t" \
".previous\n\t" \
@@ -45,7 +45,7 @@
break; \
case 2: \
case 4: \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
".byte %c2\n\t" \
".previous\n\t" \
@@ -60,7 +60,7 @@
value = name##__imv; \
break; \
} \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
_ASM_PTR "%c1, (3f)-%c2\n\t" \
".byte %c2\n\t" \
".previous\n\t" \
Index: linux-2.6-lttng/include/asm-powerpc/immediate.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-powerpc/immediate.h 2008-04-19 09:10:20.000000000 -0400
+++ linux-2.6-lttng/include/asm-powerpc/immediate.h 2008-04-19 09:20:54.000000000 -0400
@@ -26,7 +26,7 @@
BUILD_BUG_ON(sizeof(value) > 8); \
switch (sizeof(value)) { \
case 1: \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
PPC_LONG "%c1, ((1f)-1)\n\t" \
".byte 1\n\t" \
".previous\n\t" \
@@ -36,7 +36,7 @@
: "i" (&name##__imv)); \
break; \
case 2: \
- asm(".section __imv,\"a\",@progbits\n\t" \
+ asm(".section __imv,\"aw\",@progbits\n\t" \
PPC_LONG "%c1, ((1f)-2)\n\t" \
".byte 2\n\t" \
".previous\n\t" \
Index: linux-2.6-lttng/Documentation/immediate.txt
===================================================================
--- linux-2.6-lttng.orig/Documentation/immediate.txt 2008-04-19 09:10:20.000000000 -0400
+++ linux-2.6-lttng/Documentation/immediate.txt 2008-04-19 09:10:20.000000000 -0400
@@ -42,10 +42,10 @@ The immediate mechanism supports inserti
immediate. Immediate values can be put in inline functions, inlined static
functions, and unrolled loops.
-If you have to read the immediate values from a function declared as __init or
-__exit, you should explicitly use _imv_read(), which will fall back on a
-global variable read. Failing to do so will leave a reference to the __init
-section after it is freed (it would generate a modpost warning).
+If you have to read the immediate values from a function declared as __exit, you
+should explicitly use _imv_read(), which will fall back on a global variable
+read. Failing to do so will leave a reference to the __exit section in kernel
+without module unload support. imv_read() in the __init section is supported.
You can choose to set an initial static value to the immediate by using, for
instance:
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 26/27] Immediate Values - Jump
2008-04-19 11:41 ` KOSAKI Motohiro
@ 2008-04-19 13:25 ` Mathieu Desnoyers
0 siblings, 0 replies; 43+ messages in thread
From: Mathieu Desnoyers @ 2008-04-19 13:25 UTC (permalink / raw)
To: KOSAKI Motohiro; +Cc: Ingo Molnar, linux-kernel
* KOSAKI Motohiro (kosaki.motohiro@jp.fujitsu.com) wrote:
> > Index: linux-2.6-lttng/include/linux/immediate.h
> > ===================================================================
> > --- linux-2.6-lttng.orig/include/linux/immediate.h 2008-04-16 14:04:47.000000000 -0400
> > +++ linux-2.6-lttng/include/linux/immediate.h 2008-04-16 14:04:48.000000000 -0400
> > @@ -33,8 +33,7 @@
> > * Internal update functions.
> > */
> > extern void core_imv_update(void);
> > -extern void imv_update_range(const struct __imv *begin,
> > - const struct __imv *end);
> > +extern void imv_update_range(struct __imv *begin, struct __imv *end);
> > extern void imv_unref_core_init(void);
> > extern void imv_unref(struct __imv *begin, struct __imv *end, void *start,
> > unsigned long size);
> > @@ -54,6 +53,14 @@ extern void imv_unref(struct __imv *begi
> > #define imv_read(name) _imv_read(name)
> >
> > /**
> > + * imv_cond - read immediate variable use as condition for if()
> > + * @name: immediate value name
> > + *
> > + * Reads the value of @name.
> > + */
> > +#define imv_cond _imv_read(name)
> > +
> > +/**
>
> err, missing name argument.
>
Thanks, I merged it into my patchset.
Mathieu
>
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
>
> ---
> include/linux/immediate.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> Index: b/include/linux/immediate.h
> ===================================================================
> --- a/include/linux/immediate.h 2008-04-19 20:57:19.000000000 +0900
> +++ b/include/linux/immediate.h 2008-04-19 21:09:01.000000000 +0900
> @@ -62,7 +62,7 @@ struct __imv {
> *
> * Reads the value of @name.
> */
> -#define imv_cond _imv_read(name)
> +#define imv_cond(name) _imv_read(name)
>
> /**
> * imv_set - set immediate variable (with locking)
>
>
>
--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [RFC patch 16/27] Immediate Values Support init
2008-04-19 13:24 ` Mathieu Desnoyers
@ 2008-04-19 14:06 ` KOSAKI Motohiro
0 siblings, 0 replies; 43+ messages in thread
From: KOSAKI Motohiro @ 2008-04-19 14:06 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: kosaki.motohiro, Ingo Molnar, linux-kernel, Rusty Russell,
Frank Ch. Eigler
> I prefer to add an ifdef CONFIG_IMMEDIATE to module.c to follow what I
> have already done previously. Defining this empty structure is a bit
> odd. Here is the updated patch.
>
> Thanks for testing/reporting this.
OK.
I tested and confirmed that your latest patch solves the problem I reported.
Thanks.
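As a rough sketch of the approach discussed above (the field names
mod->immediate and mod->num_immediate are assumptions for illustration, not
quoted from the patch), the idea is to guard the immediate-value handling in
kernel/module.c with the config option rather than provide an empty structure
when CONFIG_IMMEDIATE is disabled:

	#ifdef CONFIG_IMMEDIATE
		/* Patch the module's immediate values after relocation. */
		imv_update_range(mod->immediate,
				 mod->immediate + mod->num_immediate);
	#endif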
* Re: [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI MCE support
2008-04-17 1:24 ` Mathieu Desnoyers
@ 2008-04-19 23:40 ` Paul E. McKenney
0 siblings, 0 replies; 43+ messages in thread
From: Paul E. McKenney @ 2008-04-19 23:40 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: Paul Mackerras, Ingo Molnar, linux-kernel, Rusty Russell,
Christoph Hellwig
On Wed, Apr 16, 2008 at 09:24:20PM -0400, Mathieu Desnoyers wrote:
> * Paul Mackerras (paulus@samba.org) wrote:
> > Mathieu Desnoyers writes:
> >
> > > * Paul Mackerras (paulus@samba.org) wrote:
> > > > Mathieu Desnoyers writes:
> > > >
> > > > > Use an atomic update for immediate values.
> > > >
> > > > What is meant by an "atomic" update in this context? AFAICS you are
> > > > using memcpy, which is not in any way guaranteed to be atomic.
> > > >
> > > > Paul.
> > >
> > > I expect memcpy to perform the copy in one memory access, given I put a
> > >
> > > .align 2
> > >
> > > before the 2-byte instruction. It makes sure the modified instruction
> > > fits in a single, aligned memory write.
> >
> > My original question was in the context of the powerpc architecture,
> > where instructions are always 4 bytes long and aligned. So that's not
> > an issue.
> >
>
> Sorry, I meant a 4-byte instruction with a 2-byte immediate value, but we
> both understand it would be a 2-byte-aligned memory write since we only
> change the immediate value.
>
> > > Or maybe am I expecting too much from memcpy ?
> >
> > I don't think memcpy gives you any such guarantees. It would be quite
> > within its rights to say "it's only a few bytes, I'll do it byte by
> > byte".
> >
> > If you really want it to be atomic (which I agree is probably a good
> > idea), I think the best way to do it is to use an asm to generate a
> > sth (store halfword) instruction to the immediate field (instruction
> > address + 2). That's on powerpc of course; I don't know what you
> > would do on other architectures.
> >
>
> A simple
>
> *(uint16_t* )destptr = newvalue;
>
> seems to generate the "sth" instruction.
>
> Do you see any reason why the compiler could choose a different,
> non-atomic assembler primitive?
>
> quoting Documentation/RCU/whatisRCU.txt :
>
> "In contrast, RCU-based updaters typically take advantage of the fact
> that writes to single aligned pointers are atomic on modern CPUs"
>
> Paul E. McKenney can correct me if I am wrong in assuming that any object
> smaller than or equal to the architecture pointer size, aligned on a
> multiple of its own size, will be read or written atomically.
There have been CPUs in the past for which this was false. I am not aware
of any these days, but I would need to ask the architecture maintainers.
A lot depends on the compiler as well as the CPU, of course. :-(
Thanx, Paul
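As a concrete illustration of the two options discussed above (a sketch only,
not part of the patch): a plain or volatile halfword store relies on the
compiler emitting a single sth for an aligned uint16_t, while an explicit
inline asm, as Paul Mackerras suggested, guarantees it. The asm below follows
the style of the powerpc I/O accessors:

	/* Option 1: volatile halfword store; the compiler is expected to
	 * emit a single sth for an aligned uint16_t. */
	static inline void imv_sth_c(uint16_t *destptr, uint16_t newvalue)
	{
		*(volatile uint16_t *)destptr = newvalue;
	}

	/* Option 2: force the sth explicitly. */
	static inline void imv_sth_asm(uint16_t *destptr, uint16_t newvalue)
	{
		asm volatile("sth%U0%X0 %1,%0"	/* store halfword */
			     : "=m" (*destptr)
			     : "r" (newvalue));
	}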
> Therefore, I would suggest the following replacement patch :
>
>
> Immediate Values - Powerpc Optimization NMI MCE support
>
> Use an atomic update for immediate values.
>
> - Changelog :
> Use a direct assignment instead of memcpy to be sure the update is
> atomic.
>
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
> CC: Rusty Russell <rusty@rustcorp.com.au>
> CC: Christoph Hellwig <hch@infradead.org>
> CC: Paul Mackerras <paulus@samba.org>
> ---
> arch/powerpc/kernel/Makefile | 1
> arch/powerpc/kernel/immediate.c | 70 ++++++++++++++++++++++++++++++++++++++++
> include/asm-powerpc/immediate.h | 18 ++++++++++
> 3 files changed, 89 insertions(+)
>
> Index: linux-2.6-lttng/arch/powerpc/kernel/immediate.c
> ===================================================================
> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6-lttng/arch/powerpc/kernel/immediate.c 2008-04-16 21:22:29.000000000 -0400
> @@ -0,0 +1,70 @@
> +/*
> + * Powerpc optimized immediate values enabling/disabling.
> + *
> + * Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/immediate.h>
> +#include <linux/string.h>
> +#include <linux/kprobes.h>
> +#include <asm/cacheflush.h>
> +#include <asm/page.h>
> +
> +#define LI_OPCODE_LEN 2
> +
> +/**
> + * arch_imv_update - update one immediate value
> + * @imv: pointer of type const struct __imv to update
> + * @early: early boot (1), normal (0)
> + *
> + * Update one immediate value. Must be called with imv_mutex held.
> + */
> +int arch_imv_update(const struct __imv *imv, int early)
> +{
> +#ifdef CONFIG_KPROBES
> + kprobe_opcode_t *insn;
> + /*
> + * Fail if a kprobe has been set on this instruction.
> + * (TODO: we could eventually do better and modify all the (possibly
> + * nested) kprobes for this site if kprobes had an API for this.
> + */
> + switch (imv->size) {
> + case 1: /* The uint8_t points to the 3rd byte of the
> + * instruction */
> + insn = (void *)(imv->imv - 1 - LI_OPCODE_LEN);
> + break;
> + case 2: insn = (void *)(imv->imv - LI_OPCODE_LEN);
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> + if (unlikely(!early && *insn == BREAKPOINT_INSTRUCTION)) {
> + printk(KERN_WARNING "Immediate value in conflict with kprobe. "
> + "Variable at %p, "
> + "instruction at %p, size %lu\n",
> + (void *)imv->imv,
> + (void *)imv->var, imv->size);
> + return -EBUSY;
> + }
> +#endif
> +
> + /*
> + * If the variable and the instruction have the same value, there is
> + * nothing to do.
> + */
> + switch (imv->size) {
> + case 1: if (*(uint8_t *)imv->imv == *(uint8_t *)imv->var)
> + return 0;
> + *(uint8_t *)imv->imv = *(uint8_t *)imv->var;
> + break;
> + case 2: if (*(uint16_t *)imv->imv == *(uint16_t *)imv->var)
> + return 0;
> + *(uint16_t *)imv->imv = *(uint16_t *)imv->var;
> + break;
> + default:return -EINVAL;
> + }
> + flush_icache_range(imv->imv, imv->imv + imv->size);
> + return 0;
> +}
> Index: linux-2.6-lttng/include/asm-powerpc/immediate.h
> ===================================================================
> --- linux-2.6-lttng.orig/include/asm-powerpc/immediate.h 2008-04-16 12:25:42.000000000 -0400
> +++ linux-2.6-lttng/include/asm-powerpc/immediate.h 2008-04-16 20:49:48.000000000 -0400
> @@ -12,6 +12,16 @@
>
> #include <asm/asm-compat.h>
>
> +struct __imv {
> + unsigned long var; /* Identifier variable of the immediate value */
> + unsigned long imv; /*
> + * Pointer to the memory location that holds
> + * the immediate value within the load immediate
> + * instruction.
> + */
> + unsigned char size; /* Type size. */
> +} __attribute__ ((packed));
> +
> /**
> * imv_read - read immediate variable
> * @name: immediate value name
> @@ -19,6 +29,11 @@
> * Reads the value of @name.
> * Optimized version of the immediate.
> * Do not use in __init and __exit functions. Use _imv_read() instead.
> + * Makes sure the 2 bytes update will be atomic by aligning the immediate
> + * value. Use a normal memory read for the 4 bytes immediate because there is no
> + * way to atomically update it without using a seqlock read side, which would
> + * cost more in term of total i-cache and d-cache space than a simple memory
> + * read.
> */
> #define imv_read(name) \
> ({ \
> @@ -40,6 +55,7 @@
> PPC_LONG "%c1, ((1f)-2)\n\t" \
> ".byte 2\n\t" \
> ".previous\n\t" \
> + ".align 2\n\t" \
> "li %0,0\n\t" \
> "1:\n\t" \
> : "=r" (value) \
> @@ -52,4 +68,6 @@
> value; \
> })
>
> +extern int arch_imv_update(const struct __imv *imv, int early);
> +
> #endif /* _ASM_POWERPC_IMMEDIATE_H */
> Index: linux-2.6-lttng/arch/powerpc/kernel/Makefile
> ===================================================================
> --- linux-2.6-lttng.orig/arch/powerpc/kernel/Makefile 2008-04-16 12:23:07.000000000 -0400
> +++ linux-2.6-lttng/arch/powerpc/kernel/Makefile 2008-04-16 12:25:44.000000000 -0400
> @@ -45,6 +45,7 @@ obj-$(CONFIG_HIBERNATION) += swsusp.o su
> obj64-$(CONFIG_HIBERNATION) += swsusp_asm64.o
> obj-$(CONFIG_MODULES) += module_$(CONFIG_WORD_SIZE).o
> obj-$(CONFIG_44x) += cpu_setup_44x.o
> +obj-$(CONFIG_IMMEDIATE) += immediate.o
>
> ifeq ($(CONFIG_PPC_MERGE),y)
>
>
> --
> Mathieu Desnoyers
> Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
> OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
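For context, a rough sketch (not taken from the patchset) of how the generic
imv_update_range() declared earlier in this thread could drive the
arch_imv_update() hook above; the real code must hold imv_mutex, as the
comment on arch_imv_update() notes, and also handles error returns and module
sections:

	void imv_update_range(struct __imv *begin, struct __imv *end)
	{
		struct __imv *iter;

		for (iter = begin; iter < end; iter++)
			/* 0 == not early boot, so the kprobe check applies */
			arch_imv_update(iter, 0);
	}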
end of thread, other threads:[~2008-04-19 23:40 UTC | newest]
Thread overview: 43+ messages
2008-04-16 21:34 [RFC patch 00/27] Jump-based NMI-safe immediate values and markers for sched-devel.git Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 01/27] From: Adrian Bunk <bunk@kernel.org> Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 02/27] x86 NMI-safe INT3 and Page Fault Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 03/27] Check for breakpoint in text_poke to eliminate bug_on Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 04/27] Kprobes - use a mutex to protect the instruction pages list Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 05/27] Kprobes - do not use kprobes mutex in arch code Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 06/27] Kprobes - declare kprobe_mutex static Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 07/27] Text Edit Lock - Architecture Independent Code Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 08/27] Text Edit Lock - kprobes architecture independent support Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 09/27] Add all cpus option to stop machine run Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 10/27] Immediate Values - Architecture Independent Code Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 11/27] Immediate Values - Kconfig menu in EMBEDDED Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 12/27] Immediate Values - x86 Optimization Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 13/27] Add text_poke and sync_core to powerpc Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 14/27] Immediate Values - Powerpc Optimization Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 15/27] Immediate Values - Documentation Mathieu Desnoyers
2008-04-17 9:52 ` KOSAKI Motohiro
2008-04-17 10:36 ` Adrian Bunk
2008-04-17 12:56 ` Mathieu Desnoyers
2008-04-17 12:17 ` [RFC patch 15/27] Immediate Values - Documentation (updated) Mathieu Desnoyers
2008-04-18 2:27 ` KOSAKI Motohiro
2008-04-16 21:34 ` [RFC patch 16/27] Immediate Values Support init Mathieu Desnoyers
2008-04-19 11:04 ` KOSAKI Motohiro
2008-04-19 13:24 ` Mathieu Desnoyers
2008-04-19 14:06 ` KOSAKI Motohiro
2008-04-16 21:34 ` [RFC patch 17/27] Scheduler Profiling - Use Immediate Values Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 18/27] Markers - remove extra format argument Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 19/27] Markers - define non optimized marker Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 20/27] Immediate Values - Move Kprobes x86 restore_interrupt to kdebug.h Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 21/27] Add __discard section to x86 Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 22/27] Immediate Values - x86 Optimization NMI and MCE support Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 23/27] Immediate Values - Powerpc Optimization NMI " Mathieu Desnoyers
2008-04-16 23:09 ` Paul Mackerras
2008-04-16 23:33 ` Mathieu Desnoyers
2008-04-17 0:35 ` Paul Mackerras
2008-04-17 1:24 ` Mathieu Desnoyers
2008-04-19 23:40 ` Paul E. McKenney
2008-04-16 21:34 ` [RFC patch 24/27] Immediate Values Use Arch NMI and MCE Support Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 25/27] Linux Kernel Markers - Use Immediate Values Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 26/27] Immediate Values - Jump Mathieu Desnoyers
2008-04-19 11:41 ` KOSAKI Motohiro
2008-04-19 13:25 ` Mathieu Desnoyers
2008-04-16 21:34 ` [RFC patch 27/27] Markers use imv jump Mathieu Desnoyers