LKML Archive on lore.kernel.org
* [PATCH v2] On ppc64le we HAVE_RELIABLE_STACKTRACE
@ 2018-03-05 16:49 Torsten Duwe
  2018-03-05 17:09 ` Segher Boessenkool
  2018-03-08 16:26 ` Josh Poimboeuf
  0 siblings, 2 replies; 12+ messages in thread
From: Torsten Duwe @ 2018-03-05 16:49 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Jiri Kosina, Josh Poimboeuf, linuxppc-dev, linux-kernel,
	Nicholas Piggin, live-patching

The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:

[...] There are several rules that must be adhered to in order to ensure
reliable and consistent call chain backtracing:

* Before a function calls any other function, it shall establish its
  own stack frame, whose size shall be a multiple of 16 bytes.

 – In instances where a function’s prologue creates a stack frame, the
   back-chain word of the stack frame shall be updated atomically with
   the value of the stack pointer (r1) when a back chain is implemented.
   (This must be supported as default by all ELF V2 ABI-compliant
   environments.)
[...]
 – The function shall save the link register that contains its return
   address in the LR save doubleword of its caller’s stack frame before
   calling another function.

To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE.
This patch may be unnecessarily limited to ppc64le, but OTOH the only
user of this flag so far is livepatching, which is only implemented on
PPCs with 64-LE, a.k.a. ELF ABI v2.

This change also implements save_stack_trace_tsk_reliable() for ppc64
that checks for the above conditions, where possible.
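
Read together, these rules let an unwinder walk the stack purely via the
back-chain words and LR save slots. As an illustration, here is a
hypothetical user-space model in C (a simplified sketch assuming an LP64
target; it ignores the first-frame LR caveat and all the kernel-specific
checks that the patch below performs):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical user-space model of an ELFv2 back-chain walk.
 * Offset 2 doublewords (SP + 16) is the LR save slot on ppc64,
 * mirroring the kernel's STACK_FRAME_LR_SAVE.
 */
#define LR_SAVE_IDX 2

/* Walk the back chain starting at sp, recording each frame's saved LR.
 * Returns the number of frames, or -1 when a frame violates the ABI
 * rules quoted above (alignment, monotonically rising back chain).
 */
static int walk_back_chain(uint64_t sp, uint64_t stack_top,
			   uint64_t *out, int max)
{
	int n = 0;

	while (n < max) {
		const uint64_t *frame;
		uint64_t newsp;

		if (sp & 0xF)		/* frames are 16-byte aligned */
			return -1;
		frame = (const uint64_t *)sp;
		newsp = frame[0];	/* back-chain word: caller's SP */
		out[n++] = frame[LR_SAVE_IDX];
		if (newsp == stack_top)	/* topmost frame reached */
			return n;
		if (newsp <= sp || newsp > stack_top)
			return -1;	/* broken chain */
		sp = newsp;
	}
	return n;
}
```

The real save_stack_trace_tsk_reliable() adds the kernel-text check on
each saved LR and the exact stack-end computation on top of this basic
walk.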

Signed-off-by: Torsten Duwe <duwe@suse.de>

---
v2:
 * implemented save_stack_trace_tsk_reliable(), with a bunch of sanity
   checks. The test for a kernel code pointer is much nicer now, and
   the exit condition is exact (when compared to last week's follow-up)

---
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 73ce5dd07642..9f49913e19e3 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -220,6 +220,7 @@ config PPC
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select HAVE_IRQ_TIME_ACCOUNTING
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index d534ed901538..e14c2dfd5311 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -11,8 +11,11 @@
  */
 
 #include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/module.h>
 #include <linux/sched.h>
 #include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
 #include <linux/stacktrace.h>
 #include <asm/ptrace.h>
 #include <asm/processor.h>
@@ -76,3 +79,77 @@ save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 	save_context_stack(trace, regs->gpr[1], current, 0);
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_regs);
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+int
+save_stack_trace_tsk_reliable(struct task_struct *tsk,
+				struct stack_trace *trace)
+{
+	unsigned long sp;
+	unsigned long stack_page = (unsigned long)task_stack_page(tsk);
+
+	/* The last frame (unwinding first) may not yet have saved
+	 * its LR onto the stack.
+	 */
+	int firstframe = 1;
+
+	if (tsk == current)
+		sp = current_stack_pointer();
+	else
+		sp = tsk->thread.ksp;
+
+	if (sp < stack_page + sizeof(struct thread_struct) ||
+	    sp > stack_page + THREAD_SIZE - STACK_FRAME_OVERHEAD)
+		return 1;
+
+	for (;;) {
+		unsigned long *stack = (unsigned long *) sp;
+		unsigned long newsp, ip;
+
+		/* Sanity check: the ABI requires the SP to be 16-byte aligned. */
+		if (sp & 0xF)
+			return 1;
+
+		newsp = stack[0];
+		/* Stack grows downwards; unwinder may only go up. */
+		if (newsp <= sp)
+			return 1;
+
+		if (newsp >= stack_page + THREAD_SIZE)
+			return 1; /* invalid backlink, too far up. */
+
+		/* Examine the saved LR: it must point into kernel code. */
+		ip = stack[STACK_FRAME_LR_SAVE];
+		if (!firstframe) {
+			if (!func_ptr_is_kernel_text((void *)ip)) {
+#ifdef CONFIG_MODULES
+				struct module *mod = __module_text_address(ip);
+
+				if (!mod)
+#endif
+					return 1;
+			}
+		}
+		firstframe = 0;
+
+		if (!trace->skip)
+			trace->entries[trace->nr_entries++] = ip;
+		else
+			trace->skip--;
+
+		/* SP value loaded on kernel entry, see "PACAKSAVE(r13)" in
+		 * _switch() and system_call_common()
+		 */
+		if (newsp == stack_page + THREAD_SIZE - /* SWITCH_FRAME_SIZE */
+		    (STACK_FRAME_OVERHEAD + sizeof(struct pt_regs)))
+			break;
+
+		if (trace->nr_entries >= trace->max_entries)
+			return -E2BIG;
+
+		sp = newsp;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk_reliable);
+#endif /* CONFIG_HAVE_RELIABLE_STACKTRACE */

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2] On ppc64le we HAVE_RELIABLE_STACKTRACE
  2018-03-05 16:49 [PATCH v2] On ppc64le we HAVE_RELIABLE_STACKTRACE Torsten Duwe
@ 2018-03-05 17:09 ` Segher Boessenkool
  2018-03-08 16:26 ` Josh Poimboeuf
  1 sibling, 0 replies; 12+ messages in thread
From: Segher Boessenkool @ 2018-03-05 17:09 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Michael Ellerman, Jiri Kosina, linux-kernel, Nicholas Piggin,
	Josh Poimboeuf, live-patching, linuxppc-dev

On Mon, Mar 05, 2018 at 05:49:28PM +0100, Torsten Duwe wrote:
> The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:
> 
> [...] There are several rules that must be adhered to in order to ensure
> reliable and consistent call chain backtracing:
> 
> * Before a function calls any other function, it shall establish its
>   own stack frame, whose size shall be a multiple of 16 bytes.
> 
>  – In instances where a function’s prologue creates a stack frame, the
>    back-chain word of the stack frame shall be updated atomically with
>    the value of the stack pointer (r1) when a back chain is implemented.
>    (This must be supported as default by all ELF V2 ABI-compliant
>    environments.)
> [...]
>  – The function shall save the link register that contains its return
>    address in the LR save doubleword of its caller’s stack frame before
>    calling another function.

All of this is also true for the other PowerPC ABIs, fwiw (both 32-bit
and 64-bit; the offset of the LR save slot isn't the same in all ABIs).


Segher


* Re: [PATCH v2] On ppc64le we HAVE_RELIABLE_STACKTRACE
  2018-03-05 16:49 [PATCH v2] On ppc64le we HAVE_RELIABLE_STACKTRACE Torsten Duwe
  2018-03-05 17:09 ` Segher Boessenkool
@ 2018-03-08 16:26 ` Josh Poimboeuf
  2018-03-09 16:47   ` Torsten Duwe
  1 sibling, 1 reply; 12+ messages in thread
From: Josh Poimboeuf @ 2018-03-08 16:26 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Michael Ellerman, Jiri Kosina, linuxppc-dev, linux-kernel,
	Nicholas Piggin, live-patching

On Mon, Mar 05, 2018 at 05:49:28PM +0100, Torsten Duwe wrote:
> The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:
> 
> [...] There are several rules that must be adhered to in order to ensure
> reliable and consistent call chain backtracing:
> 
> * Before a function calls any other function, it shall establish its
>   own stack frame, whose size shall be a multiple of 16 bytes.
> 
>  – In instances where a function’s prologue creates a stack frame, the
>    back-chain word of the stack frame shall be updated atomically with
>    the value of the stack pointer (r1) when a back chain is implemented.
>    (This must be supported as default by all ELF V2 ABI-compliant
>    environments.)
> [...]
>  – The function shall save the link register that contains its return
>    address in the LR save doubleword of its caller’s stack frame before
>    calling another function.
> 
> To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE.
> This patch may be unnecessarily limited to ppc64le, but OTOH the only
> user of this flag so far is livepatching, which is only implemented on
> PPCs with 64-LE, a.k.a. ELF ABI v2.
> 
> This change also implements save_stack_trace_tsk_reliable() for ppc64
> that checks for the above conditions, where possible.
> 
> Signed-off-by: Torsten Duwe <duwe@suse.de>

This doesn't seem to address some of my previous concerns:

- Bailing on interrupt/exception frames

- Function graph tracing return address conversion

- kretprobes return address conversion

-- 
Josh


* Re: [PATCH v2] On ppc64le we HAVE_RELIABLE_STACKTRACE
  2018-03-08 16:26 ` Josh Poimboeuf
@ 2018-03-09 16:47   ` Torsten Duwe
  2018-03-12 15:35     ` Josh Poimboeuf
  0 siblings, 1 reply; 12+ messages in thread
From: Torsten Duwe @ 2018-03-09 16:47 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Michael Ellerman, Jiri Kosina, linuxppc-dev, linux-kernel,
	Nicholas Piggin, live-patching

On Thu, 8 Mar 2018 10:26:16 -0600
Josh Poimboeuf <jpoimboe@redhat.com> wrote:

> This doesn't seem to address some of my previous concerns:

You're right. That discussion quickly headed towards objtool
and I forgot about this one paragraph with the remarks.

> - Bailing on interrupt/exception frames

That is a good question. My current code keeps unwinding as long
as the trace looks sane. If the exception frame has a valid code
pointer in the LR slot it will continue. Couldn't there be cases
where this is desirable? Should this be configurable? Not that
I have an idea how this situation could occur for a thread
that is current or sleeping...

Michael, Balbir: is that possible? Any idea how to reliably detect
an exception frame? My approach would be to look at the next return
address and compare it to the usual suspects (i.e. collect all 
"b ret" addresses in the EXCEPTION_COMMON macro, for BookS).

> - Function graph tracing return address conversion
> 
> - kretprobes return address conversion

You mean like the call to ftrace_graph_ret_addr() in
arch/x86/kernel/unwind_frame.c?

Forgive my directness, but I don't see why these should be handled in
arch-dependent code, other than maybe a hook, if inevitable, that calls
back into the graph tracer / kretprobes in order to get the proper
address, or simply call the trace unreliable in case it finds such a
return address.

I understand that x86 has a hard time with its multiple unwinders, but
there should be a simpler, generic solution for all other architectures.

	Torsten


* Re: [PATCH v2] On ppc64le we HAVE_RELIABLE_STACKTRACE
  2018-03-09 16:47   ` Torsten Duwe
@ 2018-03-12 15:35     ` Josh Poimboeuf
  2018-05-04 12:38       ` [PATCH v3] " Torsten Duwe
  0 siblings, 1 reply; 12+ messages in thread
From: Josh Poimboeuf @ 2018-03-12 15:35 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Michael Ellerman, Jiri Kosina, linuxppc-dev, linux-kernel,
	Nicholas Piggin, live-patching

On Fri, Mar 09, 2018 at 05:47:18PM +0100, Torsten Duwe wrote:
> On Thu, 8 Mar 2018 10:26:16 -0600
> Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> 
> > This doesn't seem to address some of my previous concerns:
> 
> You're right. That discussion quickly headed towards objtool
> and I forgot about this one paragraph with the remarks.
> 
> > - Bailing on interrupt/exception frames
> 
> That is a good question. My current code keeps unwinding as long
> as the trace looks sane. If the exception frame has a valid code
> pointer in the LR slot it will continue. Couldn't there be cases
> where this is desirable?

I thought we established in the previous discussion that this could
cause some functions to get skipped in the stack trace:

  https://lkml.kernel.org/r/20171219214652.u7qeb7fxov62ttke@treble

> Should this be configurable? Not that
> I have an idea how this situation could occur for a thread
> that is current or sleeping...

Page faults and preemption.

> Michael, Balbir: is that possible? Any idea how to reliably detect
> an exception frame? My approach would be to look at the next return
> address and compare it to the usual suspects (i.e. collect all 
> "b ret" addresses in the EXCEPTION_COMMON macro, for BookS).

It looks like show_stack() already knows how to do this:

		/*
		 * See if this is an exception frame.
		 * We look for the "regshere" marker in the current frame.
		 */
		if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE)
		    && stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {

So you could do something similar.
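
For what it's worth, that marker test is easy to model in isolation. A
hedged user-space sketch (the constants mirror the ppc64 kernel headers
as of this thread: STACK_FRAME_MARKER is 12 doublewords above the SP,
and the marker is the ASCII string "regshere"; the real code must also
validate_sp() the frame before dereferencing it):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* User-space model of the "regshere" exception-frame test.
 * Values mirror the ppc64 kernel headers (assumed, not authoritative):
 * STACK_FRAME_MARKER = 12, STACK_FRAME_REGS_MARKER = "regshere".
 */
#define STACK_FRAME_MARKER	12
#define STACK_FRAME_REGS_MARKER	0x7265677368657265ULL	/* "regshere" */

/* A reliable unwinder would declare the trace unreliable when this hits. */
static bool is_exception_frame(const uint64_t *sp)
{
	return sp[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER;
}
```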

> > - Function graph tracing return address conversion
> > 
> > - kretprobes return address conversion
> 
> You mean like the call to ftrace_graph_ret_addr() in
> arch/x86/kernel/unwind_frame.c?
> 
> Forgive my directness, but I don't see why these should be handled in
> arch-dependent code, other than maybe a hook, if inevitable, that calls
> back into the graph tracer / kretprobes in order to get the proper
> address,

I don't really follow, where exactly would you propose calling
ftrace_graph_ret_addr() from?

> or simply call the trace unreliable in case it finds such a
> return address.

If you're going to make livepatch incompatible with function graph
tracing, there needs to be a good justification for it (and we'd need to
make sure existing users are fine with it).

-- 
Josh


* [PATCH v3] On ppc64le we HAVE_RELIABLE_STACKTRACE
  2018-03-12 15:35     ` Josh Poimboeuf
@ 2018-05-04 12:38       ` Torsten Duwe
  2018-05-07 15:42         ` Josh Poimboeuf
  2018-05-10 14:06         ` [v3] On ppc64le we HAVE_RELIABLE_STACKTRACE Michael Ellerman
  0 siblings, 2 replies; 12+ messages in thread
From: Torsten Duwe @ 2018-05-04 12:38 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Michael Ellerman, Jiri Kosina, linuxppc-dev, linux-kernel,
	Nicholas Piggin, live-patching


The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:

[...] There are several rules that must be adhered to in order to ensure
reliable and consistent call chain backtracing:

* Before a function calls any other function, it shall establish its
  own stack frame, whose size shall be a multiple of 16 bytes.

 – In instances where a function’s prologue creates a stack frame, the
   back-chain word of the stack frame shall be updated atomically with
   the value of the stack pointer (r1) when a back chain is implemented.
   (This must be supported as default by all ELF V2 ABI-compliant
   environments.)
[...]
 – The function shall save the link register that contains its return
   address in the LR save doubleword of its caller’s stack frame before
   calling another function.

To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE.
This patch may be unnecessarily limited to ppc64le, but OTOH the only
user of this flag so far is livepatching, which is only implemented on
PPCs with 64-LE, a.k.a. ELF ABI v2.

Feel free to add other ppc variants, but so far only ppc64le got tested.

This change also implements save_stack_trace_tsk_reliable() for ppc64le
that checks for the above conditions, where possible.

Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Nicolai Stange <nstange@suse.de>

---
v3:
 * big bunch of fixes, credits go to Nicolai Stange:
   - get the correct return address from the graph tracer,
     should it be active.
     IMO this should be moved into to generic code, but I'll
     leave it like this for now, to get things going.
   - bail out on a kretprobe
   - also stop at an exception frame
   - accommodate the different stack layout of the idle task
   - use an even more beautiful test: __kernel_text_address()

v2:
 * implemented save_stack_trace_tsk_reliable(), with a bunch of sanity
   checks. The test for a kernel code pointer is much nicer now, and
   the exit condition is exact (when compared to last week's follow-up)

---
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c32a181a7cbb..54f1daf4f9e5 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -220,6 +220,7 @@ config PPC
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select HAVE_IRQ_TIME_ACCOUNTING
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index d534ed901538..3d62ecb2587b 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -2,7 +2,7 @@
  * Stack trace utility
  *
  * Copyright 2008 Christoph Hellwig, IBM Corp.
- *
+ * Copyright 2018 SUSE Linux GmbH
  *
  *      This program is free software; you can redistribute it and/or
  *      modify it under the terms of the GNU General Public License
@@ -11,11 +11,16 @@
  */
 
 #include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/module.h>
 #include <linux/sched.h>
 #include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
 #include <linux/stacktrace.h>
 #include <asm/ptrace.h>
 #include <asm/processor.h>
+#include <linux/ftrace.h>
+#include <asm/kprobes.h>
 
 /*
  * Save stack-backtrace addresses into a stack_trace buffer.
@@ -76,3 +81,114 @@ save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 	save_context_stack(trace, regs->gpr[1], current, 0);
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_regs);
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+int
+save_stack_trace_tsk_reliable(struct task_struct *tsk,
+				struct stack_trace *trace)
+{
+	unsigned long sp;
+	unsigned long stack_page = (unsigned long)task_stack_page(tsk);
+	unsigned long stack_end;
+	int graph_idx = 0;
+
+	/* The last frame (unwinding first) may not yet have saved
+	 * its LR onto the stack.
+	 */
+	int firstframe = 1;
+
+	if (tsk == current)
+		sp = current_stack_pointer();
+	else
+		sp = tsk->thread.ksp;
+
+	stack_end = stack_page + THREAD_SIZE;
+	if (!is_idle_task(tsk)) {
+		/*
+		 * For user tasks, this is the SP value loaded on
+		 * kernel entry, see "PACAKSAVE(r13)" in _switch() and
+		 * system_call_common()/EXCEPTION_PROLOG_COMMON().
+		 *
+		 * Likewise for non-swapper kernel threads,
+		 * this also happens to be the top of the stack
+		 * as setup by copy_thread().
+		 *
+		 * Note that stack backlinks are not properly setup by
+	 * copy_thread() and thus a forked task will have
+		 * an unreliable stack trace until it's been
+		 * _switch()'ed to for the first time.
+		 */
+		stack_end -= STACK_FRAME_OVERHEAD + sizeof(struct pt_regs);
+	} else {
+		/*
+		 * idle tasks have a custom stack layout,
+		 * c.f. cpu_idle_thread_init().
+		 */
+		stack_end -= STACK_FRAME_OVERHEAD;
+	}
+
+	if (sp < stack_page + sizeof(struct thread_struct) ||
+	    sp > stack_end - STACK_FRAME_MIN_SIZE) {
+		return 1;
+	}
+
+	for (;;) {
+		unsigned long *stack = (unsigned long *) sp;
+		unsigned long newsp, ip;
+
+		/* Sanity check: the ABI requires the SP to be 16-byte aligned. */
+		if (sp & 0xF)
+			return 1;
+
+		/* Mark stacktraces with exception frames as unreliable. */
+		if (sp <= stack_end - STACK_INT_FRAME_SIZE &&
+		    stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
+			return 1;
+		}
+
+		newsp = stack[0];
+		/* Stack grows downwards; unwinder may only go up. */
+		if (newsp <= sp)
+			return 1;
+
+		if (newsp != stack_end &&
+		    newsp > stack_end - STACK_FRAME_MIN_SIZE) {
+			return 1; /* invalid backlink, too far up. */
+		}
+
+		/* Examine the saved LR: it must point into kernel code. */
+		ip = stack[STACK_FRAME_LR_SAVE];
+		if (!firstframe && !__kernel_text_address(ip))
+			return 1;
+		firstframe = 0;
+
+		/*
+		 * FIXME: IMHO these tests do not belong in
+		 * arch-dependent code; they are generic.
+		 */
+		ip = ftrace_graph_ret_addr(tsk, &graph_idx, ip, NULL);
+
+		/*
+		 * Mark stacktraces with kretprobed functions on them
+		 * as unreliable.
+		 */
+		if (ip == (unsigned long)kretprobe_trampoline)
+			return 1;
+
+		if (!trace->skip)
+			trace->entries[trace->nr_entries++] = ip;
+		else
+			trace->skip--;
+
+		if (newsp == stack_end)
+			break;
+
+		if (trace->nr_entries >= trace->max_entries)
+			return -E2BIG;
+
+		sp = newsp;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk_reliable);
+#endif /* CONFIG_HAVE_RELIABLE_STACKTRACE */


* Re: [PATCH v3] On ppc64le we HAVE_RELIABLE_STACKTRACE
  2018-05-04 12:38       ` [PATCH v3] " Torsten Duwe
@ 2018-05-07 15:42         ` Josh Poimboeuf
  2018-05-08  8:38           ` [PATCH v3] ppc64le livepatch: implement reliable stacktrace for newer consistency models Torsten Duwe
  2018-05-10 14:06         ` [v3] On ppc64le we HAVE_RELIABLE_STACKTRACE Michael Ellerman
  1 sibling, 1 reply; 12+ messages in thread
From: Josh Poimboeuf @ 2018-05-07 15:42 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Michael Ellerman, Jiri Kosina, linuxppc-dev, linux-kernel,
	Nicholas Piggin, live-patching

On Fri, May 04, 2018 at 02:38:34PM +0200, Torsten Duwe wrote:
> 
> The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:
> 
> [...] There are several rules that must be adhered to in order to ensure
> reliable and consistent call chain backtracing:
> 
> * Before a function calls any other function, it shall establish its
>   own stack frame, whose size shall be a multiple of 16 bytes.
> 
>  – In instances where a function’s prologue creates a stack frame, the
>    back-chain word of the stack frame shall be updated atomically with
>    the value of the stack pointer (r1) when a back chain is implemented.
>    (This must be supported as default by all ELF V2 ABI-compliant
>    environments.)
> [...]
>  – The function shall save the link register that contains its return
>    address in the LR save doubleword of its caller’s stack frame before
>    calling another function.
> 
> To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE.
> This patch may be unnecessarily limited to ppc64le, but OTOH the only
> user of this flag so far is livepatching, which is only implemented on
> PPCs with 64-LE, a.k.a. ELF ABI v2.
> 
> Feel free to add other ppc variants, but so far only ppc64le got tested.
> 
> This change also implements save_stack_trace_tsk_reliable() for ppc64le
> that checks for the above conditions, where possible.
> 
> Signed-off-by: Torsten Duwe <duwe@suse.de>
> Signed-off-by: Nicolai Stange <nstange@suse.de>

The subject doesn't actively describe what the patch does, maybe change
it to something like:

  powerpc: Add support for HAVE_RELIABLE_STACKTRACE

or maybe

  powerpc: Add support for livepatch consistency model

Otherwise it looks great to me.

Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>

-- 
Josh


* Re: [PATCH v3] ppc64le livepatch: implement reliable stacktrace for newer consistency models
  2018-05-07 15:42         ` Josh Poimboeuf
@ 2018-05-08  8:38           ` Torsten Duwe
  2018-05-09  0:14             ` Josh Poimboeuf
  0 siblings, 1 reply; 12+ messages in thread
From: Torsten Duwe @ 2018-05-08  8:38 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Michael Ellerman, Jiri Kosina, linuxppc-dev, linux-kernel,
	Nicholas Piggin, live-patching

On Mon, 7 May 2018 10:42:08 -0500
Josh Poimboeuf <jpoimboe@redhat.com> wrote:

> The subject doesn't actively describe what the patch does, maybe
> change it to something like:
> 
>   powerpc: Add support for HAVE_RELIABLE_STACKTRACE
> 
> or maybe
> 
>   powerpc: Add support for livepatch consistency model

Maybe $SUBJECT? You're absolutely right, the old subject was just a
leftover of my original attempt to just set the flag, before Miroslav
corrected me. I just kept copying it without a second thought. Thanks
for noticing.

> Otherwise it looks great to me.
> 
> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
> 
Thanks.

	Torsten


* Re: [PATCH v3] ppc64le livepatch: implement reliable stacktrace for newer consistency models
  2018-05-08  8:38           ` [PATCH v3] ppc64le livepatch: implement reliable stacktrace for newer consistency models Torsten Duwe
@ 2018-05-09  0:14             ` Josh Poimboeuf
  2018-05-09  1:41               ` Michael Ellerman
  0 siblings, 1 reply; 12+ messages in thread
From: Josh Poimboeuf @ 2018-05-09  0:14 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Michael Ellerman, Jiri Kosina, linuxppc-dev, linux-kernel,
	Nicholas Piggin, live-patching

On Tue, May 08, 2018 at 10:38:32AM +0200, Torsten Duwe wrote:
> On Mon, 7 May 2018 10:42:08 -0500
> Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> 
> > The subject doesn't actively describe what the patch does, maybe
> > change it to something like:
> > 
> >   powerpc: Add support for HAVE_RELIABLE_STACKTRACE
> > 
> > or maybe
> > 
> >   powerpc: Add support for livepatch consistency model
> 
> Maybe $SUBJECT? You're absolutely right, the old subject was just a
> leftover of my original attempt to just set the flag, before Miroslav
> corrected me. I just kept on copying it without a second thought. Thanks
> for noting.

Generally we refer to it as "the consistency model".  So I would
propose a minor edit:

  ppc64le/livepatch: Implement reliable stack tracing for the consistency model

> > Otherwise it looks great to me.
> > 
> > Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
> > 
> Thanks.
> 
> 	Torsten
> 

-- 
Josh


* Re: [PATCH v3] ppc64le livepatch: implement reliable stacktrace for newer consistency models
  2018-05-09  0:14             ` Josh Poimboeuf
@ 2018-05-09  1:41               ` Michael Ellerman
  2018-05-09 10:35                 ` Torsten Duwe
  0 siblings, 1 reply; 12+ messages in thread
From: Michael Ellerman @ 2018-05-09  1:41 UTC (permalink / raw)
  To: Josh Poimboeuf, Torsten Duwe
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, Nicholas Piggin, live-patching

Josh Poimboeuf <jpoimboe@redhat.com> writes:

> On Tue, May 08, 2018 at 10:38:32AM +0200, Torsten Duwe wrote:
>> On Mon, 7 May 2018 10:42:08 -0500
>> Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>> 
>> > The subject doesn't actively describe what the patch does, maybe
>> > change it to something like:
>> > 
>> >   powerpc: Add support for HAVE_RELIABLE_STACKTRACE
>> > 
>> > or maybe
>> > 
>> >   powerpc: Add support for livepatch consistency model
>> 
>> Maybe $SUBJECT? You're absolutely right, the old subject was just a
>> leftover of my original attempt to just set the flag, before Miroslav
>> corrected me. I just kept on copying it without a second thought. Thanks
>> for noting.
>
> Generally we refer to it as "the consistency model".  So I would
> propose a minor edit:
>
>   ppc64le/livepatch: Implement reliable stack tracing for the consistency model

We use "powerpc" as the prefix.

So I've used:

  powerpc/livepatch: Implement reliable stack tracing for the consistency model


cheers


* Re: [PATCH v3] ppc64le livepatch: implement reliable stacktrace for newer consistency models
  2018-05-09  1:41               ` Michael Ellerman
@ 2018-05-09 10:35                 ` Torsten Duwe
  0 siblings, 0 replies; 12+ messages in thread
From: Torsten Duwe @ 2018-05-09 10:35 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Josh Poimboeuf, Jiri Kosina, linuxppc-dev, linux-kernel,
	Nicholas Piggin, live-patching

On Wed, May 09, 2018 at 11:41:09AM +1000, Michael Ellerman wrote:
> Josh Poimboeuf <jpoimboe@redhat.com> writes:
> 
> > Generally we refer to it as "the consistency model".
[...]
> 
> We use "powerpc" as the prefix.
> 
> So I've used:
> 
>   powerpc/livepatch: Implement reliable stack tracing for the consistency model

Perfect. Thanks!

	Torsten


* Re: [v3] On ppc64le we HAVE_RELIABLE_STACKTRACE
  2018-05-04 12:38       ` [PATCH v3] " Torsten Duwe
  2018-05-07 15:42         ` Josh Poimboeuf
@ 2018-05-10 14:06         ` Michael Ellerman
  1 sibling, 0 replies; 12+ messages in thread
From: Michael Ellerman @ 2018-05-10 14:06 UTC (permalink / raw)
  To: Torsten Duwe, Josh Poimboeuf
  Cc: linux-kernel, Nicholas Piggin, Jiri Kosina, live-patching, linuxppc-dev

On Fri, 2018-05-04 at 12:38:34 UTC, Torsten Duwe wrote:
> The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:
> 
> [...] There are several rules that must be adhered to in order to ensure
> reliable and consistent call chain backtracing:
> 
> * Before a function calls any other function, it shall establish its
>   own stack frame, whose size shall be a multiple of 16 bytes.
> 
>  – In instances where a function’s prologue creates a stack frame, the
>    back-chain word of the stack frame shall be updated atomically with
>    the value of the stack pointer (r1) when a back chain is implemented.
>    (This must be supported as default by all ELF V2 ABI-compliant
>    environments.)
> [...]
>  – The function shall save the link register that contains its return
>    address in the LR save doubleword of its caller’s stack frame before
>    calling another function.
> 
> To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE.
> This patch may be unnecessarily limited to ppc64le, but OTOH the only
> user of this flag so far is livepatching, which is only implemented on
> PPCs with 64-LE, a.k.a. ELF ABI v2.
> 
> Feel free to add other ppc variants, but so far only ppc64le got tested.
> 
> This change also implements save_stack_trace_tsk_reliable() for ppc64le
> that checks for the above conditions, where possible.
> 
> Signed-off-by: Torsten Duwe <duwe@suse.de>
> Signed-off-by: Nicolai Stange <nstange@suse.de>
> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/df78d3f6148092d33a9a24c7a9cfac

cheers

