LKML Archive on lore.kernel.org
* [PATCH 4/5] UML - Simplify helper stack handling
@ 2007-06-14 20:26 Jeff Dike
  2007-06-26 20:35 ` Andrew Morton
  2007-06-28  6:37 ` Andrew Morton
  0 siblings, 2 replies; 10+ messages in thread
From: Jeff Dike @ 2007-06-14 20:26 UTC (permalink / raw)
  To: Andrew Morton; +Cc: LKML, uml-devel

run_helper and run_helper_thread had arguments which were the same in
all callers.  run_helper's stack_out was always NULL and
run_helper_thread's stack_order was always 0.  These are now gone, and
the constants folded into the code.

Also fixed leaks of the helper stack in the AIO and SIGIO code.

Signed-off-by: Jeff Dike <jdike@linux.intel.com>
--
 arch/um/drivers/chan_user.c              |    2 +-
 arch/um/drivers/harddog_user.c           |    2 +-
 arch/um/drivers/net_user.c               |    2 +-
 arch/um/drivers/port_user.c              |    2 +-
 arch/um/drivers/slip_user.c              |    2 +-
 arch/um/drivers/slirp_user.c             |    2 +-
 arch/um/drivers/xterm.c                  |    2 +-
 arch/um/include/os.h                     |    6 ++----
 arch/um/os-Linux/aio.c                   |   11 ++++++-----
 arch/um/os-Linux/drivers/ethertap_user.c |    2 +-
 arch/um/os-Linux/drivers/tuntap_user.c   |    2 +-
 arch/um/os-Linux/helper.c                |   19 +++++++------------
 arch/um/os-Linux/sigio.c                 |   19 ++++++++++++-------
 13 files changed, 36 insertions(+), 37 deletions(-)

Index: linux-2.6.21-mm/arch/um/drivers/harddog_user.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/drivers/harddog_user.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/drivers/harddog_user.c	2007-06-13 18:44:41.000000000 -0400
@@ -68,7 +68,7 @@ int start_watchdog(int *in_fd_ret, int *
 		args = pid_args;
 	}
 
-	pid = run_helper(pre_exec, &data, args, NULL);
+	pid = run_helper(pre_exec, &data, args);
 
 	os_close_file(out_fds[0]);
 	os_close_file(in_fds[1]);
Index: linux-2.6.21-mm/arch/um/drivers/net_user.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/drivers/net_user.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/drivers/net_user.c	2007-06-14 11:20:36.000000000 -0400
@@ -187,7 +187,7 @@ static int change_tramp(char **argv, cha
 	}
 	pe_data.close_me = fds[0];
 	pe_data.stdout = fds[1];
-	pid = run_helper(change_pre_exec, &pe_data, argv, NULL);
+	pid = run_helper(change_pre_exec, &pe_data, argv);
 
 	if (pid > 0)	/* Avoid hang as we won't get data in failure case. */
 		read_output(fds[0], output, output_len);
Index: linux-2.6.21-mm/arch/um/drivers/port_user.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/drivers/port_user.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/drivers/port_user.c	2007-06-14 11:20:36.000000000 -0400
@@ -188,7 +188,7 @@ int port_connection(int fd, int *socket,
 		{ .sock_fd  		= new,
 		  .pipe_fd 		= socket[1] });
 
-	err = run_helper(port_pre_exec, &data, argv, NULL);
+	err = run_helper(port_pre_exec, &data, argv);
 	if(err < 0)
 		goto out_shutdown;
 
Index: linux-2.6.21-mm/arch/um/drivers/slip_user.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/drivers/slip_user.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/drivers/slip_user.c	2007-06-14 11:20:36.000000000 -0400
@@ -85,7 +85,7 @@ static int slip_tramp(char **argv, int f
 	pe_data.stdin = fd;
 	pe_data.stdout = fds[1];
 	pe_data.close_me = fds[0];
-	err = run_helper(slip_pre_exec, &pe_data, argv, NULL);
+	err = run_helper(slip_pre_exec, &pe_data, argv);
 	if(err < 0)
 		goto out_close;
 	pid = err;
Index: linux-2.6.21-mm/arch/um/drivers/slirp_user.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/drivers/slirp_user.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/drivers/slirp_user.c	2007-06-13 18:44:41.000000000 -0400
@@ -42,7 +42,7 @@ static int slirp_tramp(char **argv, int 
 
 	pe_data.stdin = fd;
 	pe_data.stdout = fd;
-	pid = run_helper(slirp_pre_exec, &pe_data, argv, NULL);
+	pid = run_helper(slirp_pre_exec, &pe_data, argv);
 
 	return(pid);
 }
Index: linux-2.6.21-mm/arch/um/drivers/xterm.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/drivers/xterm.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/drivers/xterm.c	2007-06-13 18:44:41.000000000 -0400
@@ -132,7 +132,7 @@ static int xterm_open(int input, int out
 	}
 
 	sprintf(title, data->title, data->device);
-	pid = run_helper(NULL, NULL, argv, NULL);
+	pid = run_helper(NULL, NULL, argv);
 	if (pid < 0) {
 		err = pid;
 		printk(UM_KERN_ERR "xterm_open : run_helper failed, "
Index: linux-2.6.21-mm/arch/um/include/os.h
===================================================================
--- linux-2.6.21-mm.orig/arch/um/include/os.h	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/include/os.h	2007-06-13 18:44:41.000000000 -0400
@@ -239,11 +239,9 @@ extern unsigned long __do_user_copy(void
 /* execvp.c */
 extern int execvp_noalloc(char *buf, const char *file, char *const argv[]);
 /* helper.c */
-extern int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv,
-		      unsigned long *stack_out);
+extern int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv);
 extern int run_helper_thread(int (*proc)(void *), void *arg,
-			     unsigned int flags, unsigned long *stack_out,
-			     int stack_order);
+			     unsigned int flags, unsigned long *stack_out);
 extern int helper_wait(int pid);
 
 
Index: linux-2.6.21-mm/arch/um/os-Linux/aio.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/os-Linux/aio.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/os-Linux/aio.c	2007-06-13 18:44:41.000000000 -0400
@@ -177,6 +177,7 @@ static int do_not_aio(struct aio_thread_
 static int aio_req_fd_r = -1;
 static int aio_req_fd_w = -1;
 static int aio_pid = -1;
+static unsigned long aio_stack;
 
 static int not_aio_thread(void *arg)
 {
@@ -212,7 +213,6 @@ static int not_aio_thread(void *arg)
 
 static int init_aio_24(void)
 {
-	unsigned long stack;
 	int fds[2], err;
 
 	err = os_pipe(fds, 1, 1);
@@ -227,7 +227,7 @@ static int init_aio_24(void)
 		goto out_close_pipe;
 
 	err = run_helper_thread(not_aio_thread, NULL,
-				CLONE_FILES | CLONE_VM | SIGCHLD, &stack, 0);
+				CLONE_FILES | CLONE_VM | SIGCHLD, &aio_stack);
 	if(err < 0)
 		goto out_close_pipe;
 
@@ -252,7 +252,6 @@ out:
 #define DEFAULT_24_AIO 0
 static int init_aio_26(void)
 {
-	unsigned long stack;
 	int err;
 
 	if(io_setup(256, &ctx)){
@@ -263,7 +262,7 @@ static int init_aio_26(void)
 	}
 
 	err = run_helper_thread(aio_thread, NULL,
-				CLONE_FILES | CLONE_VM | SIGCHLD, &stack, 0);
+				CLONE_FILES | CLONE_VM | SIGCHLD, &aio_stack);
 	if(err < 0)
 		return err;
 
@@ -365,8 +364,10 @@ __initcall(init_aio);
 
 static void exit_aio(void)
 {
-	if(aio_pid != -1)
+	if (aio_pid != -1) {
 		os_kill_process(aio_pid, 1);
+		free_stack(aio_stack, 0);
+	}
 }
 
 __uml_exitcall(exit_aio);
Index: linux-2.6.21-mm/arch/um/os-Linux/drivers/ethertap_user.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/os-Linux/drivers/ethertap_user.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/os-Linux/drivers/ethertap_user.c	2007-06-14 11:20:36.000000000 -0400
@@ -117,7 +117,7 @@ static int etap_tramp(char *dev, char *g
 	pe_data.control_remote = control_remote;
 	pe_data.control_me = control_me;
 	pe_data.data_me = data_me;
-	pid = run_helper(etap_pre_exec, &pe_data, args, NULL);
+	pid = run_helper(etap_pre_exec, &pe_data, args);
 
 	if(pid < 0)
 		err = pid;
Index: linux-2.6.21-mm/arch/um/os-Linux/drivers/tuntap_user.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/os-Linux/drivers/tuntap_user.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/os-Linux/drivers/tuntap_user.c	2007-06-13 18:44:41.000000000 -0400
@@ -83,7 +83,7 @@ static int tuntap_open_tramp(char *gate,
 	data.stdout = remote;
 	data.close_me = me;
 
-	pid = run_helper(tuntap_pre_exec, &data, argv, NULL);
+	pid = run_helper(tuntap_pre_exec, &data, argv);
 
 	if(pid < 0)
 		return -pid;
Index: linux-2.6.21-mm/arch/um/os-Linux/helper.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/os-Linux/helper.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/os-Linux/helper.c	2007-06-14 11:20:36.000000000 -0400
@@ -44,17 +44,13 @@ static int helper_child(void *arg)
 /* Returns either the pid of the child process we run or -E* on failure.
  * XXX The alloc_stack here breaks if this is called in the tracing thread, so
  * we need to receive a preallocated stack (a local buffer is ok). */
-int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv,
-	       unsigned long *stack_out)
+int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv)
 {
 	struct helper_data data;
 	unsigned long stack, sp;
 	int pid, fds[2], ret, n;
 
-	if ((stack_out != NULL) && (*stack_out != 0))
-		stack = *stack_out;
-	else
-		stack = alloc_stack(0, __cant_sleep());
+	stack = alloc_stack(0, __cant_sleep());
 	if (stack == 0)
 		return -ENOMEM;
 
@@ -113,22 +109,21 @@ out_close:
 		close(fds[1]);
 	close(fds[0]);
 out_free:
-	if ((stack_out == NULL) || (*stack_out == 0))
-		free_stack(stack, 0);
+	free_stack(stack, 0);
 	return ret;
 }
 
 int run_helper_thread(int (*proc)(void *), void *arg, unsigned int flags,
-		      unsigned long *stack_out, int stack_order)
+		      unsigned long *stack_out)
 {
 	unsigned long stack, sp;
 	int pid, status, err;
 
-	stack = alloc_stack(stack_order, __cant_sleep());
+	stack = alloc_stack(0, __cant_sleep());
 	if (stack == 0)
 		return -ENOMEM;
 
-	sp = stack + (UM_KERN_PAGE_SIZE << stack_order) - sizeof(void *);
+	sp = stack + UM_KERN_PAGE_SIZE - sizeof(void *);
 	pid = clone(proc, (void *) sp, flags | SIGCHLD, arg);
 	if (pid < 0) {
 		err = -errno;
@@ -147,7 +142,7 @@ int run_helper_thread(int (*proc)(void *
 		if (!WIFEXITED(status) || (WEXITSTATUS(status) != 0))
 			printk("run_helper_thread - thread returned status "
 			       "0x%x\n", status);
-		free_stack(stack, stack_order);
+		free_stack(stack, 0);
 	} else
 		*stack_out = stack;
 	return pid;
Index: linux-2.6.21-mm/arch/um/os-Linux/sigio.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/os-Linux/sigio.c	2007-06-13 17:13:42.000000000 -0400
+++ linux-2.6.21-mm/arch/um/os-Linux/sigio.c	2007-06-14 11:20:36.000000000 -0400
@@ -26,6 +26,7 @@
  * exitcall.
  */
 static int write_sigio_pid = -1;
+static unsigned long write_sigio_stack;
 
 /* These arrays are initialized before the sigio thread is started, and
  * the descriptors closed after it is killed.  So, it can't see them change.
@@ -144,8 +145,10 @@ static void update_thread(void)
 	return;
  fail:
 	/* Critical section start */
-	if(write_sigio_pid != -1)
+	if (write_sigio_pid != -1) {
 		os_kill_process(write_sigio_pid, 1);
+		free_stack(write_sigio_stack, 0);
+	}
 	write_sigio_pid = -1;
 	close(sigio_private[0]);
 	close(sigio_private[1]);
@@ -243,7 +246,6 @@ static struct pollfd *setup_initial_poll
 
 static void write_sigio_workaround(void)
 {
-	unsigned long stack;
 	struct pollfd *p;
 	int err;
 	int l_write_sigio_fds[2];
@@ -293,7 +295,8 @@ static void write_sigio_workaround(void)
 	memcpy(sigio_private, l_sigio_private, sizeof(l_sigio_private));
 
 	write_sigio_pid = run_helper_thread(write_sigio_thread, NULL,
-					    CLONE_FILES | CLONE_VM, &stack, 0);
+					    CLONE_FILES | CLONE_VM,
+					    &write_sigio_stack);
 
 	if (write_sigio_pid < 0)
 		goto out_clear;
@@ -356,10 +359,12 @@ out:
 
 static void sigio_cleanup(void)
 {
-	if(write_sigio_pid != -1){
-		os_kill_process(write_sigio_pid, 1);
-		write_sigio_pid = -1;
-	}
+	if (write_sigio_pid == -1)
+		return;
+
+	os_kill_process(write_sigio_pid, 1);
+	free_stack(write_sigio_stack, 0);
+	write_sigio_pid = -1;
 }
 
 __uml_exitcall(sigio_cleanup);
Index: linux-2.6.21-mm/arch/um/drivers/chan_user.c
===================================================================
--- linux-2.6.21-mm.orig/arch/um/drivers/chan_user.c	2007-06-13 18:44:36.000000000 -0400
+++ linux-2.6.21-mm/arch/um/drivers/chan_user.c	2007-06-13 18:44:41.000000000 -0400
@@ -161,7 +161,7 @@ static int winch_tramp(int fd, struct tt
 	 * problem with /dev/net/tun, which if held open by this
 	 * thread, prevents the TUN/TAP device from being reused.
 	 */
-	err = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out, 0);
+	err = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out);
 	if(err < 0){
 		printk("fork of winch_thread failed - errno = %d\n", -err);
 		goto out_close;


* Re: [PATCH 4/5] UML - Simplify helper stack handling
  2007-06-14 20:26 [PATCH 4/5] UML - Simplify helper stack handling Jeff Dike
@ 2007-06-26 20:35 ` Andrew Morton
  2007-06-26 21:53   ` [uml-devel] " Jeff Dike
  2007-06-28  6:37 ` Andrew Morton
  1 sibling, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2007-06-26 20:35 UTC (permalink / raw)
  To: Jeff Dike; +Cc: LKML, uml-devel

On Thu, 14 Jun 2007 16:26:55 -0400
Jeff Dike <jdike@addtoit.com> wrote:

> --- linux-2.6.21-mm.orig/arch/um/drivers/xterm.c	2007-06-13 17:13:42.000000000 -0400
> +++ linux-2.6.21-mm/arch/um/drivers/xterm.c	2007-06-13 18:44:41.000000000 -0400
> @@ -132,7 +132,7 @@ static int xterm_open(int input, int out
>  	}
>  
>  	sprintf(title, data->title, data->device);
> -	pid = run_helper(NULL, NULL, argv, NULL);
> +	pid = run_helper(NULL, NULL, argv);
>  	if (pid < 0) {
>  		err = pid;
>  		printk(UM_KERN_ERR "xterm_open : run_helper failed, "

Something's gone wrong here.  My copy of this file has

        pid = run_helper(NULL, NULL, argv, &stack);



* Re: [uml-devel] [PATCH 4/5] UML - Simplify helper stack handling
  2007-06-26 20:35 ` Andrew Morton
@ 2007-06-26 21:53   ` Jeff Dike
  2007-06-26 22:07     ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: Jeff Dike @ 2007-06-26 21:53 UTC (permalink / raw)
  To: Andrew Morton; +Cc: LKML, uml-devel

On Tue, Jun 26, 2007 at 01:35:50PM -0700, Andrew Morton wrote:
> >  	sprintf(title, data->title, data->device);
> > -	pid = run_helper(NULL, NULL, argv, NULL);
> > +	pid = run_helper(NULL, NULL, argv);
> >  	if (pid < 0) {
> >  		err = pid;
> >  		printk(UM_KERN_ERR "xterm_open : run_helper failed, "
> 
> Something's gone wrong here.  My copy of this file has
> 
>         pid = run_helper(NULL, NULL, argv, &stack);

Looks like you're applying patches out of order.  This is from the
14th, while a patch from the 13th ("UML - xterm driver tidying") has
-       pid = run_helper(NULL, NULL, argv, &stack);
	...
+       pid = run_helper(NULL, NULL, argv, NULL);

If you don't want to fiddle with this, just drop it and I'll rediff
against the next -mm and resend.

				Jeff

-- 
Work email - jdike at linux dot intel dot com


* Re: [uml-devel] [PATCH 4/5] UML - Simplify helper stack handling
  2007-06-26 21:53   ` [uml-devel] " Jeff Dike
@ 2007-06-26 22:07     ` Andrew Morton
  0 siblings, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2007-06-26 22:07 UTC (permalink / raw)
  To: Jeff Dike; +Cc: LKML, uml-devel

On Tue, 26 Jun 2007 17:53:28 -0400
Jeff Dike <jdike@addtoit.com> wrote:

> On Tue, Jun 26, 2007 at 01:35:50PM -0700, Andrew Morton wrote:
> > >  	sprintf(title, data->title, data->device);
> > > -	pid = run_helper(NULL, NULL, argv, NULL);
> > > +	pid = run_helper(NULL, NULL, argv);
> > >  	if (pid < 0) {
> > >  		err = pid;
> > >  		printk(UM_KERN_ERR "xterm_open : run_helper failed, "
> > 
> > Something's gone wrong here.  My copy of this file has
> > 
> >         pid = run_helper(NULL, NULL, argv, &stack);
> 
> Looks like you're applying patches out of order.

Yes, I am.  After a few days off and a few more days being lazy I have
600-odd patches (and followups thereto) to process.

- If I do them in-order, I merge stuff which is out of date or wrong

- If I do them in reverse-order, I can miss dependencies.

On balance, reverse-order seems to be better.

>  This is from the
> 14th, while a patch from the 13th ("UML - xterm driver tidying") has
> -       pid = run_helper(NULL, NULL, argv, &stack);
> 	...
> +       pid = run_helper(NULL, NULL, argv, NULL);
> 
> If you don't want to fiddle with this, just drop it and I'll rediff
> against the next -mm and resend.

I sorted it out, thanks.


* Re: [PATCH 4/5] UML - Simplify helper stack handling
  2007-06-14 20:26 [PATCH 4/5] UML - Simplify helper stack handling Jeff Dike
  2007-06-26 20:35 ` Andrew Morton
@ 2007-06-28  6:37 ` Andrew Morton
  2007-07-03 15:28   ` [uml-devel] " Blaisorblade
  2007-08-05 20:41   ` Luca Tettamanti
  1 sibling, 2 replies; 10+ messages in thread
From: Andrew Morton @ 2007-06-28  6:37 UTC (permalink / raw)
  To: Jeff Dike; +Cc: LKML, uml-devel


So I'm running the generic version of this on i386 with 8k stacks (below),
with a quick LTP run.

Holy cow, either we use a _lot_ of stack or these numbers are off:

vmm:/home/akpm> dmesg -s 1000000|grep 'bytes left' 
khelper used greatest stack depth: 7176 bytes left
khelper used greatest stack depth: 7064 bytes left
khelper used greatest stack depth: 6840 bytes left
khelper used greatest stack depth: 6812 bytes left
hostname used greatest stack depth: 6636 bytes left
uname used greatest stack depth: 6592 bytes left
uname used greatest stack depth: 6284 bytes left
hotplug used greatest stack depth: 5568 bytes left
rpc.nfsd used greatest stack depth: 5136 bytes left
chown02 used greatest stack depth: 4956 bytes left
fchown01 used greatest stack depth: 4892 bytes left

That's the sum of process stack and interrupt stack, but I doubt if this
little box is using much interrupt stack space.

No wonder people are still getting stack overflows with 4k stacks...





From: Jeff Dike <jdike@addtoit.com>

Add generic exit-time stack-depth checking to CONFIG_DEBUG_STACK_USAGE.

This also adds UML support.

Tested on UML and i386.

[akpm@linux-foundation.org: cleanups, speedups, tweaks]
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 arch/um/Kconfig.debug        |    9 +++++++++
 arch/um/defconfig            |    1 +
 include/asm-um/thread_info.h |    9 +++++++++
 kernel/exit.c                |   29 +++++++++++++++++++++++++++++
 4 files changed, 48 insertions(+)

diff -puN arch/um/Kconfig.debug~add-generic-exit-time-stack-depth-checking-to-config_debug_stack_usage arch/um/Kconfig.debug
--- a/arch/um/Kconfig.debug~add-generic-exit-time-stack-depth-checking-to-config_debug_stack_usage
+++ a/arch/um/Kconfig.debug
@@ -47,4 +47,13 @@ config GCOV
         If you're involved in UML kernel development and want to use gcov,
         say Y.  If you're unsure, say N.
 
+config DEBUG_STACK_USAGE
+	bool "Stack utilization instrumentation"
+	default N
+	help
+	  Track the maximum kernel stack usage - this will look at each
+	  kernel stack at process exit and log it if it's the deepest
+	  stack seen so far.
+
+	  This option will slow down process creation and destruction somewhat.
 endmenu
diff -puN arch/um/defconfig~add-generic-exit-time-stack-depth-checking-to-config_debug_stack_usage arch/um/defconfig
--- a/arch/um/defconfig~add-generic-exit-time-stack-depth-checking-to-config_debug_stack_usage
+++ a/arch/um/defconfig
@@ -527,3 +527,4 @@ CONFIG_FORCED_INLINING=y
 # CONFIG_RCU_TORTURE_TEST is not set
 # CONFIG_GPROF is not set
 # CONFIG_GCOV is not set
+# CONFIG_DEBUG_STACK_USAGE is not set
diff -puN include/asm-um/thread_info.h~add-generic-exit-time-stack-depth-checking-to-config_debug_stack_usage include/asm-um/thread_info.h
--- a/include/asm-um/thread_info.h~add-generic-exit-time-stack-depth-checking-to-config_debug_stack_usage
+++ a/include/asm-um/thread_info.h
@@ -52,10 +52,19 @@ static inline struct thread_info *curren
 	return ti;
 }
 
+#ifdef CONFIG_DEBUG_STACK_USAGE
+
+#define alloc_thread_info(tsk) \
+	((struct thread_info *) __get_free_pages(GFP_KERNEL | __GFP_ZERO, \
+						 CONFIG_KERNEL_STACK_ORDER))
+#else
+
 /* thread information allocation */
 #define alloc_thread_info(tsk) \
 	((struct thread_info *) __get_free_pages(GFP_KERNEL, \
 						 CONFIG_KERNEL_STACK_ORDER))
+#endif
+
 #define free_thread_info(ti) \
 	free_pages((unsigned long)(ti),CONFIG_KERNEL_STACK_ORDER)
 
diff -puN kernel/exit.c~add-generic-exit-time-stack-depth-checking-to-config_debug_stack_usage kernel/exit.c
--- a/kernel/exit.c~add-generic-exit-time-stack-depth-checking-to-config_debug_stack_usage
+++ a/kernel/exit.c
@@ -868,6 +868,34 @@ static void exit_notify(struct task_stru
 		release_task(tsk);
 }
 
+#ifdef CONFIG_DEBUG_STACK_USAGE
+static void check_stack_usage(void)
+{
+	static DEFINE_SPINLOCK(low_water_lock);
+	static int lowest_to_date = THREAD_SIZE;
+	unsigned long *n = end_of_stack(current);
+	unsigned long free;
+
+	while (*n == 0)
+		n++;
+	free = (unsigned long)n - (unsigned long)end_of_stack(current);
+
+	if (free >= lowest_to_date)
+		return;
+
+	spin_lock(&low_water_lock);
+	if (free < lowest_to_date) {
+		printk(KERN_WARNING "%s used greatest stack depth: %lu bytes "
+				"left\n",
+				current->comm, free);
+		lowest_to_date = free;
+	}
+	spin_unlock(&low_water_lock);
+}
+#else
+static inline void check_stack_usage(void) {}
+#endif
+
 fastcall NORET_TYPE void do_exit(long code)
 {
 	struct task_struct *tsk = current;
@@ -959,6 +987,7 @@ fastcall NORET_TYPE void do_exit(long co
 	exit_sem(tsk);
 	__exit_files(tsk);
 	__exit_fs(tsk);
+	check_stack_usage();
 	exit_thread();
 	cpuset_exit(tsk);
 	exit_keys(tsk);
_



* Re: [uml-devel] [PATCH 4/5] UML - Simplify helper stack handling
  2007-06-28  6:37 ` Andrew Morton
@ 2007-07-03 15:28   ` Blaisorblade
  2007-07-03 15:58     ` Andrew Morton
  2007-07-03 17:26     ` Jeff Dike
  2007-08-05 20:41   ` Luca Tettamanti
  1 sibling, 2 replies; 10+ messages in thread
From: Blaisorblade @ 2007-07-03 15:28 UTC (permalink / raw)
  To: user-mode-linux-devel; +Cc: Andrew Morton, Jeff Dike, LKML


On Thursday 28 June 2007, Andrew Morton wrote:
> So I'm running the generic version of this on i386 with 8k stacks (below),
> with a quick LTP run.
>
> Holy cow, either we use a _lot_ of stack or these numbers are off:
>
> vmm:/home/akpm> dmesg -s 1000000|grep 'bytes left'
> khelper used greatest stack depth: 7176 bytes left
> khelper used greatest stack depth: 7064 bytes left
> khelper used greatest stack depth: 6840 bytes left
> khelper used greatest stack depth: 6812 bytes left
> hostname used greatest stack depth: 6636 bytes left
> uname used greatest stack depth: 6592 bytes left
> uname used greatest stack depth: 6284 bytes left
> hotplug used greatest stack depth: 5568 bytes left
> rpc.nfsd used greatest stack depth: 5136 bytes left
> chown02 used greatest stack depth: 4956 bytes left
> fchown01 used greatest stack depth: 4892 bytes left

> That's the sum of process stack and interrupt stack, but I doubt if this
> little box is using much interrupt stack space.
>
> No wonder people are still getting stack overflows with 4k stacks...

First, those numbers claim to be _unused_ stack space.

Well, UML tends to use more stack space than the rest of the kernel.
Besides having a bit more layering (even if less than in the past), it
must use libc's functions too, and they're not written to be executed on
an 8k stack.

We've reimplemented libc's printf() in terms of kernel sprintf() because it 
used 32K of stack.
-- 
Inform me of my mistakes, so I can add them to my list!
Paolo Giarrusso, aka Blaisorblade
http://www.user-mode-linux.org/~blaisorblade



* Re: [uml-devel] [PATCH 4/5] UML - Simplify helper stack handling
  2007-07-03 15:28   ` [uml-devel] " Blaisorblade
@ 2007-07-03 15:58     ` Andrew Morton
  2007-07-03 17:26     ` Jeff Dike
  1 sibling, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2007-07-03 15:58 UTC (permalink / raw)
  To: Blaisorblade; +Cc: user-mode-linux-devel, Jeff Dike, LKML

On Tue, 3 Jul 2007 17:28:30 +0200 Blaisorblade <blaisorblade@yahoo.it> wrote:

> On Thursday 28 June 2007, Andrew Morton wrote:
> > So I'm running the generic version of this on i386 with 8k stacks (below),
> > with a quick LTP run.
> >
> > Holy cow, either we use a _lot_ of stack or these numbers are off:
> >
> > vmm:/home/akpm> dmesg -s 1000000|grep 'bytes left'
> > khelper used greatest stack depth: 7176 bytes left
> > khelper used greatest stack depth: 7064 bytes left
> > khelper used greatest stack depth: 6840 bytes left
> > khelper used greatest stack depth: 6812 bytes left
> > hostname used greatest stack depth: 6636 bytes left
> > uname used greatest stack depth: 6592 bytes left
> > uname used greatest stack depth: 6284 bytes left
> > hotplug used greatest stack depth: 5568 bytes left
> > rpc.nfsd used greatest stack depth: 5136 bytes left
> > chown02 used greatest stack depth: 4956 bytes left
> > fchown01 used greatest stack depth: 4892 bytes left
> 
> > That's the sum of process stack and interrupt stack, but I doubt if this
> > little box is using much interrupt stack space.
> >
> > No wonder people are still getting stack overflows with 4k stacks...
> 
> First, those numbers claim to be _unused_ stack space.

Yep.  So fchown01 used ~3200 bytes of stack.  Problem.

> Well, UML tends to use more stack space than the rest of the kernel.

That was a plain old i386 kernel.



* Re: [uml-devel] [PATCH 4/5] UML - Simplify helper stack handling
  2007-07-03 15:28   ` [uml-devel] " Blaisorblade
  2007-07-03 15:58     ` Andrew Morton
@ 2007-07-03 17:26     ` Jeff Dike
  1 sibling, 0 replies; 10+ messages in thread
From: Jeff Dike @ 2007-07-03 17:26 UTC (permalink / raw)
  To: Blaisorblade; +Cc: user-mode-linux-devel, Andrew Morton, LKML

On Tue, Jul 03, 2007 at 05:28:30PM +0200, Blaisorblade wrote:
> > fchown01 used greatest stack depth: 4892 bytes left
> 
> > That's the sum of process stack and interrupt stack, but I doubt if this
> > little box is using much interrupt stack space.
> >
> > No wonder people are still getting stack overflows with 4k stacks...
> 
> First, those numbers claim to be _unused_ stack space.

But on an 8K stack.  If you pretend to be on a 4K stack, and take 4K
away from that, those numbers are 100s of bytes away from eating the
stack.

> Well, UML tends to use more stack space than the rest of the
> kernel. Besides having a bit more layering (even if less than in
> the past), it must use libc's functions too, and they're not
> written to be executed on an 8k stack.

We don't use very much of libc on kernel stacks.  Also, various things
have been done to reduce stack usage.  We no longer initialize kernel
stacks with a signal frame on them.  We now have IRQ stacks.  I've
also done some amount of general stack usage reduction.  I haven't
seen anything come very close to running out of stack.

				Jeff

-- 
Work email - jdike at linux dot intel dot com


* Re: [PATCH 4/5] UML - Simplify helper stack handling
  2007-06-28  6:37 ` Andrew Morton
  2007-07-03 15:28   ` [uml-devel] " Blaisorblade
@ 2007-08-05 20:41   ` Luca Tettamanti
  2007-08-05 20:54     ` Andrew Morton
  1 sibling, 1 reply; 10+ messages in thread
From: Luca Tettamanti @ 2007-08-05 20:41 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Jeff Dike, LKML

On Wed, Jun 27, 2007 at 11:37:01PM -0700, Andrew Morton wrote:
> 
> So I'm running the generic version of this on i386 with 8k stacks (below),
> with a quick LTP run.
> 
> Holy cow, either we use a _lot_ of stack or these numbers are off:
> 
> vmm:/home/akpm> dmesg -s 1000000|grep 'bytes left' 
> khelper used greatest stack depth: 7176 bytes left
> khelper used greatest stack depth: 7064 bytes left
> khelper used greatest stack depth: 6840 bytes left
> khelper used greatest stack depth: 6812 bytes left
> hostname used greatest stack depth: 6636 bytes left
> uname used greatest stack depth: 6592 bytes left
> uname used greatest stack depth: 6284 bytes left
> hotplug used greatest stack depth: 5568 bytes left
> rpc.nfsd used greatest stack depth: 5136 bytes left
> chown02 used greatest stack depth: 4956 bytes left
> fchown01 used greatest stack depth: 4892 bytes left
> 
> That's the sum of process stack and interrupt stack, but I doubt if this
> little box is using much interrupt stack space.
> 
> No wonder people are still getting stack overflows with 4k stacks...

Hi Andrew,
I was a bit worried about stack usage on my setup and google found your
mail :P

FYI:

khelper used greatest stack depth: 3228 bytes left
khelper used greatest stack depth: 3124 bytes left
busybox used greatest stack depth: 2808 bytes left
modprobe used greatest stack depth: 2744 bytes left
busybox used greatest stack depth: 2644 bytes left
modprobe used greatest stack depth: 1836 bytes left
modprobe used greatest stack depth: 1176 bytes left
java used greatest stack depth: 932 bytes left
java used greatest stack depth: 540 bytes left

I'm running git-current, with 4KiB stacks; filesystems are ext3 and XFS
on LVM (on libata devices).
Does it make sense to raise STACK_WARN to get a stack trace in do_IRQ?
Or is 540 bytes still "safe" taking into account the separate IRQ stack?

Luca
-- 
42


* Re: [PATCH 4/5] UML - Simplify helper stack handling
  2007-08-05 20:41   ` Luca Tettamanti
@ 2007-08-05 20:54     ` Andrew Morton
  0 siblings, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2007-08-05 20:54 UTC (permalink / raw)
  To: Luca Tettamanti; +Cc: Jeff Dike, LKML

On Sun, 5 Aug 2007 22:41:14 +0200 Luca Tettamanti <kronos.it@gmail.com> wrote:

> On Wed, Jun 27, 2007 at 11:37:01PM -0700, Andrew Morton wrote:
> > 
> > So I'm running the generic version of this on i386 with 8k stacks (below),
> > with a quick LTP run.
> > 
> > Holy cow, either we use a _lot_ of stack or these numbers are off:
> > 
> > vmm:/home/akpm> dmesg -s 1000000|grep 'bytes left' 
> > khelper used greatest stack depth: 7176 bytes left
> > khelper used greatest stack depth: 7064 bytes left
> > khelper used greatest stack depth: 6840 bytes left
> > khelper used greatest stack depth: 6812 bytes left
> > hostname used greatest stack depth: 6636 bytes left
> > uname used greatest stack depth: 6592 bytes left
> > uname used greatest stack depth: 6284 bytes left
> > hotplug used greatest stack depth: 5568 bytes left
> > rpc.nfsd used greatest stack depth: 5136 bytes left
> > chown02 used greatest stack depth: 4956 bytes left
> > fchown01 used greatest stack depth: 4892 bytes left
> > 
> > That's the sum of process stack and interrupt stack, but I doubt if this
> > little box is using much interrupt stack space.
> > 
> > No wonder people are still getting stack overflows with 4k stacks...
> 
> Hi Andrew,
> I was a bit worried about stack usage on my setup and google found your
> mail :P
> 
> FYI:
> 
> khelper used greatest stack depth: 3228 bytes left
> khelper used greatest stack depth: 3124 bytes left
> busybox used greatest stack depth: 2808 bytes left
> modprobe used greatest stack depth: 2744 bytes left
> busybox used greatest stack depth: 2644 bytes left
> modprobe used greatest stack depth: 1836 bytes left
> modprobe used greatest stack depth: 1176 bytes left
> java used greatest stack depth: 932 bytes left
> java used greatest stack depth: 540 bytes left
> 
> I'm running git-current, with 4KiB stacks; filesystems are ext3 and XFS
> on LVM (on libata devices).
> Does it make sense to raise STACK_WARN to get a stack trace in do_IRQ?
> Or is 540 bytes still "safe" taking into account the separate IRQ stack?
> 

540 bytes free means that we've used 90% of the stack.  I'd say it is
extremely unsafe.

Unbelievably unsafe.  I'm suspecting that the instrumentation is lying to
us for some reason.


