LKML Archive on lore.kernel.org
From: Andrew Morton <akpm@linux-foundation.org>
To: Luca Tettamanti <kronos.it@gmail.com>
Cc: Jeff Dike <jdike@addtoit.com>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 4/5] UML - Simplify helper stack handling
Date: Sun, 5 Aug 2007 13:54:03 -0700
Message-ID: <20070805135403.3c812dda.akpm@linux-foundation.org>
In-Reply-To: <20070805204114.GA19857@dreamland.darkstar.lan>
On Sun, 5 Aug 2007 22:41:14 +0200 Luca Tettamanti <kronos.it@gmail.com> wrote:
> > On Wed, Jun 27, 2007 at 11:37:01PM -0700, Andrew Morton wrote:
> >
> > So I'm running the generic version of this on i386 with 8k stacks (below),
> > with a quick LTP run.
> >
> > Holy cow, either we use a _lot_ of stack or these numbers are off:
> >
> > vmm:/home/akpm> dmesg -s 1000000|grep 'bytes left'
> > khelper used greatest stack depth: 7176 bytes left
> > khelper used greatest stack depth: 7064 bytes left
> > khelper used greatest stack depth: 6840 bytes left
> > khelper used greatest stack depth: 6812 bytes left
> > hostname used greatest stack depth: 6636 bytes left
> > uname used greatest stack depth: 6592 bytes left
> > uname used greatest stack depth: 6284 bytes left
> > hotplug used greatest stack depth: 5568 bytes left
> > rpc.nfsd used greatest stack depth: 5136 bytes left
> > chown02 used greatest stack depth: 4956 bytes left
> > fchown01 used greatest stack depth: 4892 bytes left
> >
> > That's the sum of process stack and interrupt stack, but I doubt if this
> > little box is using much interrupt stack space.
> >
> > No wonder people are still getting stack overflows with 4k stacks...
>
> Hi Andrew,
> I was a bit worried about stack usage on my setup and google found your
> mail :P
>
> FYI:
>
> khelper used greatest stack depth: 3228 bytes left
> khelper used greatest stack depth: 3124 bytes left
> busybox used greatest stack depth: 2808 bytes left
> modprobe used greatest stack depth: 2744 bytes left
> busybox used greatest stack depth: 2644 bytes left
> modprobe used greatest stack depth: 1836 bytes left
> modprobe used greatest stack depth: 1176 bytes left
> java used greatest stack depth: 932 bytes left
> java used greatest stack depth: 540 bytes left
>
> I'm running git-current, with 4KiB stacks; filesystems are ext3 and XFS
> on LVM (on libata devices).
> Does it make sense to raise STACK_WARN to get a stack trace in do_IRQ?
> Or is 540 bytes still "safe" taking into account the separate IRQ stack?
>
540 bytes free means that we've used 90% of the stack. I'd say it is
extremely unsafe.
Unbelievably unsafe. I suspect the instrumentation is lying to us for some
reason.
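
For context, the "used greatest stack depth: N bytes left" messages quoted
above come from the kernel's stack low-water instrumentation
(CONFIG_DEBUG_STACK_USAGE), which reports at process exit how much of the
per-task stack was never touched. The following is a hypothetical userspace
sketch, not kernel code, of the arithmetic behind that estimate, assuming
THREAD_SIZE is 4 KiB as in Luca's 4K-stacks configuration; it ignores the
struct thread_info that also sits at the base of the kernel stack, so the
percentages are approximate.

#include <stdio.h>

#define THREAD_SIZE 4096UL	/* 4 KiB stacks, per the report above */

int main(void)
{
	/* A few of the "bytes left" values reported in this thread. */
	unsigned long bytes_left[] = { 3228, 2808, 1836, 932, 540 };
	unsigned long i;

	for (i = 0; i < sizeof(bytes_left) / sizeof(bytes_left[0]); i++) {
		unsigned long used = THREAD_SIZE - bytes_left[i];

		printf("%4lu bytes left => %4lu bytes used (~%.0f%% of the stack)\n",
		       bytes_left[i], used, 100.0 * used / THREAD_SIZE);
	}
	return 0;
}

With 540 bytes left out of 4096, roughly 87% of the stack has been consumed,
which is the basis for the "90%" figure above.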
Thread overview: 10+ messages
2007-06-14 20:26 Jeff Dike
2007-06-26 20:35 ` Andrew Morton
2007-06-26 21:53 ` [uml-devel] " Jeff Dike
2007-06-26 22:07 ` Andrew Morton
2007-06-28 6:37 ` Andrew Morton
2007-07-03 15:28 ` [uml-devel] " Blaisorblade
2007-07-03 15:58 ` Andrew Morton
2007-07-03 17:26 ` Jeff Dike
2007-08-05 20:41 ` Luca Tettamanti
2007-08-05 20:54 ` Andrew Morton [this message]