LKML Archive on lore.kernel.org
From: Andrew Morton <akpm@linux-foundation.org>
To: Andi Kleen <ak@suse.de>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] Only print kernel debug information for OOMs caused by kernel allocations
Date: Mon, 28 Jan 2008 00:56:57 -0800 [thread overview]
Message-ID: <20080128005657.24236df5.akpm@linux-foundation.org> (raw)
In-Reply-To: <200801280710.08204.ak@suse.de>
On Mon, 28 Jan 2008 07:10:07 +0100 Andi Kleen <ak@suse.de> wrote:
> On Monday 28 January 2008 06:52, Andrew Morton wrote:
> > On Wed, 16 Jan 2008 23:24:21 +0100 Andi Kleen <ak@suse.de> wrote:
> > > I recently suffered a 20+ minute OOM situation on my desktop, with the
> > > disk thrashing itself to death and the computer completely unresponsive,
> > > when some user program decided to grab all memory. It eventually
> > > recovered, but left lots of ugly and IMHO misleading messages in the
> > > kernel log. Here's a minor improvement.
>
> As a follow-up: this was with swap over dm-crypt. I've recently heard
> about other people having trouble with this too, so this setup seems to
> trigger something bad in the VM.
Where's the backtrace and show_mem() output? :)
> > That information is useful for working out why a userspace allocation
> > attempt failed. If we don't print it, and the application gets killed and
> > thus frees a lot of memory, we will just never know why the allocation
> > failed.
>
> But it's basically only the page fault path (direct or indirect) or write() et al.
> that do these page cache allocations. Do you really think it is that important
> to distinguish these cases individually? In 95+% of all cases it should
> be a standard user page fault, which always has the same backtrace.
Sure, the backtrace isn't very important. The show_mem() output is vital.
> To figure out why the application really OOMed in those cases, you would
> need a user-level backtrace, but the message doesn't supply that anyway.
>
> All other cases will still print the full backtrace, so if some kernel
> subsystem runs amok it should still be possible to diagnose it.
>
We need the show_mem() output to see where all the memory went, and to see
what state page reclaim is in.
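For reference, the dump being discussed is printed from the OOM-killer path
itself; in the 2.6.24-era mm/oom_kill.c it looked roughly like this (a sketch
from memory, so details may differ):

        if (printk_ratelimit()) {
                printk(KERN_WARNING "%s invoked oom-killer: "
                        "gfp_mask=0x%x, order=%d, oomkilladj=%d\n",
                        current->comm, gfp_mask, order, current->oomkilladj);
                dump_stack();
                show_mem();
        }

Andi's patch effectively gates the dump_stack()/show_mem() pair on whether the
failing allocation was a user page-cache allocation (PF_USER_ALLOC).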
>
> >
> > >  struct page *__page_cache_alloc(gfp_t gfp)
> > >  {
> > > +        struct task_struct *me = current;
> > > +        unsigned old = (~me->flags) & PF_USER_ALLOC;
> > > +        struct page *p;
> > > +
> > > +        me->flags |= PF_USER_ALLOC;
> > >          if (cpuset_do_page_mem_spread()) {
> > >                  int n = cpuset_mem_spread_node();
> > > -                return alloc_pages_node(n, gfp, 0);
> > > -        }
> > > -        return alloc_pages(gfp, 0);
> > > +                p = alloc_pages_node(n, gfp, 0);
> > > +        } else
> > > +                p = alloc_pages(gfp, 0);
> > > +        /* Clear PF_USER_ALLOC if it wasn't set originally */
> > > +        me->flags ^= old;
> > > +        return p;
> > >  }
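The save/restore trick in that hunk may be worth spelling out: old ends up
equal to PF_USER_ALLOC only if the bit was clear on entry, so the final XOR
clears it again in exactly that case and leaves a caller that already had the
flag set undisturbed. A standalone illustration of the idiom (userspace C,
with a made-up flag value):

        #include <assert.h>
        #include <stdio.h>

        #define PF_USER_ALLOC 0x1000    /* made-up bit for illustration */

        static unsigned flags;

        static void with_flag_set(void)
        {
                /* old == PF_USER_ALLOC iff the bit was clear on entry */
                unsigned old = (~flags) & PF_USER_ALLOC;

                flags |= PF_USER_ALLOC;
                /* ... work that relies on the flag being set ... */

                /* XOR clears the bit only if it was clear on entry */
                flags ^= old;
        }

        int main(void)
        {
                flags = 0;
                with_flag_set();
                assert(!(flags & PF_USER_ALLOC));       /* restored clear */

                flags = PF_USER_ALLOC;
                with_flag_set();
                assert(flags & PF_USER_ALLOC);          /* restored set */

                printf("flag restored in both cases\n");
                return 0;
        }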
> >
> > That's an appreciable amount of new overhead for, at best, a fairly marginal
> > benefit. Perhaps __GFP_USER could be [re|ab]used.
>
> It's a few non-atomic bit operations. Do you really think that is considerable
> overhead? Also, everything should be cache-hot already. My guess is that even with the
> additional function call it's < 10 cycles more.
Plus an additional function call. On the already-deep page allocation
path, I might add.
> > Alternatively: if we've printed the diagnostic on behalf of this process
> > and then decided to kill it, set some flag to prevent us from printing it
> > again.
>
> Do you really think that would help? I thought these messages usually came
> from different processes.
Dunno.
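For the suppress-repeats idea above, a minimal sketch, assuming a hypothetical
PF_OOM_REPORTED task flag (no such flag exists; the test and set would live
next to the existing ratelimit check):

        if (printk_ratelimit() && !(current->flags & PF_OOM_REPORTED)) {
                printk(KERN_WARNING "%s invoked oom-killer: gfp_mask=0x%x\n",
                       current->comm, gfp_mask);
                dump_stack();
                show_mem();
                /* hypothetical: suppress further dumps from this task */
                current->flags |= PF_OOM_REPORTED;
        }

As Andi points out, this only helps if the repeated dumps come from the same
task; if each OOM is triggered by a different process, each one would still
print once.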
Thread overview: 7+ messages
2008-01-16 22:24 Andi Kleen
2008-01-16 22:55 ` Paul Jackson
2008-01-28 5:52 ` Andrew Morton
2008-01-28 6:10 ` Andi Kleen
2008-01-28 8:56 ` Andrew Morton [this message]
2008-01-28 9:11 ` Andi Kleen
2008-01-28 9:27 ` Andrew Morton