LKML Archive on lore.kernel.org
From: Justin Piszcz <jpiszcz@lucidpixels.com>
To: Bill Cizek <cizek@rcn.com>
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
xfs@oss.sgi.com, Alan Piszcz <ap@solarrain.com>
Subject: Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2
Date: Thu, 25 Jan 2007 06:13:07 -0500 (EST) [thread overview]
Message-ID: <Pine.LNX.4.64.0701250612040.8945@p34.internal.lan> (raw)
In-Reply-To: <45B80610.5010804@rcn.com>
On Wed, 24 Jan 2007, Bill Cizek wrote:
> Justin Piszcz wrote:
> > On Mon, 22 Jan 2007, Andrew Morton wrote:
> >
> > > > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz
> > > > <jpiszcz@lucidpixels.com> wrote:
> > > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to
> > > > invoke the OOM killer and kill all of my processes?
> > > >
> > Running with PREEMPT OFF lets me copy the file!! The machine LAGS
> > occasionally every 5-30-60 seconds or so VERY BADLY, talking 5-10 seconds of
> > lag, but hey, it does not crash!! I will boot the older kernel with preempt
> > on and see if I can get you that information you requested.
> >
> Justin,
>
> According to your kernel_ring_buffer.txt (attached to another email), you are
> using "anticipatory" as your io scheduler:
> Jan 24 18:35:25 p34 kernel: [ 0.142130] io scheduler noop registered
> Jan 24 18:35:25 p34 kernel: [ 0.142194] io scheduler anticipatory registered (default)
>
> I had a problem with this scheduler where my system would occasionally lockup
> during heavy I/O. Sometimes it would fix itself, sometimes I had to reboot.
> I changed to the "CFQ" io scheduler and my system has worked fine since then.
>
> CFQ has to be built into the kernel (under Block Layer -> IO Schedulers). It can
> be selected as the default, or you can set it at runtime:
>
> echo cfq > /sys/block/<disk>/queue/scheduler
> ...
>
> Hope this helps,
> Bill
>
>
I used to run CFQ a while back, but I switched over to AS because it has
better performance for my workloads. Currently I am running with PREEMPT
off; if I see any additional issues, I will switch to the CFQ scheduler.
Right now, it's the OOM killer that is going crazy.
Justin.
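[Editor's note: Bill's runtime-switch suggestion above can be sketched as follows. The device name sda is an assumption; substitute your own disk. Reading the sysfs file lists all compiled-in schedulers, with the active one shown in brackets, and the echo takes effect immediately for that disk only (root required, not persistent across reboots).]

```shell
# Show the schedulers compiled into this kernel; the active one is bracketed.
# Example output on a kernel like the one discussed here might be:
#   noop [anticipatory] cfq
cat /sys/block/sda/queue/scheduler

# Switch this disk to CFQ at runtime (must be run as root).
echo cfq > /sys/block/sda/queue/scheduler

# Verify the change took effect.
cat /sys/block/sda/queue/scheduler
```

To make the choice persistent, the usual approach is the `elevator=cfq` kernel boot parameter rather than sysfs.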
Thread overview: 23+ messages
2007-01-21 19:27 Justin Piszcz
2007-01-22 13:37 ` Pavel Machek
2007-01-22 18:48 ` Justin Piszcz
2007-01-22 23:47 ` Pavel Machek
2007-01-24 23:39 ` Justin Piszcz
2007-01-24 23:42 ` Justin Piszcz
2007-01-25 0:32 ` Pavel Machek
2007-01-25 0:36 ` Justin Piszcz
2007-01-25 0:58 ` Justin Piszcz
2007-01-25 9:08 ` Justin Piszcz
2007-01-25 22:34 ` Mark Hahn
2007-01-26 0:22 ` Justin Piszcz
2007-01-22 19:57 ` Andrew Morton
2007-01-22 20:20 ` Justin Piszcz
2007-01-23 0:37 ` Donald Douwsma
2007-01-23 1:12 ` Andrew Morton
2007-01-24 23:40 ` Justin Piszcz
2007-01-25 0:10 ` Justin Piszcz
2007-01-25 0:36 ` Nick Piggin
2007-01-25 11:11 ` Justin Piszcz
2007-01-25 1:21 ` Bill Cizek
2007-01-25 11:13 ` Justin Piszcz [this message]
2007-01-25 0:34 ` Justin Piszcz