LKML Archive on lore.kernel.org
From: Chris Mason <chris.mason@oracle.com>
To: Al Boldi <a1426z@gawab.com>
Cc: "Ingo Molnar" <mingo@elte.hu>,
"Oliver Pinter" <oliver.pntr@gmail.com>,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: konqueror deadlocks on 2.6.22
Date: Tue, 22 Jan 2008 09:25:49 -0500
Message-ID: <200801220925.50314.chris.mason@oracle.com>
In-Reply-To: <200801221623.42989.a1426z@gawab.com>
On Tuesday 22 January 2008, Al Boldi wrote:
> Ingo Molnar wrote:
> > * Oliver Pinter (Pintér Olivér) <oliver.pntr@gmail.com> wrote:
> > > and then please update to CFS-v24.1
> > > http://people.redhat.com/~mingo/cfs-scheduler/sched-cfs-v2.6.22.15-v24.1.patch
> > >
> > > > Yes with CFSv20.4, as in the log.
> > > >
> > > > It also hangs on 2.6.23.13
> >
> > my feeling is that this is some sort of timing dependent race in
> > konqueror/kde/qt that is exposed when a different scheduler is put in.
> >
> > If it disappears with CFS-v24.1 it is probably just because the timings
> > will change again. Would be nice to debug this on the konqueror side and
> > analyze why it fails and how. You can probably tune the timings by
> > enabling SCHED_DEBUG and tweaking /proc/sys/kernel/*sched* values - in
> > particular sched_latency and the granularity settings. Setting wakeup
> > granularity to 0 might be one of the things that could make a
> > difference.
>
> Thanks Ingo, but Mike suggested that data=writeback may make a difference,
> which it does indeed.
>
> So the bug seems to be related to data=ordered, although I haven't gotten
> any feedback from the ext3 gurus yet.
>
> Seems rather critical though, as data=writeback is a dangerous mode to run.
Running fsync in data=ordered means that all of the dirty blocks on the FS
will get written before fsync returns. Your original stack trace shows
everyone either performing writeback for a log commit or waiting for the log
commit to return.
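For reference, the data journaling mode is selected per-mount; a minimal sketch
for trying data=writeback on a scratch filesystem (/dev/sdXN and /mnt/test are
placeholder names, not anything from your setup):

# ext3 defaults to data=ordered; data=writeback drops the data-before-metadata
# ordering guarantee, so only use it on a filesystem you can afford to lose.
mount -t ext3 -o data=writeback /dev/sdXN /mnt/test
# or set it persistently in /etc/fstab:
#   /dev/sdXN  /mnt/test  ext3  data=writeback  0  2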
The key task in your trace is kjournald, stuck in get_request_wait. It could
be a block layer bug, not giving it requests quickly enough, or it could be
the scheduler not giving it back the CPU fast enough.
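A quick way to confirm that while a stall is happening is a SysRq task dump;
the sketch below assumes SysRq is enabled, and the scheduler knob name assumes
a CONFIG_SCHED_DEBUG kernel with the CFS tunables (the exact name can differ
between versions):

# dump all task stacks to the kernel log during a stall, then look at
# kjournald's stack -- get_request_wait points at the block layer side
echo t > /proc/sysrq-trigger
dmesg | grep -A 25 kjournald

# scheduler side: Ingo's suggestion of zeroing the wakeup granularity
echo 0 > /proc/sys/kernel/sched_wakeup_granularity_ns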
At any rate, that's where to concentrate the debugging. You should be able to
simulate this by running a few instances of the below loop and looking for
stalls:
while true; do
    time dd if=/dev/zero of=foo bs=50M count=4 oflag=sync
done
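For example, three of these writers in parallel (the file names and the count
are arbitrary) make the stalls easier to hit; watch for dd runs that take much
longer than their neighbours:

for i in 1 2 3; do
    ( while true; do
          time dd if=/dev/zero of=foo$i bs=50M count=4 oflag=sync
      done ) &
done
# stop them afterwards with: kill %1 %2 %3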
Thread overview: 12+ messages
2008-01-19 18:14 Al Boldi
2008-01-19 18:53 ` Oliver Pinter (Pintér Olivér)
2008-01-19 20:57 ` Al Boldi
2008-01-19 22:17 ` Oliver Pinter (Pintér Olivér)
2008-01-22 10:10 ` Ingo Molnar
2008-01-22 13:23 ` Al Boldi
2008-01-22 14:25 ` Chris Mason [this message]
2008-01-22 18:54 ` Al Boldi
2008-01-22 19:13 ` Chris Mason
[not found] ` <1200802290.4166.2.camel@homer.simson.net>
2008-01-20 5:41 ` Al Boldi
2008-01-20 7:23 ` Mike Galbraith
2008-01-20 7:30 ` Mike Galbraith