From: Andre Noll <maan@systemlinux.org>
To: Neil Brown <neilb@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
"K.Tanaka" <k-tanaka@ce.jp.nec.com>
Subject: Re: [PATCH 001 of 9] md: Fix deadlock in md/raid1 and md/raid10 when handling a read error.
Date: Thu, 6 Mar 2008 11:51:34 +0100
Message-ID: <20080306105134.GC32242@skl-net.de>
In-Reply-To: <18383.25889.876350.431676@notabene.brown>
On 14:29, Neil Brown wrote:
> > Are you worried about another CPU setting conf->pending_bio_list.head
> > to != NULL after the if statement? If that's an issue, I think the
> > original patch is also problematic, because the same might happen
> > after the final spin_unlock_irq() but before flush_pending_writes()
> > returns zero.
>
> No. I'm worried that another CPU might set
> conf->pending_bio_list.head *before* the if statement, but it isn't
> seen by this CPU because of the lack of memory barriers. The spinlock
> ensures that the memory state is consistent.
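(To make sure we are talking about the same thing: I read that objection
as being about a check done without taking device_lock first, i.e. a
hypothetical fast path like the one sketched below; this is only an
illustration, not code from the patch. Without the ordering guaranteed
by the lock acquire/release, the load of pending_bio_list.head can see a
stale NULL even though another CPU has already queued a bio.)

	/*
	 * Hypothetical lockless fast path, for illustration only.
	 * Without the barrier implied by spin_lock_irq(), this load may
	 * miss a bio that another CPU queued just before.
	 */
	if (!conf->pending_bio_list.head)
		return 0;

	spin_lock_irq(&conf->device_lock);
	/* ... proceed as in the patch ... */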
But is that enough to avoid the deadlock? I think the following
scenario would be possible with the code in the original patch:
// suppose conf->pending_bio_list.head == NULL at this point

CPU0:
	int rv = 0;
	spin_lock_irq(&conf->device_lock);
	if (conf->pending_bio_list.head)	// false, so take the else branch
	spin_unlock_irq(&conf->device_lock);

CPU1:
	conf->pending_bio_list.head = something;

CPU0:
	return rv;	// zero, although a write is now pending
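For reference, this is roughly how I picture flush_pending_writes() with
the original patch applied. It is reconstructed from this thread, so the
conf_t naming and the unplug/bitmap details are my assumptions, not a
quote of the patch:

static int flush_pending_writes(conf_t *conf)
{
	int rv = 0;

	spin_lock_irq(&conf->device_lock);
	if (conf->pending_bio_list.head) {
		struct bio *bio = bio_list_get(&conf->pending_bio_list);
		spin_unlock_irq(&conf->device_lock);
		/* ... unplug the queue and flush pending bitmap writes ... */
		while (bio) {	/* submit everything that was queued */
			struct bio *next = bio->bi_next;
			bio->bi_next = NULL;
			generic_make_request(bio);
			bio = next;
		}
		rv = 1;
	} else
		spin_unlock_irq(&conf->device_lock);
	/*
	 * A bio queued by another CPU after we dropped the lock is not
	 * noticed here; we still return 0, as in the interleaving above.
	 */
	return rv;
}

Whether that can actually deadlock of course depends on whether the
caller is woken up again once the late bio gets queued; that is the part
I am not sure about.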
Andre
--
The only person who always got his work done by Friday was Robinson Crusoe
Thread overview: 15+ messages
2008-03-03 0:16 [PATCH 000 of 9] md: Introduction EXPLAIN PATCH SET HERE NeilBrown
2008-03-03 0:17 ` [PATCH 001 of 9] md: Fix deadlock in md/raid1 and md/raid10 when handling a read error NeilBrown
2008-03-03 15:54 ` Andre Noll
2008-03-04 6:08 ` Neil Brown
2008-03-04 11:29 ` Andre Noll
2008-03-06 3:29 ` Neil Brown
2008-03-06 10:51 ` Andre Noll [this message]
2008-03-03 0:17 ` [PATCH 002 of 9] md: Reduce CPU wastage on idle md array with a write-intent bitmap NeilBrown
2008-03-03 0:17 ` [PATCH 003 of 9] md: Guard against possible bad array geometry in v1 metadata NeilBrown
2008-03-03 0:17 ` [PATCH 004 of 9] md: Clean up irregularity with raid autodetect NeilBrown
2008-03-03 0:17 ` [PATCH 005 of 9] md: Make sure a reshape is started when device switches to read-write NeilBrown
2008-03-03 0:17 ` [PATCH 006 of 9] md: Lock access to rdev attributes properly NeilBrown
2008-03-03 0:17 ` [PATCH 007 of 9] md: Don't attempt read-balancing for raid10 'far' layouts NeilBrown
2008-03-03 0:17 ` [PATCH 008 of 9] md: Fix possible raid1/raid10 deadlock on read error during resync NeilBrown
2008-03-03 0:18 ` [PATCH 009 of 9] md: The md RAID10 resync thread could cause a md RAID10 array deadlock NeilBrown