From: NeilBrown <neilb@suse.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 006 of 9] md: Lock access to rdev attributes properly
Date: Mon, 3 Mar 2008 11:17:39 +1100
Message-ID: <1080303001739.23653@suse.de>
In-Reply-To: <20080303111240.23302.patches@notabene>


When we access attributes of an rdev (component device on an md array)
through sysfs, we really need to lock the array against concurrent
changes.  We currently do that when we change an attribute, but not
when we read an attribute.  We need to lock when reading as well;
otherwise rdev->mddev could become NULL while we are accessing it.

So add appropriate locking (mddev_lock) to rdev_attr_show.
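
For illustration only, here is a minimal userspace sketch of the
pattern the rdev_attr_show hunk below adopts.  It uses pthreads and
made-up parent/child names (not the md types): cache the back-pointer,
lock through the cached copy, then re-check the back-pointer before
trusting it.

/* Hypothetical userspace analogy of the rdev_attr_show locking
 * pattern; 'parent'/'child' stand in for mddev/rdev, and the parent
 * object is assumed to outlive the cached pointer, as an mddev does.
 */
#include <errno.h>
#include <pthread.h>

struct parent {
	pthread_mutex_t lock;
};

struct child {
	struct parent *parent;	/* may become NULL concurrently */
	int attr;
};

static int show_attr(struct child *c, int *out)
{
	struct parent *p = c->parent;	/* cache before locking */
	int rv;

	if (!p)
		return -EBUSY;
	rv = -pthread_mutex_lock(&p->lock);
	if (!rv) {
		if (c->parent == NULL)	/* detached while we slept */
			rv = -EBUSY;
		else
			*out = c->attr;
		pthread_mutex_unlock(&p->lock);	/* via the cached pointer */
	}
	return rv;
}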

rdev_size_store requires some extra care as well, as it needs to unlock
the mddev while scanning other mddevs for overlapping regions.  We
currently assume that rdev->mddev will be unchanged after the
scan, but that is not guaranteed.  So take a copy of rdev->mddev for
use at the end of the function.
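
Again purely illustrative (same hypothetical parent/child names, not
the md symbols), this is the shape of that change: copy the
back-pointer while the lock is held, and use the copy, not the
back-pointer, when relocking and updating afterwards.

/* Hypothetical userspace analogy of the rdev_size_store change:
 * keep a copy of the back-pointer across the unlocked scan. */
#include <pthread.h>

struct parent {
	pthread_mutex_t lock;
	long size;
};

struct child {
	struct parent *parent;	/* may change while we are unlocked */
	long size;
};

/* Called with c->parent's lock held, as md sysfs handlers are. */
static void resize_child(struct child *c, long newsize)
{
	struct parent *my_parent = c->parent;	/* copy while locked */

	c->size = newsize;
	pthread_mutex_unlock(&my_parent->lock);
	/* ... scan other parents for overlaps, without our lock ... */
	pthread_mutex_lock(&my_parent->lock);
	/* c->parent may no longer equal my_parent; trust the copy */
	if (newsize < my_parent->size || my_parent->size == 0)
		my_parent->size = newsize;
}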

Signed-off-by: Neil Brown <neilb@suse.de>

### Diffstat output
 ./drivers/md/md.c |   35 ++++++++++++++++++++++++++---------
 1 file changed, 26 insertions(+), 9 deletions(-)

diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c	2008-02-22 15:46:52.000000000 +1100
+++ ./drivers/md/md.c	2008-02-22 15:56:20.000000000 +1100
@@ -2001,9 +2001,11 @@ rdev_size_store(mdk_rdev_t *rdev, const 
 	char *e;
 	unsigned long long size = simple_strtoull(buf, &e, 10);
 	unsigned long long oldsize = rdev->size;
+	mddev_t *my_mddev = rdev->mddev;
+
 	if (e==buf || (*e && *e != '\n'))
 		return -EINVAL;
-	if (rdev->mddev->pers)
+	if (my_mddev->pers)
 		return -EBUSY;
 	rdev->size = size;
 	if (size > oldsize && rdev->mddev->external) {
@@ -2016,7 +2018,7 @@ rdev_size_store(mdk_rdev_t *rdev, const 
 		int overlap = 0;
 		struct list_head *tmp, *tmp2;
 
-		mddev_unlock(rdev->mddev);
+		mddev_unlock(my_mddev);
 		for_each_mddev(mddev, tmp) {
 			mdk_rdev_t *rdev2;
 
@@ -2036,7 +2038,7 @@ rdev_size_store(mdk_rdev_t *rdev, const 
 				break;
 			}
 		}
-		mddev_lock(rdev->mddev);
+		mddev_lock(my_mddev);
 		if (overlap) {
 			/* Someone else could have slipped in a size
 			 * change here, but doing so is just silly.
@@ -2048,8 +2050,8 @@ rdev_size_store(mdk_rdev_t *rdev, const 
 			return -EBUSY;
 		}
 	}
-	if (size < rdev->mddev->size || rdev->mddev->size == 0)
-		rdev->mddev->size = size;
+	if (size < my_mddev->size || my_mddev->size == 0)
+		my_mddev->size = size;
 	return len;
 }
 
@@ -2070,10 +2072,21 @@ rdev_attr_show(struct kobject *kobj, str
 {
 	struct rdev_sysfs_entry *entry = container_of(attr, struct rdev_sysfs_entry, attr);
 	mdk_rdev_t *rdev = container_of(kobj, mdk_rdev_t, kobj);
+	mddev_t *mddev = rdev->mddev;
+	ssize_t rv;
 
 	if (!entry->show)
 		return -EIO;
-	return entry->show(rdev, page);
+
+	rv = mddev ? mddev_lock(mddev) : -EBUSY;
+	if (!rv) {
+		if (rdev->mddev == NULL)
+			rv = -EBUSY;
+		else
+			rv = entry->show(rdev, page);
+		mddev_unlock(mddev);
+	}
+	return rv;
 }
 
 static ssize_t
@@ -2082,15 +2095,19 @@ rdev_attr_store(struct kobject *kobj, st
 {
 	struct rdev_sysfs_entry *entry = container_of(attr, struct rdev_sysfs_entry, attr);
 	mdk_rdev_t *rdev = container_of(kobj, mdk_rdev_t, kobj);
-	int rv;
+	ssize_t rv;
+	mddev_t *mddev = rdev->mddev;
 
 	if (!entry->store)
 		return -EIO;
 	if (!capable(CAP_SYS_ADMIN))
 		return -EACCES;
-	rv = mddev_lock(rdev->mddev);
+	rv = mddev ? mddev_lock(mddev) : -EBUSY;
 	if (!rv) {
-		rv = entry->store(rdev, page, length);
+		if (rdev->mddev == NULL)
+			rv = -EBUSY;
+		else
+			rv = entry->store(rdev, page, length);
-		mddev_unlock(rdev->mddev);
+		mddev_unlock(mddev);
 	}
 	return rv;
