LKML Archive on lore.kernel.org
* Re: nbd locking problems
@ 2011-01-30 15:07 Paul Clements
  0 siblings, 0 replies; 7+ messages in thread
From: Paul Clements @ 2011-01-30 15:07 UTC (permalink / raw)
  To: Soren Hansen; +Cc: kernel list, Arnd Bergmann

2a48fc0ab24241755dc93bfd4f01d68efab47f5a replaced uses of the BKL in
the nbd driver with mutex operations. Since then, I've been seeing
these lockups:

 INFO: task qemu-nbd:16115 blocked for more than 120 seconds.
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
 qemu-nbd      D 0000000000000001     0 16115  16114 0x00000004
  ffff88007d775d98 0000000000000082 ffff88007d775fd8 ffff88007d774000
  0000000000013a80 ffff8800020347e0 ffff88007d775fd8 0000000000013a80
  ffff880133730000 ffff880002034440 ffffea0004333db8 ffffffffa071c020
 Call Trace:
  [<ffffffff815b9997>] __mutex_lock_slowpath+0xf7/0x180
  [<ffffffff81132f13>] ? handle_mm_fault+0x1c3/0x410
  [<ffffffff815b93eb>] mutex_lock+0x2b/0x50
  [<ffffffffa071a21c>] nbd_ioctl+0x6c/0x1c0 [nbd]
  [<ffffffff812cb970>] blkdev_ioctl+0x230/0x730
  [<ffffffff811967a1>] block_ioctl+0x41/0x50
  [<ffffffff81175c03>] do_vfs_ioctl+0x93/0x370
  [<ffffffff8116fa83>] ? putname+0x33/0x50
  [<ffffffff81175f61>] sys_ioctl+0x81/0xa0
  [<ffffffff8100c0c2>] system_call_fastpath+0x16/0x1b
Instrumenting the nbd module's ioctl handler with some extra logging
clearly shows the NBD_DO_IT ioctl being invoked. NBD_DO_IT is
long-lived in the sense that it doesn't return until another ioctl
asks the driver to disconnect. That disconnect ioctl, however, blocks
waiting for the module-level mutex that replaced the BKL, and then
we're stuck.

Soren, just getting to this, but your analysis is correct. Since
Pavel's patch of about two years ago (which put tx_lock around the
ioctls), neither the BKL nor any other locking around the ioctls has
been needed, so your patch looks correct to me.

Thanks very much for this.

--
Paul


This patch removes the module-level mutex altogether. That mutex is
clearly wrong, and as far as I can see, it's entirely unnecessary: the
nbd driver maintains per-device mutexes, and I don't see anything that
would require a module-level (or kernel-level, for that matter) mutex.

Signed-off-by: Soren Hansen <soren@linux2go.dk>
---
 drivers/block/nbd.c |    3 ---
 1 files changed, 0 insertions(+), 3 deletions(-)
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index a32fb41..e6fc716 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -53,7 +53,6 @@
 #define DBG_BLKDEV      0x0100
 #define DBG_RX          0x0200
 #define DBG_TX          0x0400
-static DEFINE_MUTEX(nbd_mutex);
 static unsigned int debugflags;
 #endif /* NDEBUG */

@@ -718,11 +717,9 @@ static int nbd_ioctl(struct block_device *bdev, fmode_t mode,
 	dprintk(DBG_IOCTL, "%s: nbd_ioctl cmd=%s(0x%x) arg=%lu\n",
 			lo->disk->disk_name, ioctl_cmd_to_ascii(cmd), cmd, arg);

-	mutex_lock(&nbd_mutex);
 	mutex_lock(&lo->tx_lock);
 	error = __nbd_ioctl(bdev, lo, cmd, arg);
 	mutex_unlock(&lo->tx_lock);
-	mutex_unlock(&nbd_mutex);

 	return error;
 }

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: nbd locking problems
  2011-01-28 23:11   ` Soren Hansen
@ 2011-03-13 10:28     ` Pavel Machek
  0 siblings, 0 replies; 7+ messages in thread
From: Pavel Machek @ 2011-03-13 10:28 UTC (permalink / raw)
  To: Soren Hansen; +Cc: Serge E. Hallyn, linux-kernel, Dave Hansen

Hi!

> > In the meantime, could you please post your patch?  (Is it the
> > three-liner I'd expect?)
> 
> Indeed it is. I was holding off on posting it, because I was hoping
> Pavel Machek would chime in. Pavel's commit message in [1] suggested
> that there was more work to do before the BKL could be vanquished from
> this driver (by changing .ioctl to .unlocked_ioctl).

Well, I tried to figure out locking in nbd, but could not. It somehow
wants both transmit and receive protected... but at the same time it
needs receiving while it transmits. Not sure what to do there.

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* Re: nbd locking problems
  2011-01-28 21:18 ` Serge E. Hallyn
  2011-01-28 21:30   ` Arnd Bergmann
  2011-01-28 23:11   ` Soren Hansen
@ 2011-01-31 19:35   ` Serge E. Hallyn
  2 siblings, 0 replies; 7+ messages in thread
From: Serge E. Hallyn @ 2011-01-31 19:35 UTC (permalink / raw)
  To: Serge E. Hallyn; +Cc: Soren Hansen, linux-kernel, Dave Hansen, Arnd Bergmann

Quoting Serge E. Hallyn (serge@hallyn.com):
> Quoting Soren Hansen (soren@linux2go.dk):

(Sorry, I seem to have stupidly deleted later replies)

> 
> As Dave just explained to me, BKL is released when you sleep :)  I
> assume that's how it gets released around the kthread_create().  I
> *think* you're right that the new mutex is superfluous, but I'd like to
> look through the ioctl code and make sure there's no shared state which
> we need to protect.  I don't see how there could be, or rather, if there
> were, then it was broken before.

Yup, removing nbd_mutex should be safe, especially since
bdev->bd_disk->private_data doesn't get changed outside of
nbd_init().

thanks,
-serge


* Re: nbd locking problems
  2011-01-28 21:18 ` Serge E. Hallyn
  2011-01-28 21:30   ` Arnd Bergmann
@ 2011-01-28 23:11   ` Soren Hansen
  2011-03-13 10:28     ` Pavel Machek
  2011-01-31 19:35   ` Serge E. Hallyn
  2 siblings, 1 reply; 7+ messages in thread
From: Soren Hansen @ 2011-01-28 23:11 UTC (permalink / raw)
  To: Serge E. Hallyn; +Cc: linux-kernel, Dave Hansen

2011/1/28 Serge E. Hallyn <serge@hallyn.com>:
> Thanks for looking into this, Soren.

Sure :)

> As Dave just explained to me, BKL is released when you sleep :)

Ah, yes, that makes perfect sense.

> I assume that's how it gets released around the kthread_create().  I
> *think* you're right that the new mutex is superfluous, but I'd like to
> look through the ioctl code and make sure there's no shared state which
> we need to protect.

Yup, I've tried that and I didn't find anything, but this is very new
territory to me, so more eyes sounds great.

> I don't see how there could be, or rather, if there
> were, then it was broken before.

Good point.

> In the meantime, could you please post your patch?  (Is it the
> three-liner I'd expect?)

Indeed it is. I was holding off on posting it, because I was hoping
Pavel Machek would chime in. Pavel's commit message in [1] suggested
that there was more work to do before the BKL could be vanquished from
this driver (by changing .ioctl to .unlocked_ioctl).

I guess it can't hurt to post it, though. I'll do that in a second.

[1]: 1a2ad21128bb4eb79f3c05e5801edcc5ed3ef1d3

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/


* Re: nbd locking problems
  2011-01-28 21:18 ` Serge E. Hallyn
@ 2011-01-28 21:30   ` Arnd Bergmann
  2011-01-28 23:11   ` Soren Hansen
  2011-01-31 19:35   ` Serge E. Hallyn
  2 siblings, 0 replies; 7+ messages in thread
From: Arnd Bergmann @ 2011-01-28 21:30 UTC (permalink / raw)
  To: Serge E. Hallyn; +Cc: Soren Hansen, linux-kernel, Dave Hansen

On Friday 28 January 2011 22:18:26 Serge E. Hallyn wrote:
> As Dave just explained to me, BKL is released when you sleep :)  I
> assume that's how it gets released around the kthread_create().  I
> think you're right that the new mutex is superfluous, but I'd like to
> look through the ioctl code and make sure there's no shared state which
> we need to protect.  I don't see how there could be, or rather, if there
> were, then it was broken before.

Good analysis. The mass conversion from the BKL to per-driver mutexes
indeed assumes that no ioctl blocks for an indefinite amount of time,
as nbd evidently does. I think it's quite likely that the nbd driver
does not require any locking to replace the BKL; it certainly seems
to have reasonable locking rules of its own, and the ioctl function
got the BKL only because of an earlier pushdown from the global VFS
ioctl function.

	Arnd


* Re: nbd locking problems
  2011-01-26 10:52 Soren Hansen
@ 2011-01-28 21:18 ` Serge E. Hallyn
  2011-01-28 21:30   ` Arnd Bergmann
                     ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Serge E. Hallyn @ 2011-01-28 21:18 UTC (permalink / raw)
  To: Soren Hansen; +Cc: linux-kernel, Dave Hansen

Quoting Soren Hansen (soren@linux2go.dk):
> I'm seeing the following lockup on a current kernel:
> 
> [117360.880311] INFO: task qemu-nbd:16115 blocked for more than 120 seconds.
> [117360.880319] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [117360.880326] qemu-nbd      D 0000000000000001     0 16115  16114 0x00000004
> [117360.880337]  ffff88007d775d98 0000000000000082 ffff88007d775fd8 ffff88007d774000
> [117360.880348]  0000000000013a80 ffff8800020347e0 ffff88007d775fd8 0000000000013a80
> [117360.880359]  ffff880133730000 ffff880002034440 ffffea0004333db8 ffffffffa071c020
> [117360.880369] Call Trace:
> [117360.880389]  [<ffffffff815b9997>] __mutex_lock_slowpath+0xf7/0x180
> [117360.880400]  [<ffffffff81132f13>] ? handle_mm_fault+0x1c3/0x410
> [117360.880408]  [<ffffffff815b93eb>] mutex_lock+0x2b/0x50
> [117360.880419]  [<ffffffffa071a21c>] nbd_ioctl+0x6c/0x1c0 [nbd]
> [117360.880429]  [<ffffffff812cb970>] blkdev_ioctl+0x230/0x730
> [117360.880438]  [<ffffffff811967a1>] block_ioctl+0x41/0x50
> [117360.880447]  [<ffffffff81175c03>] do_vfs_ioctl+0x93/0x370
> [117360.880455]  [<ffffffff8116fa83>] ? putname+0x33/0x50
> [117360.880463]  [<ffffffff81175f61>] sys_ioctl+0x81/0xa0
> [117360.880473]  [<ffffffff8100c0c2>] system_call_fastpath+0x16/0x1b
> 
> It seems to be caused by 2a48fc0ab24241755dc93bfd4f01d68efab47f5a which
> replaced lock_kernel() with mutex operations.
> 
> In my attempts to debug this problem, I reverted that commit.
> 
> There's a printk statement that logs each ioctl call right before the
> lock_kernel call. I extended it to log the pid of the caller, and added
> a similar printk statement after unlock_kernel.
> 
> This worked for me, and I saw something like this in my dmesg:
> 
> [122322.650443] nbd9: in: nbd_ioctl pid=(24396) cmd=set-blksize(0xab01) arg=1024
> [122322.650454] nbd9: out: nbd_ioctl pid=(24396) cmd=set-blksize(0xab01) arg=1024
> [122322.650462] nbd9: in: nbd_ioctl pid=(24396) cmd=set-size-blocks(0xab07) arg=153600
> [122322.650469] nbd9: out: nbd_ioctl pid=(24396) cmd=set-size-blocks(0xab07) arg=153600
> [122322.650476] nbd9: in: nbd_ioctl pid=(24396) cmd=clear-sock(0xab04) arg=153600
> [122322.650483] nbd9: out: nbd_ioctl pid=(24396) cmd=clear-sock(0xab04) arg=153600
> [122322.650490] nbd9: in: nbd_ioctl pid=(24396) cmd=set-sock(0xab00) arg=3
> [122322.650497] nbd9: out: nbd_ioctl pid=(24396) cmd=set-sock(0xab00) arg=3
> [122322.651178] nbd9: in: nbd_ioctl pid=(24396) cmd=do-it(0xab03) arg=139925697560832
> [122322.652454]  nbd9: unknown partition table
> [122323.070522] nbd9: in: nbd_ioctl pid=(24427) cmd=clear-que(0xab05) arg=140267460190640
> [122323.070527] nbd9: out: nbd_ioctl pid=(24427) cmd=clear-que(0xab05) arg=140267460190640
> [122323.070530] nbd9: in: nbd_ioctl pid=(24427) cmd=disconnect(0xab08) arg=140267460190640
> [122323.070532] nbd9: NBD_DISCONNECT
> [122323.070555] nbd9: out: nbd_ioctl pid=(24427) cmd=disconnect(0xab08) arg=140267460190640
> [122323.070558] nbd9: in: nbd_ioctl pid=(24427) cmd=clear-sock(0xab04) arg=140267460190640
> [122323.070561] nbd9: out: nbd_ioctl pid=(24427) cmd=clear-sock(0xab04) arg=140267460190640
> [122323.070667] nbd9: Receive control failed (result -32)
> [122323.070710] nbd9: queue cleared
> [122323.071186] nbd9: out: nbd_ioctl pid=(24396) cmd=do-it(0xab03) arg=139925697560832
> [122323.071194] nbd9: in: nbd_ioctl pid=(24396) cmd=clear-que(0xab05) arg=32
> [122323.071197] nbd9: out: nbd_ioctl pid=(24396) cmd=clear-que(0xab05) arg=32
> [122323.071200] nbd9: in: nbd_ioctl pid=(24396) cmd=clear-sock(0xab04) arg=32
> [122323.071203] nbd9: out: nbd_ioctl pid=(24396) cmd=clear-sock(0xab04) arg=32
> 
> As I understand it, lock_kernel is implemented using a semaphore which
> allows a single process to invoke lock_kernel more than once without
> locking up. However, as can be seen above (timestamp 122322.651178), pid
> 24396 invokes the NBD_DO_IT ioctl. This calls lock_kernel and starts a
> kernel thread to do the heavy lifting of nbd. Later on, pid 24427 comes
> along, wanting to shut down the connection. It works. I don't understand
> how it makes it past the lock_kernel() call. There's only one instance
> of each of lock_kernel and unlock_kernel in the nbd driver, so I'm
> fairly certain unlock_kernel isn't being called (at least not by nbd
> directly).
> 
> With the mutex code introduced by
> 2a48fc0ab24241755dc93bfd4f01d68efab47f5a but with my extra printk's, this
> is what I get:
> 
> [125474.139952] nbd9: in: nbd_ioctl pid=(26015) cmd=set-blksize(0xab01) arg=1024
> [125474.139963] nbd9: out: nbd_ioctl pid=(26015) cmd=set-blksize(0xab01) arg=1024
> [125474.139971] nbd9: in: nbd_ioctl pid=(26015) cmd=set-size-blocks(0xab07) arg=153600
> [125474.139979] nbd9: out: nbd_ioctl pid=(26015) cmd=set-size-blocks(0xab07) arg=153600
> [125474.139986] nbd9: in: nbd_ioctl pid=(26015) cmd=clear-sock(0xab04) arg=153600
> [125474.139994] nbd9: out: nbd_ioctl pid=(26015) cmd=clear-sock(0xab04) arg=153600
> [125474.140001] nbd9: in: nbd_ioctl pid=(26015) cmd=set-sock(0xab00) arg=3
> [125474.140036] nbd9: out: nbd_ioctl pid=(26015) cmd=set-sock(0xab00) arg=3
> [125474.140660] nbd9: in: nbd_ioctl pid=(26015) cmd=do-it(0xab03) arg=140041726828800
> [125474.142255]  nbd9: unknown partition table
> [125475.604613] nbd9: in: nbd_ioctl pid=(26047) cmd=clear-que(0xab05) arg=139640717427120
> 
> ...and then it sits there until I kill -9 26015.  This makes sense to
> me, though. The mutex is held by 26015, so of course 26047 can't grab
> it, too. The mystery is why it works with lock_kernel. It doesn't
> release the kernel explicitly, and it doesn't (explicitly, at least)
> call schedule().

Thanks for looking into this, Soren.

As Dave just explained to me, BKL is released when you sleep :)  I
assume that's how it gets released around the kthread_create().  I
*think* you're right that the new mutex is superfluous, but I'd like to
look through the ioctl code and make sure there's no shared state which
we need to protect.  I don't see how there could be, or rather, if there
were, then it was broken before.

In the meantime, could you please post your patch?  (Is it the
three-liner I'd expect?)

thanks,
-serge


* nbd locking problems
@ 2011-01-26 10:52 Soren Hansen
  2011-01-28 21:18 ` Serge E. Hallyn
  0 siblings, 1 reply; 7+ messages in thread
From: Soren Hansen @ 2011-01-26 10:52 UTC (permalink / raw)
  To: linux-kernel

I'm seeing the following lockup on a current kernel:

[117360.880311] INFO: task qemu-nbd:16115 blocked for more than 120 seconds.
[117360.880319] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[117360.880326] qemu-nbd      D 0000000000000001     0 16115  16114 0x00000004
[117360.880337]  ffff88007d775d98 0000000000000082 ffff88007d775fd8 ffff88007d774000
[117360.880348]  0000000000013a80 ffff8800020347e0 ffff88007d775fd8 0000000000013a80
[117360.880359]  ffff880133730000 ffff880002034440 ffffea0004333db8 ffffffffa071c020
[117360.880369] Call Trace:
[117360.880389]  [<ffffffff815b9997>] __mutex_lock_slowpath+0xf7/0x180
[117360.880400]  [<ffffffff81132f13>] ? handle_mm_fault+0x1c3/0x410
[117360.880408]  [<ffffffff815b93eb>] mutex_lock+0x2b/0x50
[117360.880419]  [<ffffffffa071a21c>] nbd_ioctl+0x6c/0x1c0 [nbd]
[117360.880429]  [<ffffffff812cb970>] blkdev_ioctl+0x230/0x730
[117360.880438]  [<ffffffff811967a1>] block_ioctl+0x41/0x50
[117360.880447]  [<ffffffff81175c03>] do_vfs_ioctl+0x93/0x370
[117360.880455]  [<ffffffff8116fa83>] ? putname+0x33/0x50
[117360.880463]  [<ffffffff81175f61>] sys_ioctl+0x81/0xa0
[117360.880473]  [<ffffffff8100c0c2>] system_call_fastpath+0x16/0x1b

It seems to be caused by 2a48fc0ab24241755dc93bfd4f01d68efab47f5a which
replaced lock_kernel() with mutex operations.

In my attempts to debug this problem, I reverted that commit.

There's a printk statement that logs each ioctl call right before the
lock_kernel call. I extended it to log the pid of the caller, and added
a similar printk statement after unlock_kernel.

This worked for me, and I saw something like this in my dmesg:

[122322.650443] nbd9: in: nbd_ioctl pid=(24396) cmd=set-blksize(0xab01) arg=1024
[122322.650454] nbd9: out: nbd_ioctl pid=(24396) cmd=set-blksize(0xab01) arg=1024
[122322.650462] nbd9: in: nbd_ioctl pid=(24396) cmd=set-size-blocks(0xab07) arg=153600
[122322.650469] nbd9: out: nbd_ioctl pid=(24396) cmd=set-size-blocks(0xab07) arg=153600
[122322.650476] nbd9: in: nbd_ioctl pid=(24396) cmd=clear-sock(0xab04) arg=153600
[122322.650483] nbd9: out: nbd_ioctl pid=(24396) cmd=clear-sock(0xab04) arg=153600
[122322.650490] nbd9: in: nbd_ioctl pid=(24396) cmd=set-sock(0xab00) arg=3
[122322.650497] nbd9: out: nbd_ioctl pid=(24396) cmd=set-sock(0xab00) arg=3
[122322.651178] nbd9: in: nbd_ioctl pid=(24396) cmd=do-it(0xab03) arg=139925697560832
[122322.652454]  nbd9: unknown partition table
[122323.070522] nbd9: in: nbd_ioctl pid=(24427) cmd=clear-que(0xab05) arg=140267460190640
[122323.070527] nbd9: out: nbd_ioctl pid=(24427) cmd=clear-que(0xab05) arg=140267460190640
[122323.070530] nbd9: in: nbd_ioctl pid=(24427) cmd=disconnect(0xab08) arg=140267460190640
[122323.070532] nbd9: NBD_DISCONNECT
[122323.070555] nbd9: out: nbd_ioctl pid=(24427) cmd=disconnect(0xab08) arg=140267460190640
[122323.070558] nbd9: in: nbd_ioctl pid=(24427) cmd=clear-sock(0xab04) arg=140267460190640
[122323.070561] nbd9: out: nbd_ioctl pid=(24427) cmd=clear-sock(0xab04) arg=140267460190640
[122323.070667] nbd9: Receive control failed (result -32)
[122323.070710] nbd9: queue cleared
[122323.071186] nbd9: out: nbd_ioctl pid=(24396) cmd=do-it(0xab03) arg=139925697560832
[122323.071194] nbd9: in: nbd_ioctl pid=(24396) cmd=clear-que(0xab05) arg=32
[122323.071197] nbd9: out: nbd_ioctl pid=(24396) cmd=clear-que(0xab05) arg=32
[122323.071200] nbd9: in: nbd_ioctl pid=(24396) cmd=clear-sock(0xab04) arg=32
[122323.071203] nbd9: out: nbd_ioctl pid=(24396) cmd=clear-sock(0xab04) arg=32

As I understand it, lock_kernel is implemented using a semaphore which
allows a single process to invoke lock_kernel more than once without
locking up. However, as can be seen above (timestamp 122322.651178), pid
24396 invokes the NBD_DO_IT ioctl. This calls lock_kernel and starts a
kernel thread to do the heavy lifting of nbd. Later on, pid 24427 comes
along, wanting to shut down the connection. It works. I don't understand
how it makes it past the lock_kernel() call. There's only one instance
of each of lock_kernel and unlock_kernel in the nbd driver, so I'm
fairly certain unlock_kernel isn't being called (at least not by nbd
directly).

With the mutex code introduced by
2a48fc0ab24241755dc93bfd4f01d68efab47f5a but with my extra printk's, this
is what I get:

[125474.139952] nbd9: in: nbd_ioctl pid=(26015) cmd=set-blksize(0xab01) arg=1024
[125474.139963] nbd9: out: nbd_ioctl pid=(26015) cmd=set-blksize(0xab01) arg=1024
[125474.139971] nbd9: in: nbd_ioctl pid=(26015) cmd=set-size-blocks(0xab07) arg=153600
[125474.139979] nbd9: out: nbd_ioctl pid=(26015) cmd=set-size-blocks(0xab07) arg=153600
[125474.139986] nbd9: in: nbd_ioctl pid=(26015) cmd=clear-sock(0xab04) arg=153600
[125474.139994] nbd9: out: nbd_ioctl pid=(26015) cmd=clear-sock(0xab04) arg=153600
[125474.140001] nbd9: in: nbd_ioctl pid=(26015) cmd=set-sock(0xab00) arg=3
[125474.140036] nbd9: out: nbd_ioctl pid=(26015) cmd=set-sock(0xab00) arg=3
[125474.140660] nbd9: in: nbd_ioctl pid=(26015) cmd=do-it(0xab03) arg=140041726828800
[125474.142255]  nbd9: unknown partition table
[125475.604613] nbd9: in: nbd_ioctl pid=(26047) cmd=clear-que(0xab05) arg=139640717427120

...and then it sits there until I kill -9 26015.  This makes sense to
me, though. The mutex is held by 26015, so of course 26047 can't grab
it, too. The mystery is why it works with lock_kernel. It doesn't
release the kernel explicitly, and it doesn't (explicitly, at least)
call schedule().

All this aside, it appears to me that this locking is superfluous to
begin with. The driver already has a per-device mutex that should
suffice, as far as I can tell. Removing this extra mutex works for me,
and I've not yet noticed any ill side effects, but I'll try to run some
better stress tests to make sure.

-- 
Soren Hansen <soren@linux2go.dk>
Ubuntu Developer
OpenStack Developer


end of thread, other threads:[~2011-03-13 10:28 UTC | newest]
