Date: Mon, 10 Mar 2008 11:01:16 -0400
From: Dave Jones
To: Linux Kernel
Subject: 25rc4-git3 blockdev/loopback related lockdep trace.
Message-ID: <20080310150116.GA30584@codemonkey.org.uk>
X-Mailing-List: linux-kernel@vger.kernel.org

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.25-0.101.rc4.git3.fc9 #1
-------------------------------------------------------
mount/1208 is trying to acquire lock:
 (&bdev->bd_mutex){--..}, at: [] __blkdev_put+0x24/0x12f

but task is already holding lock:
 (&lo->lo_ctl_mutex){--..}, at: [] lo_ioctl+0x3d/0xa01 [loop]

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 (&lo->lo_ctl_mutex){--..}:
       [] __lock_acquire+0xa99/0xc11
       [] lock_acquire+0x6a/0x90
       [] mutex_lock_nested+0xdb/0x271
       [] lo_open+0x23/0x33 [loop]
       [] do_open+0x97/0x281
       [] blkdev_open+0x28/0x51
       [] __dentry_open+0xcf/0x185
       [] nameidata_to_filp+0x1f/0x33
       [] do_filp_open+0x2e/0x35
       [] do_sys_open+0x40/0xb5
       [] sys_open+0x16/0x18
       [] syscall_call+0x7/0xb
       [] 0xffffffff

-> #0 (&bdev->bd_mutex){--..}:
       [] __lock_acquire+0x9b8/0xc11
       [] lock_acquire+0x6a/0x90
       [] mutex_lock_nested+0xdb/0x271
       [] __blkdev_put+0x24/0x12f
       [] blkdev_put+0xa/0xc
       [] blkdev_close+0x28/0x2b
       [] __fput+0xb3/0x157
       [] fput+0x17/0x19
       [] loop_clr_fd+0x159/0x16d [loop]
       [] lo_ioctl+0x640/0xa01 [loop]
       [] blkdev_driver_ioctl+0x49/0x5b
       [] blkdev_ioctl+0x7ac/0x7c9
       [] block_ioctl+0x16/0x1b
       [] vfs_ioctl+0x22/0x69
       [] do_vfs_ioctl+0x239/0x24c
       [] sys_ioctl+0x40/0x5b
       [] syscall_call+0x7/0xb
       [] 0xffffffff

other info that might help us debug this:

1 lock held by mount/1208:
 #0:  (&lo->lo_ctl_mutex){--..}, at: [] lo_ioctl+0x3d/0xa01 [loop]

stack backtrace:
Pid: 1208, comm: mount Not tainted 2.6.25-0.101.rc4.git3.fc9 #1
 [] print_circular_bug_tail+0x5b/0x66
 [] ? print_circular_bug_header+0xa6/0xb1
 [] __lock_acquire+0x9b8/0xc11
 [] ? find_usage_backwards+0x92/0xb1
 [] lock_acquire+0x6a/0x90
 [] ? __blkdev_put+0x24/0x12f
 [] mutex_lock_nested+0xdb/0x271
 [] ? __blkdev_put+0x24/0x12f
 [] ? __blkdev_put+0x24/0x12f
 [] __blkdev_put+0x24/0x12f
 [] blkdev_put+0xa/0xc
 [] blkdev_close+0x28/0x2b
 [] __fput+0xb3/0x157
 [] fput+0x17/0x19
 [] loop_clr_fd+0x159/0x16d [loop]
 [] lo_ioctl+0x640/0xa01 [loop]
 [] ? native_sched_clock+0xb5/0xd1
 [] ? sched_clock+0x8/0xb
 [] ? lock_release_holdtime+0x1a/0x115
 [] ? avc_has_perm_noaudit+0x3a1/0x3dc
 [] ? avc_has_perm_noaudit+0x3be/0x3dc
 [] ? native_sched_clock+0xb5/0xd1
 [] ? native_sched_clock+0xb5/0xd1
 [] ? __lock_acquire+0x55a/0xc11
 [] ? native_sched_clock+0xb5/0xd1
 [] ? __lock_acquire+0x55a/0xc11
 [] ? sched_clock+0x8/0xb
 [] ? lock_release_holdtime+0x1a/0x115
 [] ? native_sched_clock+0xb5/0xd1
 [] ? __lock_acquire+0x55a/0xc11
 [] ? native_sched_clock+0xb5/0xd1
 [] ? sched_clock+0x8/0xb
 [] ? lock_release_holdtime+0x1a/0x115
 [] ? avc_has_perm_noaudit+0x3a1/0x3dc
 [] blkdev_driver_ioctl+0x49/0x5b
 [] blkdev_ioctl+0x7ac/0x7c9
 [] ? lock_release_holdtime+0x1a/0x115
 [] ? inode_has_perm+0x5b/0x65
 [] ? blkdev_open+0x28/0x51
 [] ? check_object+0x111/0x184
 [] ? file_has_perm+0x7f/0x88
 [] block_ioctl+0x16/0x1b
 [] ? block_ioctl+0x0/0x1b
 [] vfs_ioctl+0x22/0x69
 [] do_vfs_ioctl+0x239/0x24c
 [] ? selinux_file_ioctl+0xa8/0xab
 [] sys_ioctl+0x40/0x5b
 [] syscall_call+0x7/0xb

-- 
http://www.codemonkey.org.uk