LKML Archive on lore.kernel.org
* XFS internal error
@ 2007-10-07  1:09 Max Waterman
  2007-10-08  0:14 ` David Chinner
  0 siblings, 1 reply; 12+ messages in thread
From: Max Waterman @ 2007-10-07  1:09 UTC (permalink / raw)
  To: linux-kernel

Hi,

I have just had an XFS error occur while deleting some directory
hierarchy. I hope this is the correct place to report it.

It essentially shut down the filesystem, and a reboot seemed to return
everything to normal.

This is in syslog :

> Oct  6 23:40:33 jeeves kernel: xfs_da_do_buf: bno 16777216
> Oct  6 23:40:33 jeeves kernel: dir: inode 2095141277
> Oct  6 23:40:33 jeeves kernel: Filesystem "md2": XFS internal error xfs_da_do_buf(1) at line 1994 of file fs/xfs/xfs_da_btree.c.  Caller 0xffffffff889b2de4
> Oct  6 23:40:33 jeeves kernel: 
> Oct  6 23:40:33 jeeves kernel: Call Trace:
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889b2a21>] :xfs:xfs_da_do_buf+0x2da/0x633
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889bafb3>] :xfs:xfs_dir2_leafn_lookup_int+0x2c6/0x44b
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889bb013>] :xfs:xfs_dir2_leafn_lookup_int+0x326/0x44b
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889d721a>] :xfs:xfs_trans_log_buf+0x55/0x81
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889b2de4>] :xfs:xfs_da_read_buf+0x24/0x29
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889b988e>] :xfs:xfs_dir2_node_removename+0x23a/0x43a
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889b988e>] :xfs:xfs_dir2_node_removename+0x23a/0x43a
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff8106c632>] find_lock_page+0x26/0xa2
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889a5521>] :xfs:xfs_bmap_last_offset+0xcd/0xdb
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889b5189>] :xfs:xfs_dir_removename+0x102/0x110
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889e0de6>] :xfs:kmem_zone_alloc+0x52/0x9f
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889c7c98>] :xfs:xfs_inode_item_init+0x1e/0x7a
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889e050e>] :xfs:xfs_remove+0x2a9/0x437
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff8109d0f5>] __link_path_walk+0x16e/0xd9c
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889e6da7>] :xfs:xfs_vn_unlink+0x21/0x4f
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889c2310>] :xfs:xfs_iunlock+0x57/0x79
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889db297>] :xfs:xfs_access+0x3d/0x46
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889e6eaa>] :xfs:xfs_vn_permission+0x14/0x19
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff8109b7e5>] permission+0xaf/0xf7
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff8109c583>] vfs_unlink+0xbc/0x102
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff8109e4ef>] do_unlinkat+0xaa/0x144
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff81009c71>] tracesys+0x71/0xda
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff81009cd5>] tracesys+0xd5/0xda
> Oct  6 23:40:33 jeeves kernel: 
> Oct  6 23:40:33 jeeves kernel: Filesystem "md2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c.  Caller 0xffffffff889e0668
> Oct  6 23:40:33 jeeves kernel: 
> Oct  6 23:40:33 jeeves kernel: Call Trace:
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889d622d>] :xfs:xfs_trans_cancel+0x5b/0xf1
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889e0668>] :xfs:xfs_remove+0x403/0x437
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff8109d0f5>] __link_path_walk+0x16e/0xd9c
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889e6da7>] :xfs:xfs_vn_unlink+0x21/0x4f
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889c2310>] :xfs:xfs_iunlock+0x57/0x79
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889db297>] :xfs:xfs_access+0x3d/0x46
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff889e6eaa>] :xfs:xfs_vn_permission+0x14/0x19
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff8109b7e5>] permission+0xaf/0xf7
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff8109c583>] vfs_unlink+0xbc/0x102
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff8109e4ef>] do_unlinkat+0xaa/0x144
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff81009c71>] tracesys+0x71/0xda
> Oct  6 23:40:33 jeeves kernel:  [<ffffffff81009cd5>] tracesys+0xd5/0xda
> Oct  6 23:40:33 jeeves kernel: 
> Oct  6 23:40:33 jeeves kernel: xfs_force_shutdown(md2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c.  Return address = 0xffffffff889d624b
> Oct  6 23:40:33 jeeves kernel: Filesystem "md2": Corruption of in-memory data detected.  Shutting down filesystem: md2
> Oct  6 23:40:33 jeeves kernel: Please umount the filesystem, and rectify the problem(s)
> Oct  6 23:43:53 jeeves shutdown[18347]: shutting down for system reboot

I am fairly sure there is nothing I can do about this, but I thought it
prudent to mention it. Searching turned up some similar issues, but they
seem related to a previous kernel version and claimed to be fixed in
subsequent versions.

> Linux jeeves.mydomain 2.6.22.7-57.fc6 #1 SMP Fri Sep 21 19:45:12 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

The array is a little 'unorthodox', if that matters.

It's using four on-board (nForce) SATA drives and four PCI IDE drives:

> /dev/md2:
>         Version : 00.90.03
>   Creation Time : Sat Aug  6 10:18:41 2005
>      Raid Level : raid5
>      Array Size : 976804480 (931.55 GiB 1000.25 GB)
>     Device Size : 195360896 (186.31 GiB 200.05 GB)
>    Raid Devices : 6
>   Total Devices : 8
> Preferred Minor : 2
>     Persistence : Superblock is persistent
> 
>     Update Time : Sun Oct  7 09:05:43 2007
>           State : clean
>  Active Devices : 6
> Working Devices : 8
>  Failed Devices : 0
>   Spare Devices : 2
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>            UUID : 15bfec75:595ac793:0914f8ee:862effd8
>          Events : 0.9341058
> 
>     Number   Major   Minor   RaidDevice State
>        0      33        0        0      active sync   /dev/hde
>        1      34        0        1      active sync   /dev/hdg
>        2      56        0        2      active sync   /dev/hdi
>        3       8       32        3      active sync   /dev/sdc
>        4       8       48        4      active sync   /dev/sdd
>        5       8       80        5      active sync   /dev/sdf
> 
>        6       8       64        -      spare   /dev/sde
>        7      57        0        -      spare   /dev/hdk

Max.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: XFS internal error
  2007-10-07  1:09 XFS internal error Max Waterman
@ 2007-10-08  0:14 ` David Chinner
  2007-10-08  1:54   ` Max Waterman
  2008-03-10 12:22   ` Andreas Kotes
  0 siblings, 2 replies; 12+ messages in thread
From: David Chinner @ 2007-10-08  0:14 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-kernel, xfs

[please cc xfs@oss.sgi.com on XFS bug reports. thx.]

On Sun, Oct 07, 2007 at 09:09:58AM +0800, Max Waterman wrote:
> Hi,
> 
> I have just had an XFS error occur while deleting some directory
> hierarchy. I hope this is the correct place to report it.

.....
> This is in syslog :
> 
> > Oct  6 23:40:33 jeeves kernel: xfs_da_do_buf: bno 16777216
                                                  ^^^^^^^^^^^^^
> > Oct  6 23:40:33 jeeves kernel: dir: inode 2095141277
> > Oct  6 23:40:33 jeeves kernel: Filesystem "md2": XFS internal error xfs_da_do_buf(1) at line 1994 of file fs/xfs/xfs_da_btree.c.  Caller 0xffffffff889b2de4

Did you ever run 2.6.17-2.6.17.6? If so, this implies:

http://oss.sgi.com/projects/xfs/faq.html#dir2

> I am fairly sure there is nothing I can do about this, but I thought it
> prudent to mention it. Searching turned up some similar issues, but they
> seem related to a previous kernel version and claimed to be fixed in
> subsequent versions.

Yes, but those previous corruptions get left on disk as a landmine
for you to trip over some time later, even on a kernel that has the
bug fixed.

I suggest that you run xfs_check on the filesystem and if that
shows up errors, run xfs_repair on the filesystem to correct them.
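
As a rough sketch of that sequence (dry-run only: `/dev/md2` and the `run` wrapper are illustrative, the real commands must be run as root with the filesystem unmounted):

```shell
# Dry-run sketch of the check-then-repair sequence; 'run' only echoes
# the commands, so nothing touches a real device. Replace 'echo' with
# real execution once the filesystem is unmounted.
DEV=/dev/md2
run() { echo "+ $*"; }

run umount "$DEV"        # filesystem must not be mounted
run xfs_check "$DEV"     # read-only consistency check
run xfs_repair "$DEV"    # only if xfs_check reported errors
run xfs_check "$DEV"     # confirm the repair left it clean
```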

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group


* Re: XFS internal error
  2007-10-08  0:14 ` David Chinner
@ 2007-10-08  1:54   ` Max Waterman
  2007-10-08  2:32     ` Barry Naujok
  2008-03-10 12:22   ` Andreas Kotes
  1 sibling, 1 reply; 12+ messages in thread
From: Max Waterman @ 2007-10-08  1:54 UTC (permalink / raw)
  To: David Chinner; +Cc: linux-kernel, xfs

David Chinner wrote:
>> 1994 of file fs/xfs/xfs_da_btree.c.  Caller 0xffffffff889b2de4
>>     
>
> Did you ever run 2.6.17-2.6.17.6?
I guess so, since I've been upgrading steadily since I installed FC6 
some time ago.
>  If so, this implies:
>
> http://oss.sgi.com/projects/xfs/faq.html#dir2
>   
Ah. I did see that, but stopped reading when I read it was fixed in 
later versions ... didn't get to the part where it still needed to be 
repaired/etc.

>> I am fairly sure there is nothing I can do about this, but I thought it
>> prudent to mention it. Searching turned up some similar issues, but they
>> seem related to a previous kernel version and claimed to be fixed in
>> subsequent versions.
>>     
>
> Yes, but those previous corruptions get left on disk as a landmine
> for you to trip over some time later, even on a kernel that has the
> bug fixed.
>   
ah, ok.
> I suggest that you run xfs_check on the filesystem and if that
> shows up errors, run xfs_repair on the filesystem to correct them.
>   
It did, and I did, and another xfs_check produced no output.

Do I need to do anything else to correct it? xfs_repair produced a whole 
bunch of stuff that I don't understand...this is the bit that looks most 
significant :

> Phase 6 - check inode connectivity...
>         - resetting contents of realtime bitmap and summary inodes
>         - traversing filesystem ...
> can't read freespace block 16777216 for directory inode 2095141277
> rebuilding directory inode 2095141277
> free block 16777216 for directory inode 2100841732 bad nused
> rebuilding directory inode 2100841732
> free block 16777216 for directory inode 2102199514 bad nused
> rebuilding directory inode 2102199514
> free block 16777216 for directory inode 2102200124 bad nused
> rebuilding directory inode 2102200124
> free block 16777216 for directory inode 2102905843 bad nused
> rebuilding directory inode 2102905843
> free block 16777216 for directory inode 3277510927 bad nused
> rebuilding directory inode 3277510927
> free block 16777216 for directory inode 3277524487 bad nused
> rebuilding directory inode 3277524487
> free block 16777216 for directory inode 3379886019 bad nused
> rebuilding directory inode 3379886019
>         - traversal finished ...
>         - moving disconnected inodes to lost+found ...
That last line looks suspicious...furthermore, when I mount the 
filesystem, I don't see a 'lost+found' directory (which I've been used 
to seeing on IRIX). Ah, perhaps the '...' with *nothing* after it means 
it didn't do any moving. Am I right?

Max.


* Re: XFS internal error
  2007-10-08  1:54   ` Max Waterman
@ 2007-10-08  2:32     ` Barry Naujok
  2007-10-08  2:48       ` Max Waterman
  0 siblings, 1 reply; 12+ messages in thread
From: Barry Naujok @ 2007-10-08  2:32 UTC (permalink / raw)
  To: Max Waterman, David Chinner; +Cc: linux-kernel, xfs

On Mon, 08 Oct 2007 11:54:28 +1000, Max Waterman  
<davidmaxwaterman+kernel@fastmail.co.uk> wrote:

> David Chinner wrote:
>> I suggest that you run xfs_check on the filesystem and if that
>> shows up errors, run xfs_repair on the filesystem to correct them.
>>
> It did, and I did, and another xfs_check produced no output.
>
> Do I need to do anything else to correct it? xfs_repair produced a whole  
> bunch of stuff that I don't understand...this is the bit that looks most  
> significant :
>
>> Phase 6 - check inode connectivity...
>>         - resetting contents of realtime bitmap and summary inodes
>>         - traversing filesystem ...
>> can't read freespace block 16777216 for directory inode 2095141277
>> rebuilding directory inode 2095141277
>> free block 16777216 for directory inode 2100841732 bad nused
>> rebuilding directory inode 2100841732
>> free block 16777216 for directory inode 2102199514 bad nused
>> rebuilding directory inode 2102199514
>> free block 16777216 for directory inode 2102200124 bad nused
>> rebuilding directory inode 2102200124
>> free block 16777216 for directory inode 2102905843 bad nused
>> rebuilding directory inode 2102905843
>> free block 16777216 for directory inode 3277510927 bad nused
>> rebuilding directory inode 3277510927
>> free block 16777216 for directory inode 3277524487 bad nused
>> rebuilding directory inode 3277524487
>> free block 16777216 for directory inode 3379886019 bad nused
>> rebuilding directory inode 3379886019
>>         - traversal finished ...
>>         - moving disconnected inodes to lost+found ...
> That last line looks suspicious...furthermore, when I mount the  
> filesystem, I don't see a 'lost+found' directory (which I've been used  
> to seeing on IRIX). Ah, perhaps the '...' with *nothing* after it means  
> it didn't do any moving. Am I right?

Yes, the latest xfs_repair doesn't create a lost+found unless it
needs to, and if it does so, it will list the inodes moved there.

So, in your case, nothing went to lost+found.

Regards,
Barry.



* Re: XFS internal error
  2007-10-08  2:32     ` Barry Naujok
@ 2007-10-08  2:48       ` Max Waterman
  0 siblings, 0 replies; 12+ messages in thread
From: Max Waterman @ 2007-10-08  2:48 UTC (permalink / raw)
  To: Barry Naujok; +Cc: David Chinner, linux-kernel, xfs

Barry Naujok wrote:
> Yes, the latest xfs_repair doesn't create a lost+found unless it
> needs to, and if it does so, it will list the inodes moved there.
>
> So, in your case, nothing went to lost+found.
>
> Regards,
> Barry.
Great. Thanks a lot for your help :)

Max.

PS. I'm still missing working at SGI :|


* Re: XFS internal error
  2007-10-08  0:14 ` David Chinner
  2007-10-08  1:54   ` Max Waterman
@ 2008-03-10 12:22   ` Andreas Kotes
  2008-03-10 22:30     ` David Chinner
  1 sibling, 1 reply; 12+ messages in thread
From: Andreas Kotes @ 2008-03-10 12:22 UTC (permalink / raw)
  To: David Chinner; +Cc: linux-kernel, xfs

Hello,

* David Chinner <dgc@sgi.com> [20080310 13:18]:
> Yes, but those previous corruptions get left on disk as a landmine
> for you to trip over some time later, even on a kernel that has the
> bug fixed.
> 
> I suggest that you run xfs_check on the filesystem and if that
> shows up errors, run xfs_repair on the filesystem to correct them.

I seem to be having similar problems, and xfs_repair is not helping :(

I always run into:

[  137.099267] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c.  Caller 0xffffffff80372156
[  137.106267]
[  137.106268] Call Trace:
[  137.113129]  [<ffffffff803692f0>] xfs_trans_cancel+0x100/0x130
[  137.116524]  [<ffffffff80372156>] xfs_create+0x256/0x6e0
[  137.119904]  [<ffffffff80341e09>] xfs_dir2_isleaf+0x19/0x50
[  137.123269]  [<ffffffff8037e145>] xfs_vn_mknod+0x195/0x250
[  137.126607]  [<ffffffff8028f32c>] vfs_create+0xac/0xf0
[  137.129920]  [<ffffffff80292b3c>] open_namei+0x5dc/0x700
[  137.133227]  [<ffffffff8022a443>] __wake_up+0x43/0x70
[  137.136477]  [<ffffffff802851bc>] do_filp_open+0x1c/0x50
[  137.139693]  [<ffffffff8028524a>] do_sys_open+0x5a/0x100
[  137.142838]  [<ffffffff80220a83>] sysenter_do_call+0x1b/0x67
[  137.145964]
[  137.149014] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c.  Return address = 0xffffffff8036930e
[  137.163485] Filesystem "sda2": Corruption of in-memory data detected.  Shutting down filesystem: sda2

directly after booting.

I'm using kernel 2.6.22.16 and xfs_repair version 2.9.7

How can I help find the problem? I'd like xfs_repair to be able to
fix this.

Br,

   Andreas

-- 
flatline IT services - Andreas Kotes - Tailored solutions for your IT needs


* Re: XFS internal error
  2008-03-10 12:22   ` Andreas Kotes
@ 2008-03-10 22:30     ` David Chinner
  2008-03-10 22:59       ` Andreas Kotes
  0 siblings, 1 reply; 12+ messages in thread
From: David Chinner @ 2008-03-10 22:30 UTC (permalink / raw)
  To: Andreas Kotes; +Cc: David Chinner, linux-kernel, xfs

On Mon, Mar 10, 2008 at 01:22:16PM +0100, Andreas Kotes wrote:
> Hello,
> 
> * David Chinner <dgc@sgi.com> [20080310 13:18]:
> > Yes, but those previous corruptions get left on disk as a landmine
> > for you to trip over some time later, even on a kernel that has the
> > bug fixed.
> > 
> > I suggest that you run xfs_check on the filesystem and if that
> > shows up errors, run xfs_repair on the filesystem to correct them.
> 
> I seem to be having similar problems, and xfs_repair is not helping :(

xfs_repair is ensuring that the problem is not being caused by on-disk
corruption. In this case, it does not appear to be caused by on-disk
corruption, so xfs_repair won't help.

> I always run into:
> 
> [  137.099267] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c.  Caller 0xffffffff80372156
> [  137.106267]
> [  137.106268] Call Trace:
> [  137.113129]  [<ffffffff803692f0>] xfs_trans_cancel+0x100/0x130
> [  137.116524]  [<ffffffff80372156>] xfs_create+0x256/0x6e0
> [  137.119904]  [<ffffffff80341e09>] xfs_dir2_isleaf+0x19/0x50
> [  137.123269]  [<ffffffff8037e145>] xfs_vn_mknod+0x195/0x250
> [  137.126607]  [<ffffffff8028f32c>] vfs_create+0xac/0xf0
> [  137.129920]  [<ffffffff80292b3c>] open_namei+0x5dc/0x700
> [  137.133227]  [<ffffffff8022a443>] __wake_up+0x43/0x70
> [  137.136477]  [<ffffffff802851bc>] do_filp_open+0x1c/0x50
> [  137.139693]  [<ffffffff8028524a>] do_sys_open+0x5a/0x100
> [  137.142838]  [<ffffffff80220a83>] sysenter_do_call+0x1b/0x67
> [  137.145964]
> [  137.149014] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c.  Return address = 0xffffffff8036930e
> [  137.163485] Filesystem "sda2": Corruption of in-memory data detected.  Shutting down filesystem: sda2
> 
> directly after booting.

Interesting. I think I just found a cause of this shutdown under
certain circumstances:

http://marc.info/?l=linux-xfs&m=120518791828200&w=2

To confirm it might be the same issue, can you dump the superblock of this
filesystem for me?  i.e.:

# xfs_db -r -c 'sb 0' -c p /dev/sda2

Also, what mount options are you using?

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group


* Re: XFS internal error
  2008-03-10 22:30     ` David Chinner
@ 2008-03-10 22:59       ` Andreas Kotes
  2008-03-10 23:45         ` David Chinner
  0 siblings, 1 reply; 12+ messages in thread
From: Andreas Kotes @ 2008-03-10 22:59 UTC (permalink / raw)
  To: David Chinner; +Cc: linux-kernel, xfs

Hello Dave,

* David Chinner <dgc@sgi.com> [20080310 23:30]:
> On Mon, Mar 10, 2008 at 01:22:16PM +0100, Andreas Kotes wrote:
> > * David Chinner <dgc@sgi.com> [20080310 13:18]:
> > > Yes, but those previous corruptions get left on disk as a landmine
> > > for you to trip over some time later, even on a kernel that has the
> > > bug fixed.
> > > 
> > > I suggest that you run xfs_check on the filesystem and if that
> > > shows up errors, run xfs_repair on the filesystem to correct them.
> > 
> > I seem to be having similar problems, and xfs_repair is not helping :(
> 
> xfs_repair is ensuring that the problem is not being caused by on-disk
> corruption. In this case, it does not appear to be caused by on-disk
> corruption, so xfs_repair won't help.

ok, too bad - btw, is it a problem that I'm doing the xfs_repair on a
mounted filesystem with xfs_repair -f -L after a remount rw?

> > I always run into:
> > 
> > [  137.099267] Filesystem "sda2": XFS internal error xfs_trans_cancel at line 1132 of file fs/xfs/xfs_trans.c.  Caller 0xffffffff80372156
> > [  137.106267]
> > [  137.106268] Call Trace:
> > [  137.113129]  [<ffffffff803692f0>] xfs_trans_cancel+0x100/0x130
> > [  137.116524]  [<ffffffff80372156>] xfs_create+0x256/0x6e0
> > [  137.119904]  [<ffffffff80341e09>] xfs_dir2_isleaf+0x19/0x50
> > [  137.123269]  [<ffffffff8037e145>] xfs_vn_mknod+0x195/0x250
> > [  137.126607]  [<ffffffff8028f32c>] vfs_create+0xac/0xf0
> > [  137.129920]  [<ffffffff80292b3c>] open_namei+0x5dc/0x700
> > [  137.133227]  [<ffffffff8022a443>] __wake_up+0x43/0x70
> > [  137.136477]  [<ffffffff802851bc>] do_filp_open+0x1c/0x50
> > [  137.139693]  [<ffffffff8028524a>] do_sys_open+0x5a/0x100
> > [  137.142838]  [<ffffffff80220a83>] sysenter_do_call+0x1b/0x67
> > [  137.145964]
> > [  137.149014] xfs_force_shutdown(sda2,0x8) called from line 1133 of file fs/xfs/xfs_trans.c.  Return address = 0xffffffff8036930e
> > [  137.163485] Filesystem "sda2": Corruption of in-memory data detected.  Shutting down filesystem: sda2
> > 
> > directly after booting.
> 
> Interesting. I think I just found a cause of this shutdown under
> certain circumstances:
> 
> http://marc.info/?l=linux-xfs&m=120518791828200&w=2
> 
> To confirm it might be the same issue, can you dump the superblock of this
> filesystem for me?  i.e.:
> 
> # xfs_db -r -c 'sb 0' -c p /dev/sda2

certainly:

magicnum = 0x58465342
blocksize = 4096
dblocks = 35613152
rblocks = 0
rextents = 0
uuid = 62dae5fa-4085-4edc-ad76-5652d9fb00ae
logstart = 33554436
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 2225822
agcount = 16
rbmblocks = 0
logblocks = 17389
versionnum = 0x3084
sectsize = 512
inodesize = 256
inopblock = 16
fname = "s2g-serv\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 22
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 15232
ifree = 2379
fdblocks = 5942436
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0
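
(A quick arithmetic sketch, with icount/ifree hard-coded from the dump above, showing how much free-inode headroom this superblock reports:)

```shell
# Sketch: free-inode percentage from the superblock dump above
# (icount/ifree hard-coded from this particular filesystem).
icount=15232
ifree=2379
pct=$(( ifree * 100 / icount ))
echo "free inodes: ${pct}% (${ifree} of ${icount})"
```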

> Also, what the mount options you are using are?

rw,noatime ...

if you want more info, just let me know :)

Kind regards from Berlin,

   Andreas

-- 
flatline IT services - Andreas Kotes - Tailored solutions for your IT needs


* Re: XFS internal error
  2008-03-10 22:59       ` Andreas Kotes
@ 2008-03-10 23:45         ` David Chinner
  2008-03-11 13:47           ` Andreas Kotes
  0 siblings, 1 reply; 12+ messages in thread
From: David Chinner @ 2008-03-10 23:45 UTC (permalink / raw)
  To: Andreas Kotes; +Cc: David Chinner, linux-kernel, xfs

On Mon, Mar 10, 2008 at 11:59:27PM +0100, Andreas Kotes wrote:
> * David Chinner <dgc@sgi.com> [20080310 23:30]:
> > On Mon, Mar 10, 2008 at 01:22:16PM +0100, Andreas Kotes wrote:
> > > * David Chinner <dgc@sgi.com> [20080310 13:18]:
> > > > Yes, but those previous corruptions get left on disk as a landmine
> > > > for you to trip over some time later, even on a kernel that has the
> > > > bug fixed.
> > > > 
> > > > I suggest that you run xfs_check on the filesystem and if that
> > > shows up errors, run xfs_repair on the filesystem to correct them.
> > > 
> > > I seem to be having similar problems, and xfs_repair is not helping :(
> > 
> > xfs_repair is ensuring that the problem is not being caused by on-disk
> > corruption. In this case, it does not appear to be caused by on-disk
> > corruption, so xfs_repair won't help.
> 
> ok, too bad - btw, is it a problem that I'm doing the xfs_repair on a
> mounted filesystem with xfs_repair -f -L after a remount rw?

If it was read only, and you rebooted immediately afterwards, you'd
probably be ok. Doing this to a mounted, rw filesystem is asking
for trouble. If the shutdown is occurring after you've run xfs_repair,
then it is almost certainly the cause....

I'd suggest getting a knoppix (or similar) rescue disk and repairing
from that, rebooting and seeing if the problem persists. If it
does, then we'll have to look further into it.
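
A defensive sketch of that precaution (the device path is a placeholder; this matches the first field of /proc/mounts, so it can miss bind mounts or mounts by UUID):

```shell
# Sketch: refuse to repair a filesystem that is still mounted,
# the hazard discussed above. DEV is a placeholder path.
DEV=/dev/no-such-device-sda2
is_mounted() { grep -q "^$1 " /proc/mounts; }

if is_mounted "$DEV"; then
    echo "refusing: $DEV is mounted"
else
    echo "+ xfs_repair $DEV"   # dry-run; the real repair would go here
fi
```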

FWIW, you've got plenty of free inodes so this does not look
to be the same problem I've just found. 

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group


* Re: XFS internal error
  2008-03-10 23:45         ` David Chinner
@ 2008-03-11 13:47           ` Andreas Kotes
  0 siblings, 0 replies; 12+ messages in thread
From: Andreas Kotes @ 2008-03-11 13:47 UTC (permalink / raw)
  To: David Chinner; +Cc: linux-kernel, xfs

Hello,

* David Chinner <dgc@sgi.com> [20080311 00:45]:
> On Mon, Mar 10, 2008 at 11:59:27PM +0100, Andreas Kotes wrote:
> > * David Chinner <dgc@sgi.com> [20080310 23:30]:
> > > On Mon, Mar 10, 2008 at 01:22:16PM +0100, Andreas Kotes wrote:
> > > > * David Chinner <dgc@sgi.com> [20080310 13:18]:
> > > > > Yes, but those previous corruptions get left on disk as a landmine
> > > > > for you to trip over some time later, even on a kernel that has the
> > > > > bug fixed.
> > > > > 
> > > > > I suggest that you run xfs_check on the filesystem and if that
> > > > shows up errors, run xfs_repair on the filesystem to correct them.
> > > > 
> > > > I seem to be having similar problems, and xfs_repair is not helping :(
> > > 
> > > xfs_repair is ensuring that the problem is not being caused by on-disk
> > > corruption. In this case, it does not appear to be caused by on-disk
> > > corruption, so xfs_repair won't help.
> > 
> > ok, too bad - btw, is it a problem that I'm doing the xfs_repair on a
> > mounted filesystem with xfs_repair -f -L after a remount rw?
> 
> If it was read only, and you rebooted immediately afterwards, you'd
> probably be ok. Doing this to a mounted, rw filesystem is asking
> for trouble. If the shutdown is occurring after you've run xfs_repair,
> then it is almost certainly the cause....

whoops, that should have read 'remount ro' .. xfs_repair on a live and
writable filesystem is of course inviting disaster. I was trying read
only - btw, the system as such is booted via PXE and running completely
out of an initrd, using the HDD just for local data storage - not much
happening on shutdown/reboot either way.

> I'd suggest getting a knoppix (or similar) rescue disk and repairing
> from that, rebooting and seeing if the problem persists. If it
> does, then we'll have to look further into it.

I basically built a PXE image which does an xfs_repair -L /dev/sda2 from
the initrd - and the problem persists. Sigh. Exactly no change.

> FWIW, you've got plenty of free inodes so this does not look
> to be the same problem I've just found. 

okay ... it happens on several of the dozens of machines I'm running
this way, but not on others - I have yet to find the difference.

what can I do to help find the problem?

   Andreas

-- 
flatline IT services - Andreas Kotes - Tailored solutions for your IT needs


* Re: XFS internal error
  2006-08-26 16:06 Gerardo Exequiel Pozzi
@ 2006-08-26 17:05 ` Jeffrey Hundstad
  0 siblings, 0 replies; 12+ messages in thread
From: Jeffrey Hundstad @ 2006-08-26 17:05 UTC (permalink / raw)
  To: Gerardo Exequiel Pozzi; +Cc: linux-kernel

The use of linux-2.6.17.6 almost certainly caused your problems.  You'll
need an xfs_repair version 2.8.10 or later to repair the damage.

See the following for complete info:
http://oss.sgi.com/projects/xfs/faq.html#dir2
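
One way to sanity-check the installed version against that minimum (a sketch; it assumes GNU `sort -V` and hard-codes an example version string rather than parsing `xfs_repair -V` output):

```shell
# Sketch: compare an xfs_repair version string against the 2.8.10
# minimum mentioned above. 'have' is hard-coded for illustration;
# in practice it would be parsed from 'xfs_repair -V'.
need="2.8.10"
have="2.9.7"
newest=$(printf '%s\n%s\n' "$need" "$have" | sort -V | tail -n1)
if [ "$newest" = "$have" ]; then
    echo "ok: $have >= $need"
else
    echo "too old: $have < $need"
fi
```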

-- 
Jeffrey Hundstad

Gerardo Exequiel Pozzi wrote:
> Hi,
>
> (sorry for my english)
> (please cc to my email, I am not subscribed to this list)
> Linux version 2.6.17.11 (root@djgera) (gcc version 3.4.6) #1 PREEMPT 
> Thu Aug 24 00:27:47 ART 2006
> and previously used .7, .6, .2, and .16.X ...
>
> Could this problem be related to previous kernels that wrote to the
> fs, with the errors now appearing when files are deleted?
>
> I deleted a big directory tree with many files (39G) from my filesystem,
> then appears this error:
>
> djgera@djgera:/mnt/sdb1$ time rm -rf frugalware
> rm: cannot remove 
> `frugalware/frugalware-stable/extra/frugalware-x86_64/gnet-2.0.7-1-x86_64.fpm': 
> Unknown error 990
> rm: cannot lstat 
> `frugalware/frugalware-stable/extra/frugalware-x86_64/firefox-hu-1.5-2-x86_64.fpm': 
> Input/output error
>
> real    0m54.650s
> user    0m0.059s
> sys     0m2.582s
>
> the dmesg shows:
> xfs_da_do_buf: bno 16777216
> dir: inode 168328424
> Filesystem "sdb1": XFS internal error xfs_da_do_buf(1) at line 2119 of 
> file fs/xfs/xfs_da_btree.c.  Caller 0xc01e6597
> <c01e6276> xfs_da_do_buf+0x5f6/0x880  <c01e6597> 
> xfs_da_read_buf+0x47/0x50
> <c01e43bd> xfs_da_node_lookup_int+0xcd/0x3a0  <c01ebcbb> 
> xfs_dir2_data_log_unused+0x6b/0x80
> <c01e6597> xfs_da_read_buf+0x47/0x50  <c01f03b3> 
> xfs_dir2_leafn_remove+0x2a3/0x440
> <c01f03b3> xfs_dir2_leafn_remove+0x2a3/0x440  <c01f1824> 
> xfs_dir2_node_removename+0xb4/0x100
> <c01e8c92> xfs_dir2_removename+0x122/0x130  <c0229e46> 
> kmem_zone_alloc+0x56/0xe0
> <c021d205> xfs_trans_ijoin+0x35/0xa0  <c0225050> xfs_remove+0x280/0x560
> <c01ff3e0> xfs_iget+0xd0/0x140  <c0231cd8> xfs_vn_unlink+0x48/0x90
> <c021bfa1> xfs_trans_unlocked_item+0x41/0x60  <c0174b10> 
> d_rehash+0x50/0x80
> <c0174648> d_splice_alias+0xa8/0x110  <c01687e5> permission+0x85/0xb0
> <c016a7a1> may_delete+0x41/0x120  <c016bc7f> vfs_unlink+0xaf/0xc0
> <c016bd3c> do_unlinkat+0xac/0x130  <c016e67c> sys_getdents64+0xcc/0xf0
> <c016e4c0> filldir64+0x0/0xf0  <c016be17> sys_unlink+0x17/0x20
> <c01031e3> syscall_call+0x7/0xb
> Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1150 of 
> file fs/xfs/xfs_trans.c.  Caller 0xc02250be
> <c021b998> xfs_trans_cancel+0x108/0x150  <c02250be> 
> xfs_remove+0x2ee/0x560
> <c02250be> xfs_remove+0x2ee/0x560  <c01ff3e0> xfs_iget+0xd0/0x140
> <c0231cd8> xfs_vn_unlink+0x48/0x90  <c021bfa1> 
> xfs_trans_unlocked_item+0x41/0x60
> <c0174b10> d_rehash+0x50/0x80  <c0174648> d_splice_alias+0xa8/0x110
> <c01687e5> permission+0x85/0xb0  <c016a7a1> may_delete+0x41/0x120
> <c016bc7f> vfs_unlink+0xaf/0xc0  <c016bd3c> do_unlinkat+0xac/0x130
> <c016e67c> sys_getdents64+0xcc/0xf0  <c016e4c0> filldir64+0x0/0xf0
> <c016be17> sys_unlink+0x17/0x20  <c01031e3> syscall_call+0x7/0xb
> xfs_force_shutdown(sdb1,0x8) called from line 1151 of file 
> fs/xfs/xfs_trans.c.  Return address = 0xc021b9be
> Filesystem "sdb1": Corruption of in-memory data detected.  Shutting 
> down filesystem: sdb1
> Please umount the filesystem, and rectify the problem(s)
>
> when I umount, this appears in dmesg:
> xfs_force_shutdown(sdb1,0x1) called from line 338 of file 
> fs/xfs/xfs_rw.c.  Return address = 0xc02299e7
>
> now mount again (to replay the log before checking with xfs_check):
> XFS mounting filesystem sdb1
> Starting XFS recovery on filesystem: sdb1 (logdev: internal)
> Ending XFS recovery on filesystem: sdb1 (logdev: internal)
>
> umount and xfs_check:
> root@djgera:~# xfs_check /dev/sdb1
> bad free block nused 34 should be 43 for dir ino 78944224 block 16777216
> missing free index for data block 0 in dir ino 168328424
> missing free index for data block 2 in dir ino 168328424
> missing free index for data block 3 in dir ino 168328424
> missing free index for data block 4 in dir ino 168328424
> missing free index for data block 5 in dir ino 168328424
> missing free index for data block 6 in dir ino 168328424
> missing free index for data block 7 in dir ino 168328424
> missing free index for data block 8 in dir ino 168328424
> missing free index for data block 9 in dir ino 168328424
> bad free block nused 4 should be 21 for dir ino 168328959 block 16777216
>
> xfs_repair:
> root@djgera:~# xfs_repair /dev/sdb1
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>        - zero log...
>        - scan filesystem freespace and inode maps...
>        - found root inode chunk
> Phase 3 - for each AG...
>        - scan and clear agi unlinked lists...
>        - process known inodes and perform inode discovery...
>        - agno = 0
>        - agno = 1
>        - agno = 2
>        - agno = 3
>        - agno = 4
>        - agno = 5
>        - agno = 6
>        - agno = 7
>        - agno = 8
>        - agno = 9
>        - agno = 10
>        - agno = 11
>        - agno = 12
>        - agno = 13
>        - agno = 14
>        - agno = 15
>        - process newly discovered inodes...
> Phase 4 - check for duplicate blocks...
>        - setting up duplicate extent list...
>        - clear lost+found (if it exists) ...
>        - check for inodes claiming duplicate blocks...
>        - agno = 0
>        - agno = 1
>        - agno = 2
>        - agno = 3
>        - agno = 4
>        - agno = 5
>        - agno = 6
>        - agno = 7
>        - agno = 8
>        - agno = 9
>        - agno = 10
>        - agno = 11
>        - agno = 12
>        - agno = 13
>        - agno = 14
>        - agno = 15
> Phase 5 - rebuild AG headers and trees...
>        - reset superblock...
> Phase 6 - check inode connectivity...
>        - resetting contents of realtime bitmap and summary inodes
>        - ensuring existence of lost+found directory
>        - traversing filesystem starting at / ...
> free block 16777216 for directory inode 78944224 bad nused
> rebuilding directory inode 78944224
> free block 16777216 for directory inode 168328959 bad nused
> rebuilding directory inode 168328959
> can't read freespace block 16777216 for directory inode 168328424
> rebuilding directory inode 168328424
>        - traversal finished ...
>        - traversing all unattached subtrees ...
>        - traversals finished ...
>        - moving disconnected inodes to lost+found ...
> Phase 7 - verify and correct link counts...
> cache_purge: shake on cache 0x80daa78 left 3 nodes!?
> cache_purge: shake on cache 0x80daa78 left 3 nodes!?
> done
>
> root@djgera:~# xfs_info /dev/sdb1
> meta-data=/dev/sdb1              isize=256    agcount=16, 
> agsize=1525923 blks
>         =                       sectsz=512   attr=0
> data     =                       bsize=4096   blocks=24414768, imaxpct=25
>         =                       sunit=0      swidth=0 blks, unwritten=1
> naming   =version 2              bsize=4096
> log      =internal               bsize=4096   blocks=11921, version=1
>         =                       sectsz=512   sunit=0 blks
> realtime =none                   extsz=65536  blocks=0, rtextents=0
>
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* XFS internal error
@ 2006-08-26 16:06 Gerardo Exequiel Pozzi
  2006-08-26 17:05 ` Jeffrey Hundstad
  0 siblings, 1 reply; 12+ messages in thread
From: Gerardo Exequiel Pozzi @ 2006-08-26 16:06 UTC (permalink / raw)
  To: linux-kernel

Hi,

(Sorry for my English.)
(Please CC my email address; I am not subscribed to this list.)
Linux version 2.6.17.11 (root@djgera) (gcc version 3.4.6) #1 PREEMPT Thu
Aug 24 00:27:47 ART 2006
I previously used 2.6.17.7, .6, .2, and the 2.6.16.x series as well.

Could this problem be related to a previous kernel having written to the
filesystem, with the errors only appearing now when deleting files?

I deleted a large directory tree (39G, many files) from my filesystem,
and this error appeared:

djgera@djgera:/mnt/sdb1$ time rm -rf frugalware
rm: cannot remove `frugalware/frugalware-stable/extra/frugalware-x86_64/gnet-2.0.7-1-x86_64.fpm': Unknown error 990
rm: cannot lstat `frugalware/frugalware-stable/extra/frugalware-x86_64/firefox-hu-1.5-2-x86_64.fpm': Input/output error

real    0m54.650s
user    0m0.059s
sys     0m2.582s

dmesg shows:
xfs_da_do_buf: bno 16777216
dir: inode 168328424
Filesystem "sdb1": XFS internal error xfs_da_do_buf(1) at line 2119 of file fs/xfs/xfs_da_btree.c.  Caller 0xc01e6597
 <c01e6276> xfs_da_do_buf+0x5f6/0x880  <c01e6597> xfs_da_read_buf+0x47/0x50
 <c01e43bd> xfs_da_node_lookup_int+0xcd/0x3a0  <c01ebcbb> xfs_dir2_data_log_unused+0x6b/0x80
 <c01e6597> xfs_da_read_buf+0x47/0x50  <c01f03b3> xfs_dir2_leafn_remove+0x2a3/0x440
 <c01f03b3> xfs_dir2_leafn_remove+0x2a3/0x440  <c01f1824> xfs_dir2_node_removename+0xb4/0x100
 <c01e8c92> xfs_dir2_removename+0x122/0x130  <c0229e46> kmem_zone_alloc+0x56/0xe0
 <c021d205> xfs_trans_ijoin+0x35/0xa0  <c0225050> xfs_remove+0x280/0x560
 <c01ff3e0> xfs_iget+0xd0/0x140  <c0231cd8> xfs_vn_unlink+0x48/0x90
 <c021bfa1> xfs_trans_unlocked_item+0x41/0x60  <c0174b10> d_rehash+0x50/0x80
 <c0174648> d_splice_alias+0xa8/0x110  <c01687e5> permission+0x85/0xb0
 <c016a7a1> may_delete+0x41/0x120  <c016bc7f> vfs_unlink+0xaf/0xc0
 <c016bd3c> do_unlinkat+0xac/0x130  <c016e67c> sys_getdents64+0xcc/0xf0
 <c016e4c0> filldir64+0x0/0xf0  <c016be17> sys_unlink+0x17/0x20
 <c01031e3> syscall_call+0x7/0xb
Filesystem "sdb1": XFS internal error xfs_trans_cancel at line 1150 of file fs/xfs/xfs_trans.c.  Caller 0xc02250be
 <c021b998> xfs_trans_cancel+0x108/0x150  <c02250be> xfs_remove+0x2ee/0x560
 <c02250be> xfs_remove+0x2ee/0x560  <c01ff3e0> xfs_iget+0xd0/0x140
 <c0231cd8> xfs_vn_unlink+0x48/0x90  <c021bfa1> xfs_trans_unlocked_item+0x41/0x60
 <c0174b10> d_rehash+0x50/0x80  <c0174648> d_splice_alias+0xa8/0x110
 <c01687e5> permission+0x85/0xb0  <c016a7a1> may_delete+0x41/0x120
 <c016bc7f> vfs_unlink+0xaf/0xc0  <c016bd3c> do_unlinkat+0xac/0x130
 <c016e67c> sys_getdents64+0xcc/0xf0  <c016e4c0> filldir64+0x0/0xf0
 <c016be17> sys_unlink+0x17/0x20  <c01031e3> syscall_call+0x7/0xb
xfs_force_shutdown(sdb1,0x8) called from line 1151 of file fs/xfs/xfs_trans.c.  Return address = 0xc021b9be
Filesystem "sdb1": Corruption of in-memory data detected.  Shutting down filesystem: sdb1
Please umount the filesystem, and rectify the problem(s)
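As an aside on that first line, "xfs_da_do_buf: bno 16777216" is not a random block number. If I am reading the v2 directory format correctly, a directory's virtual address space is split into 32 GiB segments (data, then leaf, then freeindex), so with this filesystem's 4 KiB blocks the freeindex segment starts exactly at dablock 2^24 = 16777216 -- the same "free block 16777216" that xfs_check complains about. A quick sketch of the arithmetic (the segment size and layout are my reading of the on-disk format, not anything in the logs):

```python
# Sketch: why "bno 16777216" lands on the directory freeindex segment.
# Assumes the XFS v2 directory layout (data at offset 0, leaf at 32 GiB,
# freeindex at 64 GiB of per-directory address space) and the 4 KiB
# block size reported by xfs_info for this filesystem.

DIR2_SPACE_SIZE = 32 * 2**30   # 32 GiB per directory segment (assumed)
BLOCK_SIZE = 4096              # bsize=4096 from xfs_info

leaf_first_block = 1 * DIR2_SPACE_SIZE // BLOCK_SIZE   # first leaf dablock
free_first_block = 2 * DIR2_SPACE_SIZE // BLOCK_SIZE   # first freeindex dablock

print(leaf_first_block)   # 8388608
print(free_first_block)   # 16777216 -- the bno in the error above
```

So the failing read is the first block of the directory's freespace index, which fits the "bad free block" / "missing free index" reports further down.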

On umount, this appears in dmesg:
xfs_force_shutdown(sdb1,0x1) called from line 338 of file fs/xfs/xfs_rw.c.  Return address = 0xc02299e7

Then I mounted it again (to replay the log before checking with xfs_check):
XFS mounting filesystem sdb1
Starting XFS recovery on filesystem: sdb1 (logdev: internal)
Ending XFS recovery on filesystem: sdb1 (logdev: internal)

After umount, xfs_check reports:
root@djgera:~# xfs_check /dev/sdb1
bad free block nused 34 should be 43 for dir ino 78944224 block 16777216
missing free index for data block 0 in dir ino 168328424
missing free index for data block 2 in dir ino 168328424
missing free index for data block 3 in dir ino 168328424
missing free index for data block 4 in dir ino 168328424
missing free index for data block 5 in dir ino 168328424
missing free index for data block 6 in dir ino 168328424
missing free index for data block 7 in dir ino 168328424
missing free index for data block 8 in dir ino 168328424
missing free index for data block 9 in dir ino 168328424
bad free block nused 4 should be 21 for dir ino 168328959 block 16777216
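Not an xfs tool, just a throwaway sketch: if you want to collect which directory inodes a report like this touches (e.g. to locate the paths afterwards with `find /mnt/sdb1 -inum N`), the two message shapes above are easy to scrape. The regexes below are my own guess at the general format, matched against only the output shown here:

```python
import re

# Collect the directory inode numbers an xfs_check report complains about.
# Handles the two message patterns seen above:
#   "bad free block nused ... for dir ino N block ..."
#   "missing free index for data block ... in dir ino N"
def bad_dir_inodes(report: str) -> set:
    inodes = set()
    for line in report.splitlines():
        m = re.search(r"for dir ino (\d+)", line)   # "bad free block ..."
        if not m:
            m = re.search(r"in dir ino (\d+)", line)  # "missing free index ..."
        if m:
            inodes.add(int(m.group(1)))
    return inodes

report = """\
bad free block nused 34 should be 43 for dir ino 78944224 block 16777216
missing free index for data block 0 in dir ino 168328424
bad free block nused 4 should be 21 for dir ino 168328959 block 16777216
"""
print(sorted(bad_dir_inodes(report)))  # [78944224, 168328424, 168328959]
```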

xfs_repair:
root@djgera:~# xfs_repair /dev/sdb1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - clear lost+found (if it exists) ...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - ensuring existence of lost+found directory
        - traversing filesystem starting at / ...
free block 16777216 for directory inode 78944224 bad nused
rebuilding directory inode 78944224
free block 16777216 for directory inode 168328959 bad nused
rebuilding directory inode 168328959
can't read freespace block 16777216 for directory inode 168328424
rebuilding directory inode 168328424
        - traversal finished ...
        - traversing all unattached subtrees ...
        - traversals finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
cache_purge: shake on cache 0x80daa78 left 3 nodes!?
cache_purge: shake on cache 0x80daa78 left 3 nodes!?
done

root@djgera:~# xfs_info /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=16, agsize=1525923 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=24414768, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=11921, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
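For what it's worth, the geometry above is internally consistent; a quick check, pure arithmetic on the numbers xfs_info printed:

```python
# Sanity-check the xfs_info geometry: agcount * agsize should equal the
# data block count, and blocks * bsize gives the filesystem size.
agcount, agsize = 16, 1525923      # from the meta-data line
bsize, blocks = 4096, 24414768     # from the data line

assert agcount * agsize == blocks  # the 16 AGs cover the data section exactly

size_bytes = blocks * bsize
print(size_bytes / 2**30)  # ~93.1 GiB total, so a 39G tree is a large chunk of it
```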


-- 
Gerardo Exequiel Pozzi ( djgera )
http://www.vmlinuz.com.ar http://www.djgera.com.ar
KeyID: 0x1B8C330D
Key fingerprint = 0CAA D5D4 CD85 4434 A219  76ED 39AB 221B 1B8C 330D




^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2008-03-11 13:48 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-10-07  1:09 XFS internal error Max Waterman
2007-10-08  0:14 ` David Chinner
2007-10-08  1:54   ` Max Waterman
2007-10-08  2:32     ` Barry Naujok
2007-10-08  2:48       ` Max Waterman
2008-03-10 12:22   ` Andreas Kotes
2008-03-10 22:30     ` David Chinner
2008-03-10 22:59       ` Andreas Kotes
2008-03-10 23:45         ` David Chinner
2008-03-11 13:47           ` Andreas Kotes
  -- strict thread matches above, loose matches on Subject: below --
2006-08-26 16:06 Gerardo Exequiel Pozzi
2006-08-26 17:05 ` Jeffrey Hundstad

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).