LKML Archive on lore.kernel.org
* [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
@ 2007-01-10 22:37 David Chinner
2007-01-10 23:04 ` Christoph Lameter
0 siblings, 1 reply; 19+ messages in thread
From: David Chinner @ 2007-01-10 22:37 UTC (permalink / raw)
To: linux-kernel; +Cc: linux-mm
Discussion thread:
http://oss.sgi.com/archives/xfs/2007-01/msg00052.html
The short story is that buffered writes slowed down by 20-30%
between 2.6.18 and 2.6.19 and became a lot more erratic.
Writing a single file to a single filesystem doesn't appear
to have major problems, but when writing a file per filesystem
across 3 filesystems, performance is much worse on 2.6.19
and only slightly better on 2.6.20-rc3.
It doesn't appear to be fragmentation (I wrote quite a few
800GB files when testing this and they all had "perfect"
extent layouts, i.e. extents the size of allocation groups
and in sequential AGs). It's not the block devices, either,
as doing the same I/O to the block device gives the same
results.
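For reference, a minimal sketch of how such extent layouts can be
checked, assuming xfsprogs is installed and using the file path from
the test case below:
    # verbose extent listing; a handful of AG-sized, sequentially
    # placed extents means effectively no fragmentation
    xfs_bmap -v /mnt/dm0/test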
My test case is effectively:
#!/bin/bash
mkfs.xfs -f -l version=2 -d sunit=512,swidth=2048 /dev/dm-0
mkfs.xfs -f -l version=2 -d sunit=512,swidth=2048 /dev/dm-1
mkfs.xfs -f -l version=2 -d sunit=512,swidth=2048 /dev/dm-2
mount /dev/dm-0 /mnt/dm0
mount /dev/dm-1 /mnt/dm1
mount /dev/dm-2 /mnt/dm2
dd if=/dev/zero of=/mnt/dm0/test bs=1024k count=800k &
dd if=/dev/zero of=/mnt/dm1/test bs=1024k count=800k &
dd if=/dev/zero of=/mnt/dm2/test bs=1024k count=800k &
wait
umount /mnt/dm0
umount /mnt/dm1
umount /mnt/dm2
#EOF
Overall, on 2.6.18 this gave an average of about 240MB/s per
filesystem with minimum write rates of about 190MB/s per fs
(when writing near the inner edge of the disks).
On 2.6.20-rc3, this gave an average of ~200MB/s per fs
with minimum write rates of about 110MB/s per fs, which
occurred randomly throughout the test.
The performance and smoothness is fully restored on 2.6.20-rc3
by setting dirty_ratio down to 10 (from the default 40), so
something in the VM is not working as well as it used to....
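For anyone wanting to reproduce the tweak, a minimal sketch using the
usual /proc interface (sysctl -w vm.dirty_ratio=10 is equivalent; the
setting reverts on reboot):
    # show the current writeback thresholds (background, foreground)
    cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio
    # lower the foreground dirty limit from 40% to 10% of memory
    echo 10 > /proc/sys/vm/dirty_ratio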
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-10 22:37 [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown David Chinner
@ 2007-01-10 23:04 ` Christoph Lameter
2007-01-10 23:08 ` David Chinner
0 siblings, 1 reply; 19+ messages in thread
From: Christoph Lameter @ 2007-01-10 23:04 UTC (permalink / raw)
To: David Chinner; +Cc: linux-kernel, linux-mm
On Thu, 11 Jan 2007, David Chinner wrote:
> The performance and smoothness is fully restored on 2.6.20-rc3
> by setting dirty_ratio down to 10 (from the default 40), so
> something in the VM is not working as well as it used to....
dirty_background_ratio is left as is at 10? So you gain performance
by switching off background writes via pdflush?
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-10 23:04 ` Christoph Lameter
@ 2007-01-10 23:08 ` David Chinner
2007-01-10 23:12 ` Christoph Lameter
` (2 more replies)
0 siblings, 3 replies; 19+ messages in thread
From: David Chinner @ 2007-01-10 23:08 UTC (permalink / raw)
To: Christoph Lameter; +Cc: David Chinner, linux-kernel, linux-mm
On Wed, Jan 10, 2007 at 03:04:15PM -0800, Christoph Lameter wrote:
> On Thu, 11 Jan 2007, David Chinner wrote:
>
> > The performance and smoothness is fully restored on 2.6.20-rc3
> > by setting dirty_ratio down to 10 (from the default 40), so
> > something in the VM is not working as well as it used to....
>
> dirty_background_ratio is left as is at 10?
Yes.
> So you gain performance by switching off background writes via pdflush?
Well, pdflush appears to be doing very little on both 2.6.18 and
2.6.20-rc3. In both cases kswapd is consuming 10-20% of a CPU and
all of the pdflush threads combined (I've seen up to 7 active at
once) use maybe 1-2% of cpu time. This occurs regardless of the
dirty_ratio setting.
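A sketch of one way to watch those threads, assuming standard procps
tools are available:
    # per-thread CPU usage for the reclaim and writeback threads
    ps -eo pid,comm,pcpu | egrep 'kswapd|pdflush'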
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-10 23:08 ` David Chinner
@ 2007-01-10 23:12 ` Christoph Lameter
2007-01-10 23:18 ` David Chinner
2007-01-10 23:13 ` Nick Piggin
2007-01-11 1:11 ` David Chinner
2 siblings, 1 reply; 19+ messages in thread
From: Christoph Lameter @ 2007-01-10 23:12 UTC (permalink / raw)
To: David Chinner; +Cc: linux-kernel, linux-mm
On Thu, 11 Jan 2007, David Chinner wrote:
> Well, pdflush appears to be doing very little on both 2.6.18 and
> 2.6.20-rc3. In both cases kswapd is consuming 10-20% of a CPU and
> all of the pdflush threads combined (I've seen up to 7 active at
> once) use maybe 1-2% of cpu time. This occurs regardless of the
> dirty_ratio setting.
That sounds a bit much for kswapd. How many nodes? Any cpusets in use?
There is an upper limit of 8 on the number of pdflush threads. Are these
multiple files or single file transfers?
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-10 23:08 ` David Chinner
2007-01-10 23:12 ` Christoph Lameter
@ 2007-01-10 23:13 ` Nick Piggin
2007-01-11 0:31 ` David Chinner
2007-01-11 1:11 ` David Chinner
2 siblings, 1 reply; 19+ messages in thread
From: Nick Piggin @ 2007-01-10 23:13 UTC (permalink / raw)
To: David Chinner; +Cc: Christoph Lameter, linux-kernel, linux-mm
David Chinner wrote:
> On Wed, Jan 10, 2007 at 03:04:15PM -0800, Christoph Lameter wrote:
>
>>On Thu, 11 Jan 2007, David Chinner wrote:
>>
>>
>>>The performance and smoothness is fully restored on 2.6.20-rc3
>>>by setting dirty_ratio down to 10 (from the default 40), so
>>>something in the VM is not working as well as it used to....
>>
>>dirty_background_ratio is left as is at 10?
>
>
> Yes.
>
>
>>So you gain performance by switching off background writes via pdflush?
>
>
> Well, pdflush appears to be doing very little on both 2.6.18 and
> 2.6.20-rc3. In both cases kswapd is consuming 10-20% of a CPU and
> all of the pdflush threads combined (I've seen up to 7 active at
> once) use maybe 1-2% of cpu time. This occurs regardless of the
> dirty_ratio setting.
Hi David,
Could you get /proc/vmstat deltas for each kernel, to start with?
I'm guessing CPU time isn't a problem, but if it is then I guess
profiles as well.
Thanks,
Nick
--
SUSE Labs, Novell Inc.
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-10 23:12 ` Christoph Lameter
@ 2007-01-10 23:18 ` David Chinner
0 siblings, 0 replies; 19+ messages in thread
From: David Chinner @ 2007-01-10 23:18 UTC (permalink / raw)
To: Christoph Lameter; +Cc: David Chinner, linux-kernel, linux-mm
On Wed, Jan 10, 2007 at 03:12:02PM -0800, Christoph Lameter wrote:
> On Thu, 11 Jan 2007, David Chinner wrote:
>
> > Well, pdflush appears to be doing very little on both 2.6.18 and
> > 2.6.20-rc3. In both cases kswapd is consuming 10-20% of a CPU and
> > all of the pdflush threads combined (I've seen up to 7 active at
> > once) use maybe 1-2% of cpu time. This occurs regardless of the
> > dirty_ratio setting.
>
> That sounds a bit much for kswapd. How many nodes? Any cpusets in use?
It's an x86-64 box - an XE 240 - 4 core, 16GB RAM, single node, no cpusets.
> There is an upper limit of 8 on the number of pdflush threads. Are these
> multiple files or single file transfers?
See the test case I posted - a single file write per filesystem, three
filesystems being written to at once, all on different, unshared block
devices.
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-10 23:13 ` Nick Piggin
@ 2007-01-11 0:31 ` David Chinner
2007-01-11 0:43 ` Christoph Lameter
2007-01-11 1:08 ` Nick Piggin
0 siblings, 2 replies; 19+ messages in thread
From: David Chinner @ 2007-01-11 0:31 UTC (permalink / raw)
To: Nick Piggin; +Cc: David Chinner, Christoph Lameter, linux-kernel, linux-mm
[-- Attachment #1: Type: text/plain, Size: 2432 bytes --]
On Thu, Jan 11, 2007 at 10:13:55AM +1100, Nick Piggin wrote:
> David Chinner wrote:
> >On Wed, Jan 10, 2007 at 03:04:15PM -0800, Christoph Lameter wrote:
> >
> >>On Thu, 11 Jan 2007, David Chinner wrote:
> >>
> >>
> >>>The performance and smoothness is fully restored on 2.6.20-rc3
> >>>by setting dirty_ratio down to 10 (from the default 40), so
> >>>something in the VM is not working as well as it used to....
> >>
> >>dirty_background_ratio is left as is at 10?
> >
> >
> >Yes.
> >
> >
> >>So you gain performance by switching off background writes via pdflush?
> >
> >
> >Well, pdflush appears to be doing very little on both 2.6.18 and
> >2.6.20-rc3. In both cases kswapd is consuming 10-20% of a CPU and
> >all of the pdflush threads combined (I've seen up to 7 active at
> >once) use maybe 1-2% of cpu time. This occurs regardless of the
> >dirty_ratio setting.
>
> Hi David,
>
> Could you get /proc/vmstat deltas for each kernel, to start with?
Sure, but that doesn't really show how erratic the per-filesystem
throughput is, because the test I'm running is PCI-X bus limited in
its throughput at about 750MB/s. Each dm device is capable of about
340MB/s write, so when one slows down, the others will typically
speed up.
So, what I've attached are three files which have both
'vmstat 5' output and 'iostat 5 |grep dm-' output in them.
- 2.6.18.out - 2.6.18 behaviour near start of writes.
Behaviour does not change over the course of the test,
just gets a bit slower as the test moves from the outer
edge of the disk to the inner. Erratic behaviour is
highlighted.
- 2.6.20-rc3.out - 2.6.20-rc3 behaviour near start of writes.
Somewhat more erratic than 2.6.18, but about 100-150GB into
the write test, things change with dirty_ratio=40. Erratic
behaviour is highlighted.
- 2.6.20-rc3-worse.out - 2.6.20-rc3 behaviour when things go
bad. We're not keeping the disks or the PCI-X bus fully
utilised (each dm device can do about 300MB/s at this offset)
and aggregate throughput has dropped to 500-600MB/s.
With 2.6.20-rc3 and dirty_ratio = 10, the performance drop-off part way
into the test does not occur and the output is almost identical to
2.6.18.out.
> I'm guessing CPU time isn't a problem, but if it is then I guess
> profiles as well.
Plenty of idle cpu so I don't think it's a problem.
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
[-- Attachment #2: 2.6.18.out --]
[-- Type: text/plain, Size: 15040 bytes --]
2.6.18-1-amd64 from debian unstable
/proc/sys/vm at defaults
$ vmstat 5
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 4 0 1807512 200 14052784 0 0 4 1638 76 41 0 1 99 0
7 2 0 91300 40 15726136 0 0 6 750452 2060 7153 1 53 13 33
3 3 0 89144 40 15719372 0 0 0 765470 2028 8306 0 57 16 27
3 3 0 90620 40 15737432 0 0 8 734020 2231 8449 1 55 16 28
1 4 0 93400 40 15732968 0 0 0 754329 2200 8208 0 55 15 30
1 4 0 94684 24 15737896 0 0 5 743548 2227 8862 1 56 14 29
2 1 0 87684 8 15828324 0 0 0 669015 2071 8571 0 46 27 27 <<<<<
3 3 0 91028 8 15735904 0 0 2 754430 2099 8252 1 46 30 24
2 4 0 89868 8 15778236 0 0 0 707134 2174 9668 0 47 24 29
2 8 0 88980 8 15740856 0 0 0 762258 2275 7690 1 54 15 30
3 7 0 92736 8 15734196 0 0 5 747985 2248 7365 0 56 17 26
3 6 0 90172 8 15744740 0 0 0 740884 2312 8348 0 57 15 28
2 7 0 89380 8 15741924 0 0 0 754114 2264 10184 0 57 15 28
3 5 0 94296 8 15807056 0 0 8 696285 2367 10537 0 54 14 32
2 6 0 88644 8 15791416 0 0 2 693826 2210 16981 0 44 19 37
2 7 0 93136 8 15793116 0 0 5 725302 2371 17737 1 52 14 34
3 10 0 87428 8 15753932 0 0 0 773513 2285 17724 0 56 7 37
3 8 0 90800 8 15753124 0 0 6 739142 2375 16656 1 58 10 31
4 8 0 93096 8 15754324 0 0 0 749338 2415 18957 0 60 10 30
2 11 0 86628 8 15760316 0 0 9 746439 2462 17413 1 60 11 29
3 5 0 87772 8 15829824 0 0 4 650377 2404 18220 0 49 16 35 <<<<<
1 6 0 90944 8 15803152 0 0 6 709598 2402 11936 1 48 13 38
7 7 0 89092 8 15783484 0 0 0 736173 2451 12959 0 51 14 35
1 11 0 85788 8 15752500 0 0 1 760675 2467 10508 1 54 11 34
4 9 0 88324 8 15748964 0 0 5 738161 2496 6939 0 57 13 30
3 8 0 91804 8 15751276 0 0 2 740564 2554 9801 1 59 12 29
2 9 0 91708 8 15761608 0 0 0 738580 2474 16851 0 60 11 28
4 8 0 87148 8 15774140 0 0 0 735378 2598 24892 0 60 9 30
5 9 0 87820 8 15767248 0 0 2 712053 2715 25149 0 56 13 31
2 9 0 94568 8 15756724 0 0 5 751153 2525 19390 1 60 8 31
3 8 0 88996 8 15764540 0 0 0 739513 2554 17517 0 60 12 28
4 9 0 88060 8 15770108 0 0 0 733779 2617 14532 1 59 10 31
1 11 0 84372 8 15767840 0 0 0 748831 2575 16348 0 60 12 28
4 10 0 85276 8 15769740 0 0 44 739195 2570 15844 1 60 10 30
3 10 0 90464 8 15757564 0 0 50 742243 2587 17061 0 58 9 32
3 6 0 92368 8 15781128 0 0 4 655346 2599 14666 1 48 18 33 <<<<<
3 7 0 94220 8 15812424 0 0 0 593861 2587 15113 0 38 31 31 <<<<<
8 8 0 90684 8 15786132 0 0 0 690888 2682 15278 1 47 15 38 <<<<<
1 10 0 89076 8 15754540 0 0 4 735318 2602 11575 0 55 10 36
2 9 0 91340 8 15752164 0 0 1 736788 2671 11438 0 59 14 26
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
3 9 0 89000 8 15756816 0 0 0 746851 2543 10281 0 60 15 25
3 10 0 89420 8 15753108 0 0 1 745210 2619 12884 1 61 13 25
1 11 0 84588 8 15762012 0 0 2 695561 2718 13985 0 54 10 36
3 9 0 87012 8 15759400 0 0 10 728164 2722 11159 1 59 10 30
2 9 0 91472 8 15757548 0 0 6 723808 2669 13573 0 57 13 29
4 8 0 89160 8 15759376 0 0 0 742918 2661 15718 1 60 12 27
2 10 0 85084 8 15771616 0 0 0 724117 2671 18962 0 61 9 30
4 9 0 92472 8 15771296 0 0 1 725714 2811 24185 1 60 12 28
2 10 0 92068 8 15773208 0 0 4 734559 2726 27220 0 60 11 29
2 11 0 85592 8 15773564 0 0 2 737879 2714 27623 1 60 11 28
3 10 0 91700 8 15766072 0 0 2 718346 2646 22216 0 56 15 29
3 10 0 83776 8 15775852 0 0 0 727938 2753 23132 1 61 10 29
3 8 0 93516 8 15771240 0 0 0 734910 2711 23445 0 62 12 26
2 9 0 88968 8 15776408 0 0 5 732068 2734 23289 1 62 11 26
$ iostat 5 |grep dm-
dm-0 226.47 3.28 6213.88 11668 22084021
dm-1 216.82 3.29 5681.85 11692 20193189
dm-2 230.18 3.29 5762.51 11692 20479829
dm-0 31150.80 0.00 498411.24 0 2482088
dm-1 29964.66 0.00 479434.54 0 2387584
dm-2 32157.23 0.00 514515.66 0 2562288
dm-0 31355.53 0.00 501688.53 0 2493392
dm-1 30267.40 0.00 484276.86 0 2406856
dm-2 32690.14 0.00 523042.25 0 2599520
dm-0 30640.85 0.00 490329.18 0 2436936
dm-1 31119.32 0.00 497986.32 0 2474992
dm-2 31146.88 0.00 498401.61 0 2477056
dm-0 11263.65 0.00 361658.43 0 1801059 <<<<<
dm-1 23827.51 0.00 543884.94 0 2708547 <<<<<
dm-2 15443.37 0.00 376792.57 0 1876427 <<<<<
dm-0 24433.06 0.00 488340.73 0 2422170
dm-1 30285.08 0.00 542182.26 0 2689224
dm-2 14798.59 0.00 469650.00 0 2329464
dm-0 23780.96 0.00 566244.49 0 2825560 <<<<<
dm-1 31165.93 0.00 560659.12 0 2797689 <<<<<
dm-2 12451.50 0.00 440997.60 0 2200578 <<<<<
dm-0 29361.77 0.00 469781.89 0 2334816
dm-1 31076.26 0.00 497215.29 0 2471160
dm-2 31230.18 0.00 499679.68 0 2483408
dm-0 32040.32 0.00 512641.94 0 2542704
dm-1 31585.69 0.00 505369.35 0 2506632
dm-2 30485.08 0.00 487758.06 0 2419280
dm-0 31143.95 0.00 498300.00 0 2471568
dm-1 32371.37 0.00 517941.94 0 2568992
dm-2 30724.60 0.00 491593.55 0 2438304
dm-0 30964.92 0.00 495433.87 0 2457352
dm-1 31821.17 0.00 509137.10 0 2525320
dm-2 31084.88 0.00 497353.23 0 2466872
dm-0 22213.86 0.00 431683.94 0 2149786 <<<<<
dm-1 19303.82 3.21 406148.19 16 2022618 <<<<<
dm-2 24112.85 0.00 481736.55 0 2399048 <<<<<
dm-0 32586.52 0.00 521364.99 0 2591184
dm-1 26891.15 0.00 533442.25 0 2651208
dm-2 30924.35 0.00 494713.88 0 2458728
dm-0 32124.95 0.00 513909.94 0 2533576
dm-1 31073.63 0.00 497169.98 0 2451048
dm-2 30958.42 0.00 495117.24 0 2440928
dm-0 31779.44 0.00 508403.23 0 2521680
dm-1 30971.57 0.00 495524.19 0 2457800
dm-2 31085.08 0.00 497133.87 0 2465784
dm-0 32254.03 0.00 515925.81 0 2558992
dm-1 30127.62 0.00 481991.94 0 2390680
dm-2 32006.05 0.00 512008.06 0 2539560
dm-0 22819.92 4.83 526683.70 24 2617618 <<<<<
dm-1 15316.30 4.83 411950.50 24 2047394 <<<<<
dm-2 16564.59 8.05 434072.03 40 2157338 <<<<<
dm-0 18948.69 3.23 481028.28 16 2381090
dm-1 19329.90 0.00 445191.52 0 2203698
dm-2 24194.75 0.00 454280.81 0 2248690
dm-0 38064.65 0.00 608997.17 0 3014536 <<<<<
dm-1 35084.65 0.00 561273.54 0 2778304 <<<<<
dm-2 723.84 0.00 321928.08 0 1593544 <<<<<
dm-0 31022.29 0.00 496204.02 0 2471096
dm-1 30877.11 0.00 493871.49 0 2459480
dm-2 23065.26 0.00 516738.96 0 2573360
dm-0 31469.90 0.00 503505.45 0 2492352
dm-1 29785.86 0.00 476305.45 0 2357712
dm-2 32217.17 0.00 515424.65 0 2551352
dm-0 31458.87 0.00 503259.68 0 2496168
dm-1 29787.70 0.00 476203.23 0 2361968
dm-2 31912.50 0.00 510553.23 0 2532344
dm-0 31722.78 0.00 507559.68 0 2517496
dm-1 31753.02 0.00 507954.84 0 2519456
dm-2 30015.52 0.00 479920.97 0 2380408
dm-0 31761.13 0.00 508152.63 0 2510274
dm-1 29887.25 0.00 477944.94 0 2361048
dm-2 30137.85 0.00 482254.25 0 2382336
dm-0 21502.83 0.00 488791.92 0 2419520
dm-1 30429.49 0.00 486623.03 0 2408784
dm-2 30606.87 0.00 489622.63 0 2423632
dm-0 32404.22 0.00 518364.66 0 2581456
dm-1 29427.91 0.00 470522.09 0 2343200
dm-2 31702.01 0.00 507203.21 0 2525872
dm-0 31864.04 0.00 509805.25 0 2523536
dm-1 31660.20 0.00 506454.95 0 2506952
dm-2 29938.99 0.00 478516.36 0 2368656
dm-0 31114.75 0.00 497692.12 0 2463576
dm-1 30767.07 0.00 492061.41 0 2435704
dm-2 31053.94 0.00 496738.59 0 2458856
dm-0 32305.04 0.00 516856.45 0 2563608
dm-1 31119.96 0.00 497622.58 0 2468208
dm-2 30513.10 0.00 487833.87 0 2419656
dm-0 32361.62 0.00 517675.96 0 2562496
dm-1 29420.61 0.00 470370.91 0 2328336
dm-2 31926.06 0.00 510792.73 0 2528424
dm-0 30932.39 0.00 494989.47 0 2445248
dm-1 32021.46 0.00 512369.23 0 2531104
dm-2 30555.67 0.00 488898.79 0 2415160
dm-0 25578.14 0.00 529783.00 0 2617128
dm-1 30213.97 0.00 483229.15 0 2387152
dm-2 31086.64 0.00 506953.85 0 2504352
dm-0 30769.76 0.00 492275.81 0 2441688
dm-1 31664.31 0.00 506608.06 0 2512776
dm-2 30774.19 0.00 492314.52 0 2441880
dm-0 31153.72 0.00 498317.91 0 2476640
dm-1 30733.40 0.00 491517.10 0 2442840
dm-2 31614.29 0.00 505740.04 0 2513528
dm-0 31665.52 0.00 506700.00 0 2513232
dm-1 31119.35 0.00 497848.39 0 2469328
dm-2 30385.08 0.00 485975.81 0 2410440
dm-0 24149.60 4.86 464011.74 24 2292218
dm-1 21799.19 0.00 463535.63 0 2289866
dm-2 28986.84 0.00 484500.40 0 2393432
dm-0 28882.66 0.00 461990.32 0 2291472
dm-1 29911.49 1.61 478564.52 8 2373680
dm-2 29888.91 0.00 487735.48 0 2419168
dm-0 30865.73 0.00 493837.10 0 2449432
dm-1 28708.87 3.23 503904.84 16 2499368
dm-2 30902.22 4.84 504001.61 24 2499848
dm-0 31659.27 0.00 506483.87 0 2512160
dm-1 31373.39 0.00 501814.52 0 2489000
dm-2 30631.25 0.00 489846.77 0 2429640
dm-0 29666.80 0.00 474531.99 0 2358424
dm-1 30355.33 0.00 485514.69 0 2413008
dm-2 30551.31 0.00 488505.43 0 2427872
dm-0 29918.15 0.00 478454.84 0 2373136
dm-1 30949.80 0.00 495179.03 0 2456088
dm-2 31393.15 0.00 502204.84 0 2490936
dm-0 30278.38 0.00 484286.06 0 2397216
dm-1 31032.73 0.00 496501.01 0 2457680
dm-2 31428.08 0.00 502841.21 0 2489064
dm-0 30813.08 0.00 493051.11 0 2450464 <<<<<
dm-1 31612.88 0.00 505817.51 0 2513913 <<<<<
dm-2 27513.28 3.22 440259.76 16 2188091 <<<<<
dm-0 31584.85 0.00 505252.53 0 2501000
dm-1 32325.86 0.00 517141.01 0 2559848
dm-2 20020.20 0.00 464043.64 0 2297016
dm-0 30549.19 0.00 488669.35 0 2423800
dm-1 31501.21 0.00 503943.55 0 2499560
dm-2 30221.17 0.00 483325.81 0 2397296
dm-0 29849.60 0.00 477311.29 0 2367464
dm-1 31148.79 0.00 498250.00 0 2471320
dm-2 30749.40 0.00 496754.84 0 2463904
dm-0 31116.30 0.00 497820.52 0 2474168
dm-1 30522.13 0.00 488333.20 0 2427016
dm-2 30494.57 0.00 487793.96 0 2424336
[-- Attachment #3: 2.6.20-rc3.out --]
[-- Type: text/plain, Size: 21242 bytes --]
2.6.20-rc3, default vm tunings
$ vmstat 5
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
3 5 0 87864 8 15723492 0 0 48 64452 253 908 0 18 72 9
3 6 0 87520 8 15726764 0 0 5 747371 2295 15063 0 57 19 24
3 6 0 91612 8 15726768 0 0 0 743760 2343 15321 1 57 17 25
4 5 0 87912 8 15744920 0 0 2 727226 2312 13431 0 57 20 24
1 7 0 87848 8 15798020 0 0 1 495273 1878 7874 1 24 61 14 <<<<<<<
2 10 0 89364 8 15756660 0 0 0 681229 2401 24021 0 54 20 26
2 10 0 93892 8 15749864 0 0 5 736260 2514 26324 1 59 11 30
5 8 0 88716 8 15749160 0 0 0 751747 2389 27372 0 59 11 29
2 10 0 90880 8 15743488 0 0 0 746821 2384 26088 1 59 9 32
3 9 0 91520 8 15747588 0 0 0 741946 2345 28706 0 60 13 27
4 10 0 86928 8 15756244 0 0 4 744362 2425 28480 1 61 14 25
3 8 0 91932 8 15757916 0 0 9 590591 2141 14653 0 36 31 33 <<<<<<<
3 9 0 88180 8 15778396 0 0 2 534214 2311 20443 1 37 26 37 <<<<<<<
2 9 0 88976 8 15762576 0 0 2 717699 2516 30072 0 56 17 27
7 8 0 92892 8 15757448 0 0 0 747566 2485 32363 1 61 12 27
1 10 0 86364 8 15759024 0 0 0 749395 2453 30159 0 60 13 27
5 8 0 92860 8 15752016 0 0 6 745046 2462 29773 1 60 11 28
4 10 0 91700 8 15754044 0 0 0 745897 2422 31167 0 60 12 28
2 9 0 93368 8 15773256 0 0 0 636481 2736 29868 1 52 7 41 <<<<<<<
0 11 0 92920 8 15752060 0 0 0 737382 2639 32313 0 58 13 28
1 10 0 87772 8 15759076 0 0 0 742549 2478 30909 1 61 13 26
3 8 0 91632 8 15759116 0 0 5 742481 2464 31315 0 61 13 27
2 10 0 85992 8 15760688 0 0 14 740988 2639 31975 1 61 9 30
3 8 0 92352 8 15751444 0 0 0 742023 2557 29505 0 61 12 27
3 8 0 90332 8 15747752 0 0 0 748272 2537 27328 1 60 18 21
4 8 0 92128 8 15769772 0 0 0 677031 2614 27487 0 56 13 31 <<<<<<<
2 9 0 92028 8 15760824 0 0 5 724803 2652 27075 1 57 10 32
5 8 0 86600 8 15763456 0 0 0 739023 2557 31041 0 61 15 24
2 9 0 93508 8 15751448 0 0 1 747915 2629 28523 1 62 11 26
5 9 0 90224 8 15755416 0 0 0 744098 2509 27222 0 63 10 27
1 11 0 91948 8 15758920 0 0 0 744949 2561 27062 1 61 11 28
1 10 0 89108 8 15755400 0 0 5 749241 2553 28086 0 60 12 28
12 8 0 90292 8 15778824 0 0 2 690422 2696 28210 1 56 16 27 <<<<<<<
2 9 0 94316 8 15762820 0 0 2 621403 2741 26233 0 47 22 31 <<<<<<<
0 11 0 91864 8 15759612 0 0 1 734336 2671 30539 1 59 14 26
4 9 0 88592 8 15755156 0 0 2 743448 2600 27277 0 61 16 24
3 8 0 92236 8 15746248 0 0 7 740407 2643 26521 1 61 17 21
3 9 0 87024 8 15755208 0 0 0 748968 2587 31585 0 63 12 25
6 8 0 91032 8 15748884 0 0 0 745108 2569 27812 1 62 10 28
2 8 0 89656 8 15776280 0 0 2 697020 2639 26038 0 57 12 30
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
4 8 0 91660 8 15770328 0 0 1 722922 2719 28936 1 56 15 28
2 10 0 89032 8 15779088 0 0 5 679131 2871 29320 0 55 20 24 <<<<<<<
4 8 0 87044 8 15777288 0 0 0 718871 2747 30824 1 59 19 21
3 9 0 89644 8 15770644 0 0 0 746693 2574 32736 0 64 9 27
4 8 0 92260 8 15773276 0 0 0 742171 2631 33458 1 66 7 27
1 11 0 87176 8 15772488 0 0 0 746909 2627 34393 0 66 13 21
1 7 0 87696 8 15792288 0 0 6 711137 2683 31292 0 61 12 26
2 7 0 92700 8 15778184 0 0 0 616361 2652 23023 0 48 23 29 <<<<<<<
3 10 0 84792 8 15752604 0 0 0 750780 2683 24771 0 63 11 26
5 9 0 92944 8 15743256 0 0 0 749332 2681 25124 0 64 17 19
1 10 0 94048 8 15741900 0 0 0 735865 2675 29609 0 65 9 26
9 9 0 90672 8 15740272 0 0 5 747928 2688 25178 0 66 8 26
3 8 0 87356 8 15749596 0 0 1 730961 2718 26207 1 64 16 20
2 9 0 88708 8 15750372 0 0 0 740069 2649 25111 0 65 16 19
2 10 0 83480 8 15770688 0 0 2 628456 2845 23485 1 52 22 25 <<<<<<<
3 8 0 92748 8 15751520 0 0 0 733477 2788 33597 0 64 12 24
2 9 0 94188 8 15749032 0 0 5 740086 2724 34426 1 66 13 20
3 9 0 89168 8 15750168 0 0 0 738250 2723 31540 0 65 16 19
4 8 0 89388 8 15758480 0 0 3 706912 3034 32462 1 63 7 29
3 8 0 88364 8 15759132 0 0 2 738317 2745 34635 0 66 9 26
5 9 0 89300 8 15755912 0 0 5 737039 2824 33647 1 65 7 27
2 8 0 86640 8 15765904 0 0 0 622271 3029 24950 0 52 17 30 <<<<<<<
3 9 0 87144 8 15758820 0 0 0 694992 3123 26889 1 62 7 31
1 10 0 86968 8 15756064 0 0 0 736023 2867 24487 0 65 3 32
11 9 0 89468 8 15756400 0 0 6 721520 2904 23735 1 64 3 32
9 9 0 91564 8 15756660 0 0 1 710507 3029 25829 0 64 3 33
2 9 0 88196 8 15754908 0 0 0 724214 3002 26844 1 66 2 31
$ iostat 5 |grep dm-
dm-0 9134.66 11.56 166305.84 1903 27380594
dm-1 8620.94 11.56 158204.25 1903 26046748
dm-2 8916.15 11.56 165543.95 1903 27255156
dm-0 34293.75 0.00 548691.07 0 2458136
dm-1 35079.02 0.00 561233.93 0 2514328
dm-2 35228.79 0.00 563646.43 0 2525136
dm-0 34237.64 0.00 547802.23 0 2459632
dm-1 34295.55 0.00 548728.73 0 2463792
dm-2 34899.78 0.00 558392.87 0 2507184
dm-0 34518.30 0.00 552292.86 0 2474272
dm-1 34491.07 0.00 551850.00 0 2472288
dm-2 34769.20 0.00 556307.14 0 2492256
dm-0 33206.70 3.57 531393.30 16 2380642
dm-1 33909.38 0.00 542646.88 0 2431058
dm-2 35174.55 0.00 562889.73 0 2521746
dm-0 3334.30 0.00 299309.64 0 1334921 <<<<<<<
dm-1 3046.64 0.00 326809.19 0 1457569 <<<<<<<
dm-2 3493.50 0.00 448256.73 0 1999225 <<<<<<<
dm-0 19059.56 0.00 325194.67 0 1463376 <<<<<<<
dm-1 31279.11 0.00 561379.56 0 2526208 <<<<<<<
dm-2 35930.44 0.00 639664.00 0 2878488 <<<<<<<
dm-0 22602.02 0.00 361628.70 0 1612864 <<<<<<<
dm-1 40571.75 0.00 649147.98 0 2895200 <<<<<<<
dm-2 40258.30 0.00 644127.35 0 2872808 <<<<<<<
dm-0 35988.57 0.00 575817.04 0 2568144
dm-1 34716.37 0.00 555461.88 0 2477360
dm-2 34521.08 0.00 552337.22 0 2463424
dm-0 36909.23 0.00 590545.95 0 2622024
dm-1 34323.20 0.00 549171.17 0 2438320
dm-2 34153.60 0.00 546344.14 0 2425768
dm-0 34906.67 0.00 558506.67 0 2513280
dm-1 33709.56 0.00 539349.33 0 2427072
dm-2 34209.56 0.00 547287.11 0 2462792
dm-0 35394.17 0.00 566410.76 0 2526192
dm-1 33390.36 0.00 534306.73 0 2383008
dm-2 35356.05 0.00 565808.07 0 2523504
dm-0 16723.60 0.00 469194.61 0 2087916 <<<<<<<
dm-1 9435.96 0.00 366801.57 0 1632267 <<<<<<<
dm-2 21639.10 5.39 508268.31 24 2261794 <<<<<<<
dm-0 11498.67 0.00 315091.56 0 1417912 <<<<<<<
dm-1 21372.22 5.33 475187.56 24 2138344 <<<<<<<
dm-2 15014.89 0.00 394926.67 0 1777170 <<<<<<<
dm-0 33084.34 5.37 529331.54 24 2366112 <<<<<<<
dm-1 41148.10 0.00 658367.79 0 2942904 <<<<<<<
dm-2 25574.72 0.00 409111.41 0 1828728 <<<<<<<
dm-0 34765.18 0.00 556241.07 0 2491960
dm-1 34749.78 0.00 555996.43 0 2490864
dm-2 34273.21 0.00 548369.64 0 2456696
dm-0 35772.10 0.00 572341.07 0 2564088
dm-1 33792.86 0.00 540530.36 0 2421576
dm-2 35690.40 0.00 571044.64 0 2558280
dm-0 34965.55 0.00 559429.08 0 2500648
dm-1 35015.21 0.00 560243.40 0 2504288
dm-2 34003.80 0.00 543958.84 0 2431496
dm-0 34828.79 0.00 557366.07 0 2497000
dm-1 35144.42 0.00 562417.86 0 2519632
dm-2 33988.39 0.00 543769.64 0 2436088
dm-0 25764.04 0.00 412182.02 0 1834210 <<<<<<<
dm-1 22946.29 0.00 367083.60 0 1633522 <<<<<<<
dm-2 39979.10 0.00 653973.93 0 2910184 <<<<<<<
dm-0 37178.92 0.00 594832.29 0 2652952 <<<<<<<
dm-1 24420.63 0.00 390550.67 0 1741856 <<<<<<<
dm-2 40920.18 0.00 654695.96 0 2919944 <<<<<<<
dm-0 35388.37 0.00 566144.07 0 2530664
dm-1 35114.09 0.00 561782.55 0 2511168
dm-2 33981.66 0.00 543676.06 0 2430232
dm-0 35202.01 0.00 563228.64 0 2517632
dm-1 34419.69 0.00 550713.20 0 2461688
dm-2 34196.64 0.00 547112.30 0 2445592
dm-0 34176.73 0.00 546731.10 0 2443888
dm-1 32948.99 0.00 526990.60 0 2355648
dm-2 36352.57 0.00 581617.90 0 2599832
dm-0 35192.86 0.00 562916.07 0 2521864
dm-1 34445.31 0.00 551071.43 0 2468800
dm-2 34110.71 0.00 545592.86 0 2444256
dm-0 34828.51 0.00 557222.27 0 2501928
dm-1 35213.81 0.00 563376.39 0 2529560
dm-2 34087.31 0.00 545363.03 0 2448680
dm-0 28085.14 0.00 449459.91 0 1995602 <<<<<<<
dm-1 28701.80 0.00 459330.18 0 2039426 <<<<<<<
dm-2 38110.81 0.00 616873.87 0 2738920 <<<<<<<
dm-0 29772.26 0.00 484220.13 0 2164464 <<<<<<<
dm-1 31771.14 0.00 508089.49 0 2271160 <<<<<<<
dm-2 38417.67 0.00 614672.04 0 2747584 <<<<<<<
dm-0 34103.35 0.00 545508.93 0 2443880
dm-1 34408.48 0.00 550446.43 0 2466000
dm-2 34469.20 0.00 563789.29 0 2525776
dm-0 34283.00 0.00 548506.49 0 2451824
dm-1 35376.06 0.00 565993.74 0 2529992
dm-2 34725.95 0.00 555441.61 0 2482824
dm-0 35205.82 0.00 563291.28 0 2517912
dm-1 35118.79 0.00 561898.88 0 2511688
dm-2 33814.32 0.00 540970.02 0 2418136
dm-0 36445.17 0.00 583122.70 0 2594896
dm-1 33097.98 0.00 529398.65 0 2355824
dm-2 34954.38 0.00 559264.72 0 2488728
dm-0 35248.10 0.00 563795.97 0 2520168
dm-1 34957.49 0.00 559200.00 0 2499624
dm-2 34368.68 0.00 549877.40 0 2457952
dm-0 36696.19 0.00 592000.00 0 2640320 <<<<<<<
dm-1 30397.98 0.00 486409.42 0 2169386 <<<<<<<
dm-2 29955.38 3.59 489987.89 16 2185346 <<<<<<<
dm-0 41788.39 0.00 688048.21 0 3082456 <<<<<<<
dm-1 20116.96 0.00 321871.43 0 1441984 <<<<<<<
dm-2 21677.68 5.36 358007.14 24 1603872 <<<<<<<
dm-0 37506.70 0.00 600107.14 0 2688480 <<<<<<<
dm-1 29063.62 0.00 465016.07 0 2083272 <<<<<<<
dm-2 36207.59 0.00 579308.93 0 2595304 <<<<<<<
dm-0 32667.34 5.37 560195.08 24 2504072
dm-1 32984.79 0.00 527293.06 0 2357000
dm-2 35434.90 0.00 566949.44 0 2534264
dm-0 34265.26 0.00 548238.75 0 2461592
dm-1 32824.28 5.35 541412.92 24 2430944
dm-2 35598.66 0.00 569541.20 0 2557240
dm-0 34853.81 0.00 557653.81 0 2487136
dm-1 34941.70 0.00 559065.47 0 2493432
dm-2 34413.45 0.00 550615.25 0 2455744
dm-0 34828.19 0.00 557251.01 0 2490912
dm-1 35065.32 0.00 561045.19 0 2507872
dm-2 34574.05 0.00 553141.83 0 2472544
dm-0 31283.63 3.59 500597.98 16 2232667
dm-1 32622.20 0.00 521850.00 0 2327451
dm-2 35009.19 0.00 560244.39 0 2498690
dm-0 12572.81 0.00 324825.17 0 1445472 <<<<<<<
dm-1 39278.88 0.00 628296.63 0 2795920 <<<<<<<
dm-2 39375.51 0.00 653073.26 0 2906176 <<<<<<<
dm-0 19262.36 0.00 308197.77 0 1383808 <<<<<<<
dm-1 38768.37 0.00 620293.99 0 2785120 <<<<<<<
dm-2 36592.65 0.00 585464.59 0 2628736 <<<<<<<
dm-0 25140.53 0.00 402246.77 0 1806088 <<<<<<<
dm-1 37817.15 0.00 605074.39 0 2716784 <<<<<<<
dm-2 36809.80 0.00 588898.00 0 2644152 <<<<<<<
dm-0 34272.20 0.00 548353.36 0 2445656
dm-1 34775.78 0.00 556362.33 0 2481376
dm-2 35434.08 0.00 566945.29 0 2528576
dm-0 35166.22 0.00 562657.72 0 2515080
dm-1 33327.96 0.00 538122.60 0 2405408
dm-2 34748.99 0.00 555978.52 0 2485224
dm-0 34530.51 0.00 552488.20 0 2480672
dm-1 34470.16 0.00 551520.71 0 2476328
dm-2 34854.12 0.00 562897.10 0 2527408
dm-0 29927.23 0.00 494966.74 0 2217451
dm-1 33754.24 0.00 568400.00 0 2546432
dm-2 31355.58 0.00 536487.95 0 2403466
dm-0 19824.55 0.00 481598.21 0 2157560 <<<<<<<
dm-1 6611.16 0.00 353630.80 0 1584266 <<<<<<<
dm-2 29875.89 0.00 532652.01 0 2386281 <<<<<<<
dm-0 36691.46 0.00 587052.58 0 2612384
dm-1 25364.94 0.00 513862.47 0 2286688
dm-2 36197.08 0.00 593362.70 0 2640464
dm-0 34072.16 0.00 545147.44 0 2447712
dm-1 34371.49 0.00 549910.02 0 2469096
dm-2 35014.92 0.00 566635.19 0 2544192
dm-0 32494.20 0.00 519842.86 0 2328896
dm-1 33713.17 0.00 549982.14 0 2463920
dm-2 33254.46 0.00 565432.14 0 2533136
dm-0 34066.14 0.00 544938.12 0 2430424
dm-1 35761.43 0.00 572182.96 0 2551936
dm-2 35004.71 0.00 560075.34 0 2497936
dm-0 34033.85 0.00 544541.65 0 2444992
dm-1 33596.21 0.00 537507.35 0 2413408
dm-2 32896.44 0.00 546198.66 0 2452432
dm-0 33809.78 0.00 541038.22 0 2434672
dm-1 33782.00 0.00 540574.22 0 2432584
dm-2 35588.67 0.00 578160.00 0 2601720
dm-0 22325.67 0.00 453068.53 0 2029747 <<<<<<<
dm-1 20525.22 0.00 431493.53 0 1933091 <<<<<<<
dm-2 24213.84 5.36 524191.74 24 2348379 <<<<<<<
dm-0 32452.56 0.00 519068.15 0 2330616
dm-1 32655.90 0.00 527748.78 0 2369592
dm-2 32908.24 0.00 564297.55 0 2533696
dm-0 34977.73 0.00 559643.65 0 2512800
dm-1 34287.31 0.00 548564.81 0 2463056
dm-2 33959.02 0.00 548579.06 0 2463120
dm-0 34490.65 0.00 551816.48 0 2477656
dm-1 34505.57 0.00 552064.14 0 2478768
dm-2 33488.42 0.00 541060.13 0 2429360
dm-0 31625.17 5.35 519837.86 24 2334072 <<<<<<<
dm-1 28918.26 0.00 462214.70 0 2075344 <<<<<<<
dm-2 34211.14 0.00 576848.11 0 2590048 <<<<<<<
dm-0 33163.68 0.00 530378.48 0 2365488
dm-1 34059.42 5.38 558333.63 24 2490168
dm-2 35936.77 0.00 574984.75 0 2564432
dm-0 34723.54 0.00 564419.73 0 2517312
dm-1 34305.38 0.00 557705.83 0 2487368
dm-2 33052.69 0.00 528747.98 0 2358216
dm-0 18900.00 0.00 401263.74 0 1781611 <<<<<<<
dm-1 24734.46 0.00 429701.58 0 1907875 <<<<<<<
dm-2 32280.63 0.00 573387.39 0 2545840 <<<<<<<
dm-0 28238.53 0.00 451561.69 0 2027512 <<<<<<<
dm-1 31791.54 0.00 514277.06 0 2309104 <<<<<<<
dm-2 36535.41 0.00 584552.34 0 2624640 <<<<<<<
dm-0 34163.47 0.00 546478.40 0 2453688
dm-1 32867.04 0.00 525512.69 0 2359552
dm-2 34826.95 0.00 562485.52 0 2525560
dm-0 34127.17 0.00 546032.96 0 2451688
dm-1 34187.75 0.00 546984.41 0 2455960
dm-2 32145.66 0.00 514088.20 0 2308256
dm-0 34534.74 0.00 555887.75 0 2495936
dm-1 29349.00 0.00 469056.57 0 2106064
dm-2 33997.10 0.00 557651.67 0 2503856
dm-0 33370.67 0.00 533662.22 0 2401480
dm-1 32164.22 0.00 514227.56 0 2314024
dm-2 35266.00 0.00 569495.11 0 2562728
[-- Attachment #4: 2.6.20-rc3-worse.out --]
[-- Type: text/plain, Size: 9844 bytes --]
After some run time (about 100GB in), 2.6.18 continues
to write at close to 250MB/s per filesystem (as per the original
trace). 2.6.20-rc3 with dirty_ratio = 40 does:
$ vmstat 5
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 7 0 90540 8 15796472 0 0 10 164413 734 6124 0 51 23 25
2 7 0 91060 8 15773864 0 0 0 558089 3780 27825 1 48 12 40
8 8 0 91808 8 15764172 0 0 0 692855 3343 25813 0 63 8 29
4 9 0 86604 8 15770876 0 0 6 670767 3605 32657 1 63 8 28
1 11 0 83840 8 15775184 0 0 0 666833 3398 31701 0 64 3 33
3 9 0 82300 8 15776156 0 0 2 616678 3939 37617 1 58 10 32
11 6 0 90640 8 15771528 0 0 2 696310 3395 38654 0 67 5 28
5 9 0 92412 8 15771872 0 0 2 585962 3453 32651 1 54 14 32
5 9 0 90452 8 15766020 0 0 5 583086 3670 32937 0 55 7 38
2 9 0 87980 8 15763964 0 0 1 681694 3525 30906 1 64 9 26
3 8 0 86664 8 15761044 0 0 2 650998 3677 25797 0 61 6 32
6 8 0 86804 8 15759052 0 0 0 715986 3317 25729 1 69 11 20
3 10 0 84240 8 15765944 0 0 0 673761 3528 25241 0 64 7 29
7 10 0 89916 8 15763700 0 0 5 664583 3506 27555 1 63 7 30
1 8 0 82896 8 15814824 0 0 2 519892 3456 24579 0 49 23 29
3 7 0 88276 8 15809844 0 0 1 415687 3093 15203 0 29 50 20
3 8 0 88324 8 15778956 0 0 0 589647 3370 30624 0 52 13 35
2 8 0 91776 8 15775764 0 0 0 609689 3506 32780 0 56 11 33
3 9 0 85604 8 15778512 0 0 0 656673 3573 37672 0 64 5 31
3 10 0 85676 8 15785708 0 0 5 644915 3746 39503 1 63 6 31
5 8 0 89700 8 15775952 0 0 2 680001 3510 38927 0 65 8 27
0 8 0 82332 8 15814516 0 0 1 638288 3499 36999 0 63 6 30
3 8 0 88748 8 15767756 0 0 0 535037 3797 26778 0 44 9 47
8 9 0 86620 8 15773060 0 0 0 581291 3556 27902 1 54 9 37
3 11 0 93708 8 15759696 0 0 6 647017 3714 33242 0 61 6 32
3 9 0 90172 8 15759952 0 0 0 716961 3345 35311 1 70 7 23
3 9 0 90040 8 15763172 0 0 0 655000 3652 31259 0 61 11 27
4 8 0 87176 8 15764384 0 0 1 643515 3668 32440 0 62 5 32
1 10 0 83812 8 15786040 0 0 0 650601 3605 33809 0 64 8 28
5 9 0 82760 8 15805612 0 0 5 440437 3467 19922 0 35 31 33
$ iostat 5 |grep dm-
dm-0 25998.50 2.57 433094.50 2207 371924231
dm-1 26090.07 2.48 437352.37 2127 375580723
dm-2 26555.31 2.52 451131.47 2167 387413658
dm-0 4710.74 0.00 244624.61 0 1093472
dm-1 10662.86 0.00 293821.92 0 1313384
dm-2 13400.89 3.58 424025.06 16 1895392
dm-0 5820.13 0.00 280354.36 0 1253184
dm-1 31697.99 0.00 506851.01 0 2265624
dm-2 32324.83 0.00 544966.44 0 2436000
dm-0 31331.17 0.00 501296.86 0 2235784
dm-1 34606.50 0.00 553700.45 0 2469504
dm-2 32436.10 0.00 518557.85 0 2312768
dm-0 30406.47 0.00 490957.14 0 2199488
dm-1 29177.46 0.00 466635.71 0 2090528
dm-2 32097.77 0.00 518639.29 0 2323504
dm-0 29419.55 0.00 470106.97 0 2091976
dm-1 32272.36 0.00 526919.55 0 2344792
dm-2 30346.07 0.00 484979.78 0 2158160
dm-0 30287.75 0.00 484119.38 0 2173696
dm-1 25081.74 5.35 412530.96 24 1852264
dm-2 31800.22 0.00 515681.07 0 2315408
dm-0 32531.91 5.39 520604.04 24 2316688
dm-1 32934.83 0.00 527065.17 0 2345440
dm-2 32717.75 0.00 523541.57 0 2329760
dm-0 29717.45 0.00 511937.58 0 2288361
dm-1 16489.26 3.58 408936.69 16 1827947
dm-2 17686.58 0.00 321441.16 0 1436842
dm-0 29951.89 0.00 510305.57 0 2291272
dm-1 28271.27 0.00 451946.55 0 2029240
dm-2 25195.55 0.00 403021.83 0 1809568
dm-0 29194.63 0.00 473558.84 0 2116808
dm-1 31468.68 0.00 506738.26 0 2265120
dm-2 31061.07 0.00 496748.10 0 2220464
dm-0 32711.66 0.00 523320.18 0 2334008
dm-1 29713.90 0.00 478310.31 0 2133264
dm-2 32273.32 0.00 531892.38 0 2372240
dm-0 32079.87 0.00 513091.72 0 2293520
dm-1 33290.38 0.00 532594.18 0 2380696
dm-2 32902.46 0.00 526344.52 0 2352760
dm-0 31768.23 0.00 507946.31 0 2270520
dm-1 27854.14 0.00 445063.09 0 1989432
dm-2 31099.11 0.00 497422.82 0 2223480
dm-0 31845.78 0.00 512867.56 0 2307904
dm-1 30570.67 0.00 495665.78 0 2230496
dm-2 29890.89 0.00 502348.44 0 2260568
dm-0 14096.41 3.59 356843.72 16 1591523
dm-1 13003.14 0.00 336176.46 0 1499347
dm-2 13941.93 0.00 342043.72 0 1525515
dm-0 5363.21 0.00 345241.08 0 1529418
dm-1 1142.89 0.00 260744.02 0 1155096
dm-2 8430.70 0.00 367339.05 0 1627312
dm-0 18464.43 0.00 295423.71 0 1320544
dm-1 27161.97 0.00 462172.71 0 2065912
dm-2 38440.27 0.00 636003.58 0 2842936
dm-0 18842.28 0.00 301259.96 0 1346632
dm-1 30305.59 0.00 487824.61 0 2180576
dm-2 32483.22 0.00 535235.79 0 2392504
dm-0 32819.73 0.00 528437.67 0 2356832
dm-1 30986.10 0.00 495330.94 0 2209176
dm-2 32231.17 0.00 515334.53 0 2298392
dm-0 32100.67 0.00 520234.45 0 2325448
dm-1 27425.28 0.00 437621.48 0 1956168
dm-2 30090.38 0.00 480978.97 0 2149976
dm-0 32054.36 0.00 512816.11 0 2292288
dm-1 32423.49 0.00 518761.52 0 2318864
dm-2 32213.87 5.37 519063.98 24 2320216
dm-0 19114.38 0.00 441338.20 0 1963955
dm-1 22902.70 0.00 381944.27 0 1699652
dm-2 20328.31 0.00 434641.80 0 1934156
dm-0 29357.08 0.00 511746.52 0 2277272
dm-1 9773.71 0.00 299194.61 0 1331416
dm-2 26526.52 0.00 476692.13 0 2121280
dm-0 29961.56 0.00 495352.89 0 2229088
dm-1 14524.67 0.00 304846.22 0 1371808
dm-2 32383.33 0.00 531809.78 0 2393144
dm-0 32353.59 0.00 521090.58 0 2324064
dm-1 30616.82 0.00 489497.76 0 2183160
dm-2 30549.33 0.00 488245.74 0 2177576
dm-0 33246.09 0.00 531937.36 0 2377760
dm-1 33248.77 0.00 531948.10 0 2377808
dm-2 33506.49 0.00 536096.64 0 2396352
dm-0 31178.44 0.00 507873.78 0 2285432
dm-1 24935.56 0.00 397466.67 0 1788600
dm-2 31364.67 0.00 508512.00 0 2288304
dm-0 31783.22 0.00 508452.80 0 2272784
dm-1 28665.10 0.00 458310.51 0 2048648
dm-2 30211.86 0.00 486645.19 0 2175304
dm-0 17848.55 0.00 377156.82 0 1685891
dm-1 24671.14 0.00 430719.24 0 1925315
dm-2 26552.13 0.00 487440.72 0 2178860
dm-0 10159.23 0.00 303722.52 0 1348528
dm-1 17420.05 0.00 478315.32 0 2123720
dm-2 1689.19 0.00 256675.68 0 1139640
dm-0 29412.58 0.00 510968.09 0 2273808
dm-1 33237.08 0.00 548357.75 0 2440192
dm-2 16102.25 0.00 303135.28 0 1348952
dm-0 31949.10 5.38 535790.13 24 2389624
dm-1 27270.85 0.00 434730.04 0 1938896
dm-2 19068.39 0.00 305058.30 0 1360560
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-11 0:31 ` David Chinner
@ 2007-01-11 0:43 ` Christoph Lameter
2007-01-11 1:06 ` David Chinner
2007-01-11 1:08 ` Nick Piggin
1 sibling, 1 reply; 19+ messages in thread
From: Christoph Lameter @ 2007-01-11 0:43 UTC (permalink / raw)
To: David Chinner; +Cc: Nick Piggin, linux-kernel, linux-mm
You are comparing a debian 2.6.18 standard kernel with your tuned version
of 2.6.20-rc3. There may be a lot of differences. Could you get us the
config? Or use the same config file and build 2.6.20/18 the same way.
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-11 0:43 ` Christoph Lameter
@ 2007-01-11 1:06 ` David Chinner
2007-01-11 1:40 ` Christoph Lameter
0 siblings, 1 reply; 19+ messages in thread
From: David Chinner @ 2007-01-11 1:06 UTC (permalink / raw)
To: Christoph Lameter; +Cc: David Chinner, Nick Piggin, linux-kernel, linux-mm
On Wed, Jan 10, 2007 at 04:43:36PM -0800, Christoph Lameter wrote:
> You are comparing a debian 2.6.18 standard kernel with your tuned version
> of 2.6.20-rc3. There may be a lot of differences. Could you get us the
> config? Or use the same config file and build 2.6.20/18 the same way.
I took the /proc/config.gz from the debian 2.6.18-1 kernel as the
base config for the 2.6.20-rc3 kernel and did a make oldconfig on
it to make sure it was valid for the newer kernel while staying
pretty much the same. I think that's the right process, so I don't think
different build configs are the problem here.
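A sketch of that process, assuming the 2.6.20-rc3 tree is unpacked at
./linux-2.6.20-rc3 (a hypothetical path for illustration):
    # seed the new tree with the running kernel's config, taking
    # defaults for any options added since 2.6.18
    zcat /proc/config.gz > linux-2.6.20-rc3/.config
    cd linux-2.6.20-rc3 && make oldconfig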
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-11 0:31 ` David Chinner
2007-01-11 0:43 ` Christoph Lameter
@ 2007-01-11 1:08 ` Nick Piggin
2007-01-11 1:24 ` David Chinner
[not found] ` <20070111063555.GB33919298@melbourne.sgi.com>
1 sibling, 2 replies; 19+ messages in thread
From: Nick Piggin @ 2007-01-11 1:08 UTC (permalink / raw)
To: David Chinner; +Cc: Christoph Lameter, linux-kernel, linux-mm
David Chinner wrote:
> On Thu, Jan 11, 2007 at 10:13:55AM +1100, Nick Piggin wrote:
>
>>David Chinner wrote:
>>
>>>On Wed, Jan 10, 2007 at 03:04:15PM -0800, Christoph Lameter wrote:
>>>
>>>
>>>>On Thu, 11 Jan 2007, David Chinner wrote:
>>>>
>>>>
>>>>
>>>>>The performance and smoothness is fully restored on 2.6.20-rc3
>>>>>by setting dirty_ratio down to 10 (from the default 40), so
>>>>>something in the VM is not working as well as it used to....
>>>>
>>>>dirty_background_ratio is left as is at 10?
>>>
>>>
>>>Yes.
>>>
>>>
>>>
>>>>So you gain performance by switching off background writes via pdflush?
>>>
>>>
>>>Well, pdflush appears to be doing very little on both 2.6.18 and
>>>2.6.20-rc3. In both cases kswapd is consuming 10-20% of a CPU and
>>>all of the pdflush threads combined (I've seen up to 7 active at
>>>once) use maybe 1-2% of cpu time. This occurs regardless of the
>>>dirty_ratio setting.
>>
>>Hi David,
>>
>>Could you get /proc/vmstat deltas for each kernel, to start with?
>
>
> Sure, but that doesn't really show how erratic the per-filesystem
> throughput is, because the test I'm running is PCI-X bus limited in
> its throughput at about 750MB/s. Each dm device is capable of about
> 340MB/s write, so when one slows down, the others will typically
> speed up.
But you do also get aggregate throughput drops? (i.e. 2.6.20-rc3-worse)
> So, what I've attached is three files which have both
> 'vmstat 5' output and 'iostat 5 |grep dm-' output in them.
Ahh, sorry to be unclear, I meant:
cat /proc/vmstat > pre
run_test
cat /proc/vmstat > post
It might just give us a hint as to what is changing (however, vmstat
doesn't give much of interest in the way of pdflush stats, so it might
not show anything up).
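For the record, a minimal sketch for computing the deltas once both
snapshots exist, relying on the "name value" per-line format of
/proc/vmstat:
    # print each counter with its post-minus-pre delta
    awk 'NR==FNR { pre[$1] = $2; next } { print $1, $2 - pre[$1] }' pre post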
Thanks,
Nick
--
SUSE Labs, Novell Inc.
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-10 23:08 ` David Chinner
2007-01-10 23:12 ` Christoph Lameter
2007-01-10 23:13 ` Nick Piggin
@ 2007-01-11 1:11 ` David Chinner
2 siblings, 0 replies; 19+ messages in thread
From: David Chinner @ 2007-01-11 1:11 UTC (permalink / raw)
To: David Chinner; +Cc: Christoph Lameter, linux-kernel, linux-mm
On Thu, Jan 11, 2007 at 10:08:55AM +1100, David Chinner wrote:
> On Wed, Jan 10, 2007 at 03:04:15PM -0800, Christoph Lameter wrote:
> > On Thu, 11 Jan 2007, David Chinner wrote:
> >
> > > The performance and smoothness is fully restored on 2.6.20-rc3
> > > by setting dirty_ratio down to 10 (from the default 40), so
> > > something in the VM is not working as well as it used to....
> >
> > dirty_background_ratio is left as is at 10?
>
> Yes.
FWIW, setting dirty_ratio to 20 instead of 10 fixes the most of
the erraticness of the writeback and most of the performance as well.
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-11 1:08 ` Nick Piggin
@ 2007-01-11 1:24 ` David Chinner
2007-01-11 9:27 ` Nick Piggin
[not found] ` <20070111063555.GB33919298@melbourne.sgi.com>
1 sibling, 1 reply; 19+ messages in thread
From: David Chinner @ 2007-01-11 1:24 UTC (permalink / raw)
To: Nick Piggin; +Cc: David Chinner, Christoph Lameter, linux-kernel, linux-mm
On Thu, Jan 11, 2007 at 12:08:10PM +1100, Nick Piggin wrote:
> David Chinner wrote:
> >Sure, but that doesn't really show how erratic the per-filesystem
> >throughput is, because the test I'm running is PCI-X bus limited in
> >its throughput at about 750MB/s. Each dm device is capable of about
> >340MB/s write, so when one slows down, the others will typically
> >speed up.
>
> But you do also get aggregate throughput drops? (i.e. 2.6.20-rc3-worse)
Yes - you can see that from the vmstat output I sent.
At 500GB into the write of each file (about 60% of the disks filled)
the per fs write rate should be around 220MB/s, so aggregate should
be around 650MB/s. That's what I'm seeing with 2.6.18 and 2.6.20-rc3
with a tweaked dirty_ratio. Without the dirty_ratio tweak, you see
what is in 2.6.20-rc3-worse.
E.g. I just changed dirty_ratio from 10 to 40 and I've gone from
a consistent 210-215MB/s per filesystem (~630-650MB/s aggregate) to
ranging over 110-200MB/s per filesystem and aggregates of ~450-600MB/s.
I changed dirty_ratio back to 10, and within 15 seconds we are back
to consistent 210MB/s per filesystem and 630-650MB/s write.
> >So, what I've attached is three files which have both
> >'vmstat 5' output and 'iostat 5 |grep dm-' output in them.
>
> Ahh, sorry to be unclear, I meant:
>
> cat /proc/vmstat > pre
> run_test
> cat /proc/vmstat > post
Ok, I'll get back to you on that one - even at 600+MB/s, writing 5TB
of data takes some time....
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-11 1:06 ` David Chinner
@ 2007-01-11 1:40 ` Christoph Lameter
2007-01-11 2:57 ` David Chinner
0 siblings, 1 reply; 19+ messages in thread
From: Christoph Lameter @ 2007-01-11 1:40 UTC (permalink / raw)
To: David Chinner; +Cc: Nick Piggin, linux-kernel, linux-mm
On Thu, 11 Jan 2007, David Chinner wrote:
> On Wed, Jan 10, 2007 at 04:43:36PM -0800, Christoph Lameter wrote:
> > You are comparing a debian 2.6.18 standard kernel with your tuned version
> > of 2.6.20-rc3. There may be a lot of differences. Could you get us the
> > config? Or use the same config file and build 2.6.20/18 the same way.
>
> I took the /proc/config.gz from the debian 2.6.18-1 kernel as the
> base config for the 2.6.20-rc3 kernel and did a make oldconfig on
> it to make sure it was valid for the newer kernel while staying
> pretty much the same. I think that's the right process, so I don't think
> different build configs are the problem here.
Debian may have added extra patches that are not upstream. I see e.g. some
of my post-2.6.18 patches in there.
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-11 1:40 ` Christoph Lameter
@ 2007-01-11 2:57 ` David Chinner
0 siblings, 0 replies; 19+ messages in thread
From: David Chinner @ 2007-01-11 2:57 UTC (permalink / raw)
To: Christoph Lameter; +Cc: David Chinner, Nick Piggin, linux-kernel, linux-mm
On Wed, Jan 10, 2007 at 05:40:26PM -0800, Christoph Lameter wrote:
> On Thu, 11 Jan 2007, David Chinner wrote:
>
> > On Wed, Jan 10, 2007 at 04:43:36PM -0800, Christoph Lameter wrote:
> > > You are comparing a debian 2.6.18 standard kernel with your tuned version
> > > of 2.6.20-rc3. There may be a lot of differences. Could you get us the
> > > config? Or use the same config file and build 2.6.20/18 the same way.
> >
> > I took the /proc/config.gz from the debian 2.6.18-1 kernel as the
> > base config for the 2.6.20-rc3 kernel and did a make oldconfig on
> > it to make sure it was valid for the newer kernel while staying
> > pretty much the same. I think that's the right process, so I don't think
> > different build configs are the problem here.
>
> Debian may have added extra patches that are not upstream. I see f.e. some
> of my post 2.6.18 patches in there.
Did you read the thread I linked in my original report? The original
bug report was for a regression from 2.6.18.1 to 2.6.20-rc3. I have
reproduced the same regression between the debian 2.6.18-1 kernel
and 2.6.20-rc3. I think you're looking in the wrong place for the
cause of the problem....
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
[not found] ` <20070111063555.GB33919298@melbourne.sgi.com>
@ 2007-01-11 9:23 ` Nick Piggin
0 siblings, 0 replies; 19+ messages in thread
From: Nick Piggin @ 2007-01-11 9:23 UTC (permalink / raw)
To: David Chinner; +Cc: Christoph Lameter, linux-kernel, Linux Memory Management
Thanks. BTW, you didn't cc this to the list, so I won't either in case
you want it kept private.
David Chinner wrote:
> On Thu, Jan 11, 2007 at 12:08:10PM +1100, Nick Piggin wrote:
>
>>Ahh, sorry to be unclear, I meant:
>>
>> cat /proc/vmstat > pre
>> run_test
>> cat /proc/vmstat > post
>
>
> 6 files attached - 2.6.18 pre/post, 2.6.20-rc3 dirty_ratio = 10 pre/post
> and 2.6.20-rc3 dirty_ratio=40 pre/post.
>
> Cheers,
>
> Dave.
>
>
> ------------------------------------------------------------------------
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-11 1:24 ` David Chinner
@ 2007-01-11 9:27 ` Nick Piggin
2007-01-11 17:51 ` Christoph Lameter
0 siblings, 1 reply; 19+ messages in thread
From: Nick Piggin @ 2007-01-11 9:27 UTC (permalink / raw)
To: David Chinner; +Cc: Christoph Lameter, linux-kernel, linux-mm
David Chinner wrote:
> On Thu, Jan 11, 2007 at 12:08:10PM +1100, Nick Piggin wrote:
>>>So, what I've attached is three files which have both
>>>'vmstat 5' output and 'iostat 5 |grep dm-' output in them.
>>
>>Ahh, sorry to be unclear, I meant:
>>
>> cat /proc/vmstat > pre
>> run_test
>> cat /proc/vmstat > post
>
>
> Ok, I'll get back to you on that one - even at 600+MB/s, writing 5TB
> of data takes some time....
OK, according to your vmstat deltas, you are doing an order of magnitude
more writeout off the LRU with 2.6.20-rc3 default than with the smaller
dirty_ratio (53GB of data vs 4GB of data). 2.6.18 does not have that stat,
unfortunately.
allocstall and direct reclaim are way down when the dirty ratio is lower,
but those numbers with vanilla 2.6.20-rc3 are comparable to 2.6.18, so
that shows that kswapd in 2.6.18 is probably also having trouble, which
may mean it is also writing out a lot off the LRU.
You're not turning on zone_reclaim, by any chance, are you?
Otherwise, nothing jumps out at me yet. I'll have a bit of a look through
changelogs tomorrow. I guess it could be a pdflush or vmscan change (XFS,
maybe?).
Can you narrow it down at all?
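If it helps, the standard way to narrow a regression like this is to
bisect; a minimal sketch, assuming Linus's git tree and a reliably
reproducible slowdown:
    git bisect start
    git bisect bad v2.6.19     # first release showing the regression
    git bisect good v2.6.18    # last known-good release
    # build and boot each suggested commit, rerun the dd test, then
    # mark it with "git bisect good" or "git bisect bad"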
Thanks,
Nick
--
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-11 9:27 ` Nick Piggin
@ 2007-01-11 17:51 ` Christoph Lameter
2007-01-12 0:06 ` Nick Piggin
0 siblings, 1 reply; 19+ messages in thread
From: Christoph Lameter @ 2007-01-11 17:51 UTC (permalink / raw)
To: Nick Piggin; +Cc: David Chinner, linux-kernel, linux-mm
On Thu, 11 Jan 2007, Nick Piggin wrote:
> You're not turning on zone_reclaim, by any chance, are you?
It is not a NUMA system, so zone reclaim is not available. Zone reclaim
was already in 2.6.16.
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-11 17:51 ` Christoph Lameter
@ 2007-01-12 0:06 ` Nick Piggin
2007-01-12 3:04 ` Christoph Lameter
0 siblings, 1 reply; 19+ messages in thread
From: Nick Piggin @ 2007-01-12 0:06 UTC (permalink / raw)
To: Christoph Lameter; +Cc: David Chinner, linux-kernel, linux-mm
Christoph Lameter wrote:
> On Thu, 11 Jan 2007, Nick Piggin wrote:
>
>
>>You're not turning on zone_reclaim, by any chance, are you?
>
>
> It is not a NUMA system so zone reclaim is not available.
Ah yes... Can't you force it on if you have a NUMA compiled kernel?
> Zone reclaim was
> already in 2.6.16.
Well it was a long shot, but that is something that has had a few
changes recently and is something that could interact badly with
the global pdflush.
--
SUSE Labs, Novell Inc.
* Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown
2007-01-12 0:06 ` Nick Piggin
@ 2007-01-12 3:04 ` Christoph Lameter
0 siblings, 0 replies; 19+ messages in thread
From: Christoph Lameter @ 2007-01-12 3:04 UTC (permalink / raw)
To: Nick Piggin; +Cc: David Chinner, linux-kernel, linux-mm
On Fri, 12 Jan 2007, Nick Piggin wrote:
> Ah yes... Can't you force it on if you have a NUMA compiled kernel?
But it won't do anything since it only comes into action if you have an
off-node allocation. If you run a NUMA kernel on an SMP system then you
only have one node. There is no way that an off-node allocation can occur.
> > Zone reclaim was already in 2.6.16.
>
> Well it was a long shot, but that is something that has had a few
> changes recently and is something that could interact badly with
> the global pdflush.
Zone reclaim does not touch dirty pages in its default configuration. It
would only remove clean pagecache pages.
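For completeness, the knob in question; a sketch assuming a NUMA-enabled
kernel (the file does not exist otherwise):
    # 0 disables zone reclaim; non-zero modes reclaim local
    # pagecache before falling back to off-node allocation
    cat /proc/sys/vm/zone_reclaim_mode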