LKML Archive on lore.kernel.org
* Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Justin Piszcz @ 2007-01-11 23:38 UTC (permalink / raw)
  To: linux-kernel, linux-raid, xfs

Using 4 raptor 150s:

Without the tweaks, I get 111MB/s write and 87MB/s read.
With the tweaks, 195MB/s write and 211MB/s read.

Using kernel 2.6.19.1.

Without the tweaks and with the tweaks:

# Stripe tests:
echo 8192 > /sys/block/md3/md/stripe_cache_size
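
To sweep a few stripe_cache_size values in one go, a loop along these lines
works (just a sketch; the md device, output file names and candidate values
are the ones used in the tests below):

for size in 256 8192 16384 32768; do
    # stripe_cache_size is counted in pages per member device, so larger
    # values cost more RAM
    echo "$size" > /sys/block/md3/md/stripe_cache_size
    dd if=/dev/zero of=10gb.$size.stripe.out bs=1M count=10240
done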

# DD TESTS [WRITE]

DEFAULT: (512K)
$ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s

8192 STRIPE CACHE
$ dd if=/dev/zero of=10gb.8192k.stripe.out bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 55.0628 seconds, 195 MB/s
(and again...)
10737418240 bytes (11 GB) copied, 61.9902 seconds, 173 MB/s
(and again...)
10737418240 bytes (11 GB) copied, 61.3053 seconds, 175 MB/s
** maybe 16384 is better, need to do more testing.

16384 STRIPE CACHE
$ dd if=/dev/zero of=10gb.16384k.stripe.out bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 56.2793 seconds, 191 MB/s

32768 STRIPE CACHE
$ dd if=/dev/zero of=10gb.32768.stripe.out bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 55.8382 seconds, 192 MB/s

# Set readahead.
blockdev --setra 16384 /dev/md3

# DD TESTS [READ]

DEFAULT: (1536K READ AHEAD)
$ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
298+0 records in
297+0 records out
311427072 bytes (311 MB) copied, 3.5453 seconds, 87.8 MB/s

2048K READ AHEAD
$ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 85.4632 seconds, 126 MB/s

8192K READ AHEAD
$ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 64.9454 seconds, 165 MB/s

16384K READ AHEAD
$ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 59.3119 seconds, 181 MB/s

32768 READ AHEAD
$ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 56.6329 seconds, 190 MB/s

65536 READ AHEAD
$ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 54.9768 seconds, 195 MB/s

131072 READ AHEAD
$ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 52.0896 seconds, 206 MB/s

262144 READ AHEAD**
$ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 50.8496 seconds, 211 MB/s
(and again..)
10737418240 bytes (11 GB) copied, 51.2064 seconds, 210 MB/s***

524288 READ AHEAD
$ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 59.6508 seconds, 180 MB/s
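
These sysfs/blockdev settings do not survive a reboot, so the winning
combination can be re-applied from a small boot script, roughly like this
(a sketch; the device name and values are simply the best ones from the
runs above):

#!/bin/sh
# Re-apply the RAID 5 tuning found above at every boot.
echo 8192 > /sys/block/md3/md/stripe_cache_size   # best write result above
blockdev --setra 262144 /dev/md3                  # read-ahead, in 512-byte sectors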

Output (vmstat) during a write test:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 2  1    172 730536     12 952740    0    0     0 357720 1836 107450  0 80  6 15
 1  1    172 485016     12 1194448    0    0     0 171760 1604 42853  0 38 16 46
 1  0    172 243960     12 1432140    0    0     0 223088 1598 63118  0 44 25 31
 0  0    172  77428     12 1596240    0    0     0 199736 1559 56939  0 36 28 36
 2  0    172  50328     12 1622796    0    0    16 87496 1726 31251  0 27 73  0
 2  1    172  50600     12 1622052    0    0     0 313432 1739 88026  0 53 16 32
 1  1    172  51012     12 1621216    0    0     0 200656 1586 56349  0 38  9 53
 0  3    172  50084     12 1622408    0    0     0 204320 1588 67085  0 40 24 36
 1  1    172  51716     12 1620760    0    0     0 245672 1608 81564  0 61 13 26
 0  2    172  51168     12 1621432    0    0     0 212740 1622 67203  0 44 22 34
 0  2    172  51940     12 1620516    0    0     0 203704 1614 59396  0 42 24 35
 0  0    172  51188     12 1621348    0    0     0 171744 1582 56664  0 38 28 34
 1  0    172  52264     12 1620812    0    0     0 143792 1724 43543  0 39 59  2
 0  1    172  48292     12 1623984    0    0    16 248784 1610 73980  0 40 19 41
 0  2    172  51868     12 1620596    0    0     0 209184 1571 60611  0 40 20 40
 1  1    172  51168     12 1621340    0    0     0 205020 1620 70048  0 38 27 34
 2  0    172  51076     12 1621508    0    0     0 236400 1658 81582  0 59 13 29
 0  0    172  51284     12 1621064    0    0     0 138739 1611 40220  0 30 34 36
 1  0    172  52020     12 1620376    0    0     4 170200 1752 52315  0 38 58  5

Output (vmstat) during a read test:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0    172  53484     12 1769396    0    0     0     0 1005   54  0  0 100  0
 0  0    172  53148     12 1740380    0    0 221752     0 1562 11779  0 22 70  9
 0  0    172  53868     12 1709048    0    0 231764    16 1708 14658  0 37 54  9
 2  0    172  53384     12 1768236    0    0 189604     8 1646 8507  0 28 59 13
 2  0    172  53920     12 1758856    0    0 253708     0 1716 17665  0 37 63  0
 0  0    172  50704     12 1739872    0    0 239700     0 1654 10949  0 41 54  5
 1  0    172  50796     12 1684120    0    0 206236     0 1722 16610  0 43 57  0
 2  0    172  53012     12 1768192    0    0 217876    12 1709 17022  0 34 66  0
 0  0    172  50676     12 1761664    0    0 252840     8 1711 15985  0 38 62  0
 0  0    172  53676     12 1736192    0    0 240072     0 1686 7530  0 42 54  4
 0  0    172  52892     12 1686740    0    0 211924     0 1707 16284  0 38 62  0
 2  0    172  53536     12 1767580    0    0 212668     0 1680 18409  0 34 62  5
 0  0    172  50488     12 1760780    0    0 251972     9 1719 15818  0 41 59  0
 0  0    172  53912     12 1736916    0    0 241932     8 1645 12602  0 37 54  9
 1  0    172  53296     12 1656072    0    0 180800     0 1723 15826  0 41 59  0
 1  1    172  51208     12 1770156    0    0 242800     0 1738 11146  1 30 64  6
 2  0    172  53604     12 1756452    0    0 251104     0 1652 10315  0 39 59  2
 0  0    172  53268     12 1739120    0    0 244536     0 1679 18972  0 44 56  0
 1  0    172  53256     12 1664920    0    0 187620     0 1668 19003  0 39 53  8
 1  0    172  53716     12 1767424    0    0 234244     0 1711 17040  0 32 64  5
 2  0    172  53680     12 1760680    0    0 255196     0 1695 9895  0 38 61  1
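
One way to capture samples like these while a dd run is in progress (a
sketch; the one-second interval and file names are arbitrary):

vmstat 1 > vmstat.write.log &        # sample once a second in the background
VMSTAT_PID=$!
dd if=/dev/zero of=10gb.out bs=1M count=10240
kill "$VMSTAT_PID"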



* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Michael Tokarev @ 2007-01-12 14:01 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-kernel, linux-raid, xfs

Justin Piszcz wrote:
> Using 4 raptor 150s:
> 
> Without the tweaks, I get 111MB/s write and 87MB/s read.
> With the tweaks, 195MB/s write and 211MB/s read.
> 
> Using kernel 2.6.19.1.
> 
> Without the tweaks and with the tweaks:
> 
> # Stripe tests:
> echo 8192 > /sys/block/md3/md/stripe_cache_size
> 
> # DD TESTS [WRITE]
> 
> DEFAULT: (512K)
> $ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s
[]
> 8192K READ AHEAD
> $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 64.9454 seconds, 165 MB/s

What exactly are you measuring?  Linear read/write, like copying one
device to another (or to /dev/null), in large chunks?

I don't think it's an interesting test.  Hint: how many times a day
do you plan to perform such a copy?

(By the way, for a copy of one block device to another, try using
O_DIRECT, with two dd processes doing the copy - one reading, and
another writing - this way, you'll get the best results without a huge
effect on other things running on the system.  Like this:

 dd if=/dev/onedev bs=1M iflag=direct |
 dd of=/dev/twodev bs=1M oflag=direct
)

/mjt


* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Justin Piszcz @ 2007-01-12 14:38 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: linux-kernel, linux-raid, xfs



On Fri, 12 Jan 2007, Michael Tokarev wrote:

> Justin Piszcz wrote:
> > Using 4 raptor 150s:
> > 
> > Without the tweaks, I get 111MB/s write and 87MB/s read.
> > With the tweaks, 195MB/s write and 211MB/s read.
> > 
> > Using kernel 2.6.19.1.
> > 
> > Without the tweaks and with the tweaks:
> > 
> > # Stripe tests:
> > echo 8192 > /sys/block/md3/md/stripe_cache_size
> > 
> > # DD TESTS [WRITE]
> > 
> > DEFAULT: (512K)
> > $ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240
> > 10240+0 records in
> > 10240+0 records out
> > 10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s
> []
> > 8192K READ AHEAD
> > $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
> > 10240+0 records in
> > 10240+0 records out
> > 10737418240 bytes (11 GB) copied, 64.9454 seconds, 165 MB/s
> 
> What exactly are you measuring?  Linear read/write, like copying one
> device to another (or to /dev/null), in large chunks?
Check bonnie benchmarks below.
> 
> I don't think it's an interesting test.  Hint: how many times a day
> you plan to perform such a copy?
It is a measurement of raw performance.
> 
> (By the way, for a copy of one block device to another, try using
> O_DIRECT, with two dd processes doing the copy - one reading, and
> another writing - this way, you'll get best results without huge
> affect on other things running on the system.  Like this:
> 
>  dd if=/dev/onedev bs=1M iflag=direct |
>  dd of=/dev/twodev bs=1M oflag=direct
> )
Interesting, I will take this into consideration -- however, the unrar test
below shows a roughly 2:1 improvement.
> 
> /mjt
> 

Decompress/unrar a DVD-sized file:

On the following RAID volumes with the same set of [4] 150GB raptors:

RAID  0] 1:13.16 elapsed @ 49% CPU
RAID  4] 2:05.85 elapsed @ 30% CPU 
RAID  5] 2:01.94 elapsed @ 32% CPU
RAID  6] 2:39.34 elapsed @ 24% CPU
RAID 10] 1:52.37 elapsed @ 32% CPU

RAID 5 Tweaked (8192 stripe_cache & 16384 setra/blockdev):

RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU

I did not tweak RAID 0, but seeing that tweaked RAID 5 is faster than RAID 0
is good enough for me :)

RAID0 did 278MB/s read and 317MB/s write (by the way)
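
Timings in that format can be collected by wrapping the extraction in GNU
time, e.g. (a sketch; the archive name and destination directory are
placeholders):

/usr/bin/time unrar x dvd-image.rar /raid/scratch/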

Here are the bonnie results; the times alone speak for themselves, going
from about 8 minutes per run down to 5 minutes 48-59 seconds.

# No optimizations:
# Run Benchmarks
Default Bonnie: 
[nr_requests=128,max_sectors_kb=512,stripe_cache_size=256,read_ahead=1536]
default_run1,4000M,42879,98,105436,19,41081,11,46277,96,87845,15,639.2,1,16:100000:16/64,380,4,29642,99,2990,18,469,5,11784,40,1712,12
default_run2,4000M,47145,99,108664,19,40931,11,46466,97,94158,16,634.8,0,16:100000:16/64,377,4,16990,56,2850,17,431,4,21066,71,1800,13
default_run3,4000M,43653,98,109063,19,40898,11,46447,97,97141,16,645.8,1,16:100000:16/64,373,4,22302,75,2793,16,420,4,16708,56,1794,13
default_run4,4000M,46485,98,110664,20,41102,11,46443,97,93616,16,631.3,1,16:100000:16/64,363,3,14484,49,2802,17,388,4,25532,86,1604,12
default_run5,4000M,43813,98,109800,19,41214,11,46457,97,92563,15,635.1,1,16:100000:16/64,376,4,28990,95,2827,17,388,4,22874,76,1817,13

169.88user 44.01system 8:02.98elapsed 44%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (6major+1102minor)pagefaults 0swaps
161.60user 44.33system 7:53.14elapsed 43%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (13major+1095minor)pagefaults 0swaps
166.64user 45.24system 8:00.07elapsed 44%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (13major+1096minor)pagefaults 0swaps
161.90user 44.66system 8:00.85elapsed 42%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (13major+1094minor)pagefaults 0swaps
167.61user 44.12system 8:03.26elapsed 43%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (13major+1092minor)pagefaults 0swaps


All optimizations [bonnie++] 

168.08user 46.05system 5:55.13elapsed 60%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (16major+1092minor)pagefaults 0swaps
162.65user 46.21system 5:48.47elapsed 59%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (7major+1101minor)pagefaults 0swaps
168.06user 45.74system 5:59.84elapsed 59%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (7major+1102minor)pagefaults 0swaps
168.00user 46.18system 5:58.77elapsed 59%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (13major+1095minor)pagefaults 0swaps
167.98user 45.53system 5:56.49elapsed 59%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (5major+1101minor)pagefaults 0swaps

c6300-optimized:4000M,43976,99,167209,29,73109,22,43471,91,208572,40,511.4,1,16:100000:16/64,1109,12,26948,89,2469,14,1051,11,29037,97,2167,16
c6300-optimized:4000M,47455,99,190212,35,70402,21,43167,92,206290,40,503.3,1,16:100000:16/64,1071,11,29893,99,2804,16,1059,12,24887,84,2090,16
c6300-optimized:4000M,43979,99,172543,29,71811,21,41760,87,201870,39,498.9,1,16:100000:16/64,1042,11,30276,99,2800,16,1063,12,29491,99,2257,17
c6300-optimized:4000M,43824,98,164585,29,73470,22,43098,90,207003,40,489.1,1,16:100000:16/64,1045,11,30288,98,2512,15,1018,11,27365,92,2097,16
c6300-optimized:4000M,44003,99,194250,32,71055,21,43327,91,196553,38,505.8,1,16:100000:16/64,1031,11,30278,98,2474,14,1049,12,28068,94,2027,15

txt version of optimized results:

Version  1.03      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
c6300-optimiz 47455    99 190212    35 70402    21 43167    92 206290    40 503.3     1
c6300-optimiz 43979    99 172543    29 71811    21 41760    87 201870    39 498.9     1
c6300-optimiz 43824    98 164585    29 73470    22 43098    90 207003    40 489.1     1
c6300-optimiz 44003    99 194250    32 71055    21 43327    91 196553    38 505.8     1



* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Justin Piszcz @ 2007-01-12 17:37 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: linux-kernel, linux-raid, xfs

RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU

Correction: this should be 1:14, not 1:06 (the 1:06 run used a similarly 
sized file, but not the same one); the 1:14 is for the same file used with 
the other benchmarks.  To get that I used a 256 MB read-ahead, a 
stripe_cache_size of 16384, and max_sectors_kb of 128 (the same size as my 
software RAID 5 chunk size).
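
Spelled out as commands, that combination looks roughly like this (a
sketch; it assumes the 256 MB read-ahead means 524288 512-byte sectors, and
that sde/sdg/sdi/sdk are the four array members):

blockdev --setra 524288 /dev/md3                      # 256 MB of read-ahead
echo 16384 > /sys/block/md3/md/stripe_cache_size
for disk in sde sdg sdi sdk; do
    echo 128 > /sys/block/$disk/queue/max_sectors_kb  # match the 128k chunk
done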

On Fri, 12 Jan 2007, Justin Piszcz wrote:

> 
> 
> On Fri, 12 Jan 2007, Michael Tokarev wrote:
> 
> > Justin Piszcz wrote:
> > > Using 4 raptor 150s:
> > > 
> > > Without the tweaks, I get 111MB/s write and 87MB/s read.
> > > With the tweaks, 195MB/s write and 211MB/s read.
> > > 
> > > Using kernel 2.6.19.1.
> > > 
> > > Without the tweaks and with the tweaks:
> > > 
> > > # Stripe tests:
> > > echo 8192 > /sys/block/md3/md/stripe_cache_size
> > > 
> > > # DD TESTS [WRITE]
> > > 
> > > DEFAULT: (512K)
> > > $ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240
> > > 10240+0 records in
> > > 10240+0 records out
> > > 10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s
> > []
> > > 8192K READ AHEAD
> > > $ dd if=10gb.16384k.stripe.out of=/dev/null bs=1M
> > > 10240+0 records in
> > > 10240+0 records out
> > > 10737418240 bytes (11 GB) copied, 64.9454 seconds, 165 MB/s
> > 
> > What exactly are you measuring?  Linear read/write, like copying one
> > device to another (or to /dev/null), in large chunks?
> Check bonnie benchmarks below.
> > 
> > I don't think it's an interesting test.  Hint: how many times a day
> > you plan to perform such a copy?
> It is a measurement of raw performance.
> > 
> > (By the way, for a copy of one block device to another, try using
> > O_DIRECT, with two dd processes doing the copy - one reading, and
> > another writing - this way, you'll get best results without huge
> > affect on other things running on the system.  Like this:
> > 
> >  dd if=/dev/onedev bs=1M iflag=direct |
> >  dd of=/dev/twodev bs=1M oflag=direct
> > )
> Interesting, I will take this into consideration-- however, an untar test 
> shows a 2:1 improvement, see below.
> > 
> > /mjt
> > 
> 
> Decompress/unrar a DVD-sized file:
> 
> On the following RAID volumes with the same set of [4] 150GB raptors:
> 
> RAID  0] 1:13.16 elapsed @ 49% CPU
> RAID  4] 2:05.85 elapsed @ 30% CPU 
> RAID  5] 2:01.94 elapsed @ 32% CPU
> RAID  6] 2:39.34 elapsed @ 24% CPU
> RAID 10] 1:52.37 elapsed @ 32% CPU
> 
> RAID 5 Tweaked (8192 stripe_cache & 16384 setra/blockdev)::
> 
> RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
> 
> I did not tweak raid 0, but seeing how RAID5 tweaked is faster than RAID0 
> is good enough for me :)
> 
> RAID0 did 278MB/s read and 317MB/s write (by the way)
> 
> Here are the bonnie results, the times alone speak for themselves, from 8 
> minutes to min and 48-59 seconds.
> 
> # No optimizations:
> # Run Benchmarks
> Default Bonnie: 
> [nr_requests=128,max_sectors_kb=512,stripe_cache_size=256,read_ahead=1536]
> default_run1,4000M,42879,98,105436,19,41081,11,46277,96,87845,15,639.2,1,16:100000:16/64,380,4,29642,99,2990,18,469,5,11784,40,1712,12
> default_run2,4000M,47145,99,108664,19,40931,11,46466,97,94158,16,634.8,0,16:100000:16/64,377,4,16990,56,2850,17,431,4,21066,71,1800,13
> default_run3,4000M,43653,98,109063,19,40898,11,46447,97,97141,16,645.8,1,16:100000:16/64,373,4,22302,75,2793,16,420,4,16708,56,1794,13
> default_run4,4000M,46485,98,110664,20,41102,11,46443,97,93616,16,631.3,1,16:100000:16/64,363,3,14484,49,2802,17,388,4,25532,86,1604,12
> default_run5,4000M,43813,98,109800,19,41214,11,46457,97,92563,15,635.1,1,16:100000:16/64,376,4,28990,95,2827,17,388,4,22874,76,1817,13
> 
> 169.88user 44.01system 8:02.98elapsed 44%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (6major+1102minor)pagefaults 0swaps
> 161.60user 44.33system 7:53.14elapsed 43%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (13major+1095minor)pagefaults 0swaps
> 166.64user 45.24system 8:00.07elapsed 44%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (13major+1096minor)pagefaults 0swaps
> 161.90user 44.66system 8:00.85elapsed 42%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (13major+1094minor)pagefaults 0swaps
> 167.61user 44.12system 8:03.26elapsed 43%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (13major+1092minor)pagefaults 0swaps
> 
> 
> All optimizations [bonnie++] 
> 
> 168.08user 46.05system 5:55.13elapsed 60%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (16major+1092minor)pagefaults 0swaps
> 162.65user 46.21system 5:48.47elapsed 59%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (7major+1101minor)pagefaults 0swaps
> 168.06user 45.74system 5:59.84elapsed 59%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (7major+1102minor)pagefaults 0swaps
> 168.00user 46.18system 5:58.77elapsed 59%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (13major+1095minor)pagefaults 0swaps
> 167.98user 45.53system 5:56.49elapsed 59%CPU (0avgtext+0avgdata 
> 0maxresident)k
> 0inputs+0outputs (5major+1101minor)pagefaults 0swaps
> 
> c6300-optimized:4000M,43976,99,167209,29,73109,22,43471,91,208572,40,511.4,1,16:100000:16/64,1109,12,26948,89,2469,14,1051,11,29037,97,2167,16
> c6300-optimized:4000M,47455,99,190212,35,70402,21,43167,92,206290,40,503.3,1,16:100000:16/64,1071,11,29893,99,2804,16,1059,12,24887,84,2090,16
> c6300-optimized:4000M,43979,99,172543,29,71811,21,41760,87,201870,39,498.9,1,16:100000:16/64,1042,11,30276,99,2800,16,1063,12,29491,99,2257,17
> c6300-optimized:4000M,43824,98,164585,29,73470,22,43098,90,207003,40,489.1,1,16:100000:16/64,1045,11,30288,98,2512,15,1018,11,27365,92,2097,16
> c6300-optimized:4000M,44003,99,194250,32,71055,21,43327,91,196553,38,505.8,1,16:100000:16/64,1031,11,30278,98,2474,14,1049,12,28068,94,2027,15
> 
> txt version of optimized results:
> 
> Version  1.03      ------Sequential Output------ --Sequential Input- 
> --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- 
> --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  
> /sec %CP
> c6300-optimiz 47455    99 190212    35 70402    21 43167    92 206290    
> 40 503.3     1
> c6300-optimiz 43979    99 172543    29 71811    21 41760    87 201870    
> 39 498.9     1
> c6300-optimiz 43824    98 164585    29 73470    22 43098    90 207003    
> 40 489.1     1
> c6300-optimiz 44003    99 194250    32 71055    21 43327    91 196553    
> 38 505.8     1
> 
> 


* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Al Boldi @ 2007-01-12 19:49 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-kernel, linux-raid, xfs

Justin Piszcz wrote:
> RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
>
> This should be 1:14 not 1:06(was with a similarly sized file but not the
> same) the 1:14 is the same file as used with the other benchmarks.  and to
> get that I used 256mb read-ahead and 16384 stripe size ++ 128
> max_sectors_kb (same size as my sw raid5 chunk size)

max_sectors_kb is probably your key. On my system I get twice the read 
performance by just reducing max_sectors_kb from default 512 to 192.

Can you do a fresh reboot to shell and then:
$ cat /sys/block/hda/queue/*
$ cat /proc/meminfo
$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/hda of=/dev/null bs=1M count=10240
$ echo 192 > /sys/block/hda/queue/max_sectors_kb
$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/hda of=/dev/null bs=1M count=10240
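
(Or, equivalently, as a loop over a few values; just a sketch, the device
and values are only an example:)

for kb in 512 192 128; do
    echo "$kb" > /sys/block/hda/queue/max_sectors_kb
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/hda of=/dev/null bs=1M count=10240
done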


Thanks!

--
Al



* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Justin Piszcz @ 2007-01-12 19:56 UTC (permalink / raw)
  To: Al Boldi; +Cc: linux-kernel, linux-raid, xfs



On Fri, 12 Jan 2007, Al Boldi wrote:

> Justin Piszcz wrote:
> > RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
> >
> > This should be 1:14 not 1:06(was with a similarly sized file but not the
> > same) the 1:14 is the same file as used with the other benchmarks.  and to
> > get that I used 256mb read-ahead and 16384 stripe size ++ 128
> > max_sectors_kb (same size as my sw raid5 chunk size)
> 
> max_sectors_kb is probably your key. On my system I get twice the read 
> performance by just reducing max_sectors_kb from default 512 to 192.
> 
> Can you do a fresh reboot to shell and then:
> $ cat /sys/block/hda/queue/*
> $ cat /proc/meminfo
> $ echo 3 > /proc/sys/vm/drop_caches
> $ dd if=/dev/hda of=/dev/null bs=1M count=10240
> $ echo 192 > /sys/block/hda/queue/max_sectors_kb
> $ echo 3 > /proc/sys/vm/drop_caches
> $ dd if=/dev/hda of=/dev/null bs=1M count=10240
> 
> 
> Thanks!
> 
> --
> Al
> 
> 

Ok. sec


* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Justin Piszcz @ 2007-01-12 20:15 UTC (permalink / raw)
  To: Al Boldi; +Cc: linux-kernel, linux-raid, xfs

Btw, max_sectors_kb did improve my performance a little, but 
stripe_cache_size + read-ahead were the main optimizations, making 
everything go faster by about 1.5x.  I also have individual bonnie++ 
benchmarks of [only] the max_sectors_kb change; it improved the times from 
roughly 8 minutes per bonnie run to about 7 minutes 11 seconds.  See below, 
and after that is what you requested.

# Options used:
# blockdev --setra 1536 /dev/md3 (back to default)
# cat /sys/block/sd{e,g,i,k}/queue/max_sectors_kb
# value: 512
# value: 512
# value: 512
# value: 512
# Test with the chunk size of the raid array (128)
# echo 128 > /sys/block/sde/queue/max_sectors_kb
# echo 128 > /sys/block/sdg/queue/max_sectors_kb
# echo 128 > /sys/block/sdi/queue/max_sectors_kb
# echo 128 > /sys/block/sdk/queue/max_sectors_kb
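
The chunk size can also be read back from the array itself and the members
set in a loop, something like this (a sketch, assuming mdadm is installed):

mdadm --detail /dev/md3 | grep -i 'chunk size'        # reports 128K here
for disk in sde sdg sdi sdk; do
    echo 128 > /sys/block/$disk/queue/max_sectors_kb
done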

max_sectors_kb128_run1:max_sectors_kb128_run1,4000M,46522,98,109829,19,42776,12,46527,97,86206,14,647.7,1,16:100000:16/64,874,9,29123,97,2778,16,852,9,25399,86,1396,10
max_sectors_kb128_run2:max_sectors_kb128_run2,4000M,44037,99,107971,19,42420,12,46385,97,85773,14,628.8,1,16:100000:16/64,981,10,23006,77,3185,19,848,9,27891,94,1737,13
max_sectors_kb128_run3:max_sectors_kb128_run3,4000M,46501,98,108313,19,42558,12,46314,97,87697,15,617.0,1,16:100000:16/64,864,9,29795,99,2744,16,897,9,29021,98,1439,10
max_sectors_kb128_run4:max_sectors_kb128_run4,4000M,40750,98,108959,19,42519,12,45027,97,86484,14,637.0,1,16:100000:16/64,929,10,29641,98,2476,14,883,9,29529,99,1867,13
max_sectors_kb128_run5:max_sectors_kb128_run5,4000M,46664,98,108387,19,42801,12,46423,97,87379,14,642.5,0,16:100000:16/64,925,10,29756,99,2759,16,915,10,28694,97,1215,8

162.54user 43.96system 7:12.02elapsed 47%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (5major+1104minor)pagefaults 0swaps
168.75user 43.51system 7:14.49elapsed 48%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (13major+1092minor)pagefaults 0swaps
162.76user 44.18system 7:12.26elapsed 47%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (13major+1096minor)pagefaults 0swaps
178.91user 43.39system 7:24.39elapsed 50%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (13major+1094minor)pagefaults 0swaps
162.45user 43.86system 7:11.26elapsed 47%CPU (0avgtext+0avgdata 
0maxresident)k
0inputs+0outputs (13major+1092minor)pagefaults 0swaps

---------------

# cat /sys/block/sd[abcdefghijk]/queue/*
cat: /sys/block/sda/queue/iosched: Is a directory
32767
512
128
128
noop [anticipatory] 
cat: /sys/block/sdb/queue/iosched: Is a directory
32767
512
128
128
noop [anticipatory] 
cat: /sys/block/sdc/queue/iosched: Is a directory
32767
128
128
128
noop [anticipatory] 
cat: /sys/block/sdd/queue/iosched: Is a directory
32767
128
128
128
noop [anticipatory] 
cat: /sys/block/sde/queue/iosched: Is a directory
32767
128
128
128
noop [anticipatory] 
cat: /sys/block/sdf/queue/iosched: Is a directory
32767
128
128
128
noop [anticipatory] 
cat: /sys/block/sdg/queue/iosched: Is a directory
32767
128
128
128
noop [anticipatory] 
cat: /sys/block/sdh/queue/iosched: Is a directory
32767
128
128
128
noop [anticipatory] 
cat: /sys/block/sdi/queue/iosched: Is a directory
32767
128
128
128
noop [anticipatory] 
cat: /sys/block/sdj/queue/iosched: Is a directory
32767
128
128
128
noop [anticipatory] 
cat: /sys/block/sdk/queue/iosched: Is a directory
32767
128
128
128
noop [anticipatory] 
# 

(Note: only four of these disks, the Raptors, are in use in the RAID 5 array md3.)
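
(A quick way to confirm which disks belong to md3, for completeness:)

cat /proc/mdstat            # lists each md array and its member disks
mdadm --detail /dev/md3     # shows the RAID level, chunk size and members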

# cat /proc/meminfo
MemTotal:      2048904 kB
MemFree:       1299980 kB
Buffers:          1408 kB
Cached:          58032 kB
SwapCached:          0 kB
Active:          65012 kB
Inactive:        33796 kB
HighTotal:     1153312 kB
HighFree:      1061792 kB
LowTotal:       895592 kB
LowFree:        238188 kB
SwapTotal:     2200760 kB
SwapFree:      2200760 kB
Dirty:               8 kB
Writeback:           0 kB
AnonPages:       39332 kB
Mapped:          20248 kB
Slab:            37116 kB
SReclaimable:    10580 kB
SUnreclaim:      26536 kB
PageTables:       1284 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   3225212 kB
Committed_AS:   111056 kB
VmallocTotal:   114680 kB
VmallocUsed:      3828 kB
VmallocChunk:   110644 kB
# 

# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/md3 of=/dev/null bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s
# for i in sde sdg sdi sdk; do   echo 192 > 
/sys/block/"$i"/queue/max_sectors_kb;   echo "Set 
/sys/block/"$i"/queue/max_sectors_kb to 192kb"; done
Set /sys/block/sde/queue/max_sectors_kb to 192kb
Set /sys/block/sdg/queue/max_sectors_kb to 192kb
Set /sys/block/sdi/queue/max_sectors_kb to 192kb
Set /sys/block/sdk/queue/max_sectors_kb to 192kb
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/md3 of=/dev/null bs=1M count=10240 
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s

Awful performance with your numbers/drop_caches settings!

What were your tests designed to show?


Justin.

On Fri, 12 Jan 2007, Justin Piszcz wrote:

> 
> 
> On Fri, 12 Jan 2007, Al Boldi wrote:
> 
> > Justin Piszcz wrote:
> > > RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
> > >
> > > This should be 1:14 not 1:06(was with a similarly sized file but not the
> > > same) the 1:14 is the same file as used with the other benchmarks.  and to
> > > get that I used 256mb read-ahead and 16384 stripe size ++ 128
> > > max_sectors_kb (same size as my sw raid5 chunk size)
> > 
> > max_sectors_kb is probably your key. On my system I get twice the read 
> > performance by just reducing max_sectors_kb from default 512 to 192.
> > 
> > Can you do a fresh reboot to shell and then:
> > $ cat /sys/block/hda/queue/*
> > $ cat /proc/meminfo
> > $ echo 3 > /proc/sys/vm/drop_caches
> > $ dd if=/dev/hda of=/dev/null bs=1M count=10240
> > $ echo 192 > /sys/block/hda/queue/max_sectors_kb
> > $ echo 3 > /proc/sys/vm/drop_caches
> > $ dd if=/dev/hda of=/dev/null bs=1M count=10240
> > 
> > 
> > Thanks!
> > 
> > --
> > Al
> > 
> > 
> 
> Ok. sec
> 


* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Bill Davidsen @ 2007-01-12 20:41 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: Al Boldi, linux-kernel, linux-raid, xfs

Justin Piszcz wrote:
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/dev/md3 of=/dev/null bs=1M count=10240
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s
> # for i in sde sdg sdi sdk; do   echo 192 > 
> /sys/block/"$i"/queue/max_sectors_kb;   echo "Set 
> /sys/block/"$i"/queue/max_sectors_kb to 192kb"; done
> Set /sys/block/sde/queue/max_sectors_kb to 192kb
> Set /sys/block/sdg/queue/max_sectors_kb to 192kb
> Set /sys/block/sdi/queue/max_sectors_kb to 192kb
> Set /sys/block/sdk/queue/max_sectors_kb to 192kb
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/dev/md3 of=/dev/null bs=1M count=10240 
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s
>
> Awful performance with your numbers/drop_caches settings.. !
>
> What were your tests designed to show?
>   
To start, I expect them to show a change in write, not read... and IIRC (I 
didn't look it up) drop_caches just flushes the caches so you start with 
known memory contents: none.
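
For what it's worth, the usual pattern looks roughly like this (a sketch;
the dd target is the md device from earlier in the thread).  Note that
drop_caches only discards clean pagecache, dentries and inodes, so dirty
data should be written out first:

sync                                  # flush dirty pages; drop_caches skips them
echo 3 > /proc/sys/vm/drop_caches     # 1=pagecache, 2=dentries+inodes, 3=both
dd if=/dev/md3 of=/dev/null bs=1M count=10240
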
>
> Justin.
>
> On Fri, 12 Jan 2007, Justin Piszcz wrote:
>
>   
>> On Fri, 12 Jan 2007, Al Boldi wrote:
>>
>>     
>>> Justin Piszcz wrote:
>>>       
>>>> RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
>>>>
>>>> This should be 1:14 not 1:06(was with a similarly sized file but not the
>>>> same) the 1:14 is the same file as used with the other benchmarks.  and to
>>>> get that I used 256mb read-ahead and 16384 stripe size ++ 128
>>>> max_sectors_kb (same size as my sw raid5 chunk size)
>>>>         
>>> max_sectors_kb is probably your key. On my system I get twice the read 
>>> performance by just reducing max_sectors_kb from default 512 to 192.
>>>
>>> Can you do a fresh reboot to shell and then:
>>> $ cat /sys/block/hda/queue/*
>>> $ cat /proc/meminfo
>>> $ echo 3 > /proc/sys/vm/drop_caches
>>> $ dd if=/dev/hda of=/dev/null bs=1M count=10240
>>> $ echo 192 > /sys/block/hda/queue/max_sectors_kb
>>> $ echo 3 > /proc/sys/vm/drop_caches
>>> $ dd if=/dev/hda of=/dev/null bs=1M count=10240
>>>
>>>       

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Al Boldi @ 2007-01-12 21:00 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-kernel, linux-raid, xfs

Justin Piszcz wrote:
> Btw, max sectors did improve my performance a little bit but
> stripe_cache+read_ahead were the main optimizations that made everything
> go faster by about ~1.5x.   I have individual bonnie++ benchmarks of
> [only] the max_sector_kb tests as well, it improved the times from
> 8min/bonnie run -> 7min 11 seconds or so, see below and then after that is
> what you requested.
>
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/dev/md3 of=/dev/null bs=1M count=10240
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s
> # for i in sde sdg sdi sdk; do   echo 192 >
> /sys/block/"$i"/queue/max_sectors_kb;   echo "Set
> /sys/block/"$i"/queue/max_sectors_kb to 192kb"; done
> Set /sys/block/sde/queue/max_sectors_kb to 192kb
> Set /sys/block/sdg/queue/max_sectors_kb to 192kb
> Set /sys/block/sdi/queue/max_sectors_kb to 192kb
> Set /sys/block/sdk/queue/max_sectors_kb to 192kb
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/dev/md3 of=/dev/null bs=1M count=10240
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s
>
> Awful performance with your numbers/drop_caches settings.. !

Can you repeat with /dev/sda only?

With fresh reboot to shell, then:
$ cat /sys/block/sda/queue/max_sectors_kb
$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/sda of=/dev/null bs=1M count=10240

$ echo 192 > /sys/block/sda/queue/max_sectors_kb
$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/sda of=/dev/null bs=1M count=10240

$ echo 128 > /sys/block/sda/queue/max_sectors_kb
$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/sda of=/dev/null bs=1M count=10240

> What were your tests designed to show?

A problem with the block-io.


Thanks!

--
Al



* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Justin Piszcz @ 2007-01-12 21:40 UTC (permalink / raw)
  To: Al Boldi; +Cc: linux-kernel, linux-raid, xfs



On Sat, 13 Jan 2007, Al Boldi wrote:

> Justin Piszcz wrote:
> > Btw, max sectors did improve my performance a little bit but
> > stripe_cache+read_ahead were the main optimizations that made everything
> > go faster by about ~1.5x.   I have individual bonnie++ benchmarks of
> > [only] the max_sector_kb tests as well, it improved the times from
> > 8min/bonnie run -> 7min 11 seconds or so, see below and then after that is
> > what you requested.
> >
> > # echo 3 > /proc/sys/vm/drop_caches
> > # dd if=/dev/md3 of=/dev/null bs=1M count=10240
> > 10240+0 records in
> > 10240+0 records out
> > 10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s
> > # for i in sde sdg sdi sdk; do   echo 192 >
> > /sys/block/"$i"/queue/max_sectors_kb;   echo "Set
> > /sys/block/"$i"/queue/max_sectors_kb to 192kb"; done
> > Set /sys/block/sde/queue/max_sectors_kb to 192kb
> > Set /sys/block/sdg/queue/max_sectors_kb to 192kb
> > Set /sys/block/sdi/queue/max_sectors_kb to 192kb
> > Set /sys/block/sdk/queue/max_sectors_kb to 192kb
> > # echo 3 > /proc/sys/vm/drop_caches
> > # dd if=/dev/md3 of=/dev/null bs=1M count=10240
> > 10240+0 records in
> > 10240+0 records out
> > 10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s
> >
> > Awful performance with your numbers/drop_caches settings.. !
> 
> Can you repeat with /dev/sda only?
> 
> With fresh reboot to shell, then:
> $ cat /sys/block/sda/queue/max_sectors_kb
> $ echo 3 > /proc/sys/vm/drop_caches
> $ dd if=/dev/sda of=/dev/null bs=1M count=10240
> 
> $ echo 192 > /sys/block/sda/queue/max_sectors_kb
> $ echo 3 > /proc/sys/vm/drop_caches
> $ dd if=/dev/sda of=/dev/null bs=1M count=10240
> 
> $ echo 128 > /sys/block/sda/queue/max_sectors_kb
> $ echo 3 > /proc/sys/vm/drop_caches
> $ dd if=/dev/sda of=/dev/null bs=1M count=10240
> 
> > What were your tests designed to show?
> 
> A problem with the block-io.
> 
> 
> Thanks!
> 
> --
> Al
> 
> 

Here you go:

For sda (which is only a 74GB Raptor), but OK:

# uptime
 16:25:38 up 1 min,  3 users,  load average: 0.23, 0.14, 0.05
# cat /sys/block/sda/queue/max_sectors_kb
512
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/sda of=/dev/null bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 150.891 seconds, 71.2 MB/s
# 


# 
# 
# echo 192 > /sys/block/sda/queue/max_sectors_kb
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/sda of=/dev/null bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 150.192 seconds, 71.5 MB/s
# echo 128 > /sys/block/sda/queue/max_sectors_kb
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/sda of=/dev/null bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 150.15 seconds, 71.5 MB/s


Does this show anything useful?


Justin.


* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Al Boldi @ 2007-01-13  6:11 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-kernel, linux-raid, xfs

Justin Piszcz wrote:
> On Sat, 13 Jan 2007, Al Boldi wrote:
> > Justin Piszcz wrote:
> > > Btw, max sectors did improve my performance a little bit but
> > > stripe_cache+read_ahead were the main optimizations that made
> > > everything go faster by about ~1.5x.   I have individual bonnie++
> > > benchmarks of [only] the max_sector_kb tests as well, it improved the
> > > times from 8min/bonnie run -> 7min 11 seconds or so, see below and
> > > then after that is what you requested.
> >
> > Can you repeat with /dev/sda only?
>
> For sda-- (is a 74GB raptor only)-- but ok.

Do you get the same results for the 150GB Raptors on sd{e,g,i,k}?

> # uptime
>  16:25:38 up 1 min,  3 users,  load average: 0.23, 0.14, 0.05
> # cat /sys/block/sda/queue/max_sectors_kb
> 512
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/dev/sda of=/dev/null bs=1M count=10240
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 150.891 seconds, 71.2 MB/s
> # echo 192 > /sys/block/sda/queue/max_sectors_kb
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/dev/sda of=/dev/null bs=1M count=10240
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 150.192 seconds, 71.5 MB/s
> # echo 128 > /sys/block/sda/queue/max_sectors_kb
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/dev/sda of=/dev/null bs=1M count=10240
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 150.15 seconds, 71.5 MB/s
>
>
> Does this show anything useful?

Probably a latency issue.  md is highly latency sensitive.

What CPU type/speed do you have?  Bootlog/dmesg?
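
Something along these lines would be enough (a sketch):

grep 'model name' /proc/cpuinfo       # CPU type and speed
dmesg | grep -Ei 'raid|md:'           # md/raid-related boot messages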


Thanks!

--
Al



* Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
From: Justin Piszcz @ 2007-01-13  9:40 UTC (permalink / raw)
  To: Al Boldi; +Cc: linux-kernel, linux-raid, xfs



On Sat, 13 Jan 2007, Al Boldi wrote:

> Justin Piszcz wrote:
> > On Sat, 13 Jan 2007, Al Boldi wrote:
> > > Justin Piszcz wrote:
> > > > Btw, max sectors did improve my performance a little bit but
> > > > stripe_cache+read_ahead were the main optimizations that made
> > > > everything go faster by about ~1.5x.   I have individual bonnie++
> > > > benchmarks of [only] the max_sector_kb tests as well, it improved the
> > > > times from 8min/bonnie run -> 7min 11 seconds or so, see below and
> > > > then after that is what you requested.
> > >
> > > Can you repeat with /dev/sda only?
> >
> > For sda-- (is a 74GB raptor only)-- but ok.
> 
> Do you get the same results for the 150GB-raptor on sd{e,g,i,k}?
> 
> > # uptime
> >  16:25:38 up 1 min,  3 users,  load average: 0.23, 0.14, 0.05
> > # cat /sys/block/sda/queue/max_sectors_kb
> > 512
> > # echo 3 > /proc/sys/vm/drop_caches
> > # dd if=/dev/sda of=/dev/null bs=1M count=10240
> > 10240+0 records in
> > 10240+0 records out
> > 10737418240 bytes (11 GB) copied, 150.891 seconds, 71.2 MB/s
> > # echo 192 > /sys/block/sda/queue/max_sectors_kb
> > # echo 3 > /proc/sys/vm/drop_caches
> > # dd if=/dev/sda of=/dev/null bs=1M count=10240
> > 10240+0 records in
> > 10240+0 records out
> > 10737418240 bytes (11 GB) copied, 150.192 seconds, 71.5 MB/s
> > # echo 128 > /sys/block/sda/queue/max_sectors_kb
> > # echo 3 > /proc/sys/vm/drop_caches
> > # dd if=/dev/sda of=/dev/null bs=1M count=10240
> > 10240+0 records in
> > 10240+0 records out
> > 10737418240 bytes (11 GB) copied, 150.15 seconds, 71.5 MB/s
> >
> >
> > Does this show anything useful?
> 
> Probably a latency issue.  md is highly latency sensitive.
> 
> What CPU type/speed do you have?  Bootlog/dmesg?
> 
> 
> Thanks!
> 
> --
> Al
> 
> 

> What CPU type/speed do you have?  Bootlog/dmesg?
Core 2 Duo E6300

The speed is great now that I have tweaked the various settings.

