* md: raid5 vs raid10 (f2,n2,o2) benchmarks [w/10 raptors]
@ 2008-03-29 18:20 Justin Piszcz
  0 siblings, 0 replies; only message in thread
From: Justin Piszcz @ 2008-03-29 18:20 UTC (permalink / raw)
  To: linux-kernel, linux-raid; +Cc: xfs, Alan Piszcz

There has been a lot of discussion on the mailing list regarding the
various raid10 layouts, so I benchmarked them against RAID 5, first with
no optimizations and then again with optimizations.  RAID 5 still
generally delivers the best sequential write speed, but raid10,f2
overtakes RAID 5 for sequential reads.

All tests used the XFS filesystem.

Where a result shows 0, bonnie++ reported '+++' -- the operation finished
too quickly to produce an accurate number; in any case I was only
interested in the sequential read and write speeds.  All tests use the
default mkfs.xfs options.  Where I applied filesystem optimizations, they
are mount options only: noatime,nodiratime,logbufs=8,logbsize=262144.
I have done a lot of testing with mkfs.xfs in the past, and its defaults
already optimize for a physical HDD or mdraid, so there is little to gain
from tuning it any further.
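For reference, the "optimized" mounts amount to one mount invocation built
from the options listed above (device and mount point taken from the
commands later in the post; this assembled command is a sketch, not copied
from the original runs):

```shell
# Assemble the mount command for the "optimized" runs from the options above.
# Device (/dev/md3) and mount point (/r1) match the rest of the post.
opts="noatime,nodiratime,logbufs=8,logbsize=262144"
cmd="mount -o ${opts} /dev/md3 /r1"
echo "$cmd"
```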


I have gone back to my optimized RAID 5 configuration; I am done testing
for now :)

1. Test RAID 10 with no optimizations to the disks or filesystems.
    a. mdadm --create /dev/md3 --assume-clean --chunk=256 --level=raid10 --raid-devices=10 --spare-devices=0 --layout=f2 /dev/sd[c-l]1
    b. mdadm --create /dev/md3 --assume-clean --chunk=256 --level=raid10 --raid-devices=10 --spare-devices=0 --layout=n2 /dev/sd[c-l]1
    c. mdadm --create /dev/md3 --assume-clean --chunk=256 --level=raid10 --raid-devices=10 --spare-devices=0 --layout=o2 /dev/sd[c-l]1
2. Test RAID 5 with no optimizations as well using the default layout.
    a. mdadm --create /dev/md3 --assume-clean --chunk=256 --level=raid5 --raid-devices=10 --spare-devices=0 /dev/sd[c-l]1
3. Test RAID 5 with optimizations.
    a. mdadm --create /dev/md3 --assume-clean --chunk=1024 --level=raid5 --raid-devices=10 --spare-devices=0 /dev/sd[c-l]1
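One thing worth keeping in mind when comparing the two levels is usable
capacity. A minimal sketch of the arithmetic, assuming 150 GB per member
partition (the per-partition size is an assumption, not stated in the post):

```shell
# Capacity sketch (150 GB per member is an assumption).
disks=10; member_gb=150
raid10_gb=$(( disks * member_gb / 2 ))   # raid10 keeps 2 copies of everything
raid5_gb=$(( (disks - 1) * member_gb )) # raid5 spends one member's worth on parity
echo "raid10=${raid10_gb}GB raid5=${raid5_gb}GB"
```

So RAID 5 trades roughly 80% more usable space against the read-speed
advantage raid10,f2 shows below.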

4. Format, set permissions, and run the benchmarks.
    a. mkfs.xfs -f /dev/md3; mount /dev/md3 /r1; mkdir /r1/x
       chown -R jpiszcz:users /r1

5. Run the following bonnie++ benchmark 3 times and take the average.
    a. /usr/bin/time /usr/sbin/bonnie++ -d /r1/x -s 16384 -m p34 -n 16:100000:16:64
    b. A script will be responsible for running it 3 times and the total time
       of all runs will also be taken.
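The driver script itself is not included in the post; a minimal sketch of
what such a 3-run wrapper could look like (BENCH stands in for the bonnie++
invocation above and is stubbed here so the skeleton runs standalone):

```shell
#!/bin/bash
# Hypothetical 3-run driver; substitute the real bonnie++ line for BENCH.
BENCH=${BENCH:-"true"}   # stub command so the skeleton is self-contained
total=0
for run in 1 2 3; do
    start=$SECONDS
    $BENCH                              # one benchmark pass
    elapsed=$(( SECONDS - start ))
    total=$(( total + elapsed ))
    echo "run ${run}: ${elapsed}s"
done
echo "total: ${total}s"                 # summed wall time over all runs
```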

   a. Total time for f2: 34:35.75elapsed 69%CPU
   b. bonnie++ csv line below:

   c. Total time for n2: 36:39.66elapsed 67%CPU
   d. bonnie++ csv line below:

   e. Total time for o2: 35:51.84elapsed 67%CPU
   f. bonnie++ csv line below:

   g. Total time for raid5 (256 KiB chunk): 45:44.73elapsed 54%CPU
   h. bonnie++ csv line below:

   a. Total time for f2 (block+fs optimizations): 33:33.13elapsed 72%CPU
   b. bonnie++ csv line below:

   c. Total time for n2 (block+fs optimizations): 33:45.89elapsed 72%CPU
   d. bonnie++ csv line below:

   e. Total time for o2 (block+fs optimizations): 34:54.18elapsed 69%CPU
   f. bonnie++ csv line below:

   g. Total time for raid5 (1024 KiB chunk)(block+fs optimizations)
      32:02.28elapsed 79%CPU
   h. bonnie++ csv line below:
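To compare the runs directly, the MM:SS.ss elapsed strings above convert
to seconds as follows (a throwaway helper for illustration, not part of
the original test scripts):

```shell
# Convert a "MM:SS.ss" elapsed string from /usr/bin/time into seconds.
to_secs() {
    echo "$1" | awk -F: '{ printf "%.2f", $1 * 60 + $2 }'
}
f2=$(to_secs 34:35.75)   # raid10,f2 with no optimizations
r5=$(to_secs 45:44.73)   # raid5, 256 KiB chunk, no optimizations
echo "f2=${f2}s raid5=${r5}s"
```

By this measure the unoptimized f2 run finished about 11 minutes faster
than the unoptimized RAID 5 run.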

1. Problem found with mkfs.xfs using such a large stripe size with RAID10:
# mkfs.xfs /dev/md3 -f
log stripe unit (1048576 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
2. I therefore used a 256 KiB chunk for all RAID 10 testing and for the
    non-optimized RAID 5 test.

Other misc info for RAID 10:
p34:~# mkfs.xfs /dev/md3 -f
meta-data=/dev/md3               isize=256    agcount=32, agsize=5723456 blks
          =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=183150592, imaxpct=25
          =                       sunit=64     swidth=640 blks
naming   =version 2              bsize=4096 
log      =internal log           bsize=4096   blocks=32768, version=2
          =                       sectsz=512   sunit=64 blks, lazy-count=0
realtime =none                   extsz=2621440 blocks=0, rtextents=0
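The sunit/swidth values in the mkfs.xfs output above follow directly from
the array geometry; a quick sanity check of the arithmetic (XFS block size
4096 bytes and the values being expressed in filesystem blocks come from
the same output):

```shell
# Reproduce the sunit/swidth (in 4 KiB blocks) that mkfs.xfs reported.
chunk_kib=256    # mdadm --chunk=256
block_kib=4      # XFS bsize=4096
devices=10       # raid members; mkfs.xfs reported swidth over all 10
sunit=$(( chunk_kib / block_kib ))
swidth=$(( sunit * devices ))
echo "sunit=${sunit} swidth=${swidth}"
```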

p34:~# mount /dev/md3 /r1

p34:~# df -h
/dev/md3              699G  5.1M  699G   1% /r1
