LKML Archive
From: Ric Wheeler <>
To: Jeremy Higdon <>
Cc: David Chinner <>, Michael Tokarev <>,
	device-mapper development <>,
	Andi Kleen <>,
Subject: Re: [dm-devel] Re: [PATCH] Implement barrier support for single device DM devices
Date: Wed, 20 Feb 2008 08:38:19 -0500	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

Jeremy Higdon wrote:
> On Tue, Feb 19, 2008 at 09:16:44AM +1100, David Chinner wrote:
>> On Mon, Feb 18, 2008 at 04:24:27PM +0300, Michael Tokarev wrote:
>>> First, I still don't understand why in God's sake barriers are "working"
>>> while regular cache flushes are not.  Almost no consumer-grade hard drive
>>> supports write barriers, but they all support regular cache flushes, and
>>> the latter should be enough (while not the most speed-optimal) to ensure
>>> data safety.  Why to require write cache disable (like in XFS FAQ) instead
>>> of going the flush-cache-when-appropriate (as opposed to write-barrier-
>>> when-appropriate) way?
>> Devil's advocate:
>> Why should we need to support multiple different block layer APIs
>> to do the same thing? Surely any hardware that doesn't support barrier
>> operations can emulate them with cache flushes when they receive a
>> barrier I/O from the filesystem....
>> Also, given that disabling the write cache still allows CTQ/NCQ to
>> operate effectively and that in most cases WCD+CTQ is as fast as
>> WCE+barriers, the simplest thing to do is turn off volatile write
>> caches and not require any extra software kludges for safe
>> operation.
> I'll put it even more strongly.  My experience is that disabling write
> cache plus disabling barriers is often much faster than enabling both
> barriers and write cache enabled, when doing metadata intensive
> operations, as long as you have a drive that is good at CTQ/NCQ.
> The only time write cache + barriers is significantly faster is when
> doing single threaded data writes, such as direct I/O, or if CTQ/NCQ
> is not enabled, or the drive does a poor job at it.
> jeremy

It would be interesting to compare numbers.

In the large, single threaded write case, what I have measured is 
roughly 2x faster writes with barriers/write cache enabled on S-ATA/ATA 
class drives. I think that this case alone is a fairly common one.

For very small file sizes, I have seen write cache off beat barriers + 
write cache enabled as well, but barriers start outperforming write 
cache disabled once you get up to moderate file sizes (I need to rerun 
the tests to get precise numbers/crossover data).

The type of workload is also important. In the test cases that I ran, 
the application needs to fsync() each file so we beat up on the barrier 
code pretty heavily.


Thread overview: 25+ messages
2008-02-15 12:08 [PATCH] Implement barrier support for single device DM devices Andi Kleen
2008-02-15 12:20 ` Alasdair G Kergon
2008-02-15 13:07   ` Michael Tokarev
2008-02-15 14:20     ` Andi Kleen
2008-02-15 14:12       ` [dm-devel] " Alasdair G Kergon
2008-02-15 15:34         ` Andi Kleen
2008-02-15 15:31           ` Alan Cox
2008-02-18 12:48         ` Ric Wheeler
2008-02-18 13:24           ` Michael Tokarev
2008-02-18 13:52             ` Ric Wheeler
2008-02-19  2:45               ` Alasdair G Kergon
2008-05-16 19:55                 ` Mike Snitzer
2008-05-16 21:48                   ` Andi Kleen
2008-02-18 22:16             ` David Chinner
2008-02-19  2:56               ` Alasdair G Kergon
2008-02-19  5:36                 ` David Chinner
2008-02-19  9:43                 ` Andi Kleen
2008-02-19  7:19               ` Jeremy Higdon
2008-02-19  7:58                 ` Michael Tokarev
2008-02-20 13:38                 ` Ric Wheeler [this message]
2008-02-21  3:29                 ` Neil Brown
2008-02-21  3:39               ` Neil Brown
2008-02-17 23:31     ` David Chinner
2008-02-19  2:39     ` Alasdair G Kergon
2008-02-19 11:12       ` David Chinner
