LKML Archive on lore.kernel.org
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: David Howells <dhowells@redhat.com>
Cc: Lennert Buytenhek <buytenh@wantstofly.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	ARM Linux Mailing List  <linux-arm-kernel@lists.arm.linux.org.uk>,
	Dan Williams <dan.j.williams@intel.com>,
	linux-kernel@vger.kernel.org, torvalds@osdl.org
Subject: Re: I/O memory barriers vs SMP memory barriers
Date: Sun, 25 Mar 2007 14:15:42 -0700
Message-ID: <20070325211542.GD3593@linux.vnet.ibm.com>
In-Reply-To: <16051.1174657433@redhat.com>

On Fri, Mar 23, 2007 at 01:43:53PM +0000, David Howells wrote:
> 
> [Resend - this time with a comma in the addresses, not a dot]
> 
> Lennert Buytenhek <buytenh@wantstofly.org> wrote:
> 
> > [ background: On ARM, SMP synchronisation does need barriers but device
> >   synchronisation does not.  The question is that given this, whether
> >   mb() and friends can be NOPs on ARM or not (i.e. whether mb() is
> >   supposed to sync against other CPUs or not, or whether only smp_mb()
> >   can be used for this.)  ]
> 
> Hmmmm...
> 
> I see your problem.  I think the right way to deal with this is to get rid of
> mb(), rmb(), wmb() and read_barrier_depends() and replace them with io_mb(),
> io_rmb(), ...

We will get combinatorial explosion if we aren't -extremely- careful:

1.	Orders only normal memory accesses, which is all that is required
	of smp_*().

2.	Orders both normal and device accesses -- mmiowb().

3.	Orders memory accesses and device accesses, but not necessarily
	the union of the two -- mb(), rmb(), wmb().

4.	Orders only device accesses, which is what seems to be looked
	for here.
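
To make the first and third of those concrete, here is a rough sketch;
the structure, function and register names below are invented for
illustration rather than taken from any real driver:

	/*
	 * Class 1: only normal memory is shared between CPUs, so the
	 * SMP-conditional barrier is enough, and it can collapse to a
	 * compiler barrier on UP builds.
	 */
	struct shared {
		int data;
		int ready;
	};

	static void publish(struct shared *s, int val)
	{
		s->data = val;
		smp_wmb();		/* order the data store before the flag */
		s->ready = 1;
	}

	/*
	 * Class 3: a device has to observe the ordering as well, so a
	 * mandatory barrier is needed even on UP -- the device takes
	 * no part in the CPU's cache-coherence protocol.
	 */
	static void kick(void __iomem *doorbell, u32 *desc, u32 addr)
	{
		desc[0] = addr;		/* descriptor in DMA-able memory */
		wmb();			/* descriptor visible before the doorbell */
		writel(1, doorbell);	/* MMIO write that starts the device */
	}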

						Thanx, Paul

> I think that there are only two places you should be using explicit memory
> barriers:
> 
>  (1) To control inter-CPU effects on an SMP system.
> 
>  (2) To control CPU vs device effects.
> 
> > On Thu, Mar 22, 2007 at 04:17:44PM +0000, Catalin Marinas wrote:
> > 
> > > Is the requirement for mb() to act correctly in the SMP case as well?
> > 
> > That's what the docs seem to suggest.  A couple of snippets from
> > memory-barriers.txt:
> > 
> > [1]  A write memory barrier gives a guarantee that all the STORE operations
> >      specified before the barrier will appear to happen before all the STORE
> >      operations specified after the barrier with respect to the other
> >      components of the system.
> > 
> > [2]  A read barrier is a data dependency barrier plus a guarantee that all the
> >      LOAD operations specified before the barrier will appear to happen before
> >      all the LOAD operations specified after the barrier with respect to the
> >      other components of the system.
> > 
> > [3]     TYPE            MANDATORY               SMP CONDITIONAL
> >         =============== ======================= ===========================
> >         GENERAL         mb()                    smp_mb()
> >         WRITE           wmb()                   smp_wmb()
> >         READ            rmb()                   smp_rmb()
> >         DATA DEPENDENCY read_barrier_depends()  smp_read_barrier_depends()
> > 
> > [4]  Mandatory barriers should not be used to control SMP effects,
> >      since mandatory barriers unnecessarily impose overhead on UP
> >      systems.
> > 
> > Note the wording of 'other components of the system' in [1] and [2] --
> > the way I read it, this includes devices as well as other CPUs.
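
The table in [3] is as much about pairing as anything else: the
SMP-conditional forms have to appear on both sides of the shared data.
A minimal sketch of the pattern memory-barriers.txt has in mind (the
variable names here are made up):

	int data;
	int flag;

	/* CPU 0 */
	data = 42;
	smp_wmb();		/* order the data store before the flag store */
	flag = 1;

	/* CPU 1 */
	while (!flag)
		cpu_relax();
	smp_rmb();		/* order the flag load before the data load */
	BUG_ON(data != 42);
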
> 
> Yes, but I suppose which "other components" may depend on the class of barrier
> used.
> 
> > [4] says that mandatory barriers (i.e. from [3]: mb(), wmb(), rmb(),
> > read_barrier_depends()) SHOULD not be used to control SMP effects, but
> > it does not say that they MUST not.
> 
> As it stands, mb() is a superset of smp_mb(), and rmb() of smp_rmb(), etc.,
> so, yes, currently, mb() implies smp_mb().  However, mb() shouldn't be used if
> smp_mb() is sufficient as that may impact performance on a UP system.
> 
> Really, mb() should only be used with respect to I/O.
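
That superset relationship is visible in the way most arch headers wire
the macros up; roughly (the exact instruction behind mb() varies per
architecture -- real_barrier_insn() below is just a stand-in):

	#define mb()		real_barrier_insn()	/* always a real barrier */

	#ifdef CONFIG_SMP
	#define smp_mb()	mb()
	#else
	#define smp_mb()	barrier()	/* compiler barrier only on UP */
	#endif
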
> 
> > > The memory-barriers.txt doc says that smp_* must be used for the SMP
> > > case.
> > 
> > The exact wording is:
> > 
> > 	[!] Note that SMP memory barriers _must_ be used to control the
> > 	ordering of references to shared memory on SMP systems, though
> > 	the use of locking instead is sufficient.
> > 
> > This can IMHO be interpreted in two ways:
> > 1. If you want to control ordering of references to shared memory on
> >    SMP systems, you must use SMP memory barriers and not any other kind
> >    of memory barrier.
> 
> If the shared memory is purely an inter-CPU effect, yes.  If the shared memory
> is actually a device with side effects, then I/O safe memory barriers are
> required - mb() and co.  Note that there must _also_ be safety wrt other
> CPUs in the system, as other CPUs may also try to access the device.
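
The "safety wrt other CPUs" half of that is exactly what mmiowb() is
for; roughly the pattern (the lock, register offset and command value
are placeholders):

	spin_lock(&dev->lock);
	writel(cmd, dev->regs + CMD_REG);	/* MMIO write with side effects */
	mmiowb();	/* make the write reach the device before another CPU
			 * can take the lock and issue its own MMIO writes */
	spin_unlock(&dev->lock);
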
> 
> > 2. If you want to control ordering of references to shared memory on
> >    SMP systems, you must use memory barriers, and the SMP memory barrier
> >    is the most appropriate barrier type to use.
> 
> You may use locking instead to control inter-CPU effects.  Locks imply one-way
> permeable SMP-class memory barriers.
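
I.e. acquire and release are each only half a fence; something like
this, where x, y, z and the lock are placeholders:

	x = 1;			/* may be reordered into the critical section */
	spin_lock(&lock);	/* acquire: later accesses stay after it */
	y = 1;			/* cannot leak out in either direction */
	spin_unlock(&lock);	/* release: earlier accesses stay before it */
	z = 1;			/* may be reordered into the critical section */
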
> 
> > I'm thinking that [2] is what was intended.  [1] doesn't seem consistent
> > with the rest of the document, but if [1] _is_ what is what was intended,
> > we're off the hook and mb() and friends can be NOPs on ARM.  (But it'd
> > probably still need a thorough audit... :-/ )
> 
> I think the best way to do an audit would be to make mb() and co. deprecated,
> pending obsolete, and to replace them with io_mb() and co.  That way people
> would have to eyeball any usages of mb() and co.
> 
> > > This means that if code uses mb() to control SMP sharing, it is broken.
> > 
> > I'm not so sure.
> 
> If it's _purely_ to control inter-CPU SMP sharing, then yes, it's broken.  It
> must use either a lock or an smp_*mb() barrier.
> 
> Of course, Linus may disagree...
> 
> David
> 

