LKML Archive on lore.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: Davidlohr Bueso <dave@stgolabs.net>
Cc: Jason Low <jason.low2@hp.com>, Ingo Molnar <mingo@kernel.org>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Michel Lespinasse <walken@google.com>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	linux-kernel@vger.kernel.org
Subject: Re: Refactoring mutex spin on owner code
Date: Fri, 30 Jan 2015 08:51:40 +0100
Message-ID: <20150130075140.GS2896@worktop.programming.kicks-ass.net>
In-Reply-To: <1422602080.2005.9.camel@stgolabs.net>

On Thu, Jan 29, 2015 at 11:14:40PM -0800, Davidlohr Bueso wrote:
> > +bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
> >  {
> > +	bool ret;
> > +
> >  	rcu_read_lock();
> > -	while (owner_running(lock, owner)) {
> > -		if (need_resched())
> > +	while (true) {
> > +		/* Return success when the lock owner changed */
> > +		if (lock->owner != owner) {
> 
> Shouldn't this be a READ_ONCE(lock->owner)? We're in a loop and need to
> avoid gcc giving us stale data if the owner is updated after a few
> iterations, no?

There's a barrier() in that loop, and cpu_relax() also implies
barrier(). I'm pretty sure that's more than sufficient to make GCC emit
loads.
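
For reference, a minimal sketch of the loop under discussion, assembled
around the quoted hunk. The quote is truncated, so the owner->on_cpu /
need_resched() bailout below is filled in for illustration and is not
necessarily the exact patch:

bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
{
	bool ret;

	rcu_read_lock();
	while (true) {
		/* Return success when the lock owner changed */
		if (lock->owner != owner) {
			ret = true;
			break;
		}

		/*
		 * Stop spinning when the owner is no longer running
		 * on a CPU, or when we should reschedule. (This check
		 * is an assumption; the quoted hunk cuts off here.)
		 */
		if (!owner->on_cpu || need_resched()) {
			ret = false;
			break;
		}

		/*
		 * cpu_relax() implies barrier(), which clobbers
		 * memory and forces GCC to re-load lock->owner and
		 * owner->on_cpu on the next iteration -- which is
		 * why no READ_ONCE() is needed above.
		 */
		cpu_relax();
	}
	rcu_read_unlock();

	return ret;
}

With a memory clobber in every iteration, GCC cannot cache lock->owner
in a register across iterations, which is the same effect READ_ONCE()
would otherwise provide here.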

Thread overview: 26+ messages
2015-01-26  7:36 [PATCH -tip 0/6] rwsem: Fine tuning Davidlohr Bueso
2015-01-26  7:36 ` [PATCH 1/6] locking/rwsem: Use task->state helpers Davidlohr Bueso
2015-02-04 14:38   ` [tip:locking/core] " tip-bot for Davidlohr Bueso
2015-01-26  7:36 ` [PATCH 2/6] locking/rwsem: Document barrier need when waking tasks Davidlohr Bueso
2015-01-27 17:07   ` Peter Zijlstra
2015-01-27 20:30     ` Davidlohr Bueso
2015-01-26  7:36 ` [PATCH 3/6] locking/rwsem: Set lock ownership ASAP Davidlohr Bueso
2015-01-27 17:10   ` Peter Zijlstra
2015-01-27 19:18     ` Davidlohr Bueso
2015-01-26  7:36 ` [PATCH 4/6] locking/rwsem: Avoid deceiving lock spinners Davidlohr Bueso
2015-01-27 17:23   ` Jason Low
2015-01-28  3:54     ` Davidlohr Bueso
2015-01-28 17:01       ` Tim Chen
2015-01-28 21:03       ` Jason Low
2015-01-29  1:10         ` Davidlohr Bueso
2015-01-29 20:13           ` Jason Low
2015-01-29 20:18             ` Jason Low
2015-01-29 23:15               ` Davidlohr Bueso
2015-01-30  1:52                 ` Refactoring mutex spin on owner code Jason Low
2015-01-30  7:14                   ` Davidlohr Bueso
2015-01-30  7:51                     ` Peter Zijlstra [this message]
2015-01-26  7:36 ` [PATCH 5/6] locking/rwsem: Optimize slowpath/sleeping Davidlohr Bueso
2015-01-27 17:34   ` Peter Zijlstra
2015-01-27 21:57     ` Davidlohr Bueso
2015-01-26  7:36 ` [PATCH 6/6] locking/rwsem: Check for active lock before bailing on spinning Davidlohr Bueso
2015-01-27 18:11   ` Jason Low
