LKML Archive on lore.kernel.org
* [PATCH] locking/mutex: Refactor mutex_spin_on_owner()
@ 2015-03-09 20:14 Jason Low
  2015-03-10  8:11 ` Ingo Molnar
  0 siblings, 1 reply; 4+ messages in thread
From: Jason Low @ 2015-03-09 20:14 UTC (permalink / raw)
  To: Peter Zijlstra, Linus Torvalds, Ingo Molnar, Davidlohr Bueso
  Cc: LKML, Jason Low

This patch applies on top of tip.

-------------------------------------------------------------------
Similar to what Linus suggested for rwsem_spin_on_owner(), in
mutex_spin_on_owner(), instead of having while (true) and breaking
out of the spin loop on lock->owner != owner, we can have the loop
directly check for while (lock->owner == owner). This improves the
readability of the code.

Signed-off-by: Jason Low <jason.low2@hp.com>
---
 kernel/locking/mutex.c |   17 +++++------------
 1 files changed, 5 insertions(+), 12 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 16b2d3c..1c3b7c5 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -224,16 +224,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
 static noinline
 bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 {
-	bool ret;
-
 	rcu_read_lock();
-	while (true) {
-		/* Return success when the lock owner changed */
-		if (lock->owner != owner) {
-			ret = true;
-			break;
-		}
-
+	while (lock->owner == owner) {
 		/*
 		 * Ensure we emit the owner->on_cpu, dereference _after_
 		 * checking lock->owner still matches owner, if that fails,
@@ -242,16 +234,17 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 		 */
 		barrier();
 
+		/* Stop spinning when need_resched or owner is not running. */
 		if (!owner->on_cpu || need_resched()) {
-			ret = false;
-			break;
+			rcu_read_unlock();
+			return false;
 		}
 
 		cpu_relax_lowlatency();
 	}
 	rcu_read_unlock();
 
-	return ret;
+	return true;
 }
 
 /*
-- 
1.7.2.5





* Re: [PATCH] locking/mutex: Refactor mutex_spin_on_owner()
  2015-03-09 20:14 [PATCH] locking/mutex: Refactor mutex_spin_on_owner() Jason Low
@ 2015-03-10  8:11 ` Ingo Molnar
  2015-03-10 16:37   ` Jason Low
  0 siblings, 1 reply; 4+ messages in thread
From: Ingo Molnar @ 2015-03-10  8:11 UTC (permalink / raw)
  To: Jason Low; +Cc: Peter Zijlstra, Linus Torvalds, Davidlohr Bueso, LKML


* Jason Low <jason.low2@hp.com> wrote:

> This patch applies on top of tip.
> 
> -------------------------------------------------------------------
> Similar to what Linus suggested for rwsem_spin_on_owner(), in
> mutex_spin_on_owner(), instead of having while (true) and breaking
> out of the spin loop on lock->owner != owner, we can have the loop
> directly check for while (lock->owner == owner). This improves the
> readability of the code.
> 
> Signed-off-by: Jason Low <jason.low2@hp.com>
> ---
>  kernel/locking/mutex.c |   17 +++++------------
>  1 files changed, 5 insertions(+), 12 deletions(-)
> 
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index 16b2d3c..1c3b7c5 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -224,16 +224,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
>  static noinline
>  bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
>  {
> -	bool ret;
> -
>  	rcu_read_lock();
> -	while (true) {
> -		/* Return success when the lock owner changed */
> -		if (lock->owner != owner) {
> -			ret = true;
> -			break;
> -		}
> -
> +	while (lock->owner == owner) {
>  		/*
>  		 * Ensure we emit the owner->on_cpu, dereference _after_
>  		 * checking lock->owner still matches owner, if that fails,
> @@ -242,16 +234,17 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
>  		 */
>  		barrier();
>  
> +		/* Stop spinning when need_resched or owner is not running. */
>  		if (!owner->on_cpu || need_resched()) {
> -			ret = false;
> -			break;
> +			rcu_read_unlock();
> +			return false;
>  		}
>  
>  		cpu_relax_lowlatency();
>  	}
>  	rcu_read_unlock();
>  
> -	return ret;
> +	return true;

A nit: having multiple return statements in a function is not the 
cleanest approach, especially when we are holding locks.

It's better to add an 'out_unlock' label just before the
rcu_read_unlock() and use that plus 'ret'.
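
For example, a minimal sketch of that shape (illustrative only, reusing
the loop body from the patch above; not an actual follow-up patch):

static noinline
bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
{
	bool ret = true;

	rcu_read_lock();
	while (lock->owner == owner) {
		/* order the owner->on_cpu load after the owner check */
		barrier();

		/* Stop spinning when need_resched or owner is not running. */
		if (!owner->on_cpu || need_resched()) {
			ret = false;
			goto out_unlock;
		}

		cpu_relax_lowlatency();
	}
out_unlock:
	rcu_read_unlock();

	return ret;
}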

Thanks,

	Ingo


* Re: [PATCH] locking/mutex: Refactor mutex_spin_on_owner()
  2015-03-10  8:11 ` Ingo Molnar
@ 2015-03-10 16:37   ` Jason Low
  2015-03-16  9:16     ` Ingo Molnar
  0 siblings, 1 reply; 4+ messages in thread
From: Jason Low @ 2015-03-10 16:37 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Linus Torvalds, Davidlohr Bueso, LKML, jason.low2

On Tue, 2015-03-10 at 09:11 +0100, Ingo Molnar wrote:
> * Jason Low <jason.low2@hp.com> wrote:
> 
> > This patch applies on top of tip.
> > 
> > -------------------------------------------------------------------
> > Similar to what Linus suggested for rwsem_spin_on_owner(), in
> > mutex_spin_on_owner(), instead of having while (true) and breaking
> > out of the spin loop on lock->owner != owner, we can have the loop
> > directly check for while (lock->owner == owner). This improves the
> > readability of the code.
> > 
> > Signed-off-by: Jason Low <jason.low2@hp.com>
> > ---
> >  kernel/locking/mutex.c |   17 +++++------------
> >  1 files changed, 5 insertions(+), 12 deletions(-)
> > 
> > diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> > index 16b2d3c..1c3b7c5 100644
> > --- a/kernel/locking/mutex.c
> > +++ b/kernel/locking/mutex.c
> > @@ -224,16 +224,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
> >  static noinline
> >  bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
> >  {
> > -	bool ret;
> > -
> >  	rcu_read_lock();
> > -	while (true) {
> > -		/* Return success when the lock owner changed */
> > -		if (lock->owner != owner) {
> > -			ret = true;
> > -			break;
> > -		}
> > -
> > +	while (lock->owner == owner) {
> >  		/*
> >  		 * Ensure we emit the owner->on_cpu, dereference _after_
> >  		 * checking lock->owner still matches owner, if that fails,
> > @@ -242,16 +234,17 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
> >  		 */
> >  		barrier();
> >  
> > +		/* Stop spinning when need_resched or owner is not running. */
> >  		if (!owner->on_cpu || need_resched()) {
> > -			ret = false;
> > -			break;
> > +			rcu_read_unlock();
> > +			return false;
> >  		}
> >  
> >  		cpu_relax_lowlatency();
> >  	}
> >  	rcu_read_unlock();
> >  
> > -	return ret;
> > +	return true;
> 
> A nit: having multiple return statements in a function is not the 
> cleanest approach, especially when we are holding locks.
> 
> It's better to add an 'out_unlock' label just before the
> rcu_read_unlock() and use that plus 'ret'.

Okay, I can update this patch. Should we make another similar update for
the rwsem then?
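
For reference, the analogous reshaping of rwsem_spin_on_owner() would
presumably look roughly like the sketch below (hypothetical; only the
spin loop is shown, and the rwsem-specific handling that follows the
loop in the real code is omitted):

static noinline
bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
{
	bool ret = true;

	rcu_read_lock();
	while (sem->owner == owner) {
		/* order the owner->on_cpu load after the owner check */
		barrier();

		if (!owner->on_cpu || need_resched()) {
			ret = false;
			goto out_unlock;
		}

		cpu_relax_lowlatency();
	}
out_unlock:
	rcu_read_unlock();

	return ret;
}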



* Re: [PATCH] locking/mutex: Refactor mutex_spin_on_owner()
  2015-03-10 16:37   ` Jason Low
@ 2015-03-16  9:16     ` Ingo Molnar
  0 siblings, 0 replies; 4+ messages in thread
From: Ingo Molnar @ 2015-03-16  9:16 UTC (permalink / raw)
  To: Jason Low; +Cc: Peter Zijlstra, Linus Torvalds, Davidlohr Bueso, LKML


* Jason Low <jason.low2@hp.com> wrote:

> On Tue, 2015-03-10 at 09:11 +0100, Ingo Molnar wrote:
> > * Jason Low <jason.low2@hp.com> wrote:
> > 
> > > This patch applies on top of tip.
> > > 
> > > -------------------------------------------------------------------
> > > Similar to what Linus suggested for rwsem_spin_on_owner(), in
> > > mutex_spin_on_owner(), instead of having while (true) and breaking
> > > out of the spin loop on lock->owner != owner, we can have the loop
> > > directly check for while (lock->owner == owner). This improves the
> > > readability of the code.
> > > 
> > > Signed-off-by: Jason Low <jason.low2@hp.com>
> > > ---
> > >  kernel/locking/mutex.c |   17 +++++------------
> > >  1 files changed, 5 insertions(+), 12 deletions(-)
> > > 
> > > diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> > > index 16b2d3c..1c3b7c5 100644
> > > --- a/kernel/locking/mutex.c
> > > +++ b/kernel/locking/mutex.c
> > > @@ -224,16 +224,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
> > >  static noinline
> > >  bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
> > >  {
> > > -	bool ret;
> > > -
> > >  	rcu_read_lock();
> > > -	while (true) {
> > > -		/* Return success when the lock owner changed */
> > > -		if (lock->owner != owner) {
> > > -			ret = true;
> > > -			break;
> > > -		}
> > > -
> > > +	while (lock->owner == owner) {
> > >  		/*
> > >  		 * Ensure we emit the owner->on_cpu, dereference _after_
> > >  		 * checking lock->owner still matches owner, if that fails,
> > > @@ -242,16 +234,17 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
> > >  		 */
> > >  		barrier();
> > >  
> > > +		/* Stop spinning when need_resched or owner is not running. */
> > >  		if (!owner->on_cpu || need_resched()) {
> > > -			ret = false;
> > > -			break;
> > > +			rcu_read_unlock();
> > > +			return false;
> > >  		}
> > >  
> > >  		cpu_relax_lowlatency();
> > >  	}
> > >  	rcu_read_unlock();
> > >  
> > > -	return ret;
> > > +	return true;
> > 
> > A nit: having multiple return statements in a function is not the 
> > cleanest approach, especially when we are holding locks.
> > 
> > It's better to add an 'out_unlock' label just before the
> > rcu_read_unlock() and use that plus 'ret'.
> 
> Okay, I can update this patch. Should we make another similar update 
> for the rwsem then?

Yeah, I suppose so.

Thanks,

	Ingo

