LKML Archive on lore.kernel.org
From: Waiman Long <longman@redhat.com>
To: Will Deacon <will.deacon@arm.com>, linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
	mingo@kernel.org, boqun.feng@gmail.com,
	paulmck@linux.vnet.ibm.com
Subject: Re: [PATCH v3 00/14] kernel/locking: qspinlock improvements
Date: Thu, 26 Apr 2018 16:18:56 -0400	[thread overview]
Message-ID: <8fdcfa2d-7717-5eb6-a938-53524db8ea41@redhat.com> (raw)
In-Reply-To: <1524738868-31318-1-git-send-email-will.deacon@arm.com>

On 04/26/2018 06:34 AM, Will Deacon wrote:
> Hi all,
>
> This is version three of the qspinlock patches I posted previously:
>
>   v1: https://lkml.org/lkml/2018/4/5/496
>   v2: https://lkml.org/lkml/2018/4/11/618
>
> Changes since v2 include:
>   * Fixed bisection issues
>   * Fixed x86 PV build
>   * Added patch proposing me as a co-maintainer
>   * Rebased onto -rc2
>
> All feedback welcome,
>
> Will
>
> --->8
>
> Jason Low (1):
>   locking/mcs: Use smp_cond_load_acquire() in mcs spin loop
>
> Waiman Long (1):
>   locking/qspinlock: Add stat tracking for pending vs slowpath
>
> Will Deacon (12):
>   barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
>   locking/qspinlock: Merge struct __qspinlock into struct qspinlock
>   locking/qspinlock: Bound spinning on pending->locked transition in
>     slowpath
>   locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound
>   locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
>   locking/qspinlock: Kill cmpxchg loop when claiming lock from head of
>     queue
>   locking/qspinlock: Use atomic_cond_read_acquire
>   locking/qspinlock: Use smp_cond_load_relaxed to wait for next node
>   locking/qspinlock: Make queued_spin_unlock use smp_store_release
>   locking/qspinlock: Elide back-to-back RELEASE operations with
>     smp_wmb()
>   locking/qspinlock: Use try_cmpxchg instead of cmpxchg when locking
>   MAINTAINERS: Add myself as a co-maintainer for LOCKING PRIMITIVES
>
>  MAINTAINERS                               |   1 +
>  arch/x86/include/asm/qspinlock.h          |  21 ++-
>  arch/x86/include/asm/qspinlock_paravirt.h |   3 +-
>  include/asm-generic/atomic-long.h         |   2 +
>  include/asm-generic/barrier.h             |  27 +++-
>  include/asm-generic/qspinlock.h           |   2 +-
>  include/asm-generic/qspinlock_types.h     |  32 +++-
>  include/linux/atomic.h                    |   2 +
>  kernel/locking/mcs_spinlock.h             |  10 +-
>  kernel/locking/qspinlock.c                | 247 ++++++++++++++----------------
>  kernel/locking/qspinlock_paravirt.h       |  44 ++----
>  kernel/locking/qspinlock_stat.h           |   9 +-
>  12 files changed, 209 insertions(+), 191 deletions(-)
>
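The smp_cond_load_relaxed()/atomic_cond_read_relaxed() primitives introduced by the first patch above spin with relaxed loads until a caller-supplied condition on the loaded value holds, returning the value that satisfied it. For readers without the tree to hand, a user-space approximation in C11 atomics (a sketch only — the names are illustrative, the kernel version is arch-tunable and relaxes the CPU between iterations; this uses GNU statement expressions, as the kernel does):

```c
#include <stdatomic.h>

/* Sketch of smp_cond_load_relaxed(): repeatedly load *ptr with relaxed
 * ordering, evaluate cond_expr against the loaded value VAL, and return
 * the first value for which the condition holds. */
#define cond_load_relaxed_sketch(ptr, cond_expr)                          \
    ({                                                                    \
        __typeof__(atomic_load_explicit(ptr, memory_order_relaxed)) VAL;  \
        for (;;) {                                                        \
            VAL = atomic_load_explicit(ptr, memory_order_relaxed);        \
            if (cond_expr)                                                \
                break;                                                    \
            /* the real primitive would cpu_relax() here */               \
        }                                                                 \
        VAL;                                                              \
    })
```

Usage follows the kernel convention that the condition refers to the loaded value as VAL, e.g. `cond_load_relaxed_sketch(&node->locked, VAL != 0)`.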
Other than my comment on patch 5 (which can wait, as the code path is
unlikely to be used soon), I have no further issues with this patchset.

Acked-by: Waiman Long <longman@redhat.com>

Cheers,
Longman
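For anyone following along outside the tree, the point of the queued_spin_unlock() change (patch 10) is that unlock can be a plain release store of the locked byte, with no atomic RMW or heavier barrier. A C11 sketch with a toy lock (the toy_ names are hypothetical, not the kernel API):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Toy lock word mirroring only the qspinlock 'locked' byte. */
struct toy_qspinlock { _Atomic uint8_t locked; };

static inline void toy_spin_lock(struct toy_qspinlock *lock)
{
    uint8_t old = 0;
    /* acquire on success orders the critical section after lock entry;
     * weak CAS may fail spuriously, so reset 'old' and retry */
    while (!atomic_compare_exchange_weak_explicit(&lock->locked, &old, 1,
                                                  memory_order_acquire,
                                                  memory_order_relaxed))
        old = 0;
}

static inline void toy_spin_unlock(struct toy_qspinlock *lock)
{
    /* the analogue of smp_store_release(&lock->locked, 0):
     * a plain release store suffices to publish the critical section */
    atomic_store_explicit(&lock->locked, 0, memory_order_release);
}
```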


Thread overview: 41+ messages
2018-04-26 10:34 Will Deacon
2018-04-26 10:34 ` [PATCH v3 01/14] barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed Will Deacon
2018-04-27  9:36   ` [tip:locking/core] locking/barriers: Introduce smp_cond_load_relaxed() and atomic_cond_read_relaxed() tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 02/14] locking/qspinlock: Merge struct __qspinlock into struct qspinlock Will Deacon
2018-04-27  9:37   ` [tip:locking/core] locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock' tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 03/14] locking/qspinlock: Bound spinning on pending->locked transition in slowpath Will Deacon
2018-04-27  9:37   ` [tip:locking/core] " tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 04/14] locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound Will Deacon
2018-04-27  9:38   ` [tip:locking/core] " tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 05/14] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath Will Deacon
2018-04-26 15:53   ` Peter Zijlstra
2018-04-26 16:55     ` Will Deacon
2018-04-28 12:45       ` Peter Zijlstra
2018-04-30  8:53         ` Will Deacon
2018-04-26 20:16   ` Waiman Long
2018-04-27 10:16     ` Will Deacon
2018-04-27 11:01       ` [tip:locking/core] locking/qspinlock: Remove duplicate clear_pending() function from PV code tip-bot for Will Deacon
2018-04-27 13:09       ` [PATCH v3 05/14] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath Waiman Long
2018-04-27  9:39   ` [tip:locking/core] locking/qspinlock: Remove unbounded cmpxchg() " tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 06/14] locking/qspinlock: Kill cmpxchg loop when claiming lock from head of queue Will Deacon
2018-04-27  9:39   ` [tip:locking/core] locking/qspinlock: Kill cmpxchg() " tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 07/14] locking/qspinlock: Use atomic_cond_read_acquire Will Deacon
2018-04-27  9:40   ` [tip:locking/core] locking/qspinlock: Use atomic_cond_read_acquire() tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 08/14] locking/mcs: Use smp_cond_load_acquire() in mcs spin loop Will Deacon
2018-04-27  9:40   ` [tip:locking/core] locking/mcs: Use smp_cond_load_acquire() in MCS " tip-bot for Jason Low
2018-04-26 10:34 ` [PATCH v3 09/14] locking/qspinlock: Use smp_cond_load_relaxed to wait for next node Will Deacon
2018-04-27  9:41   ` [tip:locking/core] locking/qspinlock: Use smp_cond_load_relaxed() " tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 10/14] locking/qspinlock: Make queued_spin_unlock use smp_store_release Will Deacon
2018-04-27  9:42   ` [tip:locking/core] locking/qspinlock: Use smp_store_release() in queued_spin_unlock() tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 11/14] locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb() Will Deacon
2018-04-27  9:42   ` [tip:locking/core] " tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 12/14] locking/qspinlock: Use try_cmpxchg instead of cmpxchg when locking Will Deacon
2018-04-27  9:43   ` [tip:locking/core] locking/qspinlock: Use try_cmpxchg() instead of cmpxchg() " tip-bot for Will Deacon
2018-04-26 10:34 ` [PATCH v3 13/14] locking/qspinlock: Add stat tracking for pending vs slowpath Will Deacon
2018-04-27  9:43   ` [tip:locking/core] locking/qspinlock: Add stat tracking for pending vs. slowpath tip-bot for Waiman Long
2018-04-26 10:34 ` [PATCH v3 14/14] MAINTAINERS: Add myself as a co-maintainer for LOCKING PRIMITIVES Will Deacon
2018-04-26 15:55   ` Peter Zijlstra
2018-04-27  9:44   ` [tip:locking/core] MAINTAINERS: Add myself as a co-maintainer for the locking subsystem tip-bot for Will Deacon
2018-04-26 15:54 ` [PATCH v3 00/14] kernel/locking: qspinlock improvements Peter Zijlstra
2018-04-27  9:33   ` Ingo Molnar
2018-04-26 20:18 ` Waiman Long [this message]
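Several entries above (patches 06 and 12) replace cmpxchg() loops with try_cmpxchg(). Like C11's compare-exchange, the kernel's try_cmpxchg() writes the observed value back into the 'expected' slot on failure, so a retry loop needs no explicit reload of the variable. A hedged C11 illustration of the idiom (the function name is made up for the example):

```c
#include <stdatomic.h>

/* Fetch-and-increment written in the try_cmpxchg style: on CAS failure
 * 'old' is refreshed with the freshly observed value, so the loop body
 * contains no separate re-read. */
static int fetch_inc_try_style(_Atomic int *v)
{
    int old = atomic_load_explicit(v, memory_order_relaxed);
    while (!atomic_compare_exchange_weak_explicit(v, &old, old + 1,
                                                  memory_order_relaxed,
                                                  memory_order_relaxed))
        ; /* 'old' already holds the current value; just retry */
    return old;
}
```

The same shape is what lets the qspinlock fast paths drop a load per retry compared with an open-coded cmpxchg() loop.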
