LKML Archive on lore.kernel.org
From: "tip-bot2 for Thomas Gleixner" <tip-bot2@linutronix.de>
To: linux-tip-commits@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: locking/core] locking/rt: Add base code for RT rw_semaphore and rwlock
Date: Tue, 17 Aug 2021 20:14:28 -0000
Message-ID: <162923126814.25758.11395417168543723983.tip-bot2@tip-bot2>
In-Reply-To: <20210815211302.957920571@linutronix.de>

The following commit has been merged into the locking/core branch of tip:

Commit-ID:     943f0edb754fac195043c620b44f920e4fb76ec8
Gitweb:        https://git.kernel.org/tip/943f0edb754fac195043c620b44f920e4fb76ec8
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Sun, 15 Aug 2021 23:28:03 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 17 Aug 2021 17:12:22 +02:00

locking/rt: Add base code for RT rw_semaphore and rwlock

On PREEMPT_RT, rw_semaphores and rwlocks are substituted with an rtmutex
and a reader count. The implementation is writer unfair, as priority
inheritance across multiple readers is not feasible, but experience has
shown that real-time workloads are typically not the workloads that are
sensitive to writer starvation.

The inner workings of rw_semaphores and rwlocks on RT are almost identical
except for the task state and signal handling. rw_semaphores are not state
preserving across contention; they are expected to enter and leave with
state == TASK_RUNNING. rwlocks have a mechanism to preserve the state of
the task at entry and restore it after unblocking, taking potential
non-lock-related wakeups into account. rw_semaphores can also have a
blocked state interrupted by signal handling, while rwlocks ignore signals.

To avoid code duplication, provide a shared implementation which accounts
for the small differences in state and signal handling. The code is
included into the relevant rw_semaphore/rwlock base code and compiled
separately for each use case.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210815211302.957920571@linutronix.de
---
 include/linux/rwbase_rt.h  |  39 +++++-
 kernel/locking/rwbase_rt.c | 263 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 302 insertions(+)
 create mode 100644 include/linux/rwbase_rt.h
 create mode 100644 kernel/locking/rwbase_rt.c

diff --git a/include/linux/rwbase_rt.h b/include/linux/rwbase_rt.h
new file mode 100644
index 0000000..1d264dd
--- /dev/null
+++ b/include/linux/rwbase_rt.h
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#ifndef _LINUX_RWBASE_RT_H
+#define _LINUX_RWBASE_RT_H
+
+#include <linux/rtmutex.h>
+#include <linux/atomic.h>
+
+#define READER_BIAS		(1U << 31)
+#define WRITER_BIAS		(1U << 30)
+
+struct rwbase_rt {
+	atomic_t		readers;
+	struct rt_mutex_base	rtmutex;
+};
+
+#define __RWBASE_INITIALIZER(name)				\
+{								\
+	.readers = ATOMIC_INIT(READER_BIAS),			\
+	.rtmutex = __RT_MUTEX_BASE_INITIALIZER(name.rtmutex),	\
+}
+
+#define init_rwbase_rt(rwbase)					\
+	do {							\
+		rt_mutex_base_init(&(rwbase)->rtmutex);		\
+		atomic_set(&(rwbase)->readers, READER_BIAS);	\
+	} while (0)
+
+
+static __always_inline bool rw_base_is_locked(struct rwbase_rt *rwb)
+{
+	return atomic_read(&rwb->readers) != READER_BIAS;
+}
+
+static __always_inline bool rw_base_is_contended(struct rwbase_rt *rwb)
+{
+	return atomic_read(&rwb->readers) > 0;
+}
+
+#endif /* _LINUX_RWBASE_RT_H */
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
new file mode 100644
index 0000000..4ba1508
--- /dev/null
+++ b/kernel/locking/rwbase_rt.c
@@ -0,0 +1,263 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * RT-specific reader/writer semaphores and reader/writer locks
+ *
+ * down_write/write_lock()
+ *  1) Lock rtmutex
+ *  2) Remove the reader BIAS to force readers into the slow path
+ *  3) Wait until all readers have left the critical section
+ *  4) Mark it write locked
+ *
+ * up_write/write_unlock()
+ *  1) Remove the write locked marker
+ *  2) Set the reader BIAS, so readers can use the fast path again
+ *  3) Unlock rtmutex, to release blocked readers
+ *
+ * down_read/read_lock()
+ *  1) Try fast path acquisition (reader BIAS is set)
+ *  2) Take rtmutex::wait_lock, which protects the writelocked flag
+ *  3) If !writelocked, acquire it for read
+ *  4) If writelocked, block on rtmutex
+ *  5) unlock rtmutex, goto 1)
+ *
+ * up_read/read_unlock()
+ *  1) Try fast path release (reader count != 1)
+ *  2) Wake the writer waiting in down_write()/write_lock() #3
+ *
+ * down_read/read_lock() #3 has the consequence that rw semaphores and rw
+ * locks on RT are not writer fair. Writers, which should be avoided in
+ * RT tasks anyway (think mmap_sem), are instead subject to the rtmutex
+ * priority/DL inheritance mechanism.
+ *
+ * It's possible to make the rw primitives writer fair by keeping a list of
+ * active readers. A blocked writer would force all newly incoming readers
+ * to block on the rtmutex, but the rtmutex would have to be proxy locked
+ * for one reader after the other. We can't use multi-reader inheritance
+ * because there is no way to support that with SCHED_DEADLINE.
+ * Implementing the one by one reader boosting/handover mechanism is a
+ * major surgery for a very dubious value.
+ *
+ * The risk of writer starvation is there, but the pathological use cases
+ * which trigger it are not necessarily the typical RT workloads.
+ *
+ * Common code shared between RT rw_semaphore and rwlock
+ */
+
+static __always_inline int rwbase_read_trylock(struct rwbase_rt *rwb)
+{
+	int r;
+
+	/*
+	 * Increment the reader count, if rwb->readers < 0, i.e. READER_BIAS
+	 * is set.
+	 */
+	for (r = atomic_read(&rwb->readers); r < 0;) {
+		if (likely(atomic_try_cmpxchg(&rwb->readers, &r, r + 1)))
+			return 1;
+	}
+	return 0;
+}
+
+static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
+				      unsigned int state)
+{
+	struct rt_mutex_base *rtm = &rwb->rtmutex;
+	int ret;
+
+	raw_spin_lock_irq(&rtm->wait_lock);
+	/*
+	 * Allow readers, as long as the writer has not completely
+	 * acquired the semaphore for write.
+	 */
+	if (atomic_read(&rwb->readers) != WRITER_BIAS) {
+		atomic_inc(&rwb->readers);
+		raw_spin_unlock_irq(&rtm->wait_lock);
+		return 0;
+	}
+
+	/*
+	 * Call into the slow lock path with the rtmutex->wait_lock
+	 * held, so this can't result in the following race:
+	 *
+	 * Reader1		Reader2		Writer
+	 *			down_read()
+	 *					down_write()
+	 *					rtmutex_lock(m)
+	 *					wait()
+	 * down_read()
+	 * unlock(m->wait_lock)
+	 *			up_read()
+	 *			wake(Writer)
+	 *					lock(m->wait_lock)
+	 *					sem->writelocked=true
+	 *					unlock(m->wait_lock)
+	 *
+	 *					up_write()
+	 *					sem->writelocked=false
+	 *					rtmutex_unlock(m)
+	 *			down_read()
+	 *					down_write()
+	 *					rtmutex_lock(m)
+	 *					wait()
+	 * rtmutex_lock(m)
+	 *
+	 * That would put Reader1 behind the writer waiting on
+	 * Reader2 to call up_read(), which might be unbound.
+	 */
+
+	/*
+	 * For rwlocks this returns 0 unconditionally, so the below
+	 * !ret conditionals are optimized out.
+	 */
+	ret = rwbase_rtmutex_slowlock_locked(rtm, state);
+
+	/*
+	 * On success the rtmutex is held, so there can't be a writer
+	 * active. Increment the reader count and immediately drop the
+	 * rtmutex again.
+	 *
+	 * rtmutex->wait_lock has to be unlocked in any case of course.
+	 */
+	if (!ret)
+		atomic_inc(&rwb->readers);
+	raw_spin_unlock_irq(&rtm->wait_lock);
+	if (!ret)
+		rwbase_rtmutex_unlock(rtm);
+	return ret;
+}
+
+static __always_inline int rwbase_read_lock(struct rwbase_rt *rwb,
+					    unsigned int state)
+{
+	if (rwbase_read_trylock(rwb))
+		return 0;
+
+	return __rwbase_read_lock(rwb, state);
+}
+
+static void __sched __rwbase_read_unlock(struct rwbase_rt *rwb,
+					 unsigned int state)
+{
+	struct rt_mutex_base *rtm = &rwb->rtmutex;
+	struct task_struct *owner;
+
+	raw_spin_lock_irq(&rtm->wait_lock);
+	/*
+	 * Wake the writer, i.e. the rtmutex owner. It might release the
+	 * rtmutex concurrently in the fast path (due to a signal), but to
+	 * clean up rwb->readers it needs to acquire rtm->wait_lock. The
+	 * worst case which can happen is a spurious wakeup.
+	 */
+	owner = rt_mutex_owner(rtm);
+	if (owner)
+		wake_up_state(owner, state);
+
+	raw_spin_unlock_irq(&rtm->wait_lock);
+}
+
+static __always_inline void rwbase_read_unlock(struct rwbase_rt *rwb,
+					       unsigned int state)
+{
+	/*
+	 * rwb->readers can only hit 0 when a writer is waiting for the
+	 * active readers to leave the critical section.
+	 */
+	if (unlikely(atomic_dec_and_test(&rwb->readers)))
+		__rwbase_read_unlock(rwb, state);
+}
+
+static inline void __rwbase_write_unlock(struct rwbase_rt *rwb, int bias,
+					 unsigned long flags)
+{
+	struct rt_mutex_base *rtm = &rwb->rtmutex;
+
+	atomic_add(READER_BIAS - bias, &rwb->readers);
+	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
+	rwbase_rtmutex_unlock(rtm);
+}
+
+static inline void rwbase_write_unlock(struct rwbase_rt *rwb)
+{
+	struct rt_mutex_base *rtm = &rwb->rtmutex;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rtm->wait_lock, flags);
+	__rwbase_write_unlock(rwb, WRITER_BIAS, flags);
+}
+
+static inline void rwbase_write_downgrade(struct rwbase_rt *rwb)
+{
+	struct rt_mutex_base *rtm = &rwb->rtmutex;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rtm->wait_lock, flags);
+	/* Release it and account current as reader */
+	__rwbase_write_unlock(rwb, WRITER_BIAS - 1, flags);
+}
+
+static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
+				     unsigned int state)
+{
+	struct rt_mutex_base *rtm = &rwb->rtmutex;
+	unsigned long flags;
+
+	/* Take the rtmutex as a first step */
+	if (rwbase_rtmutex_lock_state(rtm, state))
+		return -EINTR;
+
+	/* Force readers into slow path */
+	atomic_sub(READER_BIAS, &rwb->readers);
+
+	raw_spin_lock_irqsave(&rtm->wait_lock, flags);
+	/*
+	 * set_current_state() for rw_semaphore
+	 * current_save_and_set_rtlock_wait_state() for rwlock
+	 */
+	rwbase_set_and_save_current_state(state);
+
+	/* Block until all readers have left the critical section. */
+	for (; atomic_read(&rwb->readers);) {
+		/* Optimized out for rwlocks */
+		if (rwbase_signal_pending_state(state, current)) {
+			__set_current_state(TASK_RUNNING);
+			__rwbase_write_unlock(rwb, 0, flags);
+			return -EINTR;
+		}
+		raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
+
+		/*
+		 * Schedule and wait for the readers to leave the critical
+		 * section. The last reader leaving it wakes the waiter.
+		 */
+		if (atomic_read(&rwb->readers) != 0)
+			rwbase_schedule();
+		set_current_state(state);
+		raw_spin_lock_irqsave(&rtm->wait_lock, flags);
+	}
+
+	atomic_set(&rwb->readers, WRITER_BIAS);
+	rwbase_restore_current_state();
+	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
+	return 0;
+}
+
+static inline int rwbase_write_trylock(struct rwbase_rt *rwb)
+{
+	struct rt_mutex_base *rtm = &rwb->rtmutex;
+	unsigned long flags;
+
+	if (!rwbase_rtmutex_trylock(rtm))
+		return 0;
+
+	atomic_sub(READER_BIAS, &rwb->readers);
+
+	raw_spin_lock_irqsave(&rtm->wait_lock, flags);
+	if (!atomic_read(&rwb->readers)) {
+		atomic_set(&rwb->readers, WRITER_BIAS);
+		raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
+		return 1;
+	}
+	__rwbase_write_unlock(rwb, 0, flags);
+	return 0;
+}


Thread overview: 160+ messages
2021-08-15 21:27 [patch V5 00/72] locking, sched: The PREEMPT-RT locking infrastructure Thomas Gleixner
2021-08-15 21:27 ` [patch V5 01/72] locking/local_lock: Add missing owner initialization Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 02/72] locking/rtmutex: Set proper wait context for lockdep Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 03/72] sched: Split out the wakeup state check Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] sched/wakeup: Split out the wakeup ->__state check tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 04/72] sched: Introduce TASK_RTLOCK_WAIT Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] sched/wakeup: Introduce the TASK_RTLOCK_WAIT state bit tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 05/72] sched: Reorganize current::__state helpers Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] sched/wakeup: Reorganize the " tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 06/72] sched: Prepare for RT sleeping spin/rwlocks Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] sched/wakeup: " tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 07/72] sched: Rework the __schedule() preempt argument Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] sched/core: " tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 08/72] sched: Provide schedule point for RT locks Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] sched/core: Provide a scheduling " tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 09/72] sched/wake_q: Provide WAKE_Q_HEAD_INITIALIZER Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] sched/wake_q: Provide WAKE_Q_HEAD_INITIALIZER() tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 10/72] media/atomisp: Use lockdep instead of *mutex_is_locked() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2021-08-15 21:27 ` [patch V5 11/72] rtmutex: Remove rt_mutex_is_locked() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/rtmutex: " tip-bot2 for Peter Zijlstra
2021-08-15 21:27 ` [patch V5 12/72] rtmutex: Convert macros to inlines Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/rtmutex: " tip-bot2 for Sebastian Andrzej Siewior
2021-08-15 21:27 ` [patch V5 13/72] rtmutex: Switch to try_cmpxchg() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/rtmutex: Switch to from cmpxchg_*() to try_cmpxchg_*() tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 14/72] rtmutex: Split API and implementation Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/rtmutex: Split API from implementation tip-bot2 for Thomas Gleixner
2021-08-15 21:27 ` [patch V5 15/72] rtmutex: Split out the inner parts of struct rtmutex Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/rtmutex: Split out the inner parts of 'struct rtmutex' tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 16/72] locking/rtmutex: Provide rt_mutex_slowlock_locked() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 17/72] rtmutex: Provide rt_mutex_base_is_locked() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/rtmutex: " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 18/72] locking: Add base code for RT rw_semaphore and rwlock Thomas Gleixner
2021-08-16  5:00   ` Davidlohr Bueso
2021-08-17 20:14   ` tip-bot2 for Thomas Gleixner [this message]
2021-08-15 21:28 ` [patch V5 19/72] locking/rwsem: Add rtmutex based R/W semaphore implementation Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 20/72] locking/rtmutex: Add wake_state to rt_mutex_waiter Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 21/72] locking/rtmutex: Provide rt_wake_q and helpers Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/rtmutex: Provide rt_wake_q_head " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 22/72] locking/rtmutex: Use rt_mutex_wake_q_head Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 23/72] locking/rtmutex: Prepare RT rt_mutex_wake_q for RT locks Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 24/72] locking/rtmutex: Guard regular sleeping locks specific functions Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 25/72] locking/spinlock: Split the lock types header Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/spinlock: Split the lock types header, and move the raw types into <linux/spinlock_types_raw.h> tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 26/72] locking/rtmutex: Prevent future include recursion hell Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Sebastian Andrzej Siewior
2021-08-15 21:28 ` [patch V5 27/72] locking/lockdep: Reduce includes in debug_locks.h Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/lockdep: Reduce header dependencies in <linux/debug_locks.h> tip-bot2 for Sebastian Andrzej Siewior
2021-08-15 21:28 ` [patch V5 28/72] rbtree: Split out the rbtree type definitions Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] rbtree: Split out the rbtree type definitions into <linux/rbtree_types.h> tip-bot2 for Sebastian Andrzej Siewior
2021-08-15 21:28 ` [patch V5 29/72] locking/rtmutex: Include only rbtree types Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/rtmutex: Reduce <linux/rtmutex.h> header dependencies, only include <linux/rbtree_types.h> tip-bot2 for Sebastian Andrzej Siewior
2021-08-15 21:28 ` [patch V5 30/72] locking/spinlock: Provide RT specific spinlock type Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/spinlock: Provide RT specific spinlock_t tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 31/72] locking/spinlock: Provide RT variant header Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/spinlock: Provide RT variant header: <linux/spinlock_rt.h> tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 32/72] locking/rtmutex: Provide the spin/rwlock core lock function Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-27 17:21   ` [patch V5 32/72] " Boqun Feng
2021-08-27 20:03     ` Thomas Gleixner
2021-08-15 21:28 ` [patch V5 33/72] locking/spinlock: Provide RT variant Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 34/72] locking/rwlock: " Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-09-11  2:59   ` [patch V5 34/72] " Xiaoming Ni
2021-09-13  6:28     ` Sebastian Andrzej Siewior
2021-08-15 21:28 ` [patch V5 35/72] locking/rtmutex: Squash !RT tasks to DEFAULT_PRIO Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 36/72] locking/mutex: Consolidate core headers Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/mutex: Consolidate core headers, remove kernel/locking/mutex-debug.h tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 37/72] locking/mutex: Move waiter to core header Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/mutex: Move the 'struct mutex_waiter' definition from <linux/mutex.h> to the internal header tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 38/72] locking/ww_mutex: Move ww_mutex declarations into ww_mutex.h Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/ww_mutex: Move the ww_mutex definitions from <linux/mutex.h> into <linux/ww_mutex.h> tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 39/72] locking/mutex: Make mutex::wait_lock raw Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 40/72] locking/ww_mutex: Simplify lockdep annotation Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/ww_mutex: Simplify lockdep annotations tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 41/72] locking/ww_mutex: Gather mutex_waiter initialization Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2021-08-19 16:54   ` [patch V5 41/72] " Guenter Roeck
2021-08-19 18:08     ` [PATCH] locking/ww_mutex: Initialize waiter.ww_ctx properly Sebastian Andrzej Siewior
2021-08-19 18:22       ` Peter Zijlstra
2021-08-19 18:32         ` Sebastian Andrzej Siewior
2021-08-19 17:51   ` [patch V5 41/72] locking/ww_mutex: Gather mutex_waiter initialization Sebastian Andrzej Siewior
2021-08-19 18:17     ` Peter Zijlstra
2021-08-19 18:28       ` Sebastian Andrzej Siewior
2021-08-19 19:30       ` [PATCH v2] locking/ww_mutex: Initialize waiter.ww_ctx properly Sebastian Andrzej Siewior
2021-08-20 10:20         ` [tip: locking/core] " tip-bot2 for Sebastian Andrzej Siewior
2021-08-15 21:28 ` [patch V5 42/72] locking/ww_mutex: Split up ww_mutex_unlock() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Peter Zijlstra (Intel)
2021-08-15 21:28 ` [patch V5 43/72] locking/ww_mutex: Split W/W implementation logic Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/ww_mutex: Split out the W/W implementation logic into kernel/locking/ww_mutex.h tip-bot2 for Peter Zijlstra (Intel)
2021-08-15 21:28 ` [patch V5 44/72] locking/ww_mutex: Remove __sched annotation Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/ww_mutex: Remove the __sched annotation from ww_mutex APIs tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 45/72] locking/ww_mutex: Abstract waiter iteration Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/ww_mutex: Abstract out the " tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 46/72] locking/ww_mutex: Abstract waiter enqueueing Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/ww_mutex: Abstract out " tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 47/72] locking/ww_mutex: Abstract mutex accessors Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/ww_mutex: Abstract out " tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 48/72] locking/ww_mutex: Abstract mutex types Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/ww_mutex: Abstract out " tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 49/72] locking/ww_mutex: Abstract internal lock access Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] locking/ww_mutex: Abstract out internal lock accesses tip-bot2 for Thomas Gleixner
2021-08-15 21:28 ` [patch V5 50/72] locking/ww_mutex: Implement rt_mutex accessors Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 51/72] locking/ww_mutex: Add RT priority to W/W order Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 52/72] locking/ww_mutex: Add rt_mutex based lock type and accessors Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2021-08-15 21:28 ` [patch V5 53/72] locking/rtmutex: Extend the rtmutex core to support ww_mutex Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2021-08-15 21:29 ` [patch V5 54/72] locking/ww_mutex: Implement rtmutex based ww_mutex API functions Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2021-08-15 21:29 ` [patch V5 55/72] locking/rtmutex: Add mutex variant for RT Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 56/72] lib/test_lockup: Adapt to changed variables Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Sebastian Andrzej Siewior
2021-08-15 21:29 ` [patch V5 57/72] futex: Validate waiter correctly in futex_proxy_trylock_atomic() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 58/72] futex: Cleanup stale comments Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] futex: Clean up " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 59/72] futex: Clarify futex_requeue() PI handling Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 60/72] futex: Remove bogus condition for requeue PI Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 61/72] futex: Correct the number of requeued waiters for PI Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 62/72] futex: Restructure futex_requeue() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 63/72] futex: Clarify comment in futex_requeue() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 64/72] futex: Reorder sanity checks " Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 65/72] futex: Simplify handle_early_requeue_pi_wakeup() Thomas Gleixner
2021-08-17 20:14   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 66/72] futex: Prevent requeue_pi() lock nesting issue on RT Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 67/72] rtmutex: Prevent lockdep false positive with PI futexes Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] locking/rtmutex: " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 68/72] preempt: Adjust PREEMPT_LOCK_OFFSET for RT Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 69/72] locking/rtmutex: Implement equal priority lock stealing Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Gregory Haskins
2021-08-15 21:29 ` [patch V5 70/72] locking/rtmutex: Add adaptive spinwait mechanism Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Steven Rostedt
2021-08-15 21:29 ` [patch V5 71/72] locking/spinlock/rt: Prepare for RT local_lock Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-15 21:29 ` [patch V5 72/72] locking/local_lock: Add PREEMPT_RT support Thomas Gleixner
2021-08-17 20:13   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-16  9:21 ` [patch V5 00/72] locking, sched: The PREEMPT-RT locking infrastructure Sebastian Andrzej Siewior
