From: "Paul E. McKenney" <paulmck@kernel.org> To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, mingo@kernel.org, jiangshanlai@gmail.com, akpm@linux-foundation.org, mathieu.desnoyers@efficios.com, josh@joshtriplett.org, tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com, joel@joelfernandes.org, Linus Torvalds <torvalds@linux-foundation.org> Subject: [PATCH v2 rcu 04/18] rcu: Weaken ->dynticks accesses and updates Date: Wed, 28 Jul 2021 10:37:15 -0700 [thread overview] Message-ID: <20210728173715.GA9416@paulmck-ThinkPad-P17-Gen-1> (raw) In-Reply-To: <20210721202127.2129660-4-paulmck@kernel.org> Accesses to the rcu_data structure's ->dynticks field have always been fully ordered because it was not possible to prove that weaker ordering was safe. However, with the removal of the rcu_eqs_special_set() function and the advent of the Linux-kernel memory model, it is now easy to show that two of the four original full memory barriers can be weakened to acquire and release operations. The remaining pair must remain full memory barriers. This change makes the memory ordering requirements more evident, and it might well also speed up the to-idle and from-idle fastpaths on some architectures. The following litmus test, adapted from one supplied off-list by Frederic Weisbecker, models the RCU grace-period kthread detecting an idle CPU that is concurrently transitioning to non-idle: C dynticks-from-idle { DYNTICKS=0; (* Initially idle. *) } P0(int *X, int *DYNTICKS) { int dynticks; int x; // Idle. dynticks = READ_ONCE(*DYNTICKS); smp_store_release(DYNTICKS, dynticks + 1); smp_mb(); // Now non-idle x = READ_ONCE(*X); } P1(int *X, int *DYNTICKS) { int dynticks; WRITE_ONCE(*X, 1); smp_mb(); dynticks = smp_load_acquire(DYNTICKS); } exists (1:dynticks=0 /\ 0:x=1) Running "herd7 -conf linux-kernel.cfg dynticks-from-idle.litmus" verifies this transition, namely, showing that if the RCU grace-period kthread (P1) sees another CPU as idle (P0), then any memory access prior to the start of the grace period (P1's write to X) will be seen by any RCU read-side critical section following the to-non-idle transition (P0's read from X). This is a straightforward use of full memory barriers to force ordering in a store-buffering (SB) litmus test. The following litmus test, also adapted from the one supplied off-list by Frederic Weisbecker, models the RCU grace-period kthread detecting a non-idle CPU that is concurrently transitioning to idle: C dynticks-into-idle { DYNTICKS=1; (* Initially non-idle. *) } P0(int *X, int *DYNTICKS) { int dynticks; // Non-idle. WRITE_ONCE(*X, 1); dynticks = READ_ONCE(*DYNTICKS); smp_store_release(DYNTICKS, dynticks + 1); smp_mb(); // Now idle. } P1(int *X, int *DYNTICKS) { int x; int dynticks; smp_mb(); dynticks = smp_load_acquire(DYNTICKS); x = READ_ONCE(*X); } exists (1:dynticks=2 /\ 1:x=0) Running "herd7 -conf linux-kernel.cfg dynticks-into-idle.litmus" verifies this transition, namely, showing that if the RCU grace-period kthread (P1) sees another CPU as newly idle (P0), then any pre-idle memory access (P0's write to X) will be seen by any code following the grace period (P1's read from X). This is a simple release-acquire pair forcing ordering in a message-passing (MP) litmus test. 
Of course, if the grace-period kthread detects the CPU as non-idle,
it will refrain from reporting a quiescent state on behalf of that
CPU, so there are no ordering requirements from the grace-period
kthread in that case.  However, other subsystems call rcu_is_idle_cpu()
to check for CPUs being non-idle from an RCU perspective.  That case
is also verified by the above litmus tests with the proviso that the
sense of the low-order bit of the DYNTICKS counter be inverted.

Unfortunately, on x86 smp_mb() is as expensive as a cache-local atomic
increment.  This commit therefore weakens only the read from
->dynticks.  However, the updates are abstracted into a
rcu_dynticks_inc() function to ease any future changes that might be
needed.

[ paulmck: Apply Linus Torvalds feedback. ]

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 42a0032dd99f7..c87b3a271d65b 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -251,6 +251,15 @@ void rcu_softirq_qs(void)
 	rcu_tasks_qs(current, false);
 }
 
+/*
+ * Increment the current CPU's rcu_data structure's ->dynticks field
+ * with ordering.  Return the new value.
+ */
+static noinstr unsigned long rcu_dynticks_inc(int incby)
+{
+	return arch_atomic_add_return(incby, this_cpu_ptr(&rcu_data.dynticks));
+}
+
 /*
  * Record entry into an extended quiescent state.  This is only to be
  * called when not already in an extended quiescent state, that is,
@@ -267,7 +276,7 @@ static noinstr void rcu_dynticks_eqs_enter(void)
 	 * next idle sojourn.
 	 */
 	rcu_dynticks_task_trace_enter();  // Before ->dynticks update!
-	seq = arch_atomic_inc_return(&this_cpu_ptr(&rcu_data)->dynticks);
+	seq = rcu_dynticks_inc(1);
 	// RCU is no longer watching.  Better be in extended quiescent state!
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & 0x1));
 }
@@ -286,7 +295,7 @@ static noinstr void rcu_dynticks_eqs_exit(void)
 	 * and we also must force ordering with the next RCU read-side
 	 * critical section.
 	 */
-	seq = arch_atomic_inc_return(&this_cpu_ptr(&rcu_data)->dynticks);
+	seq = rcu_dynticks_inc(1);
 	// RCU is now watching.  Better not be in an extended quiescent state!
 	rcu_dynticks_task_trace_exit();  // After ->dynticks update!
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & 0x1));
@@ -308,7 +317,7 @@ static void rcu_dynticks_eqs_online(void)
 
 	if (atomic_read(&rdp->dynticks) & 0x1)
 		return;
-	atomic_inc(&rdp->dynticks);
+	rcu_dynticks_inc(1);
 }
 
 /*
@@ -318,7 +327,7 @@ static void rcu_dynticks_eqs_online(void)
  */
 static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
 {
-	return !(arch_atomic_read(&this_cpu_ptr(&rcu_data)->dynticks) & 0x1);
+	return !(atomic_read(this_cpu_ptr(&rcu_data.dynticks)) & 0x1);
 }
 
 /*
@@ -327,7 +336,8 @@ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
  */
 static int rcu_dynticks_snap(struct rcu_data *rdp)
 {
-	return atomic_add_return(0, &rdp->dynticks);
+	smp_mb();  // Fundamental RCU ordering guarantee.
+	return atomic_read_acquire(&rdp->dynticks);
 }
 
 /*
@@ -391,12 +401,12 @@ bool rcu_dynticks_zero_in_eqs(int cpu, int *vp)
  */
 notrace void rcu_momentary_dyntick_idle(void)
 {
-	int special;
+	int seq;
 
 	raw_cpu_write(rcu_data.rcu_need_heavy_qs, false);
-	special = atomic_add_return(2, &this_cpu_ptr(&rcu_data)->dynticks);
+	seq = rcu_dynticks_inc(2);
 	/* It is illegal to call this from idle state. */
-	WARN_ON_ONCE(!(special & 0x1));
+	WARN_ON_ONCE(!(seq & 0x1));
 	rcu_preempt_deferred_qs(current);
 }
 EXPORT_SYMBOL_GPL(rcu_momentary_dyntick_idle);
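A note on how the value returned by the new rcu_dynticks_snap() is
consumed, since that is what motivates pairing the smp_mb() with the
acquire load.  The sketch below is a simplified illustration, not the
patch's code: the helper names here are invented, though the kernel's
own helpers of this era (rcu_dynticks_in_eqs() and
rcu_dynticks_in_eqs_since()) follow the same pattern:

	/* Even counter value: the CPU was in an extended quiescent
	 * state (idle) when the snapshot was taken. */
	static bool snap_in_eqs(int snap)
	{
		return !(snap & 0x1);
	}

	/* Counter movement since the snapshot: the CPU passed through
	 * an extended quiescent state, so a quiescent state may be
	 * reported on its behalf.  The smp_mb() inside
	 * rcu_dynticks_snap() (defined in the diff above) is what lets
	 * this comparison order against the grace period's other
	 * accesses. */
	static bool snap_changed_since(struct rcu_data *rdp, int snap)
	{
		return snap != rcu_dynticks_snap(rdp);
	}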