LKML Archive on lore.kernel.org
* [PATCH] locking/lock_events: Use this_cpu_add() when necessary
@ 2019-05-22 15:39 Waiman Long
  2019-05-22 19:54 ` Linus Torvalds
  0 siblings, 1 reply; 5+ messages in thread
From: Waiman Long @ 2019-05-22 15:39 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Borislav Petkov, H. Peter Anvin
  Cc: linux-kernel, x86, Davidlohr Bueso, Linus Torvalds, Tim Chen,
	huang ying, Waiman Long

The kernel test robot has reported that the use of __this_cpu_add()
causes bug messages like:

  BUG: using __this_cpu_add() in preemptible [00000000] code: ...

This is only an issue on preempt kernels, where preemption can happen
in the middle of a multi-instruction percpu operation. It is not an
issue on x86, as the percpu operation there is a single instruction.
The lock events code is updated to use the slower this_cpu_add() on
non-x86 preempt kernels or when CONFIG_DEBUG_PREEMPT is defined.
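
Roughly, the difference between the two flavours on a non-x86
architecture looks like the following (a simplified sketch, not the
actual asm-generic implementation):

  /*
   * __this_cpu_add(): a plain read-modify-write of this CPU's slot.
   * If the task is preempted and migrated between the load and the
   * store, the RMW can race with updates done on the old CPU and an
   * increment may be lost -- hence the warning quoted above.
   */
  *raw_cpu_ptr(&lockevents[event]) += inc;

  /*
   * this_cpu_add(): the same read-modify-write, but with preemption
   * (or interrupts) disabled around it, so it is safe to call from
   * preemptible context at the cost of a little extra overhead.
   */
  preempt_disable();
  *raw_cpu_ptr(&lockevents[event]) += inc;
  preempt_enable();

On x86 the percpu add itself is a single instruction that preemption
cannot split, which is why the cheaper form is kept there unless
CONFIG_DEBUG_PREEMPT wants its sanity check.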

Fixes: a8654596f0371 ("locking/rwsem: Enable lock event counting")
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/lock_events.h | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lock_events.h b/kernel/locking/lock_events.h
index feb1acc54611..2b6c8b7588dc 100644
--- a/kernel/locking/lock_events.h
+++ b/kernel/locking/lock_events.h
@@ -30,13 +30,36 @@ enum lock_events {
  */
 DECLARE_PER_CPU(unsigned long, lockevents[lockevent_num]);
 
+/*
+ * The purpose of the lock event counting subsystem is to provide a low
+ * overhead way to record the number of specific locking events by using
+ * percpu counters. It is the percpu sum that matters, not specifically
+ * how many of them happen on each CPU.
+ *
+ * On a !preempt kernel, we can just use __this_cpu_{inc|add}(), as
+ * preemption won't happen in the middle of the percpu operation. On a
+ * preempt kernel, it depends on whether the percpu operation is atomic
+ * (1 instruction) or not. We know x86 generates a single instruction
+ * for a percpu op, but we can't guarantee that for other architectures.
+ * We also need to use the slower this_cpu_{inc|add}() when
+ * CONFIG_DEBUG_PREEMPT is defined to make the checking code happy.
+ */
+#if defined(CONFIG_PREEMPT) && \
+   (defined(CONFIG_DEBUG_PREEMPT) || !defined(CONFIG_X86))
+#define lockevent_percpu_inc(x)		this_cpu_inc(x)
+#define lockevent_percpu_add(x, v)	this_cpu_add(x, v)
+#else
+#define lockevent_percpu_inc(x)		__this_cpu_inc(x)
+#define lockevent_percpu_add(x, v)	__this_cpu_add(x, v)
+#endif
+
 /*
  * Increment the PV qspinlock statistical counters
  */
 static inline void __lockevent_inc(enum lock_events event, bool cond)
 {
 	if (cond)
-		__this_cpu_inc(lockevents[event]);
+		lockevent_percpu_inc(lockevents[event]);
 }
 
 #define lockevent_inc(ev)	  __lockevent_inc(LOCKEVENT_ ##ev, true)
@@ -44,7 +67,7 @@ static inline void __lockevent_inc(enum lock_events event, bool cond)
 
 static inline void __lockevent_add(enum lock_events event, int inc)
 {
-	__this_cpu_add(lockevents[event], inc);
+	lockevent_percpu_add(lockevents[event], inc);
 }
 
 #define lockevent_add(ev, c)	__lockevent_add(LOCKEVENT_ ##ev, c)
-- 
2.18.1
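
For context, the locking code bumps these counters through the wrappers
defined above, along the lines of the following (illustrative call
sites with a hypothetical count variable, not part of this patch):

  /* e.g. in a lock slow path, count one more sleeping reader */
  lockevent_inc(rwsem_sleep_reader);

  /* or fold a batch of events into a counter in one go */
  lockevent_add(rwsem_sleep_reader, nr);	/* 'nr' is illustrative */

With this patch, both expand to lockevent_percpu_inc()/_add(), which
fall back to the checked this_cpu_*() variants only when that is
actually needed.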



Thread overview: 5+ messages
2019-05-22 15:39 [PATCH] locking/lock_events: Use this_cpu_add() when necessary Waiman Long
2019-05-22 19:54 ` Linus Torvalds
2019-05-22 20:50   ` Waiman Long
2019-05-23 14:58   ` Will Deacon
2019-05-24 17:00     ` Waiman Long
