LKML Archive on lore.kernel.org
* [GIT PULL] locking fix
@ 2019-05-16 16:01 Ingo Molnar
  2019-05-16 17:57 ` Linus Torvalds
  2019-05-16 18:20 ` pr-tracker-bot
  0 siblings, 2 replies; 30+ messages in thread
From: Ingo Molnar @ 2019-05-16 16:01 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

   # HEAD: a9e9bcb45b1525ba7aea26ed9441e8632aeeda58 locking/rwsem: Prevent decrement of reader count before increment

A single rwsem fix.

 Thanks,

	Ingo

------------------>
Waiman Long (1):
      locking/rwsem: Prevent decrement of reader count before increment


 kernel/locking/rwsem-xadd.c | 46 ++++++++++++++++++++++++++++++---------------
 1 file changed, 31 insertions(+), 15 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 6b3ee9948bf1..0b1f77957240 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -130,6 +130,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 {
 	struct rwsem_waiter *waiter, *tmp;
 	long oldcount, woken = 0, adjustment = 0;
+	struct list_head wlist;
 
 	/*
 	 * Take a peek at the queue head waiter such that we can determine
@@ -188,18 +189,43 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 	 * of the queue. We know that woken will be at least 1 as we accounted
 	 * for above. Note we increment the 'active part' of the count by the
 	 * number of readers before waking any processes up.
+	 *
+	 * We have to do the wakeup in two passes to prevent the possibility
+	 * that the reader count is decremented before it is incremented:
+	 * a to-be-woken waiter may not have slept yet, so it may see
+	 * waiter->task cleared, finish its critical section and do an
+	 * unlock before the reader count increment.
+	 *
+	 * 1) Collect the read-waiters in a separate list, count them and
+	 *    fully increment the reader count in the rwsem.
+	 * 2) For each waiter in the new list, clear waiter->task and
+	 *    put it into the wake_q to be woken up later.
 	 */
-	list_for_each_entry_safe(waiter, tmp, &sem->wait_list, list) {
-		struct task_struct *tsk;
-
+	list_for_each_entry(waiter, &sem->wait_list, list) {
 		if (waiter->type == RWSEM_WAITING_FOR_WRITE)
 			break;
 
 		woken++;
-		tsk = waiter->task;
+	}
+	list_cut_before(&wlist, &sem->wait_list, &waiter->list);
+
+	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
+	lockevent_cond_inc(rwsem_wake_reader, woken);
+	if (list_empty(&sem->wait_list)) {
+		/* hit end of list above */
+		adjustment -= RWSEM_WAITING_BIAS;
+	}
+
+	if (adjustment)
+		atomic_long_add(adjustment, &sem->count);
+
+	/* 2nd pass */
+	list_for_each_entry_safe(waiter, tmp, &wlist, list) {
+		struct task_struct *tsk;
 
+		tsk = waiter->task;
 		get_task_struct(tsk);
-		list_del(&waiter->list);
+
 		/*
 		 * Ensure calling get_task_struct() before setting the reader
 		 * waiter to nil such that rwsem_down_read_failed() cannot
@@ -213,16 +239,6 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 		 */
 		wake_q_add_safe(wake_q, tsk);
 	}
-
-	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
-	lockevent_cond_inc(rwsem_wake_reader, woken);
-	if (list_empty(&sem->wait_list)) {
-		/* hit end of list above */
-		adjustment -= RWSEM_WAITING_BIAS;
-	}
-
-	if (adjustment)
-		atomic_long_add(adjustment, &sem->count);
 }
 
 /*

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2019-05-16 16:01 [GIT PULL] locking fix Ingo Molnar
@ 2019-05-16 17:57 ` Linus Torvalds
  2019-05-16 18:39   ` Greg KH
  2019-05-16 18:20 ` pr-tracker-bot
  1 sibling, 1 reply; 30+ messages in thread
From: Linus Torvalds @ 2019-05-16 17:57 UTC (permalink / raw)
  To: Ingo Molnar, stable
  Cc: Linux List Kernel Mailing, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

On Thu, May 16, 2019 at 9:01 AM Ingo Molnar <mingo@kernel.org> wrote:
>
> A single rwsem fix.

Side note, this one isn't marked for stable, but I'm hoping stable
picks it up just from the "Fixes" tag.

Stable people, we're talking about

    a9e9bcb45b15 ("locking/rwsem: Prevent decrement of reader count
before increment")

that I just pulled into my tree, and needs to go in 4.9 and later.

              Linus

* Re: [GIT PULL] locking fix
  2019-05-16 16:01 [GIT PULL] locking fix Ingo Molnar
  2019-05-16 17:57 ` Linus Torvalds
@ 2019-05-16 18:20 ` pr-tracker-bot
  1 sibling, 0 replies; 30+ messages in thread
From: pr-tracker-bot @ 2019-05-16 18:20 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

The pull request you sent on Thu, 16 May 2019 18:01:35 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/f57d7715d7645b7c3d1e7b7cb79ac7690fe2d260

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker

* Re: [GIT PULL] locking fix
  2019-05-16 17:57 ` Linus Torvalds
@ 2019-05-16 18:39   ` Greg KH
  2019-05-16 18:42     ` Linus Torvalds
  0 siblings, 1 reply; 30+ messages in thread
From: Greg KH @ 2019-05-16 18:39 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Ingo Molnar, stable, Linux List Kernel Mailing, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton

On Thu, May 16, 2019 at 10:57:53AM -0700, Linus Torvalds wrote:
> On Thu, May 16, 2019 at 9:01 AM Ingo Molnar <mingo@kernel.org> wrote:
> >
> > A single rwsem fix.
> 
> Side note, this one isn't marked for stable, but I'm hoping stable
> picks it up just from the "Fixes" tag.
> 
> Stable people, we're talking about
> 
>     a9e9bcb45b15 ("locking/rwsem: Prevent decrement of reader count
> before increment")
> 
> that I just pulled into my tree, and needs to go in 4.9 and later.

Thanks, I'll work on that later tonight...


greg k-h

* Re: [GIT PULL] locking fix
  2019-05-16 18:39   ` Greg KH
@ 2019-05-16 18:42     ` Linus Torvalds
  2019-05-16 23:55       ` Sasha Levin
  0 siblings, 1 reply; 30+ messages in thread
From: Linus Torvalds @ 2019-05-16 18:42 UTC (permalink / raw)
  To: Greg KH
  Cc: Ingo Molnar, stable, Linux List Kernel Mailing, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton

On Thu, May 16, 2019 at 11:39 AM Greg KH <greg@kroah.com> wrote:
>
> Thanks, I'll work on that later tonight...

Note that it probably is almost entirely impossible to trigger the
problem in practice, so it's not like this is a particularly important
stable back-port.

I just happened to look at it and go "hmm, it's not _marked_ for stable".

                  Linus

* Re: [GIT PULL] locking fix
  2019-05-16 18:42     ` Linus Torvalds
@ 2019-05-16 23:55       ` Sasha Levin
  2019-05-17 12:16         ` Greg KH
  0 siblings, 1 reply; 30+ messages in thread
From: Sasha Levin @ 2019-05-16 23:55 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Greg KH, Ingo Molnar, stable, Linux List Kernel Mailing,
	Peter Zijlstra, Thomas Gleixner, Andrew Morton

On Thu, May 16, 2019 at 11:42:58AM -0700, Linus Torvalds wrote:
>On Thu, May 16, 2019 at 11:39 AM Greg KH <greg@kroah.com> wrote:
>>
>> Thanks, I'll work on that later tonight...
>
>Note that it probably is almost entirely impossible to trigger the
>problem in practice, so it's not like this is a particularly important
>stable back-port.
>
>I just happened to look at it and go "hmm, it's not _marked_ for stable".

I've addressed the missing dependency on a8654596f03 ("locking/rwsem: Enable
lock event counting") and queued up backports for 5.1 down to 4.9.

--
Thanks,
Sasha

* Re: [GIT PULL] locking fix
  2019-05-16 23:55       ` Sasha Levin
@ 2019-05-17 12:16         ` Greg KH
  0 siblings, 0 replies; 30+ messages in thread
From: Greg KH @ 2019-05-17 12:16 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Linus Torvalds, Ingo Molnar, stable, Linux List Kernel Mailing,
	Peter Zijlstra, Thomas Gleixner, Andrew Morton

On Thu, May 16, 2019 at 07:55:53PM -0400, Sasha Levin wrote:
> On Thu, May 16, 2019 at 11:42:58AM -0700, Linus Torvalds wrote:
> > On Thu, May 16, 2019 at 11:39 AM Greg KH <greg@kroah.com> wrote:
> > > 
> > > Thanks, I'll work on that later tonight...
> > 
> > Note that it probably is almost entirely impossible to trigger the
> > problem in practice, so it's not like this is a particularly important
> > stable back-port.
> > 
> > I just happened to look at it and go "hmm, it's not _marked_ for stable".
> 
> I've addressed the missing dependency on a8654596f03 ("locking/rwsem: Enable
> lock event counting") and queued up backports for 5.1 down to 4.9.

Thanks for doing this.

greg k-h

* Re: [GIT PULL] locking fix
  2021-03-28 10:28 Ingo Molnar
@ 2021-03-28 19:22 ` pr-tracker-bot
  0 siblings, 0 replies; 30+ messages in thread
From: pr-tracker-bot @ 2021-03-28 19:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Thomas Gleixner, Peter Zijlstra,
	Andrew Morton

The pull request you sent on Sun, 28 Mar 2021 12:28:43 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-2021-03-28

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/47fbbc94dab61a1385f21a0a209c61b5d6b0a215

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html

* [GIT PULL] locking fix
@ 2021-03-28 10:28 Ingo Molnar
  2021-03-28 19:22 ` pr-tracker-bot
  0 siblings, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2021-03-28 10:28 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Thomas Gleixner, Peter Zijlstra, Andrew Morton

Linus,

Please pull the latest locking/urgent git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-2021-03-28

   # HEAD: 291da9d4a9eb3a1cb0610b7f4480f5b52b1825e7 locking/mutex: Fix non debug version of mutex_lock_io_nested()

Fix the non-debug mutex_lock_io_nested() method to map to mutex_lock_io() instead of mutex_lock().

Right now nothing uses this API explicitly, but this is an accident waiting to happen.

 Thanks,

	Ingo

------------------>
Thomas Gleixner (1):
      locking/mutex: Fix non debug version of mutex_lock_io_nested()


 include/linux/mutex.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 0cd631a19727..515cff77a4f4 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -185,7 +185,7 @@ extern void mutex_lock_io(struct mutex *lock);
 # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock)
 # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lock)
 # define mutex_lock_nest_lock(lock, nest_lock) mutex_lock(lock)
-# define mutex_lock_io_nested(lock, subclass) mutex_lock(lock)
+# define mutex_lock_io_nested(lock, subclass) mutex_lock_io(lock)
 #endif
 
 /*

* Re: [GIT PULL] locking fix
  2019-07-14 11:36 Ingo Molnar
@ 2019-07-14 18:45 ` pr-tracker-bot
  0 siblings, 0 replies; 30+ messages in thread
From: pr-tracker-bot @ 2019-07-14 18:45 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Will Deacon, Andrew Morton

The pull request you sent on Sun, 14 Jul 2019 13:36:21 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/0c85ce135456a3927f552e738f730c47ac905ac3

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker

* [GIT PULL] locking fix
@ 2019-07-14 11:36 Ingo Molnar
  2019-07-14 18:45 ` pr-tracker-bot
  0 siblings, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2019-07-14 11:36 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Will Deacon,
	Andrew Morton

Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

   # HEAD: 68d41d8c94a31dfb8233ab90b9baf41a2ed2da68 locking/lockdep: Fix lock used or unused stats error

A single fix for a locking statistics bug.

 Thanks,

	Ingo

------------------>
Yuyang Du (1):
      locking/lockdep: Fix lock used or unused stats error


 kernel/locking/lockdep_proc.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 9c49ec645d8b..65b6a1600c8f 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -210,6 +210,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 		      nr_hardirq_read_safe = 0, nr_hardirq_read_unsafe = 0,
 		      sum_forward_deps = 0;
 
+#ifdef CONFIG_PROVE_LOCKING
 	list_for_each_entry(class, &all_lock_classes, lock_entry) {
 
 		if (class->usage_mask == 0)
@@ -241,12 +242,12 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
 		if (class->usage_mask & LOCKF_ENABLED_HARDIRQ_READ)
 			nr_hardirq_read_unsafe++;
 
-#ifdef CONFIG_PROVE_LOCKING
 		sum_forward_deps += lockdep_count_forward_deps(class);
-#endif
 	}
 #ifdef CONFIG_DEBUG_LOCKDEP
 	DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused);
+#endif
+
 #endif
 	seq_printf(m, " lock-classes:                  %11lu [max: %lu]\n",
 			nr_lock_classes, MAX_LOCKDEP_KEYS);

* Re: [GIT PULL] locking fix
  2019-04-12 11:53 Ingo Molnar
@ 2019-04-13  4:05 ` pr-tracker-bot
  0 siblings, 0 replies; 30+ messages in thread
From: pr-tracker-bot @ 2019-04-13  4:05 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Thomas Gleixner, Peter Zijlstra,
	Andrew Morton

The pull request you sent on Fri, 12 Apr 2019 13:53:43 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/26e2b81977bb1ad70ff9b2acb4d4cb13c23facfd

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker

* [GIT PULL] locking fix
@ 2019-04-12 11:53 Ingo Molnar
  2019-04-13  4:05 ` pr-tracker-bot
  0 siblings, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2019-04-12 11:53 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Thomas Gleixner, Peter Zijlstra, Andrew Morton

Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

   # HEAD: 90c1cba2b3b3851c151229f61801919b2904d437 locking/lockdep: Zap lock classes even with lock debugging disabled

Fixes a crash when accessing /proc/lockdep.

 Thanks,

	Ingo

------------------>
Bart Van Assche (1):
      locking/lockdep: Zap lock classes even with lock debugging disabled


 kernel/locking/lockdep.c | 29 ++++++++++++-----------------
 1 file changed, 12 insertions(+), 17 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 34cdcbedda49..e16766ff184b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4689,8 +4689,8 @@ static void free_zapped_rcu(struct rcu_head *ch)
 		return;
 
 	raw_local_irq_save(flags);
-	if (!graph_lock())
-		goto out_irq;
+	arch_spin_lock(&lockdep_lock);
+	current->lockdep_recursion = 1;
 
 	/* closed head */
 	pf = delayed_free.pf + (delayed_free.index ^ 1);
@@ -4702,8 +4702,8 @@ static void free_zapped_rcu(struct rcu_head *ch)
 	 */
 	call_rcu_zapped(delayed_free.pf + delayed_free.index);
 
-	graph_unlock();
-out_irq:
+	current->lockdep_recursion = 0;
+	arch_spin_unlock(&lockdep_lock);
 	raw_local_irq_restore(flags);
 }
 
@@ -4744,21 +4744,17 @@ static void lockdep_free_key_range_reg(void *start, unsigned long size)
 {
 	struct pending_free *pf;
 	unsigned long flags;
-	int locked;
 
 	init_data_structures_once();
 
 	raw_local_irq_save(flags);
-	locked = graph_lock();
-	if (!locked)
-		goto out_irq;
-
+	arch_spin_lock(&lockdep_lock);
+	current->lockdep_recursion = 1;
 	pf = get_pending_free();
 	__lockdep_free_key_range(pf, start, size);
 	call_rcu_zapped(pf);
-
-	graph_unlock();
-out_irq:
+	current->lockdep_recursion = 0;
+	arch_spin_unlock(&lockdep_lock);
 	raw_local_irq_restore(flags);
 
 	/*
@@ -4911,9 +4907,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
 		return;
 
 	raw_local_irq_save(flags);
-	if (!graph_lock())
-		goto out_irq;
-
+	arch_spin_lock(&lockdep_lock);
+	current->lockdep_recursion = 1;
 	pf = get_pending_free();
 	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
 		if (k == key) {
@@ -4925,8 +4920,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
 	WARN_ON_ONCE(!found);
 	__lockdep_free_key_range(pf, key, 1);
 	call_rcu_zapped(pf);
-	graph_unlock();
-out_irq:
+	current->lockdep_recursion = 0;
+	arch_spin_unlock(&lockdep_lock);
 	raw_local_irq_restore(flags);
 
 	/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */

* [GIT PULL] locking fix
@ 2017-07-21 10:11 Ingo Molnar
  0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2017-07-21 10:11 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Thomas Gleixner, Peter Zijlstra, Andrew Morton

Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

   # HEAD: 69f0d429c413fe96db2c187475cebcc6e3a8c7f5 locking/rtmutex: Remove unnecessary priority adjustment

Remove an unnecessary priority adjustment in the rtmutex code.

 Thanks,

	Ingo

------------------>
Alex Shi (1):
      locking/rtmutex: Remove unnecessary priority adjustment


 kernel/locking/rtmutex.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 78069895032a..649dc9d3951a 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -963,7 +963,6 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
 		return -EDEADLK;
 
 	raw_spin_lock(&task->pi_lock);
-	rt_mutex_adjust_prio(task);
 	waiter->task = task;
 	waiter->lock = lock;
 	waiter->prio = task->prio;

* [GIT PULL] locking fix
@ 2016-09-13 18:11 Ingo Molnar
  0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2016-09-13 18:11 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Paul E. McKenney,
	Andrew Morton

Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

   # HEAD: d7127b5e5fa0551be21b86640f1648b224e36d43 locking/barriers: Don't use sizeof(void) in lockless_dereference()

Another lockless_dereference() Sparse fix.

 Thanks,

	Ingo

------------------>
Johannes Berg (1):
      locking/barriers: Don't use sizeof(void) in lockless_dereference()


 include/linux/compiler.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 436aa4e42221..668569844d37 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -527,13 +527,14 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
  * object's lifetime is managed by something other than RCU.  That
  * "something other" might be reference counting or simple immortality.
  *
- * The seemingly unused size_t variable is to validate @p is indeed a pointer
- * type by making sure it can be dereferenced.
+ * The seemingly unused variable ___typecheck_p validates that @p is
+ * indeed a pointer type by using a pointer to typeof(*p) as the type.
+ * Taking a pointer to typeof(*p) again is needed in case p is void *.
  */
 #define lockless_dereference(p) \
 ({ \
 	typeof(p) _________p1 = READ_ONCE(p); \
-	size_t __maybe_unused __size_of_ptr = sizeof(*(p)); \
+	typeof(*(p)) *___typecheck_p __maybe_unused; \
 	smp_read_barrier_depends(); /* Dependency order vs. p above. */ \
 	(_________p1); \
 })

* [GIT PULL] locking fix
@ 2016-04-16  9:16 Ingo Molnar
  0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2016-04-16  9:16 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton,
	Paul E. McKenney

Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

   # HEAD: 5c8a010c2411729a07cb1b90c09fa978ac0ac6c0 locking/lockdep: Fix print_collision() unused warning

Fixes a build warning on certain Kconfig combinations.

 Thanks,

	Ingo

------------------>
Borislav Petkov (1):
      locking/lockdep: Fix print_collision() unused warning


 kernel/locking/lockdep.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 2324ba5310db..ed9410936a22 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1999,6 +1999,7 @@ static inline int get_first_held_lock(struct task_struct *curr,
 	return ++i;
 }
 
+#ifdef CONFIG_DEBUG_LOCKDEP
 /*
  * Returns the next chain_key iteration
  */
@@ -2069,6 +2070,7 @@ static void print_collision(struct task_struct *curr,
 	printk("\nstack backtrace:\n");
 	dump_stack();
 }
+#endif
 
 /*
  * Checks whether the chain and the current held locks are consistent

* [GIT PULL] locking fix
@ 2015-08-14  7:08 Ingo Molnar
  0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2015-08-14  7:08 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Paul E. McKenney,
	Andrew Morton

Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

   # HEAD: cba77f03f2c7b6cc0b0a44a3c679e0abade7da62 locking/pvqspinlock: Fix kernel panic in locking-selftest

A single fix for a locking self-test crash.

 Thanks,

	Ingo

------------------>
Waiman Long (1):
      locking/pvqspinlock: Fix kernel panic in locking-selftest


 kernel/locking/qspinlock_paravirt.h | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 04ab18151cc8..df19ae4debd0 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -4,6 +4,7 @@
 
 #include <linux/hash.h>
 #include <linux/bootmem.h>
+#include <linux/debug_locks.h>
 
 /*
  * Implement paravirt qspinlocks; the general idea is to halt the vcpus instead
@@ -286,15 +287,23 @@ __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
 {
 	struct __qspinlock *l = (void *)lock;
 	struct pv_node *node;
+	u8 lockval = cmpxchg(&l->locked, _Q_LOCKED_VAL, 0);
 
 	/*
 	 * We must not unlock if SLOW, because in that case we must first
 	 * unhash. Otherwise it would be possible to have multiple @lock
 	 * entries, which would be BAD.
 	 */
-	if (likely(cmpxchg(&l->locked, _Q_LOCKED_VAL, 0) == _Q_LOCKED_VAL))
+	if (likely(lockval == _Q_LOCKED_VAL))
 		return;
 
+	if (unlikely(lockval != _Q_SLOW_VAL)) {
+		if (debug_locks_silent)
+			return;
+		WARN(1, "pvqspinlock: lock %p has corrupted value 0x%x!\n", lock, atomic_read(&lock->val));
+		return;
+	}
+
 	/*
 	 * Since the above failed to release, this must be the SLOW path.
 	 * Therefore start by looking up the blocked node and unhashing it.

* [GIT PULL] locking fix
@ 2015-03-28 10:07 Ingo Molnar
  0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2015-03-28 10:07 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Paul E. McKenney,
	Rusty Russell, Andrew Morton

Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

   # HEAD: 35a9393c95b31870a74f51a3e7455f33f5657b6f lockdep: Fix the module unload key range freeing logic

A module unload lockdep race fix.

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      lockdep: Fix the module unload key range freeing logic


 kernel/locking/lockdep.c | 81 ++++++++++++++++++++++++++++++++----------------
 kernel/module.c          |  8 ++---
 2 files changed, 59 insertions(+), 30 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 88d0d4420ad2..ba77ab5f64dd 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -633,7 +633,7 @@ static int count_matching_names(struct lock_class *new_class)
 	if (!new_class->name)
 		return 0;
 
-	list_for_each_entry(class, &all_lock_classes, lock_entry) {
+	list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) {
 		if (new_class->key - new_class->subclass == class->key)
 			return class->name_version;
 		if (class->name && !strcmp(class->name, new_class->name))
@@ -700,10 +700,12 @@ look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
 	hash_head = classhashentry(key);
 
 	/*
-	 * We can walk the hash lockfree, because the hash only
-	 * grows, and we are careful when adding entries to the end:
+	 * We do an RCU walk of the hash, see lockdep_free_key_range().
 	 */
-	list_for_each_entry(class, hash_head, hash_entry) {
+	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+		return NULL;
+
+	list_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key) {
 			/*
 			 * Huh! same key, different name? Did someone trample
@@ -728,7 +730,8 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	struct lockdep_subclass_key *key;
 	struct list_head *hash_head;
 	struct lock_class *class;
-	unsigned long flags;
+
+	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
 	class = look_up_lock_class(lock, subclass);
 	if (likely(class))
@@ -750,28 +753,26 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	key = lock->key->subkeys + subclass;
 	hash_head = classhashentry(key);
 
-	raw_local_irq_save(flags);
 	if (!graph_lock()) {
-		raw_local_irq_restore(flags);
 		return NULL;
 	}
 	/*
 	 * We have to do the hash-walk again, to avoid races
 	 * with another CPU:
 	 */
-	list_for_each_entry(class, hash_head, hash_entry)
+	list_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key)
 			goto out_unlock_set;
+	}
+
 	/*
 	 * Allocate a new key from the static array, and add it to
 	 * the hash:
 	 */
 	if (nr_lock_classes >= MAX_LOCKDEP_KEYS) {
 		if (!debug_locks_off_graph_unlock()) {
-			raw_local_irq_restore(flags);
 			return NULL;
 		}
-		raw_local_irq_restore(flags);
 
 		print_lockdep_off("BUG: MAX_LOCKDEP_KEYS too low!");
 		dump_stack();
@@ -798,7 +799,6 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 
 	if (verbose(class)) {
 		graph_unlock();
-		raw_local_irq_restore(flags);
 
 		printk("\nnew class %p: %s", class->key, class->name);
 		if (class->name_version > 1)
@@ -806,15 +806,12 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 		printk("\n");
 		dump_stack();
 
-		raw_local_irq_save(flags);
 		if (!graph_lock()) {
-			raw_local_irq_restore(flags);
 			return NULL;
 		}
 	}
 out_unlock_set:
 	graph_unlock();
-	raw_local_irq_restore(flags);
 
 out_set_class_cache:
 	if (!subclass || force)
@@ -870,11 +867,9 @@ static int add_lock_to_list(struct lock_class *class, struct lock_class *this,
 	entry->distance = distance;
 	entry->trace = *trace;
 	/*
-	 * Since we never remove from the dependency list, the list can
-	 * be walked lockless by other CPUs, it's only allocation
-	 * that must be protected by the spinlock. But this also means
-	 * we must make new entries visible only once writes to the
-	 * entry become visible - hence the RCU op:
+	 * Both allocation and removal are done under the graph lock; but
+	 * iteration is under RCU-sched; see look_up_lock_class() and
+	 * lockdep_free_key_range().
 	 */
 	list_add_tail_rcu(&entry->entry, head);
 
@@ -1025,7 +1020,9 @@ static int __bfs(struct lock_list *source_entry,
 		else
 			head = &lock->class->locks_before;
 
-		list_for_each_entry(entry, head, entry) {
+		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
+
+		list_for_each_entry_rcu(entry, head, entry) {
 			if (!lock_accessed(entry)) {
 				unsigned int cq_depth;
 				mark_lock_accessed(entry, lock);
@@ -2022,7 +2019,7 @@ static inline int lookup_chain_cache(struct task_struct *curr,
 	 * We can walk it lock-free, because entries only get added
 	 * to the hash:
 	 */
-	list_for_each_entry(chain, hash_head, entry) {
+	list_for_each_entry_rcu(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 cache_hit:
 			debug_atomic_inc(chain_lookup_hits);
@@ -2996,8 +2993,18 @@ void lockdep_init_map(struct lockdep_map *lock, const char *name,
 	if (unlikely(!debug_locks))
 		return;
 
-	if (subclass)
+	if (subclass) {
+		unsigned long flags;
+
+		if (DEBUG_LOCKS_WARN_ON(current->lockdep_recursion))
+			return;
+
+		raw_local_irq_save(flags);
+		current->lockdep_recursion = 1;
 		register_lock_class(lock, subclass, 1);
+		current->lockdep_recursion = 0;
+		raw_local_irq_restore(flags);
+	}
 }
 EXPORT_SYMBOL_GPL(lockdep_init_map);
 
@@ -3887,9 +3894,17 @@ static inline int within(const void *addr, void *start, unsigned long size)
 	return addr >= start && addr < start + size;
 }
 
+/*
+ * Used in module.c to remove lock classes from memory that is going to be
+ * freed; and possibly re-used by other modules.
+ *
+ * We will have had one sync_sched() before getting here, so we're guaranteed
+ * nobody will look up these exact classes -- they're properly dead but still
+ * allocated.
+ */
 void lockdep_free_key_range(void *start, unsigned long size)
 {
-	struct lock_class *class, *next;
+	struct lock_class *class;
 	struct list_head *head;
 	unsigned long flags;
 	int i;
@@ -3905,7 +3920,7 @@ void lockdep_free_key_range(void *start, unsigned long size)
 		head = classhash_table + i;
 		if (list_empty(head))
 			continue;
-		list_for_each_entry_safe(class, next, head, hash_entry) {
+		list_for_each_entry_rcu(class, head, hash_entry) {
 			if (within(class->key, start, size))
 				zap_class(class);
 			else if (within(class->name, start, size))
@@ -3916,11 +3931,25 @@ void lockdep_free_key_range(void *start, unsigned long size)
 	if (locked)
 		graph_unlock();
 	raw_local_irq_restore(flags);
+
+	/*
+	 * Wait for any possible iterators from look_up_lock_class() to pass
+	 * before continuing to free the memory they refer to.
+	 *
+	 * sync_sched() is sufficient because the read-side runs with IRQs
+	 * disabled.
+	 */
+	synchronize_sched();
+
+	/*
+	 * XXX at this point we could return the resources to the pool;
+	 * instead we leak them. We would need to change to bitmap allocators
+	 * instead of the linear allocators we have now.
+	 */
 }
 
 void lockdep_reset_lock(struct lockdep_map *lock)
 {
-	struct lock_class *class, *next;
+	struct lock_class *class;
 	struct list_head *head;
 	unsigned long flags;
 	int i, j;
@@ -3948,7 +3977,7 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 		head = classhash_table + i;
 		if (list_empty(head))
 			continue;
-		list_for_each_entry_safe(class, next, head, hash_entry) {
+		list_for_each_entry_rcu(class, head, hash_entry) {
 			int match = 0;
 
 			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
diff --git a/kernel/module.c b/kernel/module.c
index b3d634ed06c9..99fdf94efce8 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1865,7 +1865,7 @@ static void free_module(struct module *mod)
 	kfree(mod->args);
 	percpu_modfree(mod);
 
-	/* Free lock-classes: */
+	/* Free lock-classes; relies on the preceding sync_rcu(). */
 	lockdep_free_key_range(mod->module_core, mod->core_size);
 
 	/* Finally, free the core (containing the module structure) */
@@ -3349,9 +3349,6 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	module_bug_cleanup(mod);
 	mutex_unlock(&module_mutex);
 
-	/* Free lock-classes: */
-	lockdep_free_key_range(mod->module_core, mod->core_size);
-
 	/* we can't deallocate the module until we clear memory protection */
 	unset_module_init_ro_nx(mod);
 	unset_module_core_ro_nx(mod);
@@ -3375,6 +3372,9 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	synchronize_rcu();
 	mutex_unlock(&module_mutex);
  free_module:
+	/* Free lock-classes; relies on the preceding sync_rcu() */
+	lockdep_free_key_range(mod->module_core, mod->core_size);
+
 	module_deallocate(mod, info);
  free_copy:
 	free_copy(info);

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [GIT PULL] locking fix
@ 2015-03-01 16:57 Ingo Molnar
  0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2015-03-01 16:57 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Paul E. McKenney,
	Andrew Morton, Mike Galbraith

Linus,

Please pull the latest locking-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-for-linus

   # HEAD: 9d3e2d02f54160725d97f4ab1e1e8de493fbf33a locking/rtmutex: Set state back to running on error

An rtmutex deadlock path fixlet.

 Thanks,

	Ingo

------------------>
Sebastian Andrzej Siewior (1):
      locking/rtmutex: Set state back to running on error


 kernel/locking/rtmutex.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index e16e5542bf13..6357265a31ad 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1193,6 +1193,7 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state,
 		ret = __rt_mutex_slowlock(lock, state, timeout, &waiter);
 
 	if (unlikely(ret)) {
+		__set_current_state(TASK_RUNNING);
 		if (rt_mutex_has_waiters(lock))
 			remove_waiter(lock, &waiter);
 		rt_mutex_handle_deadlock(ret, chwalk, &waiter);

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2013-10-27 19:59             ` Linus Torvalds
@ 2013-10-28  8:47               ` Ingo Molnar
  0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2013-10-28  8:47 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Maarten Lankhorst, Linux Kernel Mailing List, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton


* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Sun, Oct 27, 2013 at 12:56 PM, Maarten Lankhorst
> <maarten.lankhorst@canonical.com> wrote:
> >
> > And this is why ww_ctx == NULL is now passed as an inline 
> > argument. :)
> 
> Well, more than that - the "optimization" has been done at the 
> source code level, so that the behavior is no longer a matter 
> about how well the compiler optimizes it any more.
> 
> I'm not complaining about the fix. I'm complaining about how the 
> fix was claimed to be due to a compiler bug. The "documentation" 
> for the fix (ie the commit message) was actively misleading.

Agreed, there was quite a bit of back and forth and I genuinely got 
confused and thought it's purely about a compiler bug (hence the 
misleading pull request) - will watch out for that pattern better 
next time around.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2013-10-27 19:56           ` Maarten Lankhorst
@ 2013-10-27 19:59             ` Linus Torvalds
  2013-10-28  8:47               ` Ingo Molnar
  0 siblings, 1 reply; 30+ messages in thread
From: Linus Torvalds @ 2013-10-27 19:59 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: Ingo Molnar, Linux Kernel Mailing List, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton

On Sun, Oct 27, 2013 at 12:56 PM, Maarten Lankhorst
<maarten.lankhorst@canonical.com> wrote:
>
> And this is why ww_ctx == NULL is now passed as an inline argument. :)

Well, more than that - the "optimization" has been done at the source
code level, so that the behavior is no longer a matter about how well
the compiler optimizes it any more.

I'm not complaining about the fix. I'm complaining about how the fix
was claimed to be due to a compiler bug. The "documentation" for the
fix (ie the commit message) was actively misleading.

               Linus

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2013-10-27 19:51         ` Linus Torvalds
@ 2013-10-27 19:56           ` Maarten Lankhorst
  2013-10-27 19:59             ` Linus Torvalds
  0 siblings, 1 reply; 30+ messages in thread
From: Maarten Lankhorst @ 2013-10-27 19:56 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Ingo Molnar, Linux Kernel Mailing List, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton

op 27-10-13 20:51, Linus Torvalds schreef:
> On Sun, Oct 27, 2013 at 12:37 PM, Maarten Lankhorst
> <maarten.lankhorst@canonical.com> wrote:
>> I would love for a compiler to become that smart though, but I do not think it's likely.
> Dammit, even if that is true, then write the conditional *correctly*.
>
> As mentioned, the conditional
>
>     __builtin_constant_p(ww_ctx) && ww_ctx == NULL
>
> is actually sensible, in a way the original one was *not*. It actually
> tests what you apparently intended to test, and is more readable to
> humans to boot.
Yeah that mail arrived after I sent mine, I agree that this would have been more sensible.
> And no, it still isn't actually guaranteed to do what you want it to
> do. Historically, in gcc, __builtin_constant_p() really only ever
> worked in macros, because by the time you use it in inline functions,
> a constant NULL in the caller will have been turned into an argument
> variable in the inline function, and __builtin_constant_p() would be
> done before that was optimized away. Over the years, gcc has pushed
> some of the builtin evaluation deeper down, and these days it actually
> works within inline functions, but my point that
> __builtin_constant_p() is about a certain level of compiler
> optimization is very much true: you're actually testing for a compiler
> optimization detail.
>
> I know the LLVM people had similar issues with this comparison, so
> these days it's not even just about gcc versions. We may never have
> cared very much about icc, but llvm is actually an interesting target
> compiler.
>
And this is why ww_ctx == NULL is now passed as an inline argument. :)

~Maarten

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2013-10-27 19:37       ` Maarten Lankhorst
@ 2013-10-27 19:51         ` Linus Torvalds
  2013-10-27 19:56           ` Maarten Lankhorst
  0 siblings, 1 reply; 30+ messages in thread
From: Linus Torvalds @ 2013-10-27 19:51 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: Ingo Molnar, Linux Kernel Mailing List, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton

On Sun, Oct 27, 2013 at 12:37 PM, Maarten Lankhorst
<maarten.lankhorst@canonical.com> wrote:
>
> I would love for a compiler to become that smart though, but I do not think it's likely.

Dammit, even if that is true, then write the conditional *correctly*.

As mentioned, the conditional

    __builtin_constant_p(ww_ctx) && ww_ctx == NULL

is actually sensible, in a way the original one was *not*. It actually
tests what you apparently intended to test, and is more readable to
humans to boot.

And no, it still isn't actually guaranteed to do what you want it to
do. Historically, in gcc, __builtin_constant_p() really only ever
worked in macros, because by the time you use it in inline functions,
a constant NULL in the caller will have been turned into an argument
variable in the inline function, and __builtin_constant_p() would be
done before that was optimized away. Over the years, gcc has pushed
some of the builtin evaluation deeper down, and these days it actually
works within inline functions, but my point that
__builtin_constant_p() is about a certain level of compiler
optimization is very much true: you're actually testing for a compiler
optimization detail.

I know the LLVM people had similar issues with this comparison, so
these days it's not even just about gcc versions. We may never have
cared very much about icc, but llvm is actually an interesting target
compiler.

            Linus

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2013-10-27 19:23     ` Linus Torvalds
  2013-10-27 19:35       ` Linus Torvalds
@ 2013-10-27 19:37       ` Maarten Lankhorst
  2013-10-27 19:51         ` Linus Torvalds
  1 sibling, 1 reply; 30+ messages in thread
From: Maarten Lankhorst @ 2013-10-27 19:37 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Ingo Molnar, Linux Kernel Mailing List, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton

op 27-10-13 20:23, Linus Torvalds schreef:
> On Sun, Oct 27, 2013 at 12:00 PM, Maarten Lankhorst
> <maarten.lankhorst@canonical.com> wrote:
>> op 27-10-13 18:28, Linus Torvalds schreef:
>>> That expression is largely equivalent to
>>> "__builtin_constant_p(ww_ctx)" (because iff ww_ctx is constant, then
>>> the comparison to NULL is constant), which is actually much easier to
>>> read, while carrying a totally different semantic meaning. Making
>>> things worse, the comparison to NULL *may* be marked constant under
>>> some very random situations (ie the compiler could turn a "taking an
>>> address of a variable is never NULL" kind of knowledge and combining
>>> it with other knowledge, and turn a complicated "ctx" expression into
>>> a "I know this cannot be NULL" thing, and thus the "== NULL" is a
>>> constant, even though ctx itself is some dynamic calculation).
>>>
>>> Whoever wrote the original should be shot. And this commit shouldn't
>>> have been marked as being somehow about gcc-version dependence, but
>>> about removing completely crap code.
>>>
>> Unfortunately gcc disagreed there, which was another compiler bug.
> Stop this idiotic "blame gcc bug" crap. Which part of my explanation
> for why it was *NOT* a compiler bug did you not understand?
>
>> __builtin_constant_p(ww_ctx) was NOT equal to __builtin_constant_p(ww_ctx == NULL), iirc.
> See my "largely equivalent" comment, with the *EXTRA* logic that gcc
> may actually find cases where the comparison is a constant even if the
> ww_ctx thing itself isn't a constant.
Sure in the theoretical case it's possible.
>>  __builtin_constant_p(ww_ctx == NULL) is equal to __builtin_constant_p(ww_ctx != NULL), but
>> the former is more readable, since it shows we expect ww_ctx to be null.
> Stop the f*cking around already! The whole "we expect ww_ctx to be
> null" thing shows that YOU DO NOT SEEM TO UNDERSTAND WHAT THE TEST
> ACTUALLY IS!
>
> The expression
>
>    __builtin_constant_p(ww_ctx == NULL)
>
> has ABSOLUTELY NOTHING to do with whether ww_ctx is NULL or not!
> Christ, can you really not understand that?
I'm fully aware, I just think the compiler cannot know that the address is always non-null for a generic function that takes an argument and isn't inlined.

> For example, ww_ctx could be "&static_variable", and the compiler can
> - and some compiles _will_ - say that ww_ctx clearly cannot be NULL,
> so "ww_ctx == NULL" is 0, which is a constant, so the
> __builtin_constant_p() expression returns true. See? That expression
> has absolutely NOTHING to do with whether you passed in NULL or not.
> NOTHING.
but __ww_mutex_lock isn't inlined..
> That __builtin_constant_p() tests whether the comparison is
> *CONSTANT*. And "0" is just as much a constant as "1" is. Really. So
> the whole f*cking expression is total and utter crap, because it is
> entirely and utterly senseless. It lacks all meaning. It's not
> actually testing for NULL at all. Never was, never will.
>
> The *ONLY* thing it is testing for is "how much can the compiler
> optimize this", and as such the *ONLY* thing it tests for is compiler
> differences.
>
> Really. Seriously. If you start blaming the compiler for different
> compilers giving different results, the only thing *that* shows is
> that you didn't understand the expression to begin with.
>
>> But yeah I guess it was too broken in gcc after all, so that's why it had to be killed altogether.
> NO NO NO NO. No a f*cking thousand times. It's not "too broken in
> gcc". It's too broken in the source code, and the fact that you don't
> even understand that is sad. You wrote the code, and you seem to be
> unable to admit that *your* code was buggy.
>
> It's not a compiler bug. It's your bug. Stand up like a man, instead
> of trying to flail around and blame anything else but yourself.
>
> So guys, get your act together, and stop blaming the compiler already.
I never denied that my original code contained bugs, which is why I wrote that fix. I just don't believe gcc
will ever be smart enough to determine that ww_ctx is a non-null argument in all calls to __ww_mutex_lock,
and then determine for that reason ww_ctx != NULL would be an invariant.

I would love for a compiler to become that smart though, but I do not think it's likely.

But hey it was a bug, my code was buggy and I helped by suggesting how to write the correct fix.

~Maarten


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2013-10-27 19:23     ` Linus Torvalds
@ 2013-10-27 19:35       ` Linus Torvalds
  2013-10-27 19:37       ` Maarten Lankhorst
  1 sibling, 0 replies; 30+ messages in thread
From: Linus Torvalds @ 2013-10-27 19:35 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: Ingo Molnar, Linux Kernel Mailing List, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton

On Sun, Oct 27, 2013 at 12:23 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> The *ONLY* thing it is testing for is "how much can the compiler
> optimize this", and as such the *ONLY* thing it tests for is compiler
> differences.

Side note: testing "can the compiler optimize this expression at
compile time" is actually sometimes an interesting question, so it can
be a valid thing to test.

But people should understand that the question is literally about THAT
(ie visibility into compiler optimization) rather than about the value
itself.

So generally, the only thing that a __builtin_constant_p() test can be
used for is in *conjunction* with having an actual test for an actual
value, and then having special-case logic that pertains to that value.

So for example, *this* is a valid test:

    if (__builtin_constant_p(ww_ctx) && ww_ctx == NULL) {
        ... special compile-time shortcut for the NULL value ..
    } else {
        ... generic code that *also* handles the NULL value ..
    }

and it's useful for triggering a compile-time optimized code-sequence
that is only true for NULL. But because __builtin_constant_p() is
about "how well can the compiler optimize this", that "else" statement
had better be able to handle the generic case too.

And yes, there are a few places where we do expect a certain minimal
set of optimizations. So in some cases we *might* have the rule that
the only valid use of NULL in a case like the above is when the
pointer passed in is passed in as a constant. And then we might say
"we rely on the compiler always returning true for
__builtin_constant_p(NULL)", and then we might say "so the "generic"
version of the code is guaranteed to never see NULL".

But notice how *different* that

    __builtin_constant_p(ww_ctx) && ww_ctx == NULL

test is from

    __builtin_constant_p(ww_ctx == NULL)

and really, the two tests are *fundamentally* really really different.
The first one can make sense. While the second one is pure and utter
garbage.

                          Linus

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2013-10-27 19:00   ` Maarten Lankhorst
@ 2013-10-27 19:23     ` Linus Torvalds
  2013-10-27 19:35       ` Linus Torvalds
  2013-10-27 19:37       ` Maarten Lankhorst
  0 siblings, 2 replies; 30+ messages in thread
From: Linus Torvalds @ 2013-10-27 19:23 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: Ingo Molnar, Linux Kernel Mailing List, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton

On Sun, Oct 27, 2013 at 12:00 PM, Maarten Lankhorst
<maarten.lankhorst@canonical.com> wrote:
> op 27-10-13 18:28, Linus Torvalds schreef:
>>
>> That expression is largely equivalent to
>> "__builtin_constant_p(ww_ctx)" (because iff ww_ctx is constant, then
>> the comparison to NULL is constant), which is actually much easier to
>> read, while carrying a totally different semantic meaning. Making
>> things worse, the comparison to NULL *may* be marked constant under
>> some very random situations (ie the compiler could turn a "taking an
>> address of a variable is never NULL" kind of knowledge and combining
>> it with other knowledge, and turn a complicated "ctx" expression into
>> a "I know this cannot be NULL" thing, and thus the "== NULL" is a
>> constant, even though ctx itself is some dynamic calculation).
>>
>> Whoever wrote the original should be shot. And this commit shouldn't
>> have been marked as being somehow about gcc-version dependence, but
>> about removing completely crap code.
>>
> Unfortunately gcc disagreed there, which was another compiler bug.

Stop this idiotic "blame gcc bug" crap. Which part of my explanation
for why it was *NOT* a compiler bug did you not understand?

> __builtin_constant_p(ww_ctx) was NOT equal to __builtin_constant_p(ww_ctx == NULL), iirc.

See my "largely equivalent" comment, with the *EXTRA* logic that gcc
may actually find cases where the comparison is a constant even if the
ww_ctx thing itself isn't a constant.

> __builtin_constant_p(ww_ctx == NULL) is equal to __builtin_constant_p(ww_ctx != NULL), but
> the former is more readable, since it shows we expect ww_ctx to be null.

Stop the f*cking around already! The whole "we expect ww_ctx to be
null" thing shows that YOU DO NOT SEEM TO UNDERSTAND WHAT THE TEST
ACTUALLY IS!

The expression

   __builtin_constant_p(ww_ctx == NULL)

has ABSOLUTELY NOTHING to do with whether ww_ctx is NULL or not!
Christ, can you really not understand that?

For example, ww_ctx could be "&static_variable", and the compiler can
- and some compiles _will_ - say that ww_ctx clearly cannot be NULL,
so "ww_ctx == NULL" is 0, which is a constant, so the
__builtin_constant_p() expression returns true. See? That expression
has absolutely NOTHING to do with whether you passed in NULL or not.
NOTHING.

That __builtin_constant_p() tests whether the comparison is
*CONSTANT*. And "0" is just as much a constant as "1" is. Really. So
the whole f*cking expression is total and utter crap, because it is
entirely and utterly senseless. It lacks all meaning. It's not
actually testing for NULL at all. Never was, never will.

The *ONLY* thing it is testing for is "how much can the compiler
optimize this", and as such the *ONLY* thing it tests for is compiler
differences.

Really. Seriously. If you start blaming the compiler for different
compilers giving different results, the only thing *that* shows is
that you didn't understand the expression to begin with.

> But yeah I guess it was too broken in gcc after all, so that's why it had to be killed altogether.

NO NO NO NO. No a f*cking thousand times. It's not "too broken in
gcc". It's too broken in the source code, and the fact that you don't
even understand that is sad. You wrote the code, and you seem to be
unable to admit that *your* code was buggy.

It's not a compiler bug. It's your bug. Stand up like a man, instead
of trying to flail around and blame anything else but yourself.

So guys, get your act together, and stop blaming the compiler already.

               Linus

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2013-10-27 17:28 ` Linus Torvalds
@ 2013-10-27 19:00   ` Maarten Lankhorst
  2013-10-27 19:23     ` Linus Torvalds
  0 siblings, 1 reply; 30+ messages in thread
From: Maarten Lankhorst @ 2013-10-27 19:00 UTC (permalink / raw)
  To: Linus Torvalds, Ingo Molnar
  Cc: Linux Kernel Mailing List, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

op 27-10-13 18:28, Linus Torvalds schreef:
> On Sat, Oct 26, 2013 at 5:19 AM, Ingo Molnar <mingo@kernel.org> wrote:
>> This tree fixes a boot crash in CONFIG_DEBUG_MUTEXES=y kernels, on
>> kernels built with GCC 3.x. (There are still such distros.)
> Btw, it's really not just gcc 3.x. That code was (a) incomprehensible,
> (b) wrong and (c) caused problems for LLVM too.
>
> It was wrong because "__builtin_constant_p(ww_ctx == NULL)" simply
> makes no sense.
>
> Why?
>
> That expression is largely equivalent to
> "__builtin_constant_p(ww_ctx)" (because iff ww_ctx is constant, then
> the comparison to NULL is constant), which is actually much easier to
> read, while carrying a totally different semantic meaning. Making
> things worse, the comparison to NULL *may* be marked constant under
> some very random situations (ie the compiler could turn a "taking an
> address of a variable is never NULL" kind of knowledge and combining
> it with other knowledge, and turn a complicated "ctx" expression into
> a "I know this cannot be NULL" thing, and thus the "== NULL" is a
> constant, even though ctx itself is some dynamic calculation).
>
> Whoever wrote the original should be shot. And this commit shouldn't
> have been marked as being somehow about gcc-version dependence, but
> about removing completely crap code.
>
Unfortunately gcc disagreed there, which was another compiler bug.
__builtin_constant_p(ww_ctx) was NOT equal to __builtin_constant_p(ww_ctx == NULL), iirc.
__builtin_constant_p(ww_ctx == NULL) is equal to __builtin_constant_p(ww_ctx != NULL), but
the former is more readable, since it shows we expect ww_ctx to be null.

But yeah I guess it was too broken in gcc after all, so that's why it had to be killed altogether.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [GIT PULL] locking fix
  2013-10-26 12:19 Ingo Molnar
@ 2013-10-27 17:28 ` Linus Torvalds
  2013-10-27 19:00   ` Maarten Lankhorst
  0 siblings, 1 reply; 30+ messages in thread
From: Linus Torvalds @ 2013-10-27 17:28 UTC (permalink / raw)
  To: Ingo Molnar, Maarten Lankhorst
  Cc: Linux Kernel Mailing List, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

On Sat, Oct 26, 2013 at 5:19 AM, Ingo Molnar <mingo@kernel.org> wrote:
>
> This tree fixes a boot crash in CONFIG_DEBUG_MUTEXES=y kernels, on
> kernels built with GCC 3.x. (There are still such distros.)

Btw, it's really not just gcc 3.x. That code was (a) incomprehensible,
(b) wrong and (c) caused problems for LLVM too.

It was wrong because "__builtin_constant_p(ww_ctx == NULL)" simply
makes no sense.

Why?

That expression is largely equivalent to
"__builtin_constant_p(ww_ctx)" (because iff ww_ctx is constant, then
the comparison to NULL is constant), which is actually much easier to
read, while carrying a totally different semantic meaning. Making
things worse, the comparison to NULL *may* be marked constant under
some very random situations (ie the compiler could turn a "taking an
address of a variable is never NULL" kind of knowledge and combining
it with other knowledge, and turn a complicated "ctx" expression into
a "I know this cannot be NULL" thing, and thus the "== NULL" is a
constant, even though ctx itself is some dynamic calculation).

Whoever wrote the original should be shot. And this commit shouldn't
have been marked as being somehow about gcc-version dependence, but
about removing completely crap code.

                    Linus

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [GIT PULL] locking fix
@ 2013-10-26 12:19 Ingo Molnar
  2013-10-27 17:28 ` Linus Torvalds
  0 siblings, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2013-10-26 12:19 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest core-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core-urgent-for-linus

   # HEAD: b0267507dfd0187fb7840a0ec461a510a7f041c5 mutex: Avoid gcc version dependent __builtin_constant_p() usage

This tree fixes a boot crash in CONFIG_DEBUG_MUTEXES=y kernels, on 
kernels built with GCC 3.x. (There are still such distros.)

 Thanks,

	Ingo

------------------>
Tetsuo Handa (1):
      mutex: Avoid gcc version dependent __builtin_constant_p() usage


 kernel/mutex.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/kernel/mutex.c b/kernel/mutex.c
index 6d647ae..d24105b 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -410,7 +410,7 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock,
 static __always_inline int __sched
 __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		    struct lockdep_map *nest_lock, unsigned long ip,
-		    struct ww_acquire_ctx *ww_ctx)
+		    struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
 {
 	struct task_struct *task = current;
 	struct mutex_waiter waiter;
@@ -450,7 +450,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		struct task_struct *owner;
 		struct mspin_node  node;
 
-		if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+		if (use_ww_ctx && ww_ctx->acquired > 0) {
 			struct ww_mutex *ww;
 
 			ww = container_of(lock, struct ww_mutex, base);
@@ -480,7 +480,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		if ((atomic_read(&lock->count) == 1) &&
 		    (atomic_cmpxchg(&lock->count, 1, 0) == 1)) {
 			lock_acquired(&lock->dep_map, ip);
-			if (!__builtin_constant_p(ww_ctx == NULL)) {
+			if (use_ww_ctx) {
 				struct ww_mutex *ww;
 				ww = container_of(lock, struct ww_mutex, base);
 
@@ -551,7 +551,7 @@ slowpath:
 			goto err;
 		}
 
-		if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+		if (use_ww_ctx && ww_ctx->acquired > 0) {
 			ret = __mutex_lock_check_stamp(lock, ww_ctx);
 			if (ret)
 				goto err;
@@ -575,7 +575,7 @@ skip_wait:
 	lock_acquired(&lock->dep_map, ip);
 	mutex_set_owner(lock);
 
-	if (!__builtin_constant_p(ww_ctx == NULL)) {
+	if (use_ww_ctx) {
 		struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
 		struct mutex_waiter *cur;
 
@@ -615,7 +615,7 @@ mutex_lock_nested(struct mutex *lock, unsigned int subclass)
 {
 	might_sleep();
 	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
-			    subclass, NULL, _RET_IP_, NULL);
+			    subclass, NULL, _RET_IP_, NULL, 0);
 }
 
 EXPORT_SYMBOL_GPL(mutex_lock_nested);
@@ -625,7 +625,7 @@ _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
 {
 	might_sleep();
 	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
-			    0, nest, _RET_IP_, NULL);
+			    0, nest, _RET_IP_, NULL, 0);
 }
 
 EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock);
@@ -635,7 +635,7 @@ mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass)
 {
 	might_sleep();
 	return __mutex_lock_common(lock, TASK_KILLABLE,
-				   subclass, NULL, _RET_IP_, NULL);
+				   subclass, NULL, _RET_IP_, NULL, 0);
 }
 EXPORT_SYMBOL_GPL(mutex_lock_killable_nested);
 
@@ -644,7 +644,7 @@ mutex_lock_interruptible_nested(struct mutex *lock, unsigned int subclass)
 {
 	might_sleep();
 	return __mutex_lock_common(lock, TASK_INTERRUPTIBLE,
-				   subclass, NULL, _RET_IP_, NULL);
+				   subclass, NULL, _RET_IP_, NULL, 0);
 }
 
 EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested);
@@ -682,7 +682,7 @@ __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
 	might_sleep();
 	ret =  __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE,
-				   0, &ctx->dep_map, _RET_IP_, ctx);
+				   0, &ctx->dep_map, _RET_IP_, ctx, 1);
 	if (!ret && ctx->acquired > 1)
 		return ww_mutex_deadlock_injection(lock, ctx);
 
@@ -697,7 +697,7 @@ __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 
 	might_sleep();
 	ret = __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE,
-				  0, &ctx->dep_map, _RET_IP_, ctx);
+				  0, &ctx->dep_map, _RET_IP_, ctx, 1);
 
 	if (!ret && ctx->acquired > 1)
 		return ww_mutex_deadlock_injection(lock, ctx);
@@ -809,28 +809,28 @@ __mutex_lock_slowpath(atomic_t *lock_count)
 	struct mutex *lock = container_of(lock_count, struct mutex, count);
 
 	__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0,
-			    NULL, _RET_IP_, NULL);
+			    NULL, _RET_IP_, NULL, 0);
 }
 
 static noinline int __sched
 __mutex_lock_killable_slowpath(struct mutex *lock)
 {
 	return __mutex_lock_common(lock, TASK_KILLABLE, 0,
-				   NULL, _RET_IP_, NULL);
+				   NULL, _RET_IP_, NULL, 0);
 }
 
 static noinline int __sched
 __mutex_lock_interruptible_slowpath(struct mutex *lock)
 {
 	return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0,
-				   NULL, _RET_IP_, NULL);
+				   NULL, _RET_IP_, NULL, 0);
 }
 
 static noinline int __sched
 __ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
 	return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0,
-				   NULL, _RET_IP_, ctx);
+				   NULL, _RET_IP_, ctx, 1);
 }
 
 static noinline int __sched
@@ -838,7 +838,7 @@ __ww_mutex_lock_interruptible_slowpath(struct ww_mutex *lock,
 					    struct ww_acquire_ctx *ctx)
 {
 	return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0,
-				   NULL, _RET_IP_, ctx);
+				   NULL, _RET_IP_, ctx, 1);
 }
 
 #endif

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [GIT PULL] locking fix
@ 2011-02-15 17:02 Ingo Molnar
  0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2011-02-15 17:02 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest core-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git core-fixes-for-linus

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      Revert "lockdep, timer: Fix del_timer_sync() annotation"


 kernel/timer.c |    8 +++-----
 1 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/kernel/timer.c b/kernel/timer.c
index d53ce66..d645992 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -959,7 +959,7 @@ EXPORT_SYMBOL(try_to_del_timer_sync);
  *
  * Synchronization rules: Callers must prevent restarting of the timer,
  * otherwise this function is meaningless. It must not be called from
- * hardirq contexts. The caller must not hold locks which would prevent
+ * interrupt contexts. The caller must not hold locks which would prevent
  * completion of the timer's handler. The timer's handler must not call
  * add_timer_on(). Upon exit the timer is not queued and the handler is
  * not running on any CPU.
@@ -971,12 +971,10 @@ int del_timer_sync(struct timer_list *timer)
 #ifdef CONFIG_LOCKDEP
 	unsigned long flags;
 
-	raw_local_irq_save(flags);
-	local_bh_disable();
+	local_irq_save(flags);
 	lock_map_acquire(&timer->lockdep_map);
 	lock_map_release(&timer->lockdep_map);
-	_local_bh_enable();
-	raw_local_irq_restore(flags);
+	local_irq_restore(flags);
 #endif
 	/*
 	 * don't use it in hardirq context, because it

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2021-03-28 19:23 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-16 16:01 [GIT PULL] locking fix Ingo Molnar
2019-05-16 17:57 ` Linus Torvalds
2019-05-16 18:39   ` Greg KH
2019-05-16 18:42     ` Linus Torvalds
2019-05-16 23:55       ` Sasha Levin
2019-05-17 12:16         ` Greg KH
2019-05-16 18:20 ` pr-tracker-bot
  -- strict thread matches above, loose matches on Subject: below --
2021-03-28 10:28 Ingo Molnar
2021-03-28 19:22 ` pr-tracker-bot
2019-07-14 11:36 Ingo Molnar
2019-07-14 18:45 ` pr-tracker-bot
2019-04-12 11:53 Ingo Molnar
2019-04-13  4:05 ` pr-tracker-bot
2017-07-21 10:11 Ingo Molnar
2016-09-13 18:11 Ingo Molnar
2016-04-16  9:16 Ingo Molnar
2015-08-14  7:08 Ingo Molnar
2015-03-28 10:07 Ingo Molnar
2015-03-01 16:57 Ingo Molnar
2013-10-26 12:19 Ingo Molnar
2013-10-27 17:28 ` Linus Torvalds
2013-10-27 19:00   ` Maarten Lankhorst
2013-10-27 19:23     ` Linus Torvalds
2013-10-27 19:35       ` Linus Torvalds
2013-10-27 19:37       ` Maarten Lankhorst
2013-10-27 19:51         ` Linus Torvalds
2013-10-27 19:56           ` Maarten Lankhorst
2013-10-27 19:59             ` Linus Torvalds
2013-10-28  8:47               ` Ingo Molnar
2011-02-15 17:02 Ingo Molnar
