LKML Archive on lore.kernel.org
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Waiman Long <longman@redhat.com>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Davidlohr Bueso <dave@stgolabs.net>
Subject: [patch 2/2] locking/rtmutex: Dequeue waiter on ww_mutex deadlock
Date: Wed, 25 Aug 2021 12:33:14 +0200 (CEST)
Message-ID: <20210825102454.042280541@linutronix.de>
In-Reply-To: <20210825101857.420032248@linutronix.de>

The rt_mutex-based ww_mutex variant first queues the new waiter in the
lock's rbtree and only then evaluates the ww_mutex-specific conditions
which might decide that the waiter has to back out. This check and the
conditional exit happen before the waiter is enqueued into the PI chain.
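
For illustration, the pre-fix ordering inside task_blocks_on_rt_mutex()
boils down to the following (a heavily condensed sketch based on the hunk
below, not verbatim kernel code; priority updates, the chain walk and most
locking details are omitted):

    /* Sketch: task_blocks_on_rt_mutex(), ordering before this fix */
    raw_spin_lock(&task->pi_lock);
    rt_mutex_enqueue(lock, waiter);     /* 1) waiter enters the lock's rbtree */
    task->pi_blocked_on = waiter;
    raw_spin_unlock(&task->pi_lock);

    if (build_ww_mutex() && ww_ctx) {
            /* 2) ww_mutex specific back-out check */
            res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx);
            if (res)
                    return res;         /* waiter stays enqueued in the rbtree */
    }

    if (!owner)
            return 0;

    /* 3) only reached when not backing out: waiter enters the PI chain */
    raw_spin_lock(&owner->pi_lock);
    if (waiter == rt_mutex_top_waiter(lock))
            rt_mutex_enqueue_pi(owner, waiter);
    raw_spin_unlock(&owner->pi_lock);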

The failure handling at the call site assumes that the waiter, if it is the
topmost waiter on the lock, is also queued in the PI chain and therefore
proceeds to adjust the unmodified PI chain, which results in rbtree
corruption.
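
For reference, that failure handling ends up in remove_waiter(), which
behaves roughly as follows (again a condensed sketch, not verbatim kernel
code):

    /* Sketch: remove_waiter(), run from the failure path at the call site */
    bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock));
    struct task_struct *owner = rt_mutex_owner(lock);

    raw_spin_lock(&current->pi_lock);
    rt_mutex_dequeue(lock, waiter);
    current->pi_blocked_on = NULL;
    raw_spin_unlock(&current->pi_lock);

    if (!owner || !is_top_waiter)
            return;

    raw_spin_lock(&owner->pi_lock);
    /*
     * This assumes the top waiter is also queued in the owner's pi_waiters
     * tree. On the ww_mutex back-out path it never was, so the dequeue
     * operates on a node which is not in that tree and corrupts it.
     */
    rt_mutex_dequeue_pi(owner, waiter);
    if (rt_mutex_has_waiters(lock))
            rt_mutex_enqueue_pi(owner, rt_mutex_top_waiter(lock));
    raw_spin_unlock(&owner->pi_lock);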

Prevent this by dequeueing the waiter from the lock's waiter tree and
clearing pi_blocked_on in the ww_mutex error exit path.

Fixes: add461325ec5 ("locking/rtmutex: Extend the rtmutex core to support ww_mutex")
Reported-by: Sebastian Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/locking/rtmutex.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1082,8 +1082,13 @@ static int __sched task_blocks_on_rt_mut
 		/* Check whether the waiter should back out immediately */
 		rtm = container_of(lock, struct rt_mutex, rtmutex);
 		res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx);
-		if (res)
+		if (res) {
+			raw_spin_lock(&task->pi_lock);
+			rt_mutex_dequeue(lock, waiter);
+			task->pi_blocked_on = NULL;
+			raw_spin_unlock(&task->pi_lock);
 			return res;
+		}
 	}
 
 	if (!owner)


Thread overview: 10+ messages
2021-08-25 10:33 [patch 0/2] locking/rtmutex: Cure two subtle bugs Thomas Gleixner
2021-08-25 10:33 ` [patch 1/2] locking/rtmutex: Dont dereference waiter lockless Thomas Gleixner
2021-08-25 14:17   ` [tip: locking/core] " tip-bot2 for Thomas Gleixner
2021-08-25 10:33 ` Thomas Gleixner [this message]
2021-08-25 14:17   ` [tip: locking/core] locking/rtmutex: Dequeue waiter on ww_mutex deadlock tip-bot2 for Thomas Gleixner
2021-08-25 11:40 ` [patch 0/2] locking/rtmutex: Cure two subtle bugs Peter Zijlstra
2021-08-26 13:26 ` Peter Zijlstra
2021-08-27  7:56   ` Sebastian Andrzej Siewior
2021-08-27 12:31   ` [tip: locking/core] locking/rtmutex: Return success on deadlock for ww_mutex waiters tip-bot2 for Peter Zijlstra
2021-08-27 12:31   ` [tip: locking/core] locking/rtmutex: Prevent spurious EDEADLK return caused by ww_mutexes tip-bot2 for Peter Zijlstra
