LKML Archive on lore.kernel.org
From: Heiko Carstens <heiko.carstens@de.ibm.com>
To: Andrew Morton <akpm@osdl.org>, Ingo Molnar <mingo@elte.hu>,
	Andi Kleen <ak@suse.de>, Jan Glauber <jan.glauber@de.ibm.com>,
	Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: linux-kernel@vger.kernel.org
Subject: [patch] i386/x86_64: smp_call_function locking inconsistency
Date: Thu, 8 Feb 2007 21:32:10 +0100	[thread overview]
Message-ID: <20070208203210.GB9798@osiris.ibm.com> (raw)

On i386/x86_64 smp_call_function_single() takes call_lock with
spin_lock_bh(). To me this would imply that it is legal to call
smp_call_function_single() from softirq context.
It's not, since smp_call_function() takes call_lock with just
spin_lock(). We can easily deadlock:

-> [process context]
-> smp_call_function()
-> spin_lock(&call_lock)
-> IRQ -> do_softirq -> tasklet
-> [softirq context]
-> smp_call_function_single()
-> spin_lock_bh(&call_lock)
-> dead
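
For illustration, a minimal sketch of how this could be triggered
(hypothetical example code, not part of the patch; assumes the current
4/5-argument prototypes of smp_call_function{,_single}()):

#include <linux/interrupt.h>
#include <linux/smp.h>

static void noop(void *info)
{
	/* runs on the remote CPU via IPI */
}

/* softirq context: scheduled from some interrupt handler */
static void demo_tasklet_fn(unsigned long data)
{
	/*
	 * Takes call_lock with spin_lock_bh(). If this tasklet runs on
	 * a CPU that is already inside smp_call_function() and holds
	 * call_lock via plain spin_lock(), we spin here forever.
	 */
	smp_call_function_single(1, noop, NULL, 0, 1);
}

static DECLARE_TASKLET(demo_tasklet, demo_tasklet_fn, 0);

/* process context, irqs enabled */
static void demo_process_context(void)
{
	/* takes call_lock with spin_lock(); softirqs stay enabled */
	smp_call_function(noop, NULL, 0, 1);
}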

So either all spin_lock_bh's should be converted to spin_lock,
which would limit smp_call_function()/smp_call_function_single()
to process context with irqs enabled, or the spin_lock's could be
converted to spin_lock_bh, which would make it possible to call
these two functions even from softirq context. AFAICS the latter
should be safe.
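
Roughly, what the second option relies on (sketch only, not taken from
the patch): spin_lock_bh() disables softirqs on the local CPU before
taking the lock, so a tasklet like the one above cannot preempt the
critical section and re-enter call_lock; it is simply deferred until
the unlock.

	spin_lock_bh(&call_lock);   /* ~ local_bh_disable() + spin_lock()   */
	/* ... send IPIs, wait for the other CPUs ... */
	spin_unlock_bh(&call_lock); /* ~ spin_unlock() + local_bh_enable() */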

Just stumbled across this since we have the same inconsistency
on s390 and our new iucv driver makes use of smp_call_function
in softirq context.

The patch below converts the spin_lock's in i386/x86_64 to
spin_lock_bh, so it would be consistent with s390.

Patch is _not_ compile tested.

Cc: Andi Kleen <ak@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
---
 arch/i386/kernel/smp.c   |    8 ++++----
 arch/x86_64/kernel/smp.c |   10 +++++-----
 2 files changed, 9 insertions(+), 9 deletions(-)

Index: linux-2.6/arch/i386/kernel/smp.c
===================================================================
--- linux-2.6.orig/arch/i386/kernel/smp.c
+++ linux-2.6/arch/i386/kernel/smp.c
@@ -527,7 +527,7 @@ static struct call_data_struct *call_dat
  * remote CPUs are nearly ready to execute <<func>> or are or have executed.
  *
  * You must not call this function with disabled interrupts or from a
- * hardware interrupt handler or from a bottom half handler.
+ * hardware interrupt handler.
  */
 int smp_call_function (void (*func) (void *info), void *info, int nonatomic,
 			int wait)
@@ -536,10 +536,10 @@ int smp_call_function (void (*func) (voi
 	int cpus;
 
 	/* Holding any lock stops cpus from going down. */
-	spin_lock(&call_lock);
+	spin_lock_bh(&call_lock);
 	cpus = num_online_cpus() - 1;
 	if (!cpus) {
-		spin_unlock(&call_lock);
+		spin_unlock_bh(&call_lock);
 		return 0;
 	}
 
@@ -566,7 +566,7 @@ int smp_call_function (void (*func) (voi
 	if (wait)
 		while (atomic_read(&data.finished) != cpus)
 			cpu_relax();
-	spin_unlock(&call_lock);
+	spin_unlock_bh(&call_lock);
 
 	return 0;
 }
Index: linux-2.6/arch/x86_64/kernel/smp.c
===================================================================
--- linux-2.6.orig/arch/x86_64/kernel/smp.c
+++ linux-2.6/arch/x86_64/kernel/smp.c
@@ -439,15 +439,15 @@ static void __smp_call_function (void (*
  * remote CPUs are nearly ready to execute func or are or have executed.
  *
  * You must not call this function with disabled interrupts or from a
- * hardware interrupt handler or from a bottom half handler.
+ * hardware interrupt handler.
  * Actually there are a few legal cases, like panic.
  */
 int smp_call_function (void (*func) (void *info), void *info, int nonatomic,
 			int wait)
 {
-	spin_lock(&call_lock);
+	spin_lock_bh(&call_lock);
 	__smp_call_function(func,info,nonatomic,wait);
-	spin_unlock(&call_lock);
+	spin_unlock_bh(&call_lock);
 	return 0;
 }
 EXPORT_SYMBOL(smp_call_function);
@@ -477,13 +477,13 @@ void smp_send_stop(void)
 	if (reboot_force)
 		return;
 	/* Don't deadlock on the call lock in panic */
-	if (!spin_trylock(&call_lock)) {
+	if (!spin_trylock_bh(&call_lock)) {
 		/* ignore locking because we have panicked anyways */
 		nolock = 1;
 	}
 	__smp_call_function(smp_really_stop_cpu, NULL, 0, 0);
 	if (!nolock)
-		spin_unlock(&call_lock);
+		spin_unlock_bh(&call_lock);
 
 	local_irq_disable();
 	disable_local_APIC();
