LKML Archive on lore.kernel.org
* [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
@ 2015-03-30  9:29 Preeti U Murthy
  2015-03-31  3:11 ` Nicolas Pitre
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Preeti U Murthy @ 2015-03-30  9:29 UTC (permalink / raw)
  To: peterz, mpe, tglx, rjw, mingo; +Cc: nicolas.pitre, linuxppc-dev, linux-kernel

It was found during a hotplug stress test on POWER that the machine
either hit softlockups or rcu_sched stall warnings.  The issue was
traced to commit 7cba160ad789 ("powernv/cpuidle: Redesign idle states
management"), which exposed the cpu_down() race with hrtimer based
broadcast mode (commit 5d1638acb9f6 ("tick: Introduce hrtimer based
broadcast")). The race is explained below.

Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before
it is taken down.

CPU0					CPU1

cpu_down()				take_cpu_down()
					disable_interrupts()

cpu_die()

 while(CPU1 != CPU_DEAD) {
  msleep(100);
   switch_to_idle();
    stop_cpu_timer();
     schedule_broadcast();
 }

tick_cleanup_cpu_dead()
	take_over_broadcast()

So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer
anymore, so CPU0 will be stuck forever.

Fix this by explicitly taking over broadcast duty before cpu_die().
This is a temporary workaround. What we really want is a callback in the
clockevent device which allows us to do that from the dying CPU by
pushing the hrtimer onto a different cpu. That might involve an IPI and
is definitely more complex than this immediate fix.

Fixes:
http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
[Changelog drawn from: https://lkml.org/lkml/2015/2/16/213]
---
Change from V1: https://lkml.org/lkml/2015/2/26/11
1. Decoupled this fix from the kernel/time cleanup patches. V1 had a
failure related to the cleanup which still needs to be fixed. Since this
bug fix is independent of that and needs to go in quickly, it is being
posted separately to be merged.

 include/linux/tick.h         |   10 +++++++---
 kernel/cpu.c                 |    2 ++
 kernel/time/tick-broadcast.c |   19 +++++++++++--------
 3 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/include/linux/tick.h b/include/linux/tick.h
index 9c085dc..3069256 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -94,14 +94,18 @@ extern void tick_cancel_sched_timer(int cpu);
 static inline void tick_cancel_sched_timer(int cpu) { }
 # endif
 
-# ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
+# if defined CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 extern struct tick_device *tick_get_broadcast_device(void);
 extern struct cpumask *tick_get_broadcast_mask(void);
 
-#  ifdef CONFIG_TICK_ONESHOT
+#  if defined CONFIG_TICK_ONESHOT
 extern struct cpumask *tick_get_broadcast_oneshot_mask(void);
+extern void tick_takeover(int deadcpu);
+# else
+static inline void tick_takeover(int deadcpu) {}
 #  endif
-
+# else
+static inline void tick_takeover(int deadcpu) {}
 # endif /* BROADCAST */
 
 # ifdef CONFIG_TICK_ONESHOT
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 1972b16..f9ca351 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -20,6 +20,7 @@
 #include <linux/gfp.h>
 #include <linux/suspend.h>
 #include <linux/lockdep.h>
+#include <linux/tick.h>
 #include <trace/events/power.h>
 
 #include "smpboot.h"
@@ -411,6 +412,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
 	while (!idle_cpu(cpu))
 		cpu_relax();
 
+	tick_takeover(cpu);
 	/* This actually kills the CPU. */
 	__cpu_die(cpu);
 
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 066f0ec..0fd6634 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -669,14 +669,19 @@ static void broadcast_shutdown_local(struct clock_event_device *bc,
 	clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
 }
 
-static void broadcast_move_bc(int deadcpu)
+void tick_takeover(int deadcpu)
 {
-	struct clock_event_device *bc = tick_broadcast_device.evtdev;
+	struct clock_event_device *bc;
+	unsigned long flags;
 
-	if (!bc || !broadcast_needs_cpu(bc, deadcpu))
-		return;
-	/* This moves the broadcast assignment to this cpu */
-	clockevents_program_event(bc, bc->next_event, 1);
+	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
+	bc = tick_broadcast_device.evtdev;
+
+	if (bc && broadcast_needs_cpu(bc, deadcpu)) {
+		/* This moves the broadcast assignment to this cpu */
+		clockevents_program_event(bc, bc->next_event, 1);
+	}
+	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
 
 /*
@@ -913,8 +918,6 @@ void tick_shutdown_broadcast_oneshot(unsigned int *cpup)
 	cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);
 	cpumask_clear_cpu(cpu, tick_broadcast_force_mask);
 
-	broadcast_move_bc(cpu);
-
 	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
  2015-03-30  9:29 [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting Preeti U Murthy
@ 2015-03-31  3:11 ` Nicolas Pitre
  2015-04-02 10:42 ` Ingo Molnar
  2015-04-02 14:30 ` [tip:timers/core] clockevents: Fix cpu_down() " tip-bot for Preeti U Murthy
  2 siblings, 0 replies; 14+ messages in thread
From: Nicolas Pitre @ 2015-03-31  3:11 UTC (permalink / raw)
  To: Preeti U Murthy; +Cc: peterz, mpe, tglx, rjw, mingo, linuxppc-dev, linux-kernel

On Mon, 30 Mar 2015, Preeti U Murthy wrote:

> It was found during a hotplug stress test on POWER that the machine
> either hit softlockups or rcu_sched stall warnings.  The issue was
> traced to commit 7cba160ad789 ("powernv/cpuidle: Redesign idle states
> management"), which exposed the cpu_down() race with hrtimer based
> broadcast mode (commit 5d1638acb9f6 ("tick: Introduce hrtimer based
> broadcast")). The race is explained below.
> 
> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before
> it is taken down.
> 
> CPU0					CPU1
> 
> cpu_down()				take_cpu_down()
> 					disable_interrupts()
> 
> cpu_die()
> 
>  while(CPU1 != CPU_DEAD) {
>   msleep(100);
>    switch_to_idle();
>     stop_cpu_timer();
>      schedule_broadcast();
>  }
> 
> tick_cleanup_cpu_dead()
> 	take_over_broadcast()
> 
> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer
> anymore, so CPU0 will be stuck forever.
> 
> Fix this by explicitly taking over broadcast duty before cpu_die().
> This is a temporary workaround. What we really want is a callback in the
> clockevent device which allows us to do that from the dying CPU by
> pushing the hrtimer onto a different cpu. That might involve an IPI and
> is definitely more complex than this immediate fix.
> 
> Fixes:
> http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
> [Changelog drawn from: https://lkml.org/lkml/2015/2/16/213]

The lock-up I was experiencing with v1 of this patch is no longer 
reproducible with this one.

Tested-by: Nicolas Pitre <nico@linaro.org>

> ---
> Change from V1: https://lkml.org/lkml/2015/2/26/11
> 1. Decoupled this fix from the kernel/time cleanup patches. V1 had a
> failure related to the cleanup which still needs to be fixed. Since this
> bug fix is independent of that and needs to go in quickly, it is being
> posted separately to be merged.
> 
>  include/linux/tick.h         |   10 +++++++---
>  kernel/cpu.c                 |    2 ++
>  kernel/time/tick-broadcast.c |   19 +++++++++++--------
>  3 files changed, 20 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/tick.h b/include/linux/tick.h
> index 9c085dc..3069256 100644
> --- a/include/linux/tick.h
> +++ b/include/linux/tick.h
> @@ -94,14 +94,18 @@ extern void tick_cancel_sched_timer(int cpu);
>  static inline void tick_cancel_sched_timer(int cpu) { }
>  # endif
>  
> -# ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
> +# if defined CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
>  extern struct tick_device *tick_get_broadcast_device(void);
>  extern struct cpumask *tick_get_broadcast_mask(void);
>  
> -#  ifdef CONFIG_TICK_ONESHOT
> +#  if defined CONFIG_TICK_ONESHOT
>  extern struct cpumask *tick_get_broadcast_oneshot_mask(void);
> +extern void tick_takeover(int deadcpu);
> +# else
> +static inline void tick_takeover(int deadcpu) {}
>  #  endif
> -
> +# else
> +static inline void tick_takeover(int deadcpu) {}
>  # endif /* BROADCAST */
>  
>  # ifdef CONFIG_TICK_ONESHOT
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 1972b16..f9ca351 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -20,6 +20,7 @@
>  #include <linux/gfp.h>
>  #include <linux/suspend.h>
>  #include <linux/lockdep.h>
> +#include <linux/tick.h>
>  #include <trace/events/power.h>
>  
>  #include "smpboot.h"
> @@ -411,6 +412,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
>  	while (!idle_cpu(cpu))
>  		cpu_relax();
>  
> +	tick_takeover(cpu);
>  	/* This actually kills the CPU. */
>  	__cpu_die(cpu);
>  
> diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
> index 066f0ec..0fd6634 100644
> --- a/kernel/time/tick-broadcast.c
> +++ b/kernel/time/tick-broadcast.c
> @@ -669,14 +669,19 @@ static void broadcast_shutdown_local(struct clock_event_device *bc,
>  	clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
>  }
>  
> -static void broadcast_move_bc(int deadcpu)
> +void tick_takeover(int deadcpu)
>  {
> -	struct clock_event_device *bc = tick_broadcast_device.evtdev;
> +	struct clock_event_device *bc;
> +	unsigned long flags;
>  
> -	if (!bc || !broadcast_needs_cpu(bc, deadcpu))
> -		return;
> -	/* This moves the broadcast assignment to this cpu */
> -	clockevents_program_event(bc, bc->next_event, 1);
> +	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
> +	bc = tick_broadcast_device.evtdev;
> +
> +	if (bc && broadcast_needs_cpu(bc, deadcpu)) {
> +		/* This moves the broadcast assignment to this cpu */
> +		clockevents_program_event(bc, bc->next_event, 1);
> +	}
> +	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
>  }
>  
>  /*
> @@ -913,8 +918,6 @@ void tick_shutdown_broadcast_oneshot(unsigned int *cpup)
>  	cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);
>  	cpumask_clear_cpu(cpu, tick_broadcast_force_mask);
>  
> -	broadcast_move_bc(cpu);
> -
>  	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
>  }
>  
> 
> 


* Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
  2015-03-30  9:29 [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting Preeti U Murthy
  2015-03-31  3:11 ` Nicolas Pitre
@ 2015-04-02 10:42 ` Ingo Molnar
  2015-04-02 11:25   ` Preeti U Murthy
  2015-04-02 12:02   ` Peter Zijlstra
  2015-04-02 14:30 ` [tip:timers/core] clockevents: Fix cpu_down() " tip-bot for Preeti U Murthy
  2 siblings, 2 replies; 14+ messages in thread
From: Ingo Molnar @ 2015-04-02 10:42 UTC (permalink / raw)
  To: Preeti U Murthy
  Cc: peterz, mpe, tglx, rjw, nicolas.pitre, linuxppc-dev, linux-kernel


* Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:

> It was found during a hotplug stress test on POWER that the machine
> either hit softlockups or rcu_sched stall warnings.  The issue was
> traced to commit 7cba160ad789 ("powernv/cpuidle: Redesign idle states
> management"), which exposed the cpu_down() race with hrtimer based
> broadcast mode (commit 5d1638acb9f6 ("tick: Introduce hrtimer based
> broadcast")). The race is explained below.
> 
> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before
> it is taken down.
> 
> CPU0					CPU1
> 
> cpu_down()				take_cpu_down()
> 					disable_interrupts()
> 
> cpu_die()
> 
>  while(CPU1 != CPU_DEAD) {
>   msleep(100);
>    switch_to_idle();
>     stop_cpu_timer();
>      schedule_broadcast();
>  }
> 
> tick_cleanup_cpu_dead()
> 	take_over_broadcast()
> 
> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer
> anymore, so CPU0 will be stuck forever.
> 
> Fix this by explicitly taking over broadcast duty before cpu_die().
> This is a temporary workaround. What we really want is a callback in the
> clockevent device which allows us to do that from the dying CPU by
> pushing the hrtimer onto a different cpu. That might involve an IPI and
> is definitely more complex than this immediate fix.

So why not use a suitable CPU_DOWN* notifier for this, instead of open 
coding it all into a random place in the hotplug machinery?

Also, I improved the changelog (attached below), but decided against 
applying it until these questions are cleared - please use that for 
future versions of this patch.

Thanks,

	Ingo

===================>
From 413fbf5193b330c5f478ef7aaeaaee08907a993e Mon Sep 17 00:00:00 2001
From: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Date: Mon, 30 Mar 2015 14:59:19 +0530
Subject: [PATCH] clockevents: Fix cpu_down() race for hrtimer based broadcasting

It was found when doing a hotplug stress test on POWER, that the
machine either hit softlockups or rcu_sched stall warnings.  The
issue was traced to commit:

  7cba160ad789 ("powernv/cpuidle: Redesign idle states management")

which exposed the cpu_down() race with hrtimer based broadcast mode:

  5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")

The race is the following:

Assume CPU1 is the CPU which holds the hrtimer broadcasting duty
before it is taken down.

	CPU0					CPU1

	cpu_down()				take_cpu_down()
						disable_interrupts()

	cpu_die()

	while (CPU1 != CPU_DEAD) {
		msleep(100);
		switch_to_idle();
		stop_cpu_timer();
		schedule_broadcast();
	}

	tick_cleanup_cpu_dead()
		take_over_broadcast()

So after CPU1 disabled interrupts it cannot handle the broadcast
hrtimer anymore, so CPU0 will be stuck forever.

Fix this by explicitly taking over broadcast duty before cpu_die().

This is a temporary workaround. What we really want is a callback
in the clockevent device which allows us to do that from the dying
CPU by pushing the hrtimer onto a different cpu. That might involve
an IPI and is definitely more complex than this immediate fix.

Changelog was picked up from:

    https://lkml.org/lkml/2015/2/16/213

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au
Cc: nicolas.pitre@linaro.org
Cc: peterz@infradead.org
Cc: rjw@rjwysocki.net
Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html


* Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
  2015-04-02 10:42 ` Ingo Molnar
@ 2015-04-02 11:25   ` Preeti U Murthy
  2015-04-02 11:31     ` Ingo Molnar
  2015-04-02 12:02   ` Peter Zijlstra
  1 sibling, 1 reply; 14+ messages in thread
From: Preeti U Murthy @ 2015-04-02 11:25 UTC (permalink / raw)
  To: Ingo Molnar, peterz
  Cc: mpe, tglx, rjw, nicolas.pitre, linuxppc-dev, linux-kernel

On 04/02/2015 04:12 PM, Ingo Molnar wrote:
> 
> * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
> 
>> It was found during a hotplug stress test on POWER that the machine
>> either hit softlockups or rcu_sched stall warnings.  The issue was
>> traced to commit 7cba160ad789 ("powernv/cpuidle: Redesign idle states
>> management"), which exposed the cpu_down() race with hrtimer based
>> broadcast mode (commit 5d1638acb9f6 ("tick: Introduce hrtimer based
>> broadcast")). The race is explained below.
>>
>> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before
>> it is taken down.
>>
>> CPU0					CPU1
>>
>> cpu_down()				take_cpu_down()
>> 					disable_interrupts()
>>
>> cpu_die()
>>
>>  while(CPU1 != CPU_DEAD) {
>>   msleep(100);
>>    switch_to_idle();
>>     stop_cpu_timer();
>>      schedule_broadcast();
>>  }
>>
>> tick_cleanup_cpu_dead()
>> 	take_over_broadcast()
>>
>> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer
>> anymore, so CPU0 will be stuck forever.
>>
>> Fix this by explicitly taking over broadcast duty before cpu_die().
>> This is a temporary workaround. What we really want is a callback in the
>> clockevent device which allows us to do that from the dying CPU by
>> pushing the hrtimer onto a different cpu. That might involve an IPI and
>> is definitely more complex than this immediate fix.
> 
> So why not use a suitable CPU_DOWN* notifier for this, instead of open 
> coding it all into a random place in the hotplug machinery?

This is because each of them is unsuitable for a reason:

1. CPU_DOWN_PREPARE stage allows for a fail. The cpu in question may not
successfully go down. So we may pull the hrtimer unnecessarily.

2. CPU_DYING notifiers are run on the cpu that is going down. So the
alternative would be to IPI an online cpu to take up the broadcast duty.

3. CPU_DEAD and CPU_POST_DEAD stages both have the drawback described in
the changelog.

I hope I got your question right.

Regards
Preeti U Murthy
> 
> Also, I improved the changelog (attached below), but decided against 
> applying it until these questions are cleared - please use that for 
> future versions of this patch.
> 
> Thanks,
> 
> 	Ingo
> 
> ===================>
> From 413fbf5193b330c5f478ef7aaeaaee08907a993e Mon Sep 17 00:00:00 2001
> From: Preeti U Murthy <preeti@linux.vnet.ibm.com>
> Date: Mon, 30 Mar 2015 14:59:19 +0530
> Subject: [PATCH] clockevents: Fix cpu_down() race for hrtimer based broadcasting
> 
> It was found when doing a hotplug stress test on POWER, that the
> machine either hit softlockups or rcu_sched stall warnings.  The
> issue was traced to commit:
> 
>   7cba160ad789 ("powernv/cpuidle: Redesign idle states management")
> 
> which exposed the cpu_down() race with hrtimer based broadcast mode:
> 
>   5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")
> 
> The race is the following:
> 
> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty
> before it is taken down.
> 
> 	CPU0					CPU1
> 
> 	cpu_down()				take_cpu_down()
> 						disable_interrupts()
> 
> 	cpu_die()
> 
> 	while (CPU1 != CPU_DEAD) {
> 		msleep(100);
> 		switch_to_idle();
> 		stop_cpu_timer();
> 		schedule_broadcast();
> 	}
> 
> 	tick_cleanup_cpu_dead()
> 		take_over_broadcast()
> 
> So after CPU1 disabled interrupts it cannot handle the broadcast
> hrtimer anymore, so CPU0 will be stuck forever.
> 
> Fix this by explicitly taking over broadcast duty before cpu_die().
> 
> This is a temporary workaround. What we really want is a callback
> in the clockevent device which allows us to do that from the dying
> CPU by pushing the hrtimer onto a different cpu. That might involve
> an IPI and is definitely more complex than this immediate fix.
> 
> Changelog was picked up from:
> 
>     https://lkml.org/lkml/2015/2/16/213
> 
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Nicolas Pitre <nico@linaro.org>
> Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: mpe@ellerman.id.au
> Cc: nicolas.pitre@linaro.org
> Cc: peterz@infradead.org
> Cc: rjw@rjwysocki.net
> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
> 



* Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
  2015-04-02 11:25   ` Preeti U Murthy
@ 2015-04-02 11:31     ` Ingo Molnar
  2015-04-02 11:44       ` Preeti U Murthy
  0 siblings, 1 reply; 14+ messages in thread
From: Ingo Molnar @ 2015-04-02 11:31 UTC (permalink / raw)
  To: Preeti U Murthy
  Cc: peterz, mpe, tglx, rjw, nicolas.pitre, linuxppc-dev, linux-kernel


* Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:

> On 04/02/2015 04:12 PM, Ingo Molnar wrote:
> > 
> > * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
> > 
> >> It was found during a hotplug stress test on POWER that the machine
> >> either hit softlockups or rcu_sched stall warnings.  The issue was
> >> traced to commit 7cba160ad789 ("powernv/cpuidle: Redesign idle states
> >> management"), which exposed the cpu_down() race with hrtimer based
> >> broadcast mode (commit 5d1638acb9f6 ("tick: Introduce hrtimer based
> >> broadcast")). The race is explained below.
> >>
> >> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before
> >> it is taken down.
> >>
> >> CPU0					CPU1
> >>
> >> cpu_down()				take_cpu_down()
> >> 					disable_interrupts()
> >>
> >> cpu_die()
> >>
> >>  while(CPU1 != CPU_DEAD) {
> >>   msleep(100);
> >>    switch_to_idle();
> >>     stop_cpu_timer();
> >>      schedule_broadcast();
> >>  }
> >>
> >> tick_cleanup_cpu_dead()
> >> 	take_over_broadcast()
> >>
> >> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer
> >> anymore, so CPU0 will be stuck forever.
> >>
> >> Fix this by explicitly taking over broadcast duty before cpu_die().
> >> This is a temporary workaround. What we really want is a callback in the
> >> clockevent device which allows us to do that from the dying CPU by
> >> pushing the hrtimer onto a different cpu. That might involve an IPI and
> >> is definitely more complex than this immediate fix.
> > 
> > So why not use a suitable CPU_DOWN* notifier for this, instead of open 
> > coding it all into a random place in the hotplug machinery?
> 
> This is because each of them is unsuitable for a reason:
> 
> 1. CPU_DOWN_PREPARE stage allows for a fail. The cpu in question may not
> successfully go down. So we may pull the hrtimer unnecessarily.

Failure is really rare - and as long as things will continue to work 
afterwards it's not a problem to pull the hrtimer to this CPU. Right?

Thanks,

	Ingo


* Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
  2015-04-02 11:31     ` Ingo Molnar
@ 2015-04-02 11:44       ` Preeti U Murthy
  0 siblings, 0 replies; 14+ messages in thread
From: Preeti U Murthy @ 2015-04-02 11:44 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: nicolas.pitre, peterz, rjw, linux-kernel, tglx, linuxppc-dev

On 04/02/2015 05:01 PM, Ingo Molnar wrote:
> 
> * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
> 
>> On 04/02/2015 04:12 PM, Ingo Molnar wrote:
>>>
>>> * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
>>>
>>>> It was found during a hotplug stress test on POWER that the machine
>>>> either hit softlockups or rcu_sched stall warnings.  The issue was
>>>> traced to commit 7cba160ad789 ("powernv/cpuidle: Redesign idle states
>>>> management"), which exposed the cpu_down() race with hrtimer based
>>>> broadcast mode (commit 5d1638acb9f6 ("tick: Introduce hrtimer based
>>>> broadcast")). The race is explained below.
>>>>
>>>> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before
>>>> it is taken down.
>>>>
>>>> CPU0					CPU1
>>>>
>>>> cpu_down()				take_cpu_down()
>>>> 					disable_interrupts()
>>>>
>>>> cpu_die()
>>>>
>>>>  while(CPU1 != CPU_DEAD) {
>>>>   msleep(100);
>>>>    switch_to_idle();
>>>>     stop_cpu_timer();
>>>>      schedule_broadcast();
>>>>  }
>>>>
>>>> tick_cleanup_cpu_dead()
>>>> 	take_over_broadcast()
>>>>
>>>> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer
>>>> anymore, so CPU0 will be stuck forever.
>>>>
>>>> Fix this by explicitly taking over broadcast duty before cpu_die().
>>>> This is a temporary workaround. What we really want is a callback in the
>>>> clockevent device which allows us to do that from the dying CPU by
>>>> pushing the hrtimer onto a different cpu. That might involve an IPI and
>>>> is definitely more complex than this immediate fix.
>>>
>>> So why not use a suitable CPU_DOWN* notifier for this, instead of open 
>>> coding it all into a random place in the hotplug machinery?
>>
>> This is because each of them is unsuitable for a reason:
>>
>> 1. CPU_DOWN_PREPARE stage allows for a fail. The cpu in question may not
>> successfully go down. So we may pull the hrtimer unnecessarily.
> 
> Failure is really rare - and as long as things will continue to work 
> afterwards it's not a problem to pull the hrtimer to this CPU. Right?

We would need to move this function into the clockevents_notify() call
under CPU_DOWN_PREPARE. But I see that tglx wanted to get rid of the
clockevents_notify() function, because it is more of a multiplex call
than a notification mechanism.

Regards
Preeti U Murthy
> 
> Thanks,
> 
> 	Ingo
> 



* Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
  2015-04-02 10:42 ` Ingo Molnar
  2015-04-02 11:25   ` Preeti U Murthy
@ 2015-04-02 12:02   ` Peter Zijlstra
  2015-04-02 12:12     ` Ingo Molnar
  1 sibling, 1 reply; 14+ messages in thread
From: Peter Zijlstra @ 2015-04-02 12:02 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Preeti U Murthy, mpe, tglx, rjw, nicolas.pitre, linuxppc-dev,
	linux-kernel

On Thu, Apr 02, 2015 at 12:42:27PM +0200, Ingo Molnar wrote:
> So why not use a suitable CPU_DOWN* notifier for this, instead of open 
> coding it all into a random place in the hotplug machinery?

Because notifiers are crap? ;-) It's entirely impossible to figure out
what's happening to core code in hotplug. You need to go chase down
random, order-dependent notifier things.

I'm planning on taking out many of the core hotplug notifiers and hard
coding their callbacks into the hotplug code.

That way at least it's clear wtf happens when.

> Also, I improved the changelog (attached below), but decided against 
> applying it until these questions are cleared - please use that for 
> future versions of this patch.


> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html

You forgot to fix the Fixes line ;-)

My copy has:

 Fixes: 5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")


* Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
  2015-04-02 12:02   ` Peter Zijlstra
@ 2015-04-02 12:12     ` Ingo Molnar
  2015-04-02 12:44       ` Preeti U Murthy
  2015-04-02 12:58       ` Peter Zijlstra
  0 siblings, 2 replies; 14+ messages in thread
From: Ingo Molnar @ 2015-04-02 12:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Preeti U Murthy, mpe, tglx, rjw, nicolas.pitre, linuxppc-dev,
	linux-kernel


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Thu, Apr 02, 2015 at 12:42:27PM +0200, Ingo Molnar wrote:
> > So why not use a suitable CPU_DOWN* notifier for this, instead of open 
> > coding it all into a random place in the hotplug machinery?
> 
> Because notifiers are crap? ;-) [...]

No doubt - but I didn't feel this poorly named random call into the 
hotplug code, with no comments, was any better.

> [...] It's entirely impossible to figure out what's happening to core 
> code in hotplug. You need to go chase down random, order-dependent 
> notifier things.
> 
> I'm planning on taking out many of the core hotplug notifiers and 
> hard coding their callbacks into the hotplug code.

That's very welcome news - but please also lets put in place a proper 
namespace for all these callbacks, to make them easy to find and 
change: hotplug_cpu__*() or so, which in this case would turn into 
hotplug_cpu__tick_pull() or so?

> That way at least it's clear wtf happens when.

Okay. I'll resurrect the fix with a hotplug_cpu__tick_pull() name - 
agreed?

> > Also, I improved the changelog (attached below), but decided 
> > against applying it until these questions are cleared - please use 
> > that for future versions of this patch.
> 
> 
> > Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
> 
> You forgot to fix the Fixes line ;-)
> 
> My copy has:
> 
>  Fixes: 5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")

Hm, not sure how that got lost - my git-log of the patch ported on top 
of timers/core still has it:

==========================>
From 413fbf5193b330c5f478ef7aaeaaee08907a993e Mon Sep 17 00:00:00 2001
From: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Date: Mon, 30 Mar 2015 14:59:19 +0530
Subject: [PATCH] clockevents: Fix cpu_down() race for hrtimer based broadcasting

It was found when doing a hotplug stress test on POWER, that the
machine either hit softlockups or rcu_sched stall warnings.  The
issue was traced to commit:

  7cba160ad789 ("powernv/cpuidle: Redesign idle states management")

which exposed the cpu_down() race with hrtimer based broadcast mode:

  5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")

The race is the following:

Assume CPU1 is the CPU which holds the hrtimer broadcasting duty
before it is taken down.

	CPU0					CPU1

	cpu_down()				take_cpu_down()
						disable_interrupts()

	cpu_die()

	while (CPU1 != CPU_DEAD) {
		msleep(100);
		switch_to_idle();
		stop_cpu_timer();
		schedule_broadcast();
	}

	tick_cleanup_cpu_dead()
		take_over_broadcast()

So after CPU1 disabled interrupts it cannot handle the broadcast
hrtimer anymore, so CPU0 will be stuck forever.

Fix this by explicitly taking over broadcast duty before cpu_die().

This is a temporary workaround. What we really want is a callback
in the clockevent device which allows us to do that from the dying
CPU by pushing the hrtimer onto a different cpu. That might involve
an IPI and is definitely more complex than this immediate fix.

Changelog was picked up from:

    https://lkml.org/lkml/2015/2/16/213

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au
Cc: nicolas.pitre@linaro.org
Cc: peterz@infradead.org
Cc: rjw@rjwysocki.net
Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
Link: http://lkml.kernel.org/r/20150330092410.24979.59887.stgit@preeti.in.ibm.com
[ Merged it to the latest timer tree, tidied up the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/cpu.c                 |  2 ++
 kernel/time/tick-broadcast.c | 19 +++++++++++--------
 kernel/time/tick-internal.h  |  2 ++
 3 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 1972b161c61e..f9ca351a404a 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -20,6 +20,7 @@
 #include <linux/gfp.h>
 #include <linux/suspend.h>
 #include <linux/lockdep.h>
+#include <linux/tick.h>
 #include <trace/events/power.h>
 
 #include "smpboot.h"
@@ -411,6 +412,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
 	while (!idle_cpu(cpu))
 		cpu_relax();
 
+	tick_takeover(cpu);
 	/* This actually kills the CPU. */
 	__cpu_die(cpu);
 
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 19cfb381faa9..81174cd9a29c 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -680,14 +680,19 @@ static void broadcast_shutdown_local(struct clock_event_device *bc,
 	clockevents_set_state(dev, CLOCK_EVT_STATE_SHUTDOWN);
 }
 
-static void broadcast_move_bc(int deadcpu)
+void tick_takeover(int deadcpu)
 {
-	struct clock_event_device *bc = tick_broadcast_device.evtdev;
+	struct clock_event_device *bc;
+	unsigned long flags;
 
-	if (!bc || !broadcast_needs_cpu(bc, deadcpu))
-		return;
-	/* This moves the broadcast assignment to this cpu */
-	clockevents_program_event(bc, bc->next_event, 1);
+	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
+	bc = tick_broadcast_device.evtdev;
+
+	if (bc && broadcast_needs_cpu(bc, deadcpu)) {
+		/* This moves the broadcast assignment to this CPU: */
+		clockevents_program_event(bc, bc->next_event, 1);
+	}
+	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
 
 /*
@@ -924,8 +929,6 @@ void tick_shutdown_broadcast_oneshot(unsigned int *cpup)
 	cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);
 	cpumask_clear_cpu(cpu, tick_broadcast_force_mask);
 
-	broadcast_move_bc(cpu);
-
 	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
 
diff --git a/kernel/time/tick-internal.h b/kernel/time/tick-internal.h
index b6ba0a44e740..d0794a21ab44 100644
--- a/kernel/time/tick-internal.h
+++ b/kernel/time/tick-internal.h
@@ -126,6 +126,7 @@ extern int tick_broadcast_oneshot_active(void);
 extern void tick_check_oneshot_broadcast_this_cpu(void);
 bool tick_broadcast_oneshot_available(void);
 extern struct cpumask *tick_get_broadcast_oneshot_mask(void);
+extern void tick_takeover(int deadcpu);
 #else /* !(BROADCAST && ONESHOT): */
 static inline void tick_broadcast_setup_oneshot(struct clock_event_device *bc) { BUG(); }
 static inline int tick_broadcast_oneshot_control(unsigned long reason) { return 0; }
@@ -134,6 +135,7 @@ static inline void tick_shutdown_broadcast_oneshot(unsigned int *cpup) { }
 static inline int tick_broadcast_oneshot_active(void) { return 0; }
 static inline void tick_check_oneshot_broadcast_this_cpu(void) { }
 static inline bool tick_broadcast_oneshot_available(void) { return tick_oneshot_possible(); }
+static inline void tick_takeover(int deadcpu) { }
 #endif /* !(BROADCAST && ONESHOT) */
 
 /* NO_HZ_FULL internal */


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
  2015-04-02 12:12     ` Ingo Molnar
@ 2015-04-02 12:44       ` Preeti U Murthy
  2015-04-02 12:58       ` Peter Zijlstra
  1 sibling, 0 replies; 14+ messages in thread
From: Preeti U Murthy @ 2015-04-02 12:44 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: nicolas.pitre, rjw, linux-kernel, tglx, linuxppc-dev

On 04/02/2015 05:42 PM, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
>> On Thu, Apr 02, 2015 at 12:42:27PM +0200, Ingo Molnar wrote:
>>> So why not use a suitable CPU_DOWN* notifier for this, instead of open 
>>> coding it all into a random place in the hotplug machinery?
>>
>> Because notifiers are crap? ;-) [...]
> 
> No doubt - but I didn't feel this poorly named random call into the 
> hotplug code, with no comments, was any better.
> 
>> [...] It's entirely impossible to figure out what's happening to core 
>> code in hotplug. You need to go chase down randomly ordered notifier 
>> things.
>>
>> I'm planning on taking out many of the core hotplug notifiers and 
>> hard coding their callbacks into the hotplug code.
> 
> That's very welcome news - but please also lets put in place a proper 
> namespace for all these callbacks, to make them easy to find and 
> change: hotplug_cpu__*() or so, which in this case would turn into 
> hotplug_cpu__tick_pull() or so?
> 
>> That way at least it's clear wtf happens when.
> 
> Okay. I'll resurrect the fix with a hotplug_cpu__tick_pull() name - 
> agreed?

Sounds good to me. This needs to be marked for stable as well.

Regards
Preeti U Murthy



* Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
  2015-04-02 12:12     ` Ingo Molnar
  2015-04-02 12:44       ` Preeti U Murthy
@ 2015-04-02 12:58       ` Peter Zijlstra
  1 sibling, 0 replies; 14+ messages in thread
From: Peter Zijlstra @ 2015-04-02 12:58 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Preeti U Murthy, mpe, tglx, rjw, nicolas.pitre, linuxppc-dev,
	linux-kernel

On Thu, Apr 02, 2015 at 02:12:47PM +0200, Ingo Molnar wrote:
> Okay. I'll resurrect the fix with a hotplug_cpu__tick_pull() name - 
> agreed?

Sure.

> > > Also, I improved the changelog (attached below), but decided 
> > > against applying it until these questions are cleared - please use 
> > > that for future versions of this patch.
> > 
> > 
> > > Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
> > 
> > You forgot to fix the Fixes line ;-)
> > 
> > My copy has:
> > 
> >  Fixes: 5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")
> 
> Hm, not sure how that got lost - my git-log of the patch ported on top 
> of timers/core still has it:


> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Nicolas Pitre <nico@linaro.org>
> Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: mpe@ellerman.id.au
> Cc: nicolas.pitre@linaro.org
> Cc: peterz@infradead.org
> Cc: rjw@rjwysocki.net
> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
> Link: http://lkml.kernel.org/r/20150330092410.24979.59887.stgit@preeti.in.ibm.com
> [ Merged it to the latest timer tree, tidied up the changelog. ]
> Signed-off-by: Ingo Molnar <mingo@kernel.org>

That doesn't have my Sob in it, so you took it from the list yourself,
not from my queue. I also (concurrently it seems) fixed up the Changelog
some :-)


* [tip:timers/core] clockevents: Fix cpu_down() race for hrtimer based broadcasting
  2015-03-30  9:29 [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting Preeti U Murthy
  2015-03-31  3:11 ` Nicolas Pitre
  2015-04-02 10:42 ` Ingo Molnar
@ 2015-04-02 14:30 ` tip-bot for Preeti U Murthy
  2015-04-03 10:38   ` Preeti U Murthy
  2 siblings, 1 reply; 14+ messages in thread
From: tip-bot for Preeti U Murthy @ 2015-04-02 14:30 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: preeti, nico, linux-kernel, hpa, mingo, tglx

Commit-ID:  345527b1edce8df719e0884500c76832a18211c3
Gitweb:     http://git.kernel.org/tip/345527b1edce8df719e0884500c76832a18211c3
Author:     Preeti U Murthy <preeti@linux.vnet.ibm.com>
AuthorDate: Mon, 30 Mar 2015 14:59:19 +0530
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 2 Apr 2015 14:25:39 +0200

clockevents: Fix cpu_down() race for hrtimer based broadcasting

It was found when doing a hotplug stress test on POWER, that the
machine either hit softlockups or rcu_sched stall warnings.  The
issue was traced to commit:

  7cba160ad789 ("powernv/cpuidle: Redesign idle states management")

which exposed the cpu_down() race with hrtimer based broadcast mode:

  5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")

The race is the following:

Assume CPU1 is the CPU which holds the hrtimer broadcasting duty
before it is taken down.

	CPU0					CPU1

	cpu_down()				take_cpu_down()
						disable_interrupts()

	cpu_die()

	while (CPU1 != CPU_DEAD) {
		msleep(100);
		switch_to_idle();
		stop_cpu_timer();
		schedule_broadcast();
	}

	tick_cleanup_cpu_dead()
		take_over_broadcast()

So after CPU1 disabled interrupts it cannot handle the broadcast
hrtimer anymore, so CPU0 will be stuck forever.

Fix this by explicitly taking over broadcast duty before cpu_die().

This is a temporary workaround. What we really want is a callback
in the clockevent device which allows us to do that from the dying
CPU by pushing the hrtimer onto a different cpu. That might involve
an IPI and is definitely more complex than this immediate fix.

Changelog was picked up from:

    https://lkml.org/lkml/2015/2/16/213

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au
Cc: nicolas.pitre@linaro.org
Cc: peterz@infradead.org
Cc: rjw@rjwysocki.net
Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
Link: http://lkml.kernel.org/r/20150330092410.24979.59887.stgit@preeti.in.ibm.com
[ Merged it to the latest timer tree, renamed the callback, tidied up the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/tick.h         |  6 ++++++
 kernel/cpu.c                 |  2 ++
 kernel/time/tick-broadcast.c | 19 +++++++++++--------
 3 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/include/linux/tick.h b/include/linux/tick.h
index 589868b..f9ff225 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -36,6 +36,12 @@ extern void tick_irq_enter(void);
 static inline void tick_irq_enter(void) { }
 #endif
 
+#if defined(CONFIG_GENERIC_CLOCKEVENTS_BROADCAST) && defined(CONFIG_TICK_ONESHOT)
+extern void hotplug_cpu__broadcast_tick_pull(int dead_cpu);
+#else
+static inline void hotplug_cpu__broadcast_tick_pull(int dead_cpu) { }
+#endif
+
 #ifdef CONFIG_NO_HZ_COMMON
 extern int tick_nohz_tick_stopped(void);
 extern void tick_nohz_idle_enter(void);
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 1972b16..af5db20 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -20,6 +20,7 @@
 #include <linux/gfp.h>
 #include <linux/suspend.h>
 #include <linux/lockdep.h>
+#include <linux/tick.h>
 #include <trace/events/power.h>
 
 #include "smpboot.h"
@@ -411,6 +412,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
 	while (!idle_cpu(cpu))
 		cpu_relax();
 
+	hotplug_cpu__broadcast_tick_pull(cpu);
 	/* This actually kills the CPU. */
 	__cpu_die(cpu);
 
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 19cfb38..f5e0fd5 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -680,14 +680,19 @@ static void broadcast_shutdown_local(struct clock_event_device *bc,
 	clockevents_set_state(dev, CLOCK_EVT_STATE_SHUTDOWN);
 }
 
-static void broadcast_move_bc(int deadcpu)
+void hotplug_cpu__broadcast_tick_pull(int deadcpu)
 {
-	struct clock_event_device *bc = tick_broadcast_device.evtdev;
+	struct clock_event_device *bc;
+	unsigned long flags;
 
-	if (!bc || !broadcast_needs_cpu(bc, deadcpu))
-		return;
-	/* This moves the broadcast assignment to this cpu */
-	clockevents_program_event(bc, bc->next_event, 1);
+	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
+	bc = tick_broadcast_device.evtdev;
+
+	if (bc && broadcast_needs_cpu(bc, deadcpu)) {
+		/* This moves the broadcast assignment to this CPU: */
+		clockevents_program_event(bc, bc->next_event, 1);
+	}
+	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
 
 /*
@@ -924,8 +929,6 @@ void tick_shutdown_broadcast_oneshot(unsigned int *cpup)
 	cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);
 	cpumask_clear_cpu(cpu, tick_broadcast_force_mask);
 
-	broadcast_move_bc(cpu);
-
 	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
 


* Re: [tip:timers/core] clockevents: Fix cpu_down()  race for hrtimer based broadcasting
  2015-04-02 14:30 ` [tip:timers/core] clockevents: Fix cpu_down() " tip-bot for Preeti U Murthy
@ 2015-04-03 10:38   ` Preeti U Murthy
  2015-04-03 10:50     ` Ingo Molnar
  0 siblings, 1 reply; 14+ messages in thread
From: Preeti U Murthy @ 2015-04-03 10:38 UTC (permalink / raw)
  To: nico, tglx, hpa, linux-kernel, mingo, linux-tip-commits

On 04/02/2015 08:00 PM, tip-bot for Preeti U Murthy wrote:
> Commit-ID:  345527b1edce8df719e0884500c76832a18211c3
> Gitweb:     http://git.kernel.org/tip/345527b1edce8df719e0884500c76832a18211c3
> Author:     Preeti U Murthy <preeti@linux.vnet.ibm.com>
> AuthorDate: Mon, 30 Mar 2015 14:59:19 +0530
> Committer:  Ingo Molnar <mingo@kernel.org>
> CommitDate: Thu, 2 Apr 2015 14:25:39 +0200
> 
> clockevents: Fix cpu_down() race for hrtimer based broadcasting
> 
> It was found when doing a hotplug stress test on POWER, that the
> machine either hit softlockups or rcu_sched stall warnings.  The
> issue was traced to commit:
> 
>   7cba160ad789 ("powernv/cpuidle: Redesign idle states management")
> 
> which exposed the cpu_down() race with hrtimer based broadcast mode:
> 
>   5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")
> 
> The race is the following:
> 
> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty
> before it is taken down.
> 
> 	CPU0					CPU1
> 
> 	cpu_down()				take_cpu_down()
> 						disable_interrupts()
> 
> 	cpu_die()
> 
> 	while (CPU1 != CPU_DEAD) {
> 		msleep(100);
> 		switch_to_idle();
> 		stop_cpu_timer();
> 		schedule_broadcast();
> 	}
> 
> 	tick_cleanup_cpu_dead()
> 		take_over_broadcast()
> 
> So after CPU1 disabled interrupts it cannot handle the broadcast
> hrtimer anymore, so CPU0 will be stuck forever.
> 
> Fix this by explicitly taking over broadcast duty before cpu_die().
> 
> This is a temporary workaround. What we really want is a callback
> in the clockevent device which allows us to do that from the dying
> CPU by pushing the hrtimer onto a different cpu. That might involve
> an IPI and is definitely more complex than this immediate fix.
> 
> Changelog was picked up from:
> 
>     https://lkml.org/lkml/2015/2/16/213
> 
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Nicolas Pitre <nico@linaro.org>
> Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: mpe@ellerman.id.au
> Cc: nicolas.pitre@linaro.org
> Cc: peterz@infradead.org
> Cc: rjw@rjwysocki.net
> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
> Link: http://lkml.kernel.org/r/20150330092410.24979.59887.stgit@preeti.in.ibm.com
> [ Merged it to the latest timer tree, renamed the callback, tidied up the changelog. ]
> Signed-off-by: Ingo Molnar <mingo@kernel.org>

Can this be marked for stable too please?

Regards
Preeti U Murthy



* Re: [tip:timers/core] clockevents: Fix cpu_down()  race for hrtimer based broadcasting
  2015-04-03 10:38   ` Preeti U Murthy
@ 2015-04-03 10:50     ` Ingo Molnar
  2015-04-06  4:28       ` Preeti U Murthy
  0 siblings, 1 reply; 14+ messages in thread
From: Ingo Molnar @ 2015-04-03 10:50 UTC (permalink / raw)
  To: Preeti U Murthy; +Cc: nico, tglx, hpa, linux-kernel, linux-tip-commits


* Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:

> On 04/02/2015 08:00 PM, tip-bot for Preeti U Murthy wrote:
> > Commit-ID:  345527b1edce8df719e0884500c76832a18211c3
> > Gitweb:     http://git.kernel.org/tip/345527b1edce8df719e0884500c76832a18211c3
> > [...]
> 
> Can this be marked for stable too please?

It can be forwarded to -stable once it has gone upstream in the merge 
window and has gone through some testing upstream as well. The 
breakage itself was introduced in the v3.19 merge window, so it's an 
older regression - and the fix is not simple either.

Thanks,

	Ingo


* Re: [tip:timers/core] clockevents: Fix cpu_down()  race for hrtimer based broadcasting
  2015-04-03 10:50     ` Ingo Molnar
@ 2015-04-06  4:28       ` Preeti U Murthy
  0 siblings, 0 replies; 14+ messages in thread
From: Preeti U Murthy @ 2015-04-06  4:28 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: nico, tglx, hpa, linux-kernel, linux-tip-commits

On 04/03/2015 04:20 PM, Ingo Molnar wrote:
> 
> * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
> 
>> On 04/02/2015 08:00 PM, tip-bot for Preeti U Murthy wrote:
>>> Commit-ID:  345527b1edce8df719e0884500c76832a18211c3
>>> Gitweb:     http://git.kernel.org/tip/345527b1edce8df719e0884500c76832a18211c3
>>> [...]
>>
>> Can this be marked for stable too please?
> 
> It can be forwarded to -stable once it has gone upstream in the merge 
> window and has gone through some testing upstream as well. The 
> breakage itself was introduced in the v3.19 merge window, so it's an 
> older regression - and the fix is not simple either.

Ok I see.

Thanks

Regards
Preeti U Murthy
> 
> Thanks,
> 
> 	Ingo
> 



end of thread (newest message: 2015-04-06  4:28 UTC)

Thread overview: 14+ messages
2015-03-30  9:29 [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting Preeti U Murthy
2015-03-31  3:11 ` Nicolas Pitre
2015-04-02 10:42 ` Ingo Molnar
2015-04-02 11:25   ` Preeti U Murthy
2015-04-02 11:31     ` Ingo Molnar
2015-04-02 11:44       ` Preeti U Murthy
2015-04-02 12:02   ` Peter Zijlstra
2015-04-02 12:12     ` Ingo Molnar
2015-04-02 12:44       ` Preeti U Murthy
2015-04-02 12:58       ` Peter Zijlstra
2015-04-02 14:30 ` [tip:timers/core] clockevents: Fix cpu_down() " tip-bot for Preeti U Murthy
2015-04-03 10:38   ` Preeti U Murthy
2015-04-03 10:50     ` Ingo Molnar
2015-04-06  4:28       ` Preeti U Murthy
