LKML Archive on lore.kernel.org
* [PATCH 0/4] kthread/smpboot: More fixes...
@ 2018-06-07 12:33 Peter Zijlstra
  2018-06-07 12:33 ` [PATCH 1/4] kthread, sched: Fix kthread_parkme() (again...) Peter Zijlstra
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-07 12:33 UTC (permalink / raw)
  To: mingo, oleg, gkohli
  Cc: tglx, mpe, bigeasy, linux-kernel, will.deacon, Peter Zijlstra (Intel)

These patches were tested with hotplug and /proc/sys/kernel/watchdog_cpumask
manipulations.

I used 'grep watchdog_fn /proc/timer_list | wc -l' to verify the expected
number of active timers.

Please have a look..

---
 include/linux/cpuhotplug.h |   1 +
 include/linux/kthread.h    |   1 -
 include/linux/nmi.h        |   5 ++
 include/linux/sched.h      |   2 +-
 include/linux/smpboot.h    |  15 +-----
 kernel/cpu.c               |   5 ++
 kernel/kthread.c           |  34 +++++++++---
 kernel/sched/core.c        |  31 ++++-------
 kernel/smpboot.c           |  54 ++-----------------
 kernel/watchdog.c          | 132 +++++++++++++++++++--------------------------
 10 files changed, 112 insertions(+), 168 deletions(-)

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/4] kthread, sched: Fix kthread_parkme() (again...)
  2018-06-07 12:33 [PATCH 0/4] kthread/smpboot: More fixes Peter Zijlstra
@ 2018-06-07 12:33 ` Peter Zijlstra
  2018-06-07 12:33 ` [PATCH 2/4] watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work Peter Zijlstra
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-07 12:33 UTC (permalink / raw)
  To: mingo, oleg, gkohli
  Cc: tglx, mpe, bigeasy, linux-kernel, will.deacon, Peter Zijlstra (Intel)

[-- Attachment #1: peterz-kthread-0.patch --]
[-- Type: text/plain, Size: 6134 bytes --]

Gaurav reports that commit:

   85f1abe0019f ("kthread, sched/wait: Fix kthread_parkme() completion issue")

isn't working for him, because of the following race:

> Controller Thread                               CPUHP Thread
> takedown_cpu
> kthread_park
> kthread_parkme
> set KTHREAD_SHOULD_PARK
>                                                 smpboot_thread_fn
>                                                 set task INTERRUPTIBLE
>
> wake_up_process
>  if (!(p->state & state))
>                goto out;
>                                                 kthread_parkme
>                                                 SET TASK_PARKED
>                                                 schedule
>                                                 raw_spin_lock(&rq->lock)
> ttwu_remote
> waiting for __task_rq_lock
>                                                 context_switch
>                                                 finish_lock_switch
>                                                 case TASK_PARKED:
>                                                 kthread_park_complete
>
> SET RUNNING

Furthermore, Oleg noticed that the whole scheduler TASK_PARKED
handling is buggered: unlike TASK_DEAD, which is handled with
preemption disabled, the current code can still complete early on
preemption :/

So basically revert that earlier fix and go with a variant of the
alternative mentioned in the commit. Promote TASK_PARKED to special
state to avoid the store-store issue on task->state leading to the
WARN in kthread_unpark() -> __kthread_bind().

But in addition, add wait_task_inactive() to kthread_park() to ensure
the task really is PARKED when we return from kthread_park(). This
avoids the whole "kthread still gets migrated" nonsense -- although it
would be really good to get this done differently.

Cc: Oleg Nesterov <oleg@redhat.com>
Reported-by: Gaurav Kohli <gkohli@codeaurora.org>
Fixes: 85f1abe0019f ("kthread, sched/wait: Fix kthread_parkme() completion issue")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/kthread.h |    1 -
 include/linux/sched.h   |    2 +-
 kernel/kthread.c        |   30 ++++++++++++++++++++++++------
 kernel/sched/core.c     |   31 +++++++++++--------------------
 4 files changed, 36 insertions(+), 28 deletions(-)

--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -62,7 +62,6 @@ void *kthread_probe_data(struct task_str
 int kthread_park(struct task_struct *k);
 void kthread_unpark(struct task_struct *k);
 void kthread_parkme(void);
-void kthread_park_complete(struct task_struct *k);
 
 int kthreadd(void *unused);
 extern struct task_struct *kthreadd_task;
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -117,7 +117,7 @@ struct task_group;
  * the comment with set_special_state().
  */
 #define is_special_task_state(state)				\
-	((state) & (__TASK_STOPPED | __TASK_TRACED | TASK_DEAD))
+	((state) & (__TASK_STOPPED | __TASK_TRACED | TASK_PARKED | TASK_DEAD))
 
 #define __set_current_state(state_value)			\
 	do {							\
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -177,9 +177,20 @@ void *kthread_probe_data(struct task_str
 static void __kthread_parkme(struct kthread *self)
 {
 	for (;;) {
-		set_current_state(TASK_PARKED);
+		/*
+		 * TASK_PARKED is a special state; we must serialize against
+		 * possible pending wakeups to avoid store-store collisions on
+		 * task->state.
+		 *
+		 * Such a collision might possibly result in the task state
+		 * changing from TASK_PARKED and us failing the
+		 * wait_task_inactive() in kthread_park().
+		 */
+		set_special_state(TASK_PARKED);
 		if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))
 			break;
+
+		complete_all(&self->parked);
 		schedule();
 	}
 	__set_current_state(TASK_RUNNING);
@@ -191,11 +202,6 @@ void kthread_parkme(void)
 }
 EXPORT_SYMBOL_GPL(kthread_parkme);
 
-void kthread_park_complete(struct task_struct *k)
-{
-	complete_all(&to_kthread(k)->parked);
-}
-
 static int kthread(void *_create)
 {
 	/* Copy data: it's on kthread's stack */
@@ -461,6 +467,9 @@ void kthread_unpark(struct task_struct *
 
 	reinit_completion(&kthread->parked);
 	clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
+	/*
+	 * __kthread_parkme() will either see !SHOULD_PARK or get the wakeup.
+	 */
 	wake_up_state(k, TASK_PARKED);
 }
 EXPORT_SYMBOL_GPL(kthread_unpark);
@@ -487,7 +496,16 @@ int kthread_park(struct task_struct *k)
 	set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
 	if (k != current) {
 		wake_up_process(k);
+		/*
+		 * Wait for __kthread_parkme() to complete(), this means we
+		 * _will_ have TASK_PARKED and are about to call schedule().
+		 */
 		wait_for_completion(&kthread->parked);
+		/*
+		 * Now wait for that schedule() to complete and the task to
+		 * get scheduled out.
+		 */
+		WARN_ON_ONCE(!wait_task_inactive(k, TASK_PARKED));
 	}
 
 	return 0;
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7,7 +7,6 @@
  */
 #include "sched.h"
 
-#include <linux/kthread.h>
 #include <linux/nospec.h>
 
 #include <asm/switch_to.h>
@@ -2701,28 +2700,20 @@ static struct rq *finish_task_switch(str
 		membarrier_mm_sync_core_before_usermode(mm);
 		mmdrop(mm);
 	}
-	if (unlikely(prev_state & (TASK_DEAD|TASK_PARKED))) {
-		switch (prev_state) {
-		case TASK_DEAD:
-			if (prev->sched_class->task_dead)
-				prev->sched_class->task_dead(prev);
-
-			/*
-			 * Remove function-return probe instances associated with this
-			 * task and put them back on the free list.
-			 */
-			kprobe_flush_task(prev);
+	if (unlikely(prev_state == TASK_DEAD)) {
+		if (prev->sched_class->task_dead)
+			prev->sched_class->task_dead(prev);
 
-			/* Task is done with its stack. */
-			put_task_stack(prev);
+		/*
+		 * Remove function-return probe instances associated with this
+		 * task and put them back on the free list.
+		 */
+		kprobe_flush_task(prev);
 
-			put_task_struct(prev);
-			break;
+		/* Task is done with its stack. */
+		put_task_stack(prev);
 
-		case TASK_PARKED:
-			kthread_park_complete(prev);
-			break;
-		}
+		put_task_struct(prev);
 	}
 
 	tick_nohz_task_switch();


* [PATCH 2/4] watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work
  2018-06-07 12:33 [PATCH 0/4] kthread/smpboot: More fixes Peter Zijlstra
  2018-06-07 12:33 ` [PATCH 1/4] kthread, sched: Fix kthread_parkme() (again...) Peter Zijlstra
@ 2018-06-07 12:33 ` Peter Zijlstra
  2018-06-07 14:24   ` Peter Zijlstra
  2018-06-07 12:33 ` [PATCH 3/4] smpboot: Remove cpumask from the API Peter Zijlstra
  2018-06-07 12:33 ` [PATCH 4/4] kthread: Simplify kthread_park() completion Peter Zijlstra
  3 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-07 12:33 UTC (permalink / raw)
  To: mingo, oleg, gkohli
  Cc: tglx, mpe, bigeasy, linux-kernel, will.deacon, Peter Zijlstra (Intel)

[-- Attachment #1: peterz-kthread-1.patch --]
[-- Type: text/plain, Size: 9560 bytes --]

Oleg suggested replacing the "watchdog/%u" threads with
cpu_stop_work. That removes one thread per CPU while at the same time
fixing softlockup vs SCHED_DEADLINE.

But more importantly, it does away with the single
smpboot_update_cpumask_percpu_thread() user, which allows
cleanups/shrinkage of the smpboot interface.

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/cpuhotplug.h |    1 
 include/linux/nmi.h        |    5 +
 kernel/cpu.c               |    5 +
 kernel/watchdog.c          |  132 +++++++++++++++++++--------------------------
 4 files changed, 67 insertions(+), 76 deletions(-)

--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -164,6 +164,7 @@ enum cpuhp_state {
 	CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE,
 	CPUHP_AP_PERF_POWERPC_CORE_IMC_ONLINE,
 	CPUHP_AP_PERF_POWERPC_THREAD_IMC_ONLINE,
+	CPUHP_AP_WATCHDOG_ONLINE,
 	CPUHP_AP_WORKQUEUE_ONLINE,
 	CPUHP_AP_RCUTREE_ONLINE,
 	CPUHP_AP_ONLINE_DYN,
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -33,10 +33,15 @@ extern int sysctl_hardlockup_all_cpu_bac
 #define sysctl_hardlockup_all_cpu_backtrace 0
 #endif /* !CONFIG_SMP */
 
+extern int lockup_detector_online_cpu(unsigned int cpu);
+extern int lockup_detector_offline_cpu(unsigned int cpu);
+
 #else /* CONFIG_LOCKUP_DETECTOR */
 static inline void lockup_detector_init(void) { }
 static inline void lockup_detector_soft_poweroff(void) { }
 static inline void lockup_detector_cleanup(void) { }
+#define lockup_detector_online_cpu	NULL
+#define lockup_detector_offline_cpu	NULL
 #endif /* !CONFIG_LOCKUP_DETECTOR */
 
 #ifdef CONFIG_SOFTLOCKUP_DETECTOR
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1344,6 +1344,11 @@ static struct cpuhp_step cpuhp_hp_states
 		.startup.single		= perf_event_init_cpu,
 		.teardown.single	= perf_event_exit_cpu,
 	},
+	[CPUHP_AP_WATCHDOG_ONLINE] = {
+		.name			= "lockup_detector:online",
+		.startup.single		= lockup_detector_online_cpu,
+		.teardown.single	= lockup_detector_offline_cpu,
+	},
 	[CPUHP_AP_WORKQUEUE_ONLINE] = {
 		.name			= "workqueue:online",
 		.startup.single		= workqueue_online_cpu,
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -18,18 +18,14 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/sysctl.h>
-#include <linux/smpboot.h>
-#include <linux/sched/rt.h>
-#include <uapi/linux/sched/types.h>
 #include <linux/tick.h>
-#include <linux/workqueue.h>
 #include <linux/sched/clock.h>
 #include <linux/sched/debug.h>
 #include <linux/sched/isolation.h>
+#include <linux/stop_machine.h>
 
 #include <asm/irq_regs.h>
 #include <linux/kvm_para.h>
-#include <linux/kthread.h>
 
 static DEFINE_MUTEX(watchdog_mutex);
 
@@ -173,7 +169,6 @@ static bool softlockup_threads_initializ
 static u64 __read_mostly sample_period;
 
 static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);
-static DEFINE_PER_CPU(struct task_struct *, softlockup_watchdog);
 static DEFINE_PER_CPU(struct hrtimer, watchdog_hrtimer);
 static DEFINE_PER_CPU(bool, softlockup_touch_sync);
 static DEFINE_PER_CPU(bool, soft_watchdog_warn);
@@ -335,6 +330,25 @@ static void watchdog_interrupt_count(voi
 	__this_cpu_inc(hrtimer_interrupts);
 }
 
+/*
+ * The watchdog thread function - touches the timestamp.
+ *
+ * It only runs once every sample_period seconds (4 seconds by
+ * default) to reset the softlockup timestamp. If this gets delayed
+ * for more than 2*watchdog_thresh seconds then the debug-printout
+ * triggers in watchdog_timer_fn().
+ */
+static int softlockup_fn(void *data)
+{
+	__this_cpu_write(soft_lockup_hrtimer_cnt,
+			 __this_cpu_read(hrtimer_interrupts));
+	__touch_watchdog();
+
+	return 0;
+}
+
+static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
+
 /* watchdog kicker functions */
 static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 {
@@ -350,7 +364,9 @@ static enum hrtimer_restart watchdog_tim
 	watchdog_interrupt_count();
 
 	/* kick the softlockup detector */
-	wake_up_process(__this_cpu_read(softlockup_watchdog));
+	stop_one_cpu_nowait(smp_processor_id(),
+			softlockup_fn, NULL,
+			this_cpu_ptr(&softlockup_stop_work));
 
 	/* .. and repeat */
 	hrtimer_forward_now(hrtimer, ns_to_ktime(sample_period));
@@ -448,17 +464,12 @@ static enum hrtimer_restart watchdog_tim
 	return HRTIMER_RESTART;
 }
 
-static void watchdog_set_prio(unsigned int policy, unsigned int prio)
-{
-	struct sched_param param = { .sched_priority = prio };
-
-	sched_setscheduler(current, policy, &param);
-}
-
 static void watchdog_enable(unsigned int cpu)
 {
 	struct hrtimer *hrtimer = this_cpu_ptr(&watchdog_hrtimer);
 
+	WARN_ON_ONCE(cpu != smp_processor_id());
+
 	/*
 	 * Start the timer first to prevent the NMI watchdog triggering
 	 * before the timer has a chance to fire.
@@ -466,22 +477,21 @@ static void watchdog_enable(unsigned int
 	hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
 	hrtimer->function = watchdog_timer_fn;
 	hrtimer_start(hrtimer, ns_to_ktime(sample_period),
-		      HRTIMER_MODE_REL_PINNED);
+			HRTIMER_MODE_REL_PINNED);
 
 	/* Initialize timestamp */
 	__touch_watchdog();
 	/* Enable the perf event */
 	if (watchdog_enabled & NMI_WATCHDOG_ENABLED)
 		watchdog_nmi_enable(cpu);
-
-	watchdog_set_prio(SCHED_FIFO, MAX_RT_PRIO - 1);
 }
 
 static void watchdog_disable(unsigned int cpu)
 {
 	struct hrtimer *hrtimer = this_cpu_ptr(&watchdog_hrtimer);
 
-	watchdog_set_prio(SCHED_NORMAL, 0);
+	WARN_ON_ONCE(cpu != smp_processor_id());
+
 	/*
 	 * Disable the perf event first. That prevents that a large delay
 	 * between disabling the timer and disabling the perf event causes
@@ -491,77 +501,60 @@ static void watchdog_disable(unsigned in
 	hrtimer_cancel(hrtimer);
 }
 
-static void watchdog_cleanup(unsigned int cpu, bool online)
+static int softlockup_stop_fn(void *data)
 {
-	watchdog_disable(cpu);
+	watchdog_disable(smp_processor_id());
+	return 0;
 }
 
-static int watchdog_should_run(unsigned int cpu)
+static void softlockup_stop_all(void)
 {
-	return __this_cpu_read(hrtimer_interrupts) !=
-		__this_cpu_read(soft_lockup_hrtimer_cnt);
+	int cpu;
+
+	for_each_cpu(cpu, &watchdog_allowed_mask)
+		stop_one_cpu(cpu, softlockup_stop_fn, NULL);
+
+	cpumask_clear(&watchdog_allowed_mask);
 }
 
-/*
- * The watchdog thread function - touches the timestamp.
- *
- * It only runs once every sample_period seconds (4 seconds by
- * default) to reset the softlockup timestamp. If this gets delayed
- * for more than 2*watchdog_thresh seconds then the debug-printout
- * triggers in watchdog_timer_fn().
- */
-static void watchdog(unsigned int cpu)
+static int softlockup_start_fn(void *data)
 {
-	__this_cpu_write(soft_lockup_hrtimer_cnt,
-			 __this_cpu_read(hrtimer_interrupts));
-	__touch_watchdog();
+	watchdog_enable(smp_processor_id());
+	return 0;
 }
 
-static struct smp_hotplug_thread watchdog_threads = {
-	.store			= &softlockup_watchdog,
-	.thread_should_run	= watchdog_should_run,
-	.thread_fn		= watchdog,
-	.thread_comm		= "watchdog/%u",
-	.setup			= watchdog_enable,
-	.cleanup		= watchdog_cleanup,
-	.park			= watchdog_disable,
-	.unpark			= watchdog_enable,
-};
-
-static void softlockup_update_smpboot_threads(void)
+static void softlockup_start_all(void)
 {
-	lockdep_assert_held(&watchdog_mutex);
-
-	if (!softlockup_threads_initialized)
-		return;
+	int cpu;
 
-	smpboot_update_cpumask_percpu_thread(&watchdog_threads,
-					     &watchdog_allowed_mask);
+	cpumask_copy(&watchdog_allowed_mask, &watchdog_cpumask);
+	for_each_cpu(cpu, &watchdog_allowed_mask)
+		stop_one_cpu(cpu, softlockup_start_fn, NULL);
 }
 
-/* Temporarily park all watchdog threads */
-static void softlockup_park_all_threads(void)
+int lockup_detector_online_cpu(unsigned int cpu)
 {
-	cpumask_clear(&watchdog_allowed_mask);
-	softlockup_update_smpboot_threads();
+	watchdog_enable(cpu);
+	return 0;
 }
 
-/* Unpark enabled threads */
-static void softlockup_unpark_threads(void)
+int lockup_detector_offline_cpu(unsigned int cpu)
 {
-	cpumask_copy(&watchdog_allowed_mask, &watchdog_cpumask);
-	softlockup_update_smpboot_threads();
+	watchdog_disable(cpu);
+	return 0;
 }
 
 static void lockup_detector_reconfigure(void)
 {
 	cpus_read_lock();
 	watchdog_nmi_stop();
-	softlockup_park_all_threads();
+
+	softlockup_stop_all();
 	set_sample_period();
 	lockup_detector_update_enable();
 	if (watchdog_enabled && watchdog_thresh)
-		softlockup_unpark_threads();
+		softlockup_start_all();
+
 	watchdog_nmi_start();
 	cpus_read_unlock();
 	/*
@@ -580,8 +573,6 @@ static void lockup_detector_reconfigure(
  */
 static __init void lockup_detector_setup(void)
 {
-	int ret;
-
 	/*
 	 * If sysctl is off and watchdog got disabled on the command line,
 	 * nothing to do here.
@@ -592,13 +583,6 @@ static __init void lockup_detector_setup
 	    !(watchdog_enabled && watchdog_thresh))
 		return;
 
-	ret = smpboot_register_percpu_thread_cpumask(&watchdog_threads,
-						     &watchdog_allowed_mask);
-	if (ret) {
-		pr_err("Failed to initialize soft lockup detector threads\n");
-		return;
-	}
-
 	mutex_lock(&watchdog_mutex);
 	softlockup_threads_initialized = true;
 	lockup_detector_reconfigure();
@@ -606,10 +590,6 @@ static __init void lockup_detector_setup
 }
 
 #else /* CONFIG_SOFTLOCKUP_DETECTOR */
-static inline int watchdog_park_threads(void) { return 0; }
-static inline void watchdog_unpark_threads(void) { }
-static inline int watchdog_enable_all_cpus(void) { return 0; }
-static inline void watchdog_disable_all_cpus(void) { }
 static void lockup_detector_reconfigure(void)
 {
 	cpus_read_lock();


* [PATCH 3/4] smpboot: Remove cpumask from the API
  2018-06-07 12:33 [PATCH 0/4] kthread/smpboot: More fixes Peter Zijlstra
  2018-06-07 12:33 ` [PATCH 1/4] kthread, sched: Fix kthread_parkme() (again...) Peter Zijlstra
  2018-06-07 12:33 ` [PATCH 2/4] watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work Peter Zijlstra
@ 2018-06-07 12:33 ` Peter Zijlstra
  2018-06-07 12:33 ` [PATCH 4/4] kthread: Simplify kthread_park() completion Peter Zijlstra
  3 siblings, 0 replies; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-07 12:33 UTC (permalink / raw)
  To: mingo, oleg, gkohli
  Cc: tglx, mpe, bigeasy, linux-kernel, will.deacon, Peter Zijlstra (Intel)

[-- Attachment #1: peterz-kthread-2.patch --]
[-- Type: text/plain, Size: 5064 bytes --]

Now that the sole use of the whole smpboot_*cpumask() API is gone,
remove it.

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/smpboot.h |   15 -------------
 kernel/smpboot.c        |   54 ++++--------------------------------------------
 2 files changed, 6 insertions(+), 63 deletions(-)

--- a/include/linux/smpboot.h
+++ b/include/linux/smpboot.h
@@ -25,8 +25,6 @@ struct smpboot_thread_data;
  *			parked (cpu offline)
  * @unpark:		Optional unpark function, called when the thread is
  *			unparked (cpu online)
- * @cpumask:		Internal state.  To update which threads are unparked,
- *			call smpboot_update_cpumask_percpu_thread().
  * @selfparking:	Thread is not parked by the park function.
  * @thread_comm:	The base name of the thread
  */
@@ -40,23 +38,12 @@ struct smp_hotplug_thread {
 	void				(*cleanup)(unsigned int cpu, bool online);
 	void				(*park)(unsigned int cpu);
 	void				(*unpark)(unsigned int cpu);
-	cpumask_var_t			cpumask;
 	bool				selfparking;
 	const char			*thread_comm;
 };
 
-int smpboot_register_percpu_thread_cpumask(struct smp_hotplug_thread *plug_thread,
-					   const struct cpumask *cpumask);
-
-static inline int
-smpboot_register_percpu_thread(struct smp_hotplug_thread *plug_thread)
-{
-	return smpboot_register_percpu_thread_cpumask(plug_thread,
-						      cpu_possible_mask);
-}
+int smpboot_register_percpu_thread(struct smp_hotplug_thread *plug_thread);
 
 void smpboot_unregister_percpu_thread(struct smp_hotplug_thread *plug_thread);
-void smpboot_update_cpumask_percpu_thread(struct smp_hotplug_thread *plug_thread,
-					  const struct cpumask *);
 
 #endif
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -238,8 +238,7 @@ int smpboot_unpark_threads(unsigned int
 
 	mutex_lock(&smpboot_threads_lock);
 	list_for_each_entry(cur, &hotplug_threads, list)
-		if (cpumask_test_cpu(cpu, cur->cpumask))
-			smpboot_unpark_thread(cur, cpu);
+		smpboot_unpark_thread(cur, cpu);
 	mutex_unlock(&smpboot_threads_lock);
 	return 0;
 }
@@ -280,34 +279,26 @@ static void smpboot_destroy_threads(stru
 }
 
 /**
- * smpboot_register_percpu_thread_cpumask - Register a per_cpu thread related
+ * smpboot_register_percpu_thread - Register a per_cpu thread related
  * 					    to hotplug
  * @plug_thread:	Hotplug thread descriptor
- * @cpumask:		The cpumask where threads run
  *
  * Creates and starts the threads on all online cpus.
  */
-int smpboot_register_percpu_thread_cpumask(struct smp_hotplug_thread *plug_thread,
-					   const struct cpumask *cpumask)
+int smpboot_register_percpu_thread(struct smp_hotplug_thread *plug_thread)
 {
 	unsigned int cpu;
 	int ret = 0;
 
-	if (!alloc_cpumask_var(&plug_thread->cpumask, GFP_KERNEL))
-		return -ENOMEM;
-	cpumask_copy(plug_thread->cpumask, cpumask);
-
 	get_online_cpus();
 	mutex_lock(&smpboot_threads_lock);
 	for_each_online_cpu(cpu) {
 		ret = __smpboot_create_thread(plug_thread, cpu);
 		if (ret) {
 			smpboot_destroy_threads(plug_thread);
-			free_cpumask_var(plug_thread->cpumask);
 			goto out;
 		}
-		if (cpumask_test_cpu(cpu, cpumask))
-			smpboot_unpark_thread(plug_thread, cpu);
+		smpboot_unpark_thread(plug_thread, cpu);
 	}
 	list_add(&plug_thread->list, &hotplug_threads);
 out:
@@ -315,7 +306,7 @@ int smpboot_register_percpu_thread_cpuma
 	put_online_cpus();
 	return ret;
 }
-EXPORT_SYMBOL_GPL(smpboot_register_percpu_thread_cpumask);
+EXPORT_SYMBOL_GPL(smpboot_register_percpu_thread);
 
 /**
  * smpboot_unregister_percpu_thread - Unregister a per_cpu thread related to hotplug
@@ -331,44 +322,9 @@ void smpboot_unregister_percpu_thread(st
 	smpboot_destroy_threads(plug_thread);
 	mutex_unlock(&smpboot_threads_lock);
 	put_online_cpus();
-	free_cpumask_var(plug_thread->cpumask);
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
 
-/**
- * smpboot_update_cpumask_percpu_thread - Adjust which per_cpu hotplug threads stay parked
- * @plug_thread:	Hotplug thread descriptor
- * @new:		Revised mask to use
- *
- * The cpumask field in the smp_hotplug_thread must not be updated directly
- * by the client, but only by calling this function.
- * This function can only be called on a registered smp_hotplug_thread.
- */
-void smpboot_update_cpumask_percpu_thread(struct smp_hotplug_thread *plug_thread,
-					  const struct cpumask *new)
-{
-	struct cpumask *old = plug_thread->cpumask;
-	static struct cpumask tmp;
-	unsigned int cpu;
-
-	lockdep_assert_cpus_held();
-	mutex_lock(&smpboot_threads_lock);
-
-	/* Park threads that were exclusively enabled on the old mask. */
-	cpumask_andnot(&tmp, old, new);
-	for_each_cpu_and(cpu, &tmp, cpu_online_mask)
-		smpboot_park_thread(plug_thread, cpu);
-
-	/* Unpark threads that are exclusively enabled on the new mask. */
-	cpumask_andnot(&tmp, new, old);
-	for_each_cpu_and(cpu, &tmp, cpu_online_mask)
-		smpboot_unpark_thread(plug_thread, cpu);
-
-	cpumask_copy(old, new);
-
-	mutex_unlock(&smpboot_threads_lock);
-}
-
 static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
 
 /*


* [PATCH 4/4] kthread: Simplify kthread_park() completion
  2018-06-07 12:33 [PATCH 0/4] kthread/smpboot: More fixes Peter Zijlstra
                   ` (2 preceding siblings ...)
  2018-06-07 12:33 ` [PATCH 3/4] smpboot: Remove cpumask from the API Peter Zijlstra
@ 2018-06-07 12:33 ` Peter Zijlstra
  2018-06-08  9:52   ` Oleg Nesterov
  3 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-07 12:33 UTC (permalink / raw)
  To: mingo, oleg, gkohli
  Cc: tglx, mpe, bigeasy, linux-kernel, will.deacon, Peter Zijlstra (Intel)

[-- Attachment #1: peterz-kthread-3.patch --]
[-- Type: text/plain, Size: 1337 bytes --]

Now that smpboot_update_cpumask_percpu_thread() is gone, we no longer
have anybody calling kthread_park() on already parked threads. So
revert commit:

  b1f5b378e126 ("kthread: Allow kthread_park() on a parked kthread")

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/kthread.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -190,7 +190,7 @@ static void __kthread_parkme(struct kthr
 		if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))
 			break;
 
-		complete_all(&self->parked);
+		complete(&self->parked);
 		schedule();
 	}
 	__set_current_state(TASK_RUNNING);
@@ -465,7 +465,6 @@ void kthread_unpark(struct task_struct *
 	if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))
 		__kthread_bind(k, kthread->cpu, TASK_PARKED);
 
-	reinit_completion(&kthread->parked);
 	clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
 	/*
 	 * __kthread_parkme() will either see !SHOULD_PARK or get the wakeup.
@@ -493,6 +492,9 @@ int kthread_park(struct task_struct *k)
 	if (WARN_ON(k->flags & PF_EXITING))
 		return -ENOSYS;
 
+	if (WARN_ON_ONCE(test_bit(KTHREAD_SHOULD_PARK, &kthread->flags)))
+		return -EBUSY;
+
 	set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
 	if (k != current) {
 		wake_up_process(k);


* Re: [PATCH 2/4] watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work
  2018-06-07 12:33 ` [PATCH 2/4] watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work Peter Zijlstra
@ 2018-06-07 14:24   ` Peter Zijlstra
  2018-06-07 14:42     ` Peter Zijlstra
  2018-06-08 13:57     ` Oleg Nesterov
  0 siblings, 2 replies; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-07 14:24 UTC (permalink / raw)
  To: mingo, oleg, gkohli; +Cc: tglx, mpe, bigeasy, linux-kernel, will.deacon

On Thu, Jun 07, 2018 at 02:33:12PM +0200, Peter Zijlstra wrote:

> +static int softlockup_stop_fn(void *data)
>  {
> +	watchdog_disable(smp_processor_id());
> +	return 0;
>  }
>  
> +static void softlockup_stop_all(void)
>  {
> +	int cpu;
> +
> +	for_each_cpu(cpu, &watchdog_allowed_mask)
> +		stop_one_cpu(cpu, softlockup_stop_fn, NULL);
> +
> +	cpumask_clear(&watchdog_allowed_mask);
>  }

Bugger, that one doesn't quite work.. watchdog_disable() ends up calling
a sleeping function. I forgot to enable all the debug cruft when
testing..

Let me try and fix that..


* Re: [PATCH 2/4] watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work
  2018-06-07 14:24   ` Peter Zijlstra
@ 2018-06-07 14:42     ` Peter Zijlstra
  2018-06-08 13:57     ` Oleg Nesterov
  1 sibling, 0 replies; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-07 14:42 UTC (permalink / raw)
  To: mingo, oleg, gkohli; +Cc: tglx, mpe, bigeasy, linux-kernel, will.deacon

On Thu, Jun 07, 2018 at 04:24:05PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 07, 2018 at 02:33:12PM +0200, Peter Zijlstra wrote:
> 
> > +static int softlockup_stop_fn(void *data)
> >  {
> > +	watchdog_disable(smp_processor_id());
> > +	return 0;
> >  }
> >  
> > +static void softlockup_stop_all(void)
> >  {
> > +	int cpu;
> > +
> > +	for_each_cpu(cpu, &watchdog_allowed_mask)
> > +		stop_one_cpu(cpu, softlockup_stop_fn, NULL);
> > +
> > +	cpumask_clear(&watchdog_allowed_mask);
> >  }
> 
> Bugger, that one doesn't quite work.. watchdog_disable() ends up calling
> a sleeping function. I forgot to enable all the debug cruft when
> testing..
> 
> Let me try and fix that..

The below seems to fix that, and now I can complete all the tests
without triggering debug nonsense ;-)


--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -512,7 +512,7 @@ static void softlockup_stop_all(void)
 	int cpu;
 
 	for_each_cpu(cpu, &watchdog_allowed_mask)
-		stop_one_cpu(cpu, softlockup_stop_fn, NULL);
+		smp_call_on_cpu(cpu, softlockup_stop_fn, NULL, false);
 
 	cpumask_clear(&watchdog_allowed_mask);
 }
@@ -529,7 +529,7 @@ static void softlockup_start_all(void)
 
 	cpumask_copy(&watchdog_allowed_mask, &watchdog_cpumask);
 	for_each_cpu(cpu, &watchdog_allowed_mask)
-		stop_one_cpu(cpu, softlockup_start_fn, NULL);
+		smp_call_on_cpu(cpu, softlockup_start_fn, NULL, false);
 }
 
 int lockup_detector_online_cpu(unsigned int cpu)


* Re: [PATCH 4/4] kthread: Simplify kthread_park() completion
  2018-06-07 12:33 ` [PATCH 4/4] kthread: Simplify kthread_park() completion Peter Zijlstra
@ 2018-06-08  9:52   ` Oleg Nesterov
  2018-06-12 12:42     ` Peter Zijlstra
  2018-06-25  7:12     ` Peter Zijlstra
  0 siblings, 2 replies; 13+ messages in thread
From: Oleg Nesterov @ 2018-06-08  9:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, gkohli, tglx, mpe, bigeasy, linux-kernel, will.deacon

Peter,

I am travelling till the end of next week, so it is unlikely that I
will be able to reply to emails or even read them.

But I very much want to comment on this change,

On 06/07, Peter Zijlstra wrote:
>
> Now that smpboot_update_cpumask_percpu_thread() is gone, we no longer
> have anybody calling kthread_park() on already parked threads. So
> revert commit:
>
>   b1f5b378e126 ("kthread: Allow kthread_park() on a parked kthread")

Great, I obviously like this patch but the changelog should be fixed ;)

smpboot_update_cpumask_percpu_thread() was actually fine. And we can
(should) revert this commit in any case. Unless I am totally confused.

So how can this code

	for_each_cpu_and(cpu, &tmp, cpu_online_mask)
		smpboot_park_thread(plug_thread, cpu);

in smpboot_update_cpumask_percpu_thread() hit a KTHREAD_SHOULD_PARK
thread? Let's look at the kernel test robot's .config:

	CONFIG_NR_CPUS=1

Now look at NR_CPUS==1 version of for_each_cpu* helpers:

	#define for_each_cpu(cpu, mask)			\
		for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
	#define for_each_cpu_not(cpu, mask)		\
		for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
	#define for_each_cpu_wrap(cpu, mask, start)	\
		for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)(start))
	#define for_each_cpu_and(cpu, mask, and)	\
		for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)and)

See? They all ignore the "mask" argument, and this is obviously wrong.

So even if the "tmp" cpumask is empty the code above always does

		smpboot_park_thread(plug_thread, 0);

and hits the already parked kthread.

Oleg.


* Re: [PATCH 2/4] watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work
  2018-06-07 14:24   ` Peter Zijlstra
  2018-06-07 14:42     ` Peter Zijlstra
@ 2018-06-08 13:57     ` Oleg Nesterov
  2018-06-12 12:17       ` Peter Zijlstra
  1 sibling, 1 reply; 13+ messages in thread
From: Oleg Nesterov @ 2018-06-08 13:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, gkohli, tglx, mpe, bigeasy, linux-kernel, will.deacon

On 06/07, Peter Zijlstra wrote:
>
> On Thu, Jun 07, 2018 at 02:33:12PM +0200, Peter Zijlstra wrote:
>
> > +static int softlockup_stop_fn(void *data)
> >  {
> > +	watchdog_disable(smp_processor_id());
> > +	return 0;
> >  }
> >
> > +static void softlockup_stop_all(void)
> >  {
> > +	int cpu;
> > +
> > +	for_each_cpu(cpu, &watchdog_allowed_mask)
> > +		stop_one_cpu(cpu, softlockup_stop_fn, NULL);
> > +
> > +	cpumask_clear(&watchdog_allowed_mask);
> >  }
>
> Bugger, that one doesn't quite work.. watchdog_disable() ends up calling
> a sleeping function. I forgot to enable all the debug cruft when
> testing..

And probably there is another problem. Both watchdog_disable(cpu) and
watchdog_nmi_disable(cpu) assume that cpu == smp_processor_id(); the cpu
argument is simply ignored.

But lockup_detector_offline_cpu(cpu) is called by cpuhp_invoke_callback(),
so in that case watchdog_disable(dying_cpu) is simply wrong.

Maybe we can do something like the patch below? Then softlockup_stop_all() can simply do

	for_each_cpu(cpu, &watchdog_allowed_mask)
		watchdog_disable(cpu);

watchdog_nmi_disable() is __weak, but at first glance arch/sparc/kernel/nmi.c
does everything correctly.

Oleg.

--- x/kernel/watchdog.c
+++ x/kernel/watchdog.c
@@ -108,13 +108,13 @@ __setup("hardlockup_all_cpu_backtrace=",
  */
 int __weak watchdog_nmi_enable(unsigned int cpu)
 {
-	hardlockup_detector_perf_enable();
+	hardlockup_detector_perf_enable(cpu);
 	return 0;
 }
 
 void __weak watchdog_nmi_disable(unsigned int cpu)
 {
-	hardlockup_detector_perf_disable();
+	hardlockup_detector_perf_disable(cpu);
 }
 
 /* Return 0, if a NMI watchdog is available. Error code otherwise */
@@ -479,7 +479,7 @@ static void watchdog_enable(unsigned int
 
 static void watchdog_disable(unsigned int cpu)
 {
-	struct hrtimer *hrtimer = this_cpu_ptr(&watchdog_hrtimer);
+	struct hrtimer *hrtimer = per_cpu_ptr(&watchdog_hrtimer, cpu);
 
 	watchdog_set_prio(SCHED_NORMAL, 0);
 	/*
--- x/kernel/watchdog_hld.c
+++ x/kernel/watchdog_hld.c
@@ -162,9 +162,8 @@ static void watchdog_overflow_callback(s
 	return;
 }
 
-static int hardlockup_detector_event_create(void)
+static int hardlockup_detector_event_create(int cpu)
 {
-	unsigned int cpu = smp_processor_id();
 	struct perf_event_attr *wd_attr;
 	struct perf_event *evt;
 
@@ -179,37 +178,37 @@ static int hardlockup_detector_event_cre
 			PTR_ERR(evt));
 		return PTR_ERR(evt);
 	}
-	this_cpu_write(watchdog_ev, evt);
+	per_cpu(watchdog_ev, cpu) = evt;
 	return 0;
 }
 
 /**
  * hardlockup_detector_perf_enable - Enable the local event
  */
-void hardlockup_detector_perf_enable(void)
+void hardlockup_detector_perf_enable(int cpu)
 {
-	if (hardlockup_detector_event_create())
+	if (hardlockup_detector_event_create(cpu))
 		return;
 
 	/* use original value for check */
 	if (!atomic_fetch_inc(&watchdog_cpus))
 		pr_info("Enabled. Permanently consumes one hw-PMU counter.\n");
 
-	perf_event_enable(this_cpu_read(watchdog_ev));
+	perf_event_enable(per_cpu(watchdog_ev, cpu));
 }
 
 /**
  * hardlockup_detector_perf_disable - Disable the local event
  */
-void hardlockup_detector_perf_disable(void)
+void hardlockup_detector_perf_disable(int cpu)
 {
-	struct perf_event *event = this_cpu_read(watchdog_ev);
+	struct perf_event *event = per_cpu(watchdog_ev, cpu);
 
 	if (event) {
 		perf_event_disable(event);
-		this_cpu_write(watchdog_ev, NULL);
-		this_cpu_write(dead_event, event);
-		cpumask_set_cpu(smp_processor_id(), &dead_events_mask);
+		per_cpu(watchdog_ev, cpu) = NULL;
+		per_cpu(dead_event, cpu) = event;
+		cpumask_set_cpu(cpu, &dead_events_mask);
 		atomic_dec(&watchdog_cpus);
 	}
 }


* Re: [PATCH 2/4] watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work
  2018-06-08 13:57     ` Oleg Nesterov
@ 2018-06-12 12:17       ` Peter Zijlstra
  0 siblings, 0 replies; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-12 12:17 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: mingo, gkohli, tglx, mpe, bigeasy, linux-kernel, will.deacon


On Fri, Jun 08, 2018 at 03:57:04PM +0200, Oleg Nesterov wrote:

> And probably there is another problem. Both watchdog_disable(cpu) and
> watchdog_nmi_disable(cpu) assume that cpu == smp_processor_id(), this arg
> is simply ignored.
> 
> but lockup_detector_offline_cpu(cpu) is called by cpuhp_invoke_callback(),
> so in this case watchdog_disable(dying_cpu) is simply wrong.

But at this point cpuhp_invoke_callback() still runs on the dying CPU,
so dying_cpu == this_cpu. I actually have a WARN in both
watchdog_{dis,en}able() to verify this assumption.

> May be we can do something like below? Then softlockup_stop_all() can simply do
> 
> 	for_each_cpu(cpu, &watchdog_allowed_mask)
> 		watchdog_disable(cpu);
> 
> watchdog_nmi_disable() is __weak, but at first glance arch/sparc/kernel/nmi.c
> does everything correctly.

I prefer not to do that and to keep the current assumption. While it
would work for disable, the above form will not work for enable (we
really must start the hrtimers on the right CPU), and that would bring
some asymmetry.



* Re: [PATCH 4/4] kthread: Simplify kthread_park() completion
  2018-06-08  9:52   ` Oleg Nesterov
@ 2018-06-12 12:42     ` Peter Zijlstra
  2018-06-25  7:12     ` Peter Zijlstra
  1 sibling, 0 replies; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-12 12:42 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: mingo, gkohli, tglx, mpe, bigeasy, linux-kernel, will.deacon

On Fri, Jun 08, 2018 at 11:52:20AM +0200, Oleg Nesterov wrote:
> in smpboot_update_cpumask_percpu_thread() can hit a KTHREAD_SHOULD_PARK
> thread? Let's look into the kernel test robot's .config:
> 
> 	CONFIG_NR_CPUS=1
> 
> Now look at NR_CPUS==1 version of for_each_cpu* helpers:
> 
> 	#define for_each_cpu(cpu, mask)			\
> 		for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)

Argh, that issue again.

> So even if the "tmp" cpumask is empty the code above always does
> 
> 		smpboot_park_thread(plug_thread, 0);
> 
> and hits the already parked kthread.

OK, I'll write a new Changelog.


* Re: [PATCH 4/4] kthread: Simplify kthread_park() completion
  2018-06-08  9:52   ` Oleg Nesterov
  2018-06-12 12:42     ` Peter Zijlstra
@ 2018-06-25  7:12     ` Peter Zijlstra
  2018-06-25 16:53       ` Oleg Nesterov
  1 sibling, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2018-06-25  7:12 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: mingo, gkohli, tglx, mpe, bigeasy, linux-kernel, will.deacon

Hi Oleg,

On Fri, Jun 08, 2018 at 11:52:20AM +0200, Oleg Nesterov wrote:
> I am travelling until the end of next week, so it is unlikely I will be
> able to reply to emails or even read them.

Should I post a new version with updated changelogs for you?


* Re: [PATCH 4/4] kthread: Simplify kthread_park() completion
  2018-06-25  7:12     ` Peter Zijlstra
@ 2018-06-25 16:53       ` Oleg Nesterov
  0 siblings, 0 replies; 13+ messages in thread
From: Oleg Nesterov @ 2018-06-25 16:53 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, gkohli, tglx, mpe, bigeasy, linux-kernel, will.deacon

Hi Peter,

On 06/25, Peter Zijlstra wrote:
>
> Hi Oleg,
>
> On Fri, Jun 08, 2018 at 11:52:20AM +0200, Oleg Nesterov wrote:
> > I am travelling until the end of next week, so it is unlikely I will be
> > able to reply to emails or even read them.
>
> Should I post a new version with updated changelogs for you?

IIRC you have already sent v2 with the updated changelog which explains
the problem with for_each_cpu? That series looked good to me, thanks!

Oleg.



end of thread, other threads:[~2018-06-25 16:53 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-07 12:33 [PATCH 0/4] kthread/smpboot: More fixes Peter Zijlstra
2018-06-07 12:33 ` [PATCH 1/4] kthread, sched: Fix kthread_parkme() (again...) Peter Zijlstra
2018-06-07 12:33 ` [PATCH 2/4] watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work Peter Zijlstra
2018-06-07 14:24   ` Peter Zijlstra
2018-06-07 14:42     ` Peter Zijlstra
2018-06-08 13:57     ` Oleg Nesterov
2018-06-12 12:17       ` Peter Zijlstra
2018-06-07 12:33 ` [PATCH 3/4] smpboot: Remove cpumask from the API Peter Zijlstra
2018-06-07 12:33 ` [PATCH 4/4] kthread: Simplify kthread_park() completion Peter Zijlstra
2018-06-08  9:52   ` Oleg Nesterov
2018-06-12 12:42     ` Peter Zijlstra
2018-06-25  7:12     ` Peter Zijlstra
2018-06-25 16:53       ` Oleg Nesterov
