LKML Archive on lore.kernel.org
* [PATCH/RFC v2 0/6] Convert stop_machine to use a workqueue
From: Heiko Carstens @ 2008-10-13 21:50 UTC
  To: rusty; +Cc: jens.axboe, mingo, akpm, schwidefsky, linux-kernel

Version 2: This version converts stop_machine to a workqueue based
           implementation, as suggested by Rusty, instead of trying to extend
           the current kernel thread approach.

This patch series would allow us to convert s390 to the generic IPI interface.
We can't do that currently since our etr/stp code relies on the old semantics
of smp_call_function, which guarantee that the function only returns after all
receiving cpus have acknowledged the IPI. That way it is known that all other
cpus are running in an interrupt handler with interrupts disabled.
This is no longer true with the generic IPI infrastructure.

So one idea was to use stop_machine in order to synchronize all cpus. Rusty
was kind enough to extend it so that it is now possible to run a function
on several cpus, instead of just one.
However we need to be able to do that without allocating any memory. That's
what this patch set is about: it changes the current stop_machine code to
use a workqueue instead of kernel threads to synchronize all cpus.
This has the advantage that all per cpu workqueue threads are already running
when stop_machine gets called and therefore no memory needs to be allocated.
In addition stop_machine can't fail anymore (free_module() relies on that).
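
As a hedged usage sketch (not part of any patch; my_sync_fn is a made-up
name), a caller of the extended interface looks like this -- passing a
cpumask runs the function on every cpu in the mask, while NULL means
"any one cpu":

    get_online_cpus();
    /* Runs my_sync_fn() on all online cpus simultaneously, each with
     * interrupts disabled; the function's return value is passed back. */
    ret = stop_machine(my_sync_fn, data, &cpu_online_map);
    put_online_cpus();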

A few things that need to be addressed:
- stop_machine gets called from initcalls, so we need to make sure that it
  is already initialized and has its workqueue started before that. For that
  a pre smp initcall (early_initcall) is used to initialize it.
- the stop_machine kernel threads used to be rt kernel threads. Workqueue
  threads are normal threads. To get high priority threads a new interface,
  create_rt_workqueue, is introduced.

Patch 1 moves the call to init_workqueues before the pre smp initcalls
Patch 2 introduces create_rt_workqueue
Patch 3 converts stop_machine to use an rt workqueue
Patch 4 adds special case handling for num_online_cpus == 1 to stop_machine
        - Patch 4 is only needed if there were a stop_machine call before
          the pre smp initcalls have been executed. As far as I can see,
          there is currently no such call.
Patch 5 converts the s390 etr and stp code to use stop_machine
Patch 6 converts s390 to the generic IPI interface

Thanks,
Heiko


* [PATCH/RFC v2 1/6] Call init_workqueues before pre smp initcalls.
From: Heiko Carstens @ 2008-10-13 21:50 UTC
  To: rusty; +Cc: jens.axboe, mingo, akpm, schwidefsky, linux-kernel, Heiko Carstens


From: Heiko Carstens <heiko.carstens@de.ibm.com>

This allows workqueues to be created from within the context of
a pre smp initcall (aka early_initcall).
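
A minimal sketch of what this enables (illustration only -- the
"example" initcall below is made up, not part of this patch):

    static struct workqueue_struct *example_wq;

    static int __init example_init(void)
    {
        /* Without this patch init_workqueues() has not run yet at
         * early_initcall time, so this call would fail. */
        example_wq = create_workqueue("example");
        return example_wq ? 0 : -ENOMEM;
    }
    early_initcall(example_init);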

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
---
 init/main.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: linux-2.6/init/main.c
===================================================================
--- linux-2.6.orig/init/main.c
+++ linux-2.6/init/main.c
@@ -767,8 +767,6 @@ static void __init do_initcalls(void)
 static void __init do_basic_setup(void)
 {
 	rcu_init_sched(); /* needed by module_init stage. */
-	/* drivers will send hotplug events */
-	init_workqueues();
 	usermodehelper_init();
 	driver_init();
 	init_irq_proc();
@@ -852,6 +850,8 @@ static int __init kernel_init(void * unu
 
 	cad_pid = task_pid(current);
 
+	init_workqueues();
+
 	smp_prepare_cpus(setup_max_cpus);
 
 	do_pre_smp_initcalls();

-- 


* [PATCH/RFC v2 2/6] workqueue: introduce create_rt_workqueue
From: Heiko Carstens @ 2008-10-13 21:50 UTC
  To: rusty; +Cc: jens.axboe, mingo, akpm, schwidefsky, linux-kernel, Heiko Carstens


From: Heiko Carstens <heiko.carstens@de.ibm.com>

create_rt_workqueue will create a real time prioritized workqueue.
This is needed for the conversion of stop_machine to a workqueue based
implementation.
This patch adds yet another parameter to __create_workqueue_key to tell
it that we want an rt workqueue.
However, it looks like we should rather have a single "int type" flags
parameter instead of the separate singlethread, freezeable and rt ones.
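
A hedged sketch of that alternative (the WQ_F_* names are made up here,
not part of this patch):

    #define WQ_F_SINGLETHREAD  0x01  /* one thread instead of per cpu */
    #define WQ_F_FREEZEABLE    0x02  /* freeze threads during suspend */
    #define WQ_F_RT            0x04  /* SCHED_FIFO worker threads */

    #define create_workqueue(name)    __create_workqueue((name), 0)
    #define create_rt_workqueue(name) __create_workqueue((name), WQ_F_RT)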

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
---
 include/linux/workqueue.h |   18 ++++++++++--------
 kernel/workqueue.c        |    7 ++++++-
 2 files changed, 16 insertions(+), 9 deletions(-)

Index: linux-2.6/include/linux/workqueue.h
===================================================================
--- linux-2.6.orig/include/linux/workqueue.h
+++ linux-2.6/include/linux/workqueue.h
@@ -149,11 +149,11 @@ struct execute_work {
 
 extern struct workqueue_struct *
 __create_workqueue_key(const char *name, int singlethread,
-		       int freezeable, struct lock_class_key *key,
+		       int freezeable, int rt, struct lock_class_key *key,
 		       const char *lock_name);
 
 #ifdef CONFIG_LOCKDEP
-#define __create_workqueue(name, singlethread, freezeable)	\
+#define __create_workqueue(name, singlethread, freezeable, rt)	\
 ({								\
 	static struct lock_class_key __key;			\
 	const char *__lock_name;				\
@@ -164,17 +164,19 @@ __create_workqueue_key(const char *name,
 		__lock_name = #name;				\
 								\
 	__create_workqueue_key((name), (singlethread),		\
-			       (freezeable), &__key,		\
+			       (freezeable), (rt), &__key,	\
 			       __lock_name);			\
 })
 #else
-#define __create_workqueue(name, singlethread, freezeable)	\
-	__create_workqueue_key((name), (singlethread), (freezeable), NULL, NULL)
+#define __create_workqueue(name, singlethread, freezeable, rt)	\
+	__create_workqueue_key((name), (singlethread), (freezeable), (rt), \
+			       NULL, NULL)
 #endif
 
-#define create_workqueue(name) __create_workqueue((name), 0, 0)
-#define create_freezeable_workqueue(name) __create_workqueue((name), 1, 1)
-#define create_singlethread_workqueue(name) __create_workqueue((name), 1, 0)
+#define create_workqueue(name) __create_workqueue((name), 0, 0, 0)
+#define create_rt_workqueue(name) __create_workqueue((name), 0, 0, 1)
+#define create_freezeable_workqueue(name) __create_workqueue((name), 1, 1, 0)
+#define create_singlethread_workqueue(name) __create_workqueue((name), 1, 0, 0)
 
 extern void destroy_workqueue(struct workqueue_struct *wq);
 
Index: linux-2.6/kernel/workqueue.c
===================================================================
--- linux-2.6.orig/kernel/workqueue.c
+++ linux-2.6/kernel/workqueue.c
@@ -62,6 +62,7 @@ struct workqueue_struct {
 	const char *name;
 	int singlethread;
 	int freezeable;		/* Freeze threads during suspend */
+	int rt;
 #ifdef CONFIG_LOCKDEP
 	struct lockdep_map lockdep_map;
 #endif
@@ -766,6 +767,7 @@ init_cpu_workqueue(struct workqueue_stru
 
 static int create_workqueue_thread(struct cpu_workqueue_struct *cwq, int cpu)
 {
+	struct sched_param param = { .sched_priority = MAX_RT_PRIO-1 };
 	struct workqueue_struct *wq = cwq->wq;
 	const char *fmt = is_single_threaded(wq) ? "%s" : "%s/%d";
 	struct task_struct *p;
@@ -781,7 +783,8 @@ static int create_workqueue_thread(struc
 	 */
 	if (IS_ERR(p))
 		return PTR_ERR(p);
-
+	if (cwq->wq->rt)
+		sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
 	cwq->thread = p;
 
 	return 0;
@@ -801,6 +804,7 @@ static void start_workqueue_thread(struc
 struct workqueue_struct *__create_workqueue_key(const char *name,
 						int singlethread,
 						int freezeable,
+						int rt,
 						struct lock_class_key *key,
 						const char *lock_name)
 {
@@ -822,6 +826,7 @@ struct workqueue_struct *__create_workqu
 	lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
 	wq->singlethread = singlethread;
 	wq->freezeable = freezeable;
+	wq->rt = rt;
 	INIT_LIST_HEAD(&wq->list);
 
 	if (singlethread) {

-- 


* [PATCH/RFC v2 3/6] stop_machine: use workqueues instead of kernel threads
From: Heiko Carstens @ 2008-10-13 21:50 UTC
  To: rusty; +Cc: jens.axboe, mingo, akpm, schwidefsky, linux-kernel, Heiko Carstens


From: Heiko Carstens <heiko.carstens@de.ibm.com>

Convert stop_machine to a workqueue based approach. Instead of using kernel
threads for stop_machine we now use an rt workqueue to synchronize all
cpus.
This has the advantage that all needed per cpu threads are already created
when stop_machine gets called, and therefore a call to stop_machine won't
fail anymore. This is needed for s390, which needs a mechanism to synchronize
all cpus without allocating any memory.
As Rusty pointed out, free_module() needs a non-failing stop_machine interface
as well.

As a side effect the stop_machine code gets simplified.
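
For reference, the state machine the cpus step through is unchanged by
this patch; it looks roughly like this (a sketch from memory, see
kernel/stop_machine.c for the real thing):

    enum stopmachine_state {
        STOPMACHINE_NONE,        /* dummy starting state */
        STOPMACHINE_PREPARE,     /* await everyone scheduled */
        STOPMACHINE_DISABLE_IRQ, /* disable interrupts */
        STOPMACHINE_RUN,         /* run the function */
        STOPMACHINE_EXIT,        /* done */
    };

Every cpu acks each state in ack_state(); the last cpu to ack advances
the state, so all cpus move through the machine in lockstep.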

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
---
 kernel/stop_machine.c |  109 ++++++++++++++++++--------------------------------
 1 file changed, 40 insertions(+), 69 deletions(-)

Index: linux-2.6/kernel/stop_machine.c
===================================================================
--- linux-2.6.orig/kernel/stop_machine.c
+++ linux-2.6/kernel/stop_machine.c
@@ -37,9 +37,13 @@ struct stop_machine_data {
 /* Like num_online_cpus(), but hotplug cpu uses us, so we need this. */
 static unsigned int num_threads;
 static atomic_t thread_ack;
-static struct completion finished;
 static DEFINE_MUTEX(lock);
 
+static struct workqueue_struct *stop_machine_wq;
+static struct stop_machine_data active, idle;
+static const cpumask_t *active_cpus;
+static void *stop_machine_work;
+
 static void set_state(enum stopmachine_state newstate)
 {
 	/* Reset ack counter. */
@@ -51,21 +55,25 @@ static void set_state(enum stopmachine_s
 /* Last one to ack a state moves to the next state. */
 static void ack_state(void)
 {
-	if (atomic_dec_and_test(&thread_ack)) {
-		/* If we're the last one to ack the EXIT, we're finished. */
-		if (state == STOPMACHINE_EXIT)
-			complete(&finished);
-		else
-			set_state(state + 1);
-	}
+	if (atomic_dec_and_test(&thread_ack))
+		set_state(state + 1);
 }
 
-/* This is the actual thread which stops the CPU.  It exits by itself rather
- * than waiting for kthread_stop(), because it's easier for hotplug CPU. */
-static int stop_cpu(struct stop_machine_data *smdata)
+/* This is the actual function which stops the CPU. It runs
+ * in the context of a dedicated stopmachine workqueue. */
+static void stop_cpu(struct work_struct *unused)
 {
 	enum stopmachine_state curstate = STOPMACHINE_NONE;
+	struct stop_machine_data *smdata = &idle;
+	int cpu = smp_processor_id();
 
+	if (!active_cpus) {
+		if (cpu == first_cpu(cpu_online_map))
+			smdata = &active;
+	} else {
+		if (cpu_isset(cpu, *active_cpus))
+			smdata = &active;
+	}
 	/* Simple state machine */
 	do {
 		/* Chill out and ensure we re-read stopmachine_state. */
@@ -90,7 +98,6 @@ static int stop_cpu(struct stop_machine_
 	} while (curstate != STOPMACHINE_EXIT);
 
 	local_irq_enable();
-	do_exit(0);
 }
 
 /* Callback for CPUs which aren't supposed to do anything. */
@@ -101,78 +108,34 @@ static int chill(void *unused)
 
 int __stop_machine(int (*fn)(void *), void *data, const cpumask_t *cpus)
 {
-	int i, err;
-	struct stop_machine_data active, idle;
-	struct task_struct **threads;
+	struct work_struct *sm_work;
+	int i;
 
+	/* Set up initial state. */
+	mutex_lock(&lock);
+	num_threads = num_online_cpus();
+	active_cpus = cpus;
 	active.fn = fn;
 	active.data = data;
 	active.fnret = 0;
 	idle.fn = chill;
 	idle.data = NULL;
 
-	/* This could be too big for stack on large machines. */
-	threads = kcalloc(NR_CPUS, sizeof(threads[0]), GFP_KERNEL);
-	if (!threads)
-		return -ENOMEM;
-
-	/* Set up initial state. */
-	mutex_lock(&lock);
-	init_completion(&finished);
-	num_threads = num_online_cpus();
 	set_state(STOPMACHINE_PREPARE);
 
-	for_each_online_cpu(i) {
-		struct stop_machine_data *smdata = &idle;
-		struct sched_param param = { .sched_priority = MAX_RT_PRIO-1 };
-
-		if (!cpus) {
-			if (i == first_cpu(cpu_online_map))
-				smdata = &active;
-		} else {
-			if (cpu_isset(i, *cpus))
-				smdata = &active;
-		}
-
-		threads[i] = kthread_create((void *)stop_cpu, smdata, "kstop%u",
-					    i);
-		if (IS_ERR(threads[i])) {
-			err = PTR_ERR(threads[i]);
-			threads[i] = NULL;
-			goto kill_threads;
-		}
-
-		/* Place it onto correct cpu. */
-		kthread_bind(threads[i], i);
-
-		/* Make it highest prio. */
-		if (sched_setscheduler_nocheck(threads[i], SCHED_FIFO, &param))
-			BUG();
-	}
-
-	/* We've created all the threads.  Wake them all: hold this CPU so one
+	/* Schedule the stop_cpu work on all cpus: hold this CPU so one
 	 * doesn't hit this CPU until we're ready. */
 	get_cpu();
-	for_each_online_cpu(i)
-		wake_up_process(threads[i]);
-
+	for_each_online_cpu(i) {
+		sm_work = percpu_ptr(stop_machine_work, i);
+		INIT_WORK(sm_work, stop_cpu);
+		queue_work_on(i, stop_machine_wq, sm_work);
+	}
 	/* This will release the thread on our CPU. */
 	put_cpu();
-	wait_for_completion(&finished);
+	flush_workqueue(stop_machine_wq);
 	mutex_unlock(&lock);
-
-	kfree(threads);
-
 	return active.fnret;
-
-kill_threads:
-	for_each_online_cpu(i)
-		if (threads[i])
-			kthread_stop(threads[i]);
-	mutex_unlock(&lock);
-
-	kfree(threads);
-	return err;
 }
 
 int stop_machine(int (*fn)(void *), void *data, const cpumask_t *cpus)
@@ -187,3 +150,11 @@ int stop_machine(int (*fn)(void *), void
 	return ret;
 }
 EXPORT_SYMBOL_GPL(stop_machine);
+
+static int __init stop_machine_init(void)
+{
+	stop_machine_wq = create_rt_workqueue("kstop");
+	stop_machine_work = alloc_percpu(struct work_struct);
+	return 0;
+}
+early_initcall(stop_machine_init);

-- 


* [PATCH/RFC v2 4/6] stop_machine: special case for one cpu
From: Heiko Carstens @ 2008-10-13 21:50 UTC
  To: rusty; +Cc: jens.axboe, mingo, akpm, schwidefsky, linux-kernel, Heiko Carstens


From: Heiko Carstens <heiko.carstens@de.ibm.com>

If only one cpu is online, we might as well use the non-smp variant of
stop_machine, which should be much faster since no scheduling is needed.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
---
 include/linux/stop_machine.h |   16 +++++++++++-----
 kernel/stop_machine.c        |    3 +++
 2 files changed, 14 insertions(+), 5 deletions(-)

Index: linux-2.6/include/linux/stop_machine.h
===================================================================
--- linux-2.6.orig/include/linux/stop_machine.h
+++ linux-2.6/include/linux/stop_machine.h
@@ -8,6 +8,16 @@
 #include <linux/cpumask.h>
 #include <asm/system.h>
 
+static inline int stop_machine_simple(int (*fn)(void *), void *data,
+				      const cpumask_t *cpus)
+{
+	int ret;
+	local_irq_disable();
+	ret = fn(data);
+	local_irq_enable();
+	return ret;
+}
+
 #if defined(CONFIG_STOP_MACHINE) && defined(CONFIG_SMP)
 
 /**
@@ -40,11 +50,7 @@ int __stop_machine(int (*fn)(void *), vo
 static inline int stop_machine(int (*fn)(void *), void *data,
 			       const cpumask_t *cpus)
 {
-	int ret;
-	local_irq_disable();
-	ret = fn(data);
-	local_irq_enable();
-	return ret;
+	return stop_machine_simple(fn, data, cpus);
 }
 #endif /* CONFIG_SMP */
 #endif /* _LINUX_STOP_MACHINE */
Index: linux-2.6/kernel/stop_machine.c
===================================================================
--- linux-2.6.orig/kernel/stop_machine.c
+++ linux-2.6/kernel/stop_machine.c
@@ -111,6 +111,9 @@ int __stop_machine(int (*fn)(void *), vo
 	struct work_struct *sm_work;
 	int i;
 
+	if (num_online_cpus() == 1)
+		return stop_machine_simple(fn, data, cpus);
+
 	/* Set up initial state. */
 	mutex_lock(&lock);
 	num_threads = num_online_cpus();

-- 


* [PATCH/RFC v2 5/6] s390: convert etr/stp to stop_machine interface
From: Heiko Carstens @ 2008-10-13 21:50 UTC
  To: rusty; +Cc: jens.axboe, mingo, akpm, schwidefsky, linux-kernel, Heiko Carstens


From: Heiko Carstens <heiko.carstens@de.ibm.com>

This converts the etr and stp code to the new stop_machine interface,
which allows synchronizing all cpus without allocating any memory.
This way we get rid of the only reason why we haven't converted s390
to the generic IPI interface yet.
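
Both conversions below use the same master/slave pattern. Distilled (a
sketch only -- the master's actual clock sync work is elided):

    static int sync_clock(void *data)
    {
        static int first;
        struct clock_sync_data *sync = data;

        /* All cpus enter here via stop_machine(); the first one to
         * flip 'first' becomes the master. */
        if (xchg(&first, 1) == 1) {
            clock_sync_cpu(sync);  /* slave: spin until in_sync is set */
            return 0;
        }
        /* Master: wait until every slave has checked in, ... */
        while (atomic_read(&sync->cpus) != 0)
            cpu_relax();
        /* ... do the actual synchronization, then set sync->in_sync
         * to release the slaves. */
        xchg(&first, 0);
        return 0;
    }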

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
---
 arch/s390/kernel/time.c |  173 +++++++++++++++++++++++++++++-------------------
 1 file changed, 106 insertions(+), 67 deletions(-)

Index: linux-2.6/arch/s390/kernel/time.c
===================================================================
--- linux-2.6.orig/arch/s390/kernel/time.c
+++ linux-2.6/arch/s390/kernel/time.c
@@ -20,6 +20,8 @@
 #include <linux/string.h>
 #include <linux/mm.h>
 #include <linux/interrupt.h>
+#include <linux/cpu.h>
+#include <linux/stop_machine.h>
 #include <linux/time.h>
 #include <linux/sysdev.h>
 #include <linux/delay.h>
@@ -645,14 +647,16 @@ static int etr_aib_follows(struct etr_ai
 }
 
 struct clock_sync_data {
+	atomic_t cpus;
 	int in_sync;
 	unsigned long long fixup_cc;
+	int etr_port;
+	struct etr_aib *etr_aib;
 };
 
-static void clock_sync_cpu_start(void *dummy)
+static void clock_sync_cpu(struct clock_sync_data *sync)
 {
-	struct clock_sync_data *sync = dummy;
-
+	atomic_dec(&sync->cpus);
 	enable_sync_clock();
 	/*
 	 * This looks like a busy wait loop but it isn't. etr_sync_cpus
@@ -678,39 +682,35 @@ static void clock_sync_cpu_start(void *d
 	fixup_clock_comparator(sync->fixup_cc);
 }
 
-static void clock_sync_cpu_end(void *dummy)
-{
-}
-
 /*
  * Sync the TOD clock using the port refered to by aibp. This port
  * has to be enabled and the other port has to be disabled. The
  * last eacr update has to be more than 1.6 seconds in the past.
  */
-static int etr_sync_clock(struct etr_aib *aib, int port)
+static int etr_sync_clock(void *data)
 {
-	struct etr_aib *sync_port;
-	struct clock_sync_data etr_sync;
+	static int first;
 	unsigned long long clock, old_clock, delay, delta;
-	int follows;
+	struct clock_sync_data *etr_sync;
+	struct etr_aib *sync_port, *aib;
+	int port;
 	int rc;
 
-	/* Check if the current aib is adjacent to the sync port aib. */
-	sync_port = (port == 0) ? &etr_port0 : &etr_port1;
-	follows = etr_aib_follows(sync_port, aib, port);
-	memcpy(sync_port, aib, sizeof(*aib));
-	if (!follows)
-		return -EAGAIN;
+	etr_sync = data;
 
-	/*
-	 * Catch all other cpus and make them wait until we have
-	 * successfully synced the clock. smp_call_function will
-	 * return after all other cpus are in etr_sync_cpu_start.
-	 */
-	memset(&etr_sync, 0, sizeof(etr_sync));
-	preempt_disable();
-	smp_call_function(clock_sync_cpu_start, &etr_sync, 0);
-	local_irq_disable();
+	if (xchg(&first, 1) == 1) {
+		/* Slave */
+		clock_sync_cpu(etr_sync);
+		return 0;
+	}
+
+	/* Wait until all other cpus entered the sync function. */
+	while (atomic_read(&etr_sync->cpus) != 0)
+		cpu_relax();
+
+	port = etr_sync->etr_port;
+	aib = etr_sync->etr_aib;
+	sync_port = (port == 0) ? &etr_port0 : &etr_port1;
 	enable_sync_clock();
 
 	/* Set clock to next OTE. */
@@ -727,16 +727,16 @@ static int etr_sync_clock(struct etr_aib
 		delay = (unsigned long long)
 			(aib->edf2.etv - sync_port->edf2.etv) << 32;
 		delta = adjust_time(old_clock, clock, delay);
-		etr_sync.fixup_cc = delta;
+		etr_sync->fixup_cc = delta;
 		fixup_clock_comparator(delta);
 		/* Verify that the clock is properly set. */
 		if (!etr_aib_follows(sync_port, aib, port)) {
 			/* Didn't work. */
 			disable_sync_clock(NULL);
-			etr_sync.in_sync = -EAGAIN;
+			etr_sync->in_sync = -EAGAIN;
 			rc = -EAGAIN;
 		} else {
-			etr_sync.in_sync = 1;
+			etr_sync->in_sync = 1;
 			rc = 0;
 		}
 	} else {
@@ -744,12 +744,33 @@ static int etr_sync_clock(struct etr_aib
 		__ctl_clear_bit(0, 29);
 		__ctl_clear_bit(14, 21);
 		disable_sync_clock(NULL);
-		etr_sync.in_sync = -EAGAIN;
+		etr_sync->in_sync = -EAGAIN;
 		rc = -EAGAIN;
 	}
-	local_irq_enable();
-	smp_call_function(clock_sync_cpu_end, NULL, 0);
-	preempt_enable();
+	xchg(&first, 0);
+	return rc;
+}
+
+static int etr_sync_clock_stop(struct etr_aib *aib, int port)
+{
+	struct clock_sync_data etr_sync;
+	struct etr_aib *sync_port;
+	int follows;
+	int rc;
+
+	/* Check if the current aib is adjacent to the sync port aib. */
+	sync_port = (port == 0) ? &etr_port0 : &etr_port1;
+	follows = etr_aib_follows(sync_port, aib, port);
+	memcpy(sync_port, aib, sizeof(*aib));
+	if (!follows)
+		return -EAGAIN;
+	memset(&etr_sync, 0, sizeof(etr_sync));
+	etr_sync.etr_aib = aib;
+	etr_sync.etr_port = port;
+	get_online_cpus();
+	atomic_set(&etr_sync.cpus, num_online_cpus() - 1);
+	rc = stop_machine(etr_sync_clock, &etr_sync, &cpu_online_map);
+	put_online_cpus();
 	return rc;
 }
 
@@ -906,7 +927,7 @@ static void etr_update_eacr(struct etr_e
 }
 
 /*
- * ETR tasklet. In this function you'll find the main logic. In
+ * ETR work. In this function you'll find the main logic. In
  * particular this is the only function that calls etr_update_eacr(),
  * it "controls" the etr control register.
  */
@@ -1039,7 +1060,7 @@ static void etr_work_fn(struct work_stru
 	etr_update_eacr(eacr);
 	set_bit(CLOCK_SYNC_ETR, &clock_sync_flags);
 	if (now < etr_tolec + (1600000 << 12) ||
-	    etr_sync_clock(&aib, sync_port) != 0) {
+	    etr_sync_clock_stop(&aib, sync_port) != 0) {
 		/* Sync failed. Try again in 1/2 second. */
 		eacr.es = 0;
 		etr_update_eacr(eacr);
@@ -1368,8 +1389,11 @@ static void __init stp_reset(void)
 
 static int __init stp_init(void)
 {
-	if (test_bit(CLOCK_SYNC_HAS_STP, &clock_sync_flags) && stp_online)
-		schedule_work(&stp_work);
+	if (!test_bit(CLOCK_SYNC_HAS_STP, &clock_sync_flags))
+		return 0;
+	if (!stp_online)
+		return 0;
+	schedule_work(&stp_work);
 	return 0;
 }
 
@@ -1417,38 +1441,26 @@ void stp_island_check(void)
 	schedule_work(&stp_work);
 }
 
-/*
- * STP tasklet. Check for the STP state and take over the clock
- * synchronization if the STP clock source is usable.
- */
-static void stp_work_fn(struct work_struct *work)
+
+static int stp_sync_clock(void *data)
 {
-	struct clock_sync_data stp_sync;
+	static int first;
 	unsigned long long old_clock, delta;
+	struct clock_sync_data *stp_sync;
 	int rc;
 
-	if (!stp_online) {
-		chsc_sstpc(stp_page, STP_OP_CTRL, 0x0000);
-		return;
-	}
+	stp_sync = data;
 
-	rc = chsc_sstpc(stp_page, STP_OP_CTRL, 0xb0e0);
-	if (rc)
-		return;
+	if (xchg(&first, 1) == 1) {
+		/* Slave */
+		clock_sync_cpu(stp_sync);
+		return 0;
+	}
 
-	rc = chsc_sstpi(stp_page, &stp_info, sizeof(struct stp_sstpi));
-	if (rc || stp_info.c == 0)
-		return;
+	/* Wait until all other cpus entered the sync function. */
+	while (atomic_read(&stp_sync->cpus) != 0)
+		cpu_relax();
 
-	/*
-	 * Catch all other cpus and make them wait until we have
-	 * successfully synced the clock. smp_call_function will
-	 * return after all other cpus are in clock_sync_cpu_start.
-	 */
-	memset(&stp_sync, 0, sizeof(stp_sync));
-	preempt_disable();
-	smp_call_function(clock_sync_cpu_start, &stp_sync, 0);
-	local_irq_disable();
 	enable_sync_clock();
 
 	set_bit(CLOCK_SYNC_STP, &clock_sync_flags);
@@ -1472,16 +1484,43 @@ static void stp_work_fn(struct work_stru
 	}
 	if (rc) {
 		disable_sync_clock(NULL);
-		stp_sync.in_sync = -EAGAIN;
+		stp_sync->in_sync = -EAGAIN;
 		clear_bit(CLOCK_SYNC_STP, &clock_sync_flags);
 		if (etr_port0_online || etr_port1_online)
 			schedule_work(&etr_work);
 	} else
-		stp_sync.in_sync = 1;
+		stp_sync->in_sync = 1;
+	xchg(&first, 0);
+	return 0;
+}
 
-	local_irq_enable();
-	smp_call_function(clock_sync_cpu_end, NULL, 0);
-	preempt_enable();
+/*
+ * STP work. Check for the STP state and take over the clock
+ * synchronization if the STP clock source is usable.
+ */
+static void stp_work_fn(struct work_struct *work)
+{
+	struct clock_sync_data stp_sync;
+	int rc;
+
+	if (!stp_online) {
+		chsc_sstpc(stp_page, STP_OP_CTRL, 0x0000);
+		return;
+	}
+
+	rc = chsc_sstpc(stp_page, STP_OP_CTRL, 0xb0e0);
+	if (rc)
+		return;
+
+	rc = chsc_sstpi(stp_page, &stp_info, sizeof(struct stp_sstpi));
+	if (rc || stp_info.c == 0)
+		return;
+
+	memset(&stp_sync, 0, sizeof(stp_sync));
+	get_online_cpus();
+	atomic_set(&stp_sync.cpus, num_online_cpus() - 1);
+	stop_machine(stp_sync_clock, &stp_sync, &cpu_online_map);
+	put_online_cpus();
 }
 
 /*

-- 


* [PATCH/RFC v2 6/6] s390: convert to generic IPI infrastructure
From: Heiko Carstens @ 2008-10-13 21:50 UTC
  To: rusty; +Cc: jens.axboe, mingo, akpm, schwidefsky, linux-kernel, Heiko Carstens


From: Heiko Carstens <heiko.carstens@de.ibm.com>

Since etr/stp don't need the old smp_call_function semantics anymore,
we can convert s390 to the generic IPI infrastructure.
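
For context, with USE_GENERIC_SMP_HELPERS an architecture only has to
provide the two send hooks and call the two generic handlers from its
IPI interrupt; the generic code in kernel/smp.c then implements
smp_call_function() and friends. In summary (declarations only):

    /* the architecture provides: */
    void arch_send_call_function_ipi(cpumask_t mask);
    void arch_send_call_function_single_ipi(int cpu);

    /* and its IPI handler calls: */
    generic_smp_call_function_interrupt();
    generic_smp_call_function_single_interrupt();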

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
---
 arch/s390/Kconfig            |    1 
 arch/s390/include/asm/sigp.h |    1 
 arch/s390/include/asm/smp.h  |    5 -
 arch/s390/kernel/smp.c       |  175 ++++---------------------------------------
 4 files changed, 24 insertions(+), 158 deletions(-)

Index: linux-2.6/arch/s390/include/asm/sigp.h
===================================================================
--- linux-2.6.orig/arch/s390/include/asm/sigp.h
+++ linux-2.6/arch/s390/include/asm/sigp.h
@@ -61,6 +61,7 @@ typedef enum
 {
 	ec_schedule=0,
 	ec_call_function,
+	ec_call_function_single,
 	ec_bit_last
 } ec_bit_sig;
 
Index: linux-2.6/arch/s390/include/asm/smp.h
===================================================================
--- linux-2.6.orig/arch/s390/include/asm/smp.h
+++ linux-2.6/arch/s390/include/asm/smp.h
@@ -91,8 +91,9 @@ extern int __cpu_up (unsigned int cpu);
 extern struct mutex smp_cpu_state_mutex;
 extern int smp_cpu_polarization[];
 
-extern int smp_call_function_mask(cpumask_t mask, void (*func)(void *),
-	void *info, int wait);
+extern void arch_send_call_function_single_ipi(int cpu);
+extern void arch_send_call_function_ipi(cpumask_t mask);
+
 #endif
 
 #ifndef CONFIG_SMP
Index: linux-2.6/arch/s390/Kconfig
===================================================================
--- linux-2.6.orig/arch/s390/Kconfig
+++ linux-2.6/arch/s390/Kconfig
@@ -75,6 +75,7 @@ config S390
 	select HAVE_KRETPROBES
 	select HAVE_KVM if 64BIT
 	select HAVE_ARCH_TRACEHOOK
+	select USE_GENERIC_SMP_HELPERS if SMP
 
 source "init/Kconfig"
 
Index: linux-2.6/arch/s390/kernel/smp.c
===================================================================
--- linux-2.6.orig/arch/s390/kernel/smp.c
+++ linux-2.6/arch/s390/kernel/smp.c
@@ -77,159 +77,6 @@ static DEFINE_PER_CPU(struct cpu, cpu_de
 
 static void smp_ext_bitcall(int, ec_bit_sig);
 
-/*
- * Structure and data for __smp_call_function_map(). This is designed to
- * minimise static memory requirements. It also looks cleaner.
- */
-static DEFINE_SPINLOCK(call_lock);
-
-struct call_data_struct {
-	void (*func) (void *info);
-	void *info;
-	cpumask_t started;
-	cpumask_t finished;
-	int wait;
-};
-
-static struct call_data_struct *call_data;
-
-/*
- * 'Call function' interrupt callback
- */
-static void do_call_function(void)
-{
-	void (*func) (void *info) = call_data->func;
-	void *info = call_data->info;
-	int wait = call_data->wait;
-
-	cpu_set(smp_processor_id(), call_data->started);
-	(*func)(info);
-	if (wait)
-		cpu_set(smp_processor_id(), call_data->finished);;
-}
-
-static void __smp_call_function_map(void (*func) (void *info), void *info,
-				    int wait, cpumask_t map)
-{
-	struct call_data_struct data;
-	int cpu, local = 0;
-
-	/*
-	 * Can deadlock when interrupts are disabled or if in wrong context.
-	 */
-	WARN_ON(irqs_disabled() || in_irq());
-
-	/*
-	 * Check for local function call. We have to have the same call order
-	 * as in on_each_cpu() because of machine_restart_smp().
-	 */
-	if (cpu_isset(smp_processor_id(), map)) {
-		local = 1;
-		cpu_clear(smp_processor_id(), map);
-	}
-
-	cpus_and(map, map, cpu_online_map);
-	if (cpus_empty(map))
-		goto out;
-
-	data.func = func;
-	data.info = info;
-	data.started = CPU_MASK_NONE;
-	data.wait = wait;
-	if (wait)
-		data.finished = CPU_MASK_NONE;
-
-	call_data = &data;
-
-	for_each_cpu_mask(cpu, map)
-		smp_ext_bitcall(cpu, ec_call_function);
-
-	/* Wait for response */
-	while (!cpus_equal(map, data.started))
-		cpu_relax();
-	if (wait)
-		while (!cpus_equal(map, data.finished))
-			cpu_relax();
-out:
-	if (local) {
-		local_irq_disable();
-		func(info);
-		local_irq_enable();
-	}
-}
-
-/*
- * smp_call_function:
- * @func: the function to run; this must be fast and non-blocking
- * @info: an arbitrary pointer to pass to the function
- * @wait: if true, wait (atomically) until function has completed on other CPUs
- *
- * Run a function on all other CPUs.
- *
- * You must not call this function with disabled interrupts, from a
- * hardware interrupt handler or from a bottom half.
- */
-int smp_call_function(void (*func) (void *info), void *info, int wait)
-{
-	cpumask_t map;
-
-	spin_lock(&call_lock);
-	map = cpu_online_map;
-	cpu_clear(smp_processor_id(), map);
-	__smp_call_function_map(func, info, wait, map);
-	spin_unlock(&call_lock);
-	return 0;
-}
-EXPORT_SYMBOL(smp_call_function);
-
-/*
- * smp_call_function_single:
- * @cpu: the CPU where func should run
- * @func: the function to run; this must be fast and non-blocking
- * @info: an arbitrary pointer to pass to the function
- * @wait: if true, wait (atomically) until function has completed on other CPUs
- *
- * Run a function on one processor.
- *
- * You must not call this function with disabled interrupts, from a
- * hardware interrupt handler or from a bottom half.
- */
-int smp_call_function_single(int cpu, void (*func) (void *info), void *info,
-			     int wait)
-{
-	spin_lock(&call_lock);
-	__smp_call_function_map(func, info, wait, cpumask_of_cpu(cpu));
-	spin_unlock(&call_lock);
-	return 0;
-}
-EXPORT_SYMBOL(smp_call_function_single);
-
-/**
- * smp_call_function_mask(): Run a function on a set of other CPUs.
- * @mask: The set of cpus to run on.  Must not include the current cpu.
- * @func: The function to run. This must be fast and non-blocking.
- * @info: An arbitrary pointer to pass to the function.
- * @wait: If true, wait (atomically) until function has completed on other CPUs.
- *
- * Returns 0 on success, else a negative status code.
- *
- * If @wait is true, then returns once @func has returned; otherwise
- * it returns just before the target cpu calls @func.
- *
- * You must not call this function with disabled interrupts or from a
- * hardware interrupt handler or from a bottom half handler.
- */
-int smp_call_function_mask(cpumask_t mask, void (*func)(void *), void *info,
-			   int wait)
-{
-	spin_lock(&call_lock);
-	cpu_clear(smp_processor_id(), mask);
-	__smp_call_function_map(func, info, wait, mask);
-	spin_unlock(&call_lock);
-	return 0;
-}
-EXPORT_SYMBOL(smp_call_function_mask);
-
 void smp_send_stop(void)
 {
 	int cpu, rc;
@@ -271,7 +118,10 @@ static void do_ext_call_interrupt(__u16 
 	bits = xchg(&S390_lowcore.ext_call_fast, 0);
 
 	if (test_bit(ec_call_function, &bits))
-		do_call_function();
+		generic_smp_call_function_interrupt();
+
+	if (test_bit(ec_call_function_single, &bits))
+		generic_smp_call_function_single_interrupt();
 }
 
 /*
@@ -288,6 +138,19 @@ static void smp_ext_bitcall(int cpu, ec_
 		udelay(10);
 }
 
+void arch_send_call_function_ipi(cpumask_t mask)
+{
+	int cpu;
+
+	for_each_cpu_mask(cpu, mask)
+		smp_ext_bitcall(cpu, ec_call_function);
+}
+
+void arch_send_call_function_single_ipi(int cpu)
+{
+	smp_ext_bitcall(cpu, ec_call_function_single);
+}
+
 #ifndef CONFIG_64BIT
 /*
  * this function sends a 'purge tlb' signal to another CPU.
@@ -588,9 +451,9 @@ int __cpuinit start_secondary(void *cpuv
 	/* call cpu notifiers */
 	notify_cpu_starting(smp_processor_id());
 	/* Mark this cpu as online */
-	spin_lock(&call_lock);
+	ipi_call_lock();
 	cpu_set(smp_processor_id(), cpu_online_map);
-	spin_unlock(&call_lock);
+	ipi_call_unlock();
 	/* Switch on interrupts */
 	local_irq_enable();
 	/* Print info about this processor */

-- 


* Re: [PATCH/RFC v2 0/6] Convert stop_machine to use a workqueue
From: Rusty Russell @ 2008-10-16 10:38 UTC
  To: Heiko Carstens; +Cc: jens.axboe, mingo, akpm, schwidefsky, linux-kernel

On Tuesday 14 October 2008 08:50:07 Heiko Carstens wrote:
> Version 2: This version converts stop_machine to a workqueue based
> implementation, as suggested by Rusty, instead of trying to extend the
> current kernel thread approach.
>
> This patch series would allow us to convert s390 to the generic IPI
> interface. We can't do that currently since our etr/stp code relies on the
> old semantics of smp_call_function, which guarantee that the function only
> returns after all receiving cpus have acknowledged the IPI. That way it is
> known that all other cpus are running in an interrupt handler with
> interrupts disabled. This is no longer true with the generic IPI
> infrastructure.
>
> So one idea was to use stop_machine in order to synchronize all cpus. Rusty
> was kind enough to extend it so that it is now possible to run a function
> on several cpus, instead of just one.
> However we need to be able to do that without allocating any memory. That's
> what this patch set is about: it changes the current stop_machine code to
> use a workqueue instead of kernel threads to synchronize all cpus.
> This has the advantage that all per cpu workqueue threads are already
> running when stop_machine gets called and therefore no memory needs to be
> allocated. In addition stop_machine can't fail anymore (free_module()
> relies on that).
>
> A few things that need to be addressed:
> - stop_machine gets called from initcalls, so we need to make sure that it
>   is already initialized and has its workqueue started before that. For
> that a pre smp initcall (early_initcall) is used to initialize it.
> - the stop_machine kernel threads used to be rt kernel threads. Workqueue
>   threads are normal threads. To get high priority threads a new interface,
>   create_rt_workqueue, is introduced.
>
> Patch 1 moves the call to init_workqueues before the pre smp initcalls
> Patch 2 introduces create_rt_workqueue
> Patch 3 converts stop_machine to use an rt workqueue

OK, I've taken 1-3.  Hope for Ingo's ack on 1 and 2.  I'm holding out on 4, 
and hopefully s390 can merge after this is done.

Thanks!
Rusty.


* Re: [PATCH/RFC v2 2/6] workqueue: introduce create_rt_workqueue
From: Rusty Russell @ 2008-10-21 22:15 UTC
  To: Heiko Carstens; +Cc: jens.axboe, mingo, akpm, schwidefsky, linux-kernel

On Tuesday 14 October 2008 08:50:09 Heiko Carstens wrote:
> From: Heiko Carstens <heiko.carstens@de.ibm.com>
>
> create_rt_workqueue will create a real time prioritized workqueue.
> This is needed for the conversion of stop_machine to a workqueue based
> implementation.
> This patch adds yet another parameter to __create_workqueue_key to tell
> it that we want an rt workqueue.
> However, it looks like we should rather have a single "int type" flags
> parameter instead of the separate singlethread, freezeable and rt ones.

Ingo didn't ack this, but he didn't nack it either and it's a straightforward 
transformation.  If we want to enum the type we can always do it later.

I'll push this now as part of my stop_machine and module series.

Thanks!
Rusty.



