LKML Archive on lore.kernel.org
* rSDl cpu scheduler version 0.34-test patch
@ 2007-03-26  1:00 Con Kolivas
  2007-03-26  5:00 ` Mike Galbraith
  2007-03-26  5:46 ` Ingo Molnar
  0 siblings, 2 replies; 7+ messages in thread
From: Con Kolivas @ 2007-03-26  1:00 UTC (permalink / raw)
  To: linux list, Ingo Molnar, ck list, Mike Galbraith, Andrew Morton

This is just for testing at the moment! The reason is the size of this patch.

In the interest of evolution, I've taken the RSDL cpu scheduler and increased 
the resolution of the task timekeeping to nanosecond resolution. This removes 
the runqueue rotation component from RSDL entirely. The design is otherwise 
mostly unchanged, minus over 150 lines of code for the rotation, yet should 
perform slightly better. It should be indistinguishable in usage from v0.33.
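
To give an idea of the shape of the change, here is a minimal sketch (not 
code lifted from the patch, only the idea; the field names are the ones the 
patch itself uses):

/*
 * Sketch only: tick-based timekeeping charges the quota one whole jiffy
 * per timer tick, so anything that runs between ticks is missed.  With
 * nanosecond timekeeping the exact run length is charged every time
 * update_cpu_clock() runs, which is what makes the explicit runqueue
 * rotation unnecessary.
 */
static void charge_tick(struct task_struct *p)
{
	p->time_slice--;			/* one whole jiffy, seen or not */
}

static void charge_ns(struct task_struct *p, unsigned long long now)
{
	p->time_slice -= now - p->last_ran;	/* nanoseconds actually run */
	p->last_ran = now;
}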

Other changes from v0.33:
-rr interval was not being properly scaled with HZ
-fix possible race in checking task_queued in task_running_tick
-scale down rr interval for niced tasks if HZ can tolerate it (see the sketch 
 after this list)
-cull list_splice_tail
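
For the niced task scaling mentioned above, something along these lines is 
what is meant (purely illustrative numbers and names, not the code in the 
patch):

/*
 * Illustrative only: the rr interval is specified in milliseconds and
 * converted to jiffies for the running HZ; positive nice levels only get
 * a shorter interval when HZ is high enough that the result still spans
 * at least one jiffy.
 */
#define RR_INTERVAL_MS	8	/* hypothetical base interval */

static unsigned int scaled_rr_interval(int nice)
{
	unsigned int rr_jiffies = (RR_INTERVAL_MS * HZ + 999) / 1000;

	if (nice > 0 && rr_jiffies > 1) {
		rr_jiffies = rr_jiffies * (20 - nice) / 20;
		if (!rr_jiffies)
			rr_jiffies = 1;
	}
	return rr_jiffies;
}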

What does this mean for the next version of RSDL?

Assuming all works as expected on these test patches, it will be cleanest to 
submit a new series of patches for -mm with the renamed Staircase-Deadline 
scheduler and new documentation (when it's done).


So for testing here are full rollups for 2.6.20.4 and 2.6.21-rc4:
http://ck.kolivas.org/patches/staircase-deadline/2.6.20.4-sd-0.34-test.patch
http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc4-sd-0.34-test.patch

The patches available also include a rollup of sched: accurate user accounting 
as this code touches the same area and it is most convenient to include them 
together.

(incrementals in each subdir of staircase-deadline/ for those interested).

Thanks, Mike, for continuing to attempt to use the cluebat on me on this one. 
From the start I wasn't sure whether this was necessary or not, but it ends 
up being less code than RSDL.

While I'm still far from well, I am luckily in much better shape and able to 
spend enough time at the PC to get this done. Thanks to all those who 
expressed their concern.

-- 
-ck


* Re: rSDl cpu scheduler version 0.34-test patch
  2007-03-26  1:00 rSDl cpu scheduler version 0.34-test patch Con Kolivas
@ 2007-03-26  5:00 ` Mike Galbraith
  2007-03-26  7:19   ` Con Kolivas
  2007-03-26  5:46 ` Ingo Molnar
  1 sibling, 1 reply; 7+ messages in thread
From: Mike Galbraith @ 2007-03-26  5:00 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux list, Ingo Molnar, ck list, Andrew Morton

On Mon, 2007-03-26 at 11:00 +1000, Con Kolivas wrote:
> This is just for testing at the moment! The reason is the size of this patch.

(no testing done yet, but I have a couple comments)

> In the interest of evolution, I've taken the RSDL cpu scheduler and increased 
> the resolution of the task timekeeping to nanosecond resolution.

+	/* All the userspace visible cpu accounting is done here */
+	time_diff = now - p->last_ran;
...
+		/* cpu scheduler quota accounting is performed here */
+		if (p->policy != SCHED_FIFO)
+			p->time_slice -= time_diff;

If we still have any jiffies resolution clocks out there, this could be
a bit problematic.

+static inline void enqueue_pulled_task(struct rq *src_rq, struct rq *rq,
+				       struct task_struct *p)
+{
+	int queue_prio;
+
+	p->array = rq->active; <== set
+	if (!rt_task(p)) {
+		if (p->rotation == src_rq->prio_rotation) {
+			if (p->array == src_rq->expired) { <== evaluate
+				queue_expired(p, rq);
+				goto out_queue;
+			}
+			if (p->time_slice < 0)
+				task_new_array(p, rq);
+		} else
+			task_new_array(p, rq);
+	}
+	queue_prio = next_entitled_slot(p, rq);

(bug aside, this special function really shouldn't exist imho, because
there's nothing special going on.  we didn't need it before to do the
same thing, so we shouldn't need it now.)

+static void recalc_task_prio(struct task_struct *p, struct rq *rq)
+{
+	struct prio_array *array = rq->active;
+	int queue_prio;
+
+	if (p->rotation == rq->prio_rotation) {
+		if (p->array == array) {
+			if (p->time_slice > 0)
+				return;
+			p->time_slice = p->quota;
+		} else if (p->array == rq->expired) {
+			queue_expired(p, rq);
+			return;
+		} else
+			task_new_array(p, rq);
+	} else

Dequeueing a task still leaves a stale p->array lying around to be
possibly evaluated later.  try_to_wake_up() doesn't currently evaluate
and set p->rotation (but should per design doc), so when you get here, a
cross-cpu waking task won't continue its rotation.  If it did evaluate
and set, recalc_task_prio() would evaluate the array pointer, which is
guaranteed to fail these tests, so the task will still not continue its
rotation.  Stale pointers are evil.

	-Mike



* Re: rSDl cpu scheduler version 0.34-test patch
  2007-03-26  1:00 rSDl cpu scheduler version 0.34-test patch Con Kolivas
  2007-03-26  5:00 ` Mike Galbraith
@ 2007-03-26  5:46 ` Ingo Molnar
  2007-03-26  5:53   ` Ingo Molnar
  1 sibling, 1 reply; 7+ messages in thread
From: Ingo Molnar @ 2007-03-26  5:46 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux list, ck list, Mike Galbraith, Andrew Morton


* Con Kolivas <kernel@kolivas.org> wrote:

> The patches available also include a rollup of sched: accurate user 
> accounting as this code touches the same area and it is most 
> convenient to include them together.

as i mentioned before, please keep this one separate, as we want to 
apply it first.

	Ingo


* Re: rSDl cpu scheduler version 0.34-test patch
  2007-03-26  5:46 ` Ingo Molnar
@ 2007-03-26  5:53   ` Ingo Molnar
  0 siblings, 0 replies; 7+ messages in thread
From: Ingo Molnar @ 2007-03-26  5:53 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux list, ck list, Mike Galbraith, Andrew Morton


* Ingo Molnar <mingo@elte.hu> wrote:

> > The patches available also include a rollup of sched: accurate user 
> > accounting as this code touches the same area and it is most 
> > convenient to include them together.
> 
> as i mentioned before, please keep this one separate, as we want to 
> apply it first.

i.e. have the patch below as the first patch of the series. Thanks,

	Ingo

----------------------->
Subject: [patch] sched: accurate user accounting
From: Con Kolivas <kernel@kolivas.org>

Currently we only do cpu accounting to userspace based on what is 
actually happening precisely at each tick. The accuracy of that 
accounting gets progressively worse the lower HZ is. As we already keep 
nanosecond-resolution accounting we can accurately track user cpu, nice 
cpu and idle cpu by moving the accounting to update_cpu_clock with a 
nanosecond cpu_usage_stat entry. This increases overhead slightly but 
avoids the tick aliasing errors that make accounting unreliable.

Remove the now defunct Documentation/cpu-load.txt file.

Signed-off-by: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 Documentation/cpu-load.txt  |  113 --------------------------------------------
 include/linux/kernel_stat.h |    3 +
 include/linux/sched.h       |    2 
 kernel/sched.c              |   58 +++++++++++++++++++++-
 kernel/timer.c              |    5 -
 5 files changed, 60 insertions(+), 121 deletions(-)

Index: linux/Documentation/cpu-load.txt
===================================================================
--- linux.orig/Documentation/cpu-load.txt
+++ /dev/null
@@ -1,113 +0,0 @@
-CPU load
---------
-
-Linux exports various bits of information via `/proc/stat' and
-`/proc/uptime' that userland tools, such as top(1), use to calculate
-the average time system spent in a particular state, for example:
-
-    $ iostat
-    Linux 2.6.18.3-exp (linmac)     02/20/2007
-
-    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
-              10.01    0.00    2.92    5.44    0.00   81.63
-
-    ...
-
-Here the system thinks that over the default sampling period the
-system spent 10.01% of the time doing work in user space, 2.92% in the
-kernel, and was overall 81.63% of the time idle.
-
-In most cases the `/proc/stat' information reflects the reality quite
-closely, however due to the nature of how/when the kernel collects
-this data sometimes it can not be trusted at all.
-
-So how is this information collected?  Whenever timer interrupt is
-signalled the kernel looks what kind of task was running at this
-moment and increments the counter that corresponds to this tasks
-kind/state.  The problem with this is that the system could have
-switched between various states multiple times between two timer
-interrupts yet the counter is incremented only for the last state.
-
-
-Example
--------
-
-If we imagine the system with one task that periodically burns cycles
-in the following manner:
-
- time line between two timer interrupts
-|--------------------------------------|
- ^                                    ^
- |_ something begins working          |
-                                      |_ something goes to sleep
-                                     (only to be awaken quite soon)
-
-In the above situation the system will be 0% loaded according to the
-`/proc/stat' (since the timer interrupt will always happen when the
-system is executing the idle handler), but in reality the load is
-closer to 99%.
-
-One can imagine many more situations where this behavior of the kernel
-will lead to quite erratic information inside `/proc/stat'.
-
-
-/* gcc -o hog smallhog.c */
-#include <time.h>
-#include <limits.h>
-#include <signal.h>
-#include <sys/time.h>
-#define HIST 10
-
-static volatile sig_atomic_t stop;
-
-static void sighandler (int signr)
-{
-     (void) signr;
-     stop = 1;
-}
-static unsigned long hog (unsigned long niters)
-{
-     stop = 0;
-     while (!stop && --niters);
-     return niters;
-}
-int main (void)
-{
-     int i;
-     struct itimerval it = { .it_interval = { .tv_sec = 0, .tv_usec = 1 },
-                             .it_value = { .tv_sec = 0, .tv_usec = 1 } };
-     sigset_t set;
-     unsigned long v[HIST];
-     double tmp = 0.0;
-     unsigned long n;
-     signal (SIGALRM, &sighandler);
-     setitimer (ITIMER_REAL, &it, NULL);
-
-     hog (ULONG_MAX);
-     for (i = 0; i < HIST; ++i) v[i] = ULONG_MAX - hog (ULONG_MAX);
-     for (i = 0; i < HIST; ++i) tmp += v[i];
-     tmp /= HIST;
-     n = tmp - (tmp / 3.0);
-
-     sigemptyset (&set);
-     sigaddset (&set, SIGALRM);
-
-     for (;;) {
-         hog (n);
-         sigwait (&set, &i);
-     }
-     return 0;
-}
-
-
-References
-----------
-
-http://lkml.org/lkml/2007/2/12/6
-Documentation/filesystems/proc.txt (1.8)
-
-
-Thanks
-------
-
-Con Kolivas, Pavel Machek
Index: linux/include/linux/kernel_stat.h
===================================================================
--- linux.orig/include/linux/kernel_stat.h
+++ linux/include/linux/kernel_stat.h
@@ -16,11 +16,14 @@
 
 struct cpu_usage_stat {
 	cputime64_t user;
+	cputime64_t user_ns;
 	cputime64_t nice;
+	cputime64_t nice_ns;
 	cputime64_t system;
 	cputime64_t softirq;
 	cputime64_t irq;
 	cputime64_t idle;
+	cputime64_t idle_ns;
 	cputime64_t iowait;
 	cputime64_t steal;
 };
Index: linux/include/linux/sched.h
===================================================================
--- linux.orig/include/linux/sched.h
+++ linux/include/linux/sched.h
@@ -882,7 +882,7 @@ struct task_struct {
 	int __user *clear_child_tid;		/* CLONE_CHILD_CLEARTID */
 
 	unsigned long rt_priority;
-	cputime_t utime, stime;
+	cputime_t utime, utime_ns, stime;
 	unsigned long nvcsw, nivcsw; /* context switch counts */
 	struct timespec start_time;
 /* mm fault and swap info: this can arguably be seen as either mm-specific or thread-specific */
Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c
+++ linux/kernel/sched.c
@@ -89,6 +89,7 @@ unsigned long long __attribute__((weak))
  */
 #define NS_TO_JIFFIES(TIME)	((TIME) / (1000000000 / HZ))
 #define JIFFIES_TO_NS(TIME)	((TIME) * (1000000000 / HZ))
+#define JIFFY_NS		JIFFIES_TO_NS(1)
 
 /*
  * These are the 'tuning knobs' of the scheduler:
@@ -3017,8 +3018,59 @@ EXPORT_PER_CPU_SYMBOL(kstat);
 static inline void
 update_cpu_clock(struct task_struct *p, struct rq *rq, unsigned long long now)
 {
-	p->sched_time += now - p->last_ran;
-	p->last_ran = rq->most_recent_timestamp = now;
+	struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
+	cputime64_t time_diff;
+
+	/* Sanity check. It should never go backwards or ruin accounting */
+	if (unlikely(now < p->last_ran))
+		goto out_set;
+	/* All the userspace visible cpu accounting is done here */
+	time_diff = now - p->last_ran;
+	p->sched_time += time_diff;
+	if (p != rq->idle) {
+		cputime_t utime_diff = time_diff;
+
+		if (TASK_NICE(p) > 0) {
+			cpustat->nice_ns = cputime64_add(cpustat->nice_ns,
+							 time_diff);
+			if (cpustat->nice_ns > JIFFY_NS) {
+				cpustat->nice_ns =
+					cputime64_sub(cpustat->nice_ns,
+					JIFFY_NS);
+				cpustat->nice =
+					cputime64_add(cpustat->nice, 1);
+			}
+		} else {
+			cpustat->user_ns = cputime64_add(cpustat->user_ns,
+							 time_diff);
+			if (cpustat->user_ns > JIFFY_NS) {
+				cpustat->user_ns =
+					cputime64_sub(cpustat->user_ns,
+					JIFFY_NS);
+				cpustat->user =
+					cputime64_add(cpustat->user, 1);
+			}
+		}
+		p->utime_ns = cputime_add(p->utime_ns, utime_diff);
+		if (p->utime_ns > JIFFY_NS) {
+			p->utime_ns = cputime_sub(p->utime_ns, JIFFY_NS);
+			p->utime = cputime_add(p->utime,
+					       jiffies_to_cputime(1));
+		}
+	} else {
+		cpustat->idle_ns = cputime64_add(cpustat->idle_ns, time_diff);
+		if (cpustat->idle_ns > JIFFY_NS) {
+			cpustat->idle_ns = cputime64_sub(cpustat->idle_ns,
+							 JIFFY_NS);
+			cpustat->idle = cputime64_add(cpustat->idle, 1);
+		}
+	}
+out_set:
+	/*
+	 * We still need to set these values even if the clock appeared to
+	 * go backwards in case _this_ is the correct timestamp.
+	 */
+	rq->most_recent_timestamp = p->last_ran = now;
 }
 
 /*
@@ -3104,8 +3156,6 @@ void account_system_time(struct task_str
 		cpustat->system = cputime64_add(cpustat->system, tmp);
 	else if (atomic_read(&rq->nr_iowait) > 0)
 		cpustat->iowait = cputime64_add(cpustat->iowait, tmp);
-	else
-		cpustat->idle = cputime64_add(cpustat->idle, tmp);
 	/* Account for system time used */
 	acct_update_integrals(p);
 }
Index: linux/kernel/timer.c
===================================================================
--- linux.orig/kernel/timer.c
+++ linux/kernel/timer.c
@@ -1196,10 +1196,9 @@ void update_process_times(int user_tick)
 	int cpu = smp_processor_id();
 
 	/* Note: this timer irq context must be accounted for as well. */
-	if (user_tick)
-		account_user_time(p, jiffies_to_cputime(1));
-	else
+	if (!user_tick)
 		account_system_time(p, HARDIRQ_OFFSET, jiffies_to_cputime(1));
+	/* User time is accounted for in update_cpu_clock in sched.c */
 	run_local_timers();
 	if (rcu_pending(cpu))
 		rcu_check_callbacks(cpu, user_tick);


* Re: rSDl cpu scheduler version 0.34-test patch
  2007-03-26  5:00 ` Mike Galbraith
@ 2007-03-26  7:19   ` Con Kolivas
  2007-03-26  8:10     ` Mike Galbraith
  0 siblings, 1 reply; 7+ messages in thread
From: Con Kolivas @ 2007-03-26  7:19 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: linux list, Ingo Molnar, ck list, Andrew Morton

On Monday 26 March 2007 15:00, Mike Galbraith wrote:
> On Mon, 2007-03-26 at 11:00 +1000, Con Kolivas wrote:
> > This is just for testing at the moment! The reason is the size of this
> > patch.
>
> (no testing done yet, but I have a couple comments)
>
> > In the interest of evolution, I've taken the RSDL cpu scheduler and
> > increased the resolution of the task timekeeping to nanosecond
> > resolution.
>
> +	/* All the userspace visible cpu accounting is done here */
> +	time_diff = now - p->last_ran;
> ...
> +		/* cpu scheduler quota accounting is performed here */
> +		if (p->policy != SCHED_FIFO)
> +			p->time_slice -= time_diff;
>
> If we still have any jiffies resolution clocks out there, this could be
> a bit problematic.

Works fine with jiffy only resolution. sched_clock just returns the change 
when it happens. This leaves us with the accuracy of the previous code on 
hardware that doesn't give higher resolution time from sched_clock.
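
For reference, the generic fallback is roughly just jiffies scaled to 
nanoseconds (quoting from memory, so treat it as a sketch rather than the 
exact mainline code):

unsigned long long __attribute__((weak)) sched_clock(void)
{
	/* only advances by a whole jiffy's worth of nanoseconds at a time */
	return (unsigned long long)jiffies * (1000000000 / HZ);
}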

> +static inline void enqueue_pulled_task(struct rq *src_rq, struct rq *rq,
> +				       struct task_struct *p)
> +{
> +	int queue_prio;
> +
> +	p->array = rq->active; <== set
> +	if (!rt_task(p)) {
> +		if (p->rotation == src_rq->prio_rotation) {
> +			if (p->array == src_rq->expired) { <== evaluate

I don't see a problem.

> +				queue_expired(p, rq);
> +				goto out_queue;
> +			}
> +			if (p->time_slice < 0)
> +				task_new_array(p, rq);
> +		} else
> +			task_new_array(p, rq);
> +	}
> +	queue_prio = next_entitled_slot(p, rq);
>
> (bug aside, this special function really shouldn't exist imho, because
> there's nothing special going on.  we didn't need it before to do the
> same thing, so we shouldn't need it now.)

As the comment says, the likelihood that both runqueues happen to be at the 
same priority_level is very low, so in my opinion the exact position cannot 
be transposed. I'll see if I can simplify it, though.

> +static void recalc_task_prio(struct task_struct *p, struct rq *rq)
> +{
> +	struct prio_array *array = rq->active;
> +	int queue_prio;
> +
> +	if (p->rotation == rq->prio_rotation) {
> +		if (p->array == array) {
> +			if (p->time_slice > 0)
> +				return;
> +			p->time_slice = p->quota;
> +		} else if (p->array == rq->expired) {
> +			queue_expired(p, rq);
> +			return;
> +		} else
> +			task_new_array(p, rq);
> +	} else
>
> Dequeueing a task still leaves a stale p->array lying around to be
> possibly evaluated later.

I don't see quite why that's a problem. If there's memory of the last dequeue 
and it enqueues at a different rotation it gets ignored. If it enqueues 
during the same rotation then that memory proves useful for ensuring it 
doesn't get a new full quota. Either way the array is always updated on 
enqueue so it won't be trying to add it to the wrong runlist.

> try_to_wake_up() doesn't currently evaluate 
> and set p->rotation (but should per design doc),

try_to_wake_up->activate_task->enqueue_task->recalc_task_prio which updates 
p->rotation

> so when you get here, a
> cross-cpu waking task won't continue its rotation.  If it did evaluate
> and set, recalc_task_prio() would evaluate the array pointer, which is
> guaranteed to fail these tests, so the task will still not continue its
> rotation.

> Stale pointers are evil.

I prefer to use the array value as a memory in case it wakes up on the same 
rotation and runqueue.

>
> 	-Mike

Thanks.

-- 
-ck


* Re: rSDl cpu scheduler version 0.34-test patch
  2007-03-26  7:19   ` Con Kolivas
@ 2007-03-26  8:10     ` Mike Galbraith
  2007-03-26  8:39       ` Mike Galbraith
  0 siblings, 1 reply; 7+ messages in thread
From: Mike Galbraith @ 2007-03-26  8:10 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux list, Ingo Molnar, ck list, Andrew Morton

On Mon, 2007-03-26 at 17:19 +1000, Con Kolivas wrote:
> On Monday 26 March 2007 15:00, Mike Galbraith wrote:
> > On Mon, 2007-03-26 at 11:00 +1000, Con Kolivas wrote:
> > > This is just for testing at the moment! The reason is the size of this
> > > patch.
> >
> > (no testing done yet, but I have a couple comments)
> >
> > > In the interest of evolution, I've taken the RSDL cpu scheduler and
> > > increased the resolution of the task timekeeping to nanosecond
> > > resolution.
> >
> > +	/* All the userspace visible cpu accounting is done here */
> > +	time_diff = now - p->last_ran;
> > ...
> > +		/* cpu scheduler quota accounting is performed here */
> > +		if (p->policy != SCHED_FIFO)
> > +			p->time_slice -= time_diff;
> >
> > If we still have any jiffies resolution clocks out there, this could be
> > a bit problematic.
> 
> Works fine with jiffy only resolution. sched_clock just returns the change 
> when it happens. This leaves us with the accuracy of the previous code on 
> hardware that doesn't give higher resolution time from sched_clock.

I was thinking about how often you could zip through there with zero
change to time_slice.  Yeah, I suppose the net effect may be about the
same as dodged ticks.

> > +static inline void enqueue_pulled_task(struct rq *src_rq, struct rq *rq,
> > +				       struct task_struct *p)
> > +{
> > +	int queue_prio;
> > +
> > +	p->array = rq->active; <== set
> > +	if (!rt_task(p)) {
> > +		if (p->rotation == src_rq->prio_rotation) {
> > +			if (p->array == src_rq->expired) { <== evaluate
> 
> I don't see a problem.

p->array has just been set to rq->active; how can it then evaluate to
src_rq->expired?
 
> > +static void recalc_task_prio(struct task_struct *p, struct rq *rq)
> > +{
> > +	struct prio_array *array = rq->active;
> > +	int queue_prio;
> > +
> > +	if (p->rotation == rq->prio_rotation) {
> > +		if (p->array == array) {
> > +			if (p->time_slice > 0)
> > +				return;
> > +			p->time_slice = p->quota;
> > +		} else if (p->array == rq->expired) {
> > +			queue_expired(p, rq);
> > +			return;
> > +		} else
> > +			task_new_array(p, rq);
> > +	} else
> >
> > Dequeueing a task still leaves a stale p->array lying around to be
> > possibly evaluated later.
> 
> I don't see quite why that's a problem. If there's memory of the last dequeue 
> and it enqueues at a different rotation it gets ignored. If it enqueues 
> during the same rotation then that memory proves useful for ensuring it 
> doesn't get a new full quota. Either way the array is always updated on 
> enqueue so it won't be trying to add it to the wrong runlist.
> 
> > try_to_wake_up() doesn't currently evaluate 
> > and set p->rotation (but should per design doc),
> 
> try_to_wake_up->activate_task->enqueue_task->recalc_task_prio which updates 
> p->rotation

As I read it, it's task_new_array() which sets p->rotation, _after_
recalc_task_prio() has evaluated it to see whether the task should continue
its rotation or not.  The mechanism which ensures that sleeping tasks
can only get their fair share, as I understand it, is that they continue
their rotation on wakeup with their bitmap intact.  That does indeed appear
to be the way same-cpu wakeups are handled.  In the cross-cpu wakeup
case, it can't do anything but call task_new_array(), because the
chance that p->rotation is the same as the rotation number of the
new queue is practically nil, whereupon the task's bitmap is zeroed, i.e. it
starts over every time it changes cpu.  No?
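
For clarity, my rough reading of task_new_array() is that it does little more 
than the following (paraphrased from memory, not the literal code):

/*
 * Paraphrase: wipe the task's memory of which priority slots it has used
 * and put it on this runqueue's current rotation, i.e. start it over from
 * scratch.
 */
static inline void task_new_array(struct task_struct *p, struct rq *rq)
{
	bitmap_zero(p->bitmap, PRIO_RANGE);
	p->rotation = rq->prio_rotation;
}

which is exactly the "starts over" behaviour I'm describing for cross-cpu 
wakeups.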

	-Mike



* Re: rSDl cpu scheduler version 0.34-test patch
  2007-03-26  8:10     ` Mike Galbraith
@ 2007-03-26  8:39       ` Mike Galbraith
  0 siblings, 0 replies; 7+ messages in thread
From: Mike Galbraith @ 2007-03-26  8:39 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux list, Ingo Molnar, ck list, Andrew Morton

P.S.  (I've not studied the hotplug code)  When a cpu is hot-unplugged,
are its runqueues and whatnot deallocated?  (I should just go look, but
by the time I get around to it, some nice person may have already put an
answer in my mailbox;)

	-Mike

P.S.#2:  not only is the bitmap zeroed, the task is also issued a fresh time_slice.

	-Mike

