LKML Archive on lore.kernel.org
From: "J.C. Pizarro" <jcpiza@gmail.com>
To: "Rik van Riel" <riel@redhat.com>,
"Linus Torvalds" <torvalds@linux-foundation.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: Please, put 64-bit counter per task and incr.by.one each ctxt switch.
Date: Sun, 24 Feb 2008 05:08:46 +0100 [thread overview]
Message-ID: <998d0e4a0802232008w5c6566f0se15ab749ec6f1ceb@mail.gmail.com> (raw)
In-Reply-To: <20080223221750.429cf0d9@bree.surriel.com>
On 2008/2/24, Rik van Riel <riel@redhat.com> wrote:
> On Sun, 24 Feb 2008 04:08:38 +0100
> "J.C. Pizarro" <jcpiza@gmail.com> wrote:
>
> > We will need 64-bit counters of the slow context switches,
> > one counter for each newly created task (e.g. u64 ctxt_switch_counts;)
>
>
> Please send a patch ...
diff -ur linux-2.6_git-20080224.orig/include/linux/sched.h linux-2.6_git-20080224/include/linux/sched.h
--- linux-2.6_git-20080224.orig/include/linux/sched.h	2008-02-24 01:04:18.000000000 +0100
+++ linux-2.6_git-20080224/include/linux/sched.h	2008-02-24 04:50:18.000000000 +0100
@@ -1007,6 +1007,12 @@
 	struct hlist_head preempt_notifiers;
 #endif
 
+	unsigned long long ctxt_switch_counts; /* 64-bit switches' count */
+	/* ToDo:
+	 * To implement a poller/clock for CPU-scheduler that only reads
+	 * these counts of context switches of the runqueue's tasks.
+	 * No problem if this poller/clock is not implemented. */
+
 	/*
 	 * fpu_counter contains the number of consecutive context switches
 	 * that the FPU is used. If this is over a threshold, the lazy fpu
diff -ur linux-2.6_git-20080224.orig/kernel/sched.c linux-2.6_git-20080224/kernel/sched.c
--- linux-2.6_git-20080224.orig/kernel/sched.c	2008-02-24 01:04:19.000000000 +0100
+++ linux-2.6_git-20080224/kernel/sched.c	2008-02-24 04:33:57.000000000 +0100
@@ -2008,6 +2008,8 @@
 	BUG_ON(p->state != TASK_RUNNING);
 	update_rq_clock(rq);
 
+	p->ctxt_switch_counts = 0ULL; /* task's 64-bit counter inited 0 */
+
 	p->prio = effective_prio(p);
 
 	if (!p->sched_class->task_new || !current->se.on_rq) {
@@ -2189,8 +2191,14 @@
 context_switch(struct rq *rq, struct task_struct *prev,
 	       struct task_struct *next)
 {
+	unsigned long flags;
+	struct rq *rq_prev;
 	struct mm_struct *mm, *oldmm;
 
+	rq_prev = task_rq_lock(prev, &flags); /* locking the prev task */
+	prev->ctxt_switch_counts++; /* incr. +1 the task's 64-bit counter */
+	task_rq_unlock(rq_prev, &flags); /* unlocking the prev task */
+
 	prepare_task_switch(rq, prev, next);
 	mm = next->mm;
 	oldmm = prev->active_mm;
> > I will explain to you later why it is needed.
>
>
> ... and explain exactly why the kernel needs this extra code.
One reason: to improve interactivity, which is something the CFS fair
scheduler currently lacks.
o:)
[-- Attachment #2: linux-2.6_git-20080224_ctxt_switch_counts.patch --]
[-- Type: application/octet-stream, Size: 1754 bytes --]
Thread overview: 10+ messages
2008-02-24 3:08 J.C. Pizarro
2008-02-24 3:17 ` Rik van Riel
2008-02-24 4:08 ` J.C. Pizarro [this message]
2008-02-24 4:26 ` Rik van Riel
2008-02-24 13:12 ` J.C. Pizarro
2008-02-24 17:53 ` Mike Galbraith
2008-02-25 20:34 ` Andrew Morton
2008-02-26 13:20 ` J.C. Pizarro
2008-02-26 13:41 ` Alexey Dobriyan
2008-02-24 5:08 ` Mike Galbraith