LKML Archive on lore.kernel.org
From: Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com>
To: Mike Galbraith <efault@gmx.de>
Cc: Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com>,
	Linux Kernel <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@elte.hu>
Subject: Re: [BUG] scheduler: strange behavior with massive interactive processes
Date: Sat, 31 Mar 2007 19:15:13 +0900	[thread overview]
Message-ID: <87wt0xhg7i.wl%takeuchi_satoru@jp.fujitsu.com> (raw)
In-Reply-To: <1175071091.29829.58.camel@Homer.simpson.net>

Hi Mike,

> I puttered around with your testcase a bit, and didn't see interactive
> tasks starving other interactive tasks so much as plain old interactive
> tasks starving expired tasks, which they're supposed to be able to do,

I inserted trace code into the kernel to observe all context switches, and it
confirmed that on each CPU fewer than 10 processes at the max priority do run
continuously, while the others (at max - 1 priority) get to run only at the
beginning of the program or when the runqueue's active array is expired (that
chance comes only about once every 200 secs in the 200 [procs/CPU] case, and
even then their CPU time is taken away after only 1 tick).
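
(The trace code itself is omitted here. Something along the following lines,
a sketch rather than the exact patch I applied, logging every switch in
kernel/sched.c's schedule(), is enough to see which tasks hold the CPU:)

  /* in schedule(), just before context_switch(rq, prev, next): */
  if (prev != next)
          printk(KERN_DEBUG "cpu%d: pid %d (prio %d) -> pid %d (prio %d)\n",
                 smp_processor_id(), prev->pid, prev->prio,
                 next->pid, next->prio);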

> Interactivity still seems to be fine with reasonable non-interactive
> loads despite ~reserving more bandwidth for non-interactive tasks.  Only
> lightly tested, YMMV, and of course the standard guarantee applies ;)

I've only looked at your patch briefly and can't comment on it accurately yet.
For the time being, however, I ran the same test as in my initial mail.
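
(For readers without the initial mail at hand: the load is, roughly, many
processes that each alternate a short CPU burst with a brief sleep, so the
scheduler rates them interactive, while counting how many loops each one
completes before the run ends. The following is a simplified sketch of that
kind of load, not the exact program from the initial mail; the burst length,
1 ms sleep, and 60-second run are placeholder values:)

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <signal.h>
  #include <sys/wait.h>

  static volatile sig_atomic_t stop;

  static void on_term(int sig)
  {
          stop = 1;
  }

  static void child(void)
  {
          unsigned long loops = 0;

          signal(SIGTERM, on_term);
          while (!stop) {
                  volatile unsigned long i;

                  for (i = 0; i < 1000000; i++)
                          ;                       /* short CPU burst */
                  usleep(1000);                   /* brief sleep: looks interactive */
                  loops++;
          }
          printf("%d %lu\n", (int)getpid(), loops);
          exit(0);
  }

  int main(int argc, char **argv)
  {
          int i, n = argc > 1 ? atoi(argv[1]) : 200;

          for (i = 0; i < n; i++)
                  if (fork() == 0)
                          child();
          sleep(60);                              /* measurement period */
          signal(SIGTERM, SIG_IGN);               /* don't kill ourselves */
          kill(0, SIGTERM);                       /* stop every child */
          while (wait(NULL) > 0)
                  ;
          return 0;
  }

Collecting each child's printed loop count and summarizing it gives the
avg/max/min/stdev values in the tables below.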

Test environment
================

 - kernel: 2.6.21-rc5 with or without Mike's patch
 - others: same as my initial mail except for omitting nice 19 cases

Result (without Mike's patch)
=============================

  +---------+-----------+------+------+------+--------+
  |   # of  |   # of    | avg  | max  | min  |  stdev |
  |   CPUs  | processes | (*1) | (*2) | (*3) |  (*4)  |
  +---------+-----------+------+------+------+--------+
  | 1(i386) |       200 |  162 | 8258 |    1 |   1113 |
  +---------+-----------+------+------+------+--------+
  |         |           |  378 | 9314 |    2 |   1421 |
  | 2(ia64) |       400 +------+------+------+--------+
  |         |           |  189 |12544 |    1 |   1443 |
  +---------+-----------+------+------+------+--------+

  *1) average number of loops among all processes
  *2) maximum number of loops among all processes
  *3) minimum number of loops among all processes
  *4) standard deviation
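
  (All four columns are computed over the per-process loop counts, for
  instance as follows; that the population form of the standard deviation
  is used, rather than the sample form, is my assumption here:)

  #include <math.h>

  static void stats(const unsigned long *loops, int n, double *avg,
                    unsigned long *max, unsigned long *min, double *stdev)
  {
          double sum = 0.0, var = 0.0;
          int i;

          *max = *min = loops[0];
          for (i = 0; i < n; i++) {
                  sum += loops[i];
                  if (loops[i] > *max)
                          *max = loops[i];
                  if (loops[i] < *min)
                          *min = loops[i];
          }
          *avg = sum / n;
          for (i = 0; i < n; i++)
                  var += ((double)loops[i] - *avg) * ((double)loops[i] - *avg);
          *stdev = sqrt(var / n);        /* population standard deviation */
  }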

Result (with Mike's patch)
==========================

  +---------+-----------+------+------+------+--------+
  |   # of  |   # of    | avg  | max  | min  |  stdev |
  |   CPUs  | processes |      |      |      |        |
  +---------+-----------+------+------+------+--------+
  | 1(i386) |       200 |  154 | 1114 |    1 |    210 |
  +---------+-----------+------+------+------+--------+
  |         |           |  373 | 1328 |  108 |    246 |
  | 2(ia64) |       400 +------+------+------+--------+
  |         |           |  186 | 1169 |    1 |    211 |
  +---------+-----------+------+------+------+--------+

I also gathered the data while varying the number of processes on the 1 CPU (i386) machine:

  +---------+-----------+------+------+------+--------+
  |   # of  |   # of    | avg  | max  | min  |  stdev |
  |   CPUs  | processes |      |      |      |        |
  +---------+-----------+------+------+------+--------+
  |         |        25 | 1208 | 1787 |  987 |    237 |
  |         +-----------+------+------+------+--------+
  |         |        50 |  868 | 1631 |  559 |    275 |
  | 1(i386) +-----------+------+------+------+--------+
  |         |       100 |  319 | 1017 |   25 |    232 |
  |         +-----------+------+------+------+--------+
  |         |   200(*1) |  154 | 1114 |    1 |    210 |
  +---------+-----------+------+------+------+--------+

  *1) Same row as in the above table, repeated for easy comparison

The result seems to depend strongly on the number of processes, and at
present Ingo's patch looks better.

Thanks,

Satoru


Thread overview: 10+ messages
2007-03-27  1:34 [BUG] scheduler: strange behavior with massive interactive processes Satoru Takeuchi
2007-03-27  5:04 ` Mike Galbraith
2007-03-28  8:38   ` Mike Galbraith
2007-03-28 11:45     ` Ingo Molnar
2007-03-28 11:51       ` Mike Galbraith
2007-03-31 10:15     ` Satoru Takeuchi [this message]
2007-03-31 10:29       ` Mike Galbraith
2007-03-27 19:14 ` Ingo Molnar
2007-03-28  1:16   ` Satoru Takeuchi
2007-03-31  8:16     ` Satoru Takeuchi
