LKML Archive on lore.kernel.org
From: "Satoshi UCHIDA" <s-uchida@ap.jp.nec.com>
To: "'Ryo Tsuruta'" <ryov@valinux.co.jp>
Cc: <axboe@kernel.dk>, <vtaras@openvz.org>,
	<containers@lists.linux-foundation.org>,
	<tom-sugawara@ap.jp.nec.com>, <linux-kernel@vger.kernel.org>,
	<m-takahashi@ex.jp.nec.com>
Subject: RE: [RFC][v2][patch 0/12][CFQ-cgroup]Yet another I/O bandwidth controlling subsystem for CGroups based on CFQ
Date: Thu, 26 Jun 2008 13:49:20 +0900	[thread overview]
Message-ID: <002001c8d747$ffe5a8b0$ffb0fa10$@jp.nec.com> (raw)
In-Reply-To: <20080603.171535.246514860.ryov@valinux.co.jp>

Hi, Tsuruta.

> In addition, I got the following message during test #2. Program
> "ioload", our benchmark program, was blocked more than 120 seconds.
> Do you see any problems?

No.
I tried a test in an environment running from 1 to 200 processes
per group, but no such message was output.
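
For reference, each test process runs roughly the workload sketched below:
repeated 4KB random reads with O_DIRECT against one of the prepared 250MB
files. This is only an illustrative sketch under those assumptions, not the
actual benchmark ("ioload") source; the file path is a placeholder and the
run length is controlled externally.

/*
 * Illustrative worker: 4KB random direct reads against one prepared file.
 * Not the real benchmark; the path below is a placeholder.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define FILE_SIZE  (250LL * 1024 * 1024)    /* 250MB test file */
#define BLOCK_SIZE 4096                     /* 4KB per I/O */

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/sdb3/file000";
	long long nblocks = FILE_SIZE / BLOCK_SIZE;
	long count = 0;
	void *buf;
	int fd;

	/* O_DIRECT requires an aligned buffer */
	if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE))
		return 1;

	fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	srand(getpid());
	for (;;) {	/* the parent stops the workers after 10 minutes */
		off_t off = (off_t)(rand() % nblocks) * BLOCK_SIZE;
		if (pread(fd, buf, BLOCK_SIZE, off) != BLOCK_SIZE)
			break;
		count++;
	}
	printf("%ld I/Os completed\n", count);
	close(fd);
	return 0;
}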

> The result of test #1 is close to your estimation, but the result
> of test #2 is not, the gap between the estimation and the result
> increased.

In my test above, the gap between the estimation and the result
grows as the number of processes increases.
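
(For reference, the estimated shares of 61.5%, 30.8% and 7.7% correspond
exactly to a weight ratio of 8 : 4 : 1 for priorities 0, 4 and 7:
8/13 = 61.5%, 4/13 = 30.8%, 1/13 = 7.7%. That weighting is presumably
what the estimate assumes.)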

The situation is similar for native CFQ with the ionice command.
These circumstances appear when the total number of processes exceeds 200.
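
For the native CFQ comparison, each process's priority can be assigned the
way the ionice command does it, through the ioprio_set() system call. Below
is a minimal sketch assuming the best-effort class; the constants mirror
include/linux/ioprio.h (glibc has no wrapper, so syscall() is used), and the
helper name is only for illustration.

/*
 * Set the calling process's I/O priority in the best-effort class,
 * as "ionice -c 2 -n <prio>" would.  Values mirror include/linux/ioprio.h.
 */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_CLASS_BE		2	/* best-effort scheduling class */
#define IOPRIO_WHO_PROCESS	1	/* "who" is a single process */

static int set_be_ioprio(int prio)	/* prio: 0 (highest) .. 7 (lowest) */
{
	int value = (IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) | prio;

	/* who == 0 means the calling process */
	return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, value);
}

int main(void)
{
	if (set_be_ioprio(4) < 0)	/* e.g. priority 4, as in group 2 */
		perror("ioprio_set");
	/* ... run the I/O workload here ... */
	return 0;
}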

I will keep investigating this problem.


Thanks,
  Satoshi Uchida.

> -----Original Message-----
> From: Ryo Tsuruta [mailto:ryov@valinux.co.jp]
> Sent: Tuesday, June 03, 2008 5:16 PM
> To: s-uchida@ap.jp.nec.com
> Cc: axboe@kernel.dk; vtaras@openvz.org;
> containers@lists.linux-foundation.org; tom-sugawara@ap.jp.nec.com;
> linux-kernel@vger.kernel.org
> Subject: Re: [RFC][v2][patch 0/12][CFQ-cgroup]Yet another I/O bandwidth
> controlling subsystem for CGroups based on CFQ
> 
> Hi Uchida-san,
> 
> > I report my tests.
> 
> I did a similar test to yours. I increased the number of I/Os
> which are issued simultaneously up to 100 per cgroup.
> 
>   Procedures:
>     o Prepare 300 files, each 250MB in size, on one partition (sdb3).
>     o Create three groups with priorities 0, 4 and 7.
>     o Run many processes issuing random direct I/O with 4KB data on the
>       files in the three groups.
>           #1 Run  25 processes issuing read I/O only per group.
>           #2 Run 100 processes issuing read I/O only per group.
>     o Count the number of I/Os completed in 10 minutes.
> 
>                The number of I/Os (percentage to total I/O)
>      --------------------------------------------------------------
>     | group       |  group 1   |  group 2   |  group 3   |  total  |
>     | priority    | 0(highest) |     4      |  7(lowest) |  I/Os   |
>     |-------------+------------+------------+------------+---------|
>     | Estimate    |            |            |            |         |
>     | Performance |    61.5%   |    30.8%   |    7.7%    |         |
>     |-------------+------------+------------+------------|---------|
>     | #1  25procs | 52763(57%) | 30811(33%) |  9575(10%) |  93149  |
>     | #2 100procs | 24949(40%) | 21325(34%) | 16508(26%) |  62782  |
>      --------------------------------------------------------------
> 
> The result of test #1 is close to your estimation, but the result
> of test #2 is not, the gap between the estimation and the result
> increased.
> 
> In addition, I got the following message during test #2. Program
> "ioload", our benchmark program, was blocked more than 120 seconds.
> Do you see any problems?
> 
> INFO: task ioload:8456 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> ioload        D 00000008  2772  8456   8419
>        f72eb740 00200082 c34862c0 00000008 c3565170 c35653c0 c2009d80
>        00000001
>        c1d1bea0 00200046 ffffffff f6ee039c 00000000 00000000 00000000
>        c2009d80
>        018db000 00000000 f71a6a00 c0604fb6 00000000 f71a6bc8 c04876a4
>        00000000
> Call Trace:
>  [<c0604fb6>] io_schedule+0x4a/0x81
>  [<c04876a4>] __blockdev_direct_IO+0xa04/0xb54
>  [<c04a3aa2>] ext2_direct_IO+0x35/0x3a
>  [<c04a4757>] ext2_get_block+0x0/0x603
>  [<c044ab81>] generic_file_direct_IO+0x103/0x118
>  [<c044abe6>] generic_file_direct_write+0x50/0x13d
>  [<c044b59e>] __generic_file_aio_write_nolock+0x375/0x4c3
>  [<c046e571>] link_path_walk+0x86/0x8f
>  [<c044a1e8>] find_lock_page+0x19/0x6d
>  [<c044b73e>] generic_file_aio_write+0x52/0xa9
>  [<c0466256>] do_sync_write+0xbf/0x100
>  [<c042ca44>] autoremove_wake_function+0x0/0x2d
>  [<c0413366>] update_curr+0x83/0x116
>  [<c0605280>] mutex_lock+0xb/0x1a
>  [<c04b653b>] security_file_permission+0xc/0xd
>  [<c0466197>] do_sync_write+0x0/0x100
>  [<c046695d>] vfs_write+0x83/0xf6
>  [<c0466ea9>] sys_write+0x3c/0x63
>  [<c04038de>] syscall_call+0x7/0xb
>  [<c0600000>] print_cpu_info+0x27/0x92
>  =======================
> 
> Thanks,
> Ryo Tsuruta



Thread overview: 49+ messages
2008-04-01  9:22 [RFC][patch 0/11][CFQ-cgroup]Yet " Satoshi UCHIDA
2008-04-01  9:27 ` [RFC][patch 1/11][CFQ-cgroup] Add Configuration Satoshi UCHIDA
2008-04-01  9:30 ` [RFC][patch 2/11][CFQ-cgroup] Move header file Satoshi UCHIDA
2008-04-01  9:32 ` [RFC][patch 3/11][CFQ-cgroup] Introduce cgroup subsystem Satoshi UCHIDA
2008-04-02 22:41   ` Paul Menage
2008-04-03  2:31     ` Satoshi UCHIDA
2008-04-03  2:39       ` Li Zefan
2008-04-03 15:31       ` Paul Menage
2008-04-03  7:09     ` [RFC][v2][patch 0/12][CFQ-cgroup]Yet another I/O bandwidth controlling subsystem for CGroups based on CFQ Satoshi UCHIDA
2008-04-03  7:11       ` [PATCH] [RFC][patch 1/12][CFQ-cgroup] Add Configuration Satoshi UCHIDA
2008-04-03  7:12       ` [RFC][patch 2/11][CFQ-cgroup] Move header file Satoshi UCHIDA
2008-04-03  7:12       ` [RFC][patch 3/12][CFQ-cgroup] Introduce cgroup subsystem Satoshi UCHIDA
2008-04-03  7:13       ` [PATCH] [RFC][patch 4/12][CFQ-cgroup] Add ioprio entry Satoshi UCHIDA
2008-04-03  7:14       ` [RFC][patch 5/12][CFQ-cgroup] Create cfq driver unique data Satoshi UCHIDA
2008-04-03  7:14       ` [RFC][patch 6/12][CFQ-cgroup] Add cfq optional operation framework Satoshi UCHIDA
2008-04-03  7:15       ` [RFC][patch 7/12][CFQ-cgroup] Add new control layer over traditional control layer Satoshi UCHIDA
2008-04-03  7:15       ` [RFC][patch 8/12][CFQ-cgroup] Control cfq_data per driver Satoshi UCHIDA
2008-04-03  7:16       ` [RFC][patch 9/12][CFQ-cgroup] Control cfq_data per cgroup Satoshi UCHIDA
2008-04-03  7:16       ` [PATCH] [RFC][patch 10/12][CFQ-cgroup] Search cfq_data when not connected Satoshi UCHIDA
2008-04-03  7:17       ` [RFC][patch 11/12][CFQ-cgroup] Control service tree: Main functions Satoshi UCHIDA
2008-04-03  7:18       ` [RFC][patch 12/12][CFQ-cgroup] entry/remove active cfq_data Satoshi UCHIDA
2008-04-25  9:54       ` [RFC][v2][patch 0/12][CFQ-cgroup]Yet another I/O bandwidth controlling subsystem for CGroups based on CFQ Ryo Tsuruta
2008-04-25 21:37         ` [Devel] " Florian Westphal
2008-04-29  0:44           ` Ryo Tsuruta
2008-05-09 10:17         ` Satoshi UCHIDA
2008-05-12  3:10           ` Ryo Tsuruta
2008-05-12 15:33             ` Ryo Tsuruta
2008-05-22 13:04               ` Ryo Tsuruta
2008-05-23  2:53                 ` Satoshi UCHIDA
2008-05-26  2:46                   ` Ryo Tsuruta
2008-05-27 11:32                     ` Satoshi UCHIDA
2008-05-30 10:37                       ` Andrea Righi
2008-06-18  9:48                         ` Satoshi UCHIDA
2008-06-18 22:33                           ` Andrea Righi
2008-06-22 17:04                           ` Andrea Righi
2008-06-03  8:15                       ` Ryo Tsuruta
2008-06-26  4:49                         ` Satoshi UCHIDA [this message]
2008-04-01  9:33 ` [RFC][patch 4/11][CFQ-cgroup] Create cfq driver unique data Satoshi UCHIDA
2008-04-01  9:35 ` [RFC][patch 5/11][CFQ-cgroup] Add cfq optional operation framework Satoshi UCHIDA
2008-04-01  9:36 ` [RFC][patch 6/11][CFQ-cgroup] Add new control layer over traditional control layer Satoshi UCHIDA
2008-04-01  9:37 ` [RFC][patch 7/11][CFQ-cgroup] Control cfq_data per driver Satoshi UCHIDA
2008-04-01  9:38 ` [RFC][patch 8/11][CFQ-cgroup] Control cfq_data per cgroup Satoshi UCHIDA
2008-04-03 15:35   ` Paul Menage
2008-04-04  6:20     ` Satoshi UCHIDA
2008-04-04  9:00       ` Paul Menage
2008-04-04  9:46         ` Satoshi UCHIDA
2008-04-01  9:40 ` [RFC][patch 9/11][CFQ-cgroup] Search cfq_data when not connected Satoshi UCHIDA
2008-04-01  9:41 ` [RFC][patch 10/11][CFQ-cgroup] Control service tree: Main functions Satoshi UCHIDA
2008-04-01  9:42 ` [RFC][patch 11/11][CFQ-cgroup] entry/remove active cfq_data Satoshi UCHIDA
