LKML Archive on lore.kernel.org
From: "Satoshi UCHIDA" <s-uchida@ap.jp.nec.com>
To: "'Ryo Tsuruta'" <ryov@valinux.co.jp>, <vtaras@openvz.org>
Cc: <linux-kernel@vger.kernel.org>,
<containers@lists.linux-foundation.org>, <axboe@kernel.dk>,
<tom-sugawara@ap.jp.nec.com>, <m-takahashi@ex.jp.nec.com>,
<devel@openvz.org>
Subject: RE: [RFC][v2][patch 0/12][CFQ-cgroup]Yet another I/O bandwidth controlling subsystem for CGroups based on CFQ
Date: Fri, 9 May 2008 19:17:44 +0900 [thread overview]
Message-ID: <004901c8b1bd$ec22fb70$c468f250$@jp.nec.com> (raw)
In-Reply-To: <20080425.185444.115924172.ryov@valinux.co.jp>
Hi, Ryo-San.
Thank you for your test results.
In tests #2 and #3, did you use direct writes?
I suspect you used non-direct (buffered) write I/O, which goes through the page cache.
Both my controller and Vasily's extend the CFQ I/O scheduler, so both inherit CFQ's characteristics.
The current CFQ I/O scheduler cannot control non-direct write I/Os, and the main cause is the page cache:
for buffered writes, the bios are created by kernel daemons such as pdflush or kswapd rather than by the task that issued the write.
As a result, most non-direct write I/Os end up being charged to a single cgroup (probably the root cgroup).
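(To make the distinction concrete, here is a minimal userspace sketch; this is my own illustration, not code from the benchmark. An O_DIRECT write is submitted by the writing task itself, while a plain buffered write only dirties the page cache and is written back later by a kernel thread.)

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Direct write: bypasses the page cache, so the bio is built in the
     * context of this task and CFQ sees this task's io_context/cgroup. */
    static int direct_write(const char *path)
    {
            void *buf;
            int fd = open(path, O_WRONLY | O_DIRECT);

            if (fd < 0 || posix_memalign(&buf, 4096, 4096))
                    return -1;
            memset(buf, 0, 4096);     /* O_DIRECT needs aligned buffer and size */
            write(fd, buf, 4096);
            close(fd);
            free(buf);
            return 0;
    }

    /* Buffered write: only dirties the page cache; the actual disk I/O is
     * issued later by pdflush/kswapd, so CFQ cannot attribute it to this task. */
    static int buffered_write(const char *path)
    {
            char buf[4096] = { 0 };
            int fd = open(path, O_WRONLY);

            if (fd < 0)
                    return -1;
            write(fd, buf, sizeof(buf));
            close(fd);
            return 0;
    }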
We think this problem should be resolved by fixing the cache system: specifically, I/Os issued when dirty cache pages are written back should be charged to the I/O context of the task that originally wrote the data into the cache.
This resolution has a problem of its own:
* Who is the owner of a cache page?
A cached page can be reused by many tasks, so it is difficult to decide on a single owner.
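(A purely hypothetical sketch of one possible policy, "the first task to dirty a page owns it"; the names below are made up for illustration and are not existing kernel interfaces. Even this simple rule shows the difficulty: any later task that reuses or redirties the page is ignored.)

    /* Hypothetical only: record the io_context of the first task that dirties
     * a page, so that later writeback bios could be charged to it.
     * struct io_context is real; everything else here is invented. */
    struct io_context;

    struct page_io_owner {
            struct io_context *ioc;        /* first dirtier, or NULL */
    };

    static void page_note_dirtier(struct page_io_owner *owner,
                                  struct io_context *cur_ioc)
    {
            if (!owner->ioc)
                    owner->ioc = cur_ioc;  /* first writer wins ... */
            /* ... and every other task that redirties or reuses the page is
             * silently ignored, which is exactly the ownership problem. */
    }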
In test #3, the system seems to have controlled I/O only between the read partitions (sdc2 and sdc3).
So your test shows that our controller can control I/O correctly, apart from the problem described above.
Meanwhile, I'm very interested in the result of your test #2.
With non-direct write I/O, performance is influenced by task scheduling and by the order in which dirty pages are written out, so buffered writes should come out roughly fair under the current default task scheduler.
However, your result is almost fair with Vasily's controller, whereas it is not fair with ours.
I'm wondering whether this is an accidental result or a typical one.
Thanks,
Satoshi UCHIDA.
> -----Original Message-----
> From: Ryo Tsuruta [mailto:ryov@valinux.co.jp]
> Sent: Friday, April 25, 2008 6:55 PM
> To: s-uchida@ap.jp.nec.com; vtaras@openvz.org
> Cc: linux-kernel@vger.kernel.org;
> containers@lists.linux-foundation.org; axboe@kernel.dk;
> tom-sugawara@ap.jp.nec.com; m-takahashi@ex.jp.nec.com; devel@openvz.org
> Subject: Re: [RFC][v2][patch 0/12][CFQ-cgroup]Yet another I/O bandwidth
> controlling subsystem for CGroups based on CFQ
>
> Hi,
>
> I report benchmark results of the following I/O bandwidth controllers.
>
> From: Vasily Tarasov <vtaras@openvz.org>
> Subject: [RFC][PATCH 0/9] cgroups: block: cfq: I/O bandwidth
> controlling subsystem for CGroups based on CFQ
> Date: Fri, 15 Feb 2008 01:53:34 -0500
>
> From: "Satoshi UCHIDA" <s-uchida@ap.jp.nec.com>
> Subject: [RFC][v2][patch 0/12][CFQ-cgroup]Yet another I/O bandwidth
> controlling subsystem for CGroups based on CFQ
> Date: Thu, 3 Apr 2008 16:09:12 +0900
>
> The test procedure is as follows:
> o Prepare 3 partitions sdc2, sdc3 and sdc4.
> o Run 100 processes issuing random direct I/O with 4KB data on each
>   partition.
> o Run 3 tests:
> #1 issuing read I/O only.
> #2 issuing write I/O only.
> #3 sdc2 and sdc3 are read, sdc4 is write.
> o Count the number of I/Os completed in 60 seconds.
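
(For my own understanding, a minimal sketch of what I assume one of those 100 processes looks like; the device path, partition size, and the use of pread are my assumptions, not your actual harness.)

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
            const off_t dev_size = 1LL << 30;      /* assumed ~1GB partition */
            long blocks = dev_size / 4096;
            void *buf;
            long count = 0;
            int fd = open("/dev/sdc2", O_RDONLY | O_DIRECT);
            time_t end = time(NULL) + 60;

            if (fd < 0 || posix_memalign(&buf, 4096, 4096))
                    return 1;
            srand(getpid());
            while (time(NULL) < end) {
                    off_t off = (off_t)(rand() % blocks) * 4096;
                    if (pread(fd, buf, 4096, off) == 4096)
                            count++;               /* count completed I/Os */
            }
            printf("%ld\n", count);
            return 0;
    }
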
>
> Unfortunately, neither bandwidth controller worked as I expected.
> In test #3, the write I/O ate up the bandwidth regardless of the
> specified priority level.
>
> Vasily's scheduler
> The number of I/Os (percentage to total I/Os)
>
> ----------------------------------------------------------------------
> | partition     | sdc2         | sdc3         | sdc4         | total  |
> | priority      | 7(highest)   | 4            | 0(lowest)    | I/Os   |
> |---------------+--------------+--------------+--------------+--------|
> | #1 read       | 3620(35.6%)  | 3474(34.2%)  | 3065(30.2%)  | 10159  |
> | #2 write      | 21985(36.6%) | 19274(32.1%) | 18856(31.4%) | 60115  |
> | #3 read&write | 5571( 7.5%)  | 3253( 4.4%)  | 64977(88.0%) | 73801  |
> ----------------------------------------------------------------------
>
> Satoshi's scheduler
> The number of I/Os (percentage to total I/O)
>
> ----------------------------------------------------------------------
> | partition     | sdc2         | sdc3         | sdc4         | total  |
> | priority      | 0(highest)   | 4            | 7(lowest)    | I/Os   |
> |---------------+--------------+--------------+--------------+--------|
> | #1 read       | 4523(47.8%)  | 3733(39.5%)  | 1204(12.7%)  | 9460   |
> | #2 write      | 65202(59.0%) | 35603(32.2%) | 9673( 8.8%)  | 110478 |
> | #3 read&write | 5328(23.0%)  | 4153(17.9%)  | 13694(59.1%) | 23175  |
> ----------------------------------------------------------------------
>
> I'd like to see other benchmark results if anyone has.
>
> Thanks,
> Ryo Tsuruta
Thread overview: 49+ messages
2008-04-01 9:22 [RFC][patch 0/11][CFQ-cgroup]Yet " Satoshi UCHIDA
2008-04-01 9:27 ` [RFC][patch 1/11][CFQ-cgroup] Add Configuration Satoshi UCHIDA
2008-04-01 9:30 ` [RFC][patch 2/11][CFQ-cgroup] Move header file Satoshi UCHIDA
2008-04-01 9:32 ` [RFC][patch 3/11][CFQ-cgroup] Introduce cgroup subsystem Satoshi UCHIDA
2008-04-02 22:41 ` Paul Menage
2008-04-03 2:31 ` Satoshi UCHIDA
2008-04-03 2:39 ` Li Zefan
2008-04-03 15:31 ` Paul Menage
2008-04-03 7:09 ` [RFC][v2][patch 0/12][CFQ-cgroup]Yet another I/O bandwidth controlling subsystem for CGroups based on CFQ Satoshi UCHIDA
2008-04-03 7:11 ` [PATCH] [RFC][patch 1/12][CFQ-cgroup] Add Configuration Satoshi UCHIDA
2008-04-03 7:12 ` [RFC][patch 2/11][CFQ-cgroup] Move header file Satoshi UCHIDA
2008-04-03 7:12 ` [RFC][patch 3/12][CFQ-cgroup] Introduce cgroup subsystem Satoshi UCHIDA
2008-04-03 7:13 ` [PATCH] [RFC][patch 4/12][CFQ-cgroup] Add ioprio entry Satoshi UCHIDA
2008-04-03 7:14 ` [RFC][patch 5/12][CFQ-cgroup] Create cfq driver unique data Satoshi UCHIDA
2008-04-03 7:14 ` [RFC][patch 6/12][CFQ-cgroup] Add cfq optional operation framework Satoshi UCHIDA
2008-04-03 7:15 ` [RFC][patch 7/12][CFQ-cgroup] Add new control layer over traditional control layer Satoshi UCHIDA
2008-04-03 7:15 ` [RFC][patch 8/12][CFQ-cgroup] Control cfq_data per driver Satoshi UCHIDA
2008-04-03 7:16 ` [RFC][patch 9/12][CFQ-cgroup] Control cfq_data per cgroup Satoshi UCHIDA
2008-04-03 7:16 ` [PATCH] [RFC][patch 10/12][CFQ-cgroup] Search cfq_data when not connected Satoshi UCHIDA
2008-04-03 7:17 ` [RFC][patch 11/12][CFQ-cgroup] Control service tree: Main functions Satoshi UCHIDA
2008-04-03 7:18 ` [RFC][patch 12/12][CFQ-cgroup] entry/remove active cfq_data Satoshi UCHIDA
2008-04-25 9:54 ` [RFC][v2][patch 0/12][CFQ-cgroup]Yet another I/O bandwidth controlling subsystem for CGroups based on CFQ Ryo Tsuruta
2008-04-25 21:37 ` [Devel] " Florian Westphal
2008-04-29 0:44 ` Ryo Tsuruta
2008-05-09 10:17 ` Satoshi UCHIDA [this message]
2008-05-12 3:10 ` Ryo Tsuruta
2008-05-12 15:33 ` Ryo Tsuruta
2008-05-22 13:04 ` Ryo Tsuruta
2008-05-23 2:53 ` Satoshi UCHIDA
2008-05-26 2:46 ` Ryo Tsuruta
2008-05-27 11:32 ` Satoshi UCHIDA
2008-05-30 10:37 ` Andrea Righi
2008-06-18 9:48 ` Satoshi UCHIDA
2008-06-18 22:33 ` Andrea Righi
2008-06-22 17:04 ` Andrea Righi
2008-06-03 8:15 ` Ryo Tsuruta
2008-06-26 4:49 ` Satoshi UCHIDA
2008-04-01 9:33 ` [RFC][patch 4/11][CFQ-cgroup] Create cfq driver unique data Satoshi UCHIDA
2008-04-01 9:35 ` [RFC][patch 5/11][CFQ-cgroup] Add cfq optional operation framework Satoshi UCHIDA
2008-04-01 9:36 ` [RFC][patch 6/11][CFQ-cgroup] Add new control layer over traditional control layer Satoshi UCHIDA
2008-04-01 9:37 ` [RFC][patch 7/11][CFQ-cgroup] Control cfq_data per driver Satoshi UCHIDA
2008-04-01 9:38 ` [RFC][patch 8/11][CFQ-cgroup] Control cfq_data per cgroup Satoshi UCHIDA
2008-04-03 15:35 ` Paul Menage
2008-04-04 6:20 ` Satoshi UCHIDA
2008-04-04 9:00 ` Paul Menage
2008-04-04 9:46 ` Satoshi UCHIDA
2008-04-01 9:40 ` [RFC][patch 9/11][CFQ-cgroup] Search cfq_data when not connected Satoshi UCHIDA
2008-04-01 9:41 ` [RFC][patch 10/11][CFQ-cgroup] Control service tree: Main functions Satoshi UCHIDA
2008-04-01 9:42 ` [RFC][patch 11/11][CFQ-cgroup] entry/remove active cfq_data Satoshi UCHIDA