LKML Archive on lore.kernel.org
From: Vivek Goyal <vgoyal@redhat.com>
To: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Cc: Jens Axboe <axboe@kernel.dk>, Shaohua Li <shaohua.li@intel.com>,
	lkml <linux-kernel@vger.kernel.org>,
	Chad Talbott <ctalbott@google.com>,
	Divyesh Shah <dpshah@google.com>
Subject: Re: [PATCH 0/6 v4] cfq-iosched: Introduce CFQ group hierarchical scheduling and "use_hierarchy" interface
Date: Thu, 10 Feb 2011 13:30:53 -0500
Message-ID: <20110210183053.GD2524@redhat.com>
In-Reply-To: <4D5397A9.6040404@cn.fujitsu.com>

On Thu, Feb 10, 2011 at 03:45:45PM +0800, Gui Jianfeng wrote:
> Hi
> 
> Previously, I posted a patchset to add support of CFQ group hierarchical scheduling
> in the way that it puts all CFQ queues in a hidden group and schedules with other 
> CFQ group under their parent. The patchset is available here,
> http://lkml.org/lkml/2010/8/30/30
> 
> Vivek thought this approach wasn't very intuitive, and that we should treat CFQ queues
> and groups at the same level. Here is the new approach for hierarchical 
> scheduling, based on Vivek's suggestion. The biggest change in CFQ is that
> it gets rid of the cfq_slice_offset logic and uses vdisktime for CFQ
> queue scheduling, just as CFQ groups do. But I still give cfqq a jump 
> in vdisktime based on ioprio; thanks to Vivek for pointing this out. Now CFQ 
> queues and CFQ groups use the same scheduling algorithm. 
> 
> A "use_hierarchy" interface is now added to switch between hierarchical mode
> and flat mode. It works like the memcg interface.
> 
> --
> V3 -> V4 Changes:
> - Take io class into account when calculating the boost value.
> - Refine the vtime boosting logic as per Vivek's suggestion.

Hi Gui,

What testing did you do to make sure that this vtime boosting logic is working
and is a good replacement for the slice_offset() logic for cfqq?

Secondly, did you get a chance to look at Chad's patch for keeping track
of the previously assigned vdisktime and of generations? I think
his patch is going to conflict with yours, so one of you will have to
make adjustments. I think the boost logic and the generation-tracking
logic can be combined:

	if (entity->generation_number > cfqd->active_generation)
		use_boost_logic;
	else
		use_previously_assigned_vdisktime;


That way, if the generation has changed, we really don't have a valid
vdisktime, and we can use the boost logic to come up with a differential
vdisktime; if the generation has not changed, we can continue to use
the previous vdisktime.

Thanks
Vivek

Thread overview: 7+ messages
2011-02-10  7:45 Gui Jianfeng
2011-02-10 18:30 ` Vivek Goyal [this message]
2011-02-11  9:58   ` Gui Jianfeng
2011-02-12  6:08   ` Gui Jianfeng
2011-02-14 18:06     ` Vivek Goyal
2011-02-15  3:13       ` Gui Jianfeng
2011-02-15 14:29         ` Vivek Goyal
