LKML Archive on lore.kernel.org
From: Barry Song <21cnbao@gmail.com>
To: "Wanghui (John)" <john.wanghui@huawei.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	LKML <linux-kernel@vger.kernel.org>,
	Tvrtko Ursulin <tvrtko.ursulin@intel.com>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: Re: [RFC 1/8] sched: Add nice value change notifier
Date: Wed, 6 Oct 2021 20:58:15 +1300
Message-ID: <CAGsJ_4w5Y4=v93YmTrXJ6hDgjKshxiAZ-ox-Nz_7uRwe4ECtdw@mail.gmail.com>
In-Reply-To: <562d45e1-4a27-3252-f615-3ab1ef531f2b@huawei.com>

On Wed, Oct 6, 2021 at 5:15 PM Wanghui (John) <john.wanghui@huawei.com> wrote:
>
> Hi Tvrtko
>
> On 2021/10/4 22:36, Tvrtko Ursulin wrote:
> >   void set_user_nice(struct task_struct *p, long nice)
> >   {
> >       bool queued, running;
> > -     int old_prio;
> > +     int old_prio, ret;
> >       struct rq_flags rf;
> >       struct rq *rq;
> >
> > @@ -6915,6 +6947,9 @@ void set_user_nice(struct task_struct *p, long nice)
> >
> >   out_unlock:
> >       task_rq_unlock(rq, p, &rf);
> > +
> > +     ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
> > +     WARN_ON_ONCE(ret != NOTIFY_DONE);
> >   }
> How about adding a new "io_nice" to task_struct, and moving the call chain to
> sched_setattr/getattr? There are two benefits:

We already have an ionice for the block I/O scheduler, so this new io_nice
can hardly be generic to all I/O. The patchset seems to be linking the
process's nice with the GPU's scheduler, and to some extent that makes more
sense than having a common ionice: we have a lot of I/O devices in a system,
and we wouldn't know which I/O an ionice field in task_struct should be
applied to.

Maybe we could have an ionice dedicated to the GPU, just like the ionice
consumed by the CFQ bio/request scheduler.
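
To illustrate, a minimal sketch of how a GPU driver might hook the notifier
chain added by the quoted patch, assuming the chain (or a registration
wrapper) is visible to drivers; gpu_update_task_priority() is a hypothetical
helper, and per the quoted hunk the chain passes the new nice value as the
'action' argument and the task_struct as 'data':

#include <linux/notifier.h>
#include <linux/sched.h>

/* Declared by the RFC patch in kernel/sched/core.c (assumption). */
extern struct atomic_notifier_head user_nice_notifier_list;

static int gpu_nice_cb(struct notifier_block *nb, unsigned long nice,
                       void *data)
{
        struct task_struct *p = data;

        /* gpu_update_task_priority() is hypothetical: remap contexts
         * owned by @p to a new GPU scheduling priority. */
        gpu_update_task_priority(p, (long)nice);

        return NOTIFY_DONE;
}

static struct notifier_block gpu_nice_nb = {
        .notifier_call = gpu_nice_cb,
};

/* Called from driver init, e.g. module_init(). */
static int gpu_nice_register(void)
{
        return atomic_notifier_chain_register(&user_nice_notifier_list,
                                              &gpu_nice_nb);
}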

>
> 1. Decoupled from the fair scheduler. In our use case, high-priority tasks
>     often use the rt scheduler.

Is it possible to tell the GPU about RT priorities in the same way we are
telling it about CFS nice?
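
As a sketch of what that could look like inside a notifier callback:
rt_task(), task_nice() and p->rt_priority already exist on task_struct,
while rt_prio_to_gpu() is hypothetical and nice_to_gpu_prio() is sketched
further down:

#include <linux/sched.h>
#include <linux/sched/rt.h>

        /* Honour the RT scheduling class as well as CFS nice. */
        if (rt_task(p))
                gpu_prio = rt_prio_to_gpu(p->rt_priority); /* hypothetical */
        else
                gpu_prio = nice_to_gpu_prio(task_nice(p)); /* sketched below */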

> 2. The range of values doesn't need to be bound to -20~19 or 0~139.
>

We could build a mapping between process priorities and GPU priorities.
It doesn't seem like a big deal.
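
For example, a minimal sketch of such a mapping, assuming a hypothetical
GPU priority range of -1023..1023 (the actual range would be
driver-specific):

/* Map CFS nice (-20..19) linearly onto a hypothetical GPU priority
 * range of -1023..1023; nice -20 (highest) maps to GPU priority 1023. */
static int nice_to_gpu_prio(long nice)
{
        return (int)(-nice * 1023 / 20);
}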

Thanks
barry

Thread overview: 23+ messages
2021-10-04 14:36 [RFC v2 0/8] CPU + GPU synchronised priority scheduling Tvrtko Ursulin
2021-10-04 14:36 ` [RFC 1/8] sched: Add nice value change notifier Tvrtko Ursulin
2021-10-06  4:10   ` Wanghui (John)
2021-10-06  7:58     ` Barry Song [this message]
2021-10-06 13:44       ` Tvrtko Ursulin
2021-10-06 20:21         ` Barry Song
2021-10-07  8:50           ` Tvrtko Ursulin
2021-10-07  9:09             ` Tvrtko Ursulin
2021-10-07 10:00               ` Barry Song
2021-10-04 14:36 ` [RFC 2/8] drm/i915: Explicitly track DRM clients Tvrtko Ursulin
2021-10-04 14:36 ` [RFC 3/8] drm/i915: Make GEM contexts " Tvrtko Ursulin
2021-10-04 14:36 ` [RFC 4/8] drm/i915: Track all user contexts per client Tvrtko Ursulin
2021-10-04 14:36 ` [RFC 5/8] drm/i915: Keep track of registered clients indexed by task struct Tvrtko Ursulin
2021-10-04 14:36 ` [RFC 6/8] drm/i915: Make some recently added vfuncs use full scheduling attribute Tvrtko Ursulin
2021-10-06 17:12   ` Matthew Brost
2021-10-06 19:06     ` Tvrtko Ursulin
2021-10-13 12:01     ` [Intel-gfx] " Daniel Vetter
2021-10-13 15:50       ` Tvrtko Ursulin
2021-10-04 14:36 ` [RFC 7/8] drm/i915: Inherit process nice for context scheduling priority Tvrtko Ursulin
2021-10-06 17:16   ` [Intel-gfx] " Matthew Brost
2021-10-06 17:24   ` Matthew Brost
2021-10-06 18:42     ` Tvrtko Ursulin
2021-10-04 14:36 ` [RFC 8/8] drm/i915: Connect with the process nice change notifier Tvrtko Ursulin
