LKML Archive on lore.kernel.org
* [PATCH] sched: avoid scale real weight down to zero
@ 2020-03-05  2:57 王贇
  2020-03-06 15:06 ` Vincent Guittot
  2020-03-16  6:33 ` 王贇
  0 siblings, 2 replies; 4+ messages in thread
From: 王贇 @ 2020-03-05  2:57 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	open list:SCHEDULER, Vincent Guittot

During our testing, we found a case in which shares no longer
work correctly. The cgroup topology is:

  /sys/fs/cgroup/cpu/A		(shares=102400)
  /sys/fs/cgroup/cpu/A/B	(shares=2)
  /sys/fs/cgroup/cpu/A/B/C	(shares=1024)

  /sys/fs/cgroup/cpu/D		(shares=1024)
  /sys/fs/cgroup/cpu/D/E	(shares=1024)
  /sys/fs/cgroup/cpu/D/E/F	(shares=1024)

The same benchmark is running in groups C and F, no other tasks are
running, and the benchmark is capable of consuming all the CPUs.

We expect group C to win more CPU resources, since it can enjoy
all the shares of group A, but it is F that wins much more.

The reason is that group B has shares set to 2; since
A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
A->cfs_rq.load.weight becomes very small.

And in calc_group_shares() we calculate shares as:

  load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
  shares = (tg_shares * load) / tg_weight;

Since 'cfs_rq->load.weight' is too small, the load becomes 0
after scaling down; although 'tg_shares' is 102400, the shares of
the se which stands for group A on the root cfs_rq become 2.

Meanwhile the weight of D's se on the root cfs_rq is far bigger
than 2, so it wins the battle.
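
For illustration, a rough walk-through of the numbers (assuming 8 online
CPUs and SCHED_FIXEDPOINT_SHIFT == 10 on 64-bit; the exact per-CPU split
depends on load balancing, so the values are approximate):

  B->shares (kernel internal) = scale_load(2) = 2 << 10        = 2048
  A->cfs_rq.load.weight      ~= B->shares / nr_cpus = 2048 / 8 = 256
  scale_load_down(256)        = 256 >> 10                      = 0
  load   = max(0, cfs_rq->avg.load_avg)                       ~= 0
  shares = tg_shares * load / tg_weight -> clamped up to MIN_SHARES (2)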

Thus when scale_load_down() scales the real weight down to 0, it no
longer tells the real story: the caller gets the wrong information
and the calculation becomes buggy.

This patch adds a check in scale_load_down(), so the real weight
will be >= MIN_SHARES after scaling; with the patch applied, group C
wins as expected.

Cc: Ben Segall <bsegall@google.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
---
 kernel/sched/sched.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2a0caf394dd4..75c283f22256 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
 #ifdef CONFIG_64BIT
 # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
-# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
+# define scale_load_down(w) \
+({ \
+	unsigned long __w = (w); \
+	if (__w) \
+		__w = max(MIN_SHARES, __w >> SCHED_FIXEDPOINT_SHIFT); \
+	__w; \
+})
 #else
 # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)		(w)
-- 
2.14.4.44.g2045bb6
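
A quick userspace sketch to see the difference between the old and the new
macro (the shift value, MIN_SHARES and the sample weights below are assumed
values matching the 64-bit case above, not taken from an actual kernel
build):

  #include <stdio.h>

  #define SCHED_FIXEDPOINT_SHIFT  10
  #define MIN_SHARES              2UL

  /* Old behaviour: any weight below 1024 collapses to 0. */
  static unsigned long scale_load_down_old(unsigned long w)
  {
          return w >> SCHED_FIXEDPOINT_SHIFT;
  }

  /* New behaviour: a non-zero weight never scales below MIN_SHARES. */
  static unsigned long scale_load_down_new(unsigned long w)
  {
          unsigned long s = w >> SCHED_FIXEDPOINT_SHIFT;

          if (!w)
                  return 0;
          return s > MIN_SHARES ? s : MIN_SHARES;
  }

  int main(void)
  {
          /* 256 is the tiny A->cfs_rq.load.weight from the example above. */
          unsigned long weights[] = { 0, 256, 2048, 1024UL << 10 };
          unsigned int i;

          for (i = 0; i < sizeof(weights) / sizeof(weights[0]); i++)
                  printf("w=%-8lu old=%-5lu new=%lu\n", weights[i],
                         scale_load_down_old(weights[i]),
                         scale_load_down_new(weights[i]));
          return 0;
  }

For the weight of 256 the old macro yields 0 while the new one yields 2,
which is exactly the clamp the patch introduces; weights of 2048 and above
are unaffected.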


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH] sched: avoid scale real weight down to zero
  2020-03-05  2:57 [PATCH] sched: avoid scale real weight down to zero 王贇
@ 2020-03-06 15:06 ` Vincent Guittot
  2020-03-11  3:12   ` 王贇
  2020-03-16  6:33 ` 王贇
  1 sibling, 1 reply; 4+ messages in thread
From: Vincent Guittot @ 2020-03-06 15:06 UTC (permalink / raw)
  To: 王贇
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, open list:SCHEDULER

On Thu, 5 Mar 2020 at 03:57, 王贇 <yun.wang@linux.alibaba.com> wrote:
>
> During our testing, we found a case in which shares no longer
> work correctly. The cgroup topology is:
>
>   /sys/fs/cgroup/cpu/A          (shares=102400)
>   /sys/fs/cgroup/cpu/A/B        (shares=2)
>   /sys/fs/cgroup/cpu/A/B/C      (shares=1024)
>
>   /sys/fs/cgroup/cpu/D          (shares=1024)
>   /sys/fs/cgroup/cpu/D/E        (shares=1024)
>   /sys/fs/cgroup/cpu/D/E/F      (shares=1024)
>
> The same benchmark is running in groups C and F, no other tasks are
> running, and the benchmark is capable of consuming all the CPUs.
>
> We expect group C to win more CPU resources, since it can enjoy
> all the shares of group A, but it is F that wins much more.
>
> The reason is that group B has shares set to 2; since
> A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
> A->cfs_rq.load.weight becomes very small.
>
> And in calc_group_shares() we calculate shares as:
>
>   load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>   shares = (tg_shares * load) / tg_weight;
>
> Since 'cfs_rq->load.weight' is too small, the load becomes 0
> after scaling down; although 'tg_shares' is 102400, the shares of
> the se which stands for group A on the root cfs_rq become 2.
>
> Meanwhile the weight of D's se on the root cfs_rq is far bigger
> than 2, so it wins the battle.
>
> Thus when scale_load_down() scales the real weight down to 0, it no
> longer tells the real story: the caller gets the wrong information
> and the calculation becomes buggy.
>
> This patch adds a check in scale_load_down(), so the real weight
> will be >= MIN_SHARES after scaling; with the patch applied, group C
> wins as expected.
>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/sched.h | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 2a0caf394dd4..75c283f22256 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
>  #ifdef CONFIG_64BIT
>  # define NICE_0_LOAD_SHIFT     (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
>  # define scale_load(w)         ((w) << SCHED_FIXEDPOINT_SHIFT)
> -# define scale_load_down(w)    ((w) >> SCHED_FIXEDPOINT_SHIFT)
> +# define scale_load_down(w) \
> +({ \
> +       unsigned long __w = (w); \
> +       if (__w) \
> +               __w = max(MIN_SHARES, __w >> SCHED_FIXEDPOINT_SHIFT); \
> +       __w; \
> +})
>  #else
>  # define NICE_0_LOAD_SHIFT     (SCHED_FIXEDPOINT_SHIFT)
>  # define scale_load(w)         (w)
> --
> 2.14.4.44.g2045bb6
>

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH] sched: avoid scale real weight down to zero
  2020-03-06 15:06 ` Vincent Guittot
@ 2020-03-11  3:12   ` 王贇
  0 siblings, 0 replies; 4+ messages in thread
From: 王贇 @ 2020-03-11  3:12 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, open list:SCHEDULER

On 2020/3/6 11:06 PM, Vincent Guittot wrote:
[snip]
>>
>> Thus when scale_load_down() scales the real weight down to 0, it no
>> longer tells the real story: the caller gets the wrong information
>> and the calculation becomes buggy.
>>
>> This patch adds a check in scale_load_down(), so the real weight
>> will be >= MIN_SHARES after scaling; with the patch applied, group C
>> wins as expected.
>>
>> Cc: Ben Segall <bsegall@google.com>
>> Cc: Vincent Guittot <vincent.guittot@linaro.org>
>> Suggested-by: Peter Zijlstra <peterz@infradead.org>
>> Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
> 
> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

Thanks for the review :-)

Hi Peter, should we apply this one?

Regards,
Michael Wang


>> ---
>>  kernel/sched/sched.h | 8 +++++++-
>>  1 file changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> index 2a0caf394dd4..75c283f22256 100644
>> --- a/kernel/sched/sched.h
>> +++ b/kernel/sched/sched.h
>> @@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
>>  #ifdef CONFIG_64BIT
>>  # define NICE_0_LOAD_SHIFT     (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
>>  # define scale_load(w)         ((w) << SCHED_FIXEDPOINT_SHIFT)
>> -# define scale_load_down(w)    ((w) >> SCHED_FIXEDPOINT_SHIFT)
>> +# define scale_load_down(w) \
>> +({ \
>> +       unsigned long __w = (w); \
>> +       if (__w) \
>> +               __w = max(MIN_SHARES, __w >> SCHED_FIXEDPOINT_SHIFT); \
>> +       __w; \
>> +})
>>  #else
>>  # define NICE_0_LOAD_SHIFT     (SCHED_FIXEDPOINT_SHIFT)
>>  # define scale_load(w)         (w)
>> --
>> 2.14.4.44.g2045bb6
>>

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH] sched: avoid scale real weight down to zero
  2020-03-05  2:57 [PATCH] sched: avoid scale real weight down to zero 王贇
  2020-03-06 15:06 ` Vincent Guittot
@ 2020-03-16  6:33 ` 王贇
  1 sibling, 0 replies; 4+ messages in thread
From: 王贇 @ 2020-03-16  6:33 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	open list:SCHEDULER, Vincent Guittot

Hi Peter,

I've done more complicated testing with shares set to 2 and everything looks fine.

Should we apply this, or is there any concern?

Regards,
Michael Wang

On 2020/3/5 10:57 AM, 王贇 wrote:
> During our testing, we found a case in which shares no longer
> work correctly. The cgroup topology is:
> 
>   /sys/fs/cgroup/cpu/A          (shares=102400)
>   /sys/fs/cgroup/cpu/A/B        (shares=2)
>   /sys/fs/cgroup/cpu/A/B/C      (shares=1024)
> 
>   /sys/fs/cgroup/cpu/D          (shares=1024)
>   /sys/fs/cgroup/cpu/D/E        (shares=1024)
>   /sys/fs/cgroup/cpu/D/E/F      (shares=1024)
> 
> The same benchmark is running in groups C and F, no other tasks are
> running, and the benchmark is capable of consuming all the CPUs.
> 
> We expect group C to win more CPU resources, since it can enjoy
> all the shares of group A, but it is F that wins much more.
> 
> The reason is that group B has shares set to 2; since
> A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
> A->cfs_rq.load.weight becomes very small.
> 
> And in calc_group_shares() we calculate shares as:
> 
>   load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>   shares = (tg_shares * load) / tg_weight;
> 
> Since 'cfs_rq->load.weight' is too small, the load becomes 0
> after scaling down; although 'tg_shares' is 102400, the shares of
> the se which stands for group A on the root cfs_rq become 2.
> 
> Meanwhile the weight of D's se on the root cfs_rq is far bigger
> than 2, so it wins the battle.
> 
> Thus when scale_load_down() scales the real weight down to 0, it no
> longer tells the real story: the caller gets the wrong information
> and the calculation becomes buggy.
> 
> This patch adds a check in scale_load_down(), so the real weight
> will be >= MIN_SHARES after scaling; with the patch applied, group C
> wins as expected.
> 
> Cc: Ben Segall <bsegall@google.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
> ---
>  kernel/sched/sched.h | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 2a0caf394dd4..75c283f22256 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
>  #ifdef CONFIG_64BIT
>  # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
>  # define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
> -# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
> +# define scale_load_down(w) \
> +({ \
> +	unsigned long __w = (w); \
> +	if (__w) \
> +		__w = max(MIN_SHARES, __w >> SCHED_FIXEDPOINT_SHIFT); \
> +	__w; \
> +})
>  #else
>  # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
>  # define scale_load(w)		(w)
> 

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2020-03-16  6:33 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-05  2:57 [PATCH] sched: avoid scale real weight down to zero 王贇
2020-03-06 15:06 ` Vincent Guittot
2020-03-11  3:12   ` 王贇
2020-03-16  6:33 ` 王贇
