From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, linux-kernel@vger.kernel.org
Cc: subhra.mazumdar@oracle.com, steven.sistare@oracle.com,
	dhaval.giani@oracle.com, rohit.k.jain@oracle.com,
	umgwanakikbuti@gmail.com, matt@codeblueprint.co.uk,
	riel@surriel.com, peterz@infradead.org
Subject: [RFC 10/11] sched/fair: Remove SIS_AGE/SIS_ONCE
Date: Wed, 30 May 2018 16:22:46 +0200
Message-ID: <20180530143106.396633083@infradead.org>
In-Reply-To: <20180530142236.667774973@infradead.org>

The new scheme is clearly better (XXX need !hackbench numbers), so clean
up the mess.

This leaves everything under SIS_PROP, which I think Facebook still
uses (to disable it), Rik?
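
For the record, the scan budget then lives in a single SIS_PROP block;
roughly (a simplified sketch using the names from the patch below, not the
literal code):

	if (sched_feat(SIS_PROP)) {
		/* predicted idle time left on this rq (aged/reset when busy) */
		avg_idle = this_rq->wake_avg;
		avg_cost = this_sd->avg_scan_cost + 1;

		/* scan effort proportional to the predicted idle time */
		span_avg = sd->span_weight * avg_idle;
		if (span_avg > sis_min_cores * avg_cost)
			nr = div_u64(span_avg, avg_cost);
		else
			nr = sis_min_cores;

		time = local_clock();
	}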

Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c     |   43 ++++++++++++++++++-------------------------
 kernel/sched/features.h |    3 ---
 2 files changed, 18 insertions(+), 28 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6432,16 +6432,18 @@ static int select_idle_cpu(struct task_s
 	int cpu, loops = 0, nr = INT_MAX;
 	struct sched_domain *this_sd;
 	u64 avg_cost, avg_idle;
-	u64 time, cost;
-	s64 delta;
+	struct rq *this_rq;
+	u64 time;
 
 	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
 	if (!this_sd)
 		return -1;
 
-	if (sched_feat(SIS_AGE)) {
+	if (sched_feat(SIS_PROP)) {
 		unsigned long now = jiffies;
-		struct rq *this_rq = this_rq();
+		u64 span_avg;
+
+		this_rq = this_rq();
 
 		/*
 		 * If we're busy, the assumption that the last idle period
@@ -6456,24 +6458,16 @@ static int select_idle_cpu(struct task_s
 		}
 
 		avg_idle = this_rq->wake_avg;
-	} else {
-		/*
-		 * Due to large variance we need a large fuzz factor; hackbench
-		 * in particularly is sensitive here.
-		 */
-		avg_idle = this_rq()->avg_idle / 512;
-	}
-	avg_cost = this_sd->avg_scan_cost + 1;
+		avg_cost = this_sd->avg_scan_cost + 1;
 
-	if (sched_feat(SIS_PROP)) {
-		u64 span_avg = sd->span_weight * avg_idle;
+		span_avg = sd->span_weight * avg_idle;
 		if (span_avg > sis_min_cores * avg_cost)
 			nr = div_u64(span_avg, avg_cost);
 		else
 			nr = sis_min_cores;
-	}
 
-	time = local_clock();
+		time = local_clock();
+	}
 
 #ifdef CONFIG_SCHED_SMT
 	if (sched_feat(SIS_FOLD) && static_branch_likely(&sched_smt_present) &&
@@ -6483,26 +6477,25 @@ static int select_idle_cpu(struct task_s
 #endif
 	cpu = __select_idle_cpu(p, sd, target, nr * sched_smt_weight, &loops);
 
-	time = local_clock() - time;
+	if (sched_feat(SIS_PROP)) {
+		s64 delta;
 
-	if (sched_feat(SIS_ONCE)) {
-		struct rq *this_rq = this_rq();
+		time = local_clock() - time;
 
 		/*
 		 * We need to consider the cost of all wakeups between
 		 * consecutive idle periods. We can only use the predicted
 		 * idle time once.
 		 */
-		if (this_rq->wake_avg > time)
+		if (avg_idle > time)
 			this_rq->wake_avg -= time;
 		else
 			this_rq->wake_avg = 0;
-	}
 
-	time = div_u64(time, loops);
-	cost = this_sd->avg_scan_cost;
-	delta = (s64)(time - cost) / 8;
-	this_sd->avg_scan_cost += delta;
+		time = div_u64(time, loops);
+		delta = (s64)(time - avg_cost) / 8;
+		this_sd->avg_scan_cost += delta;
+	}
 
 	return cpu;
 }
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -56,9 +56,6 @@ SCHED_FEAT(TTWU_QUEUE, true)
  * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
  */
 SCHED_FEAT(SIS_PROP, true)
-
-SCHED_FEAT(SIS_AGE, true)
-SCHED_FEAT(SIS_ONCE, true)
 SCHED_FEAT(SIS_FOLD, true)
 
 /*
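
A note on the accounting that is now also guarded by SIS_PROP: the scan's
wall clock time is subtracted from this_rq->wake_avg (or wake_avg is zeroed
when the scan took longer), so the predicted idle time is only used once
between consecutive idle periods, and this_sd->avg_scan_cost tracks the
per-cpu scan time as an exponentially weighted moving average with weight
1/8. Purely as illustration (a sketch, not kernel code; scan_cost_ewma() is
a made-up helper):

	/* what "delta = (time - avg_cost) / 8" amounts to */
	static inline u64 scan_cost_ewma(u64 avg, u64 sample)
	{
		s64 delta = (s64)(sample - avg) / 8;

		return avg + delta;	/* move 1/8th of the way toward sample */
	}

Here sample corresponds to time/loops (the per-cpu scan time) and avg to
this_sd->avg_scan_cost.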
