LKML Archive on lore.kernel.org
From: Claudio Scordino <claudio@evidence.eu.com>
To: linux-kernel@vger.kernel.org
Cc: Claudio Scordino <claudio@evidence.eu.com>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Patrick Bellasi <patrick.bellasi@arm.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Luca Abeni <luca.abeni@santannapisa.it>,
	Joel Fernandes <joelaf@google.com>,
	linux-pm@vger.kernel.org
Subject: [RFC PATCH] sched/cpufreq/schedutil: handling urgent frequency requests
Date: Mon,  7 May 2018 16:43:35 +0200
Message-ID: <1525704215-8683-1-git-send-email-claudio@evidence.eu.com>

At OSPM, the issue of urgent CPU frequency requests arriving while a
frequency switch is already in progress was mentioned.

Besides the various issues (the physical time needed to switch frequency,
on-going kthread activity, etc.), one (minor) issue is the kernel
"forgetting" such a request, thus waiting until the next switch time to
recompute the needed frequency and behave accordingly.

This patch makes the kthread serve any urgent request that occurred during
the previous frequency switch. It introduces a specific flag, set only
when the SCHED_DEADLINE scheduling class increases the CPU utilization,
with the aim of decreasing the likelihood of deadline misses; a condensed
sketch of the mechanism is given below, before the actual diff.

Indeed, some preliminary tests in critical conditions (i.e.
SCHED_DEADLINE tasks with short periods) have shown reductions of more
than 10% in the average number of deadline misses. On the other hand,
the increase in energy consumption when running SCHED_DEADLINE tasks
(not yet measured) is likely not negligible (especially in critical
scenarios such as "ramp up" utilizations).

The patch is meant as a follow-up to the discussion at OSPM.
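
For reference, here is a condensed, stand-alone sketch of the mechanism
implemented by the diff below. The struct and function names mirror the
patch, but the types and helpers are simplified stubs rather than the real
scheduler/cpufreq API:

#include <stdbool.h>

/* Condensed view of struct sugov_policy, keeping only the fields involved. */
struct sugov_policy_sketch {
	bool work_in_progress;	 /* a frequency switch is already being served */
	bool urgent_freq_update; /* new flag: set on a SCHED_DEADLINE utilization increase */
	unsigned int next_freq;
};

/* Stub standing in for __cpufreq_driver_target(); not the real driver call. */
static void do_frequency_switch(struct sugov_policy_sketch *sg, unsigned int freq)
{
	(void)sg;
	(void)freq;
	/* the actual (slow) frequency transition would happen here */
}

/*
 * Sketch of the modified sugov_work(): after each transition, check whether
 * an urgent request arrived in the meantime and, if so, switch again right
 * away instead of waiting for the next update time.
 */
static void sugov_work_sketch(struct sugov_policy_sketch *sg)
{
	do {
		sg->urgent_freq_update = false;
		do_frequency_switch(sg, sg->next_freq);
	} while (sg->urgent_freq_update);

	sg->work_in_progress = false;
}

In the actual patch the loop runs under sg_policy->work_lock, and the flag
is set from ignore_dl_rate_limit() whenever cpu_util_dl() exceeds the
utilization seen at the last update.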

Signed-off-by: Claudio Scordino <claudio@evidence.eu.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Luca Abeni <luca.abeni@santannapisa.it>
Cc: Joel Fernandes <joelaf@google.com>
Cc: linux-pm@vger.kernel.org
---
 kernel/sched/cpufreq_schedutil.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index d2c6083..4de06b0 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -41,6 +41,7 @@ struct sugov_policy {
 	bool			work_in_progress;
 
 	bool			need_freq_update;
+	bool			urgent_freq_update;
 };
 
 struct sugov_cpu {
@@ -92,6 +93,14 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
 		return false;
 
+	/*
+	 * Continue computing the new frequency. In case of work_in_progress,
+	 * the kthread will resched a change once the current transition is
+	 * finished.
+	 */
+	if (sg_policy->urgent_freq_update)
+		return true;
+
 	if (sg_policy->work_in_progress)
 		return false;
 
@@ -121,6 +130,9 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 	sg_policy->next_freq = next_freq;
 	sg_policy->last_freq_update_time = time;
 
+	if (sg_policy->work_in_progress)
+		return;
+
 	if (policy->fast_switch_enabled) {
 		next_freq = cpufreq_driver_fast_switch(policy, next_freq);
 		if (!next_freq)
@@ -274,7 +286,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
 static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu, struct sugov_policy *sg_policy)
 {
 	if (cpu_util_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->util_dl)
-		sg_policy->need_freq_update = true;
+		sg_policy->urgent_freq_update = true;
 }
 
 static void sugov_update_single(struct update_util_data *hook, u64 time,
@@ -383,8 +395,11 @@ static void sugov_work(struct kthread_work *work)
 	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
 
 	mutex_lock(&sg_policy->work_lock);
-	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
+	do {
+		sg_policy->urgent_freq_update = false;
+		__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
 				CPUFREQ_RELATION_L);
+	} while (sg_policy->urgent_freq_update);
 	mutex_unlock(&sg_policy->work_lock);
 
 	sg_policy->work_in_progress = false;
@@ -673,6 +688,7 @@ static int sugov_start(struct cpufreq_policy *policy)
 	sg_policy->next_freq			= UINT_MAX;
 	sg_policy->work_in_progress		= false;
 	sg_policy->need_freq_update		= false;
+	sg_policy->urgent_freq_update		= false;
 	sg_policy->cached_raw_freq		= 0;
 
 	for_each_cpu(cpu, policy->cpus) {
-- 
2.7.4

Thread overview: 25+ messages
2018-05-07 14:43 Claudio Scordino [this message]
2018-05-08  6:54 ` [RFC PATCH] sched/cpufreq/schedutil: handling urgent frequency requests Viresh Kumar
2018-05-08 12:32   ` Claudio Scordino
2018-05-08 20:40     ` Rafael J. Wysocki
2018-05-09  4:54   ` Joel Fernandes
2018-05-09  6:45     ` Juri Lelli
2018-05-09  6:54       ` Viresh Kumar
2018-05-09  7:01         ` Joel Fernandes
2018-05-09  8:05           ` Rafael J. Wysocki
2018-05-09  8:22             ` Joel Fernandes
2018-05-09  8:41               ` Rafael J. Wysocki
2018-05-09  8:23             ` Juri Lelli
2018-05-09  8:25               ` Rafael J. Wysocki
2018-05-09  8:41                 ` Juri Lelli
2018-05-09  6:55       ` Joel Fernandes
2018-05-09  8:06       ` Joel Fernandes
2018-05-09  8:30         ` Rafael J. Wysocki
2018-05-09  8:40           ` Viresh Kumar
2018-05-09  9:02             ` Joel Fernandes
2018-05-09  9:28               ` Viresh Kumar
2018-05-09 10:34                 ` Joel Fernandes
2018-05-09  8:51           ` Joel Fernandes
2018-05-09  9:06             ` Rafael J. Wysocki
2018-05-09  9:39               ` Joel Fernandes
2018-05-09  9:48                 ` Rafael J. Wysocki
