LKML Archive on lore.kernel.org
* [PATCH RFC] schedutil: Address the r/w ordering race in kthread

From: Joel Fernandes (Google) @ 2018-05-22 23:50 UTC
To: linux-kernel
Cc: Joel Fernandes (Google),
Rafael J . Wysocki, Peter Zijlstra, Ingo Molnar, Patrick Bellasi,
Juri Lelli, Luca Abeni, Todd Kjos, claudio, kernel-team,
linux-pm
Currently there is a race in schedutil code for slow-switch single-CPU
systems. Fix it by enforcing that the write to work_in_progress happens
before the read of next_freq.
Kthread                                 Sched update

sugov_work()                            sugov_update_single()

lock();
// The CPU is free to rearrange the
// below two in any order, so it may
// clear the flag first and then read
// next freq. Let's assume it does.
work_in_progress = false

                                        if (work_in_progress)
                                                return;

                                        sg_policy->next_freq = 0;
freq = sg_policy->next_freq;
                                        sg_policy->next_freq = real-freq;
unlock();

The kthread can thus read the intermediate value (0 above) of next_freq
instead of the real frequency.
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
CC: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Ingo Molnar <mingo@redhat.com>
CC: Patrick Bellasi <patrick.bellasi@arm.com>
CC: Juri Lelli <juri.lelli@redhat.com>
Cc: Luca Abeni <luca.abeni@santannapisa.it>
CC: Todd Kjos <tkjos@google.com>
CC: claudio@evidence.eu.com
CC: kernel-team@android.com
CC: linux-pm@vger.kernel.org
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
I split this into a separate patch, because this race can also happen in
mainline.
kernel/sched/cpufreq_schedutil.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 5c482ec38610..ce7749da7a44 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -401,6 +401,13 @@ static void sugov_work(struct kthread_work *work)
*/
raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
freq = sg_policy->next_freq;
+
+ /*
+ * sugov_update_single can access work_in_progress without update_lock,
+ * make sure next_freq is read before work_in_progress is set.
+ */
+ smp_mb();
+
sg_policy->work_in_progress = false;
raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
--
2.17.0.441.gb46fe60e1d-goog
* Re: [PATCH RFC] schedutil: Address the r/w ordering race in kthread

From: Joel Fernandes @ 2018-05-23 0:18 UTC
To: Joel Fernandes (Google)
Cc: linux-kernel, Rafael J . Wysocki, Peter Zijlstra, Ingo Molnar,
Patrick Bellasi, Juri Lelli, Luca Abeni, Todd Kjos, claudio,
kernel-team, linux-pm
On Tue, May 22, 2018 at 04:50:28PM -0700, Joel Fernandes (Google) wrote:
> Currently there is a race in schedutil code for slow-switch single-CPU
> systems. Fix it by enforcing that the write to work_in_progress happens
> before the read of next_freq.
Aargh, s/before/after/.
The commit log has the above issue but the code is OK. Should I resend
this patch, or are there any additional comments? thanks!
- Joel
[..]
* Re: [PATCH RFC] schedutil: Address the r/w ordering race in kthread

From: Juri Lelli @ 2018-05-23 6:47 UTC
To: Joel Fernandes (Google)
Cc: linux-kernel, Joel Fernandes (Google),
Rafael J . Wysocki, Peter Zijlstra, Ingo Molnar, Patrick Bellasi,
Luca Abeni, Todd Kjos, claudio, kernel-team, linux-pm
Hi Joel,
On 22/05/18 16:50, Joel Fernandes (Google) wrote:
> Currently there is a race in schedutil code for slow-switch single-CPU
> systems. Fix it by enforcing that the write to work_in_progress happens
> before the read of next_freq.
>
> Kthread                                 Sched update
>
> sugov_work()                            sugov_update_single()
>
> lock();
> // The CPU is free to rearrange the
> // below two in any order, so it may
> // clear the flag first and then read
> // next freq. Let's assume it does.
> work_in_progress = false
>
>                                         if (work_in_progress)
>                                                 return;
>
>                                         sg_policy->next_freq = 0;
> freq = sg_policy->next_freq;
>                                         sg_policy->next_freq = real-freq;
> unlock();
>
> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
> CC: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> CC: Peter Zijlstra <peterz@infradead.org>
> CC: Ingo Molnar <mingo@redhat.com>
> CC: Patrick Bellasi <patrick.bellasi@arm.com>
> CC: Juri Lelli <juri.lelli@redhat.com>
> Cc: Luca Abeni <luca.abeni@santannapisa.it>
> CC: Todd Kjos <tkjos@google.com>
> CC: claudio@evidence.eu.com
> CC: kernel-team@android.com
> CC: linux-pm@vger.kernel.org
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> ---
> I split this into a separate patch, because this race can also happen in
> mainline.
>
> kernel/sched/cpufreq_schedutil.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 5c482ec38610..ce7749da7a44 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -401,6 +401,13 @@ static void sugov_work(struct kthread_work *work)
> */
> raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> freq = sg_policy->next_freq;
> +
> + /*
> + * sugov_update_single can access work_in_progress without update_lock,
> + * make sure next_freq is read before work_in_progress is set.
s/set/reset/
> + */
> + smp_mb();
> +
Also, doesn't this need a corresponding barrier (I guess in
sugov_should_update_freq)? That being a wmb and this a rmb?
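Something like this, maybe (an untested sketch; the placement in
sugov_should_update_freq is a guess, and since both sides are a load
followed by a store, I suspect both barriers would actually need to be
full smp_mb()s rather than a wmb/rmb pair):

	if (sg_policy->work_in_progress)
		return false;

	/*
	 * Pair with the smp_mb() in sugov_work(): order the read of
	 * work_in_progress above against the later write of next_freq
	 * in sugov_update_commit().
	 */
	smp_mb();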
Best,
- Juri
* Re: [PATCH RFC] schedutil: Address the r/w ordering race in kthread

From: Rafael J. Wysocki @ 2018-05-23 8:23 UTC
To: Joel Fernandes (Google)
Cc: Linux Kernel Mailing List, Joel Fernandes (Google),
Rafael J . Wysocki, Peter Zijlstra, Ingo Molnar, Patrick Bellasi,
Juri Lelli, Luca Abeni, Todd Kjos, Claudio Scordino, kernel-team,
Linux PM
On Wed, May 23, 2018 at 1:50 AM, Joel Fernandes (Google)
<joelaf@google.com> wrote:
> Currently there is a race in schedutil code for slow-switch single-CPU
> systems. Fix it by enforcing that the write to work_in_progress happens
> before the read of next_freq.
>
> Kthread                                 Sched update
>
> sugov_work()                            sugov_update_single()
>
> lock();
> // The CPU is free to rearrange the
> // below two in any order, so it may
> // clear the flag first and then read
> // next freq. Let's assume it does.
> work_in_progress = false
>
>                                         if (work_in_progress)
>                                                 return;
>
>                                         sg_policy->next_freq = 0;
> freq = sg_policy->next_freq;
>                                         sg_policy->next_freq = real-freq;
> unlock();
>
> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
> CC: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> CC: Peter Zijlstra <peterz@infradead.org>
> CC: Ingo Molnar <mingo@redhat.com>
> CC: Patrick Bellasi <patrick.bellasi@arm.com>
> CC: Juri Lelli <juri.lelli@redhat.com>
> Cc: Luca Abeni <luca.abeni@santannapisa.it>
> CC: Todd Kjos <tkjos@google.com>
> CC: claudio@evidence.eu.com
> CC: kernel-team@android.com
> CC: linux-pm@vger.kernel.org
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> ---
> I split this into a separate patch, because this race can also happen in
> mainline.
>
> kernel/sched/cpufreq_schedutil.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 5c482ec38610..ce7749da7a44 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -401,6 +401,13 @@ static void sugov_work(struct kthread_work *work)
> */
> raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> freq = sg_policy->next_freq;
> +
> + /*
> + * sugov_update_single can access work_in_progress without update_lock,
> + * make sure next_freq is read before work_in_progress is set.
> + */
> + smp_mb();
> +
This requires a corresponding barrier somewhere else.
> sg_policy->work_in_progress = false;
> raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
>
> --
Also, as I said, I would actually prefer to use the spinlock in the
one-CPU case when the kthread is used.
I'll have a patch for that shortly.
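The idea would be something along these lines (just a sketch of the
direction, not the actual patch):

static void sugov_update_single(struct update_util_data *hook, u64 time,
				unsigned int flags)
{
	struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
	bool slow_switch = !sg_policy->policy->fast_switch_enabled;

	/*
	 * On slow-switch systems, take update_lock so that
	 * work_in_progress and next_freq are accessed under the same
	 * lock as in sugov_work() and no barriers are needed.
	 */
	if (slow_switch)
		raw_spin_lock(&sg_policy->update_lock);

	/* ... existing single-CPU update logic ... */

	if (slow_switch)
		raw_spin_unlock(&sg_policy->update_lock);
}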