LKML Archive on lore.kernel.org
* [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock
@ 2020-04-06 12:10 Muchun Song
2020-04-06 18:17 ` bsegall
2020-04-21 13:52 ` Peter Zijlstra
0 siblings, 2 replies; 7+ messages in thread
From: Muchun Song @ 2020-04-06 12:10 UTC (permalink / raw)
To: mingo, peterz, juri.lelli, vincent.guittot
Cc: linux-kernel, dietmar.eggemann, rostedt, bsegall, mgorman, Muchun Song
Callers of walk_tg_tree_from() must hold the RCU read lock, but
unthrottle_cfs_rq() calls it without taking rcu_read_lock().
unthrottle_cfs_rq() is called from three places:
distribute_cfs_runtime(), unthrottle_offline_cfs_rqs() and
tg_set_cfs_bandwidth(). The first two already run under the RCU read
lock, but the last one does not. Fix this by taking rcu_read_lock()
in unthrottle_cfs_rq().
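For reference, a simplified sketch (not the actual mainline code) of the
unlocked path, i.e. how tg_set_cfs_bandwidth() reaches walk_tg_tree_from()
under the rq lock alone:

	/*
	 * Sketch only: bandwidth validation and cfs_b->lock handling in
	 * tg_set_cfs_bandwidth() are elided.
	 */
	static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
	{
		int i;

		for_each_online_cpu(i) {
			struct cfs_rq *cfs_rq = tg->cfs_rq[i];
			struct rq_flags rf;

			rq_lock_irq(cfs_rq->rq, &rf);	/* rq lock only, no rcu_read_lock() */
			if (cfs_rq->throttled)
				unthrottle_cfs_rq(cfs_rq);	/* -> walk_tg_tree_from() */
			rq_unlock_irq(cfs_rq->rq, &rf);
		}
		return 0;
	}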
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
kernel/sched/fair.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6f05843c76d7d..870853c47b63c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4782,7 +4782,9 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
raw_spin_unlock(&cfs_b->lock);
/* update hierarchical throttle state */
+ rcu_read_lock();
walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq);
+ rcu_read_unlock();
if (!cfs_rq->load.weight)
return;
--
2.11.0
* Re: [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock
2020-04-06 12:10 [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock Muchun Song
@ 2020-04-06 18:17 ` bsegall
2020-04-13 15:00 ` [External] " 宋牧春
2020-04-21 13:52 ` Peter Zijlstra
1 sibling, 1 reply; 7+ messages in thread
From: bsegall @ 2020-04-06 18:17 UTC (permalink / raw)
To: Muchun Song
Cc: mingo, peterz, juri.lelli, vincent.guittot, linux-kernel,
dietmar.eggemann, rostedt, bsegall, mgorman
Muchun Song <songmuchun@bytedance.com> writes:
> Callers of walk_tg_tree_from() must hold the RCU read lock, but
> unthrottle_cfs_rq() calls it without taking rcu_read_lock().
> unthrottle_cfs_rq() is called from three places:
> distribute_cfs_runtime(), unthrottle_offline_cfs_rqs() and
> tg_set_cfs_bandwidth(). The first two already run under the RCU read
> lock, but the last one does not. Fix this by taking rcu_read_lock()
> in unthrottle_cfs_rq().
It might be a tiny bit better to put it in tg_set_cfs_bandwidth()
instead, but the other two call sites only hold the RCU read lock
kinda by accident, so this is reasonable too.
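For illustration, that alternative placement would be roughly the following
(a sketch, untested, against the unthrottle loop in tg_set_cfs_bandwidth()):

	/* Sketch: take the RCU read lock at this call site instead. */
	rq_lock_irq(cfs_rq->rq, &rf);
	rcu_read_lock();
	if (cfs_rq->throttled)
		unthrottle_cfs_rq(cfs_rq);
	rcu_read_unlock();
	rq_unlock_irq(cfs_rq->rq, &rf);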
Reviewed-by: Ben Segall <bsegall@google.com>
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
> kernel/sched/fair.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6f05843c76d7d..870853c47b63c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4782,7 +4782,9 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
> raw_spin_unlock(&cfs_b->lock);
>
> /* update hierarchical throttle state */
> + rcu_read_lock();
> walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq);
> + rcu_read_unlock();
>
> if (!cfs_rq->load.weight)
> return;
* Re: [External] Re: [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock
2020-04-06 18:17 ` bsegall
@ 2020-04-13 15:00 ` 宋牧春
0 siblings, 0 replies; 7+ messages in thread
From: 宋牧春 @ 2020-04-13 15:00 UTC (permalink / raw)
To: Benjamin Segall, mingo, peterz, juri.lelli, Vincent Guittot
Cc: linux-kernel, dietmar.eggemann, rostedt, mgorman
On Tue, Apr 7, 2020 at 2:17 AM <bsegall@google.com> wrote:
>
> Muchun Song <songmuchun@bytedance.com> writes:
>
> > Callers of walk_tg_tree_from() must hold the RCU read lock, but
> > unthrottle_cfs_rq() calls it without taking rcu_read_lock().
> > unthrottle_cfs_rq() is called from three places:
> > distribute_cfs_runtime(), unthrottle_offline_cfs_rqs() and
> > tg_set_cfs_bandwidth(). The first two already run under the RCU read
> > lock, but the last one does not. Fix this by taking rcu_read_lock()
> > in unthrottle_cfs_rq().
>
> It might be a tiny bit better to put it in tg_set_cfs_bandwidth()
> instead, but the other two call sites only hold the RCU read lock
> kinda by accident, so this is reasonable too.
>
> Reviewed-by: Ben Segall <bsegall@google.com>
>
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> > kernel/sched/fair.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 6f05843c76d7d..870853c47b63c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -4782,7 +4782,9 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
> > raw_spin_unlock(&cfs_b->lock);
> >
> > /* update hierarchical throttle state */
> > + rcu_read_lock();
> > walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq);
> > + rcu_read_unlock();
> >
> > if (!cfs_rq->load.weight)
> > return;
Ping guys?
--
Yours,
Muchun
* Re: [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock
2020-04-06 12:10 [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock Muchun Song
2020-04-06 18:17 ` bsegall
@ 2020-04-21 13:52 ` Peter Zijlstra
2020-04-21 15:43 ` Paul E. McKenney
1 sibling, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2020-04-21 13:52 UTC (permalink / raw)
To: Muchun Song
Cc: mingo, juri.lelli, vincent.guittot, linux-kernel,
dietmar.eggemann, rostedt, bsegall, mgorman, paulmck, joel
On Mon, Apr 06, 2020 at 08:10:08PM +0800, Muchun Song wrote:
> The walk_tg_tree_from() caller must hold rcu_lock,
Not quite; with the RCU unification done 'recently' having preemption
disabled is sufficient. AFAICT preemption is disabled.
In fact; and I mentioned this to someone the other day, perhaps Joel; we
can go and delete a whole bunch of rcu_read_lock() from the scheduler --
basically undo all the work we did after RCU was split many years ago.
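To spell that out (a sketch, illustrative only): with the unified RCU,
each of the following is a valid read-side critical section as far as
synchronize_rcu() is concerned, which is why the rq-lock-holding scheduler
paths are already covered:

	static void rcu_reader_examples(struct rq *rq)
	{
		rcu_read_lock();		/* the classic reader */
		rcu_read_unlock();

		preempt_disable();		/* preemption disabled ... */
		preempt_enable();

		local_irq_disable();		/* ... or interrupts disabled ... */
		local_irq_enable();

		raw_spin_lock(&rq->lock);	/* ... or a raw spinlock held */
		raw_spin_unlock(&rq->lock);
	}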
* Re: [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock
2020-04-21 13:52 ` Peter Zijlstra
@ 2020-04-21 15:43 ` Paul E. McKenney
2020-04-21 16:24 ` Peter Zijlstra
0 siblings, 1 reply; 7+ messages in thread
From: Paul E. McKenney @ 2020-04-21 15:43 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Muchun Song, mingo, juri.lelli, vincent.guittot, linux-kernel,
dietmar.eggemann, rostedt, bsegall, mgorman, joel
On Tue, Apr 21, 2020 at 03:52:58PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 06, 2020 at 08:10:08PM +0800, Muchun Song wrote:
> > The walk_tg_tree_from() caller must hold rcu_lock,
>
> Not quite; with the RCU unification done 'recently' having preemption
> disabled is sufficient. AFAICT preemption is disabled.
>
> In fact; and I mentioned this to someone the other day, perhaps Joel; we
> can go and delete a whole bunch of rcu_read_lock() from the scheduler --
> basically undo all the work we did after RCU was split many years ago.
"If only I knew then what I know now..."
Then again, I suspect that we all have ample opportunity to use that
particular old phrase. ;-)
Thanx, Paul
* Re: [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock
2020-04-21 15:43 ` Paul E. McKenney
@ 2020-04-21 16:24 ` Peter Zijlstra
2020-04-21 17:39 ` Paul E. McKenney
0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2020-04-21 16:24 UTC (permalink / raw)
To: Paul E. McKenney
Cc: Muchun Song, mingo, juri.lelli, vincent.guittot, linux-kernel,
dietmar.eggemann, rostedt, bsegall, mgorman, joel
On Tue, Apr 21, 2020 at 08:43:12AM -0700, Paul E. McKenney wrote:
> On Tue, Apr 21, 2020 at 03:52:58PM +0200, Peter Zijlstra wrote:
> > On Mon, Apr 06, 2020 at 08:10:08PM +0800, Muchun Song wrote:
> > > The walk_tg_tree_from() caller must hold rcu_lock,
> >
> > Not quite; with the RCU unification done 'recently' having preemption
> > disabled is sufficient. AFAICT preemption is disabled.
> >
> > In fact; and I mentioned this to someone the other day, perhaps Joel; we
> > can go and delete a whole bunch of rcu_read_lock() from the scheduler --
> > basically undo all the work we did after RCU was split many years ago.
>
> "If only I knew then what I know now..."
>
> Then again, I suspect that we all have ample opportunity to use that
> particular old phrase. ;-)
Quite so; I'm just fearing that rcu-lockdep annotation stuff. IIRC that
doesn't (nor can it, in general) consider the implicit preempt-disable
from locks and such for !PREEMPT builds.
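That is, the annotation has to be told about the lock explicitly,
something like (hypothetical 'gp' pointer, sketch only):

	/*
	 * RCU-lockdep cannot infer the implicit preempt-disable from the
	 * lock on !PREEMPT builds, so the protection is spelled out:
	 */
	p = rcu_dereference_check(gp, lockdep_is_held(&rq->lock));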
* Re: [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock
2020-04-21 16:24 ` Peter Zijlstra
@ 2020-04-21 17:39 ` Paul E. McKenney
0 siblings, 0 replies; 7+ messages in thread
From: Paul E. McKenney @ 2020-04-21 17:39 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Muchun Song, mingo, juri.lelli, vincent.guittot, linux-kernel,
dietmar.eggemann, rostedt, bsegall, mgorman, joel
On Tue, Apr 21, 2020 at 06:24:52PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 21, 2020 at 08:43:12AM -0700, Paul E. McKenney wrote:
> > On Tue, Apr 21, 2020 at 03:52:58PM +0200, Peter Zijlstra wrote:
> > > On Mon, Apr 06, 2020 at 08:10:08PM +0800, Muchun Song wrote:
> > > > The walk_tg_tree_from() caller must hold rcu_lock,
> > >
> > > Not quite; with the RCU unification done 'recently' having preemption
> > > disabled is sufficient. AFAICT preemption is disabled.
> > >
> > > In fact; and I mentioned this to someone the other day, perhaps Joel; we
> > > can go and delete a whole bunch of rcu_read_lock() from the scheduler --
> > > basically undo all the work we did after RCU was split many years ago.
> >
> > "If only I knew then what I know now..."
> >
> > Then again, I suspect that we all have ample opportunity to use that
> > particular old phrase. ;-)
>
> Quite so; I'm just fearing that rcu-lockdep annotation stuff. IIRC that
> doesn't (nor can it, in general) consider the implicit preempt-disable
> from locks and such for !PREEMPT builds.
Heh! Now that might be me using that phrase again some time in the
future rather than you using it. ;-)
But what exactly are you looking for? After all, in !PREEMPT builds,
preemption is always disabled. It should not be too hard to make
something that looked at the state provided by DEBUG_ATOMIC_SLEEP when
selected, for example. Alternatively, there is always the option
of doing the testing in CONFIG_PREEMPT=y kernels.
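For example, something along these lines (a sketch; assumes
CONFIG_PREEMPT_COUNT, which DEBUG_ATOMIC_SLEEP selects, so that
preemptible() reports real state even on !PREEMPT):

	/* Hypothetical check at the top of walk_tg_tree_from(): */
	RCU_LOCKDEP_WARN(preemptible() && !rcu_read_lock_held(),
			 "walk_tg_tree_from() called without rcu_read_lock() or preemption disabled");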
But again, what exactly are you looking for?
Thanx, Paul
Thread overview: 7+ messages
2020-04-06 12:10 [PATCH] sched/fair: Fix call walk_tg_tree_from() without hold rcu_lock Muchun Song
2020-04-06 18:17 ` bsegall
2020-04-13 15:00 ` [External] " 宋牧春
2020-04-21 13:52 ` Peter Zijlstra
2020-04-21 15:43 ` Paul E. McKenney
2020-04-21 16:24 ` Peter Zijlstra
2020-04-21 17:39 ` Paul E. McKenney