LKML Archive on
From: Kirill Tkhai <>
To: Dmitry Vyukov <>
Cc: syzbot <>,
	Christian Brauner <>,
	David Miller <>,
	David Ahern <>, Florian Westphal <>,
	Jiri Benc <>, LKML <>,
	Xin Long <>, netdev <>,
	syzkaller-bugs <>,
	Vladislav Yasevich <>
Subject: Re: INFO: task hung in ip6gre_exit_batch_net
Date: Fri, 8 Jun 2018 11:43:39 +0300	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On 08.06.2018 11:38, Dmitry Vyukov wrote:
> On Fri, Jun 8, 2018 at 10:31 AM, Kirill Tkhai <> wrote:
>>>>>>>>>> Hi, Dmitry!
>>>>>>>>>> On 04.06.2018 18:22, Dmitry Vyukov wrote:
>>>>>>>>>>> On Mon, Jun 4, 2018 at 5:03 PM, syzbot
>>>>>>>>>>> <> wrote:
>>>>>>>>>>>> Hello,
>>>>>>>>>>>> syzbot found the following crash on:
>>>>>>>>>>>> HEAD commit:    bc2dbc5420e8 Merge branch 'akpm' (patches from Andrew)
>>>>>>>>>>>> git tree:       upstream
>>>>>>>>>>>> console output:
>>>>>>>>>>>> kernel config:
>>>>>>>>>>>> dashboard link:
>>>>>>>>>>>> compiler:       gcc (GCC) 8.0.1 20180413 (experimental)
>>>>>>>>>>>> Unfortunately, I don't have any reproducer for this crash yet.
>>>>>>>>>>>> IMPORTANT: if you fix the bug, please add the following tag to the commit:
>>>>>>>>>>>> Reported-by:
>>>>>>>>>>> Another hang on rtnl lock:
>>>>>>>>>>> #syz dup: INFO: task hung in netdev_run_todo
>>>>>>>>>>> May be related to "unregister_netdevice: waiting for DEV to become free":
>>>>>>>>> netdev_wait_allrefs does not hold rtnl lock during waiting, so it must
>>>>>>>>> be something different.
>>>>>>>>>>> Any other explanations for massive hangs on rtnl lock for minutes?
>>>>>>>>>> To exclude the situation where a task exits with rtnl_mutex held:
>>>>>>>>>> would the pr_warn() from print_held_locks_bug() be included in the console output
>>>>>>>>>> if it appeared?
>>>>>>>>> Yes, everything containing "WARNING:" is detected as bug.
>>>>>>>> OK, then a dead task not releasing the lock is excluded.
>>>>>>>> One more assumption: someone corrupted memory around rtnl_mutex, so it looks locked.
>>>>>>>> (I read the lockdep "(rtnl_mutex){+.+.}" print in the initial message as "nobody owns rtnl_mutex").
>>>>>>>> A crash dump of the VM might help here.
>>>>>>> I can't find any legend for these +'s and .'s, but {+.+.} is present
>>>>>>> in large numbers in just about any task-hung report for different
>>>>>>> mutexes, so I would not expect that it means corruption.
>>>>>>> There are dozens of known corruptions that syzkaller can trigger. But
>>>>>>> usually they are reliably caught by KASAN. If any of them led
>>>>>>> to silent memory corruption, we would get dozens of assorted crashes
>>>>>>> throughout the kernel. We've seen that at some points, but not
>>>>>>> recently. So I would assume that memory is not corrupted in all these
>>>>>>> cases:
>>>>>> This BUG clarifies the {+.+.}:
>>>>>> 4 locks held by kworker/0:145/381:
>>>>>>  #0:  ((wq_completion)"hwsim_wq"){+.+.}, at: [<000000003f9487f0>] work_static include/linux/workqueue.h:198 [inline]
>>>>>>  #0:  ((wq_completion)"hwsim_wq"){+.+.}, at: [<000000003f9487f0>] set_work_data kernel/workqueue.c:619 [inline]
>>>>>>  #0:  ((wq_completion)"hwsim_wq"){+.+.}, at: [<000000003f9487f0>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
>>>>>>  #0:  ((wq_completion)"hwsim_wq"){+.+.}, at: [<000000003f9487f0>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
>>>>>>  #1:  ((work_completion)(&data->destroy_work)){+.+.}, at: [<00000000bbdd2115>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
>>>>>>  #2:  (rtnl_mutex){+.+.}, at: [<000000009c9d14f8>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74
>>>>>>  #3:  (rcu_sched_state.exp_mutex){+.+.}, at: [<000000001ba1a807>] exp_funnel_lock kernel/rcu/tree_exp.h:272 [inline]
>>>>>>  #3:  (rcu_sched_state.exp_mutex){+.+.}, at: [<000000001ba1a807>] _synchronize_rcu_expedited.constprop.72+0x9fa/0xac0 kernel/rcu/tree_exp.h:596
>>>>>> There we have rtnl_mutex locked, and the {..} annotation looks the same as above.
>>>>>> It's definitely locked, since there is one more lock taken after it.
>>>>>> This BUG happens because there are many rtnl_mutex waiters while the owner
>>>>>> is synchronizing RCU. That is rather clear to me, in contrast to this topic's hang.
>>>>> You mean that it's not hung, but rather needs more than 2 minutes to
>>>>> complete, right?
>>>> Yeah, I think this is possible; I've seen situations like that.
>>>> Suppose synchronize_rcu_expedited() takes X seconds to execute. Then
>>>> it takes just 120/X calls of "this function under rtnl_mutex" to make
>>>> a soft lockup of someone else who wants the mutex too.
>>>> Also, even though CFS is a fair scheduler, when the calls are made
>>>> from a workqueue, every work sleeps, so every work is executed in a
>>>> separate worker task. Every worker has its own time slice. This
>>>> increases the probability that these tasks take CPU time before the
>>>> task named in the header of the hang report.
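The 120/X estimate above can be sketched in a few lines (my own back-of-the-envelope sketch; 120 s is the default hung-task watchdog timeout that the "more than 2 minutes" refers to):

```python
import math

HUNG_TASK_TIMEOUT = 120  # seconds; default kernel.hung_task_timeout_secs

def holders_needed(x_seconds):
    """How many consecutive rtnl_mutex holders, each spending x_seconds in
    synchronize_rcu_expedited(), keep a single waiter blocked past the
    hung-task watchdog timeout."""
    return math.ceil(HUNG_TASK_TIMEOUT / x_seconds)

print(holders_needed(2.0))   # 2 s per call  -> 60 holders
print(holders_needed(0.5))   # 0.5 s per call -> 240 holders
```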
>>> OK, let's stick with this theory for now. Looking at the crash frequencies here:
>>> I can actually believe that these are just flakes due to overly slow execution.
>>> I've noted that we need to either reduce load and/or increase timeouts:
>> Hm, I'm not sure we should hide such situations from syzbot,
>> because load like this may occur in real life on a real workload.
>> The statistics may help us understand whether rtnl_mutex already
>> needs a redesign. Also, these hangs may happen in a place that can
>> be rewritten without rtnl_mutex, so the reports focus our
>> attention on it.
> If somebody wants to act on these reports:
> it's even better. The point is that testing must not have false
> positives, one way or another. If we do nothing then syzbot will
> slowly discover all 250 usages of rtnl_lock and produce unique bugs
> for them. Every one of these bug reports will need to be handled by
> somebody.
> Does somebody want to act on these and improve rtnl performance in
> foreseeable future?

I just analyzed this question a little, and it looks like the
preparations alone for improving the performance will take a very long time.
And the performance won't change until the preparations are finished.
So this does not look like "the foreseeable future".
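As a toy userspace illustration of the scenario discussed above (my own construction using Python threads, not kernel code): many queued work items serialize on one mutex, each sleeping inside the critical section as a stand-in for synchronize_rcu_expedited(), so the total delay seen by a waiter grows linearly with the number of works.

```python
import threading
import time

rtnl_mutex = threading.Lock()   # stand-in for the kernel's rtnl_mutex
HOLD = 0.02                     # seconds; stand-in for one synchronize_rcu_expedited()
N_WORKS = 20                    # queued destroy_work-style items

def destroy_work():
    with rtnl_mutex:            # rtnl_lock()
        time.sleep(HOLD)        # blocks while "synchronizing RCU"

workers = [threading.Thread(target=destroy_work) for _ in range(N_WORKS)]
start = time.monotonic()
for w in workers:
    w.start()
for w in workers:
    w.join()
elapsed = time.monotonic() - start

# The mutex serializes all holders, so a task queued behind them can be
# blocked for roughly N_WORKS * HOLD seconds in total.
print(f"{N_WORKS} serialized holders took ~{elapsed:.2f}s "
      f"(lower bound {N_WORKS * HOLD:.2f}s)")
```

With real numbers from the thread (seconds per expedited grace period, hundreds of queued works), the same linear growth is what pushes a waiter past the 2-minute watchdog.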



Thread overview: 13+ messages
2018-06-04 15:03 syzbot
2018-06-04 15:22 ` Dmitry Vyukov
2018-06-05  9:03   ` Kirill Tkhai
2018-06-05  9:36     ` Dmitry Vyukov
2018-06-05 13:55       ` Kirill Tkhai
2018-06-07 18:23         ` Dmitry Vyukov
2018-06-07 18:54           ` Kirill Tkhai
2018-06-07 19:03             ` Dmitry Vyukov
2018-06-07 19:59               ` Kirill Tkhai
2018-06-08  8:18                 ` Dmitry Vyukov
2018-06-08  8:31                   ` Kirill Tkhai
2018-06-08  8:38                     ` Dmitry Vyukov
2018-06-08  8:43                       ` Kirill Tkhai [this message]
