LKML Archive
From: Dmitry Vyukov <>
To: "Theodore Ts'o" <>,
	Dmitry Vyukov <>,
	syzbot <>,
	Andreas Dilger <>,
	linux-fsdevel <>,
	LKML <>,
	syzkaller-bugs <>,
	Linus Torvalds <>,
	Al Viro <>
Subject: Re: possible deadlock in __generic_file_fsync
Date: Tue, 26 Mar 2019 11:32:48 +0100	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On Sat, Mar 23, 2019 at 2:56 PM Theodore Ts'o <> wrote:
> On Sat, Mar 23, 2019 at 08:16:36AM +0100, Dmitry Vyukov wrote:
> >
> > This is a lockdep-detected bug, but it is reproduced with very low
> > probability...
> >
> > I would expect that for lockdep it's only enough to trigger each path
> > involved in the deadlock once. Why is it so hard to reproduce then? Is
> > it something to improve in lockdep?
> It's a false positive report.  The problem is that without doing some
> fairly deep code analysis --- the kind that a human needs to do; this
> is not the kind of thing that ML and adjusting weights in a neural net
> can deal with --- a computer can't determine what the completion
> handler will need to do.
>
> The root cause here is that we have N CPU's that are trying to do
> direct I/O's, and on the very first DIO write for a fd, we need to
> create a workqueue.  (Why do we do it here?  Because most fd's don't
> do DIO, so we don't want to waste resources unnecessarily.  Why don't
> we fix it by adding a mutex?  Because it would slow down *all* Direct
> I/O operations just to suppress a rare, false positive, lockdep
> report.)
>
> The reason why it's so hard for lockdep to reproduce is because it's
> extremely rare for this situation to get hit.  When it does get hit,
> several CPU's will try to create the workqueue, and all but one will
> lose the cmpxchg, and so all but one will need to destroy the
> workqueue which they had just created.  Since the wq in question has
> never been used, it's safe to call destroy_workqueue(wq) while holding
> the inode mutex --- but lockdep doesn't know this.  As I pointed out
> in [1] one way to fix this is to create a new API and use it instead:
>    I_solemnly_swear_this_workqueue_has_never_been_used_please_destroy()
> [1]
>
> Unfortunately, this trades off fixing a very low probability lockdep
> false positive report that in practice only gets hit with Syzkaller,
> with making the code more fragile if the developer potentially uses
> the API incorrectly.
>
> As you can see from the date on the discussion, it's been over six
> months, and there still hasn't been a decision about the best way to
> fix this.  I think the real problem is that it's pretty low priority,
> since it's only something that Syzkaller notices.
>
> The reality is in a world without Syzkaller, maybe once a decade, it
> would get hit on a real-life workload, and so we'd have to close a bug
> report with "Won't Fix; Not reproducible", and add a note saying that
> it's a false positive lockdep report.  Syzkaller is adding stress to
> the system by demanding perfection from lockdep, and it isn't that,
> for better or for worse.  ¯\_(ツ)_/¯  The question is what is the best
> way to annotate this as a false positive, so we can suppress the
> report, either in Lockdep or in Syzkaller.

Hi Ted,

Thanks for the analysis.

So we can classify the reason for wrong bisection result as "too hard
to trigger bug".

Re lockdep perfection, do you see any better alternative than what is
happening now?
One alternative is obviously to stop testing the kernel, which would
remove all related stress from all involved parties and remove any
perfection/quality requirement from everything :)
But that does not look like a better path forward.

Re I_solemnly_swear_this_workqueue_has_never_been_used_please_destroy,
I wonder if it's possible to automatically note the fact that the
workqueue was used. It would not make the code more fragile and would
not lead to incorrect uses of the API. It could slightly move the
situation from "reporting false positives" to "not reporting true
positives", but all bugs should still be detected eventually (we just
need any test where a single item was submitted to the queue). And in
my experience not reporting false positives is much more important
than reporting 100% of true positives.
Something along the lines of:

on submission of an item:

#ifdef CONFIG_LOCKDEP
  WRITE_ONCE(wq->was_used, 1);
#endif

in flush:

#ifdef CONFIG_LOCKDEP
  if (READ_ONCE(wq->was_used))
    lock_map_acquire(&wq->lockdep_map);
#endif


Thread overview: 12+ messages
2018-10-16 14:25 syzbot
2018-10-19  2:10 ` syzbot
2018-10-20 16:13 ` syzbot
2019-03-22 21:28 ` syzbot
2019-03-23  7:16   ` Dmitry Vyukov
2019-03-23 13:56     ` Theodore Ts'o
2019-03-26 10:32       ` Dmitry Vyukov [this message]
2020-03-08  5:52         ` [PATCH] fs/direct-io.c: avoid workqueue allocation race Eric Biggers
2020-03-08 23:12           ` Dave Chinner
2020-03-09  1:24             ` Eric Biggers
2020-03-10 16:27               ` Darrick J. Wong
2020-03-10 22:22                 ` Dave Chinner
