Linux-Fsdevel Archive
From: Dave Chinner <>
To: Eric Biggers <>
Subject: Re: [PATCH v2] fs/direct-io: fix one-time init of ->s_dio_done_wq
Date: Sat, 18 Jul 2020 10:15:36 +1000	[thread overview]
Message-ID: <20200718001536.GB2005@dread.disaster.area> (raw)
In-Reply-To: <>

On Thu, Jul 16, 2020 at 10:05:10PM -0700, Eric Biggers wrote:
> From: Eric Biggers <>
> Correctly implement the "one-time" init pattern for ->s_dio_done_wq.
> This fixes the following issues:
> - The LKMM doesn't guarantee that the workqueue will be seen initialized
>   before being used, if another CPU allocated it.  With regards to
>   specific CPU architectures, this is true on at least Alpha, but it may
>   be true on other architectures too if the internal implementation of
>   workqueues causes use of the workqueue to involve a control
>   dependency.  (There doesn't appear to be a control dependency
>   currently, but it's hard to tell and it could change in the future.)
> - The preliminary checks for sb->s_dio_done_wq are a data race, since
>   they do a plain load of a concurrently modified variable.  According
>   to the C standard, this is undefined behavior.  In practice, the
>   kernel does sometimes assume that particular data races are okay,
>   but these rules are undocumented and not uniformly agreed upon, so
>   it's best to avoid cases where they might come into play.
> Following the guidance for one-time init I've proposed at
> replace it with the simplest implementation that is guaranteed to be
> correct while still achieving the following properties:
>     - Doesn't make direct I/O users contend on a mutex in the fast path.
>     - Doesn't allocate the workqueue when it will never be used.
> Fixes: 7b7a8665edd8 ("direct-io: Implement generic deferred AIO completions")
> Signed-off-by: Eric Biggers <>
> ---
> v2: new implementation using smp_load_acquire() + smp_store_release()
>     and a mutex.

A mutex?

That's over-engineered premature optimisation - the allocation path
is a slow path that will only ever be hit on the first few direct
IOs, and only if an app manages to synchronise its first ever
concurrent DIOs to different files perfectly. There is zero need to
"optimise" the code like this.

I've already suggested that we move this whole dynamic
initialisation code out of the direct IO path altogether, for good
reason: all of this goes away and we don't have to care about
optimising it for performance at all.

We have two options as I see it: always allocate the workqueue on
direct IO capable filesystems in their ->fill_super() method, or
allocate it on the first open(O_DIRECT), where we check if O_DIRECT
is supported by the filesystem.

i.e. do_dentry_open() does this:

        /* NB: we're sure to have correct a_ops only after f_op->open */
        if (f->f_flags & O_DIRECT) {
                if (!f->f_mapping->a_ops || !f->f_mapping->a_ops->direct_IO)
                        return -EINVAL;

Allocate the work queue there, and we don't need to care about how
fast or slow setting up the workqueue is and so there is zero need
to optimise it for speed.
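For concreteness, that second option might look something like the
sketch below (illustrative only, not a tested patch - the error
handling and exact placement are assumptions, though
sb_init_dio_done_wq() is the existing helper in fs/direct-io.c):

        /* Sketch: allocate the DIO completion workqueue at the first
         * open(O_DIRECT), where ->direct_IO support is already checked. */
        if (f->f_flags & O_DIRECT) {
                if (!f->f_mapping->a_ops || !f->f_mapping->a_ops->direct_IO)
                        return -EINVAL;
                error = sb_init_dio_done_wq(f->f_inode->i_sb);
                if (error)
                        return error;
        }

With that, the per-IO fast path never looks at sb->s_dio_done_wq being
NULL at all, so there is no one-time-init race left to reason about.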


Dave Chinner

Thread overview: 5+ messages
2020-07-17  5:05 Eric Biggers
2020-07-17  6:02 ` Sedat Dilek
2020-07-17 21:04 ` Darrick J. Wong
2020-07-18  0:15 ` Dave Chinner [this message]
2020-07-18  0:42   ` Eric Biggers
