LKML Archive on lore.kernel.org
From: Paolo Valente <paolo.valente@linaro.org>
To: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Cc: linux-fsdevel@vger.kernel.org,
linux-block <linux-block@vger.kernel.org>,
linux-ext4@vger.kernel.org, cgroups@vger.kernel.org,
kernel list <linux-kernel@vger.kernel.org>,
Jens Axboe <axboe@kernel.dk>, Jan Kara <jack@suse.cz>,
jmoyer@redhat.com, Theodore Ts'o <tytso@mit.edu>,
amakhalov@vmware.com, anishs@vmware.com, srivatsab@vmware.com
Subject: Re: CFQ idling kills I/O performance on ext4 with blkio cgroup controller
Date: Tue, 21 May 2019 18:21:48 +0200 [thread overview]
Message-ID: <2CB39B34-21EE-4A95-A073-8633CF2D187C@linaro.org> (raw)
In-Reply-To: <5B6570A2-541A-4CF8-98E0-979EA6E3717D@linaro.org>
[-- Attachment #1.1: Type: text/plain, Size: 2519 bytes --]
> On 21 May 2019, at 15:20, Paolo Valente <paolo.valente@linaro.org> wrote:
>
>
>
>> On 21 May 2019, at 13:25, Paolo Valente <paolo.valente@linaro.org> wrote:
>>
>>
>>
>>> On 20 May 2019, at 12:19, Paolo Valente <paolo.valente@linaro.org> wrote:
>>>
>>>
>>>
>>>> On 18 May 2019, at 22:50, Srivatsa S. Bhat <srivatsa@csail.mit.edu> wrote:
>>>>
>>>> On 5/18/19 11:39 AM, Paolo Valente wrote:
>>>>> I've addressed these issues in my last batch of improvements for BFQ,
>>>>> which landed in the upcoming 5.2. If you give it a try, and still see
>>>>> the problem, then I'll be glad to reproduce it, and hopefully fix it
>>>>> for you.
>>>>>
>>>>
>>>> Hi Paolo,
>>>>
>>>> Thank you for looking into this!
>>>>
>>>> I just tried current mainline at commit 72cf0b07, but unfortunately
>>>> didn't see any improvement:
>>>>
>>>> dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>>>>
>>>> With mq-deadline, I get:
>>>>
>>>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.90981 s, 1.3 MB/s
>>>>
>>>> With bfq, I get:
>>>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 84.8216 s, 60.4 kB/s
>>>>
>>>
>>> Hi Srivatsa,
>>> thanks for reproducing this on mainline. I seem to have reproduced a
>>> bonsai-tree version of this issue.
>>
>> Hi again Srivatsa,
>> I've analyzed the trace, and I've found the cause of the loss of
>> throughput on my side. To find out whether it is the same cause as
>> on your side, I've prepared a script that executes your test and takes
>> a trace during the test. If that's ok with you, could you please
>> - change the value for the DEVS parameter in the attached script, if
>> needed
>> - execute the script
>> - send me the trace file that the script will leave in your working
>> dir
>>
>
> Sorry, I forgot to add that I also need you to first apply the
> attached patch (it will make BFQ generate the log I need).
>
Sorry again :) This time for attaching one more patch. This is
basically a blind fix attempt, based on what I see in my VM.
So, instead of only sending me a trace, could you please:
1) apply this new patch on top of the one I attached in my previous email
2) repeat your test and report results
3) regardless of whether bfq performance improves, take a trace with
my script (I've attached a new version that doesn't risk outputting an
annoying error message like the previous one did); a command sketch of
the whole sequence follows below
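For reference, the whole sequence might look roughly like this (just a
sketch, assuming you apply the patches with git am and keep the script in
your working dir; adapt as needed):

  gunzip 0001-block-bfq-add-logs-and-BUG_ONs.patch.gz \
         0001-block-bfq-boost-injection.patch.gz
  git am 0001-block-bfq-add-logs-and-BUG_ONs.patch \
         0001-block-bfq-boost-injection.patch
  # rebuild and boot the patched kernel, then, as root:
  bash dsync_test.sh   # leaves the trace file in the working dir
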
Thanks,
Paolo
[-- Attachment #1.2: dsync_test.sh --]
[-- Type: application/octet-stream, Size: 1848 bytes --]
#!/bin/bash
DEVS=sda # please set this parameter to the dev name for your test drive
TRACE=1
# Mount debugfs if needed, enlarge the trace buffer and select the blk tracer.
function init_tracing {
    if [ "$TRACE" == "1" ] ; then
        if [ ! -d /sys/kernel/debug/tracing ] ; then
            mount -t debugfs none /sys/kernel/debug
        fi
        echo nop > /sys/kernel/debug/tracing/current_tracer
        echo 500000 > /sys/kernel/debug/tracing/buffer_size_kb
        echo blk > /sys/kernel/debug/tracing/current_tracer
    fi
}

# Turn tracing on ($1 == 1) or off ($1 == 0) for the first device in $DEVS;
# when turning it off, warn if any per-CPU ring buffer overran.
function set_tracing {
    if [ "$TRACE" == "1" ] ; then
        if [[ -e /sys/kernel/debug/tracing/tracing_enabled && \
              $(cat /sys/kernel/debug/tracing/tracing_enabled) -ne $1 ]]; then
            echo "echo $1 > /sys/kernel/debug/tracing/tracing_enabled"
            echo $1 > /sys/kernel/debug/tracing/tracing_enabled
        fi
        dev=$(echo $DEVS | awk '{ print $1 }')
        if [[ -e /sys/block/$dev/trace/enable && \
              $(cat /sys/block/$dev/trace/enable) -ne $1 ]]; then
            echo "echo $1 > /sys/block/$dev/trace/enable"
            echo $1 > /sys/block/$dev/trace/enable
        fi
        if [ "$1" == 0 ]; then
            for cpu_path in /sys/kernel/debug/tracing/per_cpu/cpu?
            do
                stat_file=$cpu_path/stats
                OVER=$(grep "overrun" $stat_file | \
                       grep -v "overrun: 0")
                if [ "$OVER" != "" ]; then
                    cpu=$(basename $cpu_path)
                    echo $OVER on $cpu, please increase buffer size!
                fi
            done
        fi
    fi
}
# Create a dedicated blkio cgroup, move this shell into it, switch the first
# device in $DEVS to bfq, run the dd workload while tracing, then save the trace.
init_tracing
dev=$(echo $DEVS | awk '{ print $1 }')
mkdir /sys/fs/cgroup/blkio/testgrp
echo $BASHPID > /sys/fs/cgroup/blkio/testgrp/cgroup.procs
echo > /sys/kernel/debug/tracing/trace
set_tracing 1
echo bfq > /sys/block/$dev/queue/scheduler
cat /sys/block/$dev/queue/scheduler
echo 0 > /sys/block/$dev/queue/iosched/low_latency
dd if=/dev/zero of=/root/test.img bs=512 count=5000 oflag=dsync
set_tracing 0
echo 1 > /sys/block/$dev/queue/iosched/low_latency
cp /sys/kernel/debug/tracing/trace .
# Move back to the root cgroup and clean up.
echo $BASHPID > /sys/fs/cgroup/blkio/cgroup.procs
rmdir /sys/fs/cgroup/blkio/testgrp
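# Usage sketch (hypothetical invocation): set DEVS above, then run this
# script as root, e.g. "bash dsync_test.sh". The blk trace is copied to
# ./trace in the current working directory; that is the file to send back.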
[-- Attachment #1.3: 0001-block-bfq-boost-injection.patch.gz --]
[-- Type: application/x-gzip, Size: 2462 bytes --]
[-- Attachment #1.4: Type: text/plain, Size: 5380 bytes --]
> Thanks,
> Paolo
>
> <0001-block-bfq-add-logs-and-BUG_ONs.patch.gz>
>
>> Looking forward to your trace,
>> Paolo
>>
>> <dsync_test.sh>
>>> Before digging into the block
>>> trace, I'd like to ask you for some feedback.
>>>
>>> First, in my test, the total throughput of the disk happens to be
>>> about 20 times as high as that enjoyed by dd, regardless of the I/O
>>> scheduler. I guess this massive overhead is normal with dsync, but
>>> I'd like to know whether it is about the same on your side. This will
>>> help me understand whether I'll actually be analyzing the same
>>> problem as yours.
>>>
>>> Second, the commands I used follow. Do they implement your test case
>>> correctly?
>>>
>>> [root@localhost tmp]# mkdir /sys/fs/cgroup/blkio/testgrp
>>> [root@localhost tmp]# echo $BASHPID > /sys/fs/cgroup/blkio/testgrp/cgroup.procs
>>> [root@localhost tmp]# cat /sys/block/sda/queue/scheduler
>>> [mq-deadline] bfq none
>>> [root@localhost tmp]# dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>>> 10000+0 records in
>>> 10000+0 records out
>>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 14.6892 s, 349 kB/s
>>> [root@localhost tmp]# echo bfq > /sys/block/sda/queue/scheduler
>>> [root@localhost tmp]# dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>>> 10000+0 records in
>>> 10000+0 records out
>>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 20.1953 s, 254 kB/s
>>>
>>> Thanks,
>>> Paolo
>>>
>>>> Please let me know if any more info about my setup might be helpful.
>>>>
>>>> Thank you!
>>>>
>>>> Regards,
>>>> Srivatsa
>>>> VMware Photon OS
>>>>
>>>>>
>>>>>> On 18 May 2019, at 00:16, Srivatsa S. Bhat <srivatsa@csail.mit.edu> wrote:
>>>>>>
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> One of my colleagues noticed up to a 10x-30x drop in I/O throughput
>>>>>> when running the following command, with the CFQ I/O scheduler:
>>>>>>
>>>>>> dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>>>>>>
>>>>>> Throughput with CFQ: 60 KB/s
>>>>>> Throughput with noop or deadline: 1.5 MB/s - 2 MB/s
>>>>>>
>>>>>> I spent some time looking into it and found that this is caused by the
>>>>>> undesirable interaction between 4 different components:
>>>>>>
>>>>>> - blkio cgroup controller enabled
>>>>>> - ext4 with the jbd2 kthread running in the root blkio cgroup
>>>>>> - dd running on ext4, in any other blkio cgroup than that of jbd2
>>>>>> - CFQ I/O scheduler with defaults for slice_idle and group_idle
>>>>>>
>>>>>>
>>>>>> When docker is enabled, systemd creates a blkio cgroup called
>>>>>> system.slice to run system services (and docker) under it, and a
>>>>>> separate blkio cgroup called user.slice for user processes. So, when
>>>>>> dd is invoked, it runs under user.slice.
>>>>>>
>>>>>> The dd command above includes the dsync flag, which performs an
>>>>>> fdatasync after every write to the output file. Since dd is writing to
>>>>>> a file on ext4, jbd2 will be active, committing transactions
>>>>>> corresponding to those fdatasync requests from dd. (In other words, dd
>>>>>> depends on jbd2 in order to make forward progress). But jbd2, being a
>>>>>> kernel thread, runs in the root blkio cgroup, as opposed to dd, which
>>>>>> runs under user.slice.
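>>>>>>
>>>>>> A quick way to check this split (illustrative; the jbd2 thread name and
>>>>>> cgroup paths depend on the system):
>>>>>>
>>>>>>    cat /proc/$(pgrep jbd2 | head -n1)/cgroup   # blkio line shows "/" (root)
>>>>>>    cat /proc/<pid-of-dd>/cgroup                # blkio line shows ".../user.slice"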
>>>>>>
>>>>>> Now, if the I/O scheduler in use for the underlying block device is
>>>>>> CFQ, then its inter-queue/inter-group idling takes effect (via the
>>>>>> slice_idle and group_idle parameters, both of which default to 8ms).
>>>>>> Therefore, every time CFQ switches between processing requests from dd
>>>>>> vs jbd2, this 8ms idle time is injected, which slows down the overall
>>>>>> throughput tremendously!
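>>>>>>
>>>>>> For reference, these are the tunables in question (sda is just an
>>>>>> example device name):
>>>>>>
>>>>>>    cat /sys/block/sda/queue/iosched/slice_idle   # 8 (ms) by default
>>>>>>    cat /sys/block/sda/queue/iosched/group_idle   # 8 (ms) by default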
>>>>>>
>>>>>> To verify this theory, I tried various experiments, and in all cases,
>>>>>> the 4 pre-conditions mentioned above were necessary to reproduce this
>>>>>> performance drop. For example, if I used an XFS filesystem (which
>>>>>> doesn't use a separate kthread like jbd2 for journaling), or if I dd'ed
>>>>>> directly to a block device, I couldn't reproduce the performance
>>>>>> issue. Similarly, running dd in the root blkio cgroup (where jbd2
>>>>>> runs) also gets full performance; as does using the noop or deadline
>>>>>> I/O schedulers; or even CFQ itself, with slice_idle and group_idle set
>>>>>> to zero.
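>>>>>>
>>>>>> That last zero-idling experiment amounts to (again, sda only as an
>>>>>> example, with cfq selected as the scheduler):
>>>>>>
>>>>>>    echo 0 > /sys/block/sda/queue/iosched/slice_idle
>>>>>>    echo 0 > /sys/block/sda/queue/iosched/group_idle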
>>>>>>
>>>>>> These results were reproduced on a Linux VM (kernel v4.19) on ESXi,
>>>>>> both with virtualized storage as well as with disk pass-through,
>>>>>> backed by a rotational hard disk in both cases. The same problem was
>>>>>> also seen with the BFQ I/O scheduler in kernel v5.1.
>>>>>>
>>>>>> Searching for any earlier discussions of this problem, I found an old
>>>>>> thread on LKML that encountered this behavior [1], as well as a docker
>>>>>> github issue [2] with similar symptoms (mentioned later in the
>>>>>> thread).
>>>>>>
>>>>>> So, I'm curious to know if this is a well-understood problem and if
>>>>>> anybody has any thoughts on how to fix it.
>>>>>>
>>>>>> Thank you very much!
>>>>>>
>>>>>>
>>>>>> [1]. https://lkml.org/lkml/2015/11/19/359
>>>>>>
>>>>>> [2]. https://github.com/moby/moby/issues/21485
>>>>>> https://github.com/moby/moby/issues/21485#issuecomment-222941103
>>>>>>
>>>>>> Regards,
>>>>>> Srivatsa