LKML Archive on lore.kernel.org
From: Paolo Valente <paolo.valente@linaro.org>
To: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Cc: linux-fsdevel@vger.kernel.org,
linux-block <linux-block@vger.kernel.org>,
linux-ext4@vger.kernel.org, cgroups@vger.kernel.org,
kernel list <linux-kernel@vger.kernel.org>,
Jens Axboe <axboe@kernel.dk>, Jan Kara <jack@suse.cz>,
jmoyer@redhat.com, Theodore Ts'o <tytso@mit.edu>,
amakhalov@vmware.com, anishs@vmware.com, srivatsab@vmware.com
Subject: Re: CFQ idling kills I/O performance on ext4 with blkio cgroup controller
Date: Tue, 21 May 2019 08:23:05 +0200 [thread overview]
Message-ID: <6EB6C9D2-E774-48FA-AC95-BC98D97645D0@linaro.org> (raw)
In-Reply-To: <238e14ff-68d1-3b21-a291-28de4f2d77af@csail.mit.edu>
> On 21 May 2019, at 00:45, Srivatsa S. Bhat <srivatsa@csail.mit.edu> wrote:
>
> On 5/20/19 3:19 AM, Paolo Valente wrote:
>>
>>
>>> On 18 May 2019, at 22:50, Srivatsa S. Bhat <srivatsa@csail.mit.edu> wrote:
>>>
>>> On 5/18/19 11:39 AM, Paolo Valente wrote:
>>>> I've addressed these issues in my last batch of improvements for BFQ,
>>>> which landed in the upcoming 5.2. If you give it a try, and still see
>>>> the problem, then I'll be glad to reproduce it, and hopefully fix it
>>>> for you.
>>>>
>>>
>>> Hi Paolo,
>>>
>>> Thank you for looking into this!
>>>
>>> I just tried current mainline at commit 72cf0b07, but unfortunately
>>> didn't see any improvement:
>>>
>>> dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>>>
>>> With mq-deadline, I get:
>>>
>>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.90981 s, 1.3 MB/s
>>>
>>> With bfq, I get:
>>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 84.8216 s, 60.4 kB/s
>>>
>>
>> Hi Srivatsa,
>> thanks for reproducing this on mainline. I seem to have reproduced a
>> bonsai-tree version of this issue. Before digging into the block
>> trace, I'd like to ask you for some feedback.
>>
>> First, in my test, the total throughput of the disk happens to be
>> about 20 times as high as that enjoyed by dd, regardless of the I/O
>> scheduler. I guess this massive overhead is normal with dsync, but
>> I'd like to know whether it is about the same on your side. This will
>> help me understand whether I'll actually be analyzing about the same
>> problem as yours.
>>
>
> Do you mean to say the throughput obtained by dd'ing directly to the
> block device (bypassing the filesystem)?
No, I simply mean the following.
1) In one terminal:
[root@localhost tmp]# dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 14.6892 s, 349 kB/s
2) In a second terminal, while the dd is in progress in the first
terminal:
$ iostat -tmd /dev/sda 3
Linux 5.1.0+ (localhost.localdomain) 20/05/2019 _x86_64_ (2 CPU)
...
20/05/2019 11:40:17
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda            2288.00         0.00         9.77          0         29

20/05/2019 11:40:20
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda            2325.33         0.00         9.93          0         29

20/05/2019 11:40:23
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda            2351.33         0.00        10.05          0         30
...
As you can see, the overall throughput (~10 MB/s) is more than 20
times as high as the dd throughput (~350 kB/s). But the dd is the
only source of I/O.
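In case it helps to see where these extra writes come from, per-task write
rates can be sampled in a third terminal while the dd runs. This is only a
sketch, and it assumes the sysstat package (which provides pidstat) is
installed:

$ pidstat -d 3

Watching the kB_wr/s column there, with oflag=dsync on ext4 I would expect
a large share of the extra traffic to be accounted to the jbd2 journal
kernel thread rather than to dd itself.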
Do you also see such a huge difference?
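If it is more convenient, the whole comparison can also be driven in one go
by a small loop. Again, this is only a sketch; it assumes /dev/sda, the
blkio test group set up in the commands quoted below, and the same test
file:

[root@localhost tmp]# mkdir -p /sys/fs/cgroup/blkio/testgrp
[root@localhost tmp]# echo $BASHPID > /sys/fs/cgroup/blkio/testgrp/cgroup.procs
[root@localhost tmp]# for sched in mq-deadline bfq; do echo $sched > /sys/block/sda/queue/scheduler; dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync; done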
Thanks,
Paolo
> That does give me a 20x
> speedup with bs=512, but much more with a bigger block size (achieving
> a max throughput of about 110 MB/s).
>
> dd if=/dev/zero of=/dev/sdc bs=512 count=10000 conv=fsync
> 10000+0 records in
> 10000+0 records out
> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 0.15257 s, 33.6 MB/s
>
> dd if=/dev/zero of=/dev/sdc bs=4k count=10000 conv=fsync
> 10000+0 records in
> 10000+0 records out
> 40960000 bytes (41 MB, 39 MiB) copied, 0.395081 s, 104 MB/s
>
> I'm testing this on a Toshiba MG03ACA1 (1TB) hard disk.
>
>> Second, the commands I used follow. Do they implement your test case
>> correctly?
>>
>> [root@localhost tmp]# mkdir /sys/fs/cgroup/blkio/testgrp
>> [root@localhost tmp]# echo $BASHPID > /sys/fs/cgroup/blkio/testgrp/cgroup.procs
>> [root@localhost tmp]# cat /sys/block/sda/queue/scheduler
>> [mq-deadline] bfq none
>> [root@localhost tmp]# dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>> 10000+0 records in
>> 10000+0 records out
>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 14.6892 s, 349 kB/s
>> [root@localhost tmp]# echo bfq > /sys/block/sda/queue/scheduler
>> [root@localhost tmp]# dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>> 10000+0 records in
>> 10000+0 records out
>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 20.1953 s, 254 kB/s
>>
>
> Yes, this is indeed the testcase, although I see a much bigger
> drop in performance with bfq, compared to the results from
> your setup.
>
> Regards,
> Srivatsa