LKML Archive on lore.kernel.org
From: Bart Van Assche <bvanassche@acm.org>
To: "yukuai (C)" <yukuai3@huawei.com>, axboe@kernel.dk, ming.lei@redhat.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yi.zhang@huawei.com
Subject: Re: [PATCH] blk-mq: allow hardware queue to get more tag while sharing a tag set
Date: Mon, 2 Aug 2021 09:17:31 -0700	[thread overview]
Message-ID: <e587c572-bcd7-87c4-5eea-30ccdc7455db@acm.org> (raw)
In-Reply-To: <5ab07cf8-a2a5-a60e-c86a-ab6ea53990bb@huawei.com>

On 8/2/21 6:34 AM, yukuai (C) wrote:
> I ran a test on both null_blk and nvme; the results show no
> performance degradation:
> 
> test platform: x86
> test cpu: 2 NUMA nodes, 72 CPUs total
> test scheduler: none
> test device: null_blk / nvme
> 
> test cmd: fio -filename=/dev/xxx -name=test -ioengine=libaio -direct=1
> -numjobs=72 -iodepth=16 -bs=4k -rw=write -offset_increment=1G
> -cpus_allowed=0:71 -cpus_allowed_policy=split -group_reporting
> -runtime=120
> 
> test results: iops
> 1) null_blk before this patch: 280k
> 2) null_blk after this patch: 282k
> 3) nvme before this patch: 378k
> 4) nvme after this patch: 384k

Please use io_uring for performance tests.

The null_blk numbers seem way too low to me. If I run a null_blk 
performance test inside a VM with 6 CPU cores (Xeon W-2135 CPU), I see 
about 6 million IOPS for synchronous I/O and about 4.4 million IOPS when 
using libaio. The options I used that are not in the above command 
line are: --thread --gtod_reduce=1 --ioscheduler=none.
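Putting these suggestions together, the quoted test command might look as 
follows. This is only a sketch: /dev/xxx is a placeholder device, and the 
-numjobs/-cpus_allowed/-offset_increment values are carried over from the 
quoted 72-CPU setup, not verified here.

```shell
# Quoted fio command with the suggested changes applied: io_uring
# instead of libaio, plus --thread, --gtod_reduce=1 and
# --ioscheduler=none. Adjust the device path and CPU layout to the
# machine under test.
fio -filename=/dev/xxx -name=test -ioengine=io_uring -direct=1 \
    -numjobs=72 -iodepth=16 -bs=4k -rw=write -offset_increment=1G \
    -cpus_allowed=0:71 -cpus_allowed_policy=split -group_reporting \
    -runtime=120 \
    --thread --gtod_reduce=1 --ioscheduler=none
```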

Thanks,

Bart.


Thread overview: 13+ messages
2021-07-12  3:18 Yu Kuai
2021-07-12  3:19 ` yukuai (C)
2021-07-20 12:33 ` yukuai (C)
2021-07-31  7:13   ` yukuai (C)
2021-07-31 17:15 ` Bart Van Assche
2021-08-02 13:34   ` yukuai (C)
2021-08-02 16:17     ` Bart Van Assche [this message]
2021-08-03  2:57       ` yukuai (C)
2021-08-03 18:38         ` Bart Van Assche
2021-08-06  1:50           ` yukuai (C)
2021-08-06  2:43             ` Bart Van Assche
2021-08-14  9:43               ` yukuai (C)
2021-08-16  4:16                 ` Bart Van Assche
