LKML Archive on lore.kernel.org
From: Ming Lei <ming.lei@redhat.com>
To: John Garry <john.garry@huawei.com>
Cc: Robin Murphy <robin.murphy@arm.com>,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	iommu@lists.linux-foundation.org, Will Deacon <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node
Date: Tue, 10 Aug 2021 18:35:45 +0800	[thread overview]
Message-ID: <YRJWgU5VhzBe1JP4@T590> (raw)
In-Reply-To: <dfdd16e8-278f-3bc9-da97-a91264aec909@huawei.com>

On Tue, Aug 10, 2021 at 10:36:47AM +0100, John Garry wrote:
> On 28/07/2021 16:17, Ming Lei wrote:
> > > > > Have you tried turning off the IOMMU to ensure that this is really just
> > > > > an IOMMU problem?
> > > > > 
> > > > > You can try setting CONFIG_ARM_SMMU_V3=n in the defconfig or passing
> > > > > the cmdline param iommu.passthrough=1 to bypass the SMMU (equivalent to
> > > > > disabling it for kernel drivers).
> > > > Bypassing SMMU via iommu.passthrough=1 basically doesn't make a difference
> > > > on this issue.
> > > A ~90% throughput drop still seems to me too high to be a software
> > > issue, more so since I don't see anything similar on my system. And, per the
> > > fio log, that throughput drop is not accompanied by a drop in total CPU usage.
> > > 
> > > Do you know if anyone has run memory benchmark tests on this board to find
> > > out the NUMA effect? I think lmbench or STREAM could be used for this.
> > https://lore.kernel.org/lkml/YOhbc5C47IzC893B@T590/
> 
> Hi Ming,
> 
> Out of curiosity, did you investigate this topic any further?

IMO, the issue is probably on the device/system side, since completion latency
increases a lot while submission latency is unchanged.

Either the submission isn't committed to the hardware in time, or the
completion status isn't updated in time from the CPU's point of view.

We have tried updating to new firmware, but it made no difference.
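For reference, the submission/completion split above can be observed directly with fio, which reports slat (submission) and clat (completion) latency separately. A minimal sketch, not taken from the thread: the device path and NUMA node number are assumptions, and `numa_cpu_nodes` needs fio built with libnuma.

```shell
#!/bin/sh
# Sketch only: adjust the device and node for the machine under test.
dev=/dev/nvme0n1      # hypothetical NVMe namespace
node=1                # hypothetical remote NUMA node

if command -v fio >/dev/null 2>&1 && [ -b "$dev" ]; then
    # fio prints slat (submission) and clat (completion) percentiles
    # separately, so the two latency components can be compared directly.
    fio --name=remote-numa --filename="$dev" --direct=1 --rw=randread \
        --bs=4k --ioengine=io_uring --iodepth=64 \
        --runtime=30 --time_based --numa_cpu_nodes="$node"
else
    echo "skipping: fio or $dev not available on this host"
fi
```

If slat stays flat while clat grows on the remote node, that points the same way as the observation above.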

> 
> And you also asked about my results earlier:
> 
> On 22/07/2021 16:54, Ming Lei wrote:
> >> [   52.035895] nvme 0000:81:00.0: Adding to iommu group 5
> >> [   52.047732] nvme nvme0: pci function 0000:81:00.0
> >> [   52.067216] nvme nvme0: 22/0/2 default/read/poll queues
> >> [   52.087318]  nvme0n1: p1
> >>
> >> So I get these results:
> >> cpu0 335K
> >> cpu32 346K
> >> cpu64 300K
> >> cpu96 300K
> >>
> >> So still not massive changes.
> > In your last email, the results are the following with irq mode io_uring:
> >
> >   cpu0  497K
> >   cpu4  307K
> >   cpu32 566K
> >   cpu64 488K
> >   cpu96 508K
> >
> > So it looks like you get a much worse result with real io_polling?
> >
> 
> Would the expectation be that at least I get the same performance with
> io_polling here?

io_polling is supposed to improve IO latency a lot compared with irq
mode, and the perf data shows that clearly on x86_64.
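For completeness, polled completions need poll queues allocated by the driver (the "22/0/2 default/read/poll queues" line in the log above shows 2 of them) plus hipri set on the issuing side. A hedged sketch; the device path is an assumption for illustration, and the poll queue count comes from the nvme module parameter (e.g. nvme.poll_queues=2 on the kernel cmdline):

```shell
#!/bin/sh
# Sketch only: polled I/O needs nvme poll queues allocated at probe time
# (nvme.poll_queues=N) plus --hipri on the fio/io_uring side.
dev=/dev/nvme0n1      # hypothetical NVMe namespace

if command -v fio >/dev/null 2>&1 && [ -b "$dev" ]; then
    # --hipri makes the io_uring engine busy-poll for completions
    # instead of sleeping until an interrupt arrives.
    fio --name=polled --filename="$dev" --direct=1 --rw=randread \
        --bs=4k --ioengine=io_uring --hipri --iodepth=32 \
        --runtime=30 --time_based
else
    echo "skipping: fio or $dev not available on this host"
fi
```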

> Anything else to try which you can suggest to investigate
> this lower performance?

You may try comparing irq mode and polling to narrow down the possible
reasons; I have no exact suggestion on how to investigate it, :-(
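One way to start that comparison is to run the identical workload twice, once interrupt-driven and once polled, and diff the slat/clat breakdowns afterwards. A rough sketch (device path is an assumption; queue depth 1 keeps the latency numbers clean):

```shell
#!/bin/sh
# Sketch only: same single-depth workload, interrupt-driven (hipri=0) vs
# polled (hipri=1), written out as JSON so slat/clat can be diffed later.
dev=/dev/nvme0n1      # hypothetical NVMe namespace

if command -v fio >/dev/null 2>&1 && [ -b "$dev" ]; then
    for hipri in 0 1; do
        fio --name="cmp-hipri$hipri" --filename="$dev" --direct=1 \
            --rw=randread --bs=4k --ioengine=io_uring --hipri="$hipri" \
            --iodepth=1 --runtime=20 --time_based \
            --output-format=json --output="cmp-hipri$hipri.json"
    done
else
    echo "skipping: fio or $dev not available on this host"
fi
```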


Thanks,
Ming



Thread overview: 30+ messages
2021-07-09  8:38 [bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node Ming Lei
2021-07-09 10:16 ` Russell King (Oracle)
2021-07-09 14:21   ` Ming Lei
2021-07-09 10:26 ` Robin Murphy
2021-07-09 11:04   ` John Garry
2021-07-09 12:34     ` Robin Murphy
2021-07-09 14:24   ` Ming Lei
2021-07-19 16:14     ` John Garry
2021-07-21  1:40       ` Ming Lei
2021-07-21  9:23         ` John Garry
2021-07-21  9:59           ` Ming Lei
2021-07-21 11:07             ` John Garry
2021-07-21 11:58               ` Ming Lei
2021-07-22  7:58               ` Ming Lei
2021-07-22 10:05                 ` John Garry
2021-07-22 10:19                   ` Ming Lei
2021-07-22 11:12                     ` John Garry
2021-07-22 12:53                       ` Marc Zyngier
2021-07-22 13:54                         ` John Garry
2021-07-22 15:54                       ` Ming Lei
2021-07-22 17:40                         ` Robin Murphy
2021-07-23 10:21                           ` Ming Lei
2021-07-26  7:51                             ` John Garry
2021-07-28  1:32                               ` Ming Lei
2021-07-28 10:38                                 ` John Garry
2021-07-28 15:17                                   ` Ming Lei
2021-07-28 15:39                                     ` Robin Murphy
2021-08-10  9:36                                     ` John Garry
2021-08-10 10:35                                       ` Ming Lei [this message]
2021-07-27 17:08                             ` Robin Murphy
