From: Johannes Weiner <>
To: Linus Torvalds <>
Cc: Shakeel Butt <>,
	Andrew Morton <>,
	kernel test robot <>, LKP <>,
	LKML <>,
	Michal Hocko <>, Roman Gushchin <>,
	Linux MM <>, Cgroups <>
Subject: Re: [PATCH] mm: memcontrol: don't batch updates of local VM stats and events
Date: Tue, 28 May 2019 16:32:18 -0400
Message-ID: <>
In-Reply-To: <>

On Tue, May 28, 2019 at 10:37:15AM -0700, Linus Torvalds wrote:
> On Tue, May 28, 2019 at 9:00 AM Shakeel Butt <> wrote:
> >
> > I was suspecting the following for-loop+atomic-add for the regression.
>
> If I read the kernel test robot reports correctly, Johannes' fix patch
> does fix the regression (well - mostly. The original reported
> regression was 26%, and with Johannes' fix patch it was 3% - so still
> a slight performance regression, but not nearly as bad).
>
> > Why the above atomic-add is the culprit?
>
> I think the problem with that one is that it's cross-cpu statistics,
> so you end up with lots of cacheline bounces on the local counts when
> you have lots of load.

In this case, that's true for both of them. The workload runs at the
root cgroup level, so by definition the local and the recursive
counters at that level are identical and written to at the same
rate. Adding the new counter obviously caused the regression, but
they're contributing equally to the cost, and we could
remove/per-cpuify either of them for the fix.
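
To make that concrete, here is a rough user-space sketch of what the
update path looked like with both counters shared. This is
illustrative only, not the kernel source: the struct, the function
name, the batch value and the thread-local stand-in for the per-cpu
batch are all made up, and C11 atomics stand in for atomic_long_add().

#include <stdatomic.h>
#include <stdlib.h>

#define CHARGE_BATCH	64	/* arbitrary batch threshold for the sketch */

struct memcg_sketch {
	struct memcg_sketch *parent;
	atomic_long stat_local;		/* shared "local" count */
	atomic_long stat_recursive;	/* shared hierarchical count */
};

/* stand-in for the per-cpu batch of pending updates */
static _Thread_local long pending;

static void sketch_mod_state(struct memcg_sketch *memcg, long val)
{
	long x = pending + val;

	if (labs(x) > CHARGE_BATCH) {
		struct memcg_sketch *mi;

		/* one shared cacheline for the local count ... */
		atomic_fetch_add_explicit(&memcg->stat_local, x,
					  memory_order_relaxed);
		/* ... plus one per level for the recursive counts */
		for (mi = memcg; mi; mi = mi->parent)
			atomic_fetch_add_explicit(&mi->stat_recursive, x,
						  memory_order_relaxed);
		x = 0;
	}
	pending = x;
}

At the root, stat_local and stat_recursive are flushed at the same
rate, so every CPU's batch flush bounces both shared cachelines
equally under a will-it-scale type load.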

So why did I unshare the old counter instead of the new one? Well, the
old counter *used* to be unshared for the longest time, and was only
made into a shared one to make recursive aggregation cheaper - before
there was a dedicated recursive counter. But now that we have that
recursive counter, there isn't much reason to keep the local counter
shared and bounce it around on updates.

Essentially, this fix-up is a revert of a983b5ebee57 ("mm: memcontrol:
fix excessive complexity in memory.stat reporting") since the problem
described in that patch is now solved from the other end.
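
For contrast, the same path after this fix-up looks roughly like the
following (same caveats as the sketch above, whose struct, batch
threshold and "pending" variable it reuses): the local count becomes a
plain per-cpu - here per-thread - variable that is bumped on every
update, and only the recursive counters are still batched into shared
atomics.

/* local count unshared again, as it was before a983b5ebee57 */
static _Thread_local long local_count;

static void sketch_mod_state_fixed(struct memcg_sketch *memcg, long val)
{
	long x;

	/* local count: purely per-thread, no shared cacheline touched */
	local_count += val;

	/* recursive counts: still batched, shared only up the hierarchy */
	x = pending + val;
	if (labs(x) > CHARGE_BATCH) {
		struct memcg_sketch *mi;

		for (mi = memcg; mi; mi = mi->parent)
			atomic_fetch_add_explicit(&mi->stat_recursive, x,
						  memory_order_relaxed);
		x = 0;
	}
	pending = x;
}

The cost that remains is visible here too: a second per-cpu location
to bump on every update, and the branch plus hierarchy walk at flush
time.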

> But yes, the recursive updates still do show a small regression,
> probably because there's still some overhead from the looping up in
> the hierarchy. You still get *those* cacheline bounces, but now they
> are limited to the upper hierarchies that only get updated at batch
> time.

Right, the fix reduces the *shared* data back to what it was before
the patch, but it still adds a second (per-cpu) counter that needs to
be bumped, and the loop adds a branch as well.

But while I would expect that to show up in a case like will-it-scale,
I'd be surprised if the remaining difference were noticeable for
real workloads that actually work with the memory they allocate.

Thread overview: 8+ messages
2019-05-20  6:35 [mm] 42a3003535: will-it-scale.per_process_ops -25.9% regression kernel test robot
2019-05-20 21:53 ` Johannes Weiner
2019-05-21 13:46   ` [LKP] " kernel test robot
2019-05-21 15:13     ` Johannes Weiner
2019-05-21 15:16     ` [PATCH] mm: memcontrol: don't batch updates of local VM stats and events Johannes Weiner
2019-05-28 16:00       ` Shakeel Butt
2019-05-28 17:37         ` Linus Torvalds
2019-05-28 20:32           ` Johannes Weiner [this message]
