LKML Archive on lore.kernel.org
From: Roman Gushchin <firstname.lastname@example.org>
To: Shakeel Butt <email@example.com>
Cc: Andrew Morton <firstname.lastname@example.org>,
Linux MM <email@example.com>,
Kernel Team <firstname.lastname@example.org>,
Johannes Weiner <email@example.com>,
Michal Hocko <firstname.lastname@example.org>,
"Rik van Riel" <email@example.com>,
Christoph Lameter <firstname.lastname@example.org>,
"Vladimir Davydov" <email@example.com>,
Subject: Re: [PATCH v3 0/7] mm: reparent slab memory on cgroup removal
Date: Tue, 14 May 2019 20:04:07 +0000 [thread overview]
Message-ID: <20190514200402.GE12629@tower.DHCP.thefacebook.com> (raw)
On Tue, May 14, 2019 at 12:22:08PM -0700, Shakeel Butt wrote:
> From: Roman Gushchin <firstname.lastname@example.org>
> Date: Mon, May 13, 2019 at 1:22 PM
> To: Shakeel Butt
> Cc: Andrew Morton, Linux MM, LKML, Kernel Team, Johannes Weiner,
> Michal Hocko, Rik van Riel, Christoph Lameter, Vladimir Davydov,
> > On Fri, May 10, 2019 at 05:32:15PM -0700, Shakeel Butt wrote:
> > > From: Roman Gushchin <email@example.com>
> > > Date: Wed, May 8, 2019 at 1:30 PM
> > > To: Andrew Morton, Shakeel Butt
> > > Cc: <firstname.lastname@example.org>, <email@example.com>,
> > > <firstname.lastname@example.org>, Johannes Weiner, Michal Hocko, Rik van Riel,
> > > Christoph Lameter, Vladimir Davydov, <email@example.com>, Roman
> > > Gushchin
> > >
> > > > # Why do we need this?
> > > >
> > > > We've noticed that the number of dying cgroups is steadily growing on most
> > > > of our hosts in production. The following investigation revealed an issue
> > > > in userspace memory reclaim code, accounting of kernel stacks,
> > > > and also the main reason: slab objects.
> > > >
> > > > The underlying problem is quite simple: any page charged
> > > > to a cgroup holds a reference to it, so the cgroup can't be reclaimed unless
> > > > all charged pages are gone. If a slab object is actively used by other cgroups,
> > > > it won't be reclaimed, and will prevent the origin cgroup from being reclaimed.
> > > >
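The pinning mechanism described in the quoted paragraph can be modeled with a toy reference count in userspace C. This is only an illustrative sketch, not actual kernel code; the struct and function names are made up, loosely echoing css_get()/css_put():

```c
#include <assert.h>

/* Toy model: a "cgroup" with a reference count, and objects charged
 * to it. A charged object pins the cgroup, so a deleted (dying)
 * cgroup cannot be freed while any charged object survives. */
struct toy_cgroup {
    int refcount;
    int dead;   /* rmdir'ed by userspace, but possibly still pinned */
};

struct toy_obj {
    struct toy_cgroup *memcg;
};

static void css_get(struct toy_cgroup *cg) { cg->refcount++; }

/* Returns 1 when the last reference is gone and the cgroup
 * structure could actually be freed. */
static int css_put(struct toy_cgroup *cg)
{
    return --cg->refcount == 0;
}

static void charge(struct toy_obj *obj, struct toy_cgroup *cg)
{
    obj->memcg = cg;
    css_get(cg);        /* the charged object holds a reference */
}

static int uncharge(struct toy_obj *obj)
{
    return css_put(obj->memcg);
}
```

In this model, deleting the cgroup (setting `dead`) changes nothing by itself: only dropping the last charge lets it go, which is exactly why long-lived shared slab objects accumulate dying cgroups.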
> > > > Slab objects, and first of all the vfs cache, are shared between cgroups
> > > > that use the same underlying fs, and, what's even more important, they are
> > > > shared between multiple generations of the same workload. So if something
> > > > runs periodically in a fresh cgroup each time (like how systemd works), we
> > > > do accumulate multiple dying cgroups.
> > > >
> > > > Strictly speaking the pagecache isn't fundamentally different here, but there
> > > > is a key distinction: we disable protection and apply extra pressure on the
> > > > LRUs of dying cgroups,
> > >
> > > How do you apply extra pressure on dying cgroups? cgroup-v2 does not
> > > have memory.force_empty.
> > I mean the following part of get_scan_count():
> >     /*
> >      * If the cgroup's already been deleted, make sure to
> >      * scrape out the remaining cache.
> >      */
> >     if (!scan && !mem_cgroup_online(memcg))
> >         scan = min(lruvec_size, SWAP_CLUSTER_MAX);
> > It seems to work well, so that pagecache alone doesn't pin too many
> > dying cgroups. The price we're paying is some excessive IO here,
> Thanks for the explanation. However, for this to work, something still
> needs to trigger memory pressure; until then we will keep the
> zombies around. BTW, get_scan_count() is getting really creepy. It
> needs a refactor soon.
Sure, but that's true for all sorts of memory.
Re get_scan_count(): for sure, yeah, it's way too hairy now.
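For reference, the quoted check boils down to something like the following simplified userspace model. SWAP_CLUSTER_MAX is 32 in the kernel; everything else here is illustrative, not the actual get_scan_count() code:

```c
#include <assert.h>

#define SWAP_CLUSTER_MAX 32UL

/* Simplified model of the quoted get_scan_count() snippet: if
 * proportional reclaim computed a scan target of zero for an
 * offline (deleted) memcg, still scrape out up to SWAP_CLUSTER_MAX
 * pages, so pagecache alone can't pin a dying cgroup forever. */
static unsigned long scan_target(unsigned long scan,
                                 unsigned long lruvec_size,
                                 int memcg_online)
{
    if (scan == 0 && !memcg_online)
        scan = lruvec_size < SWAP_CLUSTER_MAX ?
               lruvec_size : SWAP_CLUSTER_MAX;
    return scan;
}
```

Note the clamp only kicks in under reclaim: an online memcg, or one whose proportional scan is already non-zero, is left alone.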
> > which could be avoided if we were able to recharge the pagecache.
> Are you looking into this? Do you envision a mount option which will
> mark the filesystem as shared and trigger recharging on the offlining
> of the origin memcg?
Not really working on it right now, but I'm thinking about what to do here
long-term. One of the ideas I have (just an idea for now) is to move the memcg
pointer from individual pages to the inode level. It could bring more
opportunities in terms of recharging and reparenting, but I'm not sure how
complex it is and what the possible downsides are.
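To illustrate the idea (purely hypothetical, not actual kernel code; all names are made up): if the memcg pointer lived in the inode rather than in each page, recharging the whole pagecache of a file would be a single pointer update instead of a walk over every page:

```c
#include <assert.h>

struct toy_memcg { const char *name; };

/* Hypothetical layout: one owner per inode... */
struct toy_inode {
    struct toy_memcg *memcg;   /* charged cgroup for all cached pages */
};

/* ...and pages reach their memcg indirectly through the inode. */
struct toy_page {
    struct toy_inode *inode;
};

static struct toy_memcg *page_memcg(struct toy_page *page)
{
    return page->inode->memcg;
}

static void recharge_inode(struct toy_inode *inode, struct toy_memcg *to)
{
    inode->memcg = to;         /* every cached page follows for free */
}
```

The obvious open questions (which the sketch glosses over) are charging granularity and what happens when several cgroups touch the same inode.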
Do you have any plans or ideas here?
> > Btw, thank you very much for looking into the patchset. I'll address
> > all comments and send v4 soon.
> You are most welcome.
Thread overview: 12+ messages
[not found] <firstname.lastname@example.org>
2019-05-11 0:32 ` Shakeel Butt
2019-05-13 20:21 ` Roman Gushchin
2019-05-14 19:22 ` Shakeel Butt
2019-05-14 20:04 ` Roman Gushchin [this message]
[not found] ` <email@example.com>
2019-05-11 0:32 ` [PATCH v3 1/7] mm: postpone kmem_cache memcg pointer initialization to memcg_link_cache() Shakeel Butt
[not found] ` <firstname.lastname@example.org>
2019-05-11 0:33 ` [PATCH v3 2/7] mm: generalize postponed non-root kmem_cache deactivation Shakeel Butt
[not found] ` <email@example.com>
2019-05-11 0:33 ` [PATCH v3 3/7] mm: introduce __memcg_kmem_uncharge_memcg() Shakeel Butt
[not found] ` <firstname.lastname@example.org>
2019-05-11 0:33 ` [PATCH v3 5/7] mm: rework non-root kmem_cache lifecycle management Shakeel Butt
[not found] ` <email@example.com>
2019-05-11 0:34 ` [PATCH v3 6/7] mm: reparent slab memory on cgroup removal Shakeel Butt
[not found] ` <firstname.lastname@example.org>
2019-05-11 0:34 ` [PATCH v3 7/7] mm: fix /proc/kpagecgroup interface for slab pages Shakeel Butt
[not found] ` <email@example.com>
2019-05-11 0:33 ` [PATCH v3 4/7] mm: unify SLAB and SLUB page accounting Shakeel Butt
2019-05-13 18:01 ` Christopher Lameter