LKML Archive on lore.kernel.org
From: Vladimir Davydov <vdavydov.dev@gmail.com>
To: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: akpm@linux-foundation.org, shakeelb@google.com,
	viro@zeniv.linux.org.uk, hannes@cmpxchg.org, mhocko@kernel.org,
	tglx@linutronix.de, pombredanne@nexb.com,
	stummala@codeaurora.org, gregkh@linuxfoundation.org,
	sfr@canb.auug.org.au, guro@fb.com, mka@chromium.org,
	penguin-kernel@I-love.SAKURA.ne.jp, chris@chris-wilson.co.uk,
	longman@redhat.com, minchan@kernel.org, ying.huang@intel.com,
	mgorman@techsingularity.net, jbacik@fb.com, linux@roeck-us.net,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	willy@infradead.org, lirongqing@baidu.com,
	aryabinin@virtuozzo.com
Subject: Re: [PATCH v5 03/13] mm: Assign memcg-aware shrinkers bitmap to memcg
Date: Tue, 15 May 2018 06:54:15 +0300
Message-ID: <20180515035415.3jpx3uqpztnzlnez@esperanza>
In-Reply-To: <d8c3a265-f20c-7bf5-23a7-8b80cf25af3d@virtuozzo.com>

On Mon, May 14, 2018 at 12:34:45PM +0300, Kirill Tkhai wrote:
> >> +static void memcg_free_shrinker_maps(struct mem_cgroup *memcg)
> >> +{
> >> +	struct mem_cgroup_per_node *pn;
> >> +	struct memcg_shrinker_map *map;
> >> +	int nid;
> >> +
> >> +	if (memcg == root_mem_cgroup)
> >> +		return;
> >> +
> >> +	mutex_lock(&shrinkers_nr_max_mutex);
> > 
> > Why do you need to take the mutex here? You don't access shrinker map
> > capacity here AFAICS.
> 
> Allocation of the shrinker map is in css_online() now, and that comes at
> a price: memcg_expand_one_shrinker_map() must be able to distinguish mem
> cgroups with allocated maps, mem cgroups whose maps are not allocated yet,
> and mem cgroups whose css_online() failed or is failing. So the mutex is
> used for synchronization with expanding. See the "old_size && !old" check
> in memcg_expand_one_shrinker_map().

Another reason to keep the 'expand' and 'alloc' paths separate: you
wouldn't need to take the mutex here, because 'free' would no longer be
used for undoing the initial allocation; instead, 'alloc' would clean up
after itself while still holding the mutex.
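
Roughly something like this - a completely untested sketch, reusing the
names from your patch (the allocation details will surely differ):

static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
{
	int nid, size = memcg_shrinker_nr_max / BITS_PER_BYTE;
	struct mem_cgroup_per_node *pn;
	struct memcg_shrinker_map *map;

	if (memcg == root_mem_cgroup)
		return 0;

	mutex_lock(&shrinkers_nr_max_mutex);
	for_each_node(nid) {
		map = kvzalloc(sizeof(*map) + size, GFP_KERNEL);
		if (!map)
			goto err;
		rcu_assign_pointer(mem_cgroup_nodeinfo(memcg, nid)->shrinker_map,
				   map);
	}
	mutex_unlock(&shrinkers_nr_max_mutex);
	return 0;
err:
	/* Undo the partial allocation right here, still under the mutex. */
	for_each_node(nid) {
		pn = mem_cgroup_nodeinfo(memcg, nid);
		map = rcu_dereference_protected(pn->shrinker_map, true);
		if (map)
			call_rcu(&map->rcu, memcg_free_shrinker_map_rcu);
		rcu_assign_pointer(pn->shrinker_map, NULL);
	}
	mutex_unlock(&shrinkers_nr_max_mutex);
	return -ENOMEM;
}

This way 'free' never has to undo a half-initialized cgroup, so it can
drop the mutex entirely.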

> 
> >> +	for_each_node(nid) {
> >> +		pn = mem_cgroup_nodeinfo(memcg, nid);
> >> +		map = rcu_dereference_protected(pn->shrinker_map, true);
> >> +		if (map)
> >> +			call_rcu(&map->rcu, memcg_free_shrinker_map_rcu);
> >> +		rcu_assign_pointer(pn->shrinker_map, NULL);
> >> +	}
> >> +	mutex_unlock(&shrinkers_nr_max_mutex);
> >> +}
> >> +
> >> +static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
> >> +{
> >> +	int ret, size = memcg_shrinker_nr_max/BITS_PER_BYTE;
> >> +
> >> +	if (memcg == root_mem_cgroup)
> >> +		return 0;
> >> +
> >> +	mutex_lock(&shrinkers_nr_max_mutex);
> >> +	ret = memcg_expand_one_shrinker_map(memcg, size, 0);
> > 
> > I don't think it's worth reusing the function designed for reallocating
> > shrinker maps for initial allocation. Please just fold the code here -
> > it will make both 'alloc' and 'expand' easier to follow IMHO.
> 
> These functions would share about 80% of their code. What is the reason
> to duplicate the same functionality? Two functions are more difficult to
> maintain, and everywhere in the kernel we try to avoid that IMHO.

IMHO two functions with clear semantics are easier to maintain than
a function that does one of two things depending on some condition.
Separating 'alloc' from 'expand' would only add 10-15 SLOC.
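
To illustrate the split I have in mind - another untested sketch; I'm
assuming struct memcg_shrinker_map is an rcu head followed by a flexible
bitmap array, as your sizeof(*map) + size suggests:

/* Caller must hold shrinkers_nr_max_mutex. */
static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
					 int size, int old_size)
{
	struct memcg_shrinker_map *new, *old;
	int nid;

	for_each_node(nid) {
		struct mem_cgroup_per_node *pn = mem_cgroup_nodeinfo(memcg, nid);

		old = rcu_dereference_protected(pn->shrinker_map, true);
		/* NULL means not online yet or being torn down: skip it. */
		if (!old)
			continue;

		new = kvmalloc(sizeof(*new) + size, GFP_KERNEL);
		if (!new)
			return -ENOMEM;

		/* Copy the old bits, zero the newly appended part.
		 * (->map is the assumed flexible bitmap array.) */
		memcpy(new->map, old->map, old_size);
		memset((void *)new->map + old_size, 0, size - old_size);

		rcu_assign_pointer(pn->shrinker_map, new);
		call_rcu(&old->rcu, memcg_free_shrinker_map_rcu);
	}
	return 0;
}

With initial allocation gone from here, a NULL map is handled uniformly
and the "old_size && !old" special case disappears.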

> >> +	mutex_unlock(&shrinkers_nr_max_mutex);
> >> +
> >> +	if (ret)
> >> +		memcg_free_shrinker_maps(memcg);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +static struct idr mem_cgroup_idr;
> >> +
> >> +int memcg_expand_shrinker_maps(int old_nr, int nr)
> >> +{
> >> +	int size, old_size, ret = 0;
> >> +	struct mem_cgroup *memcg;
> >> +
> >> +	old_size = old_nr / BITS_PER_BYTE;
> >> +	size = nr / BITS_PER_BYTE;
> >> +
> >> +	mutex_lock(&shrinkers_nr_max_mutex);
> >> +
> >> +	if (!root_mem_cgroup)
> >> +		goto unlock;
> > 
> > This wants a comment.
> 
> Which comment does this want? "root_mem_cgroup is not initialized, so
> it does not have child mem cgroups"?

Looking at this code again, I find it pretty self-explanatory, sorry.

Thanks.

