LKML Archive on lore.kernel.org
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
To: David Rientjes <rientjes@google.com>, Michal Hocko <mhocko@kernel.org>
Cc: "Li,Rongqing" <lirongqing@baidu.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"cgroups@vger.kernel.org" <cgroups@vger.kernel.org>,
"hannes@cmpxchg.org" <hannes@cmpxchg.org>
Subject: Re: Re: Re: [PATCH] mm/memcontrol.c: speed up to force empty a memory cgroup
Date: Wed, 21 Mar 2018 01:08:02 +0300 [thread overview]
Message-ID: <56508bd0-e8d7-55fd-5109-c8dacf26b13e@virtuozzo.com> (raw)
In-Reply-To: <alpine.DEB.2.20.1803201327060.167205@chino.kir.corp.google.com>
On 03/20/2018 11:29 PM, David Rientjes wrote:
> On Tue, 20 Mar 2018, Michal Hocko wrote:
>
>>>>>> Although SWAP_CLUSTER_MAX is used at the lower level, the call
>>>>>> stack of try_to_free_mem_cgroup_pages is long; increasing
>>>>>> nr_to_reclaim reduces the number of times we walk through
>>>>>> do_try_to_free_pages, shrink_zones and shrink_node:
>>>>>>
>>>>>> mem_cgroup_resize_limit
>>>>>>  ---> try_to_free_mem_cgroup_pages:
>>>>>>         .nr_to_reclaim = max(1024, SWAP_CLUSTER_MAX),
>>>>>>  ---> do_try_to_free_pages
>>>>>>  ---> shrink_zones
>>>>>>  ---> shrink_node
>>>>>>  ---> shrink_node_memcg
>>>>>>  ---> shrink_list        <---- the loop happens here [times = 1024/32]
>>>>>>  ---> shrink_page_list
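
(For reference, the loop being described: a simplified sketch based on the
v4.16-era mem_cgroup_resize_limit() in mm/memcontrol.c, with error handling,
the retry limit and the memsw variant elided.)

	static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
					   unsigned long limit)
	{
		int ret;

		do {
			if (signal_pending(current))
				return -EINTR;

			/* global mutex, shared by all cgroups */
			mutex_lock(&memcg_limit_mutex);
			ret = page_counter_limit(&memcg->memory, limit);
			mutex_unlock(&memcg_limit_mutex);

			if (!ret)	/* usage already fits the new limit */
				break;

			/*
			 * nr_pages is 1 here, but vmscan clamps it up to
			 * SWAP_CLUSTER_MAX (32). The patch under discussion
			 * passes 1024 instead, to cut the number of loop
			 * iterations.
			 */
			if (!try_to_free_mem_cgroup_pages(memcg, 1,
							  GFP_KERNEL, true))
				return -EBUSY;
		} while (true);

		return ret;
	}
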
>>>>>
>>>>> Can you actually measure this to be the culprit? If the call path is
>>>>> too complicated/deep to perform well, we should rethink it.
>>>>> Adding arbitrary batch sizes doesn't sound like a good way to go to me.
>>>>
>>>> Ok, I will try
>>>>
>>>
>>> Looping in mem_cgroup_resize_limit() takes memcg_limit_mutex on every
>>> iteration, which contends with limit changes in other cgroups (on our
>>> systems, thousands of them), so calling try_to_free_mem_cgroup_pages()
>>> for less than SWAP_CLUSTER_MAX pages at a time is lame.
>>
>> Well, if the global lock is a bottleneck in your deployments then we
>> can come up with something more clever, e.g. per-hierarchy locking,
>> or even drop the lock for the reclaim altogether. If we reclaim in
>> SWAP_CLUSTER_MAX batches, the potential over-reclaim risk is quite low
>> even when multiple users are shrinking the same (sub)hierarchy.
>>
>
> I don't believe this to be a bottleneck if nr_pages is increased in
> mem_cgroup_resize_limit().
>
>>> It would probably be best, though, to limit nr_pages to the amount that
>>> actually needs to be reclaimed, rather than over-reclaiming.
>>
>> How do you achieve that? The charging path is not synchronized with the
>> shrinking one at all.
>>
>
> The point is to get a better guess, up to SWAP_CLUSTER_MAX, at how many
> pages need to be reclaimed, instead of always assuming 1.
>
>>> If you wanted to be invasive, you could change page_counter_limit() to
>>> return count - limit, fix up the callers that look for -EBUSY, and
>>> then use max(val, SWAP_CLUSTER_MAX) as your nr_pages.
>>
>> I am not sure I understand
>>
>
> Have page_counter_limit() return the number of pages over the limit, i.e.
> count - limit, since it compares the two anyway. Fix up existing callers
> and then clamp that value to SWAP_CLUSTER_MAX in
> mem_cgroup_resize_limit(). It's a more accurate guess than either 1 or
> 1024.
>
JFYI, it's never 1, it's always SWAP_CLUSTER_MAX.
See try_to_free_mem_cgroup_pages():
	....
	struct scan_control sc = {
		.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
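
(A hypothetical sketch of the change David describes above; the real
page_counter_limit() in mm/page_counter.c returns 0 or -EBUSY, sets the
limit with xchg() and re-checks the count to stay coherent with
page_counter_try_charge(), and all of its -EBUSY callers would need
fixing up. The name page_counter_limit_overage() is made up for
illustration.)

	/*
	 * Hypothetical variant: return how many pages the current usage
	 * exceeds the new limit by, instead of -EBUSY, so the caller can
	 * size its reclaim request.
	 */
	static long page_counter_limit_overage(struct page_counter *counter,
					       unsigned long limit)
	{
		long count = atomic_long_read(&counter->count);

		if (count > (long)limit)
			return count - limit;	/* pages to reclaim */

		counter->limit = limit;	/* simplified: real code uses xchg() */
		return 0;
	}

	/* ... and in mem_cgroup_resize_limit(): */
	over = page_counter_limit_overage(&memcg->memory, limit);
	if (over > 0)
		try_to_free_mem_cgroup_pages(memcg,
				max_t(unsigned long, over, SWAP_CLUSTER_MAX),
				GFP_KERNEL, true);
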
Thread overview: 17+ messages
2018-03-19 8:29 Li RongQing
2018-03-19 8:53 ` Michal Hocko
[not found] ` <2AD939572F25A448A3AE3CAEA61328C23745764B@BC-MAIL-M28.internal.baidu.com>
2018-03-19 10:37 ` Re: " Michal Hocko
2018-03-19 10:51 ` Re: " Li,Rongqing
2018-03-19 17:51 ` David Rientjes
2018-03-20 8:39 ` Michal Hocko
2018-03-20 20:29 ` David Rientjes
2018-03-20 22:08 ` Andrey Ryabinin [this message]
2018-03-20 22:15 ` David Rientjes
2018-03-20 22:35 ` Andrey Ryabinin
2018-03-20 22:45 ` David Rientjes
2018-03-21 9:59 ` Michal Hocko
2018-03-23 2:58 ` Li,Rongqing
2018-03-23 10:08 ` Michal Hocko
2018-03-23 12:04 ` Re: " Li,Rongqing
2018-03-23 12:29 ` Michal Hocko
2018-03-23 10:34 ` Michal Hocko