LKML Archive on lore.kernel.org
From: Johannes Weiner <hannes@cmpxchg.org>
To: akpm@linux-foundation.org
Cc: kamezawa.hiroyu@jp.fujitsu.com, nishimura@mxp.nes.nec.co.jp, balbir@linux.vnet.ibm.com, minchan.kim@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [patch 2/3] memcg: prevent endless loop when charging huge pages to near-limit group
Date: Mon, 31 Jan 2011 15:03:54 +0100
Message-ID: <1296482635-13421-3-git-send-email-hannes@cmpxchg.org>
In-Reply-To: <1296482635-13421-1-git-send-email-hannes@cmpxchg.org>

If reclaim after a failed charging was unsuccessful, the limits are
checked again, just in case they settled by means of other tasks.

This is all fine as long as every charge is of size PAGE_SIZE, because
in that case, being below the limit means having at least PAGE_SIZE
bytes available.

But with transparent huge pages, we may end up in an endless loop where
charging and reclaim fail, but we keep going because the limits are not
yet exceeded, even though they do not leave room for a huge page.

Fix this up by explicitly checking for enough room, not just whether we
are within limits.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/res_counter.h |   12 ++++++++++++
 mm/memcontrol.c             |   27 ++++++++++++++++++++-------
 2 files changed, 32 insertions(+), 7 deletions(-)

diff --git a/include/linux/res_counter.h b/include/linux/res_counter.h
index fcb9884..5cfd78a 100644
--- a/include/linux/res_counter.h
+++ b/include/linux/res_counter.h
@@ -182,6 +182,18 @@ static inline bool res_counter_check_under_limit(struct res_counter *cnt)
 	return ret;
 }
 
+static inline bool res_counter_check_margin(struct res_counter *cnt,
+					    unsigned long bytes)
+{
+	bool ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cnt->lock, flags);
+	ret = cnt->limit - cnt->usage >= bytes;
+	spin_unlock_irqrestore(&cnt->lock, flags);
+	return ret;
+}
+
 static inline bool res_counter_check_under_soft_limit(struct res_counter *cnt)
 {
 	bool ret;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 73ea323..c28072f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1111,6 +1111,15 @@ static bool mem_cgroup_check_under_limit(struct mem_cgroup *mem)
 	return false;
 }
 
+static bool mem_cgroup_check_margin(struct mem_cgroup *mem, unsigned long bytes)
+{
+	if (!res_counter_check_margin(&mem->res, bytes))
+		return false;
+	if (do_swap_account && !res_counter_check_margin(&mem->memsw, bytes))
+		return false;
+	return true;
+}
+
 static unsigned int get_swappiness(struct mem_cgroup *memcg)
 {
 	struct cgroup *cgrp = memcg->css.cgroup;
@@ -1852,15 +1861,19 @@ static int __mem_cgroup_do_charge(struct mem_cgroup *mem, gfp_t gfp_mask,
 		return CHARGE_WOULDBLOCK;
 
 	ret = mem_cgroup_hierarchical_reclaim(mem_over_limit, NULL,
-					gfp_mask, flags);
+					      gfp_mask, flags);
+	if (mem_cgroup_check_margin(mem_over_limit, csize))
+		return CHARGE_RETRY;
 	/*
-	 * try_to_free_mem_cgroup_pages() might not give us a full
-	 * picture of reclaim. Some pages are reclaimed and might be
-	 * moved to swap cache or just unmapped from the cgroup.
-	 * Check the limit again to see if the reclaim reduced the
-	 * current usage of the cgroup before giving up
+	 * Even though the limit is exceeded at this point, reclaim
+	 * may have been able to free some pages.  Retry the charge
+	 * before killing the task.
+	 *
+	 * Only for regular pages, though: huge pages are rather
+	 * unlikely to succeed so close to the limit, and we fall back
+	 * to regular pages anyway in case of failure.
 	 */
-	if (ret || mem_cgroup_check_under_limit(mem_over_limit))
+	if (csize == PAGE_SIZE && ret)
 		return CHARGE_RETRY;
 
 	/*
--
1.7.3.5
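Editor's note: to make the failure mode concrete, here is a minimal userspace
sketch, not kernel code. The struct and helper names below are simplified
stand-ins for the kernel's res_counter and memcg helpers; it only illustrates
why "usage is below the limit" is not the same as "there is room for a 2MB
charge", which is what allowed the old retry condition to loop forever on
huge-page charges.

/*
 * Simplified stand-ins for res_counter-style accounting; assumes
 * usage <= limit, as in the patched check.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define HPAGE_SIZE	(512 * PAGE_SIZE)	/* 2MB huge page */

struct counter {
	unsigned long usage;
	unsigned long limit;
};

/* Old-style check: are we below the limit at all? */
static bool check_under_limit(const struct counter *c)
{
	return c->usage < c->limit;
}

/* New-style check: is there room for a charge of 'bytes'? */
static bool check_margin(const struct counter *c, unsigned long bytes)
{
	return c->limit - c->usage >= bytes;
}

int main(void)
{
	/* A group sitting one page below its limit. */
	struct counter memcg = {
		.limit = 100 * HPAGE_SIZE,
		.usage = 100 * HPAGE_SIZE - PAGE_SIZE,
	};

	printf("under limit:          %d\n", check_under_limit(&memcg));
	printf("margin for huge page: %d\n", check_margin(&memcg, HPAGE_SIZE));
	printf("margin for one page:  %d\n", check_margin(&memcg, PAGE_SIZE));
	return 0;
}

With the old condition, the group above would be retried indefinitely: it is
below its limit, so the check reports progress even when reclaim could not
free enough room for the 2MB charge. The margin check instead asks for the
actual charge size and lets the huge-page attempt give up and fall back.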
Thread overview: 21+ messages in thread

2011-01-31 14:03 Johannes Weiner
2011-01-31 14:03 ` [patch 1/3] memcg: prevent endless loop when charging huge pages Johannes Weiner
2011-01-31 22:27 ` Minchan Kim
2011-01-31 23:48 ` KAMEZAWA Hiroyuki
2011-01-31 14:03 ` Johannes Weiner [this message]
2011-01-31 22:41 ` [patch 2/3] memcg: prevent endless loop when charging huge pages to near-limit group Andrew Morton
2011-01-31 23:50 ` KAMEZAWA Hiroyuki
2011-02-01 0:04 ` Johannes Weiner
2011-02-01 0:24 ` Andrew Morton
2011-02-01 0:34 ` Johannes Weiner
2011-02-03 12:53 ` [patch 0/2] memcg: clean up limit checking Johannes Weiner
2011-02-03 12:54 ` [patch 1/2] memcg: soft limit reclaim should end at limit not below Johannes Weiner
2011-02-03 23:41 ` KAMEZAWA Hiroyuki
2011-02-04 4:10 ` Balbir Singh
2011-02-03 12:56 ` [patch 2/2] memcg: simplify the way memory limits are checked Johannes Weiner
2011-02-03 23:44 ` KAMEZAWA Hiroyuki
2011-02-04 4:12 ` Balbir Singh
2011-01-31 22:42 ` [patch 2/3] memcg: prevent endless loop when charging huge pages to near-limit group Minchan Kim
2011-01-31 14:03 ` [patch 3/3] memcg: never OOM when charging huge pages Johannes Weiner
2011-01-31 22:52 ` Minchan Kim
2011-01-31 23:51 ` KAMEZAWA Hiroyuki