LKML Archive on lore.kernel.org
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>,
	"menage@google.com" <menage@google.com>,
	"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
	"lizf@cn.fujitsu.com" <lizf@cn.fujitsu.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: [PATCH] [BUGFIX]cgroup: fix potential deadlock in pre_destroy (v2)
Date: Wed, 12 Nov 2008 16:32:56 +0900	[thread overview]
Message-ID: <20081112163256.b36d6952.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20081112133002.15c929c3.kamezawa.hiroyu@jp.fujitsu.com>

This is the fixed one. Thank you for all the help.

Regards,
-Kame
==
As Balbir pointed out, memcg's pre_destroy handler has a potential
deadlock.

It has the following lock sequence:

	cgroup_mutex (cgroup_rmdir)
	    -> pre_destroy -> mem_cgroup_pre_destroy -> force_empty
	    -> cpu_hotplug.lock
	       (lru_add_drain_all -> schedule_work -> get_online_cpus)

But cpuset has the following:

	cpu_hotplug.lock (call notifier)
	    -> cgroup_mutex (within notifier)

So this lock ordering must be fixed. Considering how pre_destroy works,
it is not necessary to hold cgroup_mutex while calling it. As a side
effect, we no longer have to wait on this mutex while memcg's
force_empty runs (which can take a long time when there are tons of
pages).

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 kernel/cgroup.c |   14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

Index: mmotm-2.6.28-Nov10/kernel/cgroup.c
===================================================================
--- mmotm-2.6.28-Nov10.orig/kernel/cgroup.c
+++ mmotm-2.6.28-Nov10/kernel/cgroup.c
@@ -2475,10 +2475,7 @@ static int cgroup_rmdir(struct inode *un
 		mutex_unlock(&cgroup_mutex);
 		return -EBUSY;
 	}
-
-	parent = cgrp->parent;
-	root = cgrp->root;
-	sb = root->sb;
+	mutex_unlock(&cgroup_mutex);
 
 	/*
 	 * Call pre_destroy handlers of subsys. Notify subsystems
@@ -2486,7 +2483,14 @@ static int cgroup_rmdir(struct inode *un
 	 */
 	cgroup_call_pre_destroy(cgrp);
 
-	if (cgroup_has_css_refs(cgrp)) {
+	mutex_lock(&cgroup_mutex);
+	parent = cgrp->parent;
+	root = cgrp->root;
+	sb = root->sb;
+
+	if (atomic_read(&cgrp->count)
+		|| !list_empty(&cgrp->children)
+		|| cgroup_has_css_refs(cgrp)) {
 		mutex_unlock(&cgroup_mutex);
 		return -EBUSY;
 	}
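[For readers tracing the lock ordering in the changelog, below is a
minimal userspace sketch of the same ABBA pattern. This is not kernel
code: the pthread mutexes and the rmdir_path()/hotplug_path() helpers
are made-up stand-ins for cgroup_mutex, cpu_hotplug.lock and the two
call chains described above, shown only to illustrate why the two
orderings can deadlock.]

/*
 * Sketch only -- userspace analogue of the ABBA ordering above.
 * Thread A mimics cgroup_rmdir(): cgroup_mutex -> cpu_hotplug.lock.
 * Thread B mimics the hotplug notifier path: cpu_hotplug.lock ->
 * cgroup_mutex. If both threads take their first lock before either
 * takes its second, neither can make progress.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t cgroup_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cpu_hotplug  = PTHREAD_MUTEX_INITIALIZER;

static void *rmdir_path(void *arg)	/* cgroup_rmdir -> pre_destroy */
{
	(void)arg;
	pthread_mutex_lock(&cgroup_mutex);
	usleep(1000);			/* widen the race window */
	/* lru_add_drain_all() -> schedule_work() -> get_online_cpus() */
	pthread_mutex_lock(&cpu_hotplug);
	pthread_mutex_unlock(&cpu_hotplug);
	pthread_mutex_unlock(&cgroup_mutex);
	return NULL;
}

static void *hotplug_path(void *arg)	/* cpu hotplug -> cpuset notifier */
{
	(void)arg;
	pthread_mutex_lock(&cpu_hotplug);
	usleep(1000);
	pthread_mutex_lock(&cgroup_mutex);	/* notifier takes cgroup_mutex */
	pthread_mutex_unlock(&cgroup_mutex);
	pthread_mutex_unlock(&cpu_hotplug);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, rmdir_path, NULL);
	pthread_create(&b, NULL, hotplug_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	/* with the buggy ordering, this line may never be reached */
	puts("no deadlock this run");
	return 0;
}

[The patch avoids the inversion by dropping cgroup_mutex before the
pre_destroy handlers run. Because the group is briefly unlocked during
that window, the rmdir path re-takes the mutex afterwards and re-checks
cgrp->count and cgrp->children before continuing, as the second hunk of
the diff shows.]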
Thread overview: 7+ messages

2008-11-12  4:30 [PATCH] [BUGFIX]cgroup: fix potential deadlock in pre_destroy KAMEZAWA Hiroyuki
2008-11-12  4:53 ` Balbir Singh
2008-11-12  4:55 ` KAMEZAWA Hiroyuki
2008-11-12  6:58 ` Li Zefan
2008-11-12  8:15 ` KAMEZAWA Hiroyuki
2008-11-12  7:32 ` KAMEZAWA Hiroyuki [this message]
2008-11-12 11:26 ` [PATCH] [BUGFIX]cgroup: fix potential deadlock in pre_destroy (v2) Balbir Singh