LKML Archive on lore.kernel.org
* [Patch](memory hotplug) Make kmem_cache_node for SLUB on memory online to avoid panic(take 3)
@ 2007-10-18  3:25 Yasunori Goto
  2007-10-18  3:46 ` Andrew Morton
  2007-10-23  4:21 ` [PATCH] Fix warning in mm/slub.c Olof Johansson
  0 siblings, 2 replies; 11+ messages in thread
From: Yasunori Goto @ 2007-10-18  3:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Christoph Lameter, Linux Kernel ML, linux-mm


This patch fixes a panic caused by a NULL pointer dereference of
kmem_cache_node in discard_slab() after memory online.

When memory online happens, kmem_cache_node structures are created
for all SLUB caches on the new node whose memory has become available.

slab_mem_going_online_callback() is called to create the
kmem_cache_node structures in the callback for the memory online
event. If it (or another callback) fails, slab_mem_offline_callback()
is called for rollback.

On memory offline, slab_mem_going_offline_callback() is called to
shrink all SLUB caches, then slab_mem_offline_callback() is called
later.

This patch has been tested on my ia64 box.

Please apply.

Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>


---
 mm/slub.c |  115 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 115 insertions(+)

Index: current/mm/slub.c
===================================================================
--- current.orig/mm/slub.c	2007-10-17 21:17:53.000000000 +0900
+++ current/mm/slub.c	2007-10-17 22:23:08.000000000 +0900
@@ -20,6 +20,7 @@
 #include <linux/mempolicy.h>
 #include <linux/ctype.h>
 #include <linux/kallsyms.h>
+#include <linux/memory.h>
 
 /*
  * Lock order:
@@ -2688,6 +2689,118 @@ int kmem_cache_shrink(struct kmem_cache 
 }
 EXPORT_SYMBOL(kmem_cache_shrink);
 
+#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)
+static int slab_mem_going_offline_callback(void *arg)
+{
+	struct kmem_cache *s;
+
+	down_read(&slub_lock);
+	list_for_each_entry(s, &slab_caches, list)
+		kmem_cache_shrink(s);
+	up_read(&slub_lock);
+
+	return 0;
+}
+
+static void slab_mem_offline_callback(void *arg)
+{
+	struct kmem_cache_node *n;
+	struct kmem_cache *s;
+	struct memory_notify *marg = arg;
+	int offline_node;
+
+	offline_node = marg->status_change_nid;
+
+	/*
+	 * If the node still has available memory, we still need its
+	 * kmem_cache_node structure, so there is nothing to free.
+	 */
+	if (offline_node < 0)
+		return;
+
+	down_read(&slub_lock);
+	list_for_each_entry(s, &slab_caches, list) {
+		n = get_node(s, offline_node);
+		if (n) {
+			/*
+			 * If n->nr_slabs > 0, slabs still exist on the node
+			 * that is going down. We were unable to free them,
+			 * and offline_pages() shouldn't have called this
+			 * callback, so we must fail.
+			 */
+			BUG_ON(atomic_read(&n->nr_slabs));
+
+			s->node[offline_node] = NULL;
+			kmem_cache_free(kmalloc_caches, n);
+		}
+	}
+	up_read(&slub_lock);
+}
+
+static int slab_mem_going_online_callback(void *arg)
+{
+	struct kmem_cache_node *n;
+	struct kmem_cache *s;
+	struct memory_notify *marg = arg;
+	int nid = marg->status_change_nid;
+
+	/*
+	 * If the node's memory is already available, then kmem_cache_node is
+	 * already created. Nothing to do.
+	 */
+	if (nid < 0)
+		return 0;
+
+	/*
+	 * We are bringing a node online. No memory is available yet. We must
+	 * allocate a kmem_cache_node structure in order to bring the node
+	 * online.
+	 */
+	down_read(&slub_lock);
+	list_for_each_entry(s, &slab_caches, list) {
+		/*
+		 * XXX: kmem_cache_alloc_node() will fall back to other nodes
+		 *      since memory is not yet available from the node that
+		 *      is being brought up.
+		 */
+		n = kmem_cache_alloc(kmalloc_caches, GFP_KERNEL);
+		if (!n)
+			return -ENOMEM;
+		init_kmem_cache_node(n);
+		s->node[nid] = n;
+	}
+	up_read(&slub_lock);
+
+	return 0;
+}
+
+static int slab_memory_callback(struct notifier_block *self,
+				unsigned long action, void *arg)
+{
+	int ret = 0;
+
+	switch (action) {
+	case MEM_GOING_ONLINE:
+		ret = slab_mem_going_online_callback(arg);
+		break;
+	case MEM_GOING_OFFLINE:
+		ret = slab_mem_going_offline_callback(arg);
+		break;
+	case MEM_OFFLINE:
+	case MEM_CANCEL_ONLINE:
+		slab_mem_offline_callback(arg);
+		break;
+	case MEM_ONLINE:
+	case MEM_CANCEL_OFFLINE:
+		break;
+	}
+
+	ret = notifier_from_errno(ret);
+	return ret;
+}
+
+#endif /* CONFIG_NUMA && CONFIG_MEMORY_HOTPLUG */
+
 /********************************************************************
  *			Basic setup of slabs
  *******************************************************************/
@@ -2709,6 +2822,8 @@ void __init kmem_cache_init(void)
 		sizeof(struct kmem_cache_node), GFP_KERNEL);
 	kmalloc_caches[0].refcount = -1;
 	caches++;
+
+	hotplug_memory_notifier(slab_memory_callback, 1);
 #endif
 
 	/* Able to allocate the per node structures */

-- 
Yasunori Goto 




* Re: [Patch](memory hotplug) Make kmem_cache_node for SLUB on memory online to avoid panic(take 3)
  2007-10-18  3:25 [Patch](memory hotplug) Make kmem_cache_node for SLUB on memory online to avoid panic(take 3) Yasunori Goto
@ 2007-10-18  3:46 ` Andrew Morton
  2007-10-18  6:25   ` Christoph Lameter
  2007-10-18  9:20   ` Yasunori Goto
  2007-10-23  4:21 ` [PATCH] Fix warning in mm/slub.c Olof Johansson
  1 sibling, 2 replies; 11+ messages in thread
From: Andrew Morton @ 2007-10-18  3:46 UTC (permalink / raw)
  To: Yasunori Goto; +Cc: Christoph Lameter, Linux Kernel ML, linux-mm

On Thu, 18 Oct 2007 12:25:37 +0900 Yasunori Goto <y-goto@jp.fujitsu.com> wrote:

> 
> This patch fixes panic due to access NULL pointer
> of kmem_cache_node at discard_slab() after memory online.
> 
> When memory online is called, kmem_cache_nodes are created for
> all SLUBs for new node whose memory are available.
> 
> slab_mem_going_online_callback() is called to make kmem_cache_node()
> in callback of memory online event. If it (or other callbacks) fails,
> then slab_mem_offline_callback() is called for rollback.
> 
> In memory offline, slab_mem_going_offline_callback() is called to
> shrink all slub cache, then slab_mem_offline_callback() is called later.
> 
> This patch is tested on my ia64 box.
> 
> ...
>  
> +#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)

hm.  There should be no linkage between memory hotpluggability and
NUMA, surely?

> +static int slab_mem_going_offline_callback(void *arg)
> +{
> +	struct kmem_cache *s;
> +
> +	down_read(&slub_lock);
> +	list_for_each_entry(s, &slab_caches, list)
> +		kmem_cache_shrink(s);
> +	up_read(&slub_lock);
> +
> +	return 0;
> +}
> +
> +static void slab_mem_offline_callback(void *arg)
> +{
> +	struct kmem_cache_node *n;
> +	struct kmem_cache *s;
> +	struct memory_notify *marg = arg;
> +	int offline_node;
> +
> +	offline_node = marg->status_change_nid;
> +
> +	/*
> +	 * If the node still has available memory. we need kmem_cache_node
> +	 * for it yet.
> +	 */
> +	if (offline_node < 0)
> +		return;
> +
> +	down_read(&slub_lock);
> +	list_for_each_entry(s, &slab_caches, list) {
> +		n = get_node(s, offline_node);
> +		if (n) {
> +			/*
> +			 * if n->nr_slabs > 0, slabs still exist on the node
> +			 * that is going down. We were unable to free them,
> +			 * and offline_pages() function shoudn't call this
> +			 * callback. So, we must fail.
> +			 */
> +			BUG_ON(atomic_read(&n->nr_slabs));

Experience tells us that WARN_ON is preferred for newly added code ;)

> +			s->node[offline_node] = NULL;
> +			kmem_cache_free(kmalloc_caches, n);
> +		}
> +	}
> +	up_read(&slub_lock);
> +}
> +
> +static int slab_mem_going_online_callback(void *arg)
> +{
> +	struct kmem_cache_node *n;
> +	struct kmem_cache *s;
> +	struct memory_notify *marg = arg;
> +	int nid = marg->status_change_nid;
> +
> +	/*
> +	 * If the node's memory is already available, then kmem_cache_node is
> +	 * already created. Nothing to do.
> +	 */
> +	if (nid < 0)
> +		return 0;
> +
> +	/*
> +	 * We are bringing a node online. No memory is availabe yet. We must
> +	 * allocate a kmem_cache_node structure in order to bring the node
> +	 * online.
> +	 */
> +	down_read(&slub_lock);
> +	list_for_each_entry(s, &slab_caches, list) {
> +  		/*
> +		 * XXX: kmem_cache_alloc_node will fallback to other nodes
> +		 *      since memory is not yet available from the node that
> +		 *      is brought up.
> +  		 */
> +		n = kmem_cache_alloc(kmalloc_caches, GFP_KERNEL);
> +		if (!n)
> +			return -ENOMEM;

err, we forgot slub_lock.  I'll fix that.

> +		init_kmem_cache_node(n);
> +		s->node[nid] = n;
> +  	}
> +	up_read(&slub_lock);
> +
> +  	return 0;
> +}

So that's slub.  Does slab already have this functionality or are you
not bothering to maintain slab in this area?



* Re: [Patch](memory hotplug) Make kmem_cache_node for SLUB on memory online to avoid panic(take 3)
  2007-10-18  3:46 ` Andrew Morton
@ 2007-10-18  6:25   ` Christoph Lameter
  2007-10-18  7:00     ` Andrew Morton
  2007-10-18  9:20   ` Yasunori Goto
  1 sibling, 1 reply; 11+ messages in thread
From: Christoph Lameter @ 2007-10-18  6:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Yasunori Goto, Linux Kernel ML, linux-mm

On Wed, 17 Oct 2007, Andrew Morton wrote:

> > +#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)
> 
> hm.  There should be no linkage between memory hotpluggability and
> NUMA, surely?

NUMA support in the slab allocators requires allocation of per node 
structures. The per node structures are folded into the global structure 
for non-NUMA.

> > +			/*
> > +			 * if n->nr_slabs > 0, slabs still exist on the node
> > +			 * that is going down. We were unable to free them,
> > +			 * and offline_pages() function shoudn't call this
> > +			 * callback. So, we must fail.
> > +			 */
> > +			BUG_ON(atomic_read(&n->nr_slabs));
> 
> Experience tells us that WARN_ON is preferred for newly added code ;)

It would be bad to just zap a per node array while there is still data in 
there. This will cause later failures when an attempt is made to free the 
objects that now have no per node structure anymore.

> > +  		/*
> > +		 * XXX: kmem_cache_alloc_node will fallback to other nodes
> > +		 *      since memory is not yet available from the node that
> > +		 *      is brought up.
> > +  		 */
> > +		n = kmem_cache_alloc(kmalloc_caches, GFP_KERNEL);
> > +		if (!n)
> > +			return -ENOMEM;
> 
> err, we forgot slub_lock.  I'll fix that.

Right.

> So that's slub.  Does slab already have this functionality or are you
> not bothering to maintain slab in this area?

Slab brings up a per node structure when the corresponding cpu is brought 
up. That was sufficient as long as we did not have any memoryless nodes. 
Now we may have to fix some things over there as well.



* Re: [Patch](memory hotplug) Make kmem_cache_node for SLUB on memory online to avoid panic(take 3)
  2007-10-18  6:25   ` Christoph Lameter
@ 2007-10-18  7:00     ` Andrew Morton
  2007-10-18  8:33       ` Yasunori Goto
  2007-10-18  9:13       ` Christoph Lameter
  0 siblings, 2 replies; 11+ messages in thread
From: Andrew Morton @ 2007-10-18  7:00 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Yasunori Goto, Linux Kernel ML, linux-mm

On Wed, 17 Oct 2007 23:25:58 -0700 (PDT) Christoph Lameter <clameter@sgi.com> wrote:

> > So that's slub.  Does slab already have this functionality or are you
> > not bothering to maintain slab in this area?
> 
> Slab brings up a per node structure when the corresponding cpu is brought 
> up. That was sufficient as long as we did not have any memoryless nodes. 
> Now we may have to fix some things over there as well.

Is there any point?  Our time would be better spent in making
slab.c go away.  How close are we to being able to do that anyway?


* Re: [Patch](memory hotplug) Make kmem_cache_node for SLUB on memory online to avoid panic(take 3)
  2007-10-18  7:00     ` Andrew Morton
@ 2007-10-18  8:33       ` Yasunori Goto
  2007-10-18  9:13       ` Christoph Lameter
  1 sibling, 0 replies; 11+ messages in thread
From: Yasunori Goto @ 2007-10-18  8:33 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Christoph Lameter, Linux Kernel ML, linux-mm

> On Wed, 17 Oct 2007 23:25:58 -0700 (PDT) Christoph Lameter <clameter@sgi.com> wrote:
> 
> > > So that's slub.  Does slab already have this functionality or are you
> > > not bothering to maintain slab in this area?
> > 
> > Slab brings up a per node structure when the corresponding cpu is brought 
> > up. That was sufficient as long as we did not have any memoryless nodes. 

Right. At least, I haven't seen any panic with SLAB so far.
(If one had occurred, I would already have made a patch.)

> > Now we may have to fix some things over there as well.

Though a fix may be worthwhile there too, its priority is very low
for me now.



-- 
Yasunori Goto 




* Re: [Patch](memory hotplug) Make kmem_cache_node for SLUB on memory online to avoid panic(take 3)
  2007-10-18  7:00     ` Andrew Morton
  2007-10-18  8:33       ` Yasunori Goto
@ 2007-10-18  9:13       ` Christoph Lameter
  1 sibling, 0 replies; 11+ messages in thread
From: Christoph Lameter @ 2007-10-18  9:13 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Yasunori Goto, Linux Kernel ML, linux-mm

On Thu, 18 Oct 2007, Andrew Morton wrote:

> > Slab brings up a per node structure when the corresponding cpu is brought 
> > up. That was sufficient as long as we did not have any memoryless nodes. 
> > Now we may have to fix some things over there as well.
> 
> Is there amy point?  Our time would be better spent in making
> slab.c go away.  How close are we to being able to do that anwyay?

Well the problem right now is the regression in slab_free() on SMP. 
AFAICT UP and NUMA is fine and also most loads under SMP. Concurrent 
allocation / frees on multiple processors are several times faster (I see 
up to 10 fold improvements on an 8p).

However, long sequences of free operations from a single processor under 
SMP require too many atomic operations compared with SLAB. If I only do 
frees on a single processor on SMP then I can produce a 30% regression for 
slabs between 128 and 1024 byte in size. I have a patchset in the works 
that reduces the atomic operations for those.

SLAB currently has an advantage since it uses coarser grained locking. 
SLAB can take a global lock and then perform queue operations on 
multiple objects. SLUB has fine grained locking which increases 
concurrency but also the overhead of atomic operations.

The regression does not surface under UP since we do not need to do 
locking. And it does not surface under NUMA since the alien cache stuff in 
SLAB is reducing slab_free performance compared to SMP.



* Re: [Patch](memory hotplug) Make kmem_cache_node for SLUB on memory online to avoid panic(take 3)
  2007-10-18  3:46 ` Andrew Morton
  2007-10-18  6:25   ` Christoph Lameter
@ 2007-10-18  9:20   ` Yasunori Goto
  1 sibling, 0 replies; 11+ messages in thread
From: Yasunori Goto @ 2007-10-18  9:20 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Christoph Lameter, Linux Kernel ML, linux-mm

> On Thu, 18 Oct 2007 12:25:37 +0900 Yasunori Goto <y-goto@jp.fujitsu.com> wrote:
> 
> > 
> > This patch fixes panic due to access NULL pointer
> > of kmem_cache_node at discard_slab() after memory online.
> > 
> > When memory online is called, kmem_cache_nodes are created for
> > all SLUBs for new node whose memory are available.
> > 
> > slab_mem_going_online_callback() is called to make kmem_cache_node()
> > in callback of memory online event. If it (or other callbacks) fails,
> > then slab_mem_offline_callback() is called for rollback.
> > 
> > In memory offline, slab_mem_going_offline_callback() is called to
> > shrink all slub cache, then slab_mem_offline_callback() is called later.
> > 
> > This patch is tested on my ia64 box.
> > 
> > ...
> >  
> > +#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)
> 
> hm.  There should be no linkage between memory hotpluggability and
> NUMA, surely?

Sure. IBM's powerpc boxes have to support memory hotplug even on
non-NUMA machines; they have the Dynamic Logical Partitioning feature.

> > +	down_read(&slub_lock);
> > +	list_for_each_entry(s, &slab_caches, list) {
> > +		n = get_node(s, offline_node);
> > +		if (n) {
> > +			/*
> > +			 * if n->nr_slabs > 0, slabs still exist on the node
> > +			 * that is going down. We were unable to free them,
> > +			 * and offline_pages() function shoudn't call this
> > +			 * callback. So, we must fail.
> > +			 */
> > +			BUG_ON(atomic_read(&n->nr_slabs));
> 
> Experience tells us that WARN_ON is preferred for newly added code ;)

Oh... Ok!

> > +			s->node[offline_node] = NULL;
> > +			kmem_cache_free(kmalloc_caches, n);
> > +		}
> > +	}
> > +	up_read(&slub_lock);
> > +}
> > +
> > +static int slab_mem_going_online_callback(void *arg)
> > +{
> > +	struct kmem_cache_node *n;
> > +	struct kmem_cache *s;
> > +	struct memory_notify *marg = arg;
> > +	int nid = marg->status_change_nid;
> > +
> > +	/*
> > +	 * If the node's memory is already available, then kmem_cache_node is
> > +	 * already created. Nothing to do.
> > +	 */
> > +	if (nid < 0)
> > +		return 0;
> > +
> > +	/*
> > +	 * We are bringing a node online. No memory is availabe yet. We must
> > +	 * allocate a kmem_cache_node structure in order to bring the node
> > +	 * online.
> > +	 */
> > +	down_read(&slub_lock);
> > +	list_for_each_entry(s, &slab_caches, list) {
> > +  		/*
> > +		 * XXX: kmem_cache_alloc_node will fallback to other nodes
> > +		 *      since memory is not yet available from the node that
> > +		 *      is brought up.
> > +  		 */
> > +		n = kmem_cache_alloc(kmalloc_caches, GFP_KERNEL);
> > +		if (!n)
> > +			return -ENOMEM;
> 
> err, we forgot slub_lock.  I'll fix that.

Oops. Indeed. Thanks for your check.

Bye.

-- 
Yasunori Goto 




* [PATCH] Fix warning in mm/slub.c
  2007-10-18  3:25 [Patch](memory hotplug) Make kmem_cache_node for SLUB on memory online to avoid panic(take 3) Yasunori Goto
  2007-10-18  3:46 ` Andrew Morton
@ 2007-10-23  4:21 ` Olof Johansson
  2007-10-23  5:35   ` Yasunori Goto
  2007-10-23 16:21   ` Christoph Lameter
  1 sibling, 2 replies; 11+ messages in thread
From: Olof Johansson @ 2007-10-23  4:21 UTC (permalink / raw)
  To: Yasunori Goto; +Cc: Andrew Morton, Christoph Lameter, Linux Kernel ML, linux-mm

Hi,

"Make kmem_cache_node for SLUB on memory online to avoid panic" introduced
the following:

mm/slub.c:2737: warning: passing argument 1 of 'atomic_read' from
incompatible pointer type


Signed-off-by: Olof Johansson <olof@lixom.net>


diff --git a/mm/slub.c b/mm/slub.c
index aac1dd3..bcdb2c8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2734,7 +2734,7 @@ static void slab_mem_offline_callback(void *arg)
 			 * and offline_pages() function shoudn't call this
 			 * callback. So, we must fail.
 			 */
-			BUG_ON(atomic_read(&n->nr_slabs));
+			BUG_ON(atomic_long_read(&n->nr_slabs));
 
 			s->node[offline_node] = NULL;
 			kmem_cache_free(kmalloc_caches, n);


* Re: [PATCH] Fix warning in mm/slub.c
  2007-10-23  4:21 ` [PATCH] Fix warning in mm/slub.c Olof Johansson
@ 2007-10-23  5:35   ` Yasunori Goto
  2007-10-23  7:52     ` Pekka Enberg
  2007-10-23 16:21   ` Christoph Lameter
  1 sibling, 1 reply; 11+ messages in thread
From: Yasunori Goto @ 2007-10-23  5:35 UTC (permalink / raw)
  To: Olof Johansson
  Cc: Andrew Morton, Christoph Lameter, Linux Kernel ML, linux-mm

> "Make kmem_cache_node for SLUB on memory online to avoid panic" introduced
> the following:
> 
> mm/slub.c:2737: warning: passing argument 1 of 'atomic_read' from
> incompatible pointer type
> 
> 
> Signed-off-by: Olof Johansson <olof@lixom.net>
> 
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index aac1dd3..bcdb2c8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2734,7 +2734,7 @@ static void slab_mem_offline_callback(void *arg)
>  			 * and offline_pages() function shoudn't call this
>  			 * callback. So, we must fail.
>  			 */
> -			BUG_ON(atomic_read(&n->nr_slabs));
> +			BUG_ON(atomic_long_read(&n->nr_slabs));
>  
>  			s->node[offline_node] = NULL;
>  			kmem_cache_free(kmalloc_caches, n);


Oops, yes. Thanks.

Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>



-- 
Yasunori Goto 




* Re: [PATCH] Fix warning in mm/slub.c
  2007-10-23  5:35   ` Yasunori Goto
@ 2007-10-23  7:52     ` Pekka Enberg
  0 siblings, 0 replies; 11+ messages in thread
From: Pekka Enberg @ 2007-10-23  7:52 UTC (permalink / raw)
  To: Yasunori Goto
  Cc: Olof Johansson, Andrew Morton, Christoph Lameter,
	Linux Kernel ML, linux-mm

Hi,

On 10/23/07, Yasunori Goto <y-goto@jp.fujitsu.com> wrote:
> > "Make kmem_cache_node for SLUB on memory online to avoid panic" introduced
> > the following:
> >
> > mm/slub.c:2737: warning: passing argument 1 of 'atomic_read' from
> > incompatible pointer type
> >
> >
> > Signed-off-by: Olof Johansson <olof@lixom.net>
> >
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index aac1dd3..bcdb2c8 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2734,7 +2734,7 @@ static void slab_mem_offline_callback(void *arg)
> >                        * and offline_pages() function shoudn't call this
> >                        * callback. So, we must fail.
> >                        */
> > -                     BUG_ON(atomic_read(&n->nr_slabs));
> > +                     BUG_ON(atomic_long_read(&n->nr_slabs));
> >
> >                       s->node[offline_node] = NULL;
> >                       kmem_cache_free(kmalloc_caches, n);
>
>
> Oops, yes. Thanks.
>
> Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>

Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>


* Re: [PATCH] Fix warning in mm/slub.c
  2007-10-23  4:21 ` [PATCH] Fix warning in mm/slub.c Olof Johansson
  2007-10-23  5:35   ` Yasunori Goto
@ 2007-10-23 16:21   ` Christoph Lameter
  1 sibling, 0 replies; 11+ messages in thread
From: Christoph Lameter @ 2007-10-23 16:21 UTC (permalink / raw)
  To: Olof Johansson; +Cc: Yasunori Goto, Andrew Morton, Linux Kernel ML, linux-mm

Acked-by: Christoph Lameter <clameter@sgi.com>



