* [git pull] slab fixes for 2.6.25-rc7
@ 2008-03-26 17:50 Christoph Lameter
From: Christoph Lameter @ 2008-03-26 17:50 UTC (permalink / raw)
  To: torvalds; +Cc: akpm, Pekka J Enberg, linux-kernel

Another fix to the memoryless node support in SLAB and a patch to
avoid a compiler warning in SLUB if neither CONFIG_SLABINFO nor
CONFIG_SLUB_DEBUG is set.
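
The warning arises because count_partial() is a static helper whose
only callers are compiled in under CONFIG_SLUB_DEBUG or
CONFIG_SLABINFO; with both disabled, gcc reports it as defined but
not used. A minimal standalone sketch of the same guard pattern (the
helper name and config symbols below are invented for illustration,
not taken from mm/slub.c):

  /* build with: gcc -Wall -c example.c */

  #if defined(CONFIG_FOO_DEBUG) || defined(CONFIG_FOO_INFO)
  /*
   * Without this guard, a configuration that compiles out every
   * caller leaves the static helper unreferenced and gcc emits
   * "'count_items' defined but not used".
   */
  static unsigned long count_items(const unsigned long *v, int n)
  {
  	unsigned long x = 0;
  	int i;

  	for (i = 0; i < n; i++)
  		x += v[i];
  	return x;
  }
  #endif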

The following changes since commit 05dda977f2574c3341abef9b74c27d2b362e1e3a:
  Linus Torvalds (1):
        Linux 2.6.25-rc7

are available in the git repository at:

  git://master.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git slab-linus

Christoph Lameter (1):
      count_partial() is not used if !SLUB_DEBUG and !CONFIG_SLABINFO

Daniel Yeisley (1):
      slab: fix cache_cache bootstrap in kmem_cache_init()

 mm/slab.c |    4 ++--
 mm/slub.c |    2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index bb4070e..04b308c 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1481,7 +1481,7 @@ void __init kmem_cache_init(void)
 	list_add(&cache_cache.next, &cache_chain);
 	cache_cache.colour_off = cache_line_size();
 	cache_cache.array[smp_processor_id()] = &initarray_cache.cache;
-	cache_cache.nodelists[node] = &initkmem_list3[CACHE_CACHE];
+	cache_cache.nodelists[node] = &initkmem_list3[CACHE_CACHE + node];
 
 	/*
 	 * struct kmem_cache size depends on nr_node_ids, which
@@ -1602,7 +1602,7 @@ void __init kmem_cache_init(void)
 		int nid;
 
 		for_each_online_node(nid) {
-			init_list(&cache_cache, &initkmem_list3[CACHE_CACHE], nid);
+			init_list(&cache_cache, &initkmem_list3[CACHE_CACHE + nid], nid);
 
 			init_list(malloc_sizes[INDEX_AC].cs_cachep,
 				  &initkmem_list3[SIZE_AC + nid], nid);
diff --git a/mm/slub.c b/mm/slub.c
index ca71d5b..b72bc98 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2685,6 +2685,7 @@ void kfree(const void *x)
 }
 EXPORT_SYMBOL(kfree);
 
+#if defined(SLUB_DEBUG) || defined(CONFIG_SLABINFO)
 static unsigned long count_partial(struct kmem_cache_node *n)
 {
 	unsigned long flags;
@@ -2697,6 +2698,7 @@ static unsigned long count_partial(struct kmem_cache_node *n)
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
 }
+#endif
 
 /*
  * kmem_cache_shrink removes empty slabs from the partial lists and sorts
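
The slab change makes the bootstrap nodelist lookup for cache_cache
per-node: initkmem_list3[] holds one bootstrap kmem_list3 per cache
per node, indexed as a base macro plus a node id (the same pattern
as the SIZE_AC + nid lookup above), so each online node gets its own
entry instead of every node sharing slot 0. A rough sketch of that
indexing, with the array size and macro values assumed for
illustration rather than copied from mm/slab.c:

  #define MAX_NUMNODES	4	/* assumed for the sketch */

  #define CACHE_CACHE	0
  #define SIZE_AC	MAX_NUMNODES
  #define SIZE_L3	(2 * MAX_NUMNODES)

  struct kmem_list3 { int dummy; };

  /* One bootstrap kmem_list3 per bootstrap cache per node. */
  static struct kmem_list3 initkmem_list3[3 * MAX_NUMNODES];

  /*
   * Per-node entry for cache_cache: CACHE_CACHE + node rather than
   * plain CACHE_CACHE, so distinct nodes map to distinct bootstrap
   * lists.
   */
  static struct kmem_list3 *cache_cache_list(int node)
  {
  	return &initkmem_list3[CACHE_CACHE + node];
  }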
