LKML Archive on lore.kernel.org
From: Yinghai Lu <Yinghai.Lu@Sun.COM>
To: Christoph Lameter <clameter@sgi.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Ingo Molnar <mingo@elte.hu>, Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH] x86_64: cleanup setup_node_zones called by paging_init v2
Date: Wed, 09 Jan 2008 10:30:40 -0800	[thread overview]
Message-ID: <200801091030.40545.yinghai.lu@sun.com> (raw)
In-Reply-To: <Pine.LNX.4.64.0801090947570.10163@schroedinger.engr.sgi.com>

[PATCH] x86_64: cleanup setup_node_zones called by paging_init v2

setup_node_zones() calculates several variables that are only used when CONFIG_FLAT_NODE_MEM_MAP is set.

So move the #ifdef to cover the whole function, so the calculation is avoided entirely when the option is off.

Also make the function static.
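
For illustration only, here is a minimal, self-contained sketch of the pattern the patch applies (the names and values below are made up, not the kernel's): guard both the helper's definition and its single call site with the same config symbol, so the intermediate values are never computed, and the helper is not even compiled, when the option is disabled.

  /* Standalone sketch of the #ifdef pattern; CONFIG_SOME_OPTION is a
   * stand-in for a Kconfig symbol, setup_one() for the guarded helper.
   */
  #include <stdio.h>

  #define CONFIG_SOME_OPTION	/* comment out to compile the helper away */

  #ifdef CONFIG_SOME_OPTION
  /* Only built when the option is on; "static" keeps it file-local. */
  static void setup_one(int nodeid)
  {
  	unsigned long size = (unsigned long)nodeid * 64;	/* only used here */

  	printf("node %d: size %lu\n", nodeid, size);
  }
  #endif

  int main(void)
  {
  #ifdef CONFIG_SOME_OPTION
  	int i;

  	for (i = 0; i < 4; i++)
  		setup_one(i);
  #endif
  	return 0;
  }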

Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>

Index: linux-2.6/arch/x86/mm/numa_64.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/numa_64.c
+++ linux-2.6/arch/x86/mm/numa_64.c
@@ -229,8 +229,9 @@ void __init setup_node_bootmem(int nodei
 	node_set_online(nodeid);
 } 
 
+#ifdef CONFIG_FLAT_NODE_MEM_MAP
 /* Initialize final allocator for a zone */
-void __init setup_node_zones(int nodeid)
+static void __init setup_node_zones(int nodeid)
 { 
 	unsigned long start_pfn, end_pfn, memmapsize, limit;
 
@@ -244,14 +245,14 @@ void __init setup_node_zones(int nodeid)
 	   memory. */
 	memmapsize = sizeof(struct page) * (end_pfn-start_pfn);
 	limit = end_pfn << PAGE_SHIFT;
-#ifdef CONFIG_FLAT_NODE_MEM_MAP
+
 	NODE_DATA(nodeid)->node_mem_map = 
 		__alloc_bootmem_core(NODE_DATA(nodeid)->bdata, 
 				memmapsize, SMP_CACHE_BYTES, 
 				round_down(limit - memmapsize, PAGE_SIZE), 
 				limit);
-#endif
 } 
+#endif
 
 void __init numa_init_array(void)
 {
@@ -570,9 +571,11 @@ void __init paging_init(void)
 	sparse_memory_present_with_active_regions(MAX_NUMNODES);
 	sparse_init();
 
+#ifdef CONFIG_FLAT_NODE_MEM_MAP
 	for_each_online_node(i) {
 		setup_node_zones(i); 
 	}
+#endif
 
 	free_area_init_nodes(max_zone_pfns);
 } 



Thread overview: 10+ messages
2008-01-09  3:34 [PATCH] x86_64: cleanup setup_node_zones called by paging_init Yinghai Lu
2008-01-09 17:49 ` Christoph Lameter
2008-01-09 18:30   ` Yinghai Lu [this message]
2008-01-09 19:11     ` [PATCH] x86_64: cleanup setup_node_zones called by paging_init v2 Dave Hansen
2008-01-09 19:19     ` Christoph Lameter
2008-01-09 20:34       ` [PATCH] x86_64: cleanup setup_node_zones called by paging_init v3 Yinghai Lu
2008-01-10 19:27         ` [PATCH] x86_64: cleanup setup_node_zones called by paging_init v4 Yinghai Lu
2008-01-12 11:26       ` [PATCH] x86_64: cleanup setup_node_zones called by paging_init v2 Yinghai Lu
2008-01-14 19:48         ` Christoph Lameter
2008-01-18 22:48           ` [PATCH] x86_64: only support sparsemem fix Yinghai Lu
