LKML Archive on lore.kernel.org
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Simek <monstr@monstr.eu>, Mike Rapoport <rppt@kernel.org>,
Mike Rapoport <rppt@linux.ibm.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/4] mm: introduce memmap_alloc() to unify memory map allocation
Date: Wed, 14 Jul 2021 15:37:38 +0300
Message-ID: <20210714123739.16493-4-rppt@kernel.org>
In-Reply-To: <20210714123739.16493-1-rppt@kernel.org>
From: Mike Rapoport <rppt@linux.ibm.com>

There are several places that allocate memory for the memory map:
alloc_node_mem_map() for FLATMEM, sparse_buffer_init() and
__populate_section_memmap() for SPARSEMEM.

The memory allocated in the FLATMEM case is zeroed and it is never
poisoned, regardless of the CONFIG_PAGE_POISONING setting.

The memory allocated in the SPARSEMEM cases is not zeroed and it is
implicitly poisoned inside memblock if CONFIG_PAGE_POISONING is set.

Introduce a memmap_alloc() wrapper for the memblock allocators that will
be used for both the FLATMEM and SPARSEMEM cases and will make memory map
zeroing and poisoning consistent across memory models.
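
For reference, the explicit poisoning applied by the new wrapper relies
on page_init_poison(); a minimal sketch of that helper, assuming the
mm/debug.c implementation of the time (a memset with PAGE_POISON_PATTERN
behind a boot-time switch), is:

	/*
	 * Sketch only, not part of this patch: poison a freshly
	 * allocated chunk of struct pages so reads of uninitialized
	 * fields are caught early. The actual helper lives in
	 * mm/debug.c and can be disabled on the kernel command line.
	 */
	void page_init_poison(struct page *page, size_t size)
	{
		if (page_init_poisoning)
			memset(page, PAGE_POISON_PATTERN, size);
	}
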
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
mm/internal.h | 4 ++++
mm/page_alloc.c | 24 ++++++++++++++++++++++--
mm/sparse.c | 6 ++----
3 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 31ff935b2547..57e28261a3b1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -211,6 +211,10 @@ extern void zone_pcp_reset(struct zone *zone);
extern void zone_pcp_disable(struct zone *zone);
extern void zone_pcp_enable(struct zone *zone);
+extern void *memmap_alloc(phys_addr_t size, phys_addr_t align,
+ phys_addr_t min_addr,
+ int nid, bool exact_nid);
+
#if defined CONFIG_COMPACTION || defined CONFIG_CMA
/*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 878d7af4403d..b82e55006894 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6730,6 +6730,26 @@ static void __init memmap_init(void)
init_unavailable_range(hole_pfn, end_pfn, zone_id, nid);
}
+void __init *memmap_alloc(phys_addr_t size, phys_addr_t align,
+ phys_addr_t min_addr, int nid, bool exact_nid)
+{
+ void *ptr;
+
+ if (exact_nid)
+ ptr = memblock_alloc_exact_nid_raw(size, align, min_addr,
+ MEMBLOCK_ALLOC_ACCESSIBLE,
+ nid);
+ else
+ ptr = memblock_alloc_try_nid_raw(size, align, min_addr,
+ MEMBLOCK_ALLOC_ACCESSIBLE,
+ nid);
+
+ if (ptr && size > 0)
+ page_init_poison(ptr, size);
+
+ return ptr;
+}
+
static int zone_batchsize(struct zone *zone)
{
#ifdef CONFIG_MMU
@@ -7501,8 +7521,8 @@ static void __ref alloc_node_mem_map(struct pglist_data *pgdat)
end = pgdat_end_pfn(pgdat);
end = ALIGN(end, MAX_ORDER_NR_PAGES);
size = (end - start) * sizeof(struct page);
- map = memblock_alloc_node(size, SMP_CACHE_BYTES,
- pgdat->node_id);
+ map = memmap_alloc(size, SMP_CACHE_BYTES, MEMBLOCK_LOW_LIMIT,
+ pgdat->node_id, false);
if (!map)
panic("Failed to allocate %ld bytes for node %d memory map\n",
size, pgdat->node_id);
diff --git a/mm/sparse.c b/mm/sparse.c
index 6326cdf36c4f..a5fad244ac5f 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -462,8 +462,7 @@ struct page __init *__populate_section_memmap(unsigned long pfn,
if (map)
return map;
- map = memblock_alloc_try_nid_raw(size, size, addr,
- MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+ map = memmap_alloc(size, size, addr, nid, false);
if (!map)
panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%pa\n",
__func__, size, PAGE_SIZE, nid, &addr);
@@ -490,8 +489,7 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
* and we want it to be properly aligned to the section size - this is
* especially the case for VMEMMAP which maps memmap to PMDs
*/
- sparsemap_buf = memblock_alloc_exact_nid_raw(size, section_map_size(),
- addr, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+ sparsemap_buf = memmap_alloc(size, section_map_size(), addr, nid, true);
sparsemap_buf_end = sparsemap_buf + size;
}
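
With all three call sites converted, any future memory map allocation can
be added the same way; a hypothetical conversion (illustrative only,
mirroring the sparse.c hunks above) would look like:

	/* Before: raw memblock call, poisoning decided inside memblock */
	map = memblock_alloc_try_nid_raw(size, align, min_addr,
					 MEMBLOCK_ALLOC_ACCESSIBLE, nid);

	/* After: zeroing and poisoning handled centrally in memmap_alloc() */
	map = memmap_alloc(size, align, min_addr, nid, false);
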
--
2.28.0
Thread overview:
2021-07-14 12:37 [PATCH 0/4] mm: ensure consistency of memory map poisoning Mike Rapoport
2021-07-14 12:37 ` [PATCH 1/4] mm/page_alloc: always initialize memory map for the holes Mike Rapoport
2021-07-31 16:56 ` Guenter Roeck
2021-07-31 18:30 ` Mike Rapoport
2021-07-31 19:11 ` Guenter Roeck
2021-08-25 12:11 ` David Hildenbrand
2021-07-14 12:37 ` [PATCH 2/4] microblaze: simplify pte_alloc_one_kernel() Mike Rapoport
2021-08-25 10:09 ` Michal Simek
2021-08-25 12:13 ` David Hildenbrand
2021-07-14 12:37 ` Mike Rapoport [this message]
2021-07-14 22:32 ` [PATCH 3/4] mm: introduce memmap_alloc() to unify memory map allocation Andrew Morton
2021-07-15 6:10 ` Mike Rapoport
2021-07-14 12:37 ` [PATCH 4/4] memblock: stop poisoning raw allocations Mike Rapoport
2021-07-31 17:13 ` Joe Perches
2021-08-03 7:58 ` Mike Rapoport
2021-08-03 16:19 ` Joe Perches