From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <guro@fb.com>, Uladzislau Rezki <urezki@gmail.com>,
Michal Hocko <mhocko@suse.com>,
Matthew Wilcox <willy@infradead.org>,
linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
Thomas Garnier <thgarnie@google.com>,
Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>,
Steven Rostedt <rostedt@goodmis.org>,
Joel Fernandes <joelaf@google.com>,
Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>,
Tejun Heo <tj@kernel.org>
Subject: [PATCH 2/4] mm/vmap: preload a CPU with one object for split purpose
Date: Wed, 22 May 2019 17:09:37 +0200
Message-ID: <20190522150939.24605-2-urezki@gmail.com>
In-Reply-To: <20190522150939.24605-1-urezki@gmail.com>
Introduce the ne_fit_preload()/ne_fit_preload_end() functions
for preloading one extra vmap_area object per CPU, so that it
is guaranteed to be available when the fit type is NE_FIT_TYPE.

The preload is done in non-atomic context using the permissive
GFP_KERNEL allocation mask, which makes the allocation path
more stable under low memory conditions and high memory
pressure.
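
For reference, the calling pattern this pair introduces, condensed
from the alloc_vmap_area() hunks below (an illustrative sketch only;
the authoritative sequence is in the diff):

	int preloaded;

	/*
	 * Non-atomic context: may sleep in kmem_cache_alloc()
	 * with GFP_KERNEL; leaves preemption disabled on success.
	 */
	ne_fit_preload(&preloaded);

	spin_lock(&vmap_area_lock);
	addr = __alloc_vmap_area(size, align, vstart, vend);

	/* Re-enables preemption only if the preload succeeded. */
	ne_fit_preload_end(preloaded);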
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
mm/vmalloc.c | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 78 insertions(+), 3 deletions(-)
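
A condensed view of the per-CPU slot handoff (a sketch extracted
from the hunks below for review convenience, not a separate change):

	/* Consume side, under vmap_area_lock, preemption disabled: */
	lva = __this_cpu_xchg(ne_fit_preload_node, NULL);

	/*
	 * Refill side: after sleeping in kmem_cache_alloc() the task
	 * may have migrated, so the slot of the current CPU can
	 * already be populated; if so, free the spare node.
	 */
	if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, node))
		kmem_cache_free(vmap_area_cachep, node);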
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ea1b65fac599..5302e1b79c7b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -364,6 +364,13 @@ static LIST_HEAD(free_vmap_area_list);
  */
 static struct rb_root free_vmap_area_root = RB_ROOT;
 
+/*
+ * Preload a CPU with one object for the "no edge" split case. The
+ * aim is to get rid of allocations in atomic context, so that more
+ * permissive allocation masks can be used.
+ */
+static DEFINE_PER_CPU(struct vmap_area *, ne_fit_preload_node);
+
static __always_inline unsigned long
va_size(struct vmap_area *va)
{
@@ -950,9 +957,24 @@ adjust_va_to_fit_type(struct vmap_area *va,
                  *   L V  NVA  V R
                  *   |---|-------|---|
                  */
-                lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
-                if (unlikely(!lva))
-                        return -1;
+                lva = __this_cpu_xchg(ne_fit_preload_node, NULL);
+                if (unlikely(!lva)) {
+                        /*
+                         * For the percpu allocator we do not do any
+                         * pre-allocation and leave it as it is. The reason
+                         * is that it most likely never ends up with
+                         * NE_FIT_TYPE splitting. In case of percpu
+                         * allocations, offsets and sizes are aligned to a
+                         * fixed align request, i.e. RE_FIT_TYPE and
+                         * FL_FIT_TYPE are its main fitting cases.
+                         *
+                         * There are a few exceptions though, for example
+                         * the first allocation (early boot up), when we
+                         * have "one" big free space that has to be split.
+                         */
+                        lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
+                        if (!lva)
+                                return -1;
+                }
 
                 /*
                  * Build the remainder.
@@ -1023,6 +1045,50 @@ __alloc_vmap_area(unsigned long size, unsigned long align,
}
 /*
+ * Preload this CPU with one extra vmap_area object to ensure
+ * that we have it available when the fit type of a free area
+ * is NE_FIT_TYPE.
+ *
+ * The preload is done in non-atomic context, thus it allows us
+ * to use more permissive allocation masks and therefore to be
+ * more stable under low memory conditions and high memory
+ * pressure.
+ *
+ * On success, "*preloaded" is set to 1 and preemption is left
+ * disabled; on failure it is set to 0 and preemption stays
+ * enabled. Note it has to be paired with ne_fit_preload_end().
+ */
+static void
+ne_fit_preload(int *preloaded)
+{
+        preempt_disable();
+
+        if (!__this_cpu_read(ne_fit_preload_node)) {
+                struct vmap_area *node;
+
+                preempt_enable();
+                node = kmem_cache_alloc(vmap_area_cachep, GFP_KERNEL);
+                if (!node) {
+                        *preloaded = 0;
+                        return;
+                }
+
+                preempt_disable();
+
+                /*
+                 * We may have been migrated to another CPU while
+                 * sleeping in the allocator, so the slot of this
+                 * CPU can already be populated. In that case free
+                 * the spare node.
+                 */
+                if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, node))
+                        kmem_cache_free(vmap_area_cachep, node);
+        }
+
+        *preloaded = 1;
+}
+
+static void
+ne_fit_preload_end(int preloaded)
+{
+        if (preloaded)
+                preempt_enable();
+}
+
+/*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend.
  */
@@ -1034,6 +1100,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
         struct vmap_area *va;
         unsigned long addr;
         int purged = 0;
+        int preloaded;
 
         BUG_ON(!size);
         BUG_ON(offset_in_page(size));
@@ -1056,6 +1123,12 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
         kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
 
 retry:
+        /*
+         * Even if the preload fails we do not really care;
+         * just proceed as is. The "overflow" path will refill
+         * the cache we allocate from.
+         */
+        ne_fit_preload(&preloaded);
         spin_lock(&vmap_area_lock);
 
         /*
@@ -1063,6 +1136,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
          * If an allocation fails, the "vend" address is
          * returned. Therefore trigger the overflow path.
          */
         addr = __alloc_vmap_area(size, align, vstart, vend);
+        ne_fit_preload_end(preloaded);
+
         if (unlikely(addr == vend))
                 goto overflow;
--
2.11.0