LKML Archive on lore.kernel.org
* [PATCH] mm/vmalloc: try alloc_pages_bulk first to get order 0 pages fast
@ 2021-07-09  9:28 Yang Huan
  2021-07-09  9:38 ` Mel Gorman
  0 siblings, 1 reply; 5+ messages in thread
From: Yang Huan @ 2021-07-09  9:28 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: kernel, Yang Huan

Vmalloc often gets its pages by calling alloc_pages() in a loop, which
repeats the watermark and cpuset accounting on every iteration and costs
too much time. Try alloc_pages_bulk_array() first; if it cannot fill the
whole request, fall back to the original per-page path.

In my own test, which simulates the allocation loop with alloc_page()
versus alloc_pages_bulk_array(), I measured:
size		1M	10M	20M	30M
normal		44	1278	3665	5581
test		34	889	2167	3300
optimize	22%	30%	40%	40%
In my sorted list of top vmalloc users, zram and f2fs may allocate more
than 20MB at once, so alloc_pages_bulk is worth using.

Signed-off-by: Yang Huan <link@vivo.com>
---
 mm/vmalloc.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a13ac524f6ff..b5af7b4e30bc 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2791,17 +2791,23 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	}
 
 	area->pages = pages;
-	area->nr_pages = nr_small_pages;
+	area->nr_pages = 0;
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
 
 	page_order = vm_area_page_order(area);
-
+	/* First, try the bulk allocator when the order is 0 */
+	if (!page_order) {
+		area->nr_pages = alloc_pages_bulk_array(
+			gfp_mask, nr_small_pages, area->pages);
+		if (likely(area->nr_pages == nr_small_pages))
+			goto success;
+	}
 	/*
 	 * Careful, we allocate and map page_order pages, but tracking is done
 	 * per PAGE_SIZE page so as to keep the vm_struct APIs independent of
 	 * the physical/mapped size.
 	 */
-	for (i = 0; i < area->nr_pages; i += 1U << page_order) {
+	for (i = area->nr_pages; i < nr_small_pages; i += 1U << page_order) {
 		struct page *page;
 		int p;
 
@@ -2824,6 +2830,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		if (gfpflags_allow_blocking(gfp_mask))
 			cond_resched();
 	}
+	area->nr_pages = nr_small_pages;
+success:
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
 	if (vmap_pages_range(addr, addr + size, prot, pages, page_shift) < 0) {
-- 
2.32.0


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH] mm/vmalloc: try alloc_pages_bulk first to get order 0 pages fast
  2021-07-09  9:28 [PATCH] mm/vmalloc: try alloc_pages_bulk first to get order 0 pages fast Yang Huan
@ 2021-07-09  9:38 ` Mel Gorman
  2021-07-09  9:53   ` 杨欢
  0 siblings, 1 reply; 5+ messages in thread
From: Mel Gorman @ 2021-07-09  9:38 UTC (permalink / raw)
  To: Yang Huan; +Cc: Andrew Morton, linux-mm, linux-kernel, kernel, Uladzislau Rezki

On Fri, Jul 09, 2021 at 05:28:31PM +0800, Yang Huan wrote:
> Vmalloc often gets its pages by calling alloc_pages() in a loop, which
> repeats the watermark and cpuset accounting on every iteration and costs
> too much time. Try alloc_pages_bulk_array() first; if it cannot fill the
> whole request, fall back to the original per-page path.
> 
> In my own test, which simulates the allocation loop with alloc_page()
> versus alloc_pages_bulk_array(), I measured:
> size		1M	10M	20M	30M
> normal		44	1278	3665	5581
> test		34	889	2167	3300
> optimize	22%	30%	40%	40%
> In my sorted list of top vmalloc users, zram and f2fs may allocate more
> than 20MB at once, so alloc_pages_bulk is worth using.
> 
> Signed-off-by: Yang Huan <link@vivo.com>

Thanks. I suggest you take a look at the current merge window and check
if anything additional needs to be done after the vmalloc bulk allocation
by Uladzislau Rezki.

-- 
Mel Gorman
SUSE Labs


* Re:Re: [PATCH] mm/vmalloc: try alloc_pages_bulk first to get order 0 pages fast
  2021-07-09  9:38 ` Mel Gorman
@ 2021-07-09  9:53   ` 杨欢
  2021-07-09 22:29     ` Andrew Morton
  0 siblings, 1 reply; 5+ messages in thread
From: 杨欢 @ 2021-07-09  9:53 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, linux-mm, linux-kernel, kernel, Uladzislau Rezki


>> Vmalloc often gets its pages by calling alloc_pages() in a loop, which
>> repeats the watermark and cpuset accounting on every iteration and costs
>> too much time. Try alloc_pages_bulk_array() first; if it cannot fill the
>> whole request, fall back to the original per-page path.
>> 
>> In my own test, which simulates the allocation loop with alloc_page()
>> versus alloc_pages_bulk_array(), I measured:
>> size		1M	10M	20M	30M
>> normal		44	1278	3665	5581
>> test		34	889	2167	3300
>> optimize	22%	30%	40%	40%
>> In my sorted list of top vmalloc users, zram and f2fs may allocate more
>> than 20MB at once, so alloc_pages_bulk is worth using.
>> 
>> Signed-off-by: Yang Huan <link@vivo.com>
>
>Thanks. I suggest you take a look at the current merge window and check
>if anything additional needs to be done after the vmalloc bulk allocation
Sorry about that; I will rework this against linux-next.
>by Uladzislau Rezki.
>
>-- 
>Mel Gorman
>SUSE Labs
Yang Huan


* Re: [PATCH] mm/vmalloc: try alloc_pages_bulk first to get order 0 pages fast
  2021-07-09  9:53   ` 杨欢
@ 2021-07-09 22:29     ` Andrew Morton
  2021-07-12  2:51       ` 杨欢
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2021-07-09 22:29 UTC (permalink / raw)
  To: 杨欢
  Cc: Mel Gorman, linux-mm, linux-kernel, kernel, Uladzislau Rezki

On Fri, 9 Jul 2021 17:53:59 +0800 (GMT+08:00) 杨欢 <link@vivo.com> wrote:

> >Thanks. I suggest you take a look at the current merge window and check
> >if anything additional needs to be done after the vmalloc bulk allocation
> Sorry about that; I will rework this against linux-next.

That material is now in mainline, so work against Linus's 5.14-rc1 please.


* Re:Re: [PATCH] mm/vmalloc: try alloc_pages_bulk first to get order 0 pages fast
  2021-07-09 22:29     ` Andrew Morton
@ 2021-07-12  2:51       ` 杨欢
  0 siblings, 0 replies; 5+ messages in thread
From: 杨欢 @ 2021-07-12  2:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, linux-mm, linux-kernel, kernel, Uladzislau Rezki


>
>> >Thanks. I suggest you take a look at the current merge window and check
>> >if anything additional needs to be done after the vmalloc bulk allocation
>> Sorry about that; I will rework this against linux-next.
>
>That material is now in mainline, so work against Linus's 5.14-rc1 please.
OK, thanks


end of thread, other threads:[~2021-07-12  2:51 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-09  9:28 [PATCH] mm/vmalloc: try alloc_pages_bulk first to get order 0 pages fast Yang Huan
2021-07-09  9:38 ` Mel Gorman
2021-07-09  9:53   ` 杨欢
2021-07-09 22:29     ` Andrew Morton
2021-07-12  2:51       ` 杨欢
