LKML Archive on lore.kernel.org
* [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead
@ 2008-10-13 20:19 Brice Goglin
  2008-10-13 20:21 ` [PATCH 1/5] mm: stop returning -ENOENT from sys_move_pages() if nothing got migrated Brice Goglin
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Brice Goglin @ 2008-10-13 20:19 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento

Hello,

Here's the first patchset reworking sys_move_pages() as discussed earlier.
It removes the possibly large vmalloc by migrating large buffers in multiple
chunks. It also dramatically increases throughput for large buffers, since
the lookup in new_page_node() is now confined to a single chunk, so its
quadratic complexity has far less impact. There is no need for any
radix-tree-like structure to improve this lookup.

sys_move_pages() duration on a machine with 4 quad-core Opteron 2347HE
processors (1.9GHz), migrating between nodes #2 and #3:
	length		move_pages (us)		move_pages+patch (us)
	4kB		126			98
	40kB		198			168
	400kB		963			937
	4MB		12503			11930
	40MB		246867			11848

Patches #1 and #4 are the important ones:
1) stop returning -ENOENT from sys_move_pages() if nothing got migrated
2) don't vmalloc a huge page_to_node array for do_pages_stat()
3) extract do_pages_move() out of sys_move_pages()
4) rework do_pages_move() to work on page_sized chunks
5) move_pages: no need to set pp->page to ZERO_PAGE(0) by default
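
For reference, the user-space side that exercises this path looks roughly
like the sketch below. This is only an illustration, not part of the
patchset: it uses the move_pages(2) wrapper from libnuma's <numaif.h>
(link with -lnuma), and the buffer size and target node are arbitrary
example values.

#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	unsigned long i, nr = 16;	/* 16 pages: 64kB with 4kB pages */
	void *pages[16];
	int nodes[16], status[16];
	char *buf;

	if (posix_memalign((void **)&buf, psz, nr * psz))
		return 1;
	memset(buf, 0, nr * psz);	/* fault all pages in */

	for (i = 0; i < nr; i++) {
		pages[i] = buf + i * psz;
		nodes[i] = 3;		/* migrate everything to node 3 */
	}

	/* pid 0 means the calling process */
	if (move_pages(0, nr, pages, nodes, status, MPOL_MF_MOVE))
		perror("move_pages");
	for (i = 0; i < nr; i++)
		printf("page %lu: status %d\n", i, status[i]);
	free(buf);
	return 0;
}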

thanks,
Brice




* [PATCH 1/5] mm: stop returning -ENOENT from sys_move_pages() if nothing got migrated
  2008-10-13 20:19 [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
@ 2008-10-13 20:21 ` Brice Goglin
  2008-10-16 19:34   ` Christoph Lameter
  2008-10-13 20:21 ` [PATCH 2/5] mm: don't vmalloc a huge page_to_node array for do_pages_stat() Brice Goglin
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Brice Goglin @ 2008-10-13 20:21 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento

There is no point in returning -ENOENT from sys_move_pages() if all
pages were already on the right node, while we return 0 if only one
page was not. Most applications don't know where their pages are
allocated, so it's not an error to try to migrate them anyway.

Just return 0 and let the status array in user-space be checked if the
application needs details.
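
For illustration, a caller that wants per-page results would then do
roughly the following (a sketch, not mandated by this patch; it assumes
nr, pages[], nodes[] and status[] are set up as for any move_pages(2)
call):

	/* 0 now just means the call ran; details are per page */
	if (move_pages(0, nr, pages, nodes, status, MPOL_MF_MOVE) == 0) {
		unsigned long i;
		for (i = 0; i < nr; i++)
			if (status[i] < 0)	/* -errno for this page */
				fprintf(stderr, "page %lu: %s\n",
					i, strerror(-status[i]));
	}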

It will make the upcoming chunked-move_pages() support much easier.

Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
---
 mm/migrate.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 2a80136..e505b2f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -926,11 +926,10 @@ set_status:
 		pp->status = err;
 	}
 
+	err = 0;
 	if (!list_empty(&pagelist))
 		err = migrate_pages(&pagelist, new_page_node,
 				(unsigned long)pm);
-	else
-		err = -ENOENT;
 
 	up_read(&mm->mmap_sem);
 	return err;
-- 
1.5.6.5





* [PATCH 2/5] mm: don't vmalloc a huge page_to_node array for do_pages_stat()
  2008-10-13 20:19 [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
  2008-10-13 20:21 ` [PATCH 1/5] mm: stop returning -ENOENT from sys_move_pages() if nothing got migrated Brice Goglin
@ 2008-10-13 20:21 ` Brice Goglin
  2008-10-16 19:39   ` Christoph Lameter
  2008-10-13 20:22 ` [PATCH 3/5] mm: extract do_pages_move() out of sys_move_pages() Brice Goglin
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Brice Goglin @ 2008-10-13 20:21 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento

do_pages_stat() does not actually need any page_to_node entries.
Just pass the pointers to the user-space page address array and to
the user-space status array, and have do_pages_stat() traverse the
former and fill the latter directly.
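
From user space this is the query mode of the syscall (nodes == NULL).
A sketch, assuming pages[] and status[] are set up as for a normal call:

	/* nodes == NULL: don't migrate, just report where each page is */
	if (move_pages(0, nr, pages, NULL, status, 0) == 0) {
		unsigned long i;
		for (i = 0; i < nr; i++)
			printf("page %lu is on node %d\n", i, status[i]);
	}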

Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
---
 mm/migrate.c |   40 +++++++++++++++++++++++++---------------
 1 files changed, 25 insertions(+), 15 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index e505b2f..e92e4f1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -936,25 +936,33 @@ set_status:
 }
 
 /*
- * Determine the nodes of a list of pages. The addr in the pm array
- * must have been set to the virtual address of which we want to determine
- * the node number.
+ * Determine the nodes of an array of pages and store them in an array of status values.
  */
-static int do_pages_stat(struct mm_struct *mm, struct page_to_node *pm)
+static int do_pages_stat(struct mm_struct *mm, unsigned long nr_pages,
+			 const void __user * __user *pages,
+			 int __user *status)
 {
+	unsigned long i;
+	int err;
+
 	down_read(&mm->mmap_sem);
 
-	for ( ; pm->node != MAX_NUMNODES; pm++) {
+	for (i = 0; i < nr_pages; i++) {
+		const void __user *p;
+		unsigned long addr;
 		struct vm_area_struct *vma;
 		struct page *page;
-		int err;
 
 		err = -EFAULT;
-		vma = find_vma(mm, pm->addr);
+		if (get_user(p, pages+i))
+			goto out;
+		addr = (unsigned long) p;
+
+		vma = find_vma(mm, addr);
 		if (!vma)
 			goto set_status;
 
-		page = follow_page(vma, pm->addr, 0);
+		page = follow_page(vma, addr, 0);
 
 		err = PTR_ERR(page);
 		if (IS_ERR(page))
@@ -967,11 +975,13 @@ static int do_pages_stat(struct mm_struct *mm, struct page_to_node *pm)
 
 		err = page_to_nid(page);
 set_status:
-		pm->status = err;
+		put_user(err, status+i);
 	}
+	err = 0;
 
+out:
 	up_read(&mm->mmap_sem);
-	return 0;
+	return err;
 }
 
 /*
@@ -1027,6 +1037,10 @@ asmlinkage long sys_move_pages(pid_t pid, unsigned long nr_pages,
  	if (err)
  		goto out2;
 
+	if (!nodes) {
+		err = do_pages_stat(mm, nr_pages, pages, status);
+		goto out2;
+	}
 
 	task_nodes = cpuset_mems_allowed(task);
 
@@ -1075,11 +1089,7 @@ asmlinkage long sys_move_pages(pid_t pid, unsigned long nr_pages,
 	/* End marker */
 	pm[nr_pages].node = MAX_NUMNODES;
 
-	if (nodes)
-		err = do_move_pages(mm, pm, flags & MPOL_MF_MOVE_ALL);
-	else
-		err = do_pages_stat(mm, pm);
-
+	err = do_move_pages(mm, pm, flags & MPOL_MF_MOVE_ALL);
 	if (err >= 0)
 		/* Return status information */
 		for (i = 0; i < nr_pages; i++)
-- 
1.5.6.5





* [PATCH 3/5] mm: extract do_pages_move() out of sys_move_pages()
  2008-10-13 20:19 [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
  2008-10-13 20:21 ` [PATCH 1/5] mm: stop returning -ENOENT from sys_move_pages() if nothing got migrated Brice Goglin
  2008-10-13 20:21 ` [PATCH 2/5] mm: don't vmalloc a huge page_to_node array for do_pages_stat() Brice Goglin
@ 2008-10-13 20:22 ` Brice Goglin
  2008-10-16 19:40   ` Christoph Lameter
  2008-10-13 20:22 ` [PATCH 4/5] mm: rework do_pages_move() to work on page_sized chunks Brice Goglin
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Brice Goglin @ 2008-10-13 20:22 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento

To prepare for the chunking, move the sys_move_pages() code that
is used when nodes != NULL into do_pages_move(),
and rename do_move_pages() to do_move_page_to_node_array().
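
With this, the tail of sys_move_pages() reduces to a simple dispatch
(taken from the diff below):

	if (nodes) {
		err = do_pages_move(mm, task, nr_pages, pages, nodes, status,
				    flags);
	} else {
		err = do_pages_stat(mm, nr_pages, pages, status);
	}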

Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
---
 mm/migrate.c |  152 +++++++++++++++++++++++++++++++++-------------------------
 1 files changed, 86 insertions(+), 66 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index e92e4f1..dffc98b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -858,9 +858,11 @@ static struct page *new_page_node(struct page *p, unsigned long private,
  * Move a set of pages as indicated in the pm array. The addr
  * field must be set to the virtual address of the page to be moved
  * and the node number must contain a valid target node.
+ * The pm array ends with node = MAX_NUMNODES.
  */
-static int do_move_pages(struct mm_struct *mm, struct page_to_node *pm,
-				int migrate_all)
+static int do_move_page_to_node_array(struct mm_struct *mm,
+				      struct page_to_node *pm,
+				      int migrate_all)
 {
 	int err;
 	struct page_to_node *pp;
@@ -936,6 +938,81 @@ set_status:
 }
 
 /*
+ * Migrate an array of page addresses onto an array of nodes and fill
+ * in the corresponding array of status values.
+ */
+static int do_pages_move(struct mm_struct *mm, struct task_struct *task,
+			 unsigned long nr_pages,
+			 const void __user * __user *pages,
+			 const int __user *nodes,
+			 int __user *status, int flags)
+{
+	struct page_to_node *pm = NULL;
+	nodemask_t task_nodes;
+	int err = 0;
+	int i;
+
+	task_nodes = cpuset_mems_allowed(task);
+
+	/* Limit nr_pages so that the multiplication may not overflow */
+	if (nr_pages >= ULONG_MAX / sizeof(struct page_to_node) - 1) {
+		err = -E2BIG;
+		goto out;
+	}
+
+	pm = vmalloc((nr_pages + 1) * sizeof(struct page_to_node));
+	if (!pm) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	/*
+	 * Get parameters from user space and initialize the pm
+	 * array. Return various errors if the user did something wrong.
+	 */
+	for (i = 0; i < nr_pages; i++) {
+		const void __user *p;
+
+		err = -EFAULT;
+		if (get_user(p, pages + i))
+			goto out_pm;
+
+		pm[i].addr = (unsigned long)p;
+		if (nodes) {
+			int node;
+
+			if (get_user(node, nodes + i))
+				goto out_pm;
+
+			err = -ENODEV;
+			if (!node_state(node, N_HIGH_MEMORY))
+				goto out_pm;
+
+			err = -EACCES;
+			if (!node_isset(node, task_nodes))
+				goto out_pm;
+
+			pm[i].node = node;
+		} else
+			pm[i].node = 0;	/* anything to not match MAX_NUMNODES */
+	}
+	/* End marker */
+	pm[nr_pages].node = MAX_NUMNODES;
+
+	err = do_move_page_to_node_array(mm, pm, flags & MPOL_MF_MOVE_ALL);
+	if (err >= 0)
+		/* Return status information */
+		for (i = 0; i < nr_pages; i++)
+			if (put_user(pm[i].status, status + i))
+				err = -EFAULT;
+
+out_pm:
+	vfree(pm);
+out:
+	return err;
+}
+
+/*
 * Determine the nodes of an array of pages and store them in an array of status values.
  */
 static int do_pages_stat(struct mm_struct *mm, unsigned long nr_pages,
@@ -993,12 +1070,9 @@ asmlinkage long sys_move_pages(pid_t pid, unsigned long nr_pages,
 			const int __user *nodes,
 			int __user *status, int flags)
 {
-	int err = 0;
-	int i;
 	struct task_struct *task;
-	nodemask_t task_nodes;
 	struct mm_struct *mm;
-	struct page_to_node *pm = NULL;
+	int err;
 
 	/* Check flags */
 	if (flags & ~(MPOL_MF_MOVE|MPOL_MF_MOVE_ALL))
@@ -1030,75 +1104,21 @@ asmlinkage long sys_move_pages(pid_t pid, unsigned long nr_pages,
 	    (current->uid != task->suid) && (current->uid != task->uid) &&
 	    !capable(CAP_SYS_NICE)) {
 		err = -EPERM;
-		goto out2;
+		goto out;
 	}
 
  	err = security_task_movememory(task);
  	if (err)
- 		goto out2;
+		goto out;
 
-	if (!nodes) {
+	if (nodes) {
+		err = do_pages_move(mm, task, nr_pages, pages, nodes, status,
+				    flags);
+	} else {
 		err = do_pages_stat(mm, nr_pages, pages, status);
-		goto out2;
-	}
-
-	task_nodes = cpuset_mems_allowed(task);
-
-	/* Limit nr_pages so that the multiplication may not overflow */
-	if (nr_pages >= ULONG_MAX / sizeof(struct page_to_node) - 1) {
-		err = -E2BIG;
-		goto out2;
 	}
 
-	pm = vmalloc((nr_pages + 1) * sizeof(struct page_to_node));
-	if (!pm) {
-		err = -ENOMEM;
-		goto out2;
-	}
-
-	/*
-	 * Get parameters from user space and initialize the pm
-	 * array. Return various errors if the user did something wrong.
-	 */
-	for (i = 0; i < nr_pages; i++) {
-		const void __user *p;
-
-		err = -EFAULT;
-		if (get_user(p, pages + i))
-			goto out;
-
-		pm[i].addr = (unsigned long)p;
-		if (nodes) {
-			int node;
-
-			if (get_user(node, nodes + i))
-				goto out;
-
-			err = -ENODEV;
-			if (!node_state(node, N_HIGH_MEMORY))
-				goto out;
-
-			err = -EACCES;
-			if (!node_isset(node, task_nodes))
-				goto out;
-
-			pm[i].node = node;
-		} else
-			pm[i].node = 0;	/* anything to not match MAX_NUMNODES */
-	}
-	/* End marker */
-	pm[nr_pages].node = MAX_NUMNODES;
-
-	err = do_move_pages(mm, pm, flags & MPOL_MF_MOVE_ALL);
-	if (err >= 0)
-		/* Return status information */
-		for (i = 0; i < nr_pages; i++)
-			if (put_user(pm[i].status, status + i))
-				err = -EFAULT;
-
 out:
-	vfree(pm);
-out2:
 	mmput(mm);
 	return err;
 }
-- 
1.5.6.5





* [PATCH 4/5] mm: rework do_pages_move() to work on page_sized chunks
  2008-10-13 20:19 [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
                   ` (2 preceding siblings ...)
  2008-10-13 20:22 ` [PATCH 3/5] mm: extract do_pages_move() out of sys_move_pages() Brice Goglin
@ 2008-10-13 20:22 ` Brice Goglin
  2008-10-16 19:51   ` Christoph Lameter
  2008-10-13 20:23 ` [PATCH 5/5] mm: move_pages: no need to set pp->page to ZERO_PAGE(0) by default Brice Goglin
  2008-10-14 20:53 ` [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
  5 siblings, 1 reply; 15+ messages in thread
From: Brice Goglin @ 2008-10-13 20:22 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento

Rework do_pages_move() to work on page-sized chunks of struct page_to_node
that are passed to do_move_page_to_node_array(). We now only have to
allocate a single page instead of a possibly very large vmalloc area to
store all page_to_node entries.

As a result, new_page_node() now only has a small array to search,
which removes much of the overall sys_move_pages() overhead.
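
For a rough sense of scale: assuming 4kB pages and a 24-byte struct
page_to_node on 64-bit (assumptions, not guaranteed by the patch), each
chunk holds PAGE_SIZE / sizeof(struct page_to_node) - 1 = 4096/24 - 1
= 169 entries, so each do_move_page_to_node_array() call migrates at
most 169 pages (about 676kB) whatever the total buffer size.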

Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
Signed-off-by: Nathalie Furmento <Nathalie.Furmento@labri.fr>
---
 mm/migrate.c |   79 ++++++++++++++++++++++++++++++++-------------------------
 1 files changed, 44 insertions(+), 35 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index dffc98b..175e242 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -947,41 +947,43 @@ static int do_pages_move(struct mm_struct *mm, struct task_struct *task,
 			 const int __user *nodes,
 			 int __user *status, int flags)
 {
-	struct page_to_node *pm = NULL;
+	struct page_to_node *pm;
 	nodemask_t task_nodes;
-	int err = 0;
-	int i;
+	unsigned long chunk_nr_pages;
+	unsigned long chunk_start;
+	int err;
 
 	task_nodes = cpuset_mems_allowed(task);
 
-	/* Limit nr_pages so that the multiplication may not overflow */
-	if (nr_pages >= ULONG_MAX / sizeof(struct page_to_node) - 1) {
-		err = -E2BIG;
-		goto out;
-	}
-
-	pm = vmalloc((nr_pages + 1) * sizeof(struct page_to_node));
-	if (!pm) {
-		err = -ENOMEM;
+	err = -ENOMEM;
+	pm = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!pm)
 		goto out;
-	}
-
 	/*
-	 * Get parameters from user space and initialize the pm
-	 * array. Return various errors if the user did something wrong.
+	 * Store a chunk of page_to_node array in a page,
+	 * but keep the last one as a marker
 	 */
-	for (i = 0; i < nr_pages; i++) {
-		const void __user *p;
+	chunk_nr_pages = PAGE_SIZE/sizeof(struct page_to_node) - 1;
 
-		err = -EFAULT;
-		if (get_user(p, pages + i))
-			goto out_pm;
+	for (chunk_start = 0;
+	     chunk_start < nr_pages;
+	     chunk_start += chunk_nr_pages) {
+		int j;
+
+		if (chunk_start + chunk_nr_pages > nr_pages)
+			chunk_nr_pages = nr_pages - chunk_start;
 
-		pm[i].addr = (unsigned long)p;
-		if (nodes) {
+		/* fill the chunk pm with addrs and nodes from user-space */
+		for (j = 0; j < chunk_nr_pages; j++) {
+			const void __user *p;
 			int node;
 
-			if (get_user(node, nodes + i))
+			err = -EFAULT;
+			if (get_user(p, pages + j + chunk_start))
+				goto out_pm;
+			pm[j].addr = (unsigned long) p;
+
+			if (get_user(node, nodes + j + chunk_start))
 				goto out_pm;
 
 			err = -ENODEV;
@@ -992,22 +994,29 @@ static int do_pages_move(struct mm_struct *mm, struct task_struct *task,
 			if (!node_isset(node, task_nodes))
 				goto out_pm;
 
-			pm[i].node = node;
-		} else
-			pm[i].node = 0;	/* anything to not match MAX_NUMNODES */
-	}
-	/* End marker */
-	pm[nr_pages].node = MAX_NUMNODES;
+			pm[j].node = node;
+		}
+
+		/* End marker for this chunk */
+		pm[chunk_nr_pages].node = MAX_NUMNODES;
+
+		/* Migrate this chunk */
+		err = do_move_page_to_node_array(mm, pm,
+						 flags & MPOL_MF_MOVE_ALL);
+		if (err < 0)
+			goto out_pm;
 
-	err = do_move_page_to_node_array(mm, pm, flags & MPOL_MF_MOVE_ALL);
-	if (err >= 0)
 		/* Return status information */
-		for (i = 0; i < nr_pages; i++)
-			if (put_user(pm[i].status, status + i))
+		for (j = 0; j < chunk_nr_pages; j++)
+			if (put_user(pm[j].status, status + j + chunk_start)) {
 				err = -EFAULT;
+				goto out_pm;
+			}
+	}
+	err = 0;
 
 out_pm:
-	vfree(pm);
+	kfree(pm);
 out:
 	return err;
 }
-- 
1.5.6.5





* [PATCH 5/5] mm: move_pages: no need to set pp->page to ZERO_PAGE(0) by default
  2008-10-13 20:19 [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
                   ` (3 preceding siblings ...)
  2008-10-13 20:22 ` [PATCH 4/5] mm: rework do_pages_move() to work on page_sized chunks Brice Goglin
@ 2008-10-13 20:23 ` Brice Goglin
  2008-10-16 19:42   ` Christoph Lameter
  2008-10-14 20:53 ` [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
  5 siblings, 1 reply; 15+ messages in thread
From: Brice Goglin @ 2008-10-13 20:23 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento

pp->page is never read unless it has been set to the actual page, so
there is no need to initialize it to ZERO_PAGE(0) by default.

Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
---
 mm/migrate.c |    6 ------
 1 files changed, 0 insertions(+), 6 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 175e242..2453444 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -878,12 +878,6 @@ static int do_move_page_to_node_array(struct mm_struct *mm,
 		struct vm_area_struct *vma;
 		struct page *page;
 
-		/*
-		 * A valid page pointer that will not match any of the
-		 * pages that will be moved.
-		 */
-		pp->page = ZERO_PAGE(0);
-
 		err = -EFAULT;
 		vma = find_vma(mm, pp->addr);
 		if (!vma || !vma_migratable(vma))
-- 
1.5.6.5





* Re: [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead
  2008-10-13 20:19 [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
                   ` (4 preceding siblings ...)
  2008-10-13 20:23 ` [PATCH 5/5] mm: move_pages: no need to set pp->page to ZERO_PAGE(0) by default Brice Goglin
@ 2008-10-14 20:53 ` Brice Goglin
  5 siblings, 0 replies; 15+ messages in thread
From: Brice Goglin @ 2008-10-14 20:53 UTC (permalink / raw)
  To: LKML; +Cc: linux-mm, Andrew Morton

By the way, this patchset replaces
mm-use-a-radix-tree-to-make-do_move_pages-complexity-linear-checkpatch-fixes
(currently in -mm).

Brice



Brice Goglin wrote:
> Hello,
>
> Here's the first patchset reworking sys_move_pages() as discussed earlier.
> It removes the possibly large vmalloc by migrating large buffers in multiple
> chunks. It also dramatically increases throughput for large buffers, since
> the lookup in new_page_node() is now confined to a single chunk, so its
> quadratic complexity has far less impact. There is no need for any
> radix-tree-like structure to improve this lookup.
>
> sys_move_pages() duration on a machine with 4 quad-core Opteron 2347HE
> processors (1.9GHz), migrating between nodes #2 and #3:
> 	length		move_pages (us)		move_pages+patch (us)
> 	4kB		126			98
> 	40kB		198			168
> 	400kB		963			937
> 	4MB		12503			11930
> 	40MB		246867			11848
>
> Patches #1 and #4 are the important ones:
> 1) stop returning -ENOENT from sys_move_pages() if nothing got migrated
> 2) don't vmalloc a huge page_to_node array for do_pages_stat()
> 3) extract do_pages_move() out of sys_move_pages()
> 4) rework do_pages_move() to work on page_sized chunks
> 5) move_pages: no need to set pp->page to ZERO_PAGE(0) by default
>
> thanks,
> Brice
>
>
>
>   



* Re: [PATCH 1/5] mm: stop returning -ENOENT from sys_move_pages() if nothing got migrated
  2008-10-13 20:21 ` [PATCH 1/5] mm: stop returning -ENOENT from sys_move_pages() if nothing got migrated Brice Goglin
@ 2008-10-16 19:34   ` Christoph Lameter
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Lameter @ 2008-10-16 19:34 UTC (permalink / raw)
  To: Brice Goglin; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento



Acked-by: Christoph Lameter <cl@linux-foundation.org>


* Re: [PATCH 2/5] mm: don't vmalloc a huge page_to_node array for do_pages_stat()
  2008-10-13 20:21 ` [PATCH 2/5] mm: don't vmalloc a huge page_to_node array for do_pages_stat() Brice Goglin
@ 2008-10-16 19:39   ` Christoph Lameter
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Lameter @ 2008-10-16 19:39 UTC (permalink / raw)
  To: Brice Goglin; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento



Acked-by: Christoph Lameter <cl@linux-foundation.org>



* Re: [PATCH 3/5] mm: extract do_pages_move() out of sys_move_pages()
  2008-10-13 20:22 ` [PATCH 3/5] mm: extract do_pages_move() out of sys_move_pages() Brice Goglin
@ 2008-10-16 19:40   ` Christoph Lameter
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Lameter @ 2008-10-16 19:40 UTC (permalink / raw)
  To: Brice Goglin; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento


Acked-by: Christoph Lameter <cl@linux-foundation.org>



* Re: [PATCH 5/5] mm: move_pages: no need to set pp->page to ZERO_PAGE(0) by default
  2008-10-13 20:23 ` [PATCH 5/5] mm: move_pages: no need to set pp->page to ZERO_PAGE(0) by default Brice Goglin
@ 2008-10-16 19:42   ` Christoph Lameter
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Lameter @ 2008-10-16 19:42 UTC (permalink / raw)
  To: Brice Goglin; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento


Acked-by: Christoph Lameter <cl@linux-foundation.org>


* Re: [PATCH 4/5] mm: rework do_pages_move() to work on page_sized chunks
  2008-10-13 20:22 ` [PATCH 4/5] mm: rework do_pages_move() to work on page_sized chunks Brice Goglin
@ 2008-10-16 19:51   ` Christoph Lameter
  2008-10-16 21:18     ` Brice Goglin
  2008-10-17 11:35     ` [RESEND][PATCH] " Brice Goglin
  0 siblings, 2 replies; 15+ messages in thread
From: Christoph Lameter @ 2008-10-16 19:51 UTC (permalink / raw)
  To: Brice Goglin; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento

Brice Goglin wrote:

> +	err = -ENOMEM;
> +	pm = kmalloc(PAGE_SIZE, GFP_KERNEL);
> +	if (!pm)

ok.... But if you need a page-sized chunk then you can also do
get_zeroed_page(GFP_KERNEL). Why bother the slab allocator for
page-sized allocations?


> +	chunk_nr_pages = PAGE_SIZE/sizeof(struct page_to_node) - 1;

Blanks missing.



> +		/* fill the chunk pm with addrs and nodes from user-space */
> +		for (j = 0; j < chunk_nr_pages; j++) {

j? So the chunk_start used to be i?


Acked-by: Christoph Lameter <cl@linux-foundation.org>


* Re: [PATCH 4/5] mm: rework do_pages_move() to work on page_sized chunks
  2008-10-16 19:51   ` Christoph Lameter
@ 2008-10-16 21:18     ` Brice Goglin
  2008-10-17 11:35     ` [RESEND][PATCH] " Brice Goglin
  1 sibling, 0 replies; 15+ messages in thread
From: Brice Goglin @ 2008-10-16 21:18 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento

Christoph Lameter wrote:
>> +	err = -ENOMEM;
>> +	pm = kmalloc(PAGE_SIZE, GFP_KERNEL);
>> +	if (!pm)
>>     
>
> ok.... But if you need a page sized chunk then you can also do
> 	get_zeroed_page(GFP_KERNEL). Why bother the slab allocator for page 		sized
> allocations?
>   

Right. But why get_zeroed_page()? I don't think I need anything zeroed
(and if I did, I would have to zero the page again between chunks).

alloc_pages(order=0)+__free_pages() is probably good.
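
Concretely, the two styles side by side would be something like this
sketch (the resent patch further down ends up using the second one):

	/* slab-backed, as in the current patch */
	pm = kmalloc(PAGE_SIZE, GFP_KERNEL);
	...
	kfree(pm);

	/* straight from the page allocator */
	pm = (struct page_to_node *)__get_free_page(GFP_KERNEL);
	...
	free_page((unsigned long)pm);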

>> +		/* fill the chunk pm with addrs and nodes from user-space */
>> +		for (j = 0; j < chunk_nr_pages; j++) {
>>     
>
> j? So the chunk_start used to be i?
>   

The original "i" is somehow "chunk_start+j" now.

Thanks Christoph, I'll send an updated "4/5" patch in the next days.

Brice



* [RESEND][PATCH] mm: rework do_pages_move() to work on page_sized chunks
  2008-10-16 19:51   ` Christoph Lameter
  2008-10-16 21:18     ` Brice Goglin
@ 2008-10-17 11:35     ` Brice Goglin
  2008-10-17 13:10       ` Christoph Lameter
  1 sibling, 1 reply; 15+ messages in thread
From: Brice Goglin @ 2008-10-17 11:35 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento

Rework do_pages_move() to work on page-sized chunks of struct page_to_node
that are passed to do_move_page_to_node_array(). We now only have to
allocate a single page instead of a possibly very large vmalloc area to
store all page_to_node entries.

As a result, new_page_node() now only has a small array to search,
which removes much of the overall sys_move_pages() overhead.

Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
Signed-off-by: Nathalie Furmento <Nathalie.Furmento@labri.fr>
---
 mm/migrate.c |   79 ++++++++++++++++++++++++++++++++-------------------------
 1 files changed, 44 insertions(+), 35 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index dffc98b..678589c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -947,41 +947,43 @@ static int do_pages_move(struct mm_struct *mm, struct task_struct *task,
 			 const int __user *nodes,
 			 int __user *status, int flags)
 {
-	struct page_to_node *pm = NULL;
+	struct page_to_node *pm;
 	nodemask_t task_nodes;
-	int err = 0;
-	int i;
+	unsigned long chunk_nr_pages;
+	unsigned long chunk_start;
+	int err;
 
 	task_nodes = cpuset_mems_allowed(task);
 
-	/* Limit nr_pages so that the multiplication may not overflow */
-	if (nr_pages >= ULONG_MAX / sizeof(struct page_to_node) - 1) {
-		err = -E2BIG;
-		goto out;
-	}
-
-	pm = vmalloc((nr_pages + 1) * sizeof(struct page_to_node));
-	if (!pm) {
-		err = -ENOMEM;
+	err = -ENOMEM;
+	pm = (struct page_to_node *)__get_free_page(GFP_KERNEL);
+	if (!pm)
 		goto out;
-	}
-
 	/*
-	 * Get parameters from user space and initialize the pm
-	 * array. Return various errors if the user did something wrong.
+	 * Store a chunk of page_to_node array in a page,
+	 * but keep the last one as a marker
 	 */
-	for (i = 0; i < nr_pages; i++) {
-		const void __user *p;
+	chunk_nr_pages = (PAGE_SIZE / sizeof(struct page_to_node)) - 1;
 
-		err = -EFAULT;
-		if (get_user(p, pages + i))
-			goto out_pm;
+	for (chunk_start = 0;
+	     chunk_start < nr_pages;
+	     chunk_start += chunk_nr_pages) {
+		int j;
+
+		if (chunk_start + chunk_nr_pages > nr_pages)
+			chunk_nr_pages = nr_pages - chunk_start;
 
-		pm[i].addr = (unsigned long)p;
-		if (nodes) {
+		/* fill the chunk pm with addrs and nodes from user-space */
+		for (j = 0; j < chunk_nr_pages; j++) {
+			const void __user *p;
 			int node;
 
-			if (get_user(node, nodes + i))
+			err = -EFAULT;
+			if (get_user(p, pages + j + chunk_start))
+				goto out_pm;
+			pm[j].addr = (unsigned long) p;
+
+			if (get_user(node, nodes + j + chunk_start))
 				goto out_pm;
 
 			err = -ENODEV;
@@ -992,22 +994,29 @@ static int do_pages_move(struct mm_struct *mm, struct task_struct *task,
 			if (!node_isset(node, task_nodes))
 				goto out_pm;
 
-			pm[i].node = node;
-		} else
-			pm[i].node = 0;	/* anything to not match MAX_NUMNODES */
-	}
-	/* End marker */
-	pm[nr_pages].node = MAX_NUMNODES;
+			pm[j].node = node;
+		}
+
+		/* End marker for this chunk */
+		pm[chunk_nr_pages].node = MAX_NUMNODES;
+
+		/* Migrate this chunk */
+		err = do_move_page_to_node_array(mm, pm,
+						 flags & MPOL_MF_MOVE_ALL);
+		if (err < 0)
+			goto out_pm;
 
-	err = do_move_page_to_node_array(mm, pm, flags & MPOL_MF_MOVE_ALL);
-	if (err >= 0)
 		/* Return status information */
-		for (i = 0; i < nr_pages; i++)
-			if (put_user(pm[i].status, status + i))
+		for (j = 0; j < chunk_nr_pages; j++)
+			if (put_user(pm[j].status, status + j + chunk_start)) {
 				err = -EFAULT;
+				goto out_pm;
+			}
+	}
+	err = 0;
 
 out_pm:
-	vfree(pm);
+	free_page((unsigned long)pm);
 out:
 	return err;
 }
-- 
1.5.6.5




* Re: [RESEND][PATCH] mm: rework do_pages_move() to work on page_sized chunks
  2008-10-17 11:35     ` [RESEND][PATCH] " Brice Goglin
@ 2008-10-17 13:10       ` Christoph Lameter
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Lameter @ 2008-10-17 13:10 UTC (permalink / raw)
  To: Brice Goglin; +Cc: LKML, linux-mm, Andrew Morton, Nathalie Furmento


Acked-by: Christoph Lameter <cl@linux-foundation.org>



Thread overview: 15+ messages
2008-10-13 20:19 [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
2008-10-13 20:21 ` [PATCH 1/5] mm: stop returning -ENOENT from sys_move_pages() if nothing got migrated Brice Goglin
2008-10-16 19:34   ` Christoph Lameter
2008-10-13 20:21 ` [PATCH 2/5] mm: don't vmalloc a huge page_to_node array for do_pages_stat() Brice Goglin
2008-10-16 19:39   ` Christoph Lameter
2008-10-13 20:22 ` [PATCH 3/5] mm: extract do_pages_move() out of sys_move_pages() Brice Goglin
2008-10-16 19:40   ` Christoph Lameter
2008-10-13 20:22 ` [PATCH 4/5] mm: rework do_pages_move() to work on page_sized chunks Brice Goglin
2008-10-16 19:51   ` Christoph Lameter
2008-10-16 21:18     ` Brice Goglin
2008-10-17 11:35     ` [RESEND][PATCH] " Brice Goglin
2008-10-17 13:10       ` Christoph Lameter
2008-10-13 20:23 ` [PATCH 5/5] mm: move_pages: no need to set pp->page to ZERO_PAGE(0) by default Brice Goglin
2008-10-16 19:42   ` Christoph Lameter
2008-10-14 20:53 ` [PATCH 0/5] mm: rework sys_move_pages() to avoid vmalloc and reduce the overhead Brice Goglin
