LKML Archive on lore.kernel.org
* [2.6 patch] mm/hugetlb.c: fix duplicate variable
@ 2008-05-05 18:28 Adrian Bunk
  2008-05-05 20:11 ` KOSAKI Motohiro
  0 siblings, 1 reply; 8+ messages in thread
From: Adrian Bunk @ 2008-05-05 18:28 UTC (permalink / raw)
  To: wli; +Cc: linux-kernel, Andrew Morton

It's confusing that set_max_huge_pages() contains two different
variables named "ret"; although the code works correctly, this
should be fixed.

The inner of the two variables can simply be removed.

Spotted by sparse.
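
For illustration, a minimal standalone program (simplified, not kernel code)
shows the shadowing pattern sparse warns about here: an inner "ret" declared
inside the loop shadows the outer "ret", and removing the inner declaration
is exactly the one-line change below.  In a kernel tree, sparse is normally
run with "make C=1" (check files being recompiled) or "make C=2" (check all
source files).

#include <stdio.h>

/* Stand-in for alloc_fresh_huge_page(): pretend allocation fails after 3 pages. */
static int fake_alloc(unsigned long allocated)
{
	return allocated < 3;	/* 1 = success, 0 = failure */
}

static unsigned long set_max_pages(unsigned long count)
{
	unsigned long pages = 0, ret;	/* outer "ret", as in set_max_huge_pages() */

	while (pages < count) {
		int ret;		/* inner "ret" shadows the outer one; sparse warns here */

		ret = fake_alloc(pages);
		if (!ret)
			break;		/* only the inner "ret" is ever read */
		pages++;
	}

	/*
	 * With the inner declaration deleted, the assignment above simply uses
	 * the outer "ret", which is what the one-line deletion in the patch does.
	 */
	ret = pages;
	return ret;
}

int main(void)
{
	printf("allocated %lu pages\n", set_max_pages(10));
	return 0;
}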

Signed-off-by: Adrian Bunk <bunk@kernel.org>

---

This patch has been sent on:
- 22 Apr 2008
- 14 Apr 2008
- 31 Mar 2008
- 25 Feb 2008

 mm/hugetlb.c |    1 -
 1 file changed, 1 deletion(-)

--- linux-2.6/mm/hugetlb.c.old	2008-02-24 23:17:52.000000000 +0200
+++ linux-2.6/mm/hugetlb.c	2008-02-24 23:26:07.000000000 +0200
@@ -518,45 +518,44 @@
 static unsigned long set_max_huge_pages(unsigned long count)
 {
 	unsigned long min_count, ret;
 
 	/*
 	 * Increase the pool size
 	 * First take pages out of surplus state.  Then make up the
 	 * remaining difference by allocating fresh huge pages.
 	 *
 	 * We might race with alloc_buddy_huge_page() here and be unable
 	 * to convert a surplus huge page to a normal huge page. That is
 	 * not critical, though, it just means the overall size of the
 	 * pool might be one hugepage larger than it needs to be, but
 	 * within all the constraints specified by the sysctls.
 	 */
 	spin_lock(&hugetlb_lock);
 	while (surplus_huge_pages && count > persistent_huge_pages) {
 		if (!adjust_pool_surplus(-1))
 			break;
 	}
 
 	while (count > persistent_huge_pages) {
-		int ret;
 		/*
 		 * If this allocation races such that we no longer need the
 		 * page, free_huge_page will handle it by freeing the page
 		 * and reducing the surplus.
 		 */
 		spin_unlock(&hugetlb_lock);
 		ret = alloc_fresh_huge_page();
 		spin_lock(&hugetlb_lock);
 		if (!ret)
 			goto out;
 
 	}
 
 	/*
 	 * Decrease the pool size
 	 * First return free pages to the buddy allocator (being careful
 	 * to keep enough around to satisfy reservations).  Then place
 	 * pages into surplus state as needed so the pool will shrink
 	 * to the desired size as pages become free.
 	 *
 	 * By placing pages into the surplus state independent of the
 	 * overcommit value, we are allowing the surplus pool size to



* [2.6 patch] mm/hugetlb.c: fix duplicate variable
@ 2008-04-21 22:50 Adrian Bunk
  0 siblings, 0 replies; 8+ messages in thread
From: Adrian Bunk @ 2008-04-21 22:50 UTC (permalink / raw)
  To: wli; +Cc: linux-kernel

It's confusing that set_max_huge_pages() contains two different
variables named "ret"; although the code works correctly, this
should be fixed.

The inner of the two variables can simply be removed.

Spotted by sparse.

Signed-off-by: Adrian Bunk <bunk@kernel.org>

---

This patch has been sent on:
- 14 Apr 2008
- 31 Mar 2008
- 25 Feb 2008

 mm/hugetlb.c |    1 -
 1 file changed, 1 deletion(-)

--- linux-2.6/mm/hugetlb.c.old	2008-02-24 23:17:52.000000000 +0200
+++ linux-2.6/mm/hugetlb.c	2008-02-24 23:26:07.000000000 +0200
@@ -518,45 +518,44 @@
 static unsigned long set_max_huge_pages(unsigned long count)
 {
 	unsigned long min_count, ret;
 
 	/*
 	 * Increase the pool size
 	 * First take pages out of surplus state.  Then make up the
 	 * remaining difference by allocating fresh huge pages.
 	 *
 	 * We might race with alloc_buddy_huge_page() here and be unable
 	 * to convert a surplus huge page to a normal huge page. That is
 	 * not critical, though, it just means the overall size of the
 	 * pool might be one hugepage larger than it needs to be, but
 	 * within all the constraints specified by the sysctls.
 	 */
 	spin_lock(&hugetlb_lock);
 	while (surplus_huge_pages && count > persistent_huge_pages) {
 		if (!adjust_pool_surplus(-1))
 			break;
 	}
 
 	while (count > persistent_huge_pages) {
-		int ret;
 		/*
 		 * If this allocation races such that we no longer need the
 		 * page, free_huge_page will handle it by freeing the page
 		 * and reducing the surplus.
 		 */
 		spin_unlock(&hugetlb_lock);
 		ret = alloc_fresh_huge_page();
 		spin_lock(&hugetlb_lock);
 		if (!ret)
 			goto out;
 
 	}
 
 	/*
 	 * Decrease the pool size
 	 * First return free pages to the buddy allocator (being careful
 	 * to keep enough around to satisfy reservations).  Then place
 	 * pages into surplus state as needed so the pool will shrink
 	 * to the desired size as pages become free.
 	 *
 	 * By placing pages into the surplus state independent of the
 	 * overcommit value, we are allowing the surplus pool size to



* [2.6 patch] mm/hugetlb.c: fix duplicate variable
@ 2008-04-14 18:14 Adrian Bunk
  0 siblings, 0 replies; 8+ messages in thread
From: Adrian Bunk @ 2008-04-14 18:14 UTC (permalink / raw)
  To: wli; +Cc: linux-kernel

It's confusing that set_max_huge_pages() contains two different
variables named "ret"; although the code works correctly, this
should be fixed.

The inner of the two variables can simply be removed.

Spotted by sparse.

Signed-off-by: Adrian Bunk <bunk@kernel.org>

---

This patch has been sent on:
- 31 Mar 2008
- 25 Feb 2008

 mm/hugetlb.c |    1 -
 1 file changed, 1 deletion(-)

--- linux-2.6/mm/hugetlb.c.old	2008-02-24 23:17:52.000000000 +0200
+++ linux-2.6/mm/hugetlb.c	2008-02-24 23:26:07.000000000 +0200
@@ -518,45 +518,44 @@
 static unsigned long set_max_huge_pages(unsigned long count)
 {
 	unsigned long min_count, ret;
 
 	/*
 	 * Increase the pool size
 	 * First take pages out of surplus state.  Then make up the
 	 * remaining difference by allocating fresh huge pages.
 	 *
 	 * We might race with alloc_buddy_huge_page() here and be unable
 	 * to convert a surplus huge page to a normal huge page. That is
 	 * not critical, though, it just means the overall size of the
 	 * pool might be one hugepage larger than it needs to be, but
 	 * within all the constraints specified by the sysctls.
 	 */
 	spin_lock(&hugetlb_lock);
 	while (surplus_huge_pages && count > persistent_huge_pages) {
 		if (!adjust_pool_surplus(-1))
 			break;
 	}
 
 	while (count > persistent_huge_pages) {
-		int ret;
 		/*
 		 * If this allocation races such that we no longer need the
 		 * page, free_huge_page will handle it by freeing the page
 		 * and reducing the surplus.
 		 */
 		spin_unlock(&hugetlb_lock);
 		ret = alloc_fresh_huge_page();
 		spin_lock(&hugetlb_lock);
 		if (!ret)
 			goto out;
 
 	}
 
 	/*
 	 * Decrease the pool size
 	 * First return free pages to the buddy allocator (being careful
 	 * to keep enough around to satisfy reservations).  Then place
 	 * pages into surplus state as needed so the pool will shrink
 	 * to the desired size as pages become free.
 	 *
 	 * By placing pages into the surplus state independent of the
 	 * overcommit value, we are allowing the surplus pool size to

* [2.6 patch] mm/hugetlb.c: fix duplicate variable
@ 2008-03-30 22:53 Adrian Bunk
  0 siblings, 0 replies; 8+ messages in thread
From: Adrian Bunk @ 2008-03-30 22:53 UTC (permalink / raw)
  To: wli; +Cc: linux-kernel

It's confusing that set_max_huge_pages() contains two different
variables named "ret"; although the code works correctly, this
should be fixed.

The inner of the two variables can simply be removed.

Spotted by sparse.

Signed-off-by: Adrian Bunk <bunk@kernel.org>

---

This patch has been sent on:
- 25 Feb 2008

 mm/hugetlb.c |    1 -
 1 file changed, 1 deletion(-)

--- linux-2.6/mm/hugetlb.c.old	2008-02-24 23:17:52.000000000 +0200
+++ linux-2.6/mm/hugetlb.c	2008-02-24 23:26:07.000000000 +0200
@@ -518,45 +518,44 @@
 static unsigned long set_max_huge_pages(unsigned long count)
 {
 	unsigned long min_count, ret;
 
 	/*
 	 * Increase the pool size
 	 * First take pages out of surplus state.  Then make up the
 	 * remaining difference by allocating fresh huge pages.
 	 *
 	 * We might race with alloc_buddy_huge_page() here and be unable
 	 * to convert a surplus huge page to a normal huge page. That is
 	 * not critical, though, it just means the overall size of the
 	 * pool might be one hugepage larger than it needs to be, but
 	 * within all the constraints specified by the sysctls.
 	 */
 	spin_lock(&hugetlb_lock);
 	while (surplus_huge_pages && count > persistent_huge_pages) {
 		if (!adjust_pool_surplus(-1))
 			break;
 	}
 
 	while (count > persistent_huge_pages) {
-		int ret;
 		/*
 		 * If this allocation races such that we no longer need the
 		 * page, free_huge_page will handle it by freeing the page
 		 * and reducing the surplus.
 		 */
 		spin_unlock(&hugetlb_lock);
 		ret = alloc_fresh_huge_page();
 		spin_lock(&hugetlb_lock);
 		if (!ret)
 			goto out;
 
 	}
 
 	/*
 	 * Decrease the pool size
 	 * First return free pages to the buddy allocator (being careful
 	 * to keep enough around to satisfy reservations).  Then place
 	 * pages into surplus state as needed so the pool will shrink
 	 * to the desired size as pages become free.
 	 *
 	 * By placing pages into the surplus state independent of the
 	 * overcommit value, we are allowing the surplus pool size to

* [2.6 patch] mm/hugetlb.c: fix duplicate variable
@ 2008-02-25  0:09 Adrian Bunk
  0 siblings, 0 replies; 8+ messages in thread
From: Adrian Bunk @ 2008-02-25  0:09 UTC (permalink / raw)
  To: wli; +Cc: linux-kernel

It's confusing that set_max_huge_pages() contains two different
variables named "ret"; although the code works correctly, this
should be fixed.

The inner of the two variables can simply be removed.

Spotted by sparse.

Signed-off-by: Adrian Bunk <bunk@kernel.org>

---

 mm/hugetlb.c |    1 -
 1 file changed, 1 deletion(-)

--- linux-2.6/mm/hugetlb.c.old	2008-02-24 23:17:52.000000000 +0200
+++ linux-2.6/mm/hugetlb.c	2008-02-24 23:26:07.000000000 +0200
@@ -518,45 +518,44 @@
 static unsigned long set_max_huge_pages(unsigned long count)
 {
 	unsigned long min_count, ret;
 
 	/*
 	 * Increase the pool size
 	 * First take pages out of surplus state.  Then make up the
 	 * remaining difference by allocating fresh huge pages.
 	 *
 	 * We might race with alloc_buddy_huge_page() here and be unable
 	 * to convert a surplus huge page to a normal huge page. That is
 	 * not critical, though, it just means the overall size of the
 	 * pool might be one hugepage larger than it needs to be, but
 	 * within all the constraints specified by the sysctls.
 	 */
 	spin_lock(&hugetlb_lock);
 	while (surplus_huge_pages && count > persistent_huge_pages) {
 		if (!adjust_pool_surplus(-1))
 			break;
 	}
 
 	while (count > persistent_huge_pages) {
-		int ret;
 		/*
 		 * If this allocation races such that we no longer need the
 		 * page, free_huge_page will handle it by freeing the page
 		 * and reducing the surplus.
 		 */
 		spin_unlock(&hugetlb_lock);
 		ret = alloc_fresh_huge_page();
 		spin_lock(&hugetlb_lock);
 		if (!ret)
 			goto out;
 
 	}
 
 	/*
 	 * Decrease the pool size
 	 * First return free pages to the buddy allocator (being careful
 	 * to keep enough around to satisfy reservations).  Then place
 	 * pages into surplus state as needed so the pool will shrink
 	 * to the desired size as pages become free.
 	 *
 	 * By placing pages into the surplus state independent of the
 	 * overcommit value, we are allowing the surplus pool size to


end of thread, other threads:[~2008-05-05 20:28 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
2008-05-05 18:28 [2.6 patch] mm/hugetlb.c: fix duplicate variable Adrian Bunk
2008-05-05 20:11 ` KOSAKI Motohiro
2008-05-05 20:19   ` Adrian Bunk
2008-05-05 20:28     ` KOSAKI Motohiro
  -- strict thread matches above, loose matches on Subject: below --
2008-04-21 22:50 Adrian Bunk
2008-04-14 18:14 Adrian Bunk
2008-03-30 22:53 Adrian Bunk
2008-02-25  0:09 Adrian Bunk
