LKML Archive on lore.kernel.org
* [PATCH 0/3] hugetlb fixlet v3
@ 2015-03-31  8:50 Naoya Horiguchi
  2015-03-31  8:50 ` [PATCH v3 1/3] mm: don't call __page_cache_release for hugetlb Naoya Horiguchi
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Naoya Horiguchi @ 2015-03-31  8:50 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Hugh Dickins, Michal Hocko, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

This is an update of the "active page flag for hugetlb" patch [*1].

The main change is patch 2/3, which fixes the race condition where concurrent
calls of isolate_huge_page() cause a kernel panic.

Patch 1/3 just documents and "fixes" a potential problem; there is no behavioral change.
Patch 3/3 is a cleanup that reduces the line count.

[*1] http://thread.gmane.org/gmane.linux.kernel/1889277/focus=1889380
---
Summary:

Naoya Horiguchi (3):
      mm: don't call __page_cache_release for hugetlb
      mm: hugetlb: introduce PageHugeActive flag
      mm: hugetlb: cleanup using PageHugeActive flag

 include/linux/hugetlb.h |  8 +++--
 mm/hugetlb.c            | 83 +++++++++++++++++++++++++------------------------
 mm/memory-failure.c     | 14 +++++++--
 mm/memory_hotplug.c     |  2 +-
 mm/swap.c               | 10 +++++-
 5 files changed, 71 insertions(+), 46 deletions(-)

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v3 1/3] mm: don't call __page_cache_release for hugetlb
  2015-03-31  8:50 [PATCH 0/3] hugetlb fixlet v3 Naoya Horiguchi
@ 2015-03-31  8:50 ` Naoya Horiguchi
  2015-04-01 16:13   ` Michal Hocko
  2015-03-31  8:50 ` [PATCH v3 3/3] mm: hugetlb: cleanup using PageHugeActive flag Naoya Horiguchi
  2015-03-31  8:50 ` [PATCH v3 2/3] mm: hugetlb: introduce " Naoya Horiguchi
  2 siblings, 1 reply; 11+ messages in thread
From: Naoya Horiguchi @ 2015-03-31  8:50 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Hugh Dickins, Michal Hocko, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

__put_compound_page() calls __page_cache_release() to do some freeing work,
but that is clearly meant for thp, not for hugetlb. We have gotten away with it
because PageLRU is always cleared and page->mem_cgroup is always NULL for hugetlb.
But it's not correct and carries potential risks, so let's make the call conditional.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/swap.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git v4.0-rc6.orig/mm/swap.c v4.0-rc6/mm/swap.c
index cd3a5e64cea9..8e46823c3319 100644
--- v4.0-rc6.orig/mm/swap.c
+++ v4.0-rc6/mm/swap.c
@@ -31,6 +31,7 @@
 #include <linux/memcontrol.h>
 #include <linux/gfp.h>
 #include <linux/uio.h>
+#include <linux/hugetlb.h>
 
 #include "internal.h"
 
@@ -75,7 +76,14 @@ static void __put_compound_page(struct page *page)
 {
 	compound_page_dtor *dtor;
 
-	__page_cache_release(page);
+	/*
+	 * __page_cache_release() is supposed to be called for thp, not for
 +	 * hugetlb. This is because a hugetlb page never has PageLRU set
 +	 * (it's never put on any LRU list) and no memcg routines should
+	 * be called for hugetlb (it has a separate hugetlb_cgroup.)
+	 */
+	if (!PageHuge(page))
+		__page_cache_release(page);
 	dtor = get_compound_page_dtor(page);
 	(*dtor)(page);
 }
-- 
1.9.3


* [PATCH v3 2/3] mm: hugetlb: introduce PageHugeActive flag
  2015-03-31  8:50 [PATCH 0/3] hugetlb fixlet v3 Naoya Horiguchi
  2015-03-31  8:50 ` [PATCH v3 1/3] mm: don't call __page_cache_release for hugetlb Naoya Horiguchi
  2015-03-31  8:50 ` [PATCH v3 3/3] mm: hugetlb: cleanup using PageHugeActive flag Naoya Horiguchi
@ 2015-03-31  8:50 ` Naoya Horiguchi
  2015-03-31 21:06   ` Andrew Morton
  2015-04-01 16:30   ` Michal Hocko
  2 siblings, 2 replies; 11+ messages in thread
From: Naoya Horiguchi @ 2015-03-31  8:50 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Hugh Dickins, Michal Hocko, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

Concurrent calls of isolate_huge_page() on the same hugepage are not safe;
they can put the victim hugepage into an invalid state and trigger a BUG_ON().

The root problem is that struct page carries no (easily accessible)
information about a hugepage's activeness. Note that a hugepage's
activeness just means being linked to hstate->hugepage_activelist, which is
not the same as a normal page's activeness represented by the PageActive flag.

Normal pages are isolated by isolate_lru_page(), which prechecks PageLRU
before isolation, so let's do the same for hugetlb with a new PageHugeActive flag.

Set/ClearPageHugeActive should be called within hugetlb_lock. hugetlb_cow()
and hugetlb_no_page() don't do this, which is justified because in these
functions SetPageHugeActive is called right after the hugepage is allocated
and no other thread can try to isolate it yet.
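The flag encoding used below can be sketched as a self-contained toy model.
This is plain C for illustration only, not kernel code: the struct page and
flag bits are simplified stand-ins. The "active" state is recorded in the
Private bit of the first tail page (page[1]), since, as the free_huge_page()
hunk suggests, the head page's own PagePrivate is already used for reserve
accounting.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the kernel's struct page. */
struct page { unsigned long flags; };

#define PG_head    (1UL << 0)	/* marks the head page of a compound page */
#define PG_private (1UL << 1)	/* reused on page[1] to mean "active" */

static bool PageHead(const struct page *p)    { return p->flags & PG_head; }
static bool PagePrivate(const struct page *p) { return p->flags & PG_private; }

/* Called on the head page; the state lives in the first tail page. */
static void SetPageHugeActive(struct page *head)   { head[1].flags |= PG_private; }
static void ClearPageHugeActive(struct page *head) { head[1].flags &= ~PG_private; }

/* Callable on tail pages too, but never returns true for them. */
static bool PageHugeActive(struct page *page)
{
	return PageHead(page) && PagePrivate(&page[1]);
}
```

Because the test short-circuits on PageHead(), a tail page is always reported
inactive, matching the comment in the patch.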

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
ChangeLog v2->v3:
- Use PagePrivate of the first tail page to show hugepage's activeness instead
  of PageLRU
- drop ClearPageLRU in dequeue_hwpoisoned_huge_page() (which was wrong)
- fix return value of isolate_huge_page() (using ret)
- move __put_compound_page() part to a separate patch
- drop "Cc: stable" tag because this is not a simple fix

ChangeLog v1->v2:
- call isolate_huge_page() in soft_offline_huge_page() instead of list_move()
---
 mm/hugetlb.c        | 41 ++++++++++++++++++++++++++++++++++++++---
 mm/memory-failure.c | 14 ++++++++++++--
 2 files changed, 50 insertions(+), 5 deletions(-)

diff --git v4.0-rc6.orig/mm/hugetlb.c v4.0-rc6/mm/hugetlb.c
index c41b2a0ee273..05e0233d30d7 100644
--- v4.0-rc6.orig/mm/hugetlb.c
+++ v4.0-rc6/mm/hugetlb.c
@@ -855,6 +855,31 @@ struct hstate *size_to_hstate(unsigned long size)
 	return NULL;
 }
 
+/*
+ * Page flag to show that the hugepage is "active/in-use" (i.e. being linked to
+ * hstate->hugepage_activelist.)
+ *
+ * This function can be called for tail pages, but never returns true for them.
+ */
+int PageHugeActive(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageHuge(page), page);
+	return PageHead(page) && PagePrivate(&page[1]);
+}
+
+/* never called for tail page */
+void SetPageHugeActive(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
+	SetPagePrivate(&page[1]);
+}
+
+void ClearPageHugeActive(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
+	ClearPagePrivate(&page[1]);
+}
+
 void free_huge_page(struct page *page)
 {
 	/*
@@ -875,6 +900,7 @@ void free_huge_page(struct page *page)
 	ClearPagePrivate(page);
 
 	spin_lock(&hugetlb_lock);
+	ClearPageHugeActive(page);
 	hugetlb_cgroup_uncharge_page(hstate_index(h),
 				     pages_per_huge_page(h), page);
 	if (restore_reserve)
@@ -2891,6 +2917,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
+	SetPageHugeActive(new_page);
 
 	mmun_start = address & huge_page_mask(h);
 	mmun_end = mmun_start + huge_page_size(h);
@@ -3003,6 +3030,7 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		}
 		clear_huge_page(page, address, pages_per_huge_page(h));
 		__SetPageUptodate(page);
+		SetPageHugeActive(page);
 
 		if (vma->vm_flags & VM_MAYSHARE) {
 			int err;
@@ -3812,19 +3840,26 @@ int dequeue_hwpoisoned_huge_page(struct page *hpage)
 
 bool isolate_huge_page(struct page *page, struct list_head *list)
 {
+	bool ret = true;
+
 	VM_BUG_ON_PAGE(!PageHead(page), page);
-	if (!get_page_unless_zero(page))
-		return false;
 	spin_lock(&hugetlb_lock);
+	if (!PageHugeActive(page) || !get_page_unless_zero(page)) {
+		ret = false;
+		goto unlock;
+	}
+	ClearPageHugeActive(page);
 	list_move_tail(&page->lru, list);
+unlock:
 	spin_unlock(&hugetlb_lock);
-	return true;
+	return ret;
 }
 
 void putback_active_hugepage(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	spin_lock(&hugetlb_lock);
+	SetPageHugeActive(page);
 	list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
 	spin_unlock(&hugetlb_lock);
 	put_page(page);
diff --git v4.0-rc6.orig/mm/memory-failure.c v4.0-rc6/mm/memory-failure.c
index d487f8dc6d39..1d86cca8de26 100644
--- v4.0-rc6.orig/mm/memory-failure.c
+++ v4.0-rc6/mm/memory-failure.c
@@ -1540,8 +1540,18 @@ static int soft_offline_huge_page(struct page *page, int flags)
 	}
 	unlock_page(hpage);
 
-	/* Keep page count to indicate a given hugepage is isolated. */
-	list_move(&hpage->lru, &pagelist);
+	ret = isolate_huge_page(hpage, &pagelist);
+	if (ret) {
+		/*
+		 * get_any_page() and isolate_huge_page() takes a refcount each,
+		 * so need to drop one here.
+		 */
+		put_page(hpage);
+	} else {
+		pr_info("soft offline: %#lx hugepage failed to isolate\n", pfn);
+		return -EBUSY;
+	}
+
 	ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
 				MIGRATE_SYNC, MR_MEMORY_FAILURE);
 	if (ret) {
-- 
1.9.3


* [PATCH v3 3/3] mm: hugetlb: cleanup using PageHugeActive flag
  2015-03-31  8:50 [PATCH 0/3] hugetlb fixlet v3 Naoya Horiguchi
  2015-03-31  8:50 ` [PATCH v3 1/3] mm: don't call __page_cache_release for hugetlb Naoya Horiguchi
@ 2015-03-31  8:50 ` Naoya Horiguchi
  2015-03-31 21:08   ` Andrew Morton
  2015-04-01 16:36   ` Michal Hocko
  2015-03-31  8:50 ` [PATCH v3 2/3] mm: hugetlb: introduce " Naoya Horiguchi
  2 siblings, 2 replies; 11+ messages in thread
From: Naoya Horiguchi @ 2015-03-31  8:50 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Hugh Dickins, Michal Hocko, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

Now that we have easy access to a hugepage's activeness, the existing helpers
for retrieving that information can be cleaned up.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 include/linux/hugetlb.h |  8 ++++++--
 mm/hugetlb.c            | 42 +++++-------------------------------------
 mm/memory_hotplug.c     |  2 +-
 3 files changed, 12 insertions(+), 40 deletions(-)

diff --git v4.0-rc6.orig/include/linux/hugetlb.h v4.0-rc6/include/linux/hugetlb.h
index 7b5785032049..8494abed02a5 100644
--- v4.0-rc6.orig/include/linux/hugetlb.h
+++ v4.0-rc6/include/linux/hugetlb.h
@@ -42,6 +42,7 @@ struct hugepage_subpool *hugepage_new_subpool(long nr_blocks);
 void hugepage_put_subpool(struct hugepage_subpool *spool);
 
 int PageHuge(struct page *page);
+int PageHugeActive(struct page *page);
 
 void reset_vma_resv_huge_pages(struct vm_area_struct *vma);
 int hugetlb_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
@@ -79,7 +80,6 @@ void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed);
 int dequeue_hwpoisoned_huge_page(struct page *page);
 bool isolate_huge_page(struct page *page, struct list_head *list);
 void putback_active_hugepage(struct page *page);
-bool is_hugepage_active(struct page *page);
 void free_huge_page(struct page *page);
 
 #ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
@@ -114,6 +114,11 @@ static inline int PageHuge(struct page *page)
 	return 0;
 }
 
+static inline int PageHugeActive(struct page *page)
+{
+	return 0;
+}
+
 static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
 {
 }
@@ -152,7 +157,6 @@ static inline bool isolate_huge_page(struct page *page, struct list_head *list)
 	return false;
 }
 #define putback_active_hugepage(p)	do {} while (0)
-#define is_hugepage_active(x)	false
 
 static inline unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot)
diff --git v4.0-rc6.orig/mm/hugetlb.c v4.0-rc6/mm/hugetlb.c
index 05e0233d30d7..8e1c46affc59 100644
--- v4.0-rc6.orig/mm/hugetlb.c
+++ v4.0-rc6/mm/hugetlb.c
@@ -3795,20 +3795,6 @@ follow_huge_pud(struct mm_struct *mm, unsigned long address,
 
 #ifdef CONFIG_MEMORY_FAILURE
 
-/* Should be called in hugetlb_lock */
-static int is_hugepage_on_freelist(struct page *hpage)
-{
-	struct page *page;
-	struct page *tmp;
-	struct hstate *h = page_hstate(hpage);
-	int nid = page_to_nid(hpage);
-
-	list_for_each_entry_safe(page, tmp, &h->hugepage_freelists[nid], lru)
-		if (page == hpage)
-			return 1;
-	return 0;
-}
-
 /*
  * This function is called from memory failure code.
  * Assume the caller holds page lock of the head page.
@@ -3820,7 +3806,11 @@ int dequeue_hwpoisoned_huge_page(struct page *hpage)
 	int ret = -EBUSY;
 
 	spin_lock(&hugetlb_lock);
-	if (is_hugepage_on_freelist(hpage)) {
+	/*
+	 * Just checking !PageHugeActive is not enough, because that could be
 +	 * an isolated/hwpoisoned hugepage (which has a >0 refcount).
+	 */
+	if (!PageHugeActive(hpage) && !page_count(hpage)) {
 		/*
 		 * Hwpoisoned hugepage isn't linked to activelist or freelist,
 		 * but dangling hpage->lru can trigger list-debug warnings
@@ -3864,25 +3854,3 @@ void putback_active_hugepage(struct page *page)
 	spin_unlock(&hugetlb_lock);
 	put_page(page);
 }
-
-bool is_hugepage_active(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	/*
-	 * This function can be called for a tail page because the caller,
-	 * scan_movable_pages, scans through a given pfn-range which typically
-	 * covers one memory block. In systems using gigantic hugepage (1GB
-	 * for x86_64,) a hugepage is larger than a memory block, and we don't
-	 * support migrating such large hugepages for now, so return false
-	 * when called for tail pages.
-	 */
-	if (PageTail(page))
-		return false;
-	/*
-	 * Refcount of a hwpoisoned hugepages is 1, but they are not active,
-	 * so we should return false for them.
-	 */
-	if (unlikely(PageHWPoison(page)))
-		return false;
-	return page_count(page) > 0;
-}
diff --git v4.0-rc6.orig/mm/memory_hotplug.c v4.0-rc6/mm/memory_hotplug.c
index 65842d688b7c..2d53388c0715 100644
--- v4.0-rc6.orig/mm/memory_hotplug.c
+++ v4.0-rc6/mm/memory_hotplug.c
@@ -1376,7 +1376,7 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 			if (PageLRU(page))
 				return pfn;
 			if (PageHuge(page)) {
-				if (is_hugepage_active(page))
+				if (PageHugeActive(page))
 					return pfn;
 				else
 					pfn = round_up(pfn + 1,
-- 
1.9.3


* Re: [PATCH v3 2/3] mm: hugetlb: introduce PageHugeActive flag
  2015-03-31  8:50 ` [PATCH v3 2/3] mm: hugetlb: introduce " Naoya Horiguchi
@ 2015-03-31 21:06   ` Andrew Morton
  2015-04-01  1:40     ` Naoya Horiguchi
  2015-04-01 16:30   ` Michal Hocko
  1 sibling, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2015-03-31 21:06 UTC (permalink / raw)
  To: Naoya Horiguchi
  Cc: Hugh Dickins, Michal Hocko, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

On Tue, 31 Mar 2015 08:50:46 +0000 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:

> Concurrent calls of isolate_huge_page() on the same hugepage are not safe;
> they can put the victim hugepage into an invalid state and trigger a BUG_ON().
> 
> The root problem is that struct page carries no (easily accessible)
> information about a hugepage's activeness. Note that a hugepage's
> activeness just means being linked to hstate->hugepage_activelist, which is
> not the same as a normal page's activeness represented by the PageActive flag.
> 
> Normal pages are isolated by isolate_lru_page(), which prechecks PageLRU
> before isolation, so let's do the same for hugetlb with a new PageHugeActive flag.
> 
> Set/ClearPageHugeActive should be called within hugetlb_lock. hugetlb_cow()
> and hugetlb_no_page() don't do this, which is justified because in these
> functions SetPageHugeActive is called right after the hugepage is allocated
> and no other thread can try to isolate it yet.
> 
> ...
>
> +/*
> + * Page flag to show that the hugepage is "active/in-use" (i.e. being linked to
> + * hstate->hugepage_activelist.)
> + *
> + * This function can be called for tail pages, but never returns true for them.
> + */
> +int PageHugeActive(struct page *page)
> +{
> +	VM_BUG_ON_PAGE(!PageHuge(page), page);
> +	return PageHead(page) && PagePrivate(&page[1]);
> +}

This is not a "page flag".  It is a regular old C function which tests
for a certain page state.  Yes, it kind of pretends to act like a
separate page flag but its use of the peculiar naming convention is
rather misleading.

I mean, if you see

	SetPageHugeActive(page);

then you expect that to be doing set_bit(PG_hugeactive, &page->flags)
but that isn't what it does, so the name is misleading.

Agree?
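Andrew's expectation can be illustrated with a simplified model. This is not
the real <linux/page-flags.h>; PG_hugeactive is a hypothetical bit number and
the struct is a stand-in. A CamelCase SetPageFoo() accessor normally flips a
dedicated bit in that page's own flags word, whereas the patch's helper flips
a bit on a *different* page (page[1]):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the kernel's struct page. */
struct page { unsigned long flags; };

/* What the naming convention promises: a dedicated bit in *this* page's
 * flags word (hypothetical bit number, for illustration). */
enum { PG_hugeactive = 3 };

static void SetPageHugeActive_expected(struct page *p)
{
	p->flags |= 1UL << PG_hugeactive;	/* ~ set_bit(PG_hugeactive, &p->flags) */
}

static bool PageHugeActive_expected(struct page *p)
{
	return p->flags & (1UL << PG_hugeactive);
}

/* What the patch actually does: encode the state in the Private bit of the
 * first tail page, consuming no new bit in the head page's flags. */
enum { PG_private = 1 };

static void set_page_huge_active(struct page *head)
{
	head[1].flags |= 1UL << PG_private;
}
```

The two behave differently enough that the flag-style name misleads, which is
why the lowercase page_huge_active() spelling was adopted.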

From: Andrew Morton <akpm@linux-foundation.org>
Subject: mm-hugetlb-introduce-pagehugeactive-flag-fix

s/PageHugeActive/page_huge_active/, make it return bool

Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c |   22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff -puN mm/hugetlb.c~mm-hugetlb-introduce-pagehugeactive-flag-fix mm/hugetlb.c
--- a/mm/hugetlb.c~mm-hugetlb-introduce-pagehugeactive-flag-fix
+++ a/mm/hugetlb.c
@@ -925,25 +925,25 @@ struct hstate *size_to_hstate(unsigned l
 }
 
 /*
- * Page flag to show that the hugepage is "active/in-use" (i.e. being linked to
- * hstate->hugepage_activelist.)
+ * Test to determine whether the hugepage is "active/in-use" (i.e. being linked
+ * to hstate->hugepage_activelist.)
  *
  * This function can be called for tail pages, but never returns true for them.
  */
-int PageHugeActive(struct page *page)
+bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
 	return PageHead(page) && PagePrivate(&page[1]);
 }
 
 /* never called for tail page */
-void SetPageHugeActive(struct page *page)
+void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
 	SetPagePrivate(&page[1]);
 }
 
-void ClearPageHugeActive(struct page *page)
+void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
 	ClearPagePrivate(&page[1]);
@@ -977,7 +977,7 @@ void free_huge_page(struct page *page)
 		restore_reserve = true;
 
 	spin_lock(&hugetlb_lock);
-	ClearPageHugeActive(page);
+	clear_page_huge_active(page);
 	hugetlb_cgroup_uncharge_page(hstate_index(h),
 				     pages_per_huge_page(h), page);
 	if (restore_reserve)
@@ -2998,7 +2998,7 @@ retry_avoidcopy:
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
-	SetPageHugeActive(new_page);
+	set_page_huge_active(new_page);
 
 	mmun_start = address & huge_page_mask(h);
 	mmun_end = mmun_start + huge_page_size(h);
@@ -3111,7 +3111,7 @@ retry:
 		}
 		clear_huge_page(page, address, pages_per_huge_page(h));
 		__SetPageUptodate(page);
-		SetPageHugeActive(page);
+		set_page_huge_active(page);
 
 		if (vma->vm_flags & VM_MAYSHARE) {
 			int err;
@@ -3946,11 +3946,11 @@ bool isolate_huge_page(struct page *page
 
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	spin_lock(&hugetlb_lock);
-	if (!PageHugeActive(page) || !get_page_unless_zero(page)) {
+	if (!page_huge_active(page) || !get_page_unless_zero(page)) {
 		ret = false;
 		goto unlock;
 	}
-	ClearPageHugeActive(page);
+	clear_page_huge_active(page);
 	list_move_tail(&page->lru, list);
 unlock:
 	spin_unlock(&hugetlb_lock);
@@ -3961,7 +3961,7 @@ void putback_active_hugepage(struct page
 {
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	spin_lock(&hugetlb_lock);
-	SetPageHugeActive(page);
+	set_page_huge_active(page);
 	list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
 	spin_unlock(&hugetlb_lock);
 	put_page(page);



* Re: [PATCH v3 3/3] mm: hugetlb: cleanup using PageHugeActive flag
  2015-03-31  8:50 ` [PATCH v3 3/3] mm: hugetlb: cleanup using PageHugeActive flag Naoya Horiguchi
@ 2015-03-31 21:08   ` Andrew Morton
  2015-04-01  3:27     ` Naoya Horiguchi
  2015-04-01 16:36   ` Michal Hocko
  1 sibling, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2015-03-31 21:08 UTC (permalink / raw)
  To: Naoya Horiguchi
  Cc: Hugh Dickins, Michal Hocko, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

On Tue, 31 Mar 2015 08:50:46 +0000 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:

> Now that we have easy access to a hugepage's activeness, the existing helpers
> for retrieving that information can be cleaned up.

Similarly.  Also I adapted the code to fit in with
http://ozlabs.org/~akpm/mmots/broken-out/mm-consolidate-all-page-flags-helpers-in-linux-page-flagsh.patch


From: Andrew Morton <akpm@linux-foundation.org>
Subject: mm-hugetlb-cleanup-using-pagehugeactive-flag-fix

s/PageHugeActive/page_huge_active/

Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/hugetlb.h    |    7 -------
 include/linux/page-flags.h |    7 +++++++
 mm/hugetlb.c               |    4 ++--
 mm/memory_hotplug.c        |    2 +-
 4 files changed, 10 insertions(+), 10 deletions(-)

diff -puN include/linux/hugetlb.h~mm-hugetlb-cleanup-using-pagehugeactive-flag-fix include/linux/hugetlb.h
--- a/include/linux/hugetlb.h~mm-hugetlb-cleanup-using-pagehugeactive-flag-fix
+++ a/include/linux/hugetlb.h
@@ -44,8 +44,6 @@ extern int hugetlb_max_hstate __read_mos
 #define for_each_hstate(h) \
 	for ((h) = hstates; (h) < &hstates[hugetlb_max_hstate]; (h)++)
 
-int PageHugeActive(struct page *page);
-
 struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
 						long min_hpages);
 void hugepage_put_subpool(struct hugepage_subpool *spool);
@@ -115,11 +113,6 @@ unsigned long hugetlb_change_protection(
 
 #else /* !CONFIG_HUGETLB_PAGE */
 
-static inline int PageHugeActive(struct page *page)
-{
-	return 0;
-}
-
 static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
 {
 }
diff -puN mm/hugetlb.c~mm-hugetlb-cleanup-using-pagehugeactive-flag-fix mm/hugetlb.c
--- a/mm/hugetlb.c~mm-hugetlb-cleanup-using-pagehugeactive-flag-fix
+++ a/mm/hugetlb.c
@@ -3909,10 +3909,10 @@ int dequeue_hwpoisoned_huge_page(struct
 
 	spin_lock(&hugetlb_lock);
 	/*
-	 * Just checking !PageHugeActive is not enough, because that could be
+	 * Just checking !page_huge_active is not enough, because that could be
 	 * an isolated/hwpoisoned hugepage (which has a >0 refcount).
 	 */
-	if (!PageHugeActive(hpage) && !page_count(hpage)) {
+	if (!page_huge_active(hpage) && !page_count(hpage)) {
 		/*
 		 * Hwpoisoned hugepage isn't linked to activelist or freelist,
 		 * but dangling hpage->lru can trigger list-debug warnings
diff -puN mm/memory_hotplug.c~mm-hugetlb-cleanup-using-pagehugeactive-flag-fix mm/memory_hotplug.c
--- a/mm/memory_hotplug.c~mm-hugetlb-cleanup-using-pagehugeactive-flag-fix
+++ a/mm/memory_hotplug.c
@@ -1373,7 +1373,7 @@ static unsigned long scan_movable_pages(
 			if (PageLRU(page))
 				return pfn;
 			if (PageHuge(page)) {
-				if (PageHugeActive(page))
+				if (page_huge_active(page))
 					return pfn;
 				else
 					pfn = round_up(pfn + 1,
diff -puN include/linux/page-flags.h~mm-hugetlb-cleanup-using-pagehugeactive-flag-fix include/linux/page-flags.h
--- a/include/linux/page-flags.h~mm-hugetlb-cleanup-using-pagehugeactive-flag-fix
+++ a/include/linux/page-flags.h
@@ -549,11 +549,18 @@ static inline void ClearPageCompound(str
 #ifdef CONFIG_HUGETLB_PAGE
 int PageHuge(struct page *page);
 int PageHeadHuge(struct page *page);
+bool page_huge_active(struct page *page);
 #else
 TESTPAGEFLAG_FALSE(Huge)
 TESTPAGEFLAG_FALSE(HeadHuge)
+
+static inline bool page_huge_active(struct page *page)
+{
 +	return false;
+}
 #endif
 
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /*
  * PageHuge() only returns true for hugetlbfs pages, but not for
_



* Re: [PATCH v3 2/3] mm: hugetlb: introduce PageHugeActive flag
  2015-03-31 21:06   ` Andrew Morton
@ 2015-04-01  1:40     ` Naoya Horiguchi
  0 siblings, 0 replies; 11+ messages in thread
From: Naoya Horiguchi @ 2015-04-01  1:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Hugh Dickins, Michal Hocko, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

On Tue, Mar 31, 2015 at 02:06:10PM -0700, Andrew Morton wrote:
> On Tue, 31 Mar 2015 08:50:46 +0000 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:
> 
> > Concurrent calls of isolate_huge_page() on the same hugepage are not safe;
> > they can put the victim hugepage into an invalid state and trigger a BUG_ON().
> > 
> > The root problem is that struct page carries no (easily accessible)
> > information about a hugepage's activeness. Note that a hugepage's
> > activeness just means being linked to hstate->hugepage_activelist, which is
> > not the same as a normal page's activeness represented by the PageActive flag.
> > 
> > Normal pages are isolated by isolate_lru_page(), which prechecks PageLRU
> > before isolation, so let's do the same for hugetlb with a new PageHugeActive flag.
> > 
> > Set/ClearPageHugeActive should be called within hugetlb_lock. hugetlb_cow()
> > and hugetlb_no_page() don't do this, which is justified because in these
> > functions SetPageHugeActive is called right after the hugepage is allocated
> > and no other thread can try to isolate it yet.
> > 
> > ...
> >
> > +/*
> > + * Page flag to show that the hugepage is "active/in-use" (i.e. being linked to
> > + * hstate->hugepage_activelist.)
> > + *
> > + * This function can be called for tail pages, but never returns true for them.
> > + */
> > +int PageHugeActive(struct page *page)
> > +{
> > +	VM_BUG_ON_PAGE(!PageHuge(page), page);
> > +	return PageHead(page) && PagePrivate(&page[1]);
> > +}
> 
> This is not a "page flag".  It is a regular old C function which tests
> for a certain page state.  Yes, it kind of pretends to act like a
> separate page flag but its use of the peculiar naming convention is
> rather misleading.
> 
> I mean, if you see
> 
> 	SetPageHugeActive(page);
> 
> then you expect that to be doing set_bit(PG_hugeactive, &page->flags)
> but that isn't what it does, so the name is misleading.
> 
> Agree?

Yes, I agree; page_huge_active() is fine with me.

> From: Andrew Morton <akpm@linux-foundation.org>
> Subject: mm-hugetlb-introduce-pagehugeactive-flag-fix
> 
> s/PageHugeActive/page_huge_active/, make it return bool
> 
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Michal Hocko <mhocko@suse.cz>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: David Rientjes <rientjes@google.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
> 
>  mm/hugetlb.c |   22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
> 
> diff -puN mm/hugetlb.c~mm-hugetlb-introduce-pagehugeactive-flag-fix mm/hugetlb.c
> --- a/mm/hugetlb.c~mm-hugetlb-introduce-pagehugeactive-flag-fix
> +++ a/mm/hugetlb.c
> @@ -925,25 +925,25 @@ struct hstate *size_to_hstate(unsigned l
>  }
>  
>  /*
> - * Page flag to show that the hugepage is "active/in-use" (i.e. being linked to
> - * hstate->hugepage_activelist.)
> + * Test to determine whether the hugepage is "active/in-use" (i.e. being linked
> + * to hstate->hugepage_activelist.)
>   *
>   * This function can be called for tail pages, but never returns true for them.
>   */
> -int PageHugeActive(struct page *page)
> +bool page_huge_active(struct page *page)
>  {
>  	VM_BUG_ON_PAGE(!PageHuge(page), page);
>  	return PageHead(page) && PagePrivate(&page[1]);
>  }
>  
>  /* never called for tail page */
> -void SetPageHugeActive(struct page *page)
> +void set_page_huge_active(struct page *page)
>  {
>  	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
>  	SetPagePrivate(&page[1]);
>  }
>  
> -void ClearPageHugeActive(struct page *page)
> +void clear_page_huge_active(struct page *page)
>  {
>  	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
>  	ClearPagePrivate(&page[1]);
> @@ -977,7 +977,7 @@ void free_huge_page(struct page *page)
>  		restore_reserve = true;
>  
>  	spin_lock(&hugetlb_lock);
> -	ClearPageHugeActive(page);
> +	clear_page_huge_active(page);
>  	hugetlb_cgroup_uncharge_page(hstate_index(h),
>  				     pages_per_huge_page(h), page);
>  	if (restore_reserve)
> @@ -2998,7 +2998,7 @@ retry_avoidcopy:
>  	copy_user_huge_page(new_page, old_page, address, vma,
>  			    pages_per_huge_page(h));
>  	__SetPageUptodate(new_page);
> -	SetPageHugeActive(new_page);
> +	set_page_huge_active(new_page);
>  
>  	mmun_start = address & huge_page_mask(h);
>  	mmun_end = mmun_start + huge_page_size(h);
> @@ -3111,7 +3111,7 @@ retry:
>  		}
>  		clear_huge_page(page, address, pages_per_huge_page(h));
>  		__SetPageUptodate(page);
> -		SetPageHugeActive(page);
> +		set_page_huge_active(page);
>  
>  		if (vma->vm_flags & VM_MAYSHARE) {
>  			int err;
> @@ -3946,11 +3946,11 @@ bool isolate_huge_page(struct page *page
>  
>  	VM_BUG_ON_PAGE(!PageHead(page), page);
>  	spin_lock(&hugetlb_lock);
> -	if (!PageHugeActive(page) || !get_page_unless_zero(page)) {
> +	if (!page_huge_active(page) || !get_page_unless_zero(page)) {
>  		ret = false;
>  		goto unlock;
>  	}
> -	ClearPageHugeActive(page);
> +	clear_page_huge_active(page);
>  	list_move_tail(&page->lru, list);
>  unlock:
>  	spin_unlock(&hugetlb_lock);
> @@ -3961,7 +3961,7 @@ void putback_active_hugepage(struct page
>  {
>  	VM_BUG_ON_PAGE(!PageHead(page), page);
>  	spin_lock(&hugetlb_lock);
> -	SetPageHugeActive(page);
> +	set_page_huge_active(page);
>  	list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
>  	spin_unlock(&hugetlb_lock);
>  	put_page(page);
> 


* Re: [PATCH v3 3/3] mm: hugetlb: cleanup using PageHugeActive flag
  2015-03-31 21:08   ` Andrew Morton
@ 2015-04-01  3:27     ` Naoya Horiguchi
  0 siblings, 0 replies; 11+ messages in thread
From: Naoya Horiguchi @ 2015-04-01  3:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Hugh Dickins, Michal Hocko, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi


On Tue, Mar 31, 2015 at 02:08:14PM -0700, Andrew Morton wrote:
> On Tue, 31 Mar 2015 08:50:46 +0000 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:
> 
> > Now that we have easy access to a hugepage's activeness, the existing helpers
> > for retrieving that information can be cleaned up.
> 
> Similarly.  Also I adapted the code to fit in with
> http://ozlabs.org/~akpm/mmots/broken-out/mm-consolidate-all-page-flags-helpers-in-linux-page-flagsh.patch

Thanks, moving the declaration/definition to include/linux/page-flags.h is OK.


* Re: [PATCH v3 1/3] mm: don't call __page_cache_release for hugetlb
  2015-03-31  8:50 ` [PATCH v3 1/3] mm: don't call __page_cache_release for hugetlb Naoya Horiguchi
@ 2015-04-01 16:13   ` Michal Hocko
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2015-04-01 16:13 UTC (permalink / raw)
  To: Naoya Horiguchi
  Cc: Andrew Morton, Hugh Dickins, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

On Tue 31-03-15 08:50:45, Naoya Horiguchi wrote:
> __put_compound_page() calls __page_cache_release() to do some freeing work,
> but that is clearly meant for thp, not for hugetlb. We have gotten away with it
> because PageLRU is always cleared and page->mem_cgroup is always NULL for hugetlb.
> But it's not correct and carries potential risks, so let's make the call conditional.
> 
> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>

Reviewed-by: Michal Hocko <mhocko@suse.cz>

> ---
>  mm/swap.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git v4.0-rc6.orig/mm/swap.c v4.0-rc6/mm/swap.c
> index cd3a5e64cea9..8e46823c3319 100644
> --- v4.0-rc6.orig/mm/swap.c
> +++ v4.0-rc6/mm/swap.c
> @@ -31,6 +31,7 @@
>  #include <linux/memcontrol.h>
>  #include <linux/gfp.h>
>  #include <linux/uio.h>
> +#include <linux/hugetlb.h>
>  
>  #include "internal.h"
>  
> @@ -75,7 +76,14 @@ static void __put_compound_page(struct page *page)
>  {
>  	compound_page_dtor *dtor;
>  
> -	__page_cache_release(page);
> +	/*
> +	 * __page_cache_release() is supposed to be called for thp, not for
> +	 * hugetlb. This is because a hugetlb page never has PageLRU set
> +	 * (it is never put on any LRU list) and no memcg routines should
> +	 * be called for hugetlb (it has a separate hugetlb_cgroup).
> +	 */
> +	if (!PageHuge(page))
> +		__page_cache_release(page);
>  	dtor = get_compound_page_dtor(page);
>  	(*dtor)(page);
>  }
> -- 
> 1.9.3

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v3 2/3] mm: hugetlb: introduce PageHugeActive flag
  2015-03-31  8:50 ` [PATCH v3 2/3] mm: hugetlb: introduce " Naoya Horiguchi
  2015-03-31 21:06   ` Andrew Morton
@ 2015-04-01 16:30   ` Michal Hocko
  1 sibling, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2015-04-01 16:30 UTC (permalink / raw)
  To: Naoya Horiguchi
  Cc: Andrew Morton, Hugh Dickins, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

On Tue 31-03-15 08:50:46, Naoya Horiguchi wrote:
> Calling isolate_huge_page() on the same hugepage concurrently is not safe:
> it can leave the victim hugepage in an invalid state and trigger a BUG_ON().

It would be better to be specific about which BUG_ON this would be. I guess
you meant different BUG_ONs depending on how the race plays out.
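For illustration only, here is a userspace sketch (all names hypothetical, not
kernel code) of the double-isolation problem the patch closes. Since the two
isolation attempts serialize on hugetlb_lock anyway, modeling them
sequentially is enough to show the semantic race: before the patch, a
refcount check alone lets both callers "succeed", so each believes it owns the
page and each later drops a reference.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct page; only the fields the sketch needs. */
struct fake_page {
	int refcount;	/* models the page refcount */
	bool active;	/* models the new PageHugeActive bit */
};

/* Old behavior: only the refcount is checked, so concurrent callers all pass. */
static bool isolate_old(struct fake_page *p)
{
	if (p->refcount == 0)	/* models get_page_unless_zero() failing */
		return false;
	p->refcount++;		/* each caller takes its own reference */
	return true;
}

/* New behavior: the first caller clears the active bit, blocking the rest. */
static bool isolate_new(struct fake_page *p)
{
	if (!p->active || p->refcount == 0)
		return false;
	p->refcount++;
	p->active = false;	/* models ClearPageHugeActive() under the lock */
	return true;
}
```

With isolate_old() a second caller isolates an already-isolated page; with
isolate_new() it correctly fails and can back off.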

> The root problem is that we don't keep any easily accessible information on
> struct page about a hugepage's activeness. Note that a hugepage's activeness
> just means being linked to hstate->hugepage_activelist, which is not the same
> as a normal page's activeness represented by the PageActive flag.
> 
> Normal pages are isolated by isolate_lru_page(), which prechecks PageLRU
> before isolating, so let's do the same for hugetlb with a new PageHugeActive
> flag.
> 
> Set/ClearPageHugeActive should be called within hugetlb_lock. hugetlb_cow()
> and hugetlb_no_page() don't do this, which is justified because in these
> functions SetPageHugeActive is called right after the hugepage is allocated
> and no other thread can be trying to isolate it yet.
> 
> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>

The patch itself makes sense to me (even after Andrew's suggestions)

Reviewed-by: Michal Hocko <mhocko@suse.cz>

> ---
> ChangeLog v2->v3:
> - Use PagePrivate of the first tail page to show hugepage's activeness instead
>   of PageLRU
> - drop ClearPageLRU in dequeue_hwpoisoned_huge_page() (which was wrong)
> - fix return value of isolate_huge_page() (using ret)
> - move __put_compound_page() part to a separate patch
> - drop "Cc: stable" tag because this is not a simple fix
> 
> ChangeLog v1->v2:
> - call isolate_huge_page() in soft_offline_huge_page() instead of list_move()
> ---
>  mm/hugetlb.c        | 41 ++++++++++++++++++++++++++++++++++++++---
>  mm/memory-failure.c | 14 ++++++++++++--
>  2 files changed, 50 insertions(+), 5 deletions(-)
> 
> diff --git v4.0-rc6.orig/mm/hugetlb.c v4.0-rc6/mm/hugetlb.c
> index c41b2a0ee273..05e0233d30d7 100644
> --- v4.0-rc6.orig/mm/hugetlb.c
> +++ v4.0-rc6/mm/hugetlb.c
> @@ -855,6 +855,31 @@ struct hstate *size_to_hstate(unsigned long size)
>  	return NULL;
>  }
>  
> +/*
> + * Page flag to show that the hugepage is "active/in-use" (i.e. being linked to
> + * hstate->hugepage_activelist.)
> + *
> + * This function can be called for tail pages, but never returns true for them.
> + */
> +int PageHugeActive(struct page *page)
> +{
> +	VM_BUG_ON_PAGE(!PageHuge(page), page);
> +	return PageHead(page) && PagePrivate(&page[1]);
> +}
> +
> +/* never called for tail page */
> +void SetPageHugeActive(struct page *page)
> +{
> +	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
> +	SetPagePrivate(&page[1]);
> +}
> +
> +void ClearPageHugeActive(struct page *page)
> +{
> +	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
> +	ClearPagePrivate(&page[1]);
> +}
> +
>  void free_huge_page(struct page *page)
>  {
>  	/*
> @@ -875,6 +900,7 @@ void free_huge_page(struct page *page)
>  	ClearPagePrivate(page);
>  
>  	spin_lock(&hugetlb_lock);
> +	ClearPageHugeActive(page);
>  	hugetlb_cgroup_uncharge_page(hstate_index(h),
>  				     pages_per_huge_page(h), page);
>  	if (restore_reserve)
> @@ -2891,6 +2917,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
>  	copy_user_huge_page(new_page, old_page, address, vma,
>  			    pages_per_huge_page(h));
>  	__SetPageUptodate(new_page);
> +	SetPageHugeActive(new_page);
>  
>  	mmun_start = address & huge_page_mask(h);
>  	mmun_end = mmun_start + huge_page_size(h);
> @@ -3003,6 +3030,7 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  		}
>  		clear_huge_page(page, address, pages_per_huge_page(h));
>  		__SetPageUptodate(page);
> +		SetPageHugeActive(page);
>  
>  		if (vma->vm_flags & VM_MAYSHARE) {
>  			int err;
> @@ -3812,19 +3840,26 @@ int dequeue_hwpoisoned_huge_page(struct page *hpage)
>  
>  bool isolate_huge_page(struct page *page, struct list_head *list)
>  {
> +	bool ret = true;
> +
>  	VM_BUG_ON_PAGE(!PageHead(page), page);
> -	if (!get_page_unless_zero(page))
> -		return false;
>  	spin_lock(&hugetlb_lock);
> +	if (!PageHugeActive(page) || !get_page_unless_zero(page)) {
> +		ret = false;
> +		goto unlock;
> +	}
> +	ClearPageHugeActive(page);
>  	list_move_tail(&page->lru, list);
> +unlock:
>  	spin_unlock(&hugetlb_lock);
> -	return true;
> +	return ret;
>  }
>  
>  void putback_active_hugepage(struct page *page)
>  {
>  	VM_BUG_ON_PAGE(!PageHead(page), page);
>  	spin_lock(&hugetlb_lock);
> +	SetPageHugeActive(page);
>  	list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
>  	spin_unlock(&hugetlb_lock);
>  	put_page(page);
> diff --git v4.0-rc6.orig/mm/memory-failure.c v4.0-rc6/mm/memory-failure.c
> index d487f8dc6d39..1d86cca8de26 100644
> --- v4.0-rc6.orig/mm/memory-failure.c
> +++ v4.0-rc6/mm/memory-failure.c
> @@ -1540,8 +1540,18 @@ static int soft_offline_huge_page(struct page *page, int flags)
>  	}
>  	unlock_page(hpage);
>  
> -	/* Keep page count to indicate a given hugepage is isolated. */
> -	list_move(&hpage->lru, &pagelist);
> +	ret = isolate_huge_page(hpage, &pagelist);
> +	if (ret) {
> +		/*
> +		 * get_any_page() and isolate_huge_page() take a refcount each,
> +		 * so we need to drop one here.
> +		 */
> +		put_page(hpage);
> +	} else {
> +		pr_info("soft offline: %#lx hugepage failed to isolate\n", pfn);
> +		return -EBUSY;
> +	}
> +
>  	ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
>  				MIGRATE_SYNC, MR_MEMORY_FAILURE);
>  	if (ret) {
> -- 
> 1.9.3

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread
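
The trick the new helpers rely on — parking the flag in PagePrivate of the
first tail page, since the head page's private bit is already used for the
reservation bookkeeping — can be sketched outside the kernel as follows (a
toy model with hypothetical names, not the kernel implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* A compound hugepage is an array of struct page; model just two bits. */
struct fake_page {
	bool head;	/* models PageHead() */
	bool priv;	/* models the PG_private bit */
};

static bool page_huge_active(struct fake_page *page)
{
	/* Safe to call on tail pages; they simply report false. */
	return page->head && page[1].priv;
}

static void set_page_huge_active(struct fake_page *page)
{
	page[1].priv = true;	/* stash the bit on the first tail page */
}

static void clear_page_huge_active(struct fake_page *page)
{
	page[1].priv = false;
}
```

The tail-page query returning false matches the documented behavior of
PageHugeActive() in patch 2/3 ("can be called for tail pages, but never
returns true for them").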

* Re: [PATCH v3 3/3] mm: hugetlb: cleanup using PageHugeActive flag
  2015-03-31  8:50 ` [PATCH v3 3/3] mm: hugetlb: cleanup using PageHugeActive flag Naoya Horiguchi
  2015-03-31 21:08   ` Andrew Morton
@ 2015-04-01 16:36   ` Michal Hocko
  1 sibling, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2015-04-01 16:36 UTC (permalink / raw)
  To: Naoya Horiguchi
  Cc: Andrew Morton, Hugh Dickins, Mel Gorman, Johannes Weiner,
	David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

On Tue 31-03-15 08:50:46, Naoya Horiguchi wrote:
> Now we have an easy access to hugepages' activeness, so existing helpers to
> get the information can be cleaned up.
> 
> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>

Reviewed-by: Michal Hocko <mhocko@suse.cz>

> ---
>  include/linux/hugetlb.h |  8 ++++++--
>  mm/hugetlb.c            | 42 +++++-------------------------------------
>  mm/memory_hotplug.c     |  2 +-
>  3 files changed, 12 insertions(+), 40 deletions(-)
> 
> diff --git v4.0-rc6.orig/include/linux/hugetlb.h v4.0-rc6/include/linux/hugetlb.h
> index 7b5785032049..8494abed02a5 100644
> --- v4.0-rc6.orig/include/linux/hugetlb.h
> +++ v4.0-rc6/include/linux/hugetlb.h
> @@ -42,6 +42,7 @@ struct hugepage_subpool *hugepage_new_subpool(long nr_blocks);
>  void hugepage_put_subpool(struct hugepage_subpool *spool);
>  
>  int PageHuge(struct page *page);
> +int PageHugeActive(struct page *page);
>  
>  void reset_vma_resv_huge_pages(struct vm_area_struct *vma);
>  int hugetlb_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
> @@ -79,7 +80,6 @@ void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed);
>  int dequeue_hwpoisoned_huge_page(struct page *page);
>  bool isolate_huge_page(struct page *page, struct list_head *list);
>  void putback_active_hugepage(struct page *page);
> -bool is_hugepage_active(struct page *page);
>  void free_huge_page(struct page *page);
>  
>  #ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
> @@ -114,6 +114,11 @@ static inline int PageHuge(struct page *page)
>  	return 0;
>  }
>  
> +static inline int PageHugeActive(struct page *page)
> +{
> +	return 0;
> +}
> +
>  static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
>  {
>  }
> @@ -152,7 +157,6 @@ static inline bool isolate_huge_page(struct page *page, struct list_head *list)
>  	return false;
>  }
>  #define putback_active_hugepage(p)	do {} while (0)
> -#define is_hugepage_active(x)	false
>  
>  static inline unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
>  		unsigned long address, unsigned long end, pgprot_t newprot)
> diff --git v4.0-rc6.orig/mm/hugetlb.c v4.0-rc6/mm/hugetlb.c
> index 05e0233d30d7..8e1c46affc59 100644
> --- v4.0-rc6.orig/mm/hugetlb.c
> +++ v4.0-rc6/mm/hugetlb.c
> @@ -3795,20 +3795,6 @@ follow_huge_pud(struct mm_struct *mm, unsigned long address,
>  
>  #ifdef CONFIG_MEMORY_FAILURE
>  
> -/* Should be called in hugetlb_lock */
> -static int is_hugepage_on_freelist(struct page *hpage)
> -{
> -	struct page *page;
> -	struct page *tmp;
> -	struct hstate *h = page_hstate(hpage);
> -	int nid = page_to_nid(hpage);
> -
> -	list_for_each_entry_safe(page, tmp, &h->hugepage_freelists[nid], lru)
> -		if (page == hpage)
> -			return 1;
> -	return 0;
> -}
> -
>  /*
>   * This function is called from memory failure code.
>   * Assume the caller holds page lock of the head page.
> @@ -3820,7 +3806,11 @@ int dequeue_hwpoisoned_huge_page(struct page *hpage)
>  	int ret = -EBUSY;
>  
>  	spin_lock(&hugetlb_lock);
> -	if (is_hugepage_on_freelist(hpage)) {
> +	/*
> +	 * Just checking !PageHugeActive is not enough, because that could be
> +	 * an isolated/hwpoisoned hugepage (which has a >0 refcount).
> +	 */
> +	if (!PageHugeActive(hpage) && !page_count(hpage)) {
>  		/*
>  		 * Hwpoisoned hugepage isn't linked to activelist or freelist,
>  		 * but dangling hpage->lru can trigger list-debug warnings
> @@ -3864,25 +3854,3 @@ void putback_active_hugepage(struct page *page)
>  	spin_unlock(&hugetlb_lock);
>  	put_page(page);
>  }
> -
> -bool is_hugepage_active(struct page *page)
> -{
> -	VM_BUG_ON_PAGE(!PageHuge(page), page);
> -	/*
> -	 * This function can be called for a tail page because the caller,
> -	 * scan_movable_pages, scans through a given pfn-range which typically
> -	 * covers one memory block. In systems using gigantic hugepage (1GB
> -	 * for x86_64,) a hugepage is larger than a memory block, and we don't
> -	 * support migrating such large hugepages for now, so return false
> -	 * when called for tail pages.
> -	 */
> -	if (PageTail(page))
> -		return false;
> -	/*
> -	 * Refcount of a hwpoisoned hugepages is 1, but they are not active,
> -	 * so we should return false for them.
> -	 */
> -	if (unlikely(PageHWPoison(page)))
> -		return false;
> -	return page_count(page) > 0;
> -}
> diff --git v4.0-rc6.orig/mm/memory_hotplug.c v4.0-rc6/mm/memory_hotplug.c
> index 65842d688b7c..2d53388c0715 100644
> --- v4.0-rc6.orig/mm/memory_hotplug.c
> +++ v4.0-rc6/mm/memory_hotplug.c
> @@ -1376,7 +1376,7 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>  			if (PageLRU(page))
>  				return pfn;
>  			if (PageHuge(page)) {
> -				if (is_hugepage_active(page))
> +				if (PageHugeActive(page))
>  					return pfn;
>  				else
>  					pfn = round_up(pfn + 1,
> -- 
> 1.9.3

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread
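
The state table behind the new check in dequeue_hwpoisoned_huge_page() can be
sketched as follows (an illustrative model with hypothetical names, not kernel
code): a hugepage that is neither active nor referenced must be sitting on a
free list, which is why the O(n) free-list walk of the old
is_hugepage_on_freelist() can be replaced by two cheap tests.

```c
#include <assert.h>
#include <stdbool.h>

struct fake_page {
	int refcount;
	bool active;	/* models PageHugeActive */
};

/* Replacement predicate for the old free-list walk. */
static bool on_freelist(const struct fake_page *p)
{
	/* Active pages and referenced (isolated/hwpoisoned) pages are excluded. */
	return !p->active && p->refcount == 0;
}
```

Note how the refcount test is what distinguishes a free hugepage from an
isolated or hwpoisoned one, per the comment added in patch 3/3.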

end of thread, other threads:[~2015-04-01 16:36 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-31  8:50 [PATCH 0/3] hugetlb fixlet v3 Naoya Horiguchi
2015-03-31  8:50 ` [PATCH v3 1/3] mm: don't call __page_cache_release for hugetlb Naoya Horiguchi
2015-04-01 16:13   ` Michal Hocko
2015-03-31  8:50 ` [PATCH v3 3/3] mm: hugetlb: cleanup using PageHugeActive flag Naoya Horiguchi
2015-03-31 21:08   ` Andrew Morton
2015-04-01  3:27     ` Naoya Horiguchi
2015-04-01 16:36   ` Michal Hocko
2015-03-31  8:50 ` [PATCH v3 2/3] mm: hugetlb: introduce " Naoya Horiguchi
2015-03-31 21:06   ` Andrew Morton
2015-04-01  1:40     ` Naoya Horiguchi
2015-04-01 16:30   ` Michal Hocko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).