LKML Archive on lore.kernel.org
From: Yu Zhao <yuzhao@google.com>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
Hugh Dickins <hughd@google.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
Matthew Wilcox <willy@infradead.org>,
Vlastimil Babka <vbabka@suse.cz>, Yang Shi <shy828301@gmail.com>,
Zi Yan <ziy@nvidia.com>,
linux-kernel@vger.kernel.org, Yu Zhao <yuzhao@google.com>,
Shuang Zhai <zhais@google.com>
Subject: [PATCH 1/3] mm: don't take lru lock when splitting isolated thp
Date: Sat, 31 Jul 2021 00:39:36 -0600 [thread overview]
Message-ID: <20210731063938.1391602-2-yuzhao@google.com> (raw)
In-Reply-To: <20210731063938.1391602-1-yuzhao@google.com>
When splitting an isolated thp under reclaim or migration, its tail
pages are added to the caller-supplied list rather than put back on
the lru, so there is no need to take the lru lock.
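For review convenience, a trimmed sketch of the locking flow in
__split_huge_page() after this patch (unrelated details elided, not
meant to compile on its own):

	static void __split_huge_page(struct page *page, struct list_head *list,
				      pgoff_t end)
	{
		struct page *head = compound_head(page);
		struct lruvec *lruvec = NULL;
		unsigned int nr = thp_nr_pages(head);
		int i;

		/* an isolated thp (list != NULL) must not be on the lru */
		VM_BUG_ON_PAGE(list && PageLRU(head), head);

		/* only lock the lru list when the thp is still on the lru */
		if (!list)
			lruvec = lock_page_lruvec(head);

		for (i = nr - 1; i >= 1; i--) {
			__split_huge_page_tail(head, i, list);
			/* ... dropping pages beyond i_size elided ... */
		}

		ClearPageCompound(head);
		if (lruvec)
			unlock_page_lruvec(lruvec);
		/* ... remap_page(), unfreezing refs, etc. unchanged ... */
	}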
Signed-off-by: Yu Zhao <yuzhao@google.com>
Tested-by: Shuang Zhai <zhais@google.com>
---
mm/huge_memory.c | 25 +++++++++++++------------
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index afff3ac87067..d8b655856e79 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2342,17 +2342,14 @@ static void remap_page(struct page *page, unsigned int nr)
}
}
-static void lru_add_page_tail(struct page *head, struct page *tail,
- struct lruvec *lruvec, struct list_head *list)
+static void lru_add_page_tail(struct page *head, struct page *tail, struct list_head *list)
{
VM_BUG_ON_PAGE(!PageHead(head), head);
VM_BUG_ON_PAGE(PageCompound(tail), head);
VM_BUG_ON_PAGE(PageLRU(tail), head);
- lockdep_assert_held(&lruvec->lru_lock);
if (list) {
- /* page reclaim is reclaiming a huge page */
- VM_WARN_ON(PageLRU(head));
+ /* page reclaim or migration is splitting an isolated thp */
get_page(tail);
list_add_tail(&tail->lru, list);
} else {
@@ -2363,8 +2360,7 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
}
}
-static void __split_huge_page_tail(struct page *head, int tail,
- struct lruvec *lruvec, struct list_head *list)
+static void __split_huge_page_tail(struct page *head, int tail, struct list_head *list)
{
struct page *page_tail = head + tail;
@@ -2425,19 +2421,21 @@ static void __split_huge_page_tail(struct page *head, int tail,
* pages to show after the currently processed elements - e.g.
* migrate_pages
*/
- lru_add_page_tail(head, page_tail, lruvec, list);
+ lru_add_page_tail(head, page_tail, list);
}
static void __split_huge_page(struct page *page, struct list_head *list,
pgoff_t end)
{
struct page *head = compound_head(page);
- struct lruvec *lruvec;
+ struct lruvec *lruvec = NULL;
struct address_space *swap_cache = NULL;
unsigned long offset = 0;
unsigned int nr = thp_nr_pages(head);
int i;
+ VM_BUG_ON_PAGE(list && PageLRU(head), head);
+
/* complete memcg works before add pages to LRU */
split_page_memcg(head, nr);
@@ -2450,10 +2448,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
}
/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
- lruvec = lock_page_lruvec(head);
+ if (!list)
+ lruvec = lock_page_lruvec(head);
for (i = nr - 1; i >= 1; i--) {
- __split_huge_page_tail(head, i, lruvec, list);
+ __split_huge_page_tail(head, i, list);
/* Some pages can be beyond i_size: drop them from page cache */
if (head[i].index >= end) {
ClearPageDirty(head + i);
@@ -2471,7 +2470,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
}
ClearPageCompound(head);
- unlock_page_lruvec(lruvec);
+ if (lruvec)
+ unlock_page_lruvec(lruvec);
/* Caller disabled irqs, so they are still disabled here */
split_page_owner(head, nr);
@@ -2645,6 +2645,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
VM_BUG_ON_PAGE(!PageLocked(head), head);
VM_BUG_ON_PAGE(!PageCompound(head), head);
+ VM_BUG_ON_PAGE(list && PageLRU(head), head);
if (PageWriteback(head))
return -EBUSY;
--
2.32.0.554.ge1b32706d8-goog
Thread overview: 21+ messages
2021-07-31 6:39 [PATCH 0/3] mm: optimize thp for reclaim and migration Yu Zhao
2021-07-31 6:39 ` Yu Zhao [this message]
2021-07-31 6:39 ` [PATCH 2/3] mm: free zapped tail pages when splitting isolated thp Yu Zhao
2021-08-04 14:22 ` Kirill A. Shutemov
2021-08-08 17:28 ` Yu Zhao
2021-08-05 0:13 ` Yang Shi
2021-08-08 17:49 ` Yu Zhao
2021-08-11 22:25 ` Yang Shi
2021-08-11 23:12 ` Yu Zhao
2021-08-13 23:24 ` Yang Shi
2021-08-13 23:56 ` Yu Zhao
2021-08-14 0:30 ` Yang Shi
2021-08-14 1:49 ` Yu Zhao
2021-08-14 2:34 ` Yang Shi
2021-07-31 6:39 ` [PATCH 3/3] mm: don't remap clean subpages " Yu Zhao
2021-07-31 9:53 ` kernel test robot
2021-07-31 15:45 ` kernel test robot
2021-08-03 11:25 ` Matthew Wilcox
2021-08-03 11:36 ` Matthew Wilcox
2021-08-08 17:21 ` Yu Zhao
2021-08-04 14:27 ` Kirill A. Shutemov