From: Huang Ying <ying.huang@intel.com>
To: linux-kernel@vger.kernel.org
Cc: Huang Ying <ying.huang@intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.com>, Rik van Riel <riel@surriel.com>,
Mel Gorman <mgorman@suse.de>,
Peter Zijlstra <peterz@infradead.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Yang Shi <shy828301@gmail.com>, Zi Yan <ziy@nvidia.com>,
Wei Xu <weixugc@google.com>, osalvador <osalvador@suse.de>,
Shakeel Butt <shakeelb@google.com>,
linux-mm@kvack.org
Subject: [PATCH -V8 2/6] memory tiering: add page promotion counter
Date: Tue, 14 Sep 2021 09:36:57 +0800
Message-ID: <20210914013701.344956-3-ying.huang@intel.com>
In-Reply-To: <20210914013701.344956-1-ying.huang@intel.com>

Add a counter to distinguish the number of pages promoted by memory
tiering from the number of pages migrated by the original inter-socket
NUMA balancing.  The counter is per-node (counted on the promotion
target node), so it can be used to identify promotion imbalance among
the NUMA nodes.
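
For reference, a minimal userspace sketch (not part of this patch) that
reads the new counter from the per-node vmstat file in sysfs.  The node
number and file path are illustrative and assume a kernel with this
patch applied; the counter is also reported, summed over all nodes, in
/proc/vmstat.

  /* Print pgpromote_success for node 0 from its sysfs vmstat file. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          FILE *f = fopen("/sys/devices/system/node/node0/vmstat", "r");
          char name[64];
          unsigned long long val;

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          while (fscanf(f, "%63s %llu", name, &val) == 2) {
                  if (!strcmp(name, "pgpromote_success"))
                          printf("node0 pgpromote_success: %llu\n", val);
          }
          fclose(f);
          return 0;
  }
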
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
include/linux/mmzone.h | 3 +++
include/linux/node.h | 5 +++++
mm/migrate.c | 11 +++++++++--
mm/vmstat.c | 3 +++
4 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6a1d79d84675..37ccd6158765 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -209,6 +209,9 @@ enum node_stat_item {
 	NR_PAGETABLE,		/* used for pagetables */
 #ifdef CONFIG_SWAP
 	NR_SWAPCACHE,
+#endif
+#ifdef CONFIG_NUMA_BALANCING
+	PGPROMOTE_SUCCESS,	/* promote successfully */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
diff --git a/include/linux/node.h b/include/linux/node.h
index 8e5a29897936..26e96fcc66af 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
 
 #define to_node(device) container_of(device, struct node, dev)
 
+static inline bool node_is_toptier(int node)
+{
+	return node_state(node, N_CPU);
+}
+
 #endif /* _LINUX_NODE_H_ */
diff --git a/mm/migrate.c b/mm/migrate.c
index a159a36dd412..6f7a6e2ef41f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2163,6 +2163,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	pg_data_t *pgdat = NODE_DATA(node);
 	int isolated;
 	int nr_remaining;
+	int nr_succeeded;
 	LIST_HEAD(migratepages);
 	new_page_t *new;
 	bool compound;
@@ -2201,7 +2202,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 
 	list_add(&page->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
-				     MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
+				     MIGRATE_ASYNC, MR_NUMA_MISPLACED,
+				     &nr_succeeded);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
 			list_del(&page->lru);
@@ -2210,8 +2212,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 			putback_lru_page(page);
 		}
 		isolated = 0;
-	} else
+	} else {
 		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    !node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+			mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
+					    nr_succeeded);
+	}
 	BUG_ON(!list_empty(&migratepages));
 
 	return isolated;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8ce2620344b2..fff0ec94d795 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_SWAP
 	"nr_swapcached",
 #endif
+#ifdef CONFIG_NUMA_BALANCING
+	"pgpromote_success",
+#endif
 
 	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
--
2.30.2