LKML Archive on lore.kernel.org
From: Feng Tang <feng.tang@intel.com>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>, David Rientjes <rientjes@google.com>, Dave Hansen <dave.hansen@intel.com>, Ben Widawsky <ben.widawsky@intel.com>
Cc: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, Andrea Arcangeli <aarcange@redhat.com>, Mel Gorman <mgorman@techsingularity.net>, Mike Kravetz <mike.kravetz@oracle.com>, Randy Dunlap <rdunlap@infradead.org>, Vlastimil Babka <vbabka@suse.cz>, Andi Kleen <ak@linux.intel.com>, Dan Williams <dan.j.williams@intel.com>, ying.huang@intel.com, Feng Tang <feng.tang@intel.com>
Subject: [PATCH v7 2/5] mm/memplicy: add page allocation function for MPOL_PREFERRED_MANY policy
Date: Tue, 3 Aug 2021 13:59:19 +0800
Message-ID: <1627970362-61305-3-git-send-email-feng.tang@intel.com>
In-Reply-To: <1627970362-61305-1-git-send-email-feng.tang@intel.com>

The semantics of MPOL_PREFERRED_MANY are similar to MPOL_PREFERRED, in that
it will first try to allocate memory from the preferred node(s), and fall
back to all nodes in the system when the first try fails.

Add a dedicated function, alloc_pages_preferred_many(), for it, just as is
done for the 'interleave' policy. It will be used by the two general memory
allocation APIs: alloc_pages() and alloc_pages_vma().

Link: https://lore.kernel.org/r/20200630212517.308045-9-ben.widawsky@intel.com
Suggested-by: Michal Hocko <mhocko@suse.com>
Originally-by: Ben Widawsky <ben.widawsky@intel.com>
Co-developed-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 72f7ff760989..a00bb1c48a15 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2166,6 +2166,27 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 	return page;
 }
 
+static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
+						int nid, struct mempolicy *pol)
+{
+	struct page *page;
+	gfp_t preferred_gfp;
+
+	/*
+	 * This is a two pass approach. The first pass will only try the
+	 * preferred nodes but skip the direct reclaim and allow the
+	 * allocation to fail, while the second pass will try all the
+	 * nodes in system.
+	 */
+	preferred_gfp = gfp | __GFP_NOWARN;
+	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
+	page = __alloc_pages(preferred_gfp, order, nid, &pol->nodes);
+	if (!page)
+		page = __alloc_pages(gfp, order, numa_node_id(), NULL);
+
+	return page;
+}
+
 /**
  * alloc_pages_vma - Allocate a page for a VMA.
  * @gfp: GFP flags.
@@ -2201,6 +2222,12 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		goto out;
 	}
 
+	if (pol->mode == MPOL_PREFERRED_MANY) {
+		page = alloc_pages_preferred_many(gfp, order, node, pol);
+		mpol_cond_put(pol);
+		goto out;
+	}
+
 	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
 		int hpage_node = node;
 
@@ -2278,6 +2305,9 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
 	 */
 	if (pol->mode == MPOL_INTERLEAVE)
 		page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
+	else if (pol->mode == MPOL_PREFERRED_MANY)
+		page = alloc_pages_preferred_many(gfp, order,
+				numa_node_id(), pol);
 	else
 		page = __alloc_pages(gfp, order,
 				policy_node(gfp, pol, numa_node_id()),
-- 
2.14.1
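[Editor's note] For readers who want to see the policy from the other side of the API, below is a minimal userspace sketch of opting into MPOL_PREFERRED_MANY once the full series is applied (patch 4/5 advertises the new mode to userspace). It is not part of this patch; the fallback value used for MPOL_PREFERRED_MANY, the use of libnuma's set_mempolicy() wrapper from <numaif.h> (link with -lnuma), and the two-node preferred mask are illustrative assumptions. Verify the constant against <linux/mempolicy.h> and the node IDs against your topology before relying on them.

/*
 * Hypothetical userspace sketch, not part of the patch: select the new
 * MPOL_PREFERRED_MANY task policy via set_mempolicy(2). Assumes a kernel
 * with this series applied and a machine with at least two NUMA nodes.
 * Build with: gcc demo.c -lnuma
 */
#include <numaif.h>	/* libnuma wrapper for set_mempolicy() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed uapi value; check <linux/mempolicy.h> */
#endif

int main(void)
{
	/* Prefer nodes 0 and 1; the kernel may fall back to any node if both are short on memory. */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);

	if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask, sizeof(nodemask) * 8)) {
		perror("set_mempolicy(MPOL_PREFERRED_MANY)");
		return EXIT_FAILURE;
	}

	/* Fault the pages in so the allocation actually happens under the new policy. */
	size_t len = 64UL << 20;
	char *buf = malloc(len);
	if (!buf)
		return EXIT_FAILURE;
	memset(buf, 0, len);

	printf("64 MiB touched with MPOL_PREFERRED_MANY (preferred nodes 0-1)\n");
	free(buf);
	return EXIT_SUCCESS;
}

The behavioral difference from plain MPOL_PREFERRED is visible in alloc_pages_preferred_many() in the diff above: the first pass is restricted to pol->nodes with __GFP_DIRECT_RECLAIM and __GFP_NOFAIL cleared and __GFP_NOWARN set, so it can fail quickly, and only then does the second pass retry from the local node with the normal full-system fallback.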
Thread overview: 21+ messages (in thread)

2021-08-03  5:59 [PATCH v7 0/5] Introduce multi-preference mempolicy Feng Tang
2021-08-03  5:59 ` [PATCH v7 1/5] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes Feng Tang
2021-08-06 13:27 ` Michal Hocko
2021-08-06 13:28 ` Michal Hocko
2021-08-03  5:59 ` Feng Tang [this message]
2021-08-06 13:29 ` [PATCH v7 2/5] mm/memplicy: add page allocation function for MPOL_PREFERRED_MANY policy Michal Hocko
2021-08-03  5:59 ` [PATCH v7 3/5] mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY Feng Tang
2021-08-06 13:35 ` Michal Hocko
2021-08-09  2:44 ` Feng Tang
2021-08-09  8:41 ` Michal Hocko
2021-08-09 12:37 ` Feng Tang
2021-08-09 13:19 ` Michal Hocko
2021-08-10  8:50 ` Feng Tang
2021-08-10 21:35 ` Hugh Dickins
2021-08-11  1:37 ` Feng Tang
2021-08-10 20:06 ` [PATCH] mm/hugetlb: Initialize page to NULL in alloc_buddy_huge_page_with_mpol() Nathan Chancellor
2021-08-11  1:21 ` Feng Tang
2021-08-03  5:59 ` [PATCH v7 4/5] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY Feng Tang
2021-08-03  5:59 ` [PATCH v7 5/5] mm/mempolicy: unify the create() func for bind/interleave/prefer-many policies Feng Tang
2021-12-01  3:09 ` [PATCH v7 0/5] Introduce multi-preference mempolicy Gang Li
2021-12-01  5:33 ` Feng Tang