LKML Archive on lore.kernel.org
From: Andrew Morton <akpm@linux-foundation.org>
To: yanghui <yanghui.def@bytedance.com>
Cc: willy@infradead.org, songmuchun@bytedance.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm/mempolicy: fix a race between offset_il_node and mpol_rebind_task
Date: Mon, 16 Aug 2021 17:59:52 -0700
Message-ID: <20210816175952.3c0d1eee821cd2d9ed7c3879@linux-foundation.org>
In-Reply-To: <20210815061034.84309-1-yanghui.def@bytedance.com>

On Sun, 15 Aug 2021 14:10:34 +0800 yanghui <yanghui.def@bytedance.com> wrote:

> The following panic happened on our servers:
> Kernel version: 5.4.56
> BUG: unable to handle page fault for address: 0000000000002c48
> RIP: 0010:__next_zones_zonelist+0x1d/0x40
> [264003.977696] RAX: 0000000000002c40 RBX: 0000000000100dca RCX: 0000000000000014
> [264003.977872] Call Trace:
> [264003.977888]  __alloc_pages_nodemask+0x277/0x310
> [264003.977908]  alloc_page_interleave+0x13/0x70
> [264003.977926]  handle_mm_fault+0xf99/0x1390
> [264003.977951]  __do_page_fault+0x288/0x500
> [264003.977979]  ? schedule+0x39/0xa0
> [264003.977994]  do_page_fault+0x30/0x110
> [264003.978010]  page_fault+0x3e/0x50
> 
> The reason for the panic is that MAX_NUMNODES is passed as the third
> parameter (preferred_nid) to __alloc_pages_nodemask(). The panic then
> occurs when __next_zones_zonelist() accesses zonelist->zoneref->zone_idx.
> 
> In offset_il_node(), first_node() returns a nid from pol->v.nodes; after
> that, other threads may change pol->v.nodes before next_node() runs.
> This race condition can make next_node() return MAX_NUMNODES, so put
> pol->nodes in a local variable.
> 
> The race condition is between offset_il_node and cpuset_change_task_nodemask:
> CPU0:                                     CPU1:
> alloc_pages_vma()
>   interleave_nid(pol,)
>     offset_il_node(pol,)
>       first_node(pol->v.nodes)            cpuset_change_task_nodemask
>                       //nodes==0xc          mpol_rebind_task
>                                               mpol_rebind_policy
>                                                 mpol_rebind_nodemask(pol,nodes)
>                       //nodes==0x3
>       next_node(nid, pol->v.nodes)//return MAX_NUMNODES
> 
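For concreteness, here is a minimal user-space sketch of the sequence above.
It is not kernel code; it assumes a 64-node limit so that a single unsigned
long can stand in for pol->v.nodes, and demo_first_node()/demo_next_node()
are stand-ins for the real first_node()/next_node() helpers:

#include <stdio.h>

#define MAX_NUMNODES 64

/* Mimics first_node(): first set bit, or MAX_NUMNODES if the mask is empty. */
static int demo_first_node(unsigned long mask)
{
	for (int i = 0; i < MAX_NUMNODES; i++)
		if (mask & (1UL << i))
			return i;
	return MAX_NUMNODES;
}

/* Mimics next_node(): next set bit strictly after nid, or MAX_NUMNODES. */
static int demo_next_node(int nid, unsigned long mask)
{
	for (int i = nid + 1; i < MAX_NUMNODES; i++)
		if (mask & (1UL << i))
			return i;
	return MAX_NUMNODES;
}

int main(void)
{
	unsigned long nodes = 0xc;		/* CPU0 reads nodes {2,3} */
	int nid = demo_first_node(nodes);	/* nid == 2 */

	nodes = 0x3;				/* CPU1 rebinds to nodes {0,1} */

	nid = demo_next_node(nid, nodes);	/* no set bit above 2 -> MAX_NUMNODES */
	printf("nid = %d (MAX_NUMNODES = %d)\n", nid, MAX_NUMNODES);
	return 0;
}

With the stale nid and the new, smaller mask, the walk falls off the end of
the bitmap, and MAX_NUMNODES is what ends up being passed to
__alloc_pages_nodemask().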
>
> ...
>
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1965,17 +1965,26 @@ unsigned int mempolicy_slab_node(void)
>   */
>  static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
>  {
> -	unsigned nnodes = nodes_weight(pol->nodes);
> -	unsigned target;
> +	nodemask_t nodemask = pol->nodes;

Ouch.  nodemask_t can be large - up to 128 bytes I think.  This looks
like an expensive thing to be adding to fast paths (alloc_pages_vma()).

Plus it consumes a lot of stack.
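As a rough check of that size, a user-space sketch (assuming the maximum
CONFIG_NODES_SHIFT of 10, i.e. MAX_NUMNODES == 1024, and mirroring the shape
of nodemask_t from include/linux/nodemask.h):

#include <stdio.h>

#define MAX_NUMNODES		(1 << 10)	/* assumes CONFIG_NODES_SHIFT=10 */
#define BITS_PER_LONG		(8 * sizeof(unsigned long))
#define BITS_TO_LONGS(n)	(((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Same shape as the kernel's nodemask_t: a bitmap of MAX_NUMNODES bits. */
typedef struct {
	unsigned long bits[BITS_TO_LONGS(MAX_NUMNODES)];
} demo_nodemask_t;

int main(void)
{
	/* 1024 bits -> 128 bytes copied onto the stack on every call. */
	printf("sizeof(demo_nodemask_t) = %zu bytes\n", sizeof(demo_nodemask_t));
	return 0;
}

Every call to offset_il_node() would copy that whole bitmap onto the stack,
which is what makes this costly on the alloc_pages_vma() fast path.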

> +	unsigned int target, nnodes;
>  	int i;
>  	int nid;
> +	/*
> +	 * The barrier will stabilize the nodemask in a register or on
> +	 * the stack so that it will stop changing under the code.
> +	 *
> +	 * Between first_node() and next_node(), pol->nodes could be changed
> +	 * by other threads. So we put pol->nodes in a local stack.
> +	 */
> +	barrier();
>  
> +	nnodes = nodes_weight(nodemask);
>  	if (!nnodes)
>  		return numa_node_id();
>  	target = (unsigned int)n % nnodes;
> -	nid = first_node(pol->nodes);
> +	nid = first_node(nodemask);
>  	for (i = 0; i < target; i++)
> -		nid = next_node(nid, pol->nodes);
> +		nid = next_node(nid, nodemask);
>  	return nid;
>  }

The whole idea seems a bit hacky and fragile to me.  We're dealing with
a potentially stale copy of the nodemask, yes?

Ordinarily this would be troublesome because working off stale data
could cause other problems, and a better fix would be to simply avoid
using stale data!

But I guess that if the worst case is that, once in a billion times,
interleaving hands out a page which isn't on the intended node, then we
can live with that.

And if this guess is correct, and this is indeed the worst case, can we
please spell all this out in the changelog.
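
To make that guessed worst case concrete, a toy user-space illustration
(again assuming a small node limit): a nid chosen from a private snapshot of
the mask is always a node that was valid when the snapshot was taken, but it
may no longer be in the policy's current mask, so interleaving hands out a
page from the wrong node rather than crashing:

#include <stdio.h>

int main(void)
{
	unsigned long snapshot_mask = 0xc;	/* mask copied before the rebind: nodes {2,3} */
	unsigned long live_mask     = 0x3;	/* mask after the rebind: nodes {0,1} */
	int nid = 2;				/* a node chosen from the snapshot */

	/* The chosen node is real and was valid, just no longer the intended one. */
	printf("node %d still in the live mask? %s\n",
	       nid, (live_mask & (1UL << nid)) ? "yes" : "no");
	return 0;
}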


Thread overview: 6+ messages
2021-08-15  6:10 yanghui
2021-08-17  0:59 ` Andrew Morton [this message]
2021-08-17  1:43   ` Matthew Wilcox
2021-08-18 14:02     ` Muchun Song
2021-08-18 15:07       ` Matthew Wilcox
2021-08-19  2:04         ` Muchun Song
