Linux-Fsdevel Archive on lore.kernel.org
From: Junxiao Bi <junxiao.bi@oracle.com>
To: "Eric W. Biederman" <ebiederm@xmission.com>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	Matthew Wilcox <matthew.wilcox@oracle.com>,
	Srinivas Eeda <SRINIVAS.EEDA@oracle.com>,
	"joe.jin@oracle.com" <joe.jin@oracle.com>
Subject: Re: severe proc dentry lock contention
Date: Thu, 18 Jun 2020 17:27:43 -0700	[thread overview]
Message-ID: <2cf6af59-e86b-f6cc-06d3-84309425bd1d@oracle.com> (raw)
In-Reply-To: <877dw3apn8.fsf@x220.int.ebiederm.org>

On 6/18/20 5:02 PM, ebiederm@xmission.com wrote:

> Matthew Wilcox <willy@infradead.org> writes:
>
>> On Thu, Jun 18, 2020 at 03:17:33PM -0700, Junxiao Bi wrote:
>>> While debugging a performance issue, I found that thousands of threads
>>> exiting around the same time can cause severe spinlock contention on the
>>> proc dentry "/proc/$parent_process_pid/task/". That's because each thread
>>> needs to remove its pid file from that directory when it exits. See the
>>> standalone test case below, which simulates the situation, and the perf
>>> top results on a v5.7 kernel. Any idea on how to fix this?
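
My test case boils down to something like the sketch below (a simplified
stand-in, not the exact program from my first mail): start thousands of
threads in one process, rendezvous on a barrier, and let them all exit
together while watching perf top. NR_THREADS and the barrier are just
illustrative choices.

/* Build with: gcc -pthread repro.c -o repro  (hypothetical reproducer) */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_THREADS 4096

static pthread_barrier_t barrier;

static void *worker(void *arg)
{
	(void)arg;
	/* Rendezvous so every thread reaches exit at about the same time,
	 * which makes them all prune /proc/$pid/task/$tid in parallel. */
	pthread_barrier_wait(&barrier);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];
	int i;

	pthread_barrier_init(&barrier, NULL, NR_THREADS);
	for (i = 0; i < NR_THREADS; i++) {
		if (pthread_create(&tid[i], NULL, worker, NULL)) {
			perror("pthread_create");
			exit(1);
		}
	}
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}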
>> Thanks, Junxiao.
>>
>> We've looked at a few different ways of fixing this problem.
>>
>> Even though the contention is within the dcache, it seems like a use case
>> that the dcache shouldn't be optimised for -- generally we do not have
>> hundreds of CPUs removing dentries from a single directory in parallel.
>>
>> We could fix this within procfs.  We don't have a great patch yet, but
>> the current approach we're looking at allows only one thread at a time
>> to call dput() on any /proc/*/task directory.
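
If I follow that idea, a minimal sketch would look something like the
following -- the helper name and where it would hook in are my guesses
for illustration, not the actual patch:

#include <linux/dcache.h>
#include <linux/mutex.h>

/* Hypothetical: serialize the final dput() on /proc/<pid>/task dentries
 * so one CPU at a time walks the teardown path, instead of hundreds
 * spinning on the parent directory's d_lock. */
static DEFINE_MUTEX(proc_task_dput_mutex);

static void proc_task_dput(struct dentry *dentry)
{
	mutex_lock(&proc_task_dput_mutex);
	dput(dentry);
	mutex_unlock(&proc_task_dput_mutex);
}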
>>
>> We could also look at fixing this within the scheduler.  Only allowing
>> one CPU to run the threads of an exiting process would fix this particular
>> problem, but might have other consequences.
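
For experiments, something close to that can be approximated from
userspace by pinning each thread to a single CPU before it exits; a
hypothetical snippet (pin_self_to_cpu0() is made up for illustration):

#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling thread to CPU 0 so the exit paths of all threads run
 * serially on one CPU. Experimentation aid only; the real proposal
 * would live in the scheduler. */
static void pin_self_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);	/* pid 0 == calling thread */
}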
>>
>> I was hoping that 7bc3e6e55acf would fix this, but that patch is in 5.7,
>> so that hope is ruled out.
> Does anyone know if this problem is new in v5.7?  I am wondering if I
> introduced it when I refactored the code, or if I simply churned the code
> and the issue remains effectively the same.
It's not a new issue; we have seen it on older kernels such as v4.14.
>
> Can you try only flushing entries when the last thread of the process is
> reaped?  I think in practice we would want to be a little more
> sophisticated, but it is a good test case to see if it solves the issue.

Thank you. I will try it and let you know.

Thanks,

Junxiao.

>
> diff --git a/kernel/exit.c b/kernel/exit.c
> index cebae77a9664..d56e4eb60bdd 100644
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -152,7 +152,7 @@ void put_task_struct_rcu_user(struct task_struct *task)
>   void release_task(struct task_struct *p)
>   {
>   	struct task_struct *leader;
> -	struct pid *thread_pid;
> +	struct pid *thread_pid = NULL;
>   	int zap_leader;
>   repeat:
>   	/* don't need to get the RCU readlock here - the process is dead and
> @@ -165,7 +165,8 @@ void release_task(struct task_struct *p)
>   
>   	write_lock_irq(&tasklist_lock);
>   	ptrace_release_task(p);
> -	thread_pid = get_pid(p->thread_pid);
> +	if (p == p->group_leader)
> +		thread_pid = get_pid(p->thread_pid);
>   	__exit_signal(p);
>   
>   	/*
> @@ -188,8 +189,10 @@ void release_task(struct task_struct *p)
>   	}
>   
>   	write_unlock_irq(&tasklist_lock);
> -	proc_flush_pid(thread_pid);
> -	put_pid(thread_pid);
> +	if (thread_pid) {
> +		proc_flush_pid(thread_pid);
> +		put_pid(thread_pid);
> +	}
>   	release_thread(p);
>   	put_task_struct_rcu_user(p);
>   
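
As I read the patch, proc_flush_pid()/put_pid() now run only when the
group leader is reaped, so the proc entries are flushed once per process
instead of once per exiting thread, and the per-thread entries are left
to normal dcache reclaim.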


Thread overview: 20+ messages
2020-06-18 22:17 Junxiao Bi
2020-06-18 23:39 ` Matthew Wilcox
2020-06-19  0:02   ` Eric W. Biederman
2020-06-19  0:27     ` Junxiao Bi [this message]
2020-06-19  3:30       ` Eric W. Biederman
2020-06-19 14:09       ` [PATCH] proc: Avoid a thundering herd of threads freeing proc dentries Eric W. Biederman
2020-06-19 15:56         ` Junxiao Bi
2020-06-19 17:24           ` Eric W. Biederman
2020-06-19 21:56             ` Junxiao Bi
2020-06-19 22:42               ` Eric W. Biederman
2020-06-20 16:27                 ` Matthew Wilcox
2020-06-22  5:15                   ` Junxiao Bi
2020-06-22 15:20                     ` Eric W. Biederman
2020-06-22 15:48                       ` willy
2020-08-17 12:19                         ` Eric W. Biederman
2020-06-22 17:16                       ` Junxiao Bi
2020-06-23  0:47                     ` Matthew Wilcox
2020-06-25 22:11                       ` Junxiao Bi
2020-06-22  5:33         ` Masahiro Yamada
2020-06-22 15:13           ` Eric W. Biederman
