Linux-Fsdevel Archive on lore.kernel.org
From: Matthew Wilcox <willy@infradead.org>
To: Junxiao Bi <junxiao.bi@oracle.com>
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	Matthew Wilcox <matthew.wilcox@oracle.com>,
	Srinivas Eeda <SRINIVAS.EEDA@oracle.com>,
	"joe.jin@oracle.com" <joe.jin@oracle.com>,
	"Eric W. Biederman" <ebiederm@xmission.com>
Subject: Re: severe proc dentry lock contention
Date: Thu, 18 Jun 2020 16:39:58 -0700	[thread overview]
Message-ID: <20200618233958.GV8681@bombadil.infradead.org> (raw)
In-Reply-To: <54091fc0-ca46-2186-97a8-d1f3c4f3877b@oracle.com>

On Thu, Jun 18, 2020 at 03:17:33PM -0700, Junxiao Bi wrote:
> When debugging a performance issue, I found that thousands of threads
> exiting at around the same time can cause severe spinlock contention on
> the proc dentry "/proc/$parent_process_pid/task/", because each thread
> has to remove its pid file from that directory when it exits.  The
> standalone test case below simulates the problem; the perf top output is
> from a v5.7 kernel.  Any idea how to fix this?

Thanks, Junxiao.

We've looked at a few different ways of fixing this problem.

Even though the contention is within the dcache, it seems like a use
case that the dcache shouldn't be optimised for -- generally we do not
have hundreds of CPUs removing dentries from a single directory in
parallel.

We could fix this within procfs.  We don't have a great patch yet, but
the current approach we're looking at allows only one thread at a time
to call dput() on any /proc/*/task directory.
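
To make that concrete, a minimal sketch of the direction we're looking
at might be something like the code below.  proc_task_dput() and
proc_task_dput_lock are hypothetical names, not existing kernel
symbols, and a real patch would probably want a per-directory lock
rather than a single global mutex:

#include <linux/dcache.h>
#include <linux/mutex.h>

/*
 * Hypothetical sketch only: serialise the final dput() calls that drop
 * dentries under /proc/$pid/task/ so that at most one CPU at a time
 * takes the parent directory's d_lock.
 */
static DEFINE_MUTEX(proc_task_dput_lock);

static void proc_task_dput(struct dentry *dentry)
{
	mutex_lock(&proc_task_dput_lock);
	dput(dentry);
	mutex_unlock(&proc_task_dput_lock);
}

That obviously trades exit-time parallelism for less d_lock contention;
whether it is a net win is something we'd still have to measure.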

We could also look at fixing this within the scheduler.  Only allowing
one CPU to run the threads of an exiting process would fix this particular
problem, but might have other consequences.
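
A crude way to gauge whether that would help is to approximate it from
userspace: pin the whole reproducer (and therefore every thread it
creates, since affinity is inherited) to one CPU and see whether the
contention disappears.  This is only an illustration of the idea, not
the scheduler change itself, and it serialises far more than just the
exit path; the helper below would be added to the test program:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/*
 * Crude userspace check, not a kernel change: pin the calling thread
 * (and, via inheritance, all threads created afterwards) to CPU 0.
 * If the d_lock contention vanishes when the test runs this way, that
 * lends weight to the "one CPU runs the exiting threads" idea.
 */
static void pin_process_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	if (sched_setaffinity(0, sizeof(set), &set) != 0)
		perror("sched_setaffinity");	/* best effort */
}

Calling this at the top of main() in the reproducer (or simply running
it under taskset) is enough for a quick comparison against the profile
below.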

I was hoping that commit 7bc3e6e55acf would fix this, but that commit
is already in 5.7, which is the kernel you tested, so that hope is
ruled out.

> 
>    PerfTop:   48891 irqs/sec  kernel:95.6%  exact: 100.0% lost: 0/0 drop: 0/0 [4000Hz cycles],  (all, 72 CPUs)
> ----------------------------------------------------------------------------------------------------
> 
>     66.10%  [kernel]                               [k] native_queued_spin_lock_slowpath
>      1.13%  [kernel]                               [k] _raw_spin_lock
>      0.84%  [kernel]                               [k] clear_page_erms
>      0.82%  [kernel]                               [k] queued_write_lock_slowpath
>      0.64%  [kernel]                               [k] proc_task_readdir
>      0.61%  [kernel]                               [k] find_idlest_group.isra.95
>      0.61%  [kernel]                               [k] syscall_return_via_sysret
>      0.55%  [kernel]                               [k] entry_SYSCALL_64
>      0.49%  [kernel]                               [k] memcpy_erms
>      0.46%  [kernel]                               [k] update_cfs_group
>      0.41%  [kernel]                               [k] get_pid_task
>      0.39%  [kernel]                               [k] _raw_spin_lock_irqsave
>      0.37%  [kernel]                               [k] __list_del_entry_valid
>      0.34%  [kernel]                               [k] get_page_from_freelist
>      0.34%  [kernel]                               [k] __d_lookup
>      0.32%  [kernel]                               [k] update_load_avg
>      0.31%  libc-2.17.so                           [.] get_next_seq
>      0.27%  [kernel]                               [k] avc_has_perm_noaudit
>      0.26%  [kernel]                               [k] __sched_text_start
>      0.25%  [kernel]                               [k] selinux_inode_permission
>      0.25%  [kernel]                               [k] __slab_free
>      0.24%  [kernel]                               [k] detach_entity_cfs_rq
>      0.23%  [kernel]                               [k] zap_pte_range
>      0.22%  [kernel]                               [k] _find_next_bit.constprop.1
>      0.22%  libc-2.17.so                           [.] vfprintf
>      0.20%  libc-2.17.so                           [.] _int_malloc
>      0.19%  [kernel]                               [k] _raw_spin_lock_irq
>      0.18%  [kernel]                               [k] rb_erase
>      0.18%  [kernel]                               [k] pid_revalidate
>      0.18%  [kernel]                               [k] lockref_get_not_dead
>      0.18%  [kernel]                               [k] __alloc_pages_nodemask
>      0.17%  [kernel]                               [k] set_task_cpu
>      0.17%  libc-2.17.so                           [.] __strcoll_l
>      0.17%  [kernel]                               [k] do_syscall_64
>      0.17%  [kernel]                               [k] __vmalloc_node_range
>      0.17%  libc-2.17.so                           [.] _IO_vfscanf
>      0.17%  [kernel]                               [k] refcount_dec_not_one
>      0.15%  [kernel]                               [k] __task_pid_nr_ns
>      0.15%  [kernel]                               [k] native_irq_return_iret
>      0.15%  [kernel]                               [k] free_pcppages_bulk
>      0.14%  [kernel]                               [k] kmem_cache_alloc
>      0.14%  [kernel]                               [k] link_path_walk
>      0.14%  libc-2.17.so                           [.] _int_free
>      0.14%  [kernel]                               [k] __update_load_avg_cfs_rq
>      0.14%  perf.5.7.0-master.20200601.ol7.x86_64  [.] 0x00000000000eac29
>      0.13%  [kernel]                               [k] kmem_cache_free
>      0.13%  [kernel]                               [k] number
>      0.13%  [kernel]                               [k] memset_erms
>      0.12%  [kernel]                               [k] proc_pid_status
>      0.12%  [kernel]                               [k] __d_lookup_rcu
> 
> 
> =========== runme.sh ==========
> 
> #!/bin/bash
> 
> threads=${1:-10000}
> prog=proc_race
> 
> # Keep restarting the reproducer so the mass thread exit happens repeatedly.
> while true; do ./$prog $threads; done &
> 
> while true; do
>     # head -1 guards against matching more than one pid; quoting keeps the
>     # -z test well-formed when nothing matches.
>     pid=`ps aux | grep $prog | grep -v grep | awk '{print $2}' | head -1`
>     if [ -z "$pid" ]; then continue; fi
>     threadnum=`ls -l /proc/$pid/task | wc -l`
>     if [ "$threadnum" -gt "$threads" ]; then
>         echo kill $pid
>         kill -9 $pid
>     fi
> done
> 
> 
> ===========proc_race.c=========
> 
> 
> #include <pthread.h>
> #include <string.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> #include <errno.h>
> #include <ctype.h>
> 
> #define handle_error_en(en, msg) \
>     do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)
> 
> #define handle_error(msg) \
>     do { perror(msg); exit(EXIT_FAILURE); } while (0)
> 
> struct thread_info {
>     pthread_t thread_id;
>     int       thread_num;
> };
> 
> /*
>  * Each thread just spins; when the process is killed, all the threads
>  * exit at the same time and each one has to drop its
>  * /proc/$pid/task/$tid dentry.
>  */
> static void *child_thread(void *arg)
> {
>     int i = 0;
> 
>     while (1) { if (!(i++ % 1000000)) sleep(1); }
>     return NULL;
> }
> 
> int main(int argc, char *argv[])
> {
>     int s, tnum, num_threads;
>     struct thread_info *tinfo;
>     void *res;
> 
>     if (argc == 2)
>         num_threads = atoi(argv[1]);
>     else
>         num_threads = 10000;
> 
>     tinfo = calloc(num_threads, sizeof(struct thread_info));
>     if (tinfo == NULL)
>         handle_error("calloc");
> 
>     for (tnum = 0; tnum < num_threads; tnum++) {
>         tinfo[tnum].thread_num = tnum + 1;
> 
>         s = pthread_create(&tinfo[tnum].thread_id, NULL,
>                 child_thread, NULL);
>         if (s != 0)
>             handle_error_en(s, "pthread_create");
>     }
> 
>     /* The threads never return; this blocks until the process is killed. */
>     for (tnum = 0; tnum < num_threads; tnum++) {
>         s = pthread_join(tinfo[tnum].thread_id, &res);
>         if (s != 0)
>             handle_error_en(s, "pthread_join");
> 
>         free(res);
>     }
> 
>     free(tinfo);
>     exit(EXIT_SUCCESS);
> }
> 
> ==========
> 
> Thanks,
> 
> Junxiao.
> 

Thread overview: 20+ messages
2020-06-18 22:17 Junxiao Bi
2020-06-18 23:39 ` Matthew Wilcox [this message]
2020-06-19  0:02   ` Eric W. Biederman
2020-06-19  0:27     ` Junxiao Bi
2020-06-19  3:30       ` Eric W. Biederman
2020-06-19 14:09       ` [PATCH] proc: Avoid a thundering herd of threads freeing proc dentries Eric W. Biederman
2020-06-19 15:56         ` Junxiao Bi
2020-06-19 17:24           ` Eric W. Biederman
2020-06-19 21:56             ` Junxiao Bi
2020-06-19 22:42               ` Eric W. Biederman
2020-06-20 16:27                 ` Matthew Wilcox
2020-06-22  5:15                   ` Junxiao Bi
2020-06-22 15:20                     ` Eric W. Biederman
2020-06-22 15:48                       ` willy
2020-08-17 12:19                         ` Eric W. Biederman
2020-06-22 17:16                       ` Junxiao Bi
2020-06-23  0:47                     ` Matthew Wilcox
2020-06-25 22:11                       ` Junxiao Bi
2020-06-22  5:33         ` Masahiro Yamada
2020-06-22 15:13           ` Eric W. Biederman
