LKML Archive on lore.kernel.org
From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Riccardo Mancini <rickyman7@gmail.com>
Cc: Arnaldo Carvalho de Melo <arnaldo.melo@gmail.com>,
	Ian Rogers <irogers@google.com>,
	Namhyung Kim <namhyung@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Mark Rutland <mark.rutland@arm.com>, Jiri Olsa <jolsa@redhat.com>,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: [RFC PATCH v1 25/37] perf evsel: move event open in evsel__open_cpu to separate function
Date: Sat, 11 Sep 2021 16:10:20 -0300	[thread overview]
Message-ID: <YTz/HBuosvqOkvYE@kernel.org> (raw)
In-Reply-To: <9506b14fe2965e4145c034715eb10e02f2137f7b.camel@gmail.com>

On Fri, Sep 03, 2021 at 11:52:18PM +0200, Riccardo Mancini wrote:
> Hi Arnaldo,
> 
> thanks for your review and your suggestions, and also for the PRIu64 patch.
> 
> On Tue, 2021-08-31 at 16:54 -0300, Arnaldo Carvalho de Melo wrote:
> > On Sat, Aug 21, 2021 at 11:19:31AM +0200, Riccardo Mancini wrote:
> > > This is the final patch splitting evsel__open_cpu.
> > > This patch moves the entire loop code to a separate function, to be
> > > reused for the multithreaded code.
> > 
> > Are you going to use that 'enum perf_event_open_err' somewhere else?
> > I.e. is there a need to expose it in evsel.h?
> 
> Yes, in the next patch (26/37). It's being used to expose a function that just
> does the perf_event_open calls for an evsel. It needs to return such structure
> to provide information about the error (which return code, at which thread).
> 
> > 
> > I'm stopping at this patch to give the ones I merged so far some
> > testing, will now push it to tmp.perf/core.
> 
> I checked tmp.perf/core and it looks good to me.
> I also did some additional tests to check that the fallback mechanisms were
> working:
> 
> check that a missing pid is ignored (rerun until the warning is shown):
> $ sudo ./perf bench internals evlist-open-close -i10 -u $UID
> 
> check that the weak group fallback is working:
> $ sudo ./perf record -e '{cycles,cache-misses,cache-references,cpu_clk_unhalted.thread,cycles,cycles,cycles}:W'
> 
> check that precision_ip fallback is working:
> edited perf-sys.h to make sys_perf_event_open fail if precision_ip > 2
> $ sudo ./perf record -e '{cycles,cs}:P'
> 
> 
> I've also run perf-test on my machine and it's passing too.
> I'm encountering one failure in the "BPF filter" test (42), which is also
> present in perf/core, so it should not be related to this patch.

Thanks! I'll try to resume work on it as soon as I have the plumbers
talk ready :-)

- Arnaldo
 
> Thanks,
> Riccardo
> 
> > 
> > - Arnaldo
> >  
> > > Signed-off-by: Riccardo Mancini <rickyman7@gmail.com>
> > > ---
> > >  tools/perf/util/evsel.c | 142 ++++++++++++++++++++++++----------------
> > >  tools/perf/util/evsel.h |  12 ++++
> > >  2 files changed, 99 insertions(+), 55 deletions(-)
> > > 
> > > diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> > > index 2e95416b8320c6b9..e41f55a7a70ea630 100644
> > > --- a/tools/perf/util/evsel.c
> > > +++ b/tools/perf/util/evsel.c
> > > @@ -1945,6 +1945,82 @@ bool evsel__increase_rlimit(enum rlimit_action *set_rlimit)
> > >         return false;
> > >  }
> > >  
> > > +static struct perf_event_open_result perf_event_open(struct evsel *evsel,
> > > +                                       pid_t pid, int cpu, int thread,
> > > +                                       struct perf_cpu_map *cpus,
> > > +                                       struct perf_thread_map *threads)
> > > +{
> > > +       int fd, group_fd, rc;
> > > +       struct perf_event_open_result res;
> > > +
> > > +       if (!evsel->cgrp && !evsel->core.system_wide)
> > > +               pid = perf_thread_map__pid(threads, thread);
> > > +
> > > +       group_fd = get_group_fd(evsel, cpu, thread);
> > > +
> > > +       test_attr__ready();
> > > +
> > > +       pr_debug2_peo("sys_perf_event_open: pid %d  cpu %d  group_fd %d  flags %#lx",
> > > +                       pid, cpus->map[cpu], group_fd, evsel->open_flags);
> > > +
> > > +       fd = sys_perf_event_open(&evsel->core.attr, pid, cpus->map[cpu],
> > > +                               group_fd, evsel->open_flags);
> > > +
> > > +       FD(evsel, cpu, thread) = fd;
> > > +       res.fd = fd;
> > > +
> > > +       if (fd < 0) {
> > > +               rc = -errno;
> > > +
> > > +               pr_debug2_peo("\nsys_perf_event_open failed, error %d\n",
> > > +                               rc);
> > > +               res.rc = rc;
> > > +               res.err = PEO_FALLBACK;
> > > +               return res;
> > > +       }
> > > +
> > > +       bpf_counter__install_pe(evsel, cpu, fd);
> > > +
> > > +       if (unlikely(test_attr__enabled)) {
> > > +               test_attr__open(&evsel->core.attr, pid,
> > > +                       cpus->map[cpu], fd,
> > > +                       group_fd, evsel->open_flags);
> > > +       }
> > > +
> > > +       pr_debug2_peo(" = %d\n", fd);
> > > +
> > > +       if (evsel->bpf_fd >= 0) {
> > > +               int evt_fd = fd;
> > > +               int bpf_fd = evsel->bpf_fd;
> > > +
> > > +               rc = ioctl(evt_fd,
> > > +                               PERF_EVENT_IOC_SET_BPF,
> > > +                               bpf_fd);
> > > +               if (rc && errno != EEXIST) {
> > > +                       pr_err("failed to attach bpf fd %d: %s\n",
> > > +                               bpf_fd, strerror(errno));
> > > +                       res.rc = -EINVAL;
> > > +                       res.err = PEO_ERROR;
> > > +                       return res;
> > > +               }
> > > +       }
> > > +
> > > +       /*
> > > +        * If we succeeded but had to kill clockid, fail and
> > > +        * have evsel__open_strerror() print us a nice error.
> > > +        */
> > > +       if (perf_missing_features.clockid ||
> > > +               perf_missing_features.clockid_wrong) {
> > > +               res.rc = -EINVAL;
> > > +               res.err = PEO_ERROR;
> > > +               return res;
> > > +       }
> > > +
> > > +       res.rc = 0;
> > > +       res.err = PEO_SUCCESS;
> > > +       return res;
> > > +}
> > > +
> > >  static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
> > >                 struct perf_thread_map *threads,
> > >                 int start_cpu, int end_cpu)
> > > @@ -1952,6 +2028,7 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
> > >         int cpu, thread, nthreads;
> > >         int pid = -1, err, old_errno;
> > >         enum rlimit_action set_rlimit = NO_CHANGE;
> > > +       struct perf_event_open_result peo_res;
> > >  
> > >         err = __evsel__prepare_open(evsel, cpus, threads);
> > >         if (err)
> > > @@ -1979,67 +2056,22 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
> > >         for (cpu = start_cpu; cpu < end_cpu; cpu++) {
> > >  
> > >                 for (thread = 0; thread < nthreads; thread++) {
> > > -                       int fd, group_fd;
> > >  retry_open:
> > >                         if (thread >= nthreads)
> > >                                 break;
> > >  
> > > -                       if (!evsel->cgrp && !evsel->core.system_wide)
> > > -                               pid = perf_thread_map__pid(threads, thread);
> > > -
> > > -                       group_fd = get_group_fd(evsel, cpu, thread);
> > > -
> > > -                       test_attr__ready();
> > > -
> > > -                       pr_debug2_peo("sys_perf_event_open: pid %d  cpu %d  group_fd %d  flags %#lx",
> > > -                               pid, cpus->map[cpu], group_fd, evsel->open_flags);
> > > +                       peo_res = perf_event_open(evsel, pid, cpu, thread, cpus,
> > > +                                               threads);
> > >  
> > > -                       fd = sys_perf_event_open(&evsel->core.attr, pid, cpus->map[cpu],
> > > -                                               group_fd, evsel->open_flags);
> > > -
> > > -                       FD(evsel, cpu, thread) = fd;
> > > -
> > > -                       if (fd < 0) {
> > > -                               err = -errno;
> > > -
> > > -                               pr_debug2_peo("\nsys_perf_event_open failed, error %d\n",
> > > -                                         err);
> > > +                       err = peo_res.rc;
> > > +                       switch (peo_res.err) {
> > > +                       case PEO_SUCCESS:
> > > +                               set_rlimit = NO_CHANGE;
> > > +                               continue;
> > > +                       case PEO_FALLBACK:
> > >                                 goto try_fallback;
> > > -                       }
> > > -
> > > -                       bpf_counter__install_pe(evsel, cpu, fd);
> > > -
> > > -                       if (unlikely(test_attr__enabled)) {
> > > -                               test_attr__open(&evsel->core.attr, pid, cpus->map[cpu],
> > > -                                               fd, group_fd, evsel->open_flags);
> > > -                       }
> > > -
> > > -                       pr_debug2_peo(" = %d\n", fd);
> > > -
> > > -                       if (evsel->bpf_fd >= 0) {
> > > -                               int evt_fd = fd;
> > > -                               int bpf_fd = evsel->bpf_fd;
> > > -
> > > -                               err = ioctl(evt_fd,
> > > -                                           PERF_EVENT_IOC_SET_BPF,
> > > -                                           bpf_fd);
> > > -                               if (err && errno != EEXIST) {
> > > -                                       pr_err("failed to attach bpf fd %d: %s\n",
> > > -                                              bpf_fd, strerror(errno));
> > > -                                       err = -EINVAL;
> > > -                                       goto out_close;
> > > -                               }
> > > -                       }
> > > -
> > > -                       set_rlimit = NO_CHANGE;
> > > -
> > > -                       /*
> > > -                        * If we succeeded but had to kill clockid, fail and
> > > -                        * have evsel__open_strerror() print us a nice error.
> > > -                        */
> > > -                       if (perf_missing_features.clockid ||
> > > -                           perf_missing_features.clockid_wrong) {
> > > -                               err = -EINVAL;
> > > +                       default:
> > > +                       case PEO_ERROR:
> > >                                 goto out_close;
> > >                         }
> > >                 }
> > > diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
> > > index 0a245afab2d87d74..8c9827a93ac001a7 100644
> > > --- a/tools/perf/util/evsel.h
> > > +++ b/tools/perf/util/evsel.h
> > > @@ -282,6 +282,18 @@ int evsel__enable(struct evsel *evsel);
> > >  int evsel__disable(struct evsel *evsel);
> > >  int evsel__disable_cpu(struct evsel *evsel, int cpu);
> > >  
> > > +enum perf_event_open_err {
> > > +       PEO_SUCCESS,
> > > +       PEO_FALLBACK,
> > > +       PEO_ERROR
> > > +};
> > > +
> > > +struct perf_event_open_result {
> > > +       enum perf_event_open_err err;
> > > +       int rc;
> > > +       int fd;
> > > +};
> > > +
> > >  int evsel__open_per_cpu(struct evsel *evsel, struct perf_cpu_map *cpus, int cpu);
> > >  int evsel__open_per_thread(struct evsel *evsel, struct perf_thread_map *threads);
> > >  int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
> > > -- 
> > > 2.31.1
> > 
> 

-- 

- Arnaldo

Thread overview: 63+ messages
2021-08-21  9:19 [RFC PATCH v1 00/37] perf: use workqueue for evlist operations Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 01/37] libperf cpumap: improve idx function Riccardo Mancini
2021-08-31 18:46   ` Arnaldo Carvalho de Melo
2021-10-08 14:29   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 02/37] libperf cpumap: improve max function Riccardo Mancini
2021-08-31 18:47   ` Arnaldo Carvalho de Melo
2021-08-31 19:16     ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 03/37] perf evlist: replace evsel__cpu_iter* functions with evsel__find_cpu Riccardo Mancini
2021-10-08 14:38   ` [RFC PATCH v1 03/37] perf evlist: replace evsel__cpu_iter* functions with evsel__find_cpu() Arnaldo Carvalho de Melo
2021-12-11  0:20   ` [RFC PATCH v1 03/37] perf evlist: replace evsel__cpu_iter* functions with evsel__find_cpu Ian Rogers
2021-08-21  9:19 ` [RFC PATCH v1 04/37] perf util: add mmap_cpu_mask__duplicate function Riccardo Mancini
2021-08-31 19:21   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 05/37] perf util/mmap: add missing bitops.h header Riccardo Mancini
2021-08-31 19:22   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 06/37] perf workqueue: add affinities to threadpool Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 07/37] perf workqueue: add support for setting affinities to workers Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 08/37] perf workqueue: add method to execute work on specific CPU Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 09/37] perf python: add workqueue dependency Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 10/37] perf evlist: add multithreading helper Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 11/37] perf evlist: add multithreading to evlist__disable Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 12/37] perf evlist: add multithreading to evlist__enable Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 13/37] perf evlist: add multithreading to evlist__close Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 14/37] perf evsel: remove retry_sample_id goto label Riccardo Mancini
2021-08-31 19:25   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 15/37] perf evsel: separate open preparation from open itself Riccardo Mancini
2021-08-31 19:27   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 16/37] perf evsel: save open flags in evsel Riccardo Mancini
2021-08-31 19:31   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 17/37] perf evsel: separate missing feature disabling from evsel__open_cpu Riccardo Mancini
2021-08-31 19:35   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 18/37] perf evsel: add evsel__prepare_open function Riccardo Mancini
2021-08-31 19:36   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 19/37] perf evsel: separate missing feature detection from evsel__open_cpu Riccardo Mancini
2021-08-31 19:39   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 20/37] perf evsel: separate rlimit increase " Riccardo Mancini
2021-08-31 19:41   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 21/37] perf evsel: move ignore_missing_thread to fallback code Riccardo Mancini
2021-08-31 19:44   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 22/37] perf evsel: move test_attr__open to success path in evsel__open_cpu Riccardo Mancini
2021-08-31 19:47   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 23/37] perf evsel: move bpf_counter__install_pe " Riccardo Mancini
2021-08-31 19:50   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 24/37] perf evsel: handle precise_ip fallback " Riccardo Mancini
2021-08-31 19:52   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 25/37] perf evsel: move event open in evsel__open_cpu to separate function Riccardo Mancini
2021-08-31 19:54   ` Arnaldo Carvalho de Melo
2021-09-03 21:52     ` Riccardo Mancini
2021-09-11 19:10       ` Arnaldo Carvalho de Melo [this message]
2021-08-21  9:19 ` [RFC PATCH v1 26/37] perf evsel: add evsel__open_per_cpu_no_fallback function Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 27/37] perf evlist: add evlist__for_each_entry_from macro Riccardo Mancini
2021-08-31 20:06   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 28/37] perf evlist: add multithreading to evlist__open Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 29/37] perf evlist: add custom fallback " Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 30/37] perf record: use evlist__open_custom Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 31/37] tools lib/subcmd: add OPT_UINTEGER_OPTARG option type Riccardo Mancini
2021-08-31 18:44   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 32/37] perf record: add --threads option Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 33/37] perf record: pin threads to monitored cpus if enough threads available Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 34/37] perf record: apply multithreading in init and fini phases Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 35/37] perf test/evlist-open-close: add multithreading Riccardo Mancini
2021-08-21  9:19 ` [RFC PATCH v1 36/37] perf test/evlist-open-close: use inline func to convert timeval to usec Riccardo Mancini
2021-10-08 14:46   ` Arnaldo Carvalho de Melo
2021-08-21  9:19 ` [RFC PATCH v1 37/37] perf test/evlist-open-close: add detailed output mode Riccardo Mancini
