From: Dario Faggioli <dfaggioli@suse.com>
To: Aaron Lu <aaron.lu@linux.alibaba.com>,
	Aubrey Li <aubrey.intel@gmail.com>
Cc: "Tim Chen" <tim.c.chen@linux.intel.com>,
	"Julien Desfossez" <jdesfossez@digitalocean.com>,
	"Li, Aubrey" <aubrey.li@linux.intel.com>,
	"Subhra Mazumdar" <subhra.mazumdar@oracle.com>,
	"Vineeth Remanan Pillai" <vpillai@digitalocean.com>,
	"Nishanth Aravamudan" <naravamudan@digitalocean.com>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Ingo Molnar" <mingo@kernel.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Paul Turner" <pjt@google.com>,
	"Linus Torvalds" <torvalds@linux-foundation.org>,
	"Linux List Kernel Mailing" <linux-kernel@vger.kernel.org>,
	"Frédéric Weisbecker" <fweisbec@gmail.com>,
	"Kees Cook" <keescook@chromium.org>,
	"Greg Kerr" <kerrnel@google.com>, "Phil Auld" <pauld@redhat.com>,
	"Valentin Schneider" <valentin.schneider@arm.com>,
	"Mel Gorman" <mgorman@techsingularity.net>,
	"Pawan Gupta" <pawan.kumar.gupta@linux.intel.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Dario Faggioli" <dfaggioli@suse.com>
Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3
Date: Tue, 29 Oct 2019 10:11:20 +0100
Message-ID: <277737d6034b3da072d3b0b808d2fa6e110038b0.camel@suse.com>
In-Reply-To: <20190915141402.GA1349@aaronlu>

On Sun, 2019-09-15 at 22:14 +0800, Aaron Lu wrote:
> I'm using the following branch as base which is v5.1.5 based:
> https://github.com/digitalocean/linux-coresched coresched-v3-v5.1.5-
> test
> 
> And I have pushed Tim's branch to:
> https://github.com/aaronlu/linux coresched-v3-v5.1.5-test-tim
> 
> Mine:
> https://github.com/aaronlu/linux coresched-v3-v5.1.5-test-
> core_vruntime
> 
Hello,

As anticipated, I've been trying to follow the development of this
feature and, in the meantime, I have done some benchmarks.

I actually have a lot of data (and am planning for more), so I am
sending a few emails, each one with a subset of the numbers in it,
instead of just one, which would be beyond giant! :-)

I'll put, in this first one, some background and some common
information, e.g., about the benchmarking platform and configurations,
and about how to read and interpret the data that will follow.

It's quite hard to come up with a concise summary, and sometimes it's
even tricky to identify consolidated trends. There are also things that
look weird and, although I double checked my methodology, I can't
exclude that glitches or errors may have occurred. For each of the
benchmarks, I have at least some information about what the
configuration was when it was run, and also some monitoring and perf
data. So, if interested, just ask and we'll see what we can dig out.

And in any case, I have the procedure for running these benchmarks
fairly decently (although not completely) automated. So if we see
things that look really weird, I can rerun (perhaps with a different
configuration, more monitoring, etc).

For each benchmark, I'll "dump" the results, with just some comments
about the things that I find most relevant/interesting. Then, if we
want, we can look at them and analyze them together.
For each experiment, I do have some limited amount of tracing and
debugging information still available, in case it could be useful. And,
as said, I can always rerun.

I can also provide, quite easily, different-looking tables, e.g., a
different set of columns, different baselines, etc. Just ask for what
you think would be the most interesting to see and, most likely, it
will be possible to do it.

Oh, and I'll upload text files whose contents will be identical to the
emails in this space:

  http://xenbits.xen.org/people/dariof/benchmarks/results/linux/core-sched/mmtests/boxes/wayrath/

That's in case the tables render better in a browser than in a MUA.

Thanks and Regards,
Dario
---

Code: 
 1) Linux 5.1.5 (commit id 835365932f0dc25468840753e071c05ad6abc76f)
 2) https://github.com/digitalocean/linux-coresched/tree/vpillai/coresched-v3-v5.1.5-test
 3) https://github.com/aaronlu/linux/tree/coresched-v3-v5.1.5-test-core_vruntime
 4) https://github.com/aaronlu/linux/tree/coresched-v3-v5.1.5-test-tim

Benchmarking suite:
 - MMTests: https://github.com/gormanm/mmtests
 - Tweaked to deal with running benchmarks in VMs. Still working on
   upstreaming that to Mel (a WIP is available here:
   https://github.com/dfaggioli/mmtests/tree/bench-virt ); a sketch of
   a typical invocation is below
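
As said, a run then looks more or less like this (just a sketch: the
config file name, the run id and the comparison call are illustrative,
not necessarily exactly what I used):

  # Hypothetical example of driving one STREAM run with mmtests;
  # config name and run id are placeholders:
  cd mmtests
  ./run-mmtests.sh --config configs/config-workload-stream v-BM-HT
  # Results end up under work/log/<run-id>; runs can then be put
  # side by side with the comparison helper:
  ./bin/compare-mmtests.pl --directory work/log \
      --benchmark stream --names v-BM-HT,BM-csc-HT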

Benchmarking host:
 - CPU: 1 socket, 4 cores, 2 threads
 - RAM: 32 GB
 - distro: openSUSE Tumbleweed
 - HW Bugs Mitigations: fully disabled
 - Filesys: XFS

VMs:
 - vCPUs: either 8 or 4
 - distro: openSUSE Tumbleweed
 - Kernel: 5.1.16
 - HW Bugs Mitigations: fully disabled
 - Filesys: XFS

Benchmarks:
- STREAM         : pure memory benchmark (various kind of mem-ops done
                   in parallel). Parallelism is NR_CPUS/2 tasks
- Kernbench      : builds a kernel, with a varying number of compile
                   jobs. HT is, in general, known to help, as it lets
                   us do "more parallel" builds
- Hackbench      : communication (via pipes, in this case) between
                   groups of processes. As we deal with _groups_ of
                   tasks, we're already in saturation with 1 group,
                   hence we expect the HyperThreading-disabled
                   configurations to suffer
- mutilate       : load generator for memcached, with a high request
                   rate
- netperf-unix   : two communicating tasks. Without any pinning
                   (neither at the host nor at the guest level), we
                   expect HT to play a role. In fact, depending on
                   where the two tasks are scheduled (i.e., whether on
                   two threads of the same core, or not), performance
                   may vary
- sysbenchcpu    : the process-based CPU stressing workload of sysbench
- sysbenchthread : the thread-based CPU stressing workload of sysbench
- sysbench       : the database workload

This is kind of a legend for the columns you will see in the tables.

- v-*   : vanilla, i.e., benchmarks were run on code _without_ any
          core-scheduling patch applied (see 1 in 'Code' section above)
- *BM-* : baremetal, i.e., benchmarks were run on the host, without 
          any VM running or anything
- *VM-* : Virtual Machine, i.e., benchmarks were run inside a VM, with
          the following characteristics:
   - *VM-   : benchmarks were run in a VM with 8 vCPUs. That was the
              only VM running in the system
   - *VM-v4 : benchmarks were run in a VM with 4 vCPUs. That was the
              only VM running in the system
   - *VMx2  : benchmarks were run in a VM with 8 vCPUs, and there was
              another VM running, also with 8 vCPUs, generating about
              600% of CPU, memory and IO stress load
- *-csc-*          : benchmarks were run with Core scheduling v3 patch
                     series (see 2 in 'Code' section above)
- *-csc_stallfix-* : benchmarks were run with Core scheduling v3 and
                     the 'stallfix' feature enabled
- *-csc_vruntime-* : benchmarks were run with Core scheduling v3 + the
                     vruntime patches (see 3 in 'Code' section above)
- *-csc_tim-*      : benchmarks were run with Core scheduling v3 +
                     Tim's patches (see 4 in 'Code' section above)
- *-noHT           : benchmarks were run with HyperThreading disabled
- *-HT             : benchmarks were run with HyperThreading enabled
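
And, to be explicit about what -HT / -noHT means in practice: in the
noHT configurations, SMT was off on the host. Just as a reference for
reproducing, this is one way of toggling it at runtime on these
kernels (disabling it at boot, with the nosmt command line parameter,
works as well):

  # Turn SMT off ("noHT" configurations); sibling CPUs go offline:
  echo off > /sys/devices/system/cpu/smt/control
  # Turn it back on ("HT" configurations):
  echo on > /sys/devices/system/cpu/smt/control
  # Check the current state:
  cat /sys/devices/system/cpu/smt/active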

So, for instance, the column BM-noHT shows data from a run done on
baremetal, with HyperThreading disabled. The column v-VM-HT shows data
from a run done in an 8 vCPUs VM, with HyperThreading enabled, and no
core-scheduling patches applied. The column VM-csc_vruntime-HT shows
data from a run done in an 8 vCPUs VM with core-scheduling v3 patches +
the vruntime patches applied. The column VM-v4-HT shows data from a run
done in a 4 vCPUs VM, core-scheduling patches were applied but not used
(the vCPUs of the VM weren't tagged). The column VMx2-csc_vruntime-HT
shows data from a run done in an 8 vCPUs VM, core-scheduling v3 + the
vruntime patches were applied and the vCPUs of the VM tagged, while
there was another (untagged) VM in the system, trying to introduce
~600% load (CPU, memory and IO, via stress-ng). Etc.
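
About tagging: that means putting the vCPU threads of the VM in a
cgroup, and setting the tag file added by patch 12/16 of the v3 series
(the cgroup tagging interface). Roughly like this (the cgroup path and
the pid are placeholders, and the stress-ng parameters are indicative
of, not necessarily identical to, what I used):

  # Tag the vCPU threads of a VM (coresched v3 cgroup interface);
  # repeat the 'tasks' write for each vCPU thread's <pid>:
  mkdir /sys/fs/cgroup/cpu/vm1
  echo <pid> > /sys/fs/cgroup/cpu/vm1/tasks
  echo 1 > /sys/fs/cgroup/cpu/vm1/cpu.tag

  # Background load in the other (untagged) VM, via stress-ng
  # (~400% CPU + ~100% memory + ~100% IO ~= 600% total):
  stress-ng --cpu 4 --vm 1 --vm-bytes 1G --hdd 1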

See the 'Appendix' at the bottom of this email for a comprehensive
list of all the combinations (or, at least, I think it is
comprehensive... I hope I haven't missed any :-) ).

In all tables, percent increases and decreases are always relative to
the first column. Whether lower or higher raw values are better is
already taken into account.
Basically, when we see -x.yz%, it always means performance is worse
than the baseline, and the absolute value of that (i.e., x.yz) tells
you by how much.
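
Just to make it concrete, taking the 'MB/sec copy' row of the
overloaded (VMx2) table further down as an example (a throughput
metric, so higher is better and no sign flip is needed):

  # VMx2-csc_vruntime-HT (29143.88) vs. the v-VMx2-HT baseline (33514.96):
  awk 'BEGIN { printf "%.2f%%\n", (29143.88 - 33514.96) / 33514.96 * 100 }'
  # -13.04%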

If, for instance, we want to compare HT and non-HT, on baremetal, we
check the BM-HT and BM-noHT columns.
If we want to compare v3 + vruntime patches against no HyperThreading,
when the system is overloaded, we look at the VMx2-noHT and VMx2-
csc_vruntime-HT columns and check by how much they deviate from the
baseline (i.e., which one regresses more). For comparing the various
core scheduling solutions, we can check by how much each one is either
better or worse than the baseline. And so on...

The most relevant comparisons, IMO, are:
- the various core scheduling solutions against their respective HT
baseline. This, in fact, tells us what people will experience if they
start using core scheduling on these workloads
- the various core scheduling solutions against their respective noHT
baseline. This, in fact, tells us whether or not core scheduling is
effective, for the given workload, or if it would just be better to
disable HyperThreading
- the overhead introduced by the core scheduling patches, when they are
not used (i.e., v-BM-HT against BM-HT, or v-VM-HT against VM-HT). This,
in fact, tells us what happens to *everyone*, including the ones that
do not want core scheduling and will keep it disabled, if we merge it

Note that the overhead, so far, has been evaluated only for the -csc
case, i.e., when the patches from point 2 in 'Code' above are applied,
but tasks/vCPUs are not tagged, and hence core scheduling is not
really used.

Anyway, let's get to the point where I give you some data already! :-D
:-D

STREAM
======

http://xenbits.xen.org/people/dariof/benchmarks/results/linux/core-sched/mmtests/boxes/wayrath/coresched-email-1_stream.txt

                                  v                      v                     BM                     BM                     BM                     BM                     BM                     BM
                              BM-HT                BM-noHT                     HT                   noHT                 csc-HT        csc_stallfix-HT             csc_tim-HT        csc_vruntime-HT
MB/sec copy     33827.50 (   0.00%)    33654.32 (  -0.51%)    33683.34 (  -0.43%)    33819.30 (  -0.02%)    33830.88 (   0.01%)    33731.02 (  -0.29%)    33573.76 (  -0.75%)    33292.76 (  -1.58%)
MB/sec scale    22762.02 (   0.00%)    22524.00 (  -1.05%)    22416.54 (  -1.52%)    22444.16 (  -1.40%)    22652.56 (  -0.48%)    22462.80 (  -1.31%)    22461.90 (  -1.32%)    22670.84 (  -0.40%)
MB/sec add      26141.76 (   0.00%)    26241.42 (   0.38%)    26559.40 (   1.60%)    26365.36 (   0.86%)    26607.10 (   1.78%)    26384.50 (   0.93%)    26117.78 (  -0.09%)    26192.12 (   0.19%)
MB/sec triad    26522.46 (   0.00%)    26555.26 (   0.12%)    26499.62 (  -0.09%)    26373.26 (  -0.56%)    26667.32 (   0.55%)    26642.70 (   0.45%)    26505.38 (  -0.06%)    26409.60 (  -0.43%)
                                  v                      v                     VM                     VM                     VM                     VM                     VM                     VM
                              VM-HT                VM-noHT                     HT                   noHT                 csc-HT        csc_stallfix-HT             csc_tim-HT        csc_vruntime-HT
MB/sec copy     34559.32 (   0.00%)    34153.30 (  -1.17%)    34236.64 (  -0.93%)    33724.38 (  -2.42%)    33535.60 (  -2.96%)    33534.10 (  -2.97%)    33469.70 (  -3.15%)    33873.18 (  -1.99%)
MB/sec scale    22556.18 (   0.00%)    22834.88 (   1.24%)    22733.12 (   0.78%)    23010.46 (   2.01%)    22480.60 (  -0.34%)    22552.94 (  -0.01%)    22756.50 (   0.89%)    22434.96 (  -0.54%)
MB/sec add      26209.70 (   0.00%)    26640.08 (   1.64%)    26692.54 (   1.84%)    26747.40 (   2.05%)    26358.20 (   0.57%)    26353.50 (   0.55%)    26686.62 (   1.82%)    26256.50 (   0.18%)
MB/sec triad    26521.80 (   0.00%)    26490.26 (  -0.12%)    26598.66 (   0.29%)    26466.30 (  -0.21%)    26560.48 (   0.15%)    26496.30 (  -0.10%)    26609.10 (   0.33%)    26450.68 (  -0.27%)
                                  v                      v                     VM                     VM                     VM                     VM                     VM                     VM
                           VM-v4-HT             VM-v4-noHT                  v4-HT                v4-noHT              v4-csc-HT     v4-csc_stallfix-HT          v4-csc_tim-HT     v4-csc_vruntime-HT
MB/sec copy     32257.48 (   0.00%)    32504.18 (   0.76%)    32375.66 (   0.37%)    32261.98 (   0.01%)    31940.84 (  -0.98%)    32070.88 (  -0.58%)    31926.80 (  -1.03%)    31882.18 (  -1.16%)
MB/sec scale    19806.46 (   0.00%)    20281.18 (   2.40%)    20266.80 (   2.32%)    20075.46 (   1.36%)    19847.66 (   0.21%)    20119.00 (   1.58%)    19899.84 (   0.47%)    20060.48 (   1.28%)
MB/sec add      22178.58 (   0.00%)    22426.92 (   1.12%)    22185.54 (   0.03%)    22153.52 (  -0.11%)    21975.80 (  -0.91%)    22097.72 (  -0.36%)    21827.66 (  -1.58%)    22068.04 (  -0.50%)
MB/sec triad    22149.10 (   0.00%)    22200.54 (   0.23%)    22142.10 (  -0.03%)    21933.04 (  -0.98%)    21898.50 (  -1.13%)    22160.64 (   0.05%)    22003.40 (  -0.66%)    21951.16 (  -0.89%)
                                  v                      v                   VMx2                   VMx2                   VMx2                   VMx2                   VMx2                   VMx2
                            VMx2-HT              VMx2-noHT                     HT                   noHT                 csc-HT        csc_stallfix-HT             csc_tim-HT        csc_vruntime-HT
MB/sec copy     33514.96 (   0.00%)    24740.70 ( -26.18%)    30410.96 (  -9.26%)    22157.24 ( -33.89%)    29552.60 ( -11.82%)    29374.78 ( -12.35%)    28717.38 ( -14.31%)    29143.88 ( -13.04%)
MB/sec scale    22605.74 (   0.00%)    15473.56 ( -31.55%)    19051.76 ( -15.72%)    15278.64 ( -32.41%)    19246.98 ( -14.86%)    19081.04 ( -15.59%)    18747.60 ( -17.07%)    18776.02 ( -16.94%)
MB/sec add      26249.56 (   0.00%)    18559.92 ( -29.29%)    21143.90 ( -19.45%)    18664.30 ( -28.90%)    21236.00 ( -19.10%)    21067.40 ( -19.74%)    20878.78 ( -20.46%)    21266.92 ( -18.98%)
MB/sec triad    26290.16 (   0.00%)    19274.10 ( -26.69%)    20573.62 ( -21.74%)    17631.52 ( -32.93%)    21066.94 ( -19.87%)    20975.04 ( -20.22%)    20944.56 ( -20.33%)    20942.18 ( -20.34%)

So, STREAM, at least in this configuration, is not (as could have been
expected) really sensitive to HyperThreading. In fact, in most cases,
both when run on baremetal and in VMs, HT and noHT results are pretty
much the same. When core scheduling is used, things do not look bad at
all to me: results are, most of the time, only marginally worse.

Do check, however, the overloaded case. There, disabling HT has quite a
big impact, and core scheduling does a rather good job in restoring
good performance.

From the overhead point of view, the situation does not look too bad
either. In fact, in the first three groups of measurements, the
overhead introduced by having the core scheduling patches in is
acceptable (there are actually cases where they seem to do more good
than harm! :-P).
However, when the system is overloaded, despite there not being any
tagged task, numbers look pretty bad. It seems that, for instance, of
the 13.04% performance drop between v-VMx2-HT and VMx2-csc_vruntime-HT,
9.26% comes from overhead (as that much is there already in VMx2-HT)!!
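
Back of the envelope, on that same 'MB/sec copy' row:

  # Of the -13.04% total drop of VMx2-csc_vruntime-HT, -9.26% is
  # there already in VMx2-HT (patched but untagged), so:
  awk 'BEGIN { printf "core scheduling itself: %.2f points\n", -13.04 - (-9.26) }'
  # core scheduling itself: -3.78 points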

Something to investigate better, I guess...


Appendix

* v-BM-HT      : no coresched patch applied, baremetal, HyperThreading enabled
* v-BM-noHT    : no coresched patch applied, baremetal, Hyperthreading disabled
* v-VM-HT      : no coresched patch applied, 8 vCPUs VM, HyperThreading enabled
* v-VM-noHT    : no coresched patch applied, 8 vCPUs VM, Hyperthreading disabled
* v-VM-v4-HT   : no coresched patch applied, 4 vCPUs VM, HyperThreading enabled
* v-VM-v4-noHT : no coresched patch applied, 4 vCPUs VM, Hyperthreading disabled
* v-VMx2-HT    : no coresched patch applied, 8 vCPUs VM + 600% stress overhead, HyperThreading enabled
* v-VMx2-noHT  : no coresched patch applied, 8 vCPUs VM + 600% stress overhead, Hyperthreading disabled

* BM-HT              : baremetal, HyperThreading enabled
* BM-noHT            : baremetal, Hyperthreading disabled
* BM-csc-HT          : baremetal, coresched-v3 (Hyperthreading enabled, of course)
* BM-csc_stallfix-HT : baremetal, coresched-v3 + stallfix (Hyperthreading enabled, of course)
* BM-csc_tim-HT      : baremetal, coresched-v3 + Tim's patches (Hyperthreading enabled, of course)
* BM-csc_vruntime-HT : baremetal, coresched-v3 + vruntime patches (Hyperthreading enabled, of course)

* VM-HT              : 8 vCPUs VM, HyperThreading enabled
* VM-noHT            : 8 vCPUs VM, Hyperthreading disabled
* VM-csc-HT          : 8 vCPUs VM, coresched-v3 (Hyperthreading enabled, of course)
* VM-csc_stallfix-HT : 8 vCPUs VM, coresched-v3 + stallfix (Hyperthreading enabled, of course)
* VM-csc_tim-HT      : 8 vCPUs VM, coresched-v3 + Tim's patches (Hyperthreading enabled, of course)
* VM-csc_vruntime-HT : 8 vCPUs VM, coresched-v3 + vruntime patches (Hyperthreading enabled, of course)

* VM-v4-HT              : 4 vCPUs VM, HyperThreading enabled
* VM-v4-noHT            : 4 vCPUs VM, Hyperthreading disabled
* VM-v4-csc-HT          : 4 vCPUs VM, coresched-v3 (Hyperthreading enabled, of course)
* VM-v4-csc_stallfix-HT : 4 vCPUs VM, coresched-v3 + stallfix (Hyperthreading enabled, of course)
* VM-v4-csc_tim-HT      : 4 vCPUs VM, coresched-v3 + Tim's patches (Hyperthreading enabled, of course)
* VM-v4-csc_vruntime-HT : 4 vCPUs VM, coresched-v3 + vruntime patches (Hyperthreading enabled, of course)

* VMx2-HT              : 8 vCPUs VM + 600% stress overhead, HyperThreading enabled
* VMx2-noHT            : 8 vCPUs VM + 600% stress overhead, Hyperthreading disabled
* VMx2-csc-HT          : 8 vCPUs VM + 600% stress overhead, coresched-v3 (Hyperthreading enabled, of course)
* VMx2-csc_stallfix-HT : 8 vCPUs VM + 600% stress overhead, coresched-v3 + stallfix (Hyperthreading enabled, of course)
* VMx2-csc_tim-HT      : 8 vCPUs VM + 600% stress overhead, coresched-v3 + Tim's patches (Hyperthreading enabled, of course)
* VMx2-csc_vruntime-HT : 8 vCPUs VM + 600% stress overhead, coresched-v3 + vruntime patches (Hyperthreading enabled, of course)

-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



