LKML Archive on lore.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: Frederic Weisbecker <frederic@kernel.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>,
	LKML <linux-kernel@vger.kernel.org>, Tejun Heo <tj@kernel.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Alex Belits <abelits@marvell.com>, Nitesh Lal <nilal@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Nicolas Saenz <nsaenzju@redhat.com>,
	Christoph Lameter <cl@gentwo.de>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Zefan Li <lizefan.x@bytedance.com>,
	cgroups@vger.kernel.org
Subject: Re: [RFC PATCH 6/6] cpuset: Add cpuset.isolation_mask file
Date: Thu, 15 Jul 2021 11:04:19 +0200	[thread overview]
Message-ID: <20210715090419.GH2725@worktop.programming.kicks-ass.net> (raw)
In-Reply-To: <20210714231338.GA65963@lothringen>

On Thu, Jul 15, 2021 at 01:13:38AM +0200, Frederic Weisbecker wrote:
> On Wed, Jul 14, 2021 at 06:52:43PM +0200, Peter Zijlstra wrote:

> > cpusets already has means to create partitions; why are you creating
> > something else?
> 
> I was about to answer that the semantics of isolcpus, which references
> a NULL domain, are different from the SD_LOAD_BALANCE behaviour implied
> by cpuset.sched_load_balance. But then I realized that SD_LOAD_BALANCE
> has been removed.
> 
> How is cpuset.sched_load_balance implemented then? Commit
> e669ac8ab952df2f07dee1e1efbf40647d6de332 ("sched: Remove checks against
> SD_LOAD_BALANCE") advertises that setting cpuset.sched_load_balance to 0
> ends up creating a NULL domain, but that's not what I get. For example,
> if I mount a single cpuset root (no other cpuset mountpoints):

SD_LOAD_BALANCE was only for when you wanted to stop balancing inside a
domain tree. That no longer happens (and hasn't for a *long* time).
Cpusets simply create multiple domain trees (or the empty one if it's
just one CPU).

> $ mount -t cgroup none ./cpuset -o cpuset
> $ cd cpuset
> $ cat cpuset.cpus
> 0-7
> $ cat cpuset.sched_load_balance
> 1
> $ echo 0 > cpuset.sched_load_balance
> $ ls /sys/kernel/debug/domains/cpu1/
> domain0  domain1
> 
> I still get the domains on all CPUs...

(note: that's the cgroup-v1 interface; the cgroup-v2 interface is
significantly different)
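(For reference, a rough sketch of the equivalent idea through the
cgroup-v2 interface; this assumes cgroup2 is mounted at /sys/fs/cgroup
with the cpuset controller available, and the exact partition semantics
depend on your kernel version:)

```shell
# Enable the cpuset controller for child groups (v2 sketch).
echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control

# Create a child group spanning the first two cores.
mkdir /sys/fs/cgroup/A
echo 0-1,20-21 > /sys/fs/cgroup/A/cpuset.cpus

# Turn A into a partition root: it gets its own scheduling domain(s),
# detached from the parent's root domain.
echo root > /sys/fs/cgroup/A/cpuset.cpus.partition

# Reads back "root" if the partition request took effect.
cat /sys/fs/cgroup/A/cpuset.cpus.partition
```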

I'd suggest doing: echo 1 > /debug/sched/verbose. If I do the above I
get:

[1290784.889705] CPU0 attaching NULL sched-domain.
[1290784.894830] CPU1 attaching NULL sched-domain.
[1290784.899947] CPU2 attaching NULL sched-domain.
[1290784.905056] CPU3 attaching NULL sched-domain.
[1290784.910153] CPU4 attaching NULL sched-domain.
[1290784.915252] CPU5 attaching NULL sched-domain.
[1290784.920338] CPU6 attaching NULL sched-domain.
[1290784.925439] CPU7 attaching NULL sched-domain.
[1290784.930535] CPU8 attaching NULL sched-domain.
[1290784.935660] CPU9 attaching NULL sched-domain.
[1290784.940911] CPU10 attaching NULL sched-domain.
[1290784.946117] CPU11 attaching NULL sched-domain.
[1290784.951317] CPU12 attaching NULL sched-domain.
[1290784.956507] CPU13 attaching NULL sched-domain.
[1290784.961688] CPU14 attaching NULL sched-domain.
[1290784.966876] CPU15 attaching NULL sched-domain.
[1290784.972047] CPU16 attaching NULL sched-domain.
[1290784.977218] CPU17 attaching NULL sched-domain.
[1290784.982383] CPU18 attaching NULL sched-domain.
[1290784.987552] CPU19 attaching NULL sched-domain.
[1290784.992724] CPU20 attaching NULL sched-domain.
[1290784.997893] CPU21 attaching NULL sched-domain.
[1290785.003063] CPU22 attaching NULL sched-domain.
[1290785.008230] CPU23 attaching NULL sched-domain.
[1290785.013400] CPU24 attaching NULL sched-domain.
[1290785.018568] CPU25 attaching NULL sched-domain.
[1290785.023736] CPU26 attaching NULL sched-domain.
[1290785.028905] CPU27 attaching NULL sched-domain.
[1290785.034074] CPU28 attaching NULL sched-domain.
[1290785.039241] CPU29 attaching NULL sched-domain.
[1290785.044409] CPU30 attaching NULL sched-domain.
[1290785.049579] CPU31 attaching NULL sched-domain.
[1290785.054816] CPU32 attaching NULL sched-domain.
[1290785.059986] CPU33 attaching NULL sched-domain.
[1290785.065154] CPU34 attaching NULL sched-domain.
[1290785.070323] CPU35 attaching NULL sched-domain.
[1290785.075492] CPU36 attaching NULL sched-domain.
[1290785.080662] CPU37 attaching NULL sched-domain.
[1290785.085832] CPU38 attaching NULL sched-domain.
[1290785.091001] CPU39 attaching NULL sched-domain.

Then when I do:

# mkdir /cgroup/A
# echo 0,20 > /cgroup/A/cpuset.cpus

I get:

[1291020.749036] CPU0 attaching sched-domain(s):
[1291020.754251]  domain-0: span=0,20 level=SMT
[1291020.759061]   groups: 0:{ span=0 }, 20:{ span=20 }
[1291020.765386] CPU20 attaching sched-domain(s):
[1291020.770399]  domain-0: span=0,20 level=SMT
[1291020.775210]   groups: 20:{ span=20 }, 0:{ span=0 }
[1291020.780831] root domain span: 0,20 (max cpu_capacity = 1024)

IOW, I've created a load-balance domain on just the first core of the
system.
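A quick way to double-check the result without watching dmesg, assuming
a SCHED_DEBUG kernel and the debugfs layout used earlier in this thread
(exact file names vary by kernel version):

```shell
# CPUs inside the balanced cpuset have a domain hierarchy...
ls /sys/kernel/debug/domains/cpu0/
# ...while CPUs left outside it have none (NULL sched-domain).
ls /sys/kernel/debug/domains/cpu2/
```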

# echo 0-1,20-21 > /cgroup/A/cpuset.cpus

Extends it to the first two cores:

[1291340.260699] CPU0 attaching NULL sched-domain.
[1291340.265820] CPU20 attaching NULL sched-domain.
[1291340.271403] CPU0 attaching sched-domain(s):
[1291340.276315]  domain-0: span=0,20 level=SMT
[1291340.281122]   groups: 0:{ span=0 }, 20:{ span=20 }
[1291340.286719]   domain-1: span=0-1,20-21 level=MC
[1291340.292011]    groups: 0:{ span=0,20 cap=2048 }, 1:{ span=1,21 cap=2048 }
[1291340.299855] CPU1 attaching sched-domain(s):
[1291340.304757]  domain-0: span=1,21 level=SMT
[1291340.309564]   groups: 1:{ span=1 }, 21:{ span=21 }
[1291340.315190]   domain-1: span=0-1,20-21 level=MC
[1291340.320474]    groups: 1:{ span=1,21 cap=2048 }, 0:{ span=0,20 cap=2048 }
[1291340.328307] CPU20 attaching sched-domain(s):
[1291340.333344]  domain-0: span=0,20 level=SMT
[1291340.338136]   groups: 20:{ span=20 }, 0:{ span=0 }
[1291340.343721]   domain-1: span=0-1,20-21 level=MC
[1291340.348980]    groups: 0:{ span=0,20 cap=2048 }, 1:{ span=1,21 cap=2048 }
[1291340.356783] CPU21 attaching sched-domain(s):
[1291340.361755]  domain-0: span=1,21 level=SMT
[1291340.366534]   groups: 21:{ span=21 }, 1:{ span=1 }
[1291340.372099]   domain-1: span=0-1,20-21 level=MC
[1291340.377364]    groups: 1:{ span=1,21 cap=2048 }, 0:{ span=0,20 cap=2048 }
[1291340.385216] root domain span: 0-1,20-21 (max cpu_capacity = 1024)
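Putting the above together, a minimal cgroup-v1 recipe for this kind of
isolation (a sketch; paths assume the mount used earlier, and a
SCHED_DEBUG kernel for the verbose output):

```shell
# Mount the v1 cpuset hierarchy and disable balancing at the root.
mount -t cgroup -o cpuset none ./cpuset
cd cpuset
echo 1 > /sys/kernel/debug/sched/verbose   # log domain rebuilds to dmesg
echo 0 > cpuset.sched_load_balance         # root stops defining a single big domain

# Carve out a child cpuset that *is* load-balanced.
mkdir A
echo 0-1,20-21 > A/cpuset.cpus

# Result: CPUs 0-1,20-21 get their own SMT/MC domain tree;
# all other CPUs attach a NULL sched-domain (no load balancing).
```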



Thread overview: 19+ messages
2021-07-14 13:54 [RFC PATCH 0/6] cpuset: Allow to modify isolcpus through cpuset Frederic Weisbecker
2021-07-14 13:54 ` [RFC PATCH 1/6] pci: Decouple HK_FLAG_WQ and HK_FLAG_DOMAIN cpumask fetch Frederic Weisbecker
2021-07-14 13:54 ` [RFC PATCH 2/6] workqueue: " Frederic Weisbecker
2021-07-14 13:54 ` [RFC PATCH 3/6] net: " Frederic Weisbecker
2021-07-14 13:54 ` [RFC PATCH 4/6] sched/isolation: Split domain housekeeping mask from the rest Frederic Weisbecker
2021-07-14 13:54 ` [RFC PATCH 5/6] sched/isolation: Make HK_FLAG_DOMAIN mutable Frederic Weisbecker
2021-07-21 14:28   ` Vincent Donnefort
2021-07-14 13:54 ` [RFC PATCH 6/6] cpuset: Add cpuset.isolation_mask file Frederic Weisbecker
2021-07-14 16:31   ` Marcelo Tosatti
2021-07-19 13:26     ` Frederic Weisbecker
2021-07-19 15:41       ` Marcelo Tosatti
2021-07-14 16:52   ` Peter Zijlstra
2021-07-14 23:13     ` Frederic Weisbecker
2021-07-14 23:44       ` Valentin Schneider
2021-07-15  0:07         ` Frederic Weisbecker
2021-07-15  9:04       ` Peter Zijlstra [this message]
2021-07-19 13:17         ` Frederic Weisbecker
2021-07-16 18:02 ` [RFC PATCH 0/6] cpuset: Allow to modify isolcpus through cpuset Waiman Long
2021-07-19 13:57   ` Frederic Weisbecker
