LKML Archive on lore.kernel.org
* [PATCH] irq: consider cpus on nodes are unbalanced
From: Rei Yamamoto @ 2021-10-29  8:27 UTC (permalink / raw)
  To: tglx; +Cc: linux-kernel, yamamoto.rei

If the CPUs on a node are offline at boot time, the number of nodes
differs between building affinity masks for present CPUs and building
affinity masks for possible CPUs. This patch fixes two problems caused
by that difference:

 - If some unused vectors remain after building masks for present CPUs,
   the remaining vectors are assigned when building masks for possible
   CPUs. Therefore the "numvecs <= nodes" condition should be
   "vecs_to_assign <= nodes_to_assign". Fix this by making the
   condition reflect that.

 - The "numvecs <= nodes" branch can overwrite mask bits already set
   for present CPUs while building masks for possible CPUs. Fix this
   by leaving the bits of non-target CPUs unchanged.

Signed-off-by: Rei Yamamoto <yamamoto.rei@jp.fujitsu.com>
---
 kernel/irq/affinity.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index f7ff8919dc9b..1cdf89e5e2fb 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -267,10 +267,16 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 	 * If the number of nodes in the mask is greater than or equal the
 	 * number of vectors we just spread the vectors across the nodes.
 	 */
-	if (numvecs <= nodes) {
+	if (numvecs - (curvec - firstvec) <= nodes) {
 		for_each_node_mask(n, nodemsk) {
+			unsigned int ncpus;
+
+			cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
+			ncpus = cpumask_weight(nmsk);
+			if (!ncpus)
+				continue;
 			cpumask_or(&masks[curvec].mask, &masks[curvec].mask,
-				   node_to_cpumask[n]);
+				   nmsk);
 			if (++curvec == last_affv)
 				curvec = firstvec;
 		}
-- 
2.27.0



* Re: [PATCH] irq: consider cpus on nodes are unbalanced
From: Thomas Gleixner @ 2021-11-24 19:33 UTC (permalink / raw)
  To: Rei Yamamoto
  Cc: linux-kernel, yamamoto.rei, Ming Lei, Christoph Hellwig,
	Marc Zyngier, Keith Busch

Rei,

On Fri, Oct 29 2021 at 17:27, Rei Yamamoto wrote:

Cc'ing a few people who worked on this code.

> If the CPUs on a node are offline at boot time, the number of nodes
> differs between building affinity masks for present CPUs and building
> affinity masks for possible CPUs. This patch fixes two problems caused
> by that difference:
>
>  - If some unused vectors remain after building masks for present CPUs,
>    the remaining vectors are assigned when building masks for possible
>    CPUs. Therefore the "numvecs <= nodes" condition should be
>    "vecs_to_assign <= nodes_to_assign". Fix this by making the
>    condition reflect that.
>
>  - The "numvecs <= nodes" branch can overwrite mask bits already set
>    for present CPUs while building masks for possible CPUs. Fix this
>    by leaving the bits of non-target CPUs unchanged.
>
> Signed-off-by: Rei Yamamoto <yamamoto.rei@jp.fujitsu.com>
> ---
>  kernel/irq/affinity.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index f7ff8919dc9b..1cdf89e5e2fb 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -267,10 +267,16 @@ static int __irq_build_affinity_masks(unsigned int startvec,
>  	 * If the number of nodes in the mask is greater than or equal the
>  	 * number of vectors we just spread the vectors across the nodes.
>  	 */
> -	if (numvecs <= nodes) {
> +	if (numvecs - (curvec - firstvec) <= nodes) {
>  		for_each_node_mask(n, nodemsk) {
> +			unsigned int ncpus;
> +
> +			cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
> +			ncpus = cpumask_weight(nmsk);
> +			if (!ncpus)
> +				continue;
>  			cpumask_or(&masks[curvec].mask, &masks[curvec].mask,
> -				   node_to_cpumask[n]);
> +				   nmsk);
>  			if (++curvec == last_affv)
>  				curvec = firstvec;
>  		}


* Re: [PATCH] irq: consider cpus on nodes are unbalanced
From: Rei Yamamoto @ 2021-12-15  1:57 UTC (permalink / raw)
  To: tglx; +Cc: hch, kbusch, linux-kernel, maz, ming.lei, yamamoto.rei

On Wed, Nov 24 2021 at 20:33, Thomas Gleixner wrote:
> Cc'ing a few people who worked on this code.
>
>> If the CPUs on a node are offline at boot time, the number of nodes
>> differs between building affinity masks for present CPUs and building
>> affinity masks for possible CPUs. This patch fixes two problems caused
>> by that difference:
>>
>>  - If some unused vectors remain after building masks for present CPUs,
>>    the remaining vectors are assigned when building masks for possible
>>    CPUs. Therefore the "numvecs <= nodes" condition should be
>>    "vecs_to_assign <= nodes_to_assign". Fix this by making the
>>    condition reflect that.
>>
>>  - The "numvecs <= nodes" branch can overwrite mask bits already set
>>    for present CPUs while building masks for possible CPUs. Fix this
>>    by leaving the bits of non-target CPUs unchanged.
>>
>> Signed-off-by: Rei Yamamoto <yamamoto.rei@jp.fujitsu.com>
>> ---
>>  kernel/irq/affinity.c | 10 ++++++++--
>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
>> index f7ff8919dc9b..1cdf89e5e2fb 100644
>> --- a/kernel/irq/affinity.c
>> +++ b/kernel/irq/affinity.c
>> @@ -267,10 +267,16 @@ static int __irq_build_affinity_masks(unsigned int startvec,
>>  	 * If the number of nodes in the mask is greater than or equal the
>>  	 * number of vectors we just spread the vectors across the nodes.
>>  	 */
>> -	if (numvecs <= nodes) {
>> +	if (numvecs - (curvec - firstvec) <= nodes) {
>>  		for_each_node_mask(n, nodemsk) {
>> +			unsigned int ncpus;
>> +
>> +			cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
>> +			ncpus = cpumask_weight(nmsk);
>> +			if (!ncpus)
>> +				continue;
>>  			cpumask_or(&masks[curvec].mask, &masks[curvec].mask,
>> -				   node_to_cpumask[n]);
>> +				   nmsk);
>>  			if (++curvec == last_affv)
>>  				curvec = firstvec;
>>  		}

Do you have any comments?

Rei


* Re: [PATCH] irq: consider cpus on nodes are unbalanced
From: Ming Lei @ 2021-12-15  4:33 UTC (permalink / raw)
  To: Rei Yamamoto; +Cc: tglx, hch, kbusch, linux-kernel, maz, ming.lei

On Wed, Dec 15, 2021 at 10:57:11AM +0900, Rei Yamamoto wrote:
> On Wed, Nov 24 2021 at 20:33, Thomas Gleixner wrote:
> > Cc'ing a few people who worked on this code.
> >
> >> If the CPUs on a node are offline at boot time, the number of nodes
> >> differs between building affinity masks for present CPUs and building
> >> affinity masks for possible CPUs.

There is always a difference between the two node counts: the first is
the number of nodes covering the present CPUs, and the second is the
number of nodes covering the remaining possible CPUs not yet spread.

>> This patch fixes two problems caused by that difference:

Is there any user-visible problem?

> >>
> >>  - If some unused vectors remain after building masks for present CPUs,

We just select a new vector to start the spread from if un-allocated
vectors remain, but the number to allocate is still numvecs. We want
both present CPUs and non-present CPUs to be balanced across the
vectors, so that each vector may get a present CPU allocated.

> >>    the remaining vectors are assigned when building masks for possible
> >>    CPUs. Therefore the "numvecs <= nodes" condition should be
> >>    "vecs_to_assign <= nodes_to_assign". Fix this by making the
> >>    condition reflect that.
> >>
> >>  - The "numvecs <= nodes" branch can overwrite mask bits already set
> >>    for present CPUs while building masks for possible CPUs. Fix this
> >>    by leaving the bits of non-target CPUs unchanged.

'numvecs' is always the total number of vectors for assigning CPUs. If
that number is <= nodes, we just assign the interested CPUs of each
whole node to a vector until all interested CPUs are allocated.

> Do you have any comments?

I don't see an issue with the current approach; can you explain the
real user-visible problem in detail?

Thanks,
Ming



* Re: [PATCH] irq: consider cpus on nodes are unbalanced
From: Rei Yamamoto @ 2021-12-17  2:48 UTC (permalink / raw)
  To: ming.lei; +Cc: hch, kbusch, linux-kernel, maz, tglx, yamamoto.rei

On Wed, Dec 15, 2021 at 12:33, Ming Lei wrote:
>> >> If the CPUs on a node are offline at boot time, the number of nodes
>> >> differs between building affinity masks for present CPUs and building
>> >> affinity masks for possible CPUs.
>
> There is always a difference between the two node counts: the first is
> the number of nodes covering the present CPUs, and the second is the
> number of nodes covering the remaining possible CPUs not yet spread.

In that case, building affinity masks for possible CPUs can change even
the affinity mask bits for present CPUs in the "if (numvecs <= nodes)"
branch. This is the second problem I mentioned. I explain the actual
case below.

>
>>> This patch fixes two problems caused by that difference:
>
> Is there any user-visible problem?

A panic occurred in the lpfc driver.

>
>> >>
>> >>  - If some unused vectors remain after building masks for present CPUs,
>
> We just select a new vector to start the spread from if un-allocated
> vectors remain, but the number to allocate is still numvecs. We want
> both present CPUs and non-present CPUs to be balanced across the
> vectors, so that each vector may get a present CPU allocated.

Understood; I withdraw the first problem I mentioned.

>
>> >>    the remaining vectors are assigned when building masks for possible
>> >>    CPUs. Therefore the "numvecs <= nodes" condition should be
>> >>    "vecs_to_assign <= nodes_to_assign". Fix this by making the
>> >>    condition reflect that.
>> >>
>> >>  - The "numvecs <= nodes" branch can overwrite mask bits already set
>> >>    for present CPUs while building masks for possible CPUs. Fix this
>> >>    by leaving the bits of non-target CPUs unchanged.
>
> 'numvecs' is always the total number of vectors for assigning CPUs. If
> that number is <= nodes, we just assign the interested CPUs of each
> whole node to a vector until all interested CPUs are allocated.
>
>
>> Do you have any comments?
>
> I don't see an issue with the current approach; can you explain the
> real user-visible problem in detail?

I experienced a panic in the lpfc driver caused by broken affinity masks.

The system had the following configuration:
-----
node num: cpu num (CPUs in parentheses are possible but not present)
Node #0: #0 #1 (#4 #8 #12)
Node #1: #2 #3 (#5 #9 #13)
Node #2: (#6 #10 #14)
Node #3: (#7 #11 #15)

Number of CPUs: 16
Present CPU: cpu0, cpu1, cpu2, cpu3
Number of nodes covering present cpus: 2
Number of nodes covering possible cpus: 4
Number of vectors: 4
-----

Due to the configuration above, cpumask_var_t *node_to_cpumask was as follows:
-----
node_to_cpumask[0] = 0x1113
node_to_cpumask[1] = 0x222c
node_to_cpumask[2] = 0x4440
node_to_cpumask[3] = 0x8880
-----

As a result of assigning vectors for present CPUs, masks[].mask was as follows:
-----
masks[vec1].mask = 0x0004
masks[vec2].mask = 0x0008
masks[vec3].mask = 0x0001
masks[vec4].mask = 0x0002
-----

As a result of assigning vectors for possible CPUs, masks[].mask was as follows:
-----
masks[vec1].mask = 0x1117
masks[vec2].mask = 0x222c
masks[vec3].mask = 0x4441
masks[vec4].mask = 0x8882
-----

The problem I encountered was that multiple vectors were unexpectedly
assigned to a single present CPU. For example, vec1 and vec3 were both
assigned to cpu0. Due to this mask, the panic occurred in the lpfc
driver.

>> >>  - The "numvecs <= nodes" branch can overwrite mask bits already set
>> >>    for present CPUs while building masks for possible CPUs. Fix this
>> >>    by leaving the bits of non-target CPUs unchanged.

Therefore, when node_to_cpumask is used, ANDing with cpu_mask is
necessary so that the bits of non-target CPUs are not changed.

Thanks,
Rei



* Re: [PATCH] irq: consider cpus on nodes are unbalanced
From: Ming Lei @ 2021-12-17  6:57 UTC (permalink / raw)
  To: Rei Yamamoto; +Cc: hch, kbusch, linux-kernel, maz, tglx

On Fri, Dec 17, 2021 at 11:48:05AM +0900, Rei Yamamoto wrote:
> On Wed, Dec 15, 2021 at 12:33, Ming Lei wrote:
> >> >> If the CPUs on a node are offline at boot time, the number of nodes
> >> >> differs between building affinity masks for present CPUs and building
> >> >> affinity masks for possible CPUs.
> >
> > There is always a difference between the two node counts: the first is
> > the number of nodes covering the present CPUs, and the second is the
> > number of nodes covering the remaining possible CPUs not yet spread.
> 
> In that case, building affinity masks for possible CPUs can change even
> the affinity mask bits for present CPUs in the "if (numvecs <= nodes)"
> branch. This is the second problem I mentioned. I explain the actual
> case below.
> 
> >
> >>> This patch fixes two problems caused by that difference:
> >
> > Is there any user-visible problem?
> 
> A panic occurred in the lpfc driver.
> 
> >
> >> >>
> >> >>  - If some unused vectors remain after building masks for present CPUs,
> >
> > We just select a new vector to start the spread from if un-allocated
> > vectors remain, but the number to allocate is still numvecs. We want
> > both present CPUs and non-present CPUs to be balanced across the
> > vectors, so that each vector may get a present CPU allocated.
> 
> Understood; I withdraw the first problem I mentioned.
> 
> >
> >> >>    the remaining vectors are assigned when building masks for possible
> >> >>    CPUs. Therefore the "numvecs <= nodes" condition should be
> >> >>    "vecs_to_assign <= nodes_to_assign". Fix this by making the
> >> >>    condition reflect that.
> >> >>
> >> >>  - The "numvecs <= nodes" branch can overwrite mask bits already set
> >> >>    for present CPUs while building masks for possible CPUs. Fix this
> >> >>    by leaving the bits of non-target CPUs unchanged.
> >
> > 'numvecs' is always the total number of vectors for assigning CPUs. If
> > that number is <= nodes, we just assign the interested CPUs of each
> > whole node to a vector until all interested CPUs are allocated.
> >
> >
> >> Do you have any comments?
> >
> > I don't see an issue with the current approach; can you explain the
> > real user-visible problem in detail?
> 
> I experienced a panic in the lpfc driver caused by broken affinity masks.
> 
> The system had the following configuration:
> -----
> node num: cpu num (CPUs in parentheses are possible but not present)
> Node #0: #0 #1 (#4 #8 #12)
> Node #1: #2 #3 (#5 #9 #13)
> Node #2: (#6 #10 #14)
> Node #3: (#7 #11 #15)
> 
> Number of CPUs: 16
> Present CPU: cpu0, cpu1, cpu2, cpu3
> Number of nodes covering present cpus: 2
> Number of nodes covering possible cpus: 4
> Number of vectors: 4
> -----
> 
> Due to the configuration above, cpumask_var_t *node_to_cpumask was as follows:
> -----
> node_to_cpumask[0] = 0x1113
> node_to_cpumask[1] = 0x222c
> node_to_cpumask[2] = 0x4440
> node_to_cpumask[3] = 0x8880
> -----
> 
> As a result of assigning vectors for present CPUs, masks[].mask was as follows:
> -----
> masks[vec1].mask = 0x0004
> masks[vec2].mask = 0x0008
> masks[vec3].mask = 0x0001
> masks[vec4].mask = 0x0002
> -----
> 
> As a result of assigning vectors for possible CPUs, masks[].mask was as follows:
> -----
> masks[vec1].mask = 0x1117
> masks[vec2].mask = 0x222c
> masks[vec3].mask = 0x4441
> masks[vec4].mask = 0x8882
> -----
> 
> The problem I encountered was that multiple vectors were unexpectedly
> assigned to a single present CPU. For example, vec1 and vec3 were both
> assigned to cpu0. Due to this mask, the panic occurred in the lpfc
> driver.

OK, I understand the issue now; only the following part is needed,
since nmsk won't be empty:


diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index f7ff8919dc9b..d2d01565d2ec 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -269,8 +269,9 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 	 */
 	if (numvecs <= nodes) {
 		for_each_node_mask(n, nodemsk) {
+			cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
 			cpumask_or(&masks[curvec].mask, &masks[curvec].mask,
-				   node_to_cpumask[n]);
+				   nmsk);
 			if (++curvec == last_affv)
 				curvec = firstvec;
 		}

Thanks,
Ming



* Re: [PATCH] irq: consider cpus on nodes are unbalanced
From: Rei Yamamoto @ 2021-12-17  7:12 UTC (permalink / raw)
  To: ming.lei; +Cc: hch, kbusch, linux-kernel, maz, tglx, yamamoto.rei

On Fri, Dec 17, 2021 at 14:57, Ming Lei wrote:
> OK, I understand the issue now; only the following part is needed,
> since nmsk won't be empty:
> 
> 
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index f7ff8919dc9b..d2d01565d2ec 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -269,8 +269,9 @@ static int __irq_build_affinity_masks(unsigned int startvec,
>  	 */
>  	if (numvecs <= nodes) {
>  		for_each_node_mask(n, nodemsk) {
> +			cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
>  			cpumask_or(&masks[curvec].mask, &masks[curvec].mask,
> -				   node_to_cpumask[n]);
> +				   nmsk);
>  			if (++curvec == last_affv)
>  				curvec = firstvec;
>  		}

OK, I will repost with the above code changes.

Thanks,
Rei


