LKML Archive on lore.kernel.org
* [RFC] genirq: Add effective CPU index retrieving interface
@ 2021-06-07  7:33 Shung-Hsi Yu
  2021-08-10 14:12 ` Thomas Gleixner
  0 siblings, 1 reply; 3+ messages in thread
From: Shung-Hsi Yu @ 2021-06-07  7:33 UTC (permalink / raw)
  To: linux-kernel, linux-api, netdev, linux-pci
  Cc: Thomas Gleixner, Peter Zijlstra, Jesse Brandeburg, Nitesh Lal

Most drivers' IRQ spreading schemes are naive compared to the scheme
introduced by the IRQ subsystem rework, so it is better to rely on
request_irq() to spread IRQs out.

However, drivers that care enough about performance also tend to allocate
memory on the same NUMA node on which the IRQ handler will run. For such
drivers to rely on request_irq() for IRQ spreading, we also need to provide
an interface to retrieve the CPU index after calling request_irq().

This should be the last missing piece of the puzzle that allows removal of
calls to irq_set_affinity_hint() that were actually intended to spread out
IRQs.

Link: https://lore.kernel.org/lkml/CAFki+Lm0W_brLu31epqD3gAV+WNKOJfVDfX2M8ZM__aj3nv9uA@mail.gmail.com/
Link: https://lore.kernel.org/linux-api/87zgwo9u79.ffs@nanos.tec.linutronix.de/
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
---

I asked previously in another thread[1] whether such an interface exists,
but there doesn't seem to be one, hence this patch.

I apologize if there's anything glaringly wrong; it's my first time trying
to send a patch that deals with a driver interface. Just let me know, and
I'll get it fixed.

Also, there's probably a better name for the interface, but I can't think
of one.

1: https://lore.kernel.org/r/YK9yxQoBPeUfQG05@syu-laptop

---
 include/linux/interrupt.h |  1 +
 kernel/irq/manage.c       | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 2ed65b01c961..b67621ccde35 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -324,6 +324,7 @@ extern cpumask_var_t irq_default_affinity;
 
 extern int irq_set_affinity(unsigned int irq, const struct cpumask *cpumask);
 extern int irq_force_affinity(unsigned int irq, const struct cpumask *cpumask);
+extern int irq_get_effective_cpu(unsigned int irq);
 
 extern int irq_can_set_affinity(unsigned int irq);
 extern int irq_select_affinity(unsigned int irq);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index ef30b4762947..5e2a722c5d93 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -487,6 +487,23 @@ int irq_force_affinity(unsigned int irq, const struct cpumask *cpumask)
 }
 EXPORT_SYMBOL_GPL(irq_force_affinity);
 
+/**
+ * irq_get_effective_cpu - Retrieve the effective CPU index
+ * @irq:	Target interrupt to retrieve effective CPU index
+ *
 + * When the effective affinity cpumask has multiple CPUs set, this just
 + * returns the first CPU in the cpumask.
+ */
+int irq_get_effective_cpu(unsigned int irq)
+{
+	struct irq_data *data = irq_get_irq_data(irq);
+	struct cpumask *m;
+
+	m = irq_data_get_effective_affinity_mask(data);
+	return cpumask_first(m);
+}
+EXPORT_SYMBOL_GPL(irq_get_effective_cpu);
+
 int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m)
 {
 	unsigned long flags;
-- 
2.31.1



* Re: [RFC] genirq: Add effective CPU index retrieving interface
  2021-06-07  7:33 [RFC] genirq: Add effective CPU index retrieving interface Shung-Hsi Yu
@ 2021-08-10 14:12 ` Thomas Gleixner
  2021-08-10 14:35   ` Nitesh Lal
  0 siblings, 1 reply; 3+ messages in thread
From: Thomas Gleixner @ 2021-08-10 14:12 UTC (permalink / raw)
  To: Shung-Hsi Yu, linux-kernel, linux-api, netdev, linux-pci
  Cc: Peter Zijlstra, Jesse Brandeburg, Nitesh Lal

On Mon, Jun 07 2021 at 15:33, Shung-Hsi Yu wrote:
> Most drivers' IRQ spreading schemes are naive compared to the scheme
> introduced by the IRQ subsystem rework, so it is better to rely on
> request_irq() to spread IRQs out.
>
> However, drivers that care enough about performance also tend to
> allocate memory on the same NUMA node on which the IRQ handler will
> run. For such drivers to rely on request_irq() for IRQ spreading, we
> also need to provide an interface to retrieve the CPU index after
> calling request_irq().

So if you are interested in the resulting NUMA node, then why expose a
random CPU out of the affinity mask instead of providing a function to
retrieve the NUMA node?
  
> +/**
> + * irq_get_effective_cpu - Retrieve the effective CPU index
> + * @irq:	Target interrupt to retrieve effective CPU index
> + *
> + * When the effective affinity cpumask has multiple CPUs set, this just
> + * returns the first CPU in the cpumask.
> + */
> +int irq_get_effective_cpu(unsigned int irq)
> +{
> +	struct irq_data *data = irq_get_irq_data(irq);

This can be NULL.

> +	struct cpumask *m;
> +
> +	m = irq_data_get_effective_affinity_mask(data);
> +	return cpumask_first(m);
> +}

Thanks,

        tglx


* Re: [RFC] genirq: Add effective CPU index retrieving interface
  2021-08-10 14:12 ` Thomas Gleixner
@ 2021-08-10 14:35   ` Nitesh Lal
  0 siblings, 0 replies; 3+ messages in thread
From: Nitesh Lal @ 2021-08-10 14:35 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Shung-Hsi Yu, linux-kernel, linux-api, netdev, linux-pci,
	Peter Zijlstra, Jesse Brandeburg

On Tue, Aug 10, 2021 at 10:13 AM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Mon, Jun 07 2021 at 15:33, Shung-Hsi Yu wrote:
> > Most drivers' IRQ spreading schemes are naive compared to the scheme
> > introduced by the IRQ subsystem rework, so it is better to rely on
> > request_irq() to spread IRQs out.
> >
> > However, drivers that care enough about performance also tend to
> > allocate memory on the same NUMA node on which the IRQ handler will
> > run. For such drivers to rely on request_irq() for IRQ spreading, we
> > also need to provide an interface to retrieve the CPU index after
> > calling request_irq().
>
> So if you are interested in the resulting NUMA node, then why expose a
> random CPU out of the affinity mask instead of providing a function to
> retrieve the NUMA node?

Agreed; it would probably make more sense for drivers to pass either the
local NUMA node index or NULL (in case they don't care about it) as a
parameter, and then at allocation time we would only pick the best-fit
CPUs from that NUMA node.

Or maybe we should do this by default, and if the local NUMA node's CPUs
run out of available vectors, fall back to CPUs on other NUMA nodes.

>
> > +/**
> > + * irq_get_effective_cpu - Retrieve the effective CPU index
> > + * @irq:     Target interrupt to retrieve effective CPU index
> > + *
> > + * When the effective affinity cpumask has multiple CPUs set, this just
> > + * returns the first CPU in the cpumask.
> > + */
> > +int irq_get_effective_cpu(unsigned int irq)
> > +{
> > +     struct irq_data *data = irq_get_irq_data(irq);
>
> This can be NULL.
>
> > +     struct cpumask *m;
> > +
> > +     m = irq_data_get_effective_affinity_mask(data);
> > +     return cpumask_first(m);
> > +}
>
> Thanks,
>
>         tglx
>


-- 
Thanks
Nitesh


