LKML Archive on lore.kernel.org
* RE: [PATCH 09/14] x86_64 irq: Begin consolidating per_irq data in structures.
@ 2007-02-27 21:28 Lu, Yinghai
  2007-02-28  6:27 ` Eric W. Biederman
  0 siblings, 1 reply; 3+ messages in thread
From: Lu, Yinghai @ 2007-02-27 21:28 UTC (permalink / raw)
  To: ebiederm, Andrew Morton
  Cc: linux-kernel, Zwane Mwaikambo, Ashok Raj, Ingo Molnar,
	Natalie Protasevich, Andi Kleen, Siddha, Suresh B,
	Linus Torvalds


-----Original Message-----
From: ebiederm@xmission.com [mailto:ebiederm@xmission.com] 
Sent: Friday, February 23, 2007 3:33 AM

>+struct irq_cfg {
>+	cpumask_t domain;
>+	u8 vector;
>+};
>+
>+/* irq_cfg is indexed by the sum of all RTEs in all I/O APICs. */
>+struct irq_cfg irq_cfg[NR_IRQS] __read_mostly = {
>+	[0] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 0 },
>+	[1] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 1 },
>+	[2] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 2 },
>+	[3] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 3 },
>+	[4] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 4 },
>+	[5] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 5 },
>+	[6] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 6 },
>+	[7] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 7 },
>+	[8] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 8 },
>+	[9] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 9 },
>+	[10] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 10 },
>+	[11] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 11 },
>+	[12] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 12 },
>+	[13] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 13 },
>+	[14] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 14 },
>+	[15] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 15 },
>+};
>+
> static int assign_irq_vector(int irq, cpumask_t mask, cpumask_t *result);

Why not use IRQ0_VECTOR... IRQ15_VECTOR here?
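
For reference, this is roughly how the table would read with those macros
(a sketch only, assuming IRQ0_VECTOR..IRQ15_VECTOR expand to
FIRST_EXTERNAL_VECTOR + 0..15):

	struct irq_cfg irq_cfg[NR_IRQS] __read_mostly = {
		[0]  = { .domain = CPU_MASK_ALL, .vector = IRQ0_VECTOR  },
		[1]  = { .domain = CPU_MASK_ALL, .vector = IRQ1_VECTOR  },
		/* ... [2] through [14] follow the same pattern ... */
		[15] = { .domain = CPU_MASK_ALL, .vector = IRQ15_VECTOR },
	};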

YH




* Re: [PATCH 09/14] x86_64 irq: Begin consolidating per_irq data in structures.
  2007-02-27 21:28 [PATCH 09/14] x86_64 irq: Begin consolidating per_irq data in structures Lu, Yinghai
@ 2007-02-28  6:27 ` Eric W. Biederman
  0 siblings, 0 replies; 3+ messages in thread
From: Eric W. Biederman @ 2007-02-28  6:27 UTC (permalink / raw)
  To: Lu, Yinghai
  Cc: ebiederm, Andrew Morton, linux-kernel, Zwane Mwaikambo,
	Ashok Raj, Ingo Molnar, Natalie Protasevich, Andi Kleen, Siddha,
	Suresh B, Linus Torvalds

"Lu, Yinghai" <yinghai.lu@amd.com> writes:
>
> Why not use IRQ0_VECTOR... IRQ15_VECTOR here?

I do by the end of the patch series; it was a patch ordering issue.

Eric


* [PATCH 09/14] x86_64 irq: Begin consolidating per_irq data in structures.
  2007-02-23 11:26                                       ` [PATCH 08/14] x86_64 irq: Use NR_IRQS not NR_IRQ_VECTORS Eric W. Biederman
@ 2007-02-23 11:32                                         ` Eric W. Biederman
  0 siblings, 0 replies; 3+ messages in thread
From: Eric W. Biederman @ 2007-02-23 11:32 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, Zwane Mwaikambo, Ashok Raj, Ingo Molnar, Lu,
	Yinghai, Natalie Protasevich, Andi Kleen, Siddha, Suresh B,
	Linus Torvalds


Currently io_apic.c has several parallel arrays for different
kinds of data that can be known about an irq.  The parallel arrays
make the code harder to maintain and make it difficult to remove
the static limit on the number of irqs.

This patch pushes irq_domain and irq_vector into an irq_cfg array and
updates the code to use it.
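
In short, code that used to index the two parallel arrays now goes through
one structure; a condensed before/after sketch of the access pattern (the
actual conversions are in the diff below):

	/* before: two parallel per-irq arrays */
	vector = irq_vector[irq];
	cpus_and(mask, irq_domain[irq], cpu_online_map);

	/* after: one array of irq_cfg structures */
	struct irq_cfg *cfg = &irq_cfg[irq];
	vector = cfg->vector;
	cpus_and(mask, cfg->domain, cpu_online_map);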

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
---
 arch/x86_64/kernel/io_apic.c |  111 +++++++++++++++++++-----------------------
 1 files changed, 50 insertions(+), 61 deletions(-)

diff --git a/arch/x86_64/kernel/io_apic.c b/arch/x86_64/kernel/io_apic.c
index 65d7218..dd6580c 100644
--- a/arch/x86_64/kernel/io_apic.c
+++ b/arch/x86_64/kernel/io_apic.c
@@ -47,6 +47,31 @@
 #include <asm/msidef.h>
 #include <asm/hypertransport.h>
 
+struct irq_cfg {
+	cpumask_t domain;
+	u8 vector;
+};
+
+/* irq_cfg is indexed by the sum of all RTEs in all I/O APICs. */
+struct irq_cfg irq_cfg[NR_IRQS] __read_mostly = {
+	[0] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 0 },
+	[1] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 1 },
+	[2] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 2 },
+	[3] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 3 },
+	[4] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 4 },
+	[5] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 5 },
+	[6] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 6 },
+	[7] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 7 },
+	[8] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 8 },
+	[9] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 9 },
+	[10] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 10 },
+	[11] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 11 },
+	[12] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 12 },
+	[13] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 13 },
+	[14] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 14 },
+	[15] = { .domain = CPU_MASK_ALL, .vector = FIRST_EXTERNAL_VECTOR + 15 },
+};
+
 static int assign_irq_vector(int irq, cpumask_t mask, cpumask_t *result);
 
 #define __apicdebuginit  __init
@@ -613,46 +638,6 @@ static int pin_2_irq(int idx, int apic, int pin)
 	return irq;
 }
 
-
-/* irq_vectors is indexed by the sum of all RTEs in all I/O APICs. */
-static u8 irq_vector[NR_IRQS] __read_mostly = {
-	[0] = FIRST_EXTERNAL_VECTOR + 0,
-	[1] = FIRST_EXTERNAL_VECTOR + 1,
-	[2] = FIRST_EXTERNAL_VECTOR + 2,
-	[3] = FIRST_EXTERNAL_VECTOR + 3,
-	[4] = FIRST_EXTERNAL_VECTOR + 4,
-	[5] = FIRST_EXTERNAL_VECTOR + 5,
-	[6] = FIRST_EXTERNAL_VECTOR + 6,
-	[7] = FIRST_EXTERNAL_VECTOR + 7,
-	[8] = FIRST_EXTERNAL_VECTOR + 8,
-	[9] = FIRST_EXTERNAL_VECTOR + 9,
-	[10] = FIRST_EXTERNAL_VECTOR + 10,
-	[11] = FIRST_EXTERNAL_VECTOR + 11,
-	[12] = FIRST_EXTERNAL_VECTOR + 12,
-	[13] = FIRST_EXTERNAL_VECTOR + 13,
-	[14] = FIRST_EXTERNAL_VECTOR + 14,
-	[15] = FIRST_EXTERNAL_VECTOR + 15,
-};
-
-static cpumask_t irq_domain[NR_IRQS] __read_mostly = {
-	[0] = CPU_MASK_ALL,
-	[1] = CPU_MASK_ALL,
-	[2] = CPU_MASK_ALL,
-	[3] = CPU_MASK_ALL,
-	[4] = CPU_MASK_ALL,
-	[5] = CPU_MASK_ALL,
-	[6] = CPU_MASK_ALL,
-	[7] = CPU_MASK_ALL,
-	[8] = CPU_MASK_ALL,
-	[9] = CPU_MASK_ALL,
-	[10] = CPU_MASK_ALL,
-	[11] = CPU_MASK_ALL,
-	[12] = CPU_MASK_ALL,
-	[13] = CPU_MASK_ALL,
-	[14] = CPU_MASK_ALL,
-	[15] = CPU_MASK_ALL,
-};
-
 static int __assign_irq_vector(int irq, cpumask_t mask, cpumask_t *result)
 {
 	/*
@@ -670,19 +655,21 @@ static int __assign_irq_vector(int irq, cpumask_t mask, cpumask_t *result)
 	cpumask_t old_mask = CPU_MASK_NONE;
 	int old_vector = -1;
 	int cpu;
+	struct irq_cfg *cfg;
 
 	BUG_ON((unsigned)irq >= NR_IRQS);
+	cfg = &irq_cfg[irq];
 
 	/* Only try and allocate irqs on cpus that are present */
 	cpus_and(mask, mask, cpu_online_map);
 
-	if (irq_vector[irq] > 0)
-		old_vector = irq_vector[irq];
+	if (cfg->vector > 0)
+		old_vector = cfg->vector;
 	if (old_vector > 0) {
-		cpus_and(*result, irq_domain[irq], mask);
+		cpus_and(*result, cfg->domain, mask);
 		if (!cpus_empty(*result))
 			return old_vector;
-		cpus_and(old_mask, irq_domain[irq], cpu_online_map);
+		cpus_and(old_mask, cfg->domain, cpu_online_map);
 	}
 
 	for_each_cpu_mask(cpu, mask) {
@@ -716,8 +703,8 @@ next:
 			per_cpu(vector_irq, old_cpu)[old_vector] = -1;
 		for_each_cpu_mask(new_cpu, new_mask)
 			per_cpu(vector_irq, new_cpu)[vector] = irq;
-		irq_vector[irq] = vector;
-		irq_domain[irq] = domain;
+		cfg->vector = vector;
+		cfg->domain = domain;
 		cpus_and(*result, domain, mask);
 		return vector;
 	}
@@ -737,18 +724,21 @@ static int assign_irq_vector(int irq, cpumask_t mask, cpumask_t *result)
 
 static void __clear_irq_vector(int irq)
 {
+	struct irq_cfg *cfg;
 	cpumask_t mask;
 	int cpu, vector;
 
-	BUG_ON(!irq_vector[irq]);
+	BUG_ON((unsigned)irq >= NR_IRQS);
+	cfg = &irq_cfg[irq];
+	BUG_ON(!cfg->vector);
 
-	vector = irq_vector[irq];
-	cpus_and(mask, irq_domain[irq], cpu_online_map);
+	vector = cfg->vector;
+	cpus_and(mask, cfg->domain, cpu_online_map);
 	for_each_cpu_mask(cpu, mask)
 		per_cpu(vector_irq, cpu)[vector] = -1;
 
-	irq_vector[irq] = 0;
-	irq_domain[irq] = CPU_MASK_NONE;
+	cfg->vector = 0;
+	cfg->domain = CPU_MASK_NONE;
 }
 
 void __setup_vector_irq(int cpu)
@@ -759,9 +749,9 @@ void __setup_vector_irq(int cpu)
 
 	/* Mark the inuse vectors */
 	for (irq = 0; irq < NR_IRQS; ++irq) {
-		if (!cpu_isset(cpu, irq_domain[irq]))
+		if (!cpu_isset(cpu, irq_cfg[irq].domain))
 			continue;
-		vector = irq_vector[irq];
+		vector = irq_cfg[irq].vector;
 		per_cpu(vector_irq, cpu)[vector] = irq;
 	}
 	/* Mark the free vectors */
@@ -769,7 +759,7 @@ void __setup_vector_irq(int cpu)
 		irq = per_cpu(vector_irq, cpu)[vector];
 		if (irq < 0)
 			continue;
-		if (!cpu_isset(cpu, irq_domain[irq]))
+		if (!cpu_isset(cpu, irq_cfg[irq].domain))
 			per_cpu(vector_irq, cpu)[vector] = -1;
 	}
 }
@@ -1346,16 +1336,15 @@ static unsigned int startup_ioapic_irq(unsigned int irq)
 
 static int ioapic_retrigger_irq(unsigned int irq)
 {
+	struct irq_cfg *cfg = &irq_cfg[irq];
 	cpumask_t mask;
-	unsigned vector;
 	unsigned long flags;
 
 	spin_lock_irqsave(&vector_lock, flags);
-	vector = irq_vector[irq];
 	cpus_clear(mask);
-	cpu_set(first_cpu(irq_domain[irq]), mask);
+	cpu_set(first_cpu(cfg->domain), mask);
 
-	send_IPI_mask(mask, vector);
+	send_IPI_mask(mask, cfg->vector);
 	spin_unlock_irqrestore(&vector_lock, flags);
 
 	return 1;
@@ -1430,7 +1419,7 @@ static inline void init_IO_APIC_traps(void)
 	 */
 	for (irq = 0; irq < NR_IRQS ; irq++) {
 		int tmp = irq;
-		if (IO_APIC_IRQ(tmp) && !irq_vector[tmp]) {
+		if (IO_APIC_IRQ(tmp) && !irq_cfg[tmp].vector) {
 			/*
 			 * Hmm.. We don't have an entry for this,
 			 * so default to an old-fashioned 8259
@@ -1816,7 +1805,7 @@ int create_irq(void)
 	for (new = (NR_IRQS - 1); new >= 0; new--) {
 		if (platform_legacy_irq(new))
 			continue;
-		if (irq_vector[new] != 0)
+		if (irq_cfg[new].vector != 0)
 			continue;
 		vector = __assign_irq_vector(new, TARGET_CPUS, &mask);
 		if (likely(vector > 0))
@@ -2108,7 +2097,7 @@ void __init setup_ioapic_dest(void)
 			 * when you have too many devices, because at that time only boot
 			 * cpu is online.
 			 */
-			if(!irq_vector[irq])
+			if (!irq_cfg[irq].vector)
 				setup_IO_APIC_irq(ioapic, pin, irq,
 						  irq_trigger(irq_entry),
 						  irq_polarity(irq_entry));
-- 
1.5.0.g53756



Thread overview: 3+ messages
2007-02-27 21:28 [PATCH 09/14] x86_64 irq: Begin consolidating per_irq data in structures Lu, Yinghai
2007-02-28  6:27 ` Eric W. Biederman
     [not found] <200701221116.13154.luigi.genoni@pirelli.com>
2007-02-08 11:48 ` [PATCH 2/2] x86_64 irq: Handle irqs pending in IRR during irq migration Eric W. Biederman
2007-02-08 20:19   ` Eric W. Biederman
2007-02-09  6:40     ` Eric W. Biederman
2007-02-10 23:52       ` What are the real ioapic rte programming constraints? Eric W. Biederman
2007-02-11  5:57         ` Zwane Mwaikambo
2007-02-11 10:20           ` Eric W. Biederman
2007-02-11 16:16             ` Zwane Mwaikambo
2007-02-11 22:01               ` Eric W. Biederman
2007-02-12  1:05                 ` Zwane Mwaikambo
2007-02-12  4:51                   ` Eric W. Biederman
2007-02-23 10:51                     ` Conclusions from my investigation about ioapic programming Eric W. Biederman
2007-02-23 11:10                       ` [PATCH 0/14] x86_64 irq related fixes and cleanups Eric W. Biederman
2007-02-23 11:11                         ` [PATCH 01/14] x86_64 irq: Simplfy __assign_irq_vector Eric W. Biederman
2007-02-23 11:13                           ` [PATCH 02/14] irq: Remove set_native_irq_info Eric W. Biederman
2007-02-23 11:15                             ` [PATCH 03/14] x86_64 irq: Kill declaration of removed array, interrupt Eric W. Biederman
2007-02-23 11:16                               ` [PATCH 04/14] x86_64 irq: Remove the unused vector parameter from ioapic_register_intr Eric W. Biederman
2007-02-23 11:19                                 ` [PATCH 05/14] x86_64 irq: Refactor setup_IO_APIC_irq Eric W. Biederman
2007-02-23 11:20                                   ` [PATCH 06/14] x86_64 irq: Simplfiy the set_affinity logic Eric W. Biederman
2007-02-23 11:23                                     ` [PATCH 07/14] x86_64 irq: In __DO_ACTION perform the FINAL action for every entry Eric W. Biederman
2007-02-23 11:26                                       ` [PATCH 08/14] x86_64 irq: Use NR_IRQS not NR_IRQ_VECTORS Eric W. Biederman
2007-02-23 11:32                                         ` [PATCH 09/14] x86_64 irq: Begin consolidating per_irq data in structures Eric W. Biederman
