LKML Archive on lore.kernel.org
* Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
       [not found]       ` <47E6A5FD.6060407@cosmosbay.com>
@ 2008-03-31  9:48         ` Ingo Molnar
  2008-03-31 10:01           ` Eric Dumazet
  2008-03-31 10:08           ` David Miller
  0 siblings, 2 replies; 16+ messages in thread
From: Ingo Molnar @ 2008-03-31  9:48 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, linux-kernel, Peter Zijlstra


* Eric Dumazet <dada1@cosmosbay.com> wrote:

> I noticed some paths in the kernel are very stack aggressive, and on i386 
> with CONFIG_4KSTACKS we were really in dangerous territory, even without 
> my patch.
>
> What we call 4K stacks is in fact 4K - sizeof(struct task_struct), so 
> a little bit more than 2K. [...]

that's just wrong - 4K stacks on x86 are 4K-sizeof(thread_info) - the 
task struct is allocated elsewhere. The patch below runs just fine on 
4K-stack x86.
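
(For scale: a struct thread_info is only a few dozen bytes, whereas a 
task_struct is well over a kilobyte - which is where the roughly-2K 
estimate came from.)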

	Ingo

------------->
Subject: net: loopback speedup
From: Ingo Molnar <mingo@elte.hu>
Date: Mon Mar 31 11:23:21 CEST 2008

Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 drivers/net/loopback.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux/drivers/net/loopback.c
===================================================================
--- linux.orig/drivers/net/loopback.c
+++ linux/drivers/net/loopback.c
@@ -158,7 +158,7 @@ static int loopback_xmit(struct sk_buff 
 	lb_stats->bytes += skb->len;
 	lb_stats->packets++;
 
-	netif_rx(skb);
+	netif_receive_skb(skb);
 
 	return 0;
 }


* Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
  2008-03-31  9:48         ` [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx() Ingo Molnar
@ 2008-03-31 10:01           ` Eric Dumazet
  2008-03-31 10:12             ` Ingo Molnar
  2008-03-31 10:08           ` David Miller
  1 sibling, 1 reply; 16+ messages in thread
From: Eric Dumazet @ 2008-03-31 10:01 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: David Miller, netdev, linux-kernel, Peter Zijlstra

Ingo Molnar wrote:
> * Eric Dumazet <dada1@cosmosbay.com> wrote:
>
>   
>> I noticed some paths in the kernel are very stack aggressive, and on i386 
>> with CONFIG_4KSTACKS we were really in dangerous territory, even without 
>> my patch.
>>
>> What we call 4K stacks is in fact 4K - sizeof(struct task_struct), so 
>> a little bit more than 2K. [...]
>>     
>
> that's just wrong - 4K stacks on x86 are 4K-sizeof(thread_info) - the 
> task struct is allocated elsewhere. The patch below runs just fine on 
> 4K-stack x86.
>
>   
Yes, this error was corrected by Andi already :)

Thank you Ingo, but I already suggested this patch previously 
( http://marc.info/?l=linux-netdev&m=120361996713007&w=2 ) and it was 
rejected, since we can very easily consume all stack space, especially 
with 4K stacks (try with NFS mounts and XFS, for example).



The only safe way is to check the available free stack space, since 
loopback_xmit() can nest several times. In case of protocol errors (like 
in TCP, if we answered an ACK with another ACK, or with ICMP loops), we 
would exhaust the stack instead of deferring packets to the next softirq 
run.

The problem is checking the available space: it depends on whether the 
stack grows up or down, and on whether the caller is running on the 
process stack, the softirq stack, or even the hardirq stack.
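
For illustration only, a minimal sketch of the kind of check being 
discussed - it assumes a downward-growing stack aligned to THREAD_SIZE 
(the x86 process-stack case) and deliberately ignores the separate 
softirq/hardirq stacks; stack_bytes_free() is a hypothetical helper, not 
an existing kernel function:

#include <linux/thread_info.h>

/*
 * Sketch only: estimate the bytes still free on the current stack,
 * given its stack pointer.  thread_info sits at the bottom of the
 * THREAD_SIZE-aligned stack on x86, so everything between the stack
 * pointer and the end of thread_info is usable.
 */
static inline unsigned long stack_bytes_free(unsigned long sp)
{
	unsigned long bottom = sp & ~(THREAD_SIZE - 1);	/* stack base */

	return sp - bottom - sizeof(struct thread_info);
}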




> 	Ingo
>
> ------------->
> Subject: net: loopback speedup
> From: Ingo Molnar <mingo@elte.hu>
> Date: Mon Mar 31 11:23:21 CEST 2008
>
> Signed-off-by: Ingo Molnar <mingo@elte.hu>
> ---
>  drivers/net/loopback.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> Index: linux/drivers/net/loopback.c
> ===================================================================
> --- linux.orig/drivers/net/loopback.c
> +++ linux/drivers/net/loopback.c
> @@ -158,7 +158,7 @@ static int loopback_xmit(struct sk_buff 
>  	lb_stats->bytes += skb->len;
>  	lb_stats->packets++;
>  
> -	netif_rx(skb);
> +	netif_receive_skb(skb);
>  
>  	return 0;
>  }

* Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
  2008-03-31  9:48         ` [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx() Ingo Molnar
  2008-03-31 10:01           ` Eric Dumazet
@ 2008-03-31 10:08           ` David Miller
  2008-03-31 10:44             ` Ingo Molnar
  1 sibling, 1 reply; 16+ messages in thread
From: David Miller @ 2008-03-31 10:08 UTC (permalink / raw)
  To: mingo; +Cc: dada1, netdev, linux-kernel, a.p.zijlstra

From: Ingo Molnar <mingo@elte.hu>
Date: Mon, 31 Mar 2008 11:48:23 +0200

> 
> * Eric Dumazet <dada1@cosmosbay.com> wrote:
> 
> > I noticed some paths in the kernel are very stack aggressive, and on i386 
> > with CONFIG_4KSTACKS we were really in dangerous territory, even without 
> > my patch.
> >
> > What we call 4K stacks is in fact 4K - sizeof(struct task_struct), so 
> > a little bit more than 2K. [...]
> 
> that's just wrong - 4K stacks on x86 are 4K-sizeof(thread_info) - the 
> task struct is allocated elsewhere. The patch below runs just fine on 
> 4K-stack x86.

I don't think it's safe.

Every packet you receive can result in a sent packet, which
in turn can result in a full packet receive path being
taken, and yet again another sent packet.

And so on and so forth.

Some cases like this would be stack bugs, but wouldn't
you like that bug to be a very busy cpu instead of a
crash from overrunning the current stack?


* Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
  2008-03-31 10:01           ` Eric Dumazet
@ 2008-03-31 10:12             ` Ingo Molnar
  2008-04-01  9:19               ` Eric Dumazet
  0 siblings, 1 reply; 16+ messages in thread
From: Ingo Molnar @ 2008-03-31 10:12 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, netdev, linux-kernel, Peter Zijlstra


* Eric Dumazet <dada1@cosmosbay.com> wrote:

> Problem is to check available space :
>
> It depends on stack growing UP or DOWN, and depends on caller running 
> on process stack, or softirq stack, or even hardirq stack.

ok - i wish such threads were on lkml so that everyone, not just the 
netdev cabal, can read it. It's quite ugly, but if we want to check stack 
free space i'd suggest you put a stack_can_recurse() call into 
arch/x86/kernel/process.c and offer a default __weak implementation in 
kernel/fork.c that always returns 0.

the rule on x86 should be something like this: on 4K stacks and 64-bit 
[which have irqstacks] free stack space can go as low as 25%. On 8K 
stacks [which don't have irqstacks but nest irqs] it should not go 
below 50% before falling back to the explicitly queued packet branch.

this way other pieces of kernel code can choose between on-stack 
fast recursion and explicit iterators. Although i'm not sure i like the 
whole concept to begin with ...

	Ingo


* Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
  2008-03-31 10:08           ` David Miller
@ 2008-03-31 10:44             ` Ingo Molnar
  2008-03-31 11:02               ` David Miller
  0 siblings, 1 reply; 16+ messages in thread
From: Ingo Molnar @ 2008-03-31 10:44 UTC (permalink / raw)
  To: David Miller; +Cc: dada1, netdev, linux-kernel, a.p.zijlstra


* David Miller <davem@davemloft.net> wrote:

> I don't think it's safe.
> 
> Every packet you receive can result in a sent packet, which in turn 
> can result in a full packet receive path being taken, and yet again 
> another sent packet.
> 
> And so on and so forth.
> 
> Some cases like this would be stack bugs, but wouldn't you like that 
> bug to be a very busy cpu instead of a crash from overrunning the 
> current stack?

sure.

But the core problem remains: our loopback networking scalability is 
poor. For plain localhost<->localhost connected sockets we hit the 
loopback device lock for every packet, and this very much shows up on 
real workloads on a quad-core machine already: the lock instruction in 
netif_rx is the most expensive instruction in a sysbench DB workload.

and it's not just about scalability, the plain algorithmic overhead is 
way too high as well:

 $ taskset 1 ./bw_tcp -s
 $ taskset 1 ./bw_tcp localhost
 Socket bandwidth using localhost: 2607.09 MB/sec
 $ taskset 1 ./bw_pipe
 Pipe bandwidth: 3680.44 MB/sec

i don't think this is acceptable. Either we should fix loopback TCP 
performance or we should transparently switch to VFS pipes as a 
transport method when an app establishes a plain loopback connection (as 
long as there are no frills like a content-modifying component in the 
delivery path of packets after a connection has been established - which 
covers 99.9% of the real-life loopback cases).

I'm not suggesting we shouldn't use TCP for connection establishment - 
but if the TCP loopback packet transport is too slow we should use the 
VFS transport, which is more scalable, less cache-intensive and has 
lower straight-line overhead as well.

	Ingo


* Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
  2008-03-31 10:44             ` Ingo Molnar
@ 2008-03-31 11:02               ` David Miller
  2008-03-31 11:36                 ` poor network loopback performance and scalability (was: Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()) Ingo Molnar
  0 siblings, 1 reply; 16+ messages in thread
From: David Miller @ 2008-03-31 11:02 UTC (permalink / raw)
  To: mingo; +Cc: dada1, netdev, linux-kernel, a.p.zijlstra

From: Ingo Molnar <mingo@elte.hu>
Date: Mon, 31 Mar 2008 12:44:03 +0200

> and it's not just about scalability, the plain algorithmic overhead is 
> way too high as well:
> 
>  $ taskset 1 ./bw_tcp -s
>  $ taskset 1 ./bw_tcp localhost
>  Socket bandwidth using localhost: 2607.09 MB/sec
>  $ taskset 1 ./bw_pipe
>  Pipe bandwidth: 3680.44 MB/sec

Set your loopback MTU to some larger value if this result and
the locking overhead upset you.

Also, woe be to the application that wants fast local interprocess
communication and doesn't use IPC_SHM, MAP_SHARED, pipes, or AF_UNIX
sockets.  (there's not just one better facility, there are _four_!)

From this perspective, people way-overemphasize loopback performance,
and 999 times out of 1000 they prove their points using synthetic
benchmarks.

And don't give me this garbage about the application wanting to be
generic and therefore use IP sockets for everything.  Either they want
to be generic, or they want the absolute best performance.  Trying
to get an "or" and have both at the same time will result in
ludicrous hacks ending up in the kernel.


* poor network loopback performance and scalability (was: Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx())
  2008-03-31 11:02               ` David Miller
@ 2008-03-31 11:36                 ` Ingo Molnar
  2008-04-21  3:24                   ` Herbert Xu
  0 siblings, 1 reply; 16+ messages in thread
From: Ingo Molnar @ 2008-03-31 11:36 UTC (permalink / raw)
  To: David Miller; +Cc: dada1, netdev, linux-kernel, a.p.zijlstra


* David Miller <davem@davemloft.net> wrote:

> From: Ingo Molnar <mingo@elte.hu>
> Date: Mon, 31 Mar 2008 12:44:03 +0200
> 
> > and it's not just about scalability, the plain algorithmic overhead is 
> > way too high as well:
> > 
> >  $ taskset 1 ./bw_tcp -s
> >  $ taskset 1 ./bw_tcp localhost
> >  Socket bandwidth using localhost: 2607.09 MB/sec
> >  $ taskset 1 ./bw_pipe
> >  Pipe bandwidth: 3680.44 MB/sec
> 
> Set your loopback MTU to some larger value if this result and the 
> locking overhead upset you.

yes, of course it "upsets me" - it shows up in macrobenchmarks as well 
(not just lmbench) - wouldn't (and shouldn't) that upset you?

And even with a ridiculously high MTU of 1048576 there's only a 13% 
improvement:

   # ifconfig lo mtu 1048576
   # taskset 1 ./bw_tcp -s
   # taskset 1 ./bw_tcp localhost
   Socket bandwidth using localhost: 2951.51 MB/sec

pipes are still another ~25% faster:

   # taskset 1 ./bw_pipe
   Pipe bandwidth: 3657.40 MB/sec

> > i don't think this is acceptable. Either we should fix loopback TCP 
> > performance or we should transparently switch to VFS pipes as a 
> > transport method when an app establishes a plain loopback connection 
> > (as long as there are no frills like a content-modifying component in 
> > the delivery path of packets after a connection has been established 
> > - which covers 99.9% of the real-life loopback cases).
> >
> > I'm not suggesting we shouldn't use TCP for connection establishment - 
> > but if the TCP loopback packet transport is too slow we should use 
> > the VFS transport, which is more scalable, less cache-intensive 
> > and has lower straight-line overhead as well.
[...]

> Also, woe be to the application that wants fast local interprocess 
> communication and doesn't use IPC_SHM, MAP_SHARED, pipes, or AF_UNIX 
> sockets.  (there's not just one better facility, there are _four_!)
> 
> From this perspective, people way-overemphasize loopback performance, 
> and 999 times out of 1000 they prove their points using synthetic 
> benchmarks.
> 
> And don't give me this garbage about the application wanting to be 
> generic and therefore use IP sockets for everything.  Either they want 
> to be generic, or they want the absolute best performance.  Trying to 
> get an "or" and have both at the same time will result in ludicrous 
> hacks ending up in the kernel.

i talked about the localhost data transport only (in the portion you 
dropped from your quotes), not about the connection API or the overall 
management of such sockets. There's absolutely no good technical reason 
i can see why plain loopback sockets should be forced to go over a 
global lock, or why apps should be forced to change to another API when 
the real problem is that kernel developers are too lazy or incompetent 
to fix their code.

And i'm still trying to establish whether we have common ground for 
discussion: do you accept my numbers that TCP loopback transport 
performs badly when compared to pipes? (i think you accepted that 
implicitly, but i don't want to put words in your mouth.)

Having agreed on that, do you share my view that it should be and could 
be fixed? Or do you claim that it cannot be fixed and won't ever be 
fixed?

	Ingo


* Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
  2008-03-31 10:12             ` Ingo Molnar
@ 2008-04-01  9:19               ` Eric Dumazet
  2008-04-03 14:06                 ` Pavel Machek
  0 siblings, 1 reply; 16+ messages in thread
From: Eric Dumazet @ 2008-04-01  9:19 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: David Miller, netdev, linux-kernel, Peter Zijlstra

[-- Attachment #1: Type: text/plain, Size: 2186 bytes --]

Ingo Molnar wrote:
> * Eric Dumazet <dada1@cosmosbay.com> wrote:
>
>   
>> Problem is to check available space :
>>
>> It depends on stack growing UP or DOWN, and depends on caller running 
>> on process stack, or softirq stack, or even hardirq stack.
>>     
>
> ok - i wish such threads were on lkml so that everyone, not just the 
> netdev cabal, can read it. It's quite ugly, but if we want to check stack 
> free space i'd suggest you put a stack_can_recurse() call into 
> arch/x86/kernel/process.c and offer a default __weak implementation in 
> kernel/fork.c that always returns 0.
>
> the rule on x86 should be something like this: on 4K stacks and 64-bit 
> [which have irqstacks] free stack space can go as low as 25%. On 8K 
> stacks [which don't have irqstacks but nest irqs] it should not go 
> below 50% before falling back to the explicitly queued packet branch.
>
> this way other pieces of kernel code can choose between on-stack 
> fast recursion and explicit iterators. Although i'm not sure i like the 
> whole concept to begin with ...
>
>   

Hi Ingo

I took the time to prepare a patch to implement  
arch_stack_can_recurse() as you suggested.

Thank you

[PATCH] x86 : arch_stack_can_recurse() introduction

Some paths in the kernel would like to choose between on-stack fast 
recursion and explicit iterators.

One identified spot is the net loopback driver, where we can avoid 
netif_rx() and its slowdown if sufficient stack space is available.

We introduce a generic arch_stack_can_recurse(), which defaults to a 
weak function returning 0.

On x86, we implement the following logic:

   32 bits and 4K stacks (separate irq stacks) : can use up to 25% of stack
   64 bits, 8K stacks (separate irq stacks)    : can use up to 25% of stack
   32 bits and 8K stacks (no irq stacks)       : can use up to 50% of stack

Example of use in drivers/net/loopback.c, function loopback_xmit():

if (arch_stack_can_recurse())
    netif_receive_skb(skb); /* immediate delivery to stack */
else
    netif_rx(skb); /* defer to softirq handling */

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>



[-- Attachment #2: can_recurse.patch --]
[-- Type: text/plain, Size: 2282 bytes --]

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 0e613e7..6edc1d3 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -43,3 +43,25 @@ void arch_task_cache_init(void)
 				  __alignof__(union thread_xstate),
 				  SLAB_PANIC, NULL);
 }
+
+
+/*
+ * Used to check if we can recurse without risking stack overflow
+ * Rules are :
+ *   32 bits and 4K stacks (separate irq stacks) : can use up to 25% of stack
+ *   64 bits, 8K stacks (separate irq stacks)    : can use up to 25% of stack
+ *   32 bits and 8K stacks (no irq stacks)       : can use up to 50% of stack
+ */
+#if defined(CONFIG_4KSTACKS) || defined(CONFIG_X86_64)
+# define STACK_RECURSE_LIMIT (THREAD_SIZE/4)
+#else
+# define STACK_RECURSE_LIMIT (THREAD_SIZE/2)
+#endif
+
+int arch_stack_can_recurse(void)
+{
+	unsigned long offset_stack = current_stack_pointer & (THREAD_SIZE - 1);
+	unsigned long avail_stack = offset_stack - sizeof(struct thread_info);
+
+	return avail_stack >= STACK_RECURSE_LIMIT;
+}
diff --git a/include/asm-x86/thread_info_64.h b/include/asm-x86/thread_info_64.h
index f23fefc..9a913c4 100644
--- a/include/asm-x86/thread_info_64.h
+++ b/include/asm-x86/thread_info_64.h
@@ -60,6 +60,9 @@ struct thread_info {
 #define init_thread_info	(init_thread_union.thread_info)
 #define init_stack		(init_thread_union.stack)
 
+/* how to get the current stack pointer from C */
+register unsigned long current_stack_pointer asm("rsp") __used;
+
 static inline struct thread_info *current_thread_info(void)
 {
 	struct thread_info *ti;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ca720f0..445b8da 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1600,6 +1600,8 @@ union thread_union {
 	unsigned long stack[THREAD_SIZE/sizeof(long)];
 };
 
+extern int arch_stack_can_recurse(void);
+
 #ifndef __HAVE_ARCH_KSTACK_END
 static inline int kstack_end(void *addr)
 {
diff --git a/kernel/fork.c b/kernel/fork.c
index a19df75..cd5d1e1 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -136,6 +136,11 @@ void __attribute__((weak)) arch_task_cache_init(void)
 {
 }
 
+int __attribute__((weak)) arch_stack_can_recurse(void)
+{
+	return 0;
+}
+
 void __init fork_init(unsigned long mempages)
 {
 #ifndef __HAVE_ARCH_TASK_STRUCT_ALLOCATOR


* Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
  2008-04-01  9:19               ` Eric Dumazet
@ 2008-04-03 14:06                 ` Pavel Machek
  2008-04-03 16:19                   ` Eric Dumazet
  0 siblings, 1 reply; 16+ messages in thread
From: Pavel Machek @ 2008-04-03 14:06 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Ingo Molnar, David Miller, netdev, linux-kernel, Peter Zijlstra

Hi!

> >the rule on x86 should be something like this: on 4K 
> >stacks and 64-bit [which have irqstacks] free stack 
> >space can go as low as 25%. On 8K stacks [which don't 
...
>   32 bits and 4K stacks (separate irq stacks) : can use 
>   up to 25% of stack

I think Ingo meant 'up to 75% used'.


-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
  2008-04-03 14:06                 ` Pavel Machek
@ 2008-04-03 16:19                   ` Eric Dumazet
  0 siblings, 0 replies; 16+ messages in thread
From: Eric Dumazet @ 2008-04-03 16:19 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Ingo Molnar, David Miller, netdev, linux-kernel, Peter Zijlstra

Pavel Machek wrote:
> Hi!
>
>   
>>> the rule on x86 should be something like this: on 4K 
>>> stacks and 64-bit [which have irqstacks] free stack 
>>> space can go as low as 25%. On 8K stacks [which don't 
>>>       
> ...
>   
>>   32 bits and 4K stacks (separate irq stacks) : can use 
>>   up to 25% of stack
>>     
>
> I think ingo meant 'up to 75% used'.
>
>
>   
The patch is OK, my English might be a little bit unusual :)






* Re: poor network loopback performance and scalability (was: Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx())
  2008-03-31 11:36                 ` poor network loopback performance and scalability (was: Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()) Ingo Molnar
@ 2008-04-21  3:24                   ` Herbert Xu
  2008-04-21  3:38                     ` poor network loopback performance and scalability David Miller
  0 siblings, 1 reply; 16+ messages in thread
From: Herbert Xu @ 2008-04-21  3:24 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: davem, dada1, netdev, linux-kernel, a.p.zijlstra

Ingo Molnar <mingo@elte.hu> wrote:
> 
>   # ifconfig lo mtu 1048576

Chiming in late here, but 1048576 can't possibly work with IP,
which uses a 16-bit quantity as the length header.  In fact a
quick test seems to indicate that a 1048576 mtu doesn't generate
anything bigger than the default 16K mtu.
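
(For reference: the IPv4 total-length field is 16 bits, so the largest
possible IP packet, headers included, is 2^16 - 1 = 65535 bytes - far
below an MTU of 1048576.)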

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: poor network loopback performance and scalability
  2008-04-21  3:24                   ` Herbert Xu
@ 2008-04-21  3:38                     ` David Miller
  2008-04-21  8:11                       ` Ingo Molnar
  0 siblings, 1 reply; 16+ messages in thread
From: David Miller @ 2008-04-21  3:38 UTC (permalink / raw)
  To: herbert; +Cc: mingo, dada1, netdev, linux-kernel, a.p.zijlstra

From: Herbert Xu <herbert@gondor.apana.org.au>
Date: Mon, 21 Apr 2008 11:24:04 +0800

> Ingo Molnar <mingo@elte.hu> wrote:
> > 
> >   # ifconfig lo mtu 1048576
> 
> Chiming in late here, but 1048576 can't possibly work with IP,
> which uses a 16-bit quantity as the length header.  In fact a
> quick test seems to indicate that a 1048576 mtu doesn't generate
> anything bigger than the default 16K mtu.

Right.

To move things forward, we should look into doing something
similar to what Al Viro suggested, which would be to return
an SKB pointer from the transmit path and call back into
netif_receive_skb() using that.


* Re: poor network loopback performance and scalability
  2008-04-21  3:38                     ` poor network loopback performance and scalability David Miller
@ 2008-04-21  8:11                       ` Ingo Molnar
  2008-04-21  8:16                         ` David Miller
  0 siblings, 1 reply; 16+ messages in thread
From: Ingo Molnar @ 2008-04-21  8:11 UTC (permalink / raw)
  To: David Miller; +Cc: herbert, dada1, netdev, linux-kernel, a.p.zijlstra


* David Miller <davem@davemloft.net> wrote:

> From: Herbert Xu <herbert@gondor.apana.org.au>
> Date: Mon, 21 Apr 2008 11:24:04 +0800
> 
> > Ingo Molnar <mingo@elte.hu> wrote:
> > > 
> > >   # ifconfig lo mtu 1048576
> > 
> > Chiming in late here, but 1048576 can't possibly work with IP, which 
> > uses a 16-bit quantity as the length header.  In fact a quick test 
> > seems to indicate that a 1048576 mtu doesn't generate anything 
> > bigger than the default 16K mtu.
> 
> Right.
> 
> To move things forward, we should look into doing something similar to 
> what Al Viro suggested, which would be to return an SKB pointer from 
> the transmit path and call back into netif_receive_skb() using that.

yep, basically the sk_peer trick that AF_UNIX is already using.

it just seems rather more tricky in the 'real skb' localhost case 
because there's no real established trust path over which we can pass 
this coupling of the two sockets. Netfilter might affect it and deny a 
localhost connection. Lifetime rules seem rather tricky as well: either 
end of the localhost connection can go away independently, so a refcount 
on the socket has to be kept. skb->sk might be something to use, but it 
looks like a dangerous complication and it would burden the fastpath 
with an extra sk reference inc/dec.

... so i'm not implying that any of this is an easy topic to solve (to 
me at least :). But the fact is that database connections over localhost 
are very common in web apps, and they are very convenient as well. I use 
them myself - AF_UNIX transport is often non-existent in apps and 
libraries, or is just an afterthought with limitations - apps tend to 
gravitate towards a single API. So i don't think "use AF_UNIX" is an 
acceptable answer in this case. I believe we should try to make the 
localhost transport comparably fast to AF_UNIX.

	Ingo


* Re: poor network loopback performance and scalability
  2008-04-21  8:11                       ` Ingo Molnar
@ 2008-04-21  8:16                         ` David Miller
  2008-04-21 10:19                           ` Herbert Xu
  0 siblings, 1 reply; 16+ messages in thread
From: David Miller @ 2008-04-21  8:16 UTC (permalink / raw)
  To: mingo; +Cc: herbert, dada1, netdev, linux-kernel, a.p.zijlstra

From: Ingo Molnar <mingo@elte.hu>
Date: Mon, 21 Apr 2008 10:11:03 +0200

> 
> * David Miller <davem@davemloft.net> wrote:
> 
> > To move things forward, we should look into doing something similar to 
> > what Al Viro suggested, which would be to return an SKB pointer from 
> > the transmit path and call back into netif_receive_skb() using that.
> 
> yep, basically the sk_peer trick that AF_UNIX is already using.

Please read again, that isn't the suggestion being discussed.

What's being discussed is having the top of the transmit call path
get a socket "buffer" pointer that it can feed back into the
packet input path directly.  Loopback would return buffer pointers
from ->hard_start_xmit() instead of passing them to netif_rx().  The
top of the transmit call path, upon getting a non-NULL buffer
returned, would pass it to netif_receive_skb().
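
A rough, purely illustrative sketch of that shape - the function names 
and the way the caller obtains the returned skb are assumptions, not an 
existing API:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical: loopback's xmit hands the skb back instead of
 * queueing it via netif_rx(). */
static struct sk_buff *loopback_xmit_peek(struct sk_buff *skb,
					  struct net_device *dev)
{
	/* ... update loopback stats as today ... */
	return skb;	/* non-NULL: the caller should deliver it */
}

/* Hypothetical top of the transmit call path. */
static void dev_loopback_deliver(struct sk_buff *skb,
				 struct net_device *dev)
{
	struct sk_buff *rx = loopback_xmit_peek(skb, dev);

	if (rx)
		netif_receive_skb(rx);	/* direct delivery, no softirq hop */
}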

We're not talking about sockets, although that is another idea (which
I'm working on a patch for, and I have a mechanism for what you refer
to as "path validation").


* Re: poor network loopback performance and scalability
  2008-04-21  8:16                         ` David Miller
@ 2008-04-21 10:19                           ` Herbert Xu
  2008-04-21 10:22                             ` David Miller
  0 siblings, 1 reply; 16+ messages in thread
From: Herbert Xu @ 2008-04-21 10:19 UTC (permalink / raw)
  To: David Miller; +Cc: mingo, dada1, netdev, linux-kernel, a.p.zijlstra

On Mon, Apr 21, 2008 at 01:16:23AM -0700, David Miller wrote:
>
> What's being discussed is having the top of the transmit call path
> get a socket "buffer" pointer that it can feed back into the
> packet input path directly.  Loopback would return buffer pointers
> from ->hard_start_xmit() instead of passing them to netif_rx().  The
> top of the transmit call path, upon getting a non-NULL buffer
> returned, would pass it to netif_receive_skb().

Yes, this will definitely reduce the per-packet cost.  The other
low-hanging fruit is to raise the loopback MTU to just below 64K.
I believe the current value is a legacy from the days when we didn't
support skb page frags, so everything had to be physically contiguous.

Longer term we could look at generating packets > 64K on lo, for
IPv6 anyway.
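
(Illustrative only: bumping the loopback MTU towards the 16-bit IP limit 
is a one-liner, e.g.

   # ifconfig lo mtu 65535

and going past 64K would presumably mean IPv6 jumbograms, RFC 2675.)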

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: poor network loopback performance and scalability
  2008-04-21 10:19                           ` Herbert Xu
@ 2008-04-21 10:22                             ` David Miller
  0 siblings, 0 replies; 16+ messages in thread
From: David Miller @ 2008-04-21 10:22 UTC (permalink / raw)
  To: herbert; +Cc: mingo, dada1, netdev, linux-kernel, a.p.zijlstra

From: Herbert Xu <herbert@gondor.apana.org.au>
Date: Mon, 21 Apr 2008 18:19:08 +0800

> I believe the current value is a legacy from the days when we didn't
> support skb page frags, so everything had to be physically contiguous.

It's a legacy from when my top-of-the-line UltraSPARC-I 130MHz CPUs
got the best loopback results using that value :-)


