LKML Archive
From: Eric Dumazet <>
To: PK <>
Cc: netdev <>
Subject: Re: Problems with /proc/net/tcp6  - possible bug - ipv6
Date: Sat, 22 Jan 2011 09:59:41 +0100	[thread overview]
Message-ID: <1295686781.2609.37.camel@edumazet-laptop> (raw)
In-Reply-To: <>

On Friday, 21 January 2011 at 22:30 -0800, PK wrote:
> Creating many IPv6 connections hits a ceiling on connections/fds; okay, fine.
> But in my case I'm seeing millions of entries spring up within a few seconds
> and then vanish within a few minutes in /proc/net/tcp6 (vanish due to
> garbage collection?).
>
> Furthermore, I can trigger this easily on vanilla kernels from 2.6.36 to
> 2.6.38-rc1-next-20110121 inside an Ubuntu 10.10 amd64 VM, causing the kernel
> to spew warnings. There is also some corruption in the logs (see
> kernel-sample.log line 296), but that may be unrelated.
>
> More explanation, the kernel config of the primary machine I saw this on,
> and a sample Ruby script to reproduce (inside the Ubuntu VMs I apt-get and
> use ruby-1.9.1) are located at
>
> - Seems to only affect 64-bit. So far I have not been able to reproduce on
>   32-bit Ubuntu VMs of any kernel version.
> - Seems to only affect IPv6. So far I have not been able to reproduce using
>   IPv4 connections (and watching /proc/net/tcp, of course).
> - Does not trigger the bug if the connections are made to ::1. Only
>   externally routable local and global IPv6 addresses seem to cause
>   problems.
> - Seems to have been introduced between 2.6.35 and 2.6.36 (see the README on
>   github for more kernels I've tried).
> - All the tested Ubuntu VMs are stock 10.10 userland with vanilla kernels
>   (the latest Ubuntu kernel is 2.6.35-something, and my initial test didn't
>   show it suffering from this problem).
> - Originally noticed on a separate Gentoo 64-bit non-VM system when doing
>   web benchmarking.
>
> I'm not subscribed, so please keep me in cc, although I'll try to follow
> the thread.

Hi PK (Sorry, your real name is hidden)

I could not reproduce this on current linux-2.6 kernel.

How many vCPUs does your VM have, and how much memory?

Note: a recent commit fixed /proc/net/tcp[6] behavior:

commit 1bde5ac49398a064c753bb490535cfad89e99a5f
Author: Eric Dumazet <>
Date:   Thu Dec 23 09:32:46 2010 -0800

    tcp: fix listening_get_next()

    Alexey Vlasov found /proc/net/tcp could sometime loop and display
    millions of sockets in LISTEN state.

    In 2.6.29, when we converted TCP hash tables to RCU, we left two
    sk_next() calls in listening_get_next().

    We must instead use sk_nulls_next() to properly detect an end of chain.

    Reported-by: Alexey Vlasov <>
    Signed-off-by: Eric Dumazet <>
    Signed-off-by: David S. Miller <>


Thread overview: 11+ messages
2011-01-22  6:30 PK
2011-01-22  8:59 ` Eric Dumazet [this message]
2011-01-22 15:15   ` Eric Dumazet
2011-01-22 19:42     ` PK
2011-01-22 21:20       ` Eric Dumazet
2011-01-22 21:40         ` Eric Dumazet
2011-01-24 22:31           ` David Miller
2011-01-24 22:40           ` David Miller
2011-01-25  0:02       ` David Miller
2011-01-24 22:42     ` David Miller
2011-01-31 22:51 PK
