LKML Archive on lore.kernel.org
From: Ferenc Wagner <wferi@niif.hu>
To: David Chinner <dgc@sgi.com>
Cc: linux-kernel@vger.kernel.org, wferi@niif.hu
Subject: Re: inode leak in 2.6.24?
Date: Sat, 01 Mar 2008 16:25:25 +0100	[thread overview]
Message-ID: <87wsommpp6.fsf@szonett.ki.iif.hu> (raw)
In-Reply-To: <20080220211552.GS155407@sgi.com> (David Chinner's message of "Thu, 21 Feb 2008 08:15:52 +1100")


David Chinner <dgc@sgi.com> writes:

> On Wed, Feb 20, 2008 at 03:36:53PM +0100, Ferenc Wagner wrote:
>
>>David Chinner <dgc@sgi.com> writes:
>>
>>> I guess the first thing to find out is whether memory pressure
>>> results in freeing the dentries. To simulate memory pressure causing
>>> slab cache reclaim, can you run:
>>>
>>> # echo 2 > /proc/sys/vm/drop_caches
>>>
>>> and see if the number of dentries and inodes drops. If the number
>>> goes down significantly, then we aren't leaking dentries and there's
>>> been a change in memory reclaim behaviour. If it stays the same, then
>>> we probably are leaking dentries....
>> 
>> Thanks for looking into this.  There's no real conclusion yet: the
>> simulated memory pressure sent the numbers down all right, but
>> meanwhile it turned out that this is a different case: on this machine
>> the increase wasn't constant growth, but was tied to the daily
>> updatedb job.  I'll reload the original kernel on the original
>> machine and collect the same info if the problem reappears.
>
> Ok, let me know how it goes when you get a chance.
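For reference, the check suggested above can be wrapped in a small script.  This is just a sketch: it must run as root, and it reads the standard procfs counters (/proc/sys/fs/dentry-state, whose first field is nr_dentry, and /proc/sys/fs/inode-nr, whose first field is nr_inodes, per proc(5)):

```shell
#!/bin/sh
# Snapshot dentry/inode counts, force slab cache reclaim, and compare.
before_d=$(awk '{print $1}' /proc/sys/fs/dentry-state)
before_i=$(awk '{print $1}' /proc/sys/fs/inode-nr)

# 2 = reclaim reclaimable slab objects, including dentries and inodes
echo 2 > /proc/sys/vm/drop_caches

after_d=$(awk '{print $1}' /proc/sys/fs/dentry-state)
after_i=$(awk '{print $1}' /proc/sys/fs/inode-nr)

echo "dentries: $before_d -> $after_d"
echo "inodes:   $before_i -> $after_i"
```

If the counts fall sharply, the dentries/inodes were merely cached and reclaimable; if they barely move, a real leak is more likely.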

So, the leak is ruled out now.  The machine has been running the
"leaky" kernel for a week; the inode usage grows, but simulated memory
pressure gets it back to normal (to 1k right now).  See the graph again:


[-- Attachment #2: open_inodes-month.png --]
[-- Type: image/png, Size: 31794 bytes --]



For orientation: the growth during week 7 alerted me, but switching to
2.6.24.2 helped.  Then on the last day of week 8 I rebooted into the
same kernel for further investigation and applied simulated memory
pressure three times, which produced the drops (down to about 1k each
time).  The increase now seems sublinear for some reason, though the
usage pattern of the machine is very much the same.

Week 05           : 2.6.23.14
Week 06 Feb.   -10:   -""-
Week 07 Feb. 11-17: 2.6.24 (HEAD: 25f666300625d894ebe04bac2b4b3aadb907c861)
Week 08 Feb. 18-23: 2.6.24.2
Week 09 Feb. 24-  : 2.6.24 (same as before)

So, it looks like there's no leak, but the reclaim behaviour has
changed significantly, exactly as you suspected.
-- 
Thanks for your time,
Feri.

Thread overview: 8+ messages
2008-02-15 23:18 Ferenc Wagner
2008-02-18 21:53 ` David Chinner
2008-02-19  0:50   ` Ferenc Wagner
2008-02-19 16:57   ` Ferenc Wagner
2008-02-20  1:04     ` David Chinner
2008-02-20 14:36       ` Ferenc Wagner
2008-02-20 21:15         ` David Chinner
2008-03-01 15:25           ` Ferenc Wagner [this message]
