LKML Archive on lore.kernel.org
From: Maynard Johnson <maynardj@us.ibm.com>
To: michael@ellerman.id.au
Cc: Carl Love <cel@us.ibm.com>,
	linuxppc-dev@ozlabs.org, linux-kernel@vger.kernel.org,
	cbe-oss-dev@ozlabs.org
Subject: Re: [Cbe-oss-dev] [RFC, PATCH] CELL Oprofile SPU profiling updated patch
Date: Thu, 08 Feb 2007 09:03:59 -0600	[thread overview]
Message-ID: <45CB3BDF.4080408@us.ibm.com> (raw)
In-Reply-To: <1170888537.7831.14.camel@concordia.ozlabs.ibm.com>

Michael,
Thanks very much for the advice.  Both issues have been solved now, with 
your help.

-Maynard

Michael Ellerman wrote:

>On Wed, 2007-02-07 at 09:41 -0600, Maynard Johnson wrote:
>>Carl Love wrote:
>>
>>>Subject: Add support to OProfile for profiling Cell BE SPUs
>>>
>>>From: Maynard Johnson <maynardj@us.ibm.com>
>>>
>>>This patch updates the existing arch/powerpc/oprofile/op_model_cell.c
>>>to add in the SPU profiling capabilities.  In addition, a 'cell' subdirectory
>>>was added to arch/powerpc/oprofile to hold Cell-specific SPU profiling
>>>code.
>>>
>>>Signed-off-by: Carl Love <carll@us.ibm.com>
>>>Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
>>>
>>>Index: linux-2.6.20-rc1/arch/powerpc/configs/cell_defconfig
>>I've discovered more problems with the kref handling for the cached_info 
>>object that we store in the spu_context.  :-(
>>
>>When the OProfile module initially determines that no cached_info yet 
>>exists for a given spu_context, it creates the cached_info, inits the 
>>cached_info's kref (which increments the refcount) and does a kref_get 
>>(for SPUFS' ref) before passing the cached_info reference off to SPUFS 
>>to store into the spu_context.  When OProfile shuts down or the SPU job 
>>ends, OProfile gives up its ref to the cached_info with kref_put.  Then 
>>when SPUFS destroys the spu_context, it also gives up its ref.  However, 
>>if OProfile shuts down while the SPU job is still active _and_ is _then_ 
>>restarted while the job is still active, OProfile will find that the 
>>cached_info already exists for the given spu_context, so it won't go 
>>through the process of creating it and doing kref_init on the kref.  
>>Under this scenario, OProfile does not own a ref of its own to the 
>>cached_info, and should not be doing a kref_put when done using the 
>>cached_info -- but it does, and so does SPUFS when the spu_context is 
>>destroyed.  The end result (with the code as currently written) is that 
>>an extra kref_put is done when the refcount is already down to zero.  To 
>>fix this, OProfile needs to detect when it finds an existing cached_info 
>>already stored in the spu_context.  Then, instead of creating a new one, 
>>it sets a reminder flag to be used later when it's done using the cached 
>>info to indicate whether or not it needs to call kref_put.
>
>I think all you want to do is when oprofile finds the cached_info
>already existing, it does a kref_get(). After all it doesn't have a
>reference to it, so before it starts using it it must inc the ref count.
>
>>Unfortunately, there's another problem (one that should have been 
>>obvious to me).  The cached_info's kref "release" function is 
>>destroy_cached_info(), defined in the OProfile module.  If the OProfile 
>>module is unloaded when SPUFS destroys the spu_context and calls 
>>kref_put on the cached_info's kref -- KABOOM!  The destroy_cached_info 
>>function (the second arg to kref_put) is not in memory, so we get a 
>>paging fault.  I see a couple options to solve this:
>>    1) Don't store the cached_info in the spu_context.  Basically, go 
>>back to the simplistic model of creating/deleting the cached_info on 
>>every SPU task activation/deactivation.
>>    2) If there's some way to do this, force the OProfile module to 
>>stay loaded until SPUFS has destroyed its last spu_context that holds a 
>>cached_info object.
>
>There is a mechanism for that, you just have each cached_info inc the
>module's refcount.
>
>Another option would be to have a mapping, in the oprofile code, from
>spu_contexts to cached_infos, ie. a hash table or something.
>
>cheers



Thread overview: 37+ messages
2007-02-06  0:28 [RFC,PATCH] CELL PPU " Carl Love
2007-02-06  0:46 ` Paul Mackerras
2007-02-06  0:49 ` [Cbe-oss-dev] [RFC, PATCH] " Benjamin Herrenschmidt
2007-02-06 23:02 ` [Cbe-oss-dev] [RFC, PATCH] CELL " Carl Love
2007-02-07 15:41   ` Maynard Johnson
2007-02-07 22:48     ` Michael Ellerman
2007-02-08 15:03       ` Maynard Johnson [this message]
2007-02-08 14:18   ` Milton Miller
2007-02-08 17:21     ` Arnd Bergmann
2007-02-08 18:01       ` Adrian Reber
2007-02-08 22:51       ` Carl Love
2007-02-09  2:46         ` Milton Miller
2007-02-09 16:17           ` Carl Love
2007-02-11 22:46             ` Milton Miller
2007-02-12 16:38               ` Carl Love
2007-02-09 18:47       ` Milton Miller
2007-02-09 19:10         ` Arnd Bergmann
2007-02-09 19:46           ` Milton Miller
2007-02-08 23:59     ` Maynard Johnson
2007-02-09 18:03       ` Milton Miller
2007-02-14 23:52 Carl Love
2007-02-15 14:37 ` Arnd Bergmann
2007-02-15 16:15   ` Maynard Johnson
2007-02-15 18:13     ` Arnd Bergmann
2007-02-15 20:21   ` Carl Love
2007-02-15 21:03     ` Arnd Bergmann
2007-02-15 21:50     ` Paul E. McKenney
2007-02-16  0:33       ` Arnd Bergmann
2007-02-16  0:32   ` Maynard Johnson
2007-02-16 17:14     ` Arnd Bergmann
2007-02-16 21:43       ` Maynard Johnson
2007-02-18 23:18         ` Maynard Johnson
2007-02-22  0:02 Carl Love
2007-02-26 23:50 ` Arnd Bergmann
2007-02-27  1:31   ` Michael Ellerman
2007-02-27 16:52   ` Maynard Johnson
2007-02-28  1:44     ` Arnd Bergmann
