LKML Archive on lore.kernel.org
* Questions about the SYSVIPC share memory on NOMMU uClinux architecture
@ 2007-03-02 10:33 Wu, Bryan
  2007-03-05  1:40 ` Questions about the SYSVIPC share memory on NOMMU uClinux architecture Wu, Bryan
  0 siblings, 1 reply; 2+ messages in thread
From: Wu, Bryan @ 2007-03-02 10:33 UTC (permalink / raw)
  To: linux-kernel

Hi folks,

Recently, I have been struggling with a bug involving shm->nattch. The
test case comes from the LTP kernel/syscall/ipc/shmctl/shmctl01.c code.
We ported it to the uClinux-blackfin platform.

The algorithm is very simple:
a) the parent process creates a shared memory segment
b) the parent vfork()s/execlp()s 4 child processes
c) each child calls shmat() to attach to the shared memory (shm->nattch
should be incremented), then the child calls pause()
d) the parent calls shmctl() to read back the segment's nattch; if
nattch != 4, the test case fails.

On our uClinux-blackfin platform, nattch = 1.

So I dug into the source code of ipc/shm.c, and I have some questions
about the code.

a)
In function do_shmat(), why is there a nattch-- after the earlier
nattch++, as follows:
================================================================================
        user_addr = (void *) do_mmap(file, addr, size, prot, flags, 0);

        /* note: no return or goto here -- falls straight through to "invalid:" */

invalid:
        up_write(&current->mm->mmap_sem);

        mutex_lock(&shm_ids(ns).mutex);
        shp = shm_lock(ns, shmid);
        BUG_ON(!shp);
        shp->shm_nattch--; /* Why??? */
        if(shp->shm_nattch == 0 &&
           shp->shm_perm.mode & SHM_DEST)
                shm_destroy(ns, shp);
        else
                shm_unlock(shp);
        mutex_unlock(&shm_ids(ns).mutex);

        *raddr = (unsigned long) user_addr;
        err = 0;
        if (IS_ERR(user_addr))
                err = PTR_ERR(user_addr);
out:
        return err;
================================================================================

b) do_mmap() -> do_mmap_pgoff() in mm/nommu.c
When a new vma structure is created, shm_open() and thus shm_inc() are
called, so nattch++ happens there as well.
So the nattch accounting looks inconsistent to me.

Please give me some hints about this. Note that this test case passes on
the x86 platform.

Thanks
-Bryan Wu


* Re: Questions about the SYSVIPC share memory on NOMMU uClinux architecture
  2007-03-02 10:33 Questions about the SYSVIPC share memory on NOMMU uClinux architecture Wu, Bryan
@ 2007-03-05  1:40 ` Wu, Bryan
  0 siblings, 0 replies; 2+ messages in thread
From: Wu, Bryan @ 2007-03-05  1:40 UTC (permalink / raw)
  To: bryan.wu, ebiederm; +Cc: linux-kernel

On Fri, 2007-03-02 at 05:33 -0500, Wu, Bryan wrote:
> Hi folks,
> 
> Recently, I have been struggling with a bug involving shm->nattch. The
> test case comes from the LTP kernel/syscall/ipc/shmctl/shmctl01.c code.
> We ported it to the uClinux-blackfin platform.
> 

Sorry for dropping the kernel version information: I found this on the
2.6.19 kernel and on the 2.6.20-mm2 kernel.

[...]

Is there any help available? 
Thanks a lot
-Bryan


end of thread

Thread overview: 2+ messages
2007-03-02 10:33 Questions about the SYSVIPC share memory on NOMMU uClinux architecture Wu, Bryan
2007-03-05  1:40 ` Questions about the SYSVIPC share memory on NOMMU uClinux architecture Wu, Bryan
