LKML Archive on lore.kernel.org
From: Yang Shi <yang.shi@linux.alibaba.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 1/8] mm: mmap: unmap large mapping by section
Date: Wed, 21 Mar 2018 15:40:09 -0700
Message-ID: <274f9d37-3dee-2bff-b1fd-1ca7fa41f1ca@linux.alibaba.com>
In-Reply-To: <20180321221502.GA3969@bombadil.infradead.org>



On 3/21/18 3:15 PM, Matthew Wilcox wrote:
> On Wed, Mar 21, 2018 at 02:45:44PM -0700, Yang Shi wrote:
>> On 3/21/18 10:29 AM, Matthew Wilcox wrote:
>>> On Wed, Mar 21, 2018 at 09:31:22AM -0700, Yang Shi wrote:
>>>> On 3/21/18 6:08 AM, Michal Hocko wrote:
>>>>> Yes, this definitely sucks. One way to work that around is to split the
>>>>> unmap to two phases. One to drop all the pages. That would only need
>>>>> mmap_sem for read and then tear down the mapping with the mmap_sem for
>>>>> write. This wouldn't help for parallel mmap_sem writers but those really
>>>>> need a different approach (e.g. the range locking).
>>>> A page fault might sneak in and map a page which has been unmapped
>>>> before?
>>>>
>>>> Range locking should help a lot for manipulating small sections of a
>>>> large mapping in parallel, or for multiple small mappings. It may not
>>>> achieve much for a single large mapping.
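
(To make the two-phase idea concrete: a minimal sketch, assuming the
usual mm/mmap.c helpers, with error handling and the VMA walk omitted;
this is not from any posted patch:)

	down_read(&mm->mmap_sem);
	zap_page_range(vma, start, size);	/* phase 1: drop the pages */
	up_read(&mm->mmap_sem);

	/*
	 * Race window: mmap_sem is free and the VMA is still on the
	 * tree, so a concurrent fault can map a page back in here.
	 */

	down_write(&mm->mmap_sem);
	detach_vmas_to_be_unmapped(mm, vma, prev, end);
	unmap_region(mm, vma, prev, start, end);  /* frees the page tables */
	remove_vma_list(mm, vma);		  /* phase 2: tear down */
	up_write(&mm->mmap_sem);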
>>> I don't think we need range locking.  What if we do munmap this way:
>>>
>>> Take the mmap_sem for write
>>> Find the VMA
>>>     If the VMA is large(*)
>>>       Mark the VMA as deleted
>>>       Drop the mmap_sem
>>>       zap all of the entries
>>>       Take the mmap_sem
>>>     Else
>>>       zap all of the entries
>>> Continue finding VMAs
>>> Drop the mmap_sem
>>>
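(Rendered as rough C against mm/mmap.c, that flow might look like the
sketch below; vma_is_huge() and the VM_DEAD bit are made-up names for
illustration, not something from the patch:)

	down_write(&mm->mmap_sem);
	vma = find_vma(mm, start);
	while (vma && vma->vm_start < end) {
		if (vma_is_huge(vma)) {			/* hypothetical size check */
			vma->vm_flags |= VM_DEAD;	/* hypothetical "deleted" mark */
			up_write(&mm->mmap_sem);
			/* the long-running zap happens without mmap_sem */
			zap_page_range(vma, vma->vm_start,
				       vma->vm_end - vma->vm_start);
			down_write(&mm->mmap_sem);
		} else {
			zap_page_range(vma, vma->vm_start,
				       vma->vm_end - vma->vm_start);
		}
		vma = vma->vm_next;	/* continue finding VMAs */
	}
	up_write(&mm->mmap_sem);
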
>>> Now we need to change everywhere which looks up a VMA to see if it needs
>>> to care that the VMA is deleted (page faults, e.g., will need to SIGBUS; mmap
>> Marking the VMA as deleted sounds good. The problem with my current
>> approach is that a concurrent page fault may succeed if it accesses a
>> not-yet-unmapped section. Marking the VMA as deleted would tell the page
>> fault handler that the VMA is no longer valid, so it can return SIGSEGV.
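>>
>> (i.e., roughly this check early in the fault path, again with a
>> hypothetical VM_DEAD bit:)
>>
>> 	vma = find_vma(mm, address);
>> 	if (vma && (vma->vm_flags & VM_DEAD))
>> 		return VM_FAULT_SIGSEGV;	/* VMA is being unmapped */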
>>
>>> does not care; munmap will need to wait for the existing munmap operation
>> Why doesn't mmap care? What about MAP_FIXED? It may fail unexpectedly, right?
> Oh, I forgot about MAP_FIXED.  Yes, MAP_FIXED should wait for the munmap
> to finish.  But a regular mmap can just pretend that it happened before
> the munmap call and avoid the deleted VMAs.

But my tests show a race condition for a size-reducing mmap, which also 
calls do_munmap(). That path may need to wait for the munmap to finish too.

So, in my patches, I simply make the do_munmap() called from mmap() hold 
mmap_sem the whole time.
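
(Concretely, the series threads an "atomic" flag through do_munmap();
the prototype would become something like the following -- the exact
type and name in the actual patches may differ:)

	int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
		      struct list_head *uf, bool atomic);

	/*
	 * atomic == true:  never drop mmap_sem in the middle; used by the
	 *                  do_munmap() calls made from mmap()/mremap()/etc.
	 * atomic == false: plain munmap() may drop and retake mmap_sem
	 *                  while zapping a large mapping.
	 */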

Thanks,
Yang


Thread overview: 41+ messages
2018-03-20 21:31 [RFC PATCH 0/8] Drop mmap_sem during unmapping large map Yang Shi
2018-03-20 21:31 ` [RFC PATCH 1/8] mm: mmap: unmap large mapping by section Yang Shi
2018-03-21 13:08   ` Michal Hocko
2018-03-21 16:31     ` Yang Shi
2018-03-21 17:29       ` Matthew Wilcox
2018-03-21 21:45         ` Yang Shi
2018-03-21 22:15           ` Matthew Wilcox
2018-03-21 22:40             ` Yang Shi [this message]
2018-03-21 22:46           ` Matthew Wilcox
2018-03-22 15:32             ` Laurent Dufour
2018-03-22 15:40               ` Matthew Wilcox
2018-03-22 15:54                 ` Laurent Dufour
2018-03-22 16:05                   ` Matthew Wilcox
2018-03-22 16:18                     ` Laurent Dufour
2018-03-22 16:46                       ` Yang Shi
2018-03-23 13:03                         ` Laurent Dufour
2018-03-22 16:51                       ` Matthew Wilcox
2018-03-22 16:49                     ` Yang Shi
2018-03-22 17:34         ` Yang Shi
2018-03-22 18:48           ` Matthew Wilcox
2018-03-24 18:24         ` Jerome Glisse
2018-03-21 13:14   ` Michal Hocko
2018-03-21 16:50     ` Yang Shi
2018-03-21 17:16       ` Yang Shi
2018-03-21 21:23         ` Michal Hocko
2018-03-21 22:36           ` Yang Shi
2018-03-22  9:10             ` Michal Hocko
2018-03-22 16:06               ` Yang Shi
2018-03-22 16:12                 ` Michal Hocko
2018-03-22 16:13                 ` Matthew Wilcox
2018-03-22 16:28                   ` Laurent Dufour
2018-03-22 16:36                     ` David Laight
2018-03-20 21:31 ` [RFC PATCH 2/8] mm: mmap: pass atomic parameter to do_munmap() call sites Yang Shi
2018-03-20 21:31 ` [RFC PATCH 3/8] mm: mremap: pass atomic parameter to do_munmap() Yang Shi
2018-03-20 21:31 ` [RFC PATCH 4/8] mm: nommu: add " Yang Shi
2018-03-20 21:31 ` [RFC PATCH 5/8] ipc: shm: pass " Yang Shi
2018-03-20 21:31 ` [RFC PATCH 6/8] fs: proc/vmcore: " Yang Shi
2018-03-20 21:31 ` [RFC PATCH 7/8] x86: mpx: " Yang Shi
2018-03-20 22:35   ` Thomas Gleixner
2018-03-21 16:53     ` Yang Shi
2018-03-20 21:31 ` [RFC PATCH 8/8] x86: vma: " Yang Shi
