LKML Archive on lore.kernel.org
* [PATCH 1/2] VM: invalidate_inode_pages2_range() should not exit early
From: Trond Myklebust @ 2007-02-13 7:43 UTC
To: akpm; +Cc: linux-kernel
From: Trond Myklebust <Trond.Myklebust@netapp.com>
Fix invalidate_inode_pages2_range() so that it does not immediately exit
just because a single page in the specified range could not be removed.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
---
mm/truncate.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/truncate.c b/mm/truncate.c
index ebf3fcb..0f4b6d1 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -375,10 +375,10 @@ int invalidate_inode_pages2_range(struct
pagevec_init(&pvec, 0);
next = start;
- while (next <= end && !ret && !wrapped &&
+ while (next <= end && !wrapped &&
pagevec_lookup(&pvec, mapping, next,
min(end - next, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
- for (i = 0; !ret && i < pagevec_count(&pvec); i++) {
+ for (i = 0; i < pagevec_count(&pvec); i++) {
struct page *page = pvec.pages[i];
pgoff_t page_index;
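For illustration, a small self-contained userspace sketch of the control-flow
change (this is not the kernel function; try_invalidate(), the counter and the
failing page index are all made up): with the !ret tests gone, the scan covers
the whole range instead of bailing out of both loops at the first page that
cannot be dropped.

    #include <stdio.h>

    static int dropped;     /* pages successfully invalidated in one scan */

    /* Illustrative stand-in for invalidating one page: pretend page 3 is
     * pinned and cannot be removed. */
    static int try_invalidate(int index)
    {
        if (index == 3)
            return -1;
        dropped++;
        return 0;
    }

    /* Old behaviour: stop at the first failure; later pages are never visited. */
    static int scan_stop_early(int nr)
    {
        int i, ret = 0;

        for (i = 0; i < nr && !ret; i++)
            ret = try_invalidate(i);
        return ret;
    }

    /* New behaviour: visit every page in the range; keep an error for the caller. */
    static int scan_whole_range(int nr)
    {
        int i, ret = 0;

        for (i = 0; i < nr; i++) {
            int err = try_invalidate(i);
            if (err && !ret)
                ret = err;
        }
        return ret;
    }

    int main(void)
    {
        int ret;

        dropped = 0;
        ret = scan_stop_early(8);
        printf("stop early:  ret=%d, pages dropped=%d\n", ret, dropped);

        dropped = 0;
        ret = scan_whole_range(8);
        printf("whole range: ret=%d, pages dropped=%d\n", ret, dropped);
        return 0;
    }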
* [PATCH 2/2] VM: invalidate_inode_pages2_range() shouldn't fail on page dirty
From: Trond Myklebust @ 2007-02-13 7:43 UTC
To: akpm; +Cc: linux-kernel
From: Trond Myklebust <Trond.Myklebust@netapp.com>
invalidate_inode_pages2() should not try to fix races between direct_IO and
mmap(). It should only be trying to clear out pages that were dirty before
the direct_IO write (see generic_file_direct_IO()).
Skipping dirty pages should therefore not result in an error.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
---
mm/truncate.c | 8 +++++---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/truncate.c b/mm/truncate.c
index 0f4b6d1..c3ff820 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -318,6 +318,8 @@ EXPORT_SYMBOL(invalidate_mapping_pages);
* invalidation guarantees, and cannot afford to leave pages behind because
* shrink_list() has a temp ref on them, or because they're transiently sitting
* in the lru_cache_add() pagevecs.
+ * Note: this function just skips pages that are dirty without flagging
+ * an error.
*/
static int
invalidate_complete_page2(struct address_space *mapping, struct page *page)
@@ -330,7 +332,7 @@ invalidate_complete_page2(struct address
write_lock_irq(&mapping->tree_lock);
if (PageDirty(page))
- goto failed;
+ goto dirty;
BUG_ON(PagePrivate(page));
__remove_from_page_cache(page);
@@ -338,9 +340,9 @@ invalidate_complete_page2(struct address
ClearPageUptodate(page);
page_cache_release(page); /* pagecache ref */
return 1;
-failed:
+dirty:
write_unlock_irq(&mapping->tree_lock);
- return 0;
+ return 1;
}
static int do_launder_page(struct address_space *mapping, struct page *page)
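For context, generic_file_direct_IO() (mm/filemap.c) roughly does: write back
dirty pages for the range, issue the O_DIRECT transfer, then invalidate the
cached pages over that range. The sketch below only mimics that ordering; the
*_stub() functions are placeholders, not kernel APIs. The point the description
relies on is that a page found dirty at the invalidation step must have been
re-dirtied after the writeback, e.g. through a racing mmap() writer.

    #include <stdio.h>

    /* Stub placeholders for the kernel steps named above; they only report
     * the step they stand for and always "succeed". */
    static int writeback_dirty_pages_stub(void)
    {
        puts("1. flush dirty pagecache for the range to disk");
        return 0;
    }

    static int direct_io_write_stub(void)
    {
        puts("2. O_DIRECT write goes straight to disk, bypassing the cache");
        return 0;
    }

    static int invalidate_cached_range_stub(void)
    {
        puts("3. drop cached pages over the range so later buffered reads "
             "do not see stale data");
        return 0;
    }

    int main(void)
    {
        int ret = writeback_dirty_pages_stub();

        if (ret == 0) {
            ret = direct_io_write_stub();
            if (ret == 0)
                /* A page that is dirty *here* was dirtied after step 1,
                 * i.e. by something racing with the direct write
                 * (typically a writer through mmap). */
                ret = invalidate_cached_range_stub();
        }
        return ret;
    }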
* Re: [PATCH 1/2] VM: invalidate_inode_pages2_range() should not exit early
From: Andrew Morton @ 2007-02-14 21:58 UTC
To: Trond Myklebust; +Cc: linux-kernel
On Mon, 12 Feb 2007 23:43:35 -0800
Trond Myklebust <Trond.Myklebust@netapp.com> wrote:
> From: Trond Myklebust <Trond.Myklebust@netapp.com>
>
> Fix invalidate_inode_pages2_range() so that it does not immediately exit
> just because a single page in the specified range could not be removed.
>
One man's "fix" is another man's "slow down" ;)
Could we please have a description of why this change is needed?
> ---
>
> mm/truncate.c | 4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/truncate.c b/mm/truncate.c
> index ebf3fcb..0f4b6d1 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -375,10 +375,10 @@ int invalidate_inode_pages2_range(struct
>
> pagevec_init(&pvec, 0);
> next = start;
> - while (next <= end && !ret && !wrapped &&
> + while (next <= end && !wrapped &&
> pagevec_lookup(&pvec, mapping, next,
> min(end - next, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
> - for (i = 0; !ret && i < pagevec_count(&pvec); i++) {
> + for (i = 0; i < pagevec_count(&pvec); i++) {
> struct page *page = pvec.pages[i];
> pgoff_t page_index;
>
* Re: [PATCH 2/2] VM: invalidate_inode_pages2_range() shouldn't fail on page dirty
From: Andrew Morton @ 2007-02-14 22:00 UTC
To: Trond Myklebust; +Cc: linux-kernel
On Mon, 12 Feb 2007 23:43:38 -0800
Trond Myklebust <Trond.Myklebust@netapp.com> wrote:
> From: Trond Myklebust <Trond.Myklebust@netapp.com>
>
> invalidate_inode_pages2() should not try to fix races between direct_IO and
> mmap(). It should only be trying to clear out pages that were dirty before
> the direct_IO write (see generic_file_direct_IO()).
> Skipping dirty pages should therefore not result in an error.
>
This change worries me. It's a very bad situation if we leave dirty
pagecache sitting over a piece of the file which is about to be either read
or written via direct-IO. As far as the application is concerned, it
pretty much guarantees impending data corruption and I do think we need to
tell the application the bad news and not just pretend that things are all
OK.
What problem are we trying to fix here?
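To make that concrete, one possible ordering of the race, as a toy simulation
(two plain arrays stand in for the device and the page cache; none of this is
kernel code and all names are made up): a page dirtied through mmap() is
silently skipped while the same range is written with O_DIRECT, and the later
writeback of that page overwrites the direct write without the application
ever being told.

    #include <stdio.h>
    #include <string.h>

    static char disk[9]  = "old-data";  /* contents of the device        */
    static char cache[9] = "old-data";  /* page cache copy of the range  */
    static int  cache_dirty;            /* set when the copy is modified */

    int main(void)
    {
        /* Application dirties the cached page through a shared mapping. */
        memcpy(cache, "mmap-wr!", 8);
        cache_dirty = 1;

        /* Application then writes the same range with O_DIRECT. */
        memcpy(disk, "directIO", 8);

        /* Invalidation skips the dirty page but reports success, so the
         * application believes cache and disk now agree. */

        /* Sometime later, writeback pushes the older dirty page out and
         * silently overwrites the newer direct write. */
        if (cache_dirty) {
            memcpy(disk, cache, 8);
            cache_dirty = 0;
        }

        printf("direct write put \"directIO\" on disk, but disk now holds \"%s\"\n",
               disk);
        return 0;
    }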
> ---
>
> mm/truncate.c | 8 +++++---
> 1 files changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/truncate.c b/mm/truncate.c
> index 0f4b6d1..c3ff820 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -318,6 +318,8 @@ EXPORT_SYMBOL(invalidate_mapping_pages);
> * invalidation guarantees, and cannot afford to leave pages behind because
> * shrink_list() has a temp ref on them, or because they're transiently sitting
> * in the lru_cache_add() pagevecs.
> + * Note: this function just skips pages that are dirty without flagging
> + * an error.
> */
> static int
> invalidate_complete_page2(struct address_space *mapping, struct page *page)
> @@ -330,7 +332,7 @@ invalidate_complete_page2(struct address
>
> write_lock_irq(&mapping->tree_lock);
> if (PageDirty(page))
> - goto failed;
> + goto dirty;
>
> BUG_ON(PagePrivate(page));
> __remove_from_page_cache(page);
> @@ -338,9 +340,9 @@ invalidate_complete_page2(struct address
> ClearPageUptodate(page);
> page_cache_release(page); /* pagecache ref */
> return 1;
> -failed:
> +dirty:
> write_unlock_irq(&mapping->tree_lock);
> - return 0;
> + return 1;
> }
>
> static int do_launder_page(struct address_space *mapping, struct page *page)