Linux-Fsdevel Archive on lore.kernel.org
* [RFC PATCH 0/6] mm: Make more use of readahead_control [ver #2]
From: David Howells @ 2020-09-02 15:44 UTC
To: willy; +Cc: Song Liu, dhowells, linux-fsdevel, linux-mm, linux-kernel
Hi Willy,
Here's a set of patches to expand the use of the readahead_control struct,
essentially from do_sync_mmap_readahead() down. It's on top of:
http://git.infradead.org/users/willy/pagecache.git
The series also passes a file_ra_state struct into force_page_cache_readahead().
The bugfix for khugepaged.c is included because that code is changed further
here.
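For readers following along, the readahead_control struct and the
DEFINE_READAHEAD() helper used throughout the series look roughly like this
in Willy's tree (a sketch based on mainline at the time; the exact fields in
that branch may differ):

	struct readahead_control {
		struct file *file;
		struct address_space *mapping;
	/* private: use the readahead_* accessors instead */
		pgoff_t _index;			/* index of the first page to read */
		unsigned int _nr_pages;		/* pages added to the page cache */
		unsigned int _batch_count;
	};

	#define DEFINE_READAHEAD(rac, f, m, i)		\
		struct readahead_control rac = {	\
			.file = f,			\
			.mapping = m,			\
			._index = i,			\
		}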
David
---
David Howells (6):
Fix khugepaged's request size in collapse_file()
mm: Make ondemand_readahead() take a readahead_control struct
mm: Push readahead_control down into force_page_cache_readahead()
mm: Pass readahead_control into page_cache_{sync,async}_readahead()
mm: Fold ra_submit() into do_sync_mmap_readahead()
mm: Pass a file_ra_state struct into force_page_cache_readahead()
fs/btrfs/free-space-cache.c | 4 ++-
fs/btrfs/ioctl.c | 9 +++--
fs/btrfs/relocation.c | 10 +++---
fs/btrfs/send.c | 16 +++++----
fs/ext4/dir.c | 11 +++---
fs/f2fs/dir.c | 8 +++--
include/linux/pagemap.h | 7 ++--
mm/fadvise.c | 6 +++-
mm/filemap.c | 32 +++++++++--------
mm/internal.h | 14 ++------
mm/khugepaged.c | 4 +--
mm/readahead.c | 70 ++++++++++++++++++-------------------
12 files changed, 100 insertions(+), 91 deletions(-)
* [RFC PATCH 1/6] Fix khugepaged's request size in collapse_file() [ver #2]
From: David Howells @ 2020-09-02 15:44 UTC
To: willy; +Cc: Song Liu, dhowells, linux-fsdevel, linux-mm, linux-kernel
collapse_file() in khugepaged passes PAGE_SIZE to page_cache_sync_readahead()
as the number of pages to read ahead. It seems the value was written as a
number of bytes rather than a number of pages.
Fix it to request the number of pages up to the end of the collapse window
instead.
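To put a scale on the error (assuming x86-64's 4KiB pages and 2MiB THPs, so
HPAGE_PMD_NR == 512): the old call asked for PAGE_SIZE == 4096 pages, i.e.
16MiB of readahead, where at most end - index <= 512 pages (2MiB) are wanted
for the range being collapsed.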
Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/khugepaged.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6d199c353281..f2d243077b74 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1706,7 +1706,7 @@ static void collapse_file(struct mm_struct *mm,
xas_unlock_irq(&xas);
page_cache_sync_readahead(mapping, &file->f_ra,
file, index,
- PAGE_SIZE);
+ end - index);
/* drain pagevecs to help isolate_lru_page() */
lru_add_drain();
page = find_lock_page(mapping, index);
* [RFC PATCH 2/6] mm: Make ondemand_readahead() take a readahead_control struct [ver #2]
From: David Howells @ 2020-09-02 15:44 UTC
To: willy; +Cc: dhowells, linux-fsdevel, linux-mm, linux-kernel
Make ondemand_readahead() take a readahead_control struct in preparation
for making do_sync_mmap_readahead() pass down an RAC struct.
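The shape of the change at the two exported entry points is roughly this (a
sketch of the call pattern; the full hunks follow):

	/* Before: the rac was built inside ondemand_readahead() */
	ondemand_readahead(mapping, ra, filp, page, index, req_count);

	/* After: each caller builds one and passes it down */
	DEFINE_READAHEAD(rac, filp, mapping, index);
	ondemand_readahead(&rac, ra, page, req_count);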
Signed-off-by: David Howells <dhowells@redhat.com>
---
mm/readahead.c | 32 ++++++++++++++++++--------------
1 file changed, 18 insertions(+), 14 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index 91859e6e2b7d..e3e3419dfe3d 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -511,14 +511,14 @@ static bool page_cache_readahead_order(struct readahead_control *rac,
/*
* A minimal readahead algorithm for trivial sequential/random reads.
*/
-static void ondemand_readahead(struct address_space *mapping,
- struct file_ra_state *ra, struct file *file,
- struct page *page, pgoff_t index, unsigned long req_size)
+static void ondemand_readahead(struct readahead_control *rac,
+ struct file_ra_state *ra,
+ struct page *page, unsigned long req_size)
{
- DEFINE_READAHEAD(rac, file, mapping, index);
- struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
+ struct backing_dev_info *bdi = inode_to_bdi(rac->mapping->host);
unsigned long max_pages = ra->ra_pages;
unsigned long add_pages;
+ unsigned long index = rac->_index;
pgoff_t prev_index;
/*
@@ -556,7 +556,7 @@ static void ondemand_readahead(struct address_space *mapping,
pgoff_t start;
rcu_read_lock();
- start = page_cache_next_miss(mapping, index + 1, max_pages);
+ start = page_cache_next_miss(rac->mapping, index + 1, max_pages);
rcu_read_unlock();
if (!start || start - index > max_pages)
@@ -589,14 +589,14 @@ static void ondemand_readahead(struct address_space *mapping,
* Query the page cache and look for the traces(cached history pages)
* that a sequential stream would leave behind.
*/
- if (try_context_readahead(mapping, ra, index, req_size, max_pages))
+ if (try_context_readahead(rac->mapping, ra, index, req_size, max_pages))
goto readit;
/*
* standalone, small random read
* Read as is, and do not pollute the readahead state.
*/
- __do_page_cache_readahead(&rac, req_size, 0);
+ __do_page_cache_readahead(rac, req_size, 0);
return;
initial_readahead:
@@ -622,10 +622,10 @@ static void ondemand_readahead(struct address_space *mapping,
}
}
- rac._index = ra->start;
- if (page && page_cache_readahead_order(&rac, ra, thp_order(page)))
+ rac->_index = ra->start;
+ if (page && page_cache_readahead_order(rac, ra, thp_order(page)))
return;
- __do_page_cache_readahead(&rac, ra->size, ra->async_size);
+ __do_page_cache_readahead(rac, ra->size, ra->async_size);
}
/**
@@ -645,6 +645,8 @@ void page_cache_sync_readahead(struct address_space *mapping,
struct file_ra_state *ra, struct file *filp,
pgoff_t index, unsigned long req_count)
{
+ DEFINE_READAHEAD(rac, filp, mapping, index);
+
/* no read-ahead */
if (!ra->ra_pages)
return;
@@ -659,7 +661,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
}
/* do read-ahead */
- ondemand_readahead(mapping, ra, filp, NULL, index, req_count);
+ ondemand_readahead(&rac, ra, NULL, req_count);
}
EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
@@ -683,7 +685,9 @@ page_cache_async_readahead(struct address_space *mapping,
struct page *page, pgoff_t index,
unsigned long req_count)
{
- /* no read-ahead */
+ DEFINE_READAHEAD(rac, filp, mapping, index);
+
+ /* No Read-ahead */
if (!ra->ra_pages)
return;
@@ -705,7 +709,7 @@ page_cache_async_readahead(struct address_space *mapping,
return;
/* do read-ahead */
- ondemand_readahead(mapping, ra, filp, page, index, req_count);
+ ondemand_readahead(&rac, ra, page, req_count);
}
EXPORT_SYMBOL_GPL(page_cache_async_readahead);
* [RFC PATCH 3/6] mm: Push readahead_control down into force_page_cache_readahead() [ver #2]
From: David Howells @ 2020-09-02 15:44 UTC
To: willy; +Cc: dhowells, linux-fsdevel, linux-mm, linux-kernel
Push readahead_control down into force_page_cache_readahead() from its
callers in preparation for making do_sync_mmap_readahead() pass down an RAC
struct.
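With this change, a caller that previously wrote:

	force_page_cache_readahead(mapping, file, index, nr_to_read);

now writes (as the fadvise hunk below does):

	DEFINE_READAHEAD(rac, file, mapping, index);
	force_page_cache_readahead(&rac, nr_to_read);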
Signed-off-by: David Howells <dhowells@redhat.com>
---
mm/fadvise.c | 5 ++++-
mm/internal.h | 3 +--
mm/readahead.c | 19 +++++++++++--------
3 files changed, 16 insertions(+), 11 deletions(-)
diff --git a/mm/fadvise.c b/mm/fadvise.c
index 0e66f2aaeea3..997f7c16690a 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -104,7 +104,10 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
if (!nrpages)
nrpages = ~0UL;
- force_page_cache_readahead(mapping, file, start_index, nrpages);
+ {
+ DEFINE_READAHEAD(rac, file, mapping, start_index);
+ force_page_cache_readahead(&rac, nrpages);
+ }
break;
case POSIX_FADV_NOREUSE:
break;
diff --git a/mm/internal.h b/mm/internal.h
index bf2bee6c42a1..c8ccf208f524 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,8 +49,7 @@ void unmap_page_range(struct mmu_gather *tlb,
unsigned long addr, unsigned long end,
struct zap_details *details);
-void force_page_cache_readahead(struct address_space *, struct file *,
- pgoff_t index, unsigned long nr_to_read);
+void force_page_cache_readahead(struct readahead_control *, unsigned long);
void __do_page_cache_readahead(struct readahead_control *,
unsigned long nr_to_read, unsigned long lookahead_size);
diff --git a/mm/readahead.c b/mm/readahead.c
index e3e3419dfe3d..366357e6e845 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -271,13 +271,13 @@ void __do_page_cache_readahead(struct readahead_control *rac,
* Chunk the readahead into 2 megabyte units, so that we don't pin too much
* memory at once.
*/
-void force_page_cache_readahead(struct address_space *mapping,
- struct file *file, pgoff_t index, unsigned long nr_to_read)
+void force_page_cache_readahead(struct readahead_control *rac,
+ unsigned long nr_to_read)
{
- DEFINE_READAHEAD(rac, file, mapping, index);
+ struct address_space *mapping = rac->mapping;
struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
- struct file_ra_state *ra = &file->f_ra;
- unsigned long max_pages;
+ struct file_ra_state *ra = &rac->file->f_ra;
+ unsigned long max_pages, index;
if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
!mapping->a_ops->readahead))
@@ -287,14 +287,17 @@ void force_page_cache_readahead(struct address_space *mapping,
* If the request exceeds the readahead window, allow the read to
* be up to the optimal hardware IO size
*/
+ index = readahead_index(rac);
max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
- nr_to_read = min(nr_to_read, max_pages);
+ nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
while (nr_to_read) {
unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;
if (this_chunk > nr_to_read)
this_chunk = nr_to_read;
- __do_page_cache_readahead(&rac, this_chunk, 0);
+
+ rac->_index = index;
+ __do_page_cache_readahead(rac, this_chunk, 0);
index += this_chunk;
nr_to_read -= this_chunk;
@@ -656,7 +659,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
/* be dumb */
if (filp && (filp->f_mode & FMODE_RANDOM)) {
- force_page_cache_readahead(mapping, filp, index, req_count);
+ force_page_cache_readahead(&rac, req_count);
return;
}
* [RFC PATCH 4/6] mm: Pass readahead_control into page_cache_{sync,async}_readahead() [ver #2]
From: David Howells @ 2020-09-02 15:44 UTC
To: willy; +Cc: dhowells, linux-fsdevel, linux-mm, linux-kernel
Pass struct readahead_control into the page_cache_{sync,async}_readahead()
functions in preparation for making do_sync_mmap_readahead() pass down an
RAC struct.
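Callers that issue readahead at a series of offsets now build one
readahead_control up front and reset its index before each call, as the btrfs
and filemap hunks below do (a sketch):

	DEFINE_READAHEAD(rac, file, mapping, 0);
	...
	rac._index = index;
	page_cache_sync_readahead(&rac, ra, req_count);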
Signed-off-by: David Howells <dhowells@redhat.com>
---
fs/btrfs/free-space-cache.c | 4 +++-
fs/btrfs/ioctl.c | 9 ++++++---
fs/btrfs/relocation.c | 10 ++++++----
fs/btrfs/send.c | 16 ++++++++++------
fs/ext4/dir.c | 11 ++++++-----
fs/f2fs/dir.c | 8 ++++++--
include/linux/pagemap.h | 7 +++----
mm/filemap.c | 26 ++++++++++++++------------
mm/khugepaged.c | 4 ++--
mm/readahead.c | 34 +++++++++++++---------------------
10 files changed, 69 insertions(+), 60 deletions(-)
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index dc82fd0c80cb..c64af32453b6 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -288,6 +288,8 @@ static void readahead_cache(struct inode *inode)
struct file_ra_state *ra;
unsigned long last_index;
+ DEFINE_READAHEAD(rac, NULL, inode->i_mapping, 0);
+
ra = kzalloc(sizeof(*ra), GFP_NOFS);
if (!ra)
return;
@@ -295,7 +297,7 @@ static void readahead_cache(struct inode *inode)
file_ra_state_init(ra, inode->i_mapping);
last_index = (i_size_read(inode) - 1) >> PAGE_SHIFT;
- page_cache_sync_readahead(inode->i_mapping, ra, NULL, 0, last_index);
+ page_cache_sync_readahead(&rac, ra, last_index);
kfree(ra);
}
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index bd3511c5ca81..9f9321f20615 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1428,6 +1428,8 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
struct page **pages = NULL;
bool do_compress = range->flags & BTRFS_DEFRAG_RANGE_COMPRESS;
+ DEFINE_READAHEAD(rac, file, inode->i_mapping, 0);
+
if (isize == 0)
return 0;
@@ -1534,9 +1536,10 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
if (i + cluster > ra_index) {
ra_index = max(i, ra_index);
- if (ra)
- page_cache_sync_readahead(inode->i_mapping, ra,
- file, ra_index, cluster);
+ if (ra) {
+ rac._index = ra_index;
+ page_cache_sync_readahead(&rac, ra, cluster);
+ }
ra_index += cluster;
}
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 4ba1ab9cc76d..3d21aeaaa762 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -2684,6 +2684,8 @@ static int relocate_file_extent_cluster(struct inode *inode,
int nr = 0;
int ret = 0;
+ DEFINE_READAHEAD(rac, NULL, inode->i_mapping, 0);
+
if (!cluster->nr)
return 0;
@@ -2712,8 +2714,8 @@ static int relocate_file_extent_cluster(struct inode *inode,
page = find_lock_page(inode->i_mapping, index);
if (!page) {
- page_cache_sync_readahead(inode->i_mapping,
- ra, NULL, index,
+ rac._index = index;
+ page_cache_sync_readahead(&rac, ra,
last_index + 1 - index);
page = find_or_create_page(inode->i_mapping, index,
mask);
@@ -2728,8 +2730,8 @@ static int relocate_file_extent_cluster(struct inode *inode,
}
if (PageReadahead(page)) {
- page_cache_async_readahead(inode->i_mapping,
- ra, NULL, page, index,
+ rac._index = index;
+ page_cache_async_readahead(&rac, ra, page,
last_index + 1 - index);
}
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index d9813a5b075a..f41391fc4230 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -4811,6 +4811,8 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len)
unsigned pg_offset = offset_in_page(offset);
ssize_t ret = 0;
+ DEFINE_READAHEAD(rac, NULL, NULL, 0);
+
inode = btrfs_iget(fs_info->sb, sctx->cur_ino, root);
if (IS_ERR(inode))
return PTR_ERR(inode);
@@ -4829,15 +4831,18 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len)
/* initial readahead */
memset(&sctx->ra, 0, sizeof(struct file_ra_state));
file_ra_state_init(&sctx->ra, inode->i_mapping);
+ rac.mapping = inode->i_mapping;
while (index <= last_index) {
unsigned cur_len = min_t(unsigned, len,
PAGE_SIZE - pg_offset);
+ rac._index = index;
+
page = find_lock_page(inode->i_mapping, index);
if (!page) {
- page_cache_sync_readahead(inode->i_mapping, &sctx->ra,
- NULL, index, last_index + 1 - index);
+ page_cache_sync_readahead(&rac, &sctx->ra,
+ last_index + 1 - index);
page = find_or_create_page(inode->i_mapping, index,
GFP_KERNEL);
@@ -4847,10 +4852,9 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len)
}
}
- if (PageReadahead(page)) {
- page_cache_async_readahead(inode->i_mapping, &sctx->ra,
- NULL, page, index, last_index + 1 - index);
- }
+ if (PageReadahead(page))
+ page_cache_async_readahead(&rac, &sctx->ra, page,
+ last_index + 1 - index);
if (!PageUptodate(page)) {
btrfs_readpage(NULL, page);
diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
index 1d82336b1cd4..9fca0de50e0f 100644
--- a/fs/ext4/dir.c
+++ b/fs/ext4/dir.c
@@ -118,6 +118,8 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
struct buffer_head *bh = NULL;
struct fscrypt_str fstr = FSTR_INIT(NULL, 0);
+ DEFINE_READAHEAD(rac, file, sb->s_bdev->bd_inode->i_mapping, 0);
+
if (IS_ENCRYPTED(inode)) {
err = fscrypt_get_encryption_info(inode);
if (err)
@@ -176,11 +178,10 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
if (err > 0) {
pgoff_t index = map.m_pblk >>
(PAGE_SHIFT - inode->i_blkbits);
- if (!ra_has_index(&file->f_ra, index))
- page_cache_sync_readahead(
- sb->s_bdev->bd_inode->i_mapping,
- &file->f_ra, file,
- index, 1);
+ if (!ra_has_index(&file->f_ra, index)) {
+ rac._index = index;
+ page_cache_sync_readahead(&rac, &file->f_ra, 1);
+ }
file->f_ra.prev_pos = (loff_t)index << PAGE_SHIFT;
bh = ext4_bread(NULL, inode, map.m_lblk, 0);
if (IS_ERR(bh)) {
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index 069f498af1e3..69a316e7808d 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -1027,6 +1027,8 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx)
struct fscrypt_str fstr = FSTR_INIT(NULL, 0);
int err = 0;
+ DEFINE_READAHEAD(rac, file, inode->i_mapping, 0);
+
if (IS_ENCRYPTED(inode)) {
err = fscrypt_get_encryption_info(inode);
if (err)
@@ -1052,9 +1054,11 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx)
cond_resched();
/* readahead for multi pages of dir */
- if (npages - n > 1 && !ra_has_index(ra, n))
- page_cache_sync_readahead(inode->i_mapping, ra, file, n,
+ if (npages - n > 1 && !ra_has_index(ra, n)) {
+ rac._index = n;
+ page_cache_sync_readahead(&rac, ra,
min(npages - n, (pgoff_t)MAX_DIR_RA_PAGES));
+ }
dentry_page = f2fs_find_data_page(inode, n);
if (IS_ERR(dentry_page)) {
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 8bf048a76c43..3c362ddfeb4d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -769,11 +769,10 @@ void delete_from_page_cache_batch(struct address_space *mapping,
#define VM_READAHEAD_PAGES (SZ_128K / PAGE_SIZE)
-void page_cache_sync_readahead(struct address_space *, struct file_ra_state *,
- struct file *, pgoff_t index, unsigned long req_count);
-void page_cache_async_readahead(struct address_space *, struct file_ra_state *,
- struct file *, struct page *, pgoff_t index,
+void page_cache_sync_readahead(struct readahead_control *, struct file_ra_state *,
unsigned long req_count);
+void page_cache_async_readahead(struct readahead_control *, struct file_ra_state *,
+ struct page *, unsigned long req_count);
void page_cache_readahead_unbounded(struct readahead_control *,
unsigned long nr_to_read, unsigned long lookahead_count);
diff --git a/mm/filemap.c b/mm/filemap.c
index 82b97cf4306c..fdfeedd1eb71 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2070,6 +2070,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
unsigned int prev_offset;
int error = 0;
+ DEFINE_READAHEAD(rac, filp, mapping, 0);
+
if (unlikely(*ppos >= inode->i_sb->s_maxbytes))
return 0;
iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
@@ -2097,9 +2099,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
if (!page) {
if (iocb->ki_flags & IOCB_NOIO)
goto would_block;
- page_cache_sync_readahead(mapping,
- ra, filp,
- index, last_index - index);
+ rac._index = index;
+ page_cache_sync_readahead(&rac, ra, last_index - index);
page = find_get_page(mapping, index);
if (unlikely(page == NULL))
goto no_cached_page;
@@ -2109,9 +2110,9 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
put_page(page);
goto out;
}
- page_cache_async_readahead(mapping,
- ra, filp, thp_head(page),
- index, last_index - index);
+ rac._index = index;
+ page_cache_async_readahead(&rac, ra, thp_head(page),
+ last_index - index);
}
if (!PageUptodate(page)) {
/*
@@ -2469,6 +2470,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
pgoff_t offset = vmf->pgoff;
unsigned int mmap_miss;
+ DEFINE_READAHEAD(rac, file, mapping, offset);
+
/* If we don't want any read-ahead, don't bother */
if (vmf->vma->vm_flags & VM_RAND_READ)
return fpin;
@@ -2477,8 +2480,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
if (vmf->vma->vm_flags & VM_SEQ_READ) {
fpin = maybe_unlock_mmap_for_io(vmf, fpin);
- page_cache_sync_readahead(mapping, ra, file, offset,
- ra->ra_pages);
+ page_cache_sync_readahead(&rac, ra, ra->ra_pages);
return fpin;
}
@@ -2515,10 +2517,10 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
{
struct file *file = vmf->vma->vm_file;
struct file_ra_state *ra = &file->f_ra;
- struct address_space *mapping = file->f_mapping;
struct file *fpin = NULL;
unsigned int mmap_miss;
- pgoff_t offset = vmf->pgoff;
+
+ DEFINE_READAHEAD(rac, file, file->f_mapping, vmf->pgoff);
/* If we don't want any read-ahead, don't bother */
if (vmf->vma->vm_flags & VM_RAND_READ || !ra->ra_pages)
@@ -2528,8 +2530,8 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
WRITE_ONCE(ra->mmap_miss, --mmap_miss);
if (PageReadahead(thp_head(page))) {
fpin = maybe_unlock_mmap_for_io(vmf, fpin);
- page_cache_async_readahead(mapping, ra, file,
- thp_head(page), offset, ra->ra_pages);
+ page_cache_async_readahead(&rac, ra, thp_head(page),
+ ra->ra_pages);
}
return fpin;
}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index f2d243077b74..84305574b36d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1703,9 +1703,9 @@ static void collapse_file(struct mm_struct *mm,
}
} else { /* !is_shmem */
if (!page || xa_is_value(page)) {
+ DEFINE_READAHEAD(rac, file, mapping, index);
xas_unlock_irq(&xas);
- page_cache_sync_readahead(mapping, &file->f_ra,
- file, index,
+ page_cache_sync_readahead(&rac, &file->f_ra,
end - index);
/* drain pagevecs to help isolate_lru_page() */
lru_add_drain();
diff --git a/mm/readahead.c b/mm/readahead.c
index 366357e6e845..d8e3e59e4c46 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -633,10 +633,8 @@ static void ondemand_readahead(struct readahead_control *rac,
/**
* page_cache_sync_readahead - generic file readahead
- * @mapping: address_space which holds the pagecache and I/O vectors
+ * @rac: Readahead control.
* @ra: file_ra_state which holds the readahead state
- * @filp: passed on to ->readpage() and ->readpages()
- * @index: Index of first page to be read.
* @req_count: Total number of pages being read by the caller.
*
* page_cache_sync_readahead() should be called when a cache miss happened:
@@ -644,12 +642,10 @@ static void ondemand_readahead(struct readahead_control *rac,
* pages onto the read request if access patterns suggest it will improve
* performance.
*/
-void page_cache_sync_readahead(struct address_space *mapping,
- struct file_ra_state *ra, struct file *filp,
- pgoff_t index, unsigned long req_count)
+void page_cache_sync_readahead(struct readahead_control *rac,
+ struct file_ra_state *ra,
+ unsigned long req_count)
{
- DEFINE_READAHEAD(rac, filp, mapping, index);
-
/* no read-ahead */
if (!ra->ra_pages)
return;
@@ -658,23 +654,21 @@ void page_cache_sync_readahead(struct address_space *mapping,
return;
/* be dumb */
- if (filp && (filp->f_mode & FMODE_RANDOM)) {
- force_page_cache_readahead(&rac, req_count);
+ if (rac->file && (rac->file->f_mode & FMODE_RANDOM)) {
+ force_page_cache_readahead(rac, req_count);
return;
}
/* do read-ahead */
- ondemand_readahead(&rac, ra, NULL, req_count);
+ ondemand_readahead(rac, ra, NULL, req_count);
}
EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
/**
* page_cache_async_readahead - file readahead for marked pages
- * @mapping: address_space which holds the pagecache and I/O vectors
+ * @rac: Readahead control.
* @ra: file_ra_state which holds the readahead state
- * @filp: passed on to ->readpage() and ->readpages()
* @page: The page at @index which triggered the readahead call.
- * @index: Index of first page to be read.
* @req_count: Total number of pages being read by the caller.
*
* page_cache_async_readahead() should be called when a page is used which
@@ -683,13 +677,11 @@ EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
* more pages.
*/
void
-page_cache_async_readahead(struct address_space *mapping,
- struct file_ra_state *ra, struct file *filp,
- struct page *page, pgoff_t index,
+page_cache_async_readahead(struct readahead_control *rac,
+ struct file_ra_state *ra,
+ struct page *page,
unsigned long req_count)
{
- DEFINE_READAHEAD(rac, filp, mapping, index);
-
/* No Read-ahead */
if (!ra->ra_pages)
return;
@@ -705,14 +697,14 @@ page_cache_async_readahead(struct address_space *mapping,
/*
* Defer asynchronous read-ahead on IO congestion.
*/
- if (inode_read_congested(mapping->host))
+ if (inode_read_congested(rac->mapping->host))
return;
if (blk_cgroup_congested())
return;
/* do read-ahead */
- ondemand_readahead(&rac, ra, page, req_count);
+ ondemand_readahead(rac, ra, page, req_count);
}
EXPORT_SYMBOL_GPL(page_cache_async_readahead);
* [RFC PATCH 5/6] mm: Fold ra_submit() into do_sync_mmap_readahead() [ver #2]
From: David Howells @ 2020-09-02 15:44 UTC
To: willy; +Cc: dhowells, linux-fsdevel, linux-mm, linux-kernel
Fold ra_submit() into its last remaining user and pass the previously added
readahead_control struct down into __do_page_cache_readahead().
Signed-off-by: David Howells <dhowells@redhat.com>
---
mm/filemap.c | 6 +++---
mm/internal.h | 10 ----------
2 files changed, 3 insertions(+), 13 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index fdfeedd1eb71..eaa046fdc0b6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2500,10 +2500,10 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
* mmap read-around
*/
fpin = maybe_unlock_mmap_for_io(vmf, fpin);
- ra->start = max_t(long, 0, offset - ra->ra_pages / 2);
- ra->size = ra->ra_pages;
+ ra->start = rac._index = max_t(long, 0, offset - ra->ra_pages / 2);
+ ra->size = ra->ra_pages;
ra->async_size = ra->ra_pages / 4;
- ra_submit(ra, mapping, file);
+ __do_page_cache_readahead(&rac, ra->size, ra->async_size);
return fpin;
}
diff --git a/mm/internal.h b/mm/internal.h
index c8ccf208f524..d62df5559500 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -53,16 +53,6 @@ void force_page_cache_readahead(struct readahead_control *, unsigned long);
void __do_page_cache_readahead(struct readahead_control *,
unsigned long nr_to_read, unsigned long lookahead_size);
-/*
- * Submit IO for the read-ahead request in file_ra_state.
- */
-static inline void ra_submit(struct file_ra_state *ra,
- struct address_space *mapping, struct file *file)
-{
- DEFINE_READAHEAD(rac, file, mapping, ra->start);
- __do_page_cache_readahead(&rac, ra->size, ra->async_size);
-}
-
/**
* page_evictable - test whether a page is evictable
* @page: the page to test
* [RFC PATCH 6/6] mm: Pass a file_ra_state struct into force_page_cache_readahead() [ver #2]
From: David Howells @ 2020-09-02 15:45 UTC
To: willy; +Cc: dhowells, linux-fsdevel, linux-mm, linux-kernel
Pass a file_ra_state struct into force_page_cache_readahead(). One caller
has a file_ra_state that should be passed in and the other doesn't, so the
former now passes its own in.
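One practical effect (visible in the hunks below) is that
force_page_cache_readahead() no longer digs the file_ra_state out of
rac->file itself; a sketch of the two resulting call styles:

	/* generic_fadvise(): use the state embedded in the file */
	force_page_cache_readahead(&rac, &rac.file->f_ra, nrpages);

	/* page_cache_sync_readahead(): forward the ra it was handed */
	force_page_cache_readahead(rac, ra, req_count);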
Signed-off-by: David Howells <dhowells@redhat.com>
---
mm/fadvise.c | 3 ++-
mm/internal.h | 3 ++-
mm/readahead.c | 5 ++---
3 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/mm/fadvise.c b/mm/fadvise.c
index 997f7c16690a..e1b09975caaa 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -106,7 +106,8 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
{
DEFINE_READAHEAD(rac, file, mapping, start_index);
- force_page_cache_readahead(&rac, nrpages);
+ force_page_cache_readahead(&rac, &rac.file->f_ra,
+ nrpages);
}
break;
case POSIX_FADV_NOREUSE:
diff --git a/mm/internal.h b/mm/internal.h
index d62df5559500..ff7b549f6a9d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,7 +49,8 @@ void unmap_page_range(struct mmu_gather *tlb,
unsigned long addr, unsigned long end,
struct zap_details *details);
-void force_page_cache_readahead(struct readahead_control *, unsigned long);
+void force_page_cache_readahead(struct readahead_control *, struct file_ra_state *,
+ unsigned long);
void __do_page_cache_readahead(struct readahead_control *,
unsigned long nr_to_read, unsigned long lookahead_size);
diff --git a/mm/readahead.c b/mm/readahead.c
index d8e3e59e4c46..3f3ce65afc64 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -272,11 +272,10 @@ void __do_page_cache_readahead(struct readahead_control *rac,
* memory at once.
*/
void force_page_cache_readahead(struct readahead_control *rac,
- unsigned long nr_to_read)
+ struct file_ra_state *ra, unsigned long nr_to_read)
{
struct address_space *mapping = rac->mapping;
struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
- struct file_ra_state *ra = &rac->file->f_ra;
unsigned long max_pages, index;
if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
@@ -655,7 +654,7 @@ void page_cache_sync_readahead(struct readahead_control *rac,
/* be dumb */
if (rac->file && (rac->file->f_mode & FMODE_RANDOM)) {
- force_page_cache_readahead(rac, req_count);
+ force_page_cache_readahead(rac, ra, req_count);
return;
}
* Re: [RFC PATCH 3/6] mm: Push readahead_control down into force_page_cache_readahead() [ver #2]
From: Matthew Wilcox @ 2020-09-02 15:54 UTC
To: David Howells; +Cc: linux-fsdevel, linux-mm, linux-kernel
On Wed, Sep 02, 2020 at 04:44:38PM +0100, David Howells wrote:
> +++ b/mm/fadvise.c
> @@ -104,7 +104,10 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
> if (!nrpages)
> nrpages = ~0UL;
>
> - force_page_cache_readahead(mapping, file, start_index, nrpages);
> + {
> + DEFINE_READAHEAD(rac, file, mapping, start_index);
> + force_page_cache_readahead(&rac, nrpages);
> + }
> break;
This is kind of awkward. How about this:
static inline void force_page_cache_readahead(struct address_space *mapping,
struct file *file, pgoff_t index, unsigned long nr_to_read)
{
DEFINE_READAHEAD(rac, file, mapping, index);
force_page_cache_ra(&rac, nr_to_read);
}
in mm/internal.h for now (and it can migrate if it needs to be somewhere else).
^ permalink raw reply [flat|nested] 8+ messages in thread