Linux-Fsdevel Archive on lore.kernel.org
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: David Howells <dhowells@redhat.com>,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
Eric Biggers <ebiggers@google.com>,
Matthew Wilcox <willy@infradead.org>
Subject: [PATCH 6/9] mm/readahead: Pass readahead_control to force_page_cache_ra
Date: Thu, 3 Sep 2020 15:08:41 +0100
Message-ID: <20200903140844.14194-7-willy@infradead.org>
In-Reply-To: <20200903140844.14194-1-willy@infradead.org>
From: David Howells <dhowells@redhat.com>
Reimplement force_page_cache_readahead() as a static inline wrapper that
builds a readahead_control on the stack and calls the new
force_page_cache_ra(). In page_cache_sync_readahead(), pass the existing
readahead_control down directly instead of unpacking its fields.
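For illustration only (a sketch, not part of the diff below), the
caller-side conversion amounts to describing the request once and handing
the same structure down; DEFINE_READAHEAD() is the initializer macro added
earlier in this series (patch 2/9):

	/* Before: loose arguments, repacked by every callee. */
	force_page_cache_readahead(mapping, filp, index, req_count);

	/* After: build the readahead_control once, then pass it down. */
	DEFINE_READAHEAD(ractl, filp, mapping, index);
	force_page_cache_ra(&ractl, req_count);

Callers that keep the old signature go through the new static inline
force_page_cache_readahead() wrapper in mm/internal.h, so they need no
changes in this patch.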
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/internal.h | 13 +++++++++----
mm/readahead.c | 18 ++++++++++--------
2 files changed, 19 insertions(+), 12 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 6aef85f62b9d..5533e85bd123 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,10 +49,15 @@ void unmap_page_range(struct mmu_gather *tlb,
unsigned long addr, unsigned long end,
struct zap_details *details);
-void force_page_cache_readahead(struct address_space *, struct file *,
- pgoff_t index, unsigned long nr_to_read);
-void do_page_cache_ra(struct readahead_control *,
- unsigned long nr_to_read, unsigned long lookahead_size);
+void do_page_cache_ra(struct readahead_control *, unsigned long nr_to_read,
+ unsigned long lookahead_size);
+void force_page_cache_ra(struct readahead_control *, unsigned long nr);
+static inline void force_page_cache_readahead(struct address_space *mapping,
+ struct file *file, pgoff_t index, unsigned long nr_to_read)
+{
+ DEFINE_READAHEAD(ractl, file, mapping, index);
+ force_page_cache_ra(&ractl, nr_to_read);
+}
/*
* Submit IO for the read-ahead request in file_ra_state.
diff --git a/mm/readahead.c b/mm/readahead.c
index 73110c4148f8..3115ced5faae 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -271,13 +271,13 @@ void do_page_cache_ra(struct readahead_control *ractl,
* Chunk the readahead into 2 megabyte units, so that we don't pin too much
 * memory at once.
*/
-void force_page_cache_readahead(struct address_space *mapping,
- struct file *file, pgoff_t index, unsigned long nr_to_read)
+void force_page_cache_ra(struct readahead_control *ractl,
+ unsigned long nr_to_read)
{
- DEFINE_READAHEAD(ractl, file, mapping, index);
+ struct address_space *mapping = ractl->mapping;
struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
- struct file_ra_state *ra = &file->f_ra;
- unsigned long max_pages;
+ struct file_ra_state *ra = &ractl->file->f_ra;
+ unsigned long max_pages, index;
if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
		!mapping->a_ops->readahead))
	return;
@@ -287,14 +287,16 @@ void force_page_cache_readahead(struct address_space *mapping,
* If the request exceeds the readahead window, allow the read to
 * be up to the optimal hardware IO size
*/
+ index = readahead_index(ractl);
max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
- nr_to_read = min(nr_to_read, max_pages);
+ nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
while (nr_to_read) {
unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;
if (this_chunk > nr_to_read)
this_chunk = nr_to_read;
- do_page_cache_ra(&ractl, this_chunk, 0);
+ ractl->_index = index;
+ do_page_cache_ra(ractl, this_chunk, 0);
index += this_chunk;
nr_to_read -= this_chunk;
@@ -576,7 +578,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
/* be dumb */
if (filp && (filp->f_mode & FMODE_RANDOM)) {
- force_page_cache_readahead(mapping, filp, index, req_count);
+ force_page_cache_ra(&ractl, req_count);
return;
}
--
2.28.0
Thread overview: 12+ messages
2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
2020-09-03 14:08 ` [PATCH 1/9] Fix khugepaged's request size in collapse_file Matthew Wilcox (Oracle)
2020-09-03 14:08 ` [PATCH 2/9] mm/readahead: Add DEFINE_READAHEAD Matthew Wilcox (Oracle)
2020-09-03 14:08 ` [PATCH 3/9] mm/readahead: Make page_cache_ra_unbounded take a readahead_control Matthew Wilcox (Oracle)
2020-09-03 19:22 ` Andrew Morton
2020-09-03 19:33 ` Matthew Wilcox
2020-09-03 14:08 ` [PATCH 4/9] mm/readahead: Make do_page_cache_ra " Matthew Wilcox (Oracle)
2020-09-03 14:08 ` [PATCH 5/9] mm/readahead: Make ondemand_readahead " Matthew Wilcox (Oracle)
2020-09-03 14:08 ` Matthew Wilcox (Oracle) [this message]
2020-09-03 14:08 ` [PATCH 7/9] mm/readahead: Add page_cache_sync_ra and page_cache_async_ra Matthew Wilcox (Oracle)
2020-09-03 14:08 ` [PATCH 8/9] mm/filemap: Fold ra_submit into do_sync_mmap_readahead Matthew Wilcox (Oracle)
2020-09-03 14:08 ` [PATCH 9/9] mm/readahead: Pass a file_ra_state into force_page_cache_ra Matthew Wilcox (Oracle)