LKML Archive on lore.kernel.org
From: Fengguang Wu <wfg@mail.ustc.edu.cn>
To: Andrew Morton <akpm@osdl.org>
Cc: Martin Peschke <mp3@de.ibm.com>, linux-kernel@vger.kernel.org
Subject: [PATCH 4/8] readahead: state based method: move readahead_ratio out of compute_thrashing_threshold()
Date: Sat, 27 Jan 2007 16:02:23 +0800
Message-ID: <369886263.12429@ustc.edu.cn>
Message-ID: <20070127082529.616795882@mail.ustc.edu.cn>
In-Reply-To: <20070127080219.161473179@mail.ustc.edu.cn>

[-- Attachment #1: readahead-state-based-method-move-readahead_ratio-out-of-compute_thrashing_threshold.patch --]
[-- Type: text/plain, Size: 957 bytes --]

Make compute_thrashing_threshold() a pure computing routine, by moving
the readahead_ratio policy out of it.

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 mm/readahead.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- linux-2.6.20-rc4-mm1.orig/mm/readahead.c
+++ linux-2.6.20-rc4-mm1/mm/readahead.c
@@ -1025,7 +1025,7 @@ static unsigned long compute_thrashing_t
 	stream_shift = ra_invoke_interval(ra);

 	/* future safe space */
-	ll = (uint64_t) stream_shift * (global_size >> 9) * readahead_ratio * 5;
+	ll = (uint64_t) stream_shift * global_size;
 	do_div(ll, global_shift);
 	ra_size = ll;
@@ -1063,6 +1063,7 @@ state_based_readahead(struct address_spa
 	la_old = la_size = ra->readahead_index - offset;
 	ra_old = ra_readahead_size(ra);
 	ra_size = compute_thrashing_threshold(ra, &remain_space);
+	ra_size = ra_size * readahead_ratio / 100;

 	if (page && remain_space <= la_size) {
 		rescue_pages(page, la_size);
--
Thread overview: 10+ messages

2007-01-27  8:02 ` [PATCH 0/8] readahead updates  Fengguang Wu
2007-01-27  8:02 `   [PATCH 1/8] readahead: min/max sizes: increase VM_MIN_READAHEAD to 32KB  Fengguang Wu
2007-01-27  8:02 `   [PATCH 2/8] readahead: state based method routines: explicitly embed class_new/class_old inside flags  Fengguang Wu
2007-01-27  8:02 `   [PATCH 3/8] readahead: state based method: prevent tiny size  Fengguang Wu
2007-01-27  8:02 `   [PATCH 4/8] readahead: state based method: move readahead_ratio out of compute_thrashing_threshold()  Fengguang Wu
2007-01-27  8:02 `   [PATCH 5/8] readahead: initial method: user recommended size: rename to read_ahead_initial_kb  Fengguang Wu
2007-01-27  8:02 `   [PATCH 6/8] readahead: thrashing recovery method fix  Fengguang Wu
2007-01-27  8:02 `   [PATCH 7/8] readahead: call scheme: fix thrashed unaligned read  Fengguang Wu
2007-01-27  8:02 `   [PATCH 8/8] readahead: laptop mode fix  Fengguang Wu
2007-01-31 13:37 ` [PATCH 0/8] readahead updates  martin