LKML Archive on lore.kernel.org
From: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
To: Mel Gorman <mel@skynet.ie>
Cc: akpm@linux-foundation.org, clameter@sgi.com,
kamezawa.hiroyu@jp.fujitsu.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Apply memory policies to top two highest zones when highest zone is ZONE_MOVABLE
Date: Thu, 02 Aug 2007 15:41:51 -0400
Message-ID: <1186083711.5040.74.camel@localhost>
In-Reply-To: <20070802172118.GD23133@skynet.ie>
On Thu, 2007-08-02 at 18:21 +0100, Mel Gorman wrote:
> The NUMA layer only supports NUMA policies for the highest zone. When
> ZONE_MOVABLE is configured with kernelcore=, the highest zone becomes
> ZONE_MOVABLE. The result is that policies are applied only to
> allocations, such as anonymous pages and page cache, that are
> satisfied from ZONE_MOVABLE when that zone is in use.
>
> This patch applies policies to the two highest zones when the highest
> zone is ZONE_MOVABLE. As ZONE_MOVABLE consists of pages taken from the
> highest "real" zone, this is always functionally equivalent.
>
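For anyone following along at home: mempolicy application is gated on
the policy_zone variable, so the change amounts to never letting
ZONE_MOVABLE become policy_zone. Roughly (a sketch of the gating only,
assuming the check_highest_zone() helper in include/linux/mempolicy.h;
see the patch itself for the exact hunks):

	/*
	 * Sketch, not the verbatim patch.  Policies are applied only
	 * for zones >= policy_zone (see the MPOL_BIND handling in
	 * mm/mempolicy.c), so refusing to promote policy_zone to
	 * ZONE_MOVABLE leaves it at the highest "real" zone and makes
	 * policies cover both of the top two zones.
	 */
	static inline void check_highest_zone(enum zone_type k)
	{
		if (k > policy_zone && k != ZONE_MOVABLE)
			policy_zone = k;
	}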
> The patch has been tested on a variety of machines, both NUMA and
> non-NUMA, covering x86, x86_64 and ppc64. No abnormal results were
> seen in kernbench, tbench, dbench or hackbench. It passes the
> regression tests from the numactl package, with and without
> kernelcore=, once the numactl tests are patched to wait for the
> vmstat counters to update.
>
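The vmstat wait matters because per-cpu counter deltas are folded back
into the global counters only periodically, so a test that samples
/proc/vmstat immediately after an allocation can read a stale value.
A rough illustration of the kind of wait needed (not the actual numactl
test patch; read_vmstat() and wait_vmstat_settled() are made-up helpers
for this sketch):

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	/* Made-up helper: fetch one named counter from /proc/vmstat. */
	static long read_vmstat(const char *name)
	{
		char line[128];
		long val = -1;
		FILE *f = fopen("/proc/vmstat", "r");

		if (!f)
			return -1;
		while (fgets(line, sizeof(line), f)) {
			char key[64];
			long v;

			if (sscanf(line, "%63s %ld", key, &v) == 2 &&
			    !strcmp(key, name)) {
				val = v;
				break;
			}
		}
		fclose(f);
		return val;
	}

	/* Poll until two consecutive samples agree, i.e. the per-cpu
	 * deltas have been folded in and the counter has settled. */
	static long wait_vmstat_settled(const char *name)
	{
		long prev = read_vmstat(name);

		for (;;) {
			long cur;

			sleep(1);
			cur = read_vmstat(name);
			if (cur == prev)
				return cur;
			prev = cur;
		}
	}

	int main(void)
	{
		printf("numa_hit settled at %ld\n",
		       wait_vmstat_settled("numa_hit"));
		return 0;
	}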
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Also tested on an ia64 NUMA platform: some ad hoc, interactive
functional testing with memtoy, plus an overnight run of a usex job
mix. The job mix included a 32-way kernel build, several povray
tracing apps, IO tests, sequential and random vm fault tests, and a
number of 'bin' tests that simulate half a dozen crazed monkeys
pounding away at keyboards entering surprisingly error-free commands.
All of these loop until stopped or until the system
hangs/crashes--which it didn't...
Acked-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Tested-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Thread overview: 23+ messages
2007-08-02 17:21 Mel Gorman
2007-08-02 19:41 ` Lee Schermerhorn [this message]
2007-08-02 20:45 ` Christoph Lameter
2007-08-06 19:44 ` Andrew Morton
2007-08-06 20:13 ` Christoph Lameter
2007-08-06 21:56 ` Paul Jackson
2007-08-03 22:02 ` Andi Kleen
2007-08-04 0:23 ` Mel Gorman
2007-08-04 8:51 ` Andi Kleen
2007-08-04 16:39 ` Mel Gorman
2007-08-06 19:15 ` Andrew Morton
2007-08-06 19:18 ` Christoph Lameter
2007-08-06 20:31 ` Andi Kleen
2007-08-06 21:55 ` Mel Gorman
2007-08-07 5:12 ` Andrew Morton
2007-08-07 16:55 ` Mel Gorman
2007-08-07 18:14 ` Andrew Morton
2007-08-07 20:37 ` Christoph Lameter
2007-08-08 16:49 ` Mel Gorman
2007-08-08 17:03 ` Christoph Lameter
2007-08-06 21:48 ` Mel Gorman
2007-08-06 22:31 ` Christoph Lameter
2007-08-06 22:57 ` Mel Gorman