LKML Archive on lore.kernel.org
* Re: Swap prefetch merge plans
@ 2007-02-09 18:09 Andrew Burgess
  2007-02-09 20:12 ` Con Kolivas
  0 siblings, 1 reply; 3+ messages in thread
From: Andrew Burgess @ 2007-02-09 18:09 UTC (permalink / raw)
  To: kernel, ck, akpm, linux-kernel

>I'm stuck developing code I'm having trouble proving it helps. Normal users 
>find it helps and an artificial testcase shows it helps, but that is not 
>enough, since the normal users will be tainted in their opinion, and the 
>artificial testcase will always be artificial. My mistake of developing novel 
>code in areas that were unquantifiable has been my undoing.

Could you add some statistics gathering to measure
cumulatively how long processes wait for swapin?  Then you
could run with and without and maybe say "on average my
system waits 4 minutes a day without swap prefetch and 2
minutes with"? Or if a simple sum doesn't work, some sort of
graph? Then anyone could run and measure the benefit.

Apologies if you've already thought of this...
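
For what it's worth, a very crude first cut (and only a system-wide proxy,
not the per-process wait-time accounting suggested above) might be a little
sampler like the sketch below, which reads the pswpin counter from
/proc/vmstat once a minute and logs how many pages were swapped in during
each interval, assuming a 2.6 kernel that exports pswpin there. It counts
swapins rather than timing how long anything waited for them, so take it
purely as an illustration of the kind of number one could collect over a day.

#include <stdio.h>
#include <unistd.h>

/* Read the cumulative count of pages swapped in from /proc/vmstat. */
static unsigned long read_pswpin(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[128];
	unsigned long val = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "pswpin %lu", &val) == 1)
			break;
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long prev = read_pswpin(), cur;

	/* Log how many pages were swapped in during each one minute interval. */
	for (;;) {
		sleep(60);
		cur = read_pswpin();
		printf("pages swapped in over last minute: %lu\n", cur - prev);
		fflush(stdout);
		prev = cur;
	}
	return 0;
}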



* Re: Swap prefetch merge plans
  2007-02-09 18:09 Swap prefetch merge plans Andrew Burgess
@ 2007-02-09 20:12 ` Con Kolivas
  0 siblings, 0 replies; 3+ messages in thread
From: Con Kolivas @ 2007-02-09 20:12 UTC (permalink / raw)
  To: Andrew Burgess; +Cc: ck, akpm, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2061 bytes --]

On Saturday 10 February 2007 05:09, Andrew Burgess wrote:
> >I'm stuck developing code I'm having trouble proving it helps. Normal
> > users find it helps and an artificial testcase shows it helps, but that
> > is not enough, since the normal users will be tainted in their opinion,
> > and the artificial testcase will always be artificial. My mistake of
> > developing novel code in areas that were unquantifiable has been my
> > undoing.
>
> Could you add some statistics gathering to measure
> cumulatively how long processes wait for swapin?  Then you
> could run with and without and maybe say "on average my
> system waits 4 minutes a day without swap prefetch and 2
> minutes with"? Or if a simple sum doesn't work, some sort of
> graph? Then anyone could run and measure the benefit.
>
> Apologies if you've already thought of this...

It would depend entirely on the workload / how you use your machine and the 
balance of ram size vs hard drive speed vs swap size. The simple test app 
attached below, run on a 1GB machine with a fairly modern hard drive, saved:

Without:
Timed portion 6272175 microseconds
With:
Timed portion 523623 microseconds

This was with 700MB of what would be considered "application data". So if you 
had lots of firefox windows, openoffice, an email client etc open and then 
did something which caused a big swap load (I have this effect when printing 
a full page colour picture at high resolution), then the total time saved 
over clicking those applications back to life after some idle time (so you 
printed and walked away while it was printing) was about 5.5 seconds. 

So the total saved time over a day would depend on how often you hit swap, and 
how often you clicked things back to life. Of course if you never hit swap 
the code does basically nothing.


Note this app is a silly little thing that, if I recall correctly, only 
worked on 32bit, but here it is.

build with 
gcc -o mallocall mallocall.c -W -Wall -lrt

then test without swap prefetch enabled, and then enable it and test again.

-- 
-ck

[-- Attachment #2: mallocall.c --]
[-- Type: text/x-csrc, Size: 2067 bytes --]

#include <stdio.h>
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <time.h>

void fatal(const char *format, ...)
{
	va_list ap;

	if (format) {
		va_start(ap, format);
		vfprintf(stderr, format, ap);
		va_end(ap);
	}

	fprintf(stderr, "Fatal error - exiting\n");
	exit(1);
}

/* Return the total amount of RAM in bytes, parsed from /proc/meminfo. */
size_t get_ram(void)
{
	unsigned long ramsize = 0;
	FILE *meminfo;
	char aux[256];

	if (!(meminfo = fopen("/proc/meminfo", "r")))
		fatal("fopen\n");

	/* Skip lines until the MemTotal: field matches. */
	while (!feof(meminfo) && !fscanf(meminfo, "MemTotal: %lu kB", &ramsize))
		fgets(aux, sizeof(aux), meminfo);
	if (fclose(meminfo) == -1)
		fatal("fclose");

	/* MemTotal is reported in kB; convert (approximately) to bytes. */
	return ramsize * 1000;
}

/* Return the current CLOCK_REALTIME time in microseconds. */
unsigned long get_usecs(struct timespec *myts)
{
	if (clock_gettime(CLOCK_REALTIME, myts))
		fatal("clock_gettime");
	return (myts->tv_sec * 1000000 + myts->tv_nsec / 1000 );
}

/*
 * Test strategy: fill 70% of RAM with data (buf1), force it out to swap by
 * touching a second, RAM-sized allocation, idle for a while so swap prefetch
 * (if enabled) can bring buf1 back, then time how long re-reading buf1 takes.
 */
int main(void)
{
	unsigned long current_time, time_diff;
	struct timespec myts;
	char *buf1, *buf2, *buf3, *buf4;
	size_t size, full_size = get_ram();
	int sleep_seconds = 600;

	/* buf1 holds 70% of RAM worth of "application data". */
	size = full_size * 7 / 10;
	printf("Starting first malloc of %zu bytes\n", size);
	buf1 = malloc(size);
	if (!buf1)
		fatal("Failed to malloc 1st buffer\n");
	memset(buf1, 0, size);
	printf("Completed first malloc and starting second malloc of %zu bytes\n", full_size);

	/* Allocate and touch a full RAM worth of memory to push buf1 out to swap. */
	buf2 = malloc(full_size);
	if (!buf2)
		fatal("Failed to malloc 2nd buffer\n");
	memset(buf2, 0, full_size);
	buf4 = malloc(1);
	if (!buf4)
		fatal("Failed to malloc scratch byte\n");
	for (buf3 = buf2 + full_size - 1; buf3 > buf2; buf3--)
		*buf4 = *buf3;
	free(buf2);
	printf("Completed second malloc and free\n");

	printf("Sleeping for %d seconds\n", sleep_seconds);
	sleep(sleep_seconds);

	printf("Important part - starting read of first malloc\n");
	time_diff = current_time = get_usecs(&myts);
	for (buf3 = buf1; buf3 < buf1 + size; buf3++)
		*buf4 = *buf3;
	current_time = get_usecs(&myts);
	free(buf4);
	free(buf1);
	printf("Completed read and freeing of first malloc\n");
	time_diff = current_time - time_diff;
	printf("Timed portion %lu microseconds\n",time_diff);

	return 0;
}


* Re: Swap prefetch merge plans
  2007-02-09 20:50   ` Con Kolivas
@ 2007-02-09 22:49     ` Con Kolivas
  0 siblings, 0 replies; 3+ messages in thread
From: Con Kolivas @ 2007-02-09 22:49 UTC (permalink / raw)
  To: ck; +Cc: Andrew Morton, linux-kernel

On Saturday 10 February 2007 07:50, Con Kolivas wrote:
> On Saturday 10 February 2007 07:30, Andrew Morton wrote:
> > On Fri, 09 Feb 2007 14:13:03 +0100
> >
> > jos poortvliet <jos@mijnkamer.nl> wrote:
> > > Nick's comment, replying to me some time ago:
> >
> > I think I was thinking of this:
> >
> > 	http://lkml.org/lkml/2006/2/6/509
>
> Fortunately that predates a lot of changes where I did address all those.
> These will seem out of context without looking at that original email so I
> apologise in advance.
>
> buffered_rmqueue and prefetching x86 specific (not into DMA) were dropped
>
> It is NUMA aware
>
> Global cacheline bouncing in page allocation and page reclaim paths I have
> no answer for as I have to tell swap prefetch that the vm is busy somehow
> and I do that by setting precisely one bit in a lockless manner.
>
> The trylocks were dropped.
>
> The other ideas were to:
> -extend the prefetching. That's extra features
> -knowing for sure when a system is really idle. I've tried hard to do that
> as cheaply as possible.
> -putting pages on the lru? well it puts them on the tail
> -papering over an issue? As I said, no matter how good the vm is, there
> will always be loads that swap.

Perhaps I haven't made this clear enough.

Nick has been kind enough to review pretty much all of swap prefetch at every 
turn. He is understandably suspicious about any code that I generate since 
I'm a doctor and not a computer programmer. I have addressed every concern he 
has raised along the way, and even joked at the end that he "threatened to 
review it over and over and over", which he did not take in jest. 

Swap prefetch has actually had far more review than a lot of code, so I'm 
surprised that this is a remaining concern.  It has been reviewed, and I have 
addressed the concerns. It is possible to hold back code forever at the 
suggestion that more review is always required, and basically that's what I 
feel this has become.

If there is one outstanding valid concern with swap prefetch, it is a numa=64 
scenario that Rohit Seth brought to my attention; I gave him a patch to test 2 
months ago. I've pinged him since, but I understand he's busy and so has not 
had a chance to test the patch.

-- 
-ck


Thread overview:
2007-02-09 18:09 Swap prefetch merge plans Andrew Burgess
2007-02-09 20:12 ` Con Kolivas
     [not found] <200702091038.37143.kernel@kolivas.org>
2007-02-09 20:30 ` [ck] " Andrew Morton
2007-02-09 20:50   ` Con Kolivas
2007-02-09 22:49     ` Con Kolivas
