LKML Archive on lore.kernel.org
From: Mike Galbraith <efault@gmx.de>
To: Olof Johansson <olof@lixom.net>
Cc: Willy Tarreau <w@1wt.eu>,
linux-kernel@vger.kernel.org,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Ingo Molnar <mingo@elte.hu>
Subject: Re: Scheduler(?) regression from 2.6.22 to 2.6.24 for short-lived threads
Date: Tue, 12 Feb 2008 10:23:26 +0100
Message-ID: <1202808206.7829.36.camel@homer.simson.net>
In-Reply-To: <20080211203159.GA11161@lixom.net>
[-- Attachment #1: Type: text/plain, Size: 1579 bytes --]
On Mon, 2008-02-11 at 14:31 -0600, Olof Johansson wrote:
> On Mon, Feb 11, 2008 at 08:58:46PM +0100, Mike Galbraith wrote:
> > It shouldn't matter if you yield or not really, that should reduce the
> > number of non-work spin cycles wasted awaiting preemption as threads
> > execute in series (the problem), and should improve your performance
> > numbers, but not beyond single threaded.
> >
> > If I plugged a yield into the busy wait, I would expect to see a large
> > behavioral difference due to yield implementation changes, but that
> > would only be a symptom in this case, no? Yield should be a noop.
>
> Exactly. It made a big impact on the first testcase from Friday, where
> the spin-off thread spent the bulk of the time in the busy-wait loop,
> with a very small initial workload loop. Thus the yield passed the cpu
> over to the other thread who got a chance to run the small workload,
> followed by a quick finish by both of them. The better model spends the
> bulk of the time in the first workload loop, so yielding doesn't gain
> at all the same amount.
There is a strong dependency on execution order in this testcase.
Between CPU affinity and giving the child a little head start to reduce
the chance of a busy wait (100% if the child wakes on the same CPU and
doesn't preempt the parent), the modified testcase behaves. I don't
think I should need the CPU affinity, but I do.
If you plunk a usleep(1) in prior to calling thread_func(), does your
testcase performance change radically? If so, I wonder if the real
application has the same kind of dependency.
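
For reference, the parent's loop in the attached testcase ends up
looking roughly like this (just a sketch of the relevant lines, the
full program is attached below):

	while (iter--) {
		stopped = 0;
		pthread_create(&thread, NULL, &thread_func, &cpus);
		/* give the child a head start so both workloads run in
		 * parallel rather than in series */
		usleep(1);
		thread_func(&cpus);
		pthread_join(thread, NULL);
	}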
-Mike
[-- Attachment #2: threadtest.c --]
[-- Type: text/x-csrc, Size: 2029 bytes --]
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <sched.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/syscall.h>
/* Atomically increment *a: lwarx/stwcx. loop on PPC, lock incl elsewhere. */
#ifdef __PPC__
static void atomic_inc(volatile long *a)
{
	long result;

	asm volatile ("1:\n\
		lwarx	%0,0,%1\n\
		addic	%0,%0,1\n\
		stwcx.	%0,0,%1\n\
		bne-	1b" : "=&r" (result) : "r" (a) : "cc", "memory");
}
#else
static void atomic_inc(volatile long *a)
{
	asm volatile ("lock; incl %0" : "+m" (*a));
}
#endif
long usecs(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec * 1000000 + tv.tv_usec;
}
void burn(long *burnt)
{
	long then, now, delta, tolerance = 10;

	then = now = usecs();
	while (now == then)
		now = usecs();
	delta = now - then;
	/* discard large jumps (preemption etc.), only credit time spun here */
	if (delta < tolerance)
		*burnt += delta;
}
volatile long stopped;
long burn_usecs = 1000, tot_work, tot_wait;
pid_t parent;
#define gettid() syscall(SYS_gettid)
void *thread_func(void *cpus)
{
	long work = 0, wait = 0;
	cpu_set_t cpuset;
	pid_t whoami = gettid();

	if (whoami != parent) {
		CPU_ZERO(&cpuset);
		CPU_SET(1, &cpuset);
		sched_setaffinity(whoami, sizeof(cpuset), &cpuset);
		usleep(1);
	}

	while (work < burn_usecs)
		burn(&work);
	tot_work += work;

	atomic_inc(&stopped);

	/* Busy-wait */
	while (stopped < *(int *)cpus)
		burn(&wait);
	tot_wait += wait;

	return NULL;
}
int main(int argc, char **argv)
{
	pthread_t thread;
	int iter = 500, cpus = 2;
	long t1, t2;
	cpu_set_t cpuset;

	if (argc > 1)
		iter = atoi(argv[1]);
	if (argc > 2)
		burn_usecs = atoi(argv[2]);

	parent = gettid();
	CPU_ZERO(&cpuset);
	CPU_SET(0, &cpuset);
	sched_setaffinity(parent, sizeof(cpuset), &cpuset);

	t1 = usecs();
	while (iter--) {
		stopped = 0;
		pthread_create(&thread, NULL, &thread_func, &cpus);
		/* child needs headstart guarantee to avoid busy wait */
		usleep(1);
		thread_func(&cpus);
		pthread_join(thread, NULL);
	}
	t2 = usecs();

	printf("time: %ld (us) work: %ld wait: %ld idx: %2.2f\n",
		t2 - t1, tot_work, tot_wait, (double)tot_work / (t2 - t1));

	return 0;
}
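
For anyone who wants to reproduce, something along these lines should
build and run the attachment (the two optional arguments are iterations
and per-thread burn time in microseconds, defaulting to 500 and 1000):

	gcc -O2 -pthread -o threadtest threadtest.c
	./threadtest 500 1000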