LKML Archive on lore.kernel.org
* [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5
@ 2007-04-23 21:57 Bill Davidsen
  2007-04-23 23:45 ` [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46 Ed Tomlinson
  0 siblings, 1 reply; 8+ messages in thread
From: Bill Davidsen @ 2007-04-23 21:57 UTC (permalink / raw)
  To: Linux Kernel M/L

[-- Attachment #1: Type: text/plain, Size: 84 bytes --]

I am not sure a binary attachment will go through; I will move to the web
site if not.

[-- Attachment #2: GL2.6.21-rc7-git6-CFSv5_nice0_jump --]
[-- Type: application/octet-stream, Size: 2039 bytes --]

[-- Attachment #3: GL2.6.21-rc7-git6-CFSv5_nice0_nojump --]
[-- Type: application/octet-stream, Size: 4959 bytes --]

[-- Attachment #4: GL2.6.21-rc7-git6-CFSv5_nice19_nojump --]
[-- Type: application/octet-stream, Size: 4517 bytes --]

[-- Attachment #5: GL2.6.21-rc7-git6-CFSv5_nice-19_jump --]
[-- Type: application/octet-stream, Size: 3676 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46
  2007-04-23 21:57 [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 Bill Davidsen
@ 2007-04-23 23:45 ` Ed Tomlinson
  2007-04-23 23:49   ` Ed Tomlinson
  0 siblings, 1 reply; 8+ messages in thread
From: Ed Tomlinson @ 2007-04-23 23:45 UTC (permalink / raw)
  To: davidsen, Kolivas, Con, Molnar, Ingo; +Cc: Linux Kernel M/L

On Monday 23 April 2007 17:57, Bill Davidsen wrote:
> I am not sure a binary attachment will go thru, I will move to the web 
> ste if not.

I did a quick try of this script here.

With SD 0.46 and X at nice 0 I was getting 1-2 frames per second, so I decided to try cfs v5.
The option to disable automatic renicing did not work, so many threads other than X are now at nice -19...

SD 0.46		1-2 FPS
cfs v5 nice -19	219-233 FPS
cfs v5 nice 0 	1000-1996

Looks like, in this case, nice -19 for X is NOT a good idea.

Kernel is 2.6.20.7 (Gentoo), UP amd64, HZ 300, voluntary preempt (a fully preemptible kernel eventually
locks up switching between 32-bit and 64-bit apps).
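
For anyone wanting to reproduce numbers like the ones above, a rough sketch
(an illustration only, not the glitch1 script itself, and assuming Python 3)
that launches glxgears and collects the FPS figures it prints roughly every
five seconds; the "N frames in X seconds = Y FPS" output format is an
assumption about your glxgears build, so adjust the regex if yours differs:

    import re
    import subprocess

    def sample_glxgears_fps(reports=6):
        """Collect the first `reports` FPS figures glxgears prints (~one every 5 s)."""
        # glxgears emits lines like "3000 frames in 5.0 seconds = 600.000 FPS"
        pat = re.compile(r"=\s*([0-9.]+)\s*FPS")
        proc = subprocess.Popen(["glxgears"], stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT, text=True)
        fps = []
        try:
            for line in proc.stdout:
                m = pat.search(line)
                if m:
                    fps.append(float(m.group(1)))
                    if len(fps) >= reports:
                        break
        finally:
            proc.terminate()
            proc.wait()
        return min(fps), max(fps)

    if __name__ == "__main__":
        low, high = sample_glxgears_fps()
        print("glxgears: %.0f-%.0f FPS" % (low, high))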

Thanks,

Ed Tomlinson

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46
  2007-04-23 23:45 ` [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46 Ed Tomlinson
@ 2007-04-23 23:49   ` Ed Tomlinson
  2007-04-24  6:57     ` Ingo Molnar
  0 siblings, 1 reply; 8+ messages in thread
From: Ed Tomlinson @ 2007-04-23 23:49 UTC (permalink / raw)
  To: davidsen; +Cc: Kolivas, Con, Molnar, Ingo, Linux Kernel M/L

On Monday 23 April 2007 19:45, Ed Tomlinson wrote:
> On Monday 23 April 2007 17:57, Bill Davidsen wrote:
> > I am not sure a binary attachment will go thru, I will move to the web 
> > ste if not.
> 
> I did a quick try of this script here.
> 
> With SD 0.46 with X at nice 0 I was getting 1-2 frames per second.  I decided to try cfs v5.
> The option disable auto renicing did not work so many threads other than X are now at -19...
> 
> SD 0.46		1-2 FPS
> cfs v5 nice -19	219-233 FPS
> cfs v5 nice 0 	1000-1996
   cfs v5 nice -10  60-65 FPS
> 
> Looks like, in this case, nice -19 for X is NOT a good idea.
> 
> Kernel is 2.6.20.7 (gentoo) UP amd64 with HZ 300 voluntary prempt (a fully premptable kernel eventually 
> locks up switching between 32 and 64 apps)

Thanks
Ed 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46
  2007-04-23 23:49   ` Ed Tomlinson
@ 2007-04-24  6:57     ` Ingo Molnar
  2007-04-26 22:00       ` Bill Davidsen
  0 siblings, 1 reply; 8+ messages in thread
From: Ingo Molnar @ 2007-04-24  6:57 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: davidsen, Kolivas, Con, Linux Kernel M/L


* Ed Tomlinson <edt@aei.ca> wrote:

> > SD 0.46		1-2 FPS
> > cfs v5 nice -19	219-233 FPS
> > cfs v5 nice 0 	1000-1996
>    cfs v5 nice -10  60-65 FPS

the problem is, the glxgears portion of this test is an _inverse_ 
testcase.

The reason? glxgears on true 3D hardware will _not_ use X, it will 
directly use the 3D driver of the kernel. So by renicing X to -19 you 
give the xterms more chance to show stuff - the performance of the 
glxgears will 'degrade' - but that is what you asked for: glxgears is 
'just another CPU hog' that competes with X, it's not a "true" X client.

if you are after glxgears performance in this test then you'll get the 
best performance out of this by renicing X to +19 or even SCHED_BATCH.
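
For illustration, a rough sketch of demoting a running process (such as the
X server) to nice +19 and SCHED_BATCH; renice(1) and chrt(1) do the same
from the shell, and acting on another user's process needs root. This is
only a sketch, assuming Python 3's os module on Linux:

    import os
    import sys

    def demote(pid):
        """Push a running process to the lowest nice level and SCHED_BATCH."""
        os.setpriority(os.PRIO_PROCESS, pid, 19)        # nice +19
        os.sched_setscheduler(pid, os.SCHED_BATCH,      # treat it as a CPU-bound batch task
                              os.sched_param(0))        # SCHED_BATCH uses static priority 0

    if __name__ == "__main__":
        demote(int(sys.argv[1]))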

	Ingo

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46
  2007-04-24  6:57     ` Ingo Molnar
@ 2007-04-26 22:00       ` Bill Davidsen
  2007-04-26 22:56         ` Con Kolivas
  0 siblings, 1 reply; 8+ messages in thread
From: Bill Davidsen @ 2007-04-26 22:00 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Ed Tomlinson, Kolivas, Con, Linux Kernel M/L, Bill Davidsen

[-- Attachment #1: Type: text/plain, Size: 1664 bytes --]

Ingo Molnar wrote:
> * Ed Tomlinson <edt@aei.ca> wrote:
> 
>>> SD 0.46		1-2 FPS
>>> cfs v5 nice -19	219-233 FPS
>>> cfs v5 nice 0 	1000-1996
>>    cfs v5 nice -10  60-65 FPS
> 
> the problem is, the glxgears portion of this test is an _inverse_ 
> testcase.
> 
> The reason? glxgears on true 3D hardware will _not_ use X, it will 
> directly use the 3D driver of the kernel. So by renicing X to -19 you 
> give the xterms more chance to show stuff - the performance of the 
> glxgears will 'degrade' - but that is what you asked for: glxgears is 
> 'just another CPU hog' that competes with X, it's not a "true" X client.
> 
> if you are after glxgears performance in this test then you'll get the 
> best performance out of this by renicing X to +19 or even SCHED_BATCH.
> 
Several points on this...

First, I don't think this is accelerated in the way you mean; the 
machine is a test server, with motherboard video using the 945G video 
driver. Given the limitations of the support in that setup, I don't 
think it qualifies as "true 3D hardware," although I guess I could try 
using the vesafb version as a test.

The second thing I note is that on FC6 this scheduler seems to confuse 
'top' to some degree, since glxgears is shown as taking 51% of the 
CPU (one core), while the state breakdown shows about 73% in idle, 
iowait, and interrupt. Image attached.
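
For comparison, the same kind of state breakdown top shows can be pulled
straight from /proc/stat. A rough sketch of the idea (field order per
proc(5); this is just an illustration, not what top does internally):

    import time

    FIELDS = ("user", "nice", "system", "idle", "iowait", "irq", "softirq")

    def cpu_breakdown(interval=5.0):
        """Percentage of time in each CPU state over `interval`, from /proc/stat."""
        def snap():
            with open("/proc/stat") as f:
                # first line: "cpu  user nice system idle iowait irq softirq ..."
                return [int(x) for x in f.readline().split()[1:1 + len(FIELDS)]]
        before = snap()
        time.sleep(interval)
        after = snap()
        deltas = [a - b for a, b in zip(after, before)]
        total = sum(deltas) or 1
        return {name: 100.0 * d / total for name, d in zip(FIELDS, deltas)}

    if __name__ == "__main__":
        for name, pct in cpu_breakdown().items():
            print("%8s: %5.1f%%" % (name, pct))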

After I upgrade the kernel and cfs to the absolute latest I'll repeat 
this, as well as test with vesafb, and my planned run under heavy load.

-- 
Bill Davidsen <davidsen@tmr.com>
   "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot

[-- Attachment #2: top.png --]
[-- Type: image/png, Size: 58571 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD  0.46
  2007-04-26 22:00       ` Bill Davidsen
@ 2007-04-26 22:56         ` Con Kolivas
  2007-04-27  2:52           ` Ed Tomlinson
  2007-04-27 18:22           ` Bill Davidsen
  0 siblings, 2 replies; 8+ messages in thread
From: Con Kolivas @ 2007-04-26 22:56 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: Ingo Molnar, Ed Tomlinson, Linux Kernel M/L

On Friday 27 April 2007 08:00, Bill Davidsen wrote:
> Ingo Molnar wrote:
> > * Ed Tomlinson <edt@aei.ca> wrote:
> >>> SD 0.46		1-2 FPS
> >>> cfs v5 nice -19	219-233 FPS
> >>> cfs v5 nice 0 	1000-1996
> >>
> >>    cfs v5 nice -10  60-65 FPS
> >
> > the problem is, the glxgears portion of this test is an _inverse_
> > testcase.
> >
> > The reason? glxgears on true 3D hardware will _not_ use X, it will
> > directly use the 3D driver of the kernel. So by renicing X to -19 you
> > give the xterms more chance to show stuff - the performance of the
> > glxgears will 'degrade' - but that is what you asked for: glxgears is
> > 'just another CPU hog' that competes with X, it's not a "true" X client.
> >
> > if you are after glxgears performance in this test then you'll get the
> > best performance out of this by renicing X to +19 or even SCHED_BATCH.
>
> Several points on this...
>
> First, I don't think this is accelerated in the way you mean, the
> machine is a test server, with motherboard video using the 945G video
> driver. Given the limitations of the support in that setup, I don't
> think it qualified as "true 3D hardware," although I guess I could try
> using the vesafb version as a test.
>
> The 2nd thing I note is that on FC6 this scheduler seems to confuse
> 'top' to some degree, since the glxgears is shown as taking 51% of the
> CPU (one core), while the state breakdown shows about 73% in idle,
> waitio, and int. image attached.

top by itself certainly cannot be trusted to give a true representation of the 
CPU usage, I'm afraid. It's not as convoluted as, say, trying to track the memory 
usage of an application, but top's resolution being tied to HZ accounting 
makes it unreliable in that regard.
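
A rough illustration of why: in kernels of this vintage, per-task CPU time
in /proc/<pid>/stat is accumulated at timer ticks, so a task that runs and
yields between ticks can be charged far less time than it actually used.
A sketch of the kind of tick-granular sampling top relies on (field
positions per proc(5); illustration only, assuming Python 3):

    import os
    import time

    def task_cpu_percent(pid, interval=2.0):
        """Approximate %CPU of `pid` from /proc/<pid>/stat, tick-granular like top."""
        hz = os.sysconf("SC_CLK_TCK")   # ticks/second as exported to userspace (USER_HZ, typically 100)
        def ticks():
            with open("/proc/%d/stat" % pid) as f:
                fields = f.read().rsplit(")", 1)[1].split()   # skip "pid (comm)" safely
                return int(fields[11]) + int(fields[12])      # utime + stime, in ticks
        t0 = ticks()
        time.sleep(interval)
        t1 = ticks()
        return 100.0 * (t1 - t0) / (hz * interval)
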
>
> After I upgrade the kernel and cfs to the absolute latest I'll repeat
> this, as well as test with vesafb, and my planned run under heavy load.

I have a problem with your test case, Bill. Its behaviour would depend on how 
GPU-bound vs CPU-bound and accelerated vs non-accelerated your graphics card 
is. I get completely different results from those of the other testers given 
the different hardware configuration, and I don't think my results are 
valuable. My problem with this test case is: what would you define 
as "perfect" behaviour for your test case? It seems far too arbitrary.

-- 
-ck

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD  0.46
  2007-04-26 22:56         ` Con Kolivas
@ 2007-04-27  2:52           ` Ed Tomlinson
  2007-04-27 18:22           ` Bill Davidsen
  1 sibling, 0 replies; 8+ messages in thread
From: Ed Tomlinson @ 2007-04-27  2:52 UTC (permalink / raw)
  To: Con Kolivas; +Cc: Bill Davidsen, Ingo Molnar, Linux Kernel M/L

On Thursday 26 April 2007 18:56, Con Kolivas wrote:
> On Friday 27 April 2007 08:00, Bill Davidsen wrote:
> > Ingo Molnar wrote:
> > > * Ed Tomlinson <edt@aei.ca> wrote:
> > >>> SD 0.46		1-2 FPS
> > >>> cfs v5 nice -19	219-233 FPS
> > >>> cfs v5 nice 0 	1000-1996
> > >>
> > >>    cfs v5 nice -10  60-65 FPS
> > >
> > > the problem is, the glxgears portion of this test is an _inverse_
> > > testcase.
> > >
> > > The reason? glxgears on true 3D hardware will _not_ use X, it will
> > > directly use the 3D driver of the kernel. So by renicing X to -19 you
> > > give the xterms more chance to show stuff - the performance of the
> > > glxgears will 'degrade' - but that is what you asked for: glxgears is
> > > 'just another CPU hog' that competes with X, it's not a "true" X client.
> > >
> > > if you are after glxgears performance in this test then you'll get the
> > > best performance out of this by renicing X to +19 or even SCHED_BATCH.
> >
> > Several points on this...
> >
> > First, I don't think this is accelerated in the way you mean, the
> > machine is a test server, with motherboard video using the 945G video
> > driver. Given the limitations of the support in that setup, I don't
> > think it qualified as "true 3D hardware," although I guess I could try
> > using the vesafb version as a test.
> >
> > The 2nd thing I note is that on FC6 this scheduler seems to confuse
> > 'top' to some degree, since the glxgears is shown as taking 51% of the
> > CPU (one core), while the state breakdown shows about 73% in idle,
> > waitio, and int. image attached.
> 
> top by itself certainly cannot be trusted to give true representation of the 
> cpu usage I'm afraid. It's not as convoluted as, say, trying to track memory 
> usage of an application, but top's resolution being tied to HZ accounting 
> makes it not reliable in that regard.
> >
> > After I upgrade the kernel and cfs to the absolute latest I'll repeat
> > this, as well as test with vesafb, and my planned run under heavy load.
> 
> I have a problem with your test case Bill. Its behaviour would depend on how 
> gpu bound vs cpu bound vs accelerated vs non-accelerated your graphics card 
> is. I get completely different results to those of the other testers given 
> the different hardware configuration and I don't think my results are 
> valuable. My problem with this testcase is - What would you define 
> as "perfect" behaviour for your test case? It seems far too arbitrary.

Con,

One thing I did not mention in all this is that renicing the glxgears process to -10
gets SD to give about 1000 FPS; indeed, you get most of this performance at -5 too.
All in all, SD does a very good job here.
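
In case anyone wants to repeat that, starting the client itself at a chosen
nice level is enough. A trivial sketch (negative values need root):

    import subprocess

    def run_glxgears_at(niceness=-10):
        """Launch glxgears at the given nice level via nice(1)."""
        return subprocess.Popen(["nice", "-n", str(niceness), "glxgears"])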

Get well soon!
Ed

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46
  2007-04-26 22:56         ` Con Kolivas
  2007-04-27  2:52           ` Ed Tomlinson
@ 2007-04-27 18:22           ` Bill Davidsen
  1 sibling, 0 replies; 8+ messages in thread
From: Bill Davidsen @ 2007-04-27 18:22 UTC (permalink / raw)
  To: Con Kolivas; +Cc: Ingo Molnar, Ed Tomlinson, Linux Kernel M/L

Con Kolivas wrote:
> On Friday 27 April 2007 08:00, Bill Davidsen wrote:
>   
>> Ingo Molnar wrote:
>>     
>>> * Ed Tomlinson <edt@aei.ca> wrote:
>>>       
>>>>> SD 0.46		1-2 FPS
>>>>> cfs v5 nice -19	219-233 FPS
>>>>> cfs v5 nice 0 	1000-1996
>>>>>           
>>>>    cfs v5 nice -10  60-65 FPS
>>>>         
>>> the problem is, the glxgears portion of this test is an _inverse_
>>> testcase.
>>>
>>> The reason? glxgears on true 3D hardware will _not_ use X, it will
>>> directly use the 3D driver of the kernel. So by renicing X to -19 you
>>> give the xterms more chance to show stuff - the performance of the
>>> glxgears will 'degrade' - but that is what you asked for: glxgears is
>>> 'just another CPU hog' that competes with X, it's not a "true" X client.
>>>
>>> if you are after glxgears performance in this test then you'll get the
>>> best performance out of this by renicing X to +19 or even SCHED_BATCH.
>>>       
>> Several points on this...
>>
>> First, I don't think this is accelerated in the way you mean, the
>> machine is a test server, with motherboard video using the 945G video
>> driver. Given the limitations of the support in that setup, I don't
>> think it qualified as "true 3D hardware," although I guess I could try
>> using the vesafb version as a test.
>>
>> The 2nd thing I note is that on FC6 this scheduler seems to confuse
>> 'top' to some degree, since the glxgears is shown as taking 51% of the
>> CPU (one core), while the state breakdown shows about 73% in idle,
>> waitio, and int. image attached.
>>     
>
> top by itself certainly cannot be trusted to give true representation of the 
> cpu usage I'm afraid. It's not as convoluted as, say, trying to track memory 
> usage of an application, but top's resolution being tied to HZ accounting 
> makes it not reliable in that regard.
>   
>> After I upgrade the kernel and cfs to the absolute latest I'll repeat
>> this, as well as test with vesafb, and my planned run under heavy load.
>>     
>
> I have a problem with your test case Bill. Its behaviour would depend on how 
> gpu bound vs cpu bound vs accelerated vs non-accelerated your graphics card 
> is. I get completely different results to those of the other testers given 
> the different hardware configuration and I don't think my results are 
> valuable. My problem with this testcase is - What would you define 
> as "perfect" behaviour for your test case? It seems far too arbitrary.
>
>   
It was more intended to give immediate feedback on gross behavior. On 
some old schedulers (2.4.x) it visibly ran one xterm after the other, 
while on 2.6.2[01] that behavior is gone and all schedulers give equal 
time as seen by the eye. Looking at the behavior with line and jump 
scroll, under load or not, X nice or nasty, allows a quick check on 
where the bad cases are, if any exist.

I intended it as a quick way to identify really, visibly bad 
scheduling, not as a test for quantifying performance. The fact that FPS 
varies by almost an order of magnitude with some earlier versions of the 
schedulers is certainly a red flag to me that there's a corner case, and 
that something I care about more than glxgears will be inconsistent as well.

Hopefully in that context, as a relatively quick way to try nice and 
load values, it's a useful tool.
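
For anyone who wants to try the same kind of quick visual check without the
original script, a rough stand-in along these lines works (only a sketch,
not the actual glitch1 script; it assumes xterm and glxgears are installed
and Python 3 is available):

    import subprocess
    import time

    def glitch1_like(xterms=4, seconds=60):
        """Crude visual scheduler check: scrolling xterms competing with glxgears."""
        scroller = "while :; do cat /etc/services; done"   # cheap endless scrolling output
        procs = [subprocess.Popen(["xterm", "-e", "sh", "-c", scroller])
                 for _ in range(xterms)]
        procs.append(subprocess.Popen(["glxgears"]))       # FPS figures appear on its stdout
        try:
            time.sleep(seconds)    # watch by eye whether the xterms scroll evenly
        finally:
            for p in procs:
                p.terminate()
            for p in procs:
                p.wait()

    if __name__ == "__main__":
        glitch1_like()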

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2007-04-27 18:22 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-04-23 21:57 [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 Bill Davidsen
2007-04-23 23:45 ` [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46 Ed Tomlinson
2007-04-23 23:49   ` Ed Tomlinson
2007-04-24  6:57     ` Ingo Molnar
2007-04-26 22:00       ` Bill Davidsen
2007-04-26 22:56         ` Con Kolivas
2007-04-27  2:52           ` Ed Tomlinson
2007-04-27 18:22           ` Bill Davidsen
