LKML Archive on lore.kernel.org
* question regarding strange kernel stats under high (network) load
@ 2004-05-23 15:33 Willem de Bruijn
  0 siblings, 0 replies; only message in thread
From: Willem de Bruijn @ 2004-05-23 15:33 UTC (permalink / raw)
  To: linux-kernel

Hi,

  for a network monitoring project that I'm working on (see my sig if you're 
interested) I have to compare system load under high throughput. Directly 
comparing my implementation to tcpdump (LSF) looks fine: our user% and 
kernel% are lower. However, when I added an idle system to the comparison I 
noticed some strange behaviour that persists across re-runs of the tests: 

the kernel% is actually higher when nothing is running than when my code is 
running! Overall the idle system does less processing (its idle% is higher), 
but that is accounted for by the high %si (soft interrupts) that my code 
generates. 

To better explain my problem, here is the output of top (the new one, with 
%si, %hi and %wa) for 600Mbit throughput. The first row is for the idle 
system, the second for FFPF (my system) and the third for Linux Socket 
Filters:

Mbit  iperf in  iperf out    capt    recv  kern drop  % drop  tcpdump %  iperf %  user %  kernel %  idle %
 600       608        608       -       -          -       -          -       33       6        30      40
 600       608        608  514084  517314         33    0.63         17       20      13        15      25
 600       609        608  421215  517597      96382    31.4         76       33      17      49.5       0

The usage statistics are user%, kernel% and idle% (I haven't saved the si% 
results). What's odd is that although idle% is higher on the idle system, so 
is its kernel%: the system under load actually reports a lower kernel%. I can 
understand that my code generates extra soft interrupts, but not that it has 
an effect on the `other' kernel load. Perhaps I don't understand the values 
well enough, in which case I would greatly appreciate your help. However, it 
could also signal a bug in the statistics reporting (though I hope that's not 
the case, as these results are great for my paper :). FYI, I used the command

top -b -n 5 -d 3 | grep "Cpu\|tcpdump\|iperf"

to generate these results.
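For completeness, the same per-mode percentages can also be computed straight 
from /proc/stat instead of scraping top's output. A rough sketch (field names 
and order taken from proc(5); the function name and interval are my own, not 
anything top uses):

```python
# Sketch: sample the aggregate "cpu" line of /proc/stat twice and turn
# the jiffies deltas into percentages, like top's %us/%sy/%si columns.
# Field order per proc(5): user nice system idle iowait irq softirq ...
import time

def cpu_sample():
    with open("/proc/stat") as f:
        # First line is the aggregate "cpu" line; drop the label.
        return [int(v) for v in f.readline().split()[1:]]

def cpu_percent(interval=3.0):
    a = cpu_sample()
    time.sleep(interval)
    b = cpu_sample()
    delta = [y - x for x, y in zip(a, b)]
    total = sum(delta) or 1  # guard against a zero-length interval
    names = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]
    return {n: 100.0 * d / total for n, d in zip(names, delta)}

if __name__ == "__main__":
    for name, pct in sorted(cpu_percent().items()):
        print("%-8s %5.1f%%" % (name, pct))
```

This avoids any dependence on top's output format, though the numbers should 
match what top -b reports over the same interval.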

thanks for your help,

  Willem
-- 
Willem de Bruijn
wdebruij_at_dds.nl
http://www.wdebruij.dds.nl/

current project : 
Fairly Fast Packet Filter (FFPF)
http://ffpf.sourceforge.net/

-- 
This message has been scanned for viruses and dangerous content by ULCN MailScanner, and is believed to be clean.

