LKML Archive on lore.kernel.org
From: ebiederman@uswest.net (Eric W. Biederman)
To: Larry McVoy <lm@bitmover.com>
Cc: Rob Landley <landley@trommello.org>, linux-kernel@vger.kernel.org
Subject: Re: sis630/celeron perf sucks?
Date: 08 Oct 2001 09:58:02 -0600	[thread overview]
Message-ID: <m1adz2w2d1.fsf@frodo.biederman.org> (raw)
References: <20011006130647.B26223@work.bitmover.com> <01100618241801.05593@localhost.localdomain> <20011007214009.A3608@work.bitmover.com>
In-Reply-To: <20011007214009.A3608@work.bitmover.com>

Larry McVoy <lm@bitmover.com> writes:

> > Run memtest86 to see what your memory bandwidth is.
> 
> As far as I know, LMbench tells me what my memory bandwidth is just fine.
> I don't care if it is telling me the limit (I know it isn't) I only need
> to know relative speeds across platforms.  It does that.

Getting some real memory bandwidth data out of it would be interesting
for tracking down the problem.  By the time you get to pipe bandwidth,
the fact that you changed kernels could easily have an effect.

I think LMbench has the equivalent of STREAM in it, so those numbers
would be useful.

> > Yup.  Blame Intel's marketing department.  This isn't a SIS problem, that's 
> > pure Intel's crippling of the DeCeleron...
> 
> I checked with a guy who works here, he used to work in Intel's processor
> group on performance, and he tells me it isn't the processor, it's the 
> motherboard.  Which jives nicely with the data.

The PII bus has 64 data pins and transfers data over them at the processor
FSB clock rate.  So the processor maximums look something like:
 66 MHz   528 MB/s
100 MHz   800 MB/s
133 MHz  1064 MB/s

The data bus for SDRAM follows the same rules, except that the address
for the data also happens to go over the data bus, which gives the
processor a small advantage in pure bandwidth.  The biggest hit SDRAM
takes from protocol overhead is when either (a) you don't burst or
(b) you are doing back-to-back reads and writes.  Writes go into the
SDRAM pipeline immediately, but reads can't come out of the pipeline
immediately.

And you were getting at most 1/5th of the theoretical maximum, which
looks ugly.  Not that I have seen the PII core get close to its bus
potential except under special conditions.

> I'm just hoping there is some SiS genius out there who will ask me
> 
> "Did you remember to turn off the go 3x slower mode in the BIOS?"
> 
> and I'll hang my head in shame and ask to be directed to that magic
> BIOS switch.

Have you verified that the MTRRs are enabled on your memory?  I
suppose there could also be a bad memory controller setting.  You
might be able to look at the LinuxBIOS code and see if your BIOS is
doing something different.

Eric


Thread overview: 6+ messages
2001-10-06 20:06 Larry McVoy
2001-10-06 22:24 ` Rob Landley
2001-10-08  4:40   ` Larry McVoy
2001-10-08 15:58     ` Eric W. Biederman [this message]
2001-10-08 19:50   ` Stefan Smietanowski
2001-10-08 20:54     ` Benjamin LaHaise
