LKML Archive on lore.kernel.org
From: Ingo Molnar <mingo@elte.hu>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Roland Dreier <rdreier@cisco.com>,
	linux-kernel@vger.kernel.org,
	Thomas Mingarelli <thomas.mingarelli@hp.com>
Subject: Re: hpwdt oops in clflush_cache_range
Date: Wed, 27 Feb 2008 21:36:55 +0100	[thread overview]
Message-ID: <20080227203655.GA30054@elte.hu> (raw)
In-Reply-To: <alpine.LFD.1.00.0802272011130.7583@apollo.tec.linutronix.de>


* Thomas Gleixner <tglx@linutronix.de> wrote:

> > [    0.004000] Intel(R) Xeon(R) CPU            5160  @ 3.00GHz stepping 06
> 
> This one has a 36-bit physical address space. You can verify that via
> /proc/cpuinfo.
> 
> > [ 8425.910898] ACPI: PCI Interrupt 0000:01:04.0[A] -> GSI 21 (level, low) -> IRQ 21
> > [ 8425.915097] hpwdt: New timer passed in is 30 seconds.
> > [ 8425.915139] BUG: unable to handle kernel paging request at ffffc20001a0a000
> > [ 8425.919087] IP: [<ffffffff8021dacc>] clflush_cache_range+0xc/0x25
> > [ 8425.919087] PGD 1bf80e067 PUD 1bf80f067 PMD 1bb497067 PTE 80000047000ee17b
> 
> While the physical address of your ioremap is 0x47000ee000.
> 
> 2^36  == 0x1000000000
> ---->    0x47000ee000
> 
> So the fault is not very surprising. Unfortunately we do not check
> whether physaddr is inside the valid physical address space. I'll whip
> up a patch to do that.

also note that the driver would have faulted in a similar way anyway, 
the first time it tried to access that ioremap range. It's just that, 
due to the clflush, we took the fault already in ioremap().

via the physical range check we'll do a more graceful exit and the 
driver won't crash either. (it will just not work)
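
A minimal illustrative sketch, not the actual patch: one way such a 
physical-address sanity check in ioremap() could look, assuming a helper 
built on the CPU-reported address width in boot_cpu_data.x86_phys_bits 
(from <asm/processor.h>); the helper name and its exact placement are 
hypothetical.

  /*
   * Sketch only: reject physical addresses outside the address width
   * the CPU reports (36 bits on the Xeon 5160 above), so a bogus
   * mapping request fails gracefully instead of faulting later in
   * clflush_cache_range().
   */
  static inline int phys_addr_valid(unsigned long addr)
  {
          return !(addr >> boot_cpu_data.x86_phys_bits);
  }

  /* early in __ioremap(), before any page-attribute work: */
  if (!phys_addr_valid(phys_addr)) {
          printk(KERN_WARNING
                 "ioremap: invalid physical address %lx\n", phys_addr);
          WARN_ON_ONCE(1);
          return NULL;
  }

With 0x47000ee000 and 36 physical bits, the shift leaves 0x47, the check 
fails, and ioremap() returns NULL before ever touching the bogus range.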

	Ingo


Thread overview: 20+ messages
2008-02-27 17:08 Roland Dreier
2008-02-27 17:37 ` Mingarelli, Thomas
2008-02-27 18:14 ` Thomas Gleixner
2008-02-27 18:38   ` Roland Dreier
2008-02-27 19:48     ` Thomas Gleixner
2008-02-27 20:36       ` Ingo Molnar [this message]
2008-02-27 20:42         ` Thomas Gleixner
2008-02-27 20:44           ` Ingo Molnar
2008-02-27 20:59           ` Thomas Gleixner
2008-02-27 21:14             ` Ingo Molnar
2008-02-27 21:17             ` Roland Dreier
2008-02-27 21:35               ` Roland Dreier
2008-02-27 23:44               ` Mingarelli, Thomas
2008-02-28  0:12                 ` Roland Dreier
2008-02-28  3:09                   ` Mingarelli, Thomas
     [not found]                   ` <E14D1C2A44812C4F9C3ED321127EDF6505829D5D6C@G3W0854.americas.hpqcorp.net>
2008-02-28 17:38                     ` [PATCH] [WATCHDOG] Fix declaration of struct smbios_entry_point in hpwdt Roland Dreier
2008-02-28 17:48                       ` [PATCH for 2.6.26] [WATCHDOG] Fix return value warning " Roland Dreier
2008-02-28 20:34                       ` [PATCH] [WATCHDOG] hpwdt: Use dmi_walk() instead of own copy Roland Dreier
2008-02-28 21:24                         ` Mingarelli, Thomas
2008-02-28 21:26                         ` Wim Van Sebroeck

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the message's mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20080227203655.GA30054@elte.hu \
    --to=mingo@elte.hu \
    --cc=linux-kernel@vger.kernel.org \
    --cc=rdreier@cisco.com \
    --cc=tglx@linutronix.de \
    --cc=thomas.mingarelli@hp.com \
    --subject='Re: hpwdt oops in clflush_cache_range' \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
