LKML Archive
From: Bill Waddington <>
Cc: Robert Hancock <>
Subject: Re: DMA mapping API on 32-bit X86 with CONFIG_HIGHMEM64G
Date: Sun, 17 Feb 2008 07:31:53 -0800
Message-ID: <>
In-Reply-To: <>

On Tue, 12 Feb 2008 04:16:58 UTC, in fa.linux.kernel you wrote:

>I was looking at the out-of-tree driver for a PCI high-security module 
>(from a vendor who shall remain nameless) today, as we had a problem 
>reported where the device didn't work properly if the computer had more 
>than 4GB of RAM (this is x86 32-bit, with CONFIG_HIGHMEM64G enabled).
>Essentially what it was doing was taking some memory that the userspace 
>app was transferring to/from the device, doing get_user_pages on it, and 
>then using the old-style page_to_phys, etc. functions to DMA on that 
>memory instead of the modern DMA API.
>However, I'm not sure this strategy would have worked on this platform 
>even if it had been using the proper DMA API. This device has 32-bit DMA 
>limits and is transferring userspace buffers which with HIGHMEM64G 
>enabled could easily have physical addresses over 4GB. The strategy that 
>Linux Device Drivers, 3rd Edition (chapter 15) suggests is doing 
>get_user_pages, creating an SG list from the returned pages and then 
>using dma_map_sg on that list. However, essentially all that dma_map_sg
>in include/asm-x86/dma-mapping_32.h does is:
>	for_each_sg(sglist, sg, nents, i) {
>		BUG_ON(!sg_page(sg));
>		sg->dma_address = sg_phys(sg);
>	}
>which does nothing to ensure that the returned physical address is 
>within the device's DMA mask. On 64-bit this triggers IOMMU mapping but 
>on 32-bit it doesn't seem like this case is handled at all. I believe 
>the block and networking layers have their own ways of ensuring that 
>they don't feed such buffers to their drivers if they can't handle it, 
>but a basic character device driver is kind of left out in the cold here 
>and the DMA API doesn't appear to work as documented in this case. Given 
>that x86-32 kernels don't implement any IOMMU support I'm not sure what 
>it actually could do, other than implementing some kind of software 
>bounce buffering of its own..
>Are there any in-tree drivers that use this DMA mapping on 
>get_user_pages strategy that could be affected by this?

No takers?  This got me worried about _my_ out-of-tree driver...

>I think the get_user_pages trick is actually pretty silly in this case, 
>the size of the data being transferred is likely such that it would be 
>just as fast or faster to copy to a kernel buffer and DMA to/from there..

That's what I do currently.  If HIGHMEM64G is defined I switch from
user space DMA to an in-driver copy/DMA buffer.

Is there a more elegant/simpler way to do this?  At one time I thought
there was a kernel bounce-buffer hidden behind the DMA API - at least
on some architectures and some memory configurations.

Just my imagination, or is this problem already taken care of in the kernel somewhere?

William D Waddington
"Even bugs...are unexpected signposts on
the long road of creativity..." - Ken Burtch

Thread overview: 2+ messages
2008-02-12  4:15 DMA mapping API on 32-bit X86 with CONFIG_HIGHMEM64G Robert Hancock
2008-02-17 15:31 ` Bill Waddington [this message]
