LKML Archive on lore.kernel.org
From: Robert Hancock <hancockr@shaw.ca>
To: linux-kernel <linux-kernel@vger.kernel.org>
Subject: DMA mapping API on 32-bit X86 with CONFIG_HIGHMEM64G
Date: Mon, 11 Feb 2008 22:15:20 -0600
Message-ID: <47B11D58.2070201@shaw.ca>

I was looking at the out-of-tree driver for a PCI high-security module 
(from a vendor who shall remain nameless) today, as we had a problem 
reported where the device didn't work properly if the computer had more 
than 4GB of RAM (this is x86 32-bit, with CONFIG_HIGHMEM64G enabled).

Essentially, it was taking the memory that the userspace app was 
transferring to/from the device, doing get_user_pages on it, and then 
using the old-style page_to_phys, etc. functions to do DMA to/from that 
memory instead of going through the modern DMA API.
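
In other words, something roughly along these lines (an illustrative 
sketch only, not the vendor's actual code; error handling is omitted and 
"ioaddr"/"DESC_ADDR" are made-up names for the device's registers):

	down_read(&current->mm->mmap_sem);
	ret = get_user_pages(current, current->mm, uaddr & PAGE_MASK,
			     nr_pages, 1 /* write */, 0 /* force */,
			     pages, NULL);
	up_read(&current->mm->mmap_sem);

	for (i = 0; i < nr_pages; i++) {
		dma_addr_t busaddr = page_to_phys(pages[i]);

		/* with HIGHMEM64G busaddr can be above 4GB, which a
		 * device with a 32-bit DMA limit cannot reach */
		writel((u32)busaddr, ioaddr + DESC_ADDR(i));
	}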

However, I'm not sure this strategy would have worked on this platform 
even if it had been using the proper DMA API. This device has a 32-bit 
DMA limit and is transferring userspace buffers which, with HIGHMEM64G 
enabled, could easily have physical addresses above 4GB. The strategy 
that Linux Device Drivers, 3rd Edition (chapter 15) suggests is doing 
get_user_pages, building an SG list from the returned pages and then 
using dma_map_sg on that list (sketched further below). However, 
essentially all that dma_map_sg in include/asm-x86/dma-mapping_32.h 
does is:

	for_each_sg(sglist, sg, nents, i) {
		BUG_ON(!sg_page(sg));

		sg->dma_address = sg_phys(sg);
	}

which does nothing to ensure that the returned physical addresses are 
within the device's DMA mask. On 64-bit this triggers IOMMU mapping, but 
on 32-bit this case doesn't seem to be handled at all. I believe the 
block and networking layers have their own ways of ensuring that they 
don't feed such buffers to drivers that can't handle them, but a basic 
character device driver is left out in the cold here, and the DMA API 
doesn't appear to work as documented in this case. Given that x86-32 
kernels don't implement any IOMMU support, I'm not sure what it could 
actually do, other than implementing some kind of software bounce 
buffering of its own.
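
To be concrete, the LDD3-style path I mean is roughly the following (a 
minimal sketch: error handling, sub-page offsets and unpinning are 
omitted, and "dev", "uaddr", "pages", "sgl" are placeholders). Nothing 
in it fails or bounces when the pinned pages sit above 4GB:

	/* device can only address 32 bits */
	dma_set_mask(dev, DMA_BIT_MASK(32));

	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages(current, current->mm, uaddr & PAGE_MASK,
				nr_pages, write, 0, pages, NULL);
	up_read(&current->mm->mmap_sem);

	sg_init_table(sgl, pinned);
	for (i = 0; i < pinned; i++)
		sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

	nents = dma_map_sg(dev, sgl, pinned, DMA_TO_DEVICE);
	/* on x86-32 each sg_dma_address() is just sg_phys(), which with
	 * HIGHMEM64G can be above 4GB despite the 32-bit mask */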

Are there any in-tree drivers that use this DMA-mapping-on-
get_user_pages strategy that could be affected by this?

I think the get_user_pages trick is actually pretty silly in this case; 
the size of the data being transferred is likely such that it would be 
just as fast or faster to copy it to a kernel buffer and DMA to/from 
there.
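
I.e. something like this (again just a sketch; how the bus address gets 
handed to the hardware is device-specific and not shown):

	static int xfer_to_device(struct device *dev, void __user *ubuf,
				  size_t len)
	{
		dma_addr_t bus;
		/* coherent allocations respect the device's DMA mask,
		 * so 'bus' is guaranteed to fit in 32 bits */
		void *kbuf = dma_alloc_coherent(dev, len, &bus, GFP_KERNEL);

		if (!kbuf)
			return -ENOMEM;
		if (copy_from_user(kbuf, ubuf, len)) {
			dma_free_coherent(dev, len, kbuf, bus);
			return -EFAULT;
		}
		/* program the device with 'bus', start the DMA and wait
		 * for it to complete (not shown) */
		dma_free_coherent(dev, len, kbuf, bus);
		return 0;
	}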
