From mboxrd@z Thu Jan  1 00:00:00 1970
From: Christoph Hellwig <hch@lst.de>
To: Robin Murphy
Cc: Tom Murphy, Catalin Marinas, Will Deacon,
	linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 14/23] iommu/dma: Merge the CMA and alloc_pages allocation paths
Date: Thu, 23 May 2019 09:00:19 +0200
Message-Id: <20190523070028.7435-15-hch@lst.de>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190523070028.7435-1-hch@lst.de>
References: <20190523070028.7435-1-hch@lst.de>

Instead of having separate code paths for the non-blocking alloc_pages
and CMA allocations, merge them into one.  There is a slight behavior
change here in that we now try the page allocator if CMA fails.  This
matches what dma-direct and other IOMMU drivers do, and will be needed
to use the dma-iommu code on architectures without DMA remapping later
on.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/iommu/dma-iommu.c | 32 ++++++++++++--------------------
 1 file changed, 12 insertions(+), 20 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 3629bc2f59ee..6b8cedae7cff 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -974,7 +974,7 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	bool coherent = dev_is_dma_coherent(dev);
 	int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
 	size_t iosize = size;
-	struct page *page;
+	struct page *page = NULL;
 	void *addr;
 
 	size = PAGE_ALIGN(size);
@@ -984,35 +984,26 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	    !(attrs & DMA_ATTR_FORCE_CONTIGUOUS))
 		return iommu_dma_alloc_remap(dev, iosize, handle, gfp, attrs);
 
-	if (!gfpflags_allow_blocking(gfp)) {
-		/*
-		 * In atomic context we can't remap anything, so we'll only
-		 * get the virtually contiguous buffer we need by way of a
-		 * physically contiguous allocation.
-		 */
-		if (coherent) {
-			page = alloc_pages(gfp, get_order(size));
-			addr = page ? page_address(page) : NULL;
-		} else {
-			addr = dma_alloc_from_pool(size, &page, gfp);
-		}
+	if (!gfpflags_allow_blocking(gfp) && !coherent) {
+		addr = dma_alloc_from_pool(size, &page, gfp);
 		if (!addr)
 			return NULL;
 
 		*handle = __iommu_dma_map(dev, page_to_phys(page), iosize,
 					  ioprot);
 		if (*handle == DMA_MAPPING_ERROR) {
-			if (coherent)
-				__free_pages(page, get_order(size));
-			else
-				dma_free_from_pool(addr, size);
+			dma_free_from_pool(addr, size);
 			return NULL;
 		}
 		return addr;
 	}
 
-	page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
-					 get_order(size), gfp & __GFP_NOWARN);
+	if (gfpflags_allow_blocking(gfp))
+		page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
+						 get_order(size),
+						 gfp & __GFP_NOWARN);
+	if (!page)
+		page = alloc_pages(gfp, get_order(size));
 	if (!page)
 		return NULL;
 
@@ -1038,7 +1029,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 out_unmap:
 	__iommu_dma_unmap(dev, *handle, iosize);
 out_free_pages:
-	dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
+	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
+		__free_pages(page, get_order(size));
 	return NULL;
 }
-- 
2.20.1
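
[Editor's note: for readers following the control-flow change rather than
the individual hunks, the allocate/free pairing the patch converges on can
be condensed as below.  This is a sketch, not a literal excerpt: the two
helper names are hypothetical and introduced only for illustration, while
the callees are the real kernel APIs the diff uses.  It assumes size has
already been PAGE_ALIGN()ed, as it is at this point in iommu_dma_alloc().]

	/*
	 * Hypothetical helper condensing the merged allocation path
	 * from the hunk at line 984 above.
	 */
	static struct page *iommu_dma_alloc_pages_sketch(struct device *dev,
							 size_t size, gfp_t gfp)
	{
		struct page *page = NULL;

		/* CMA may sleep, so only try it when blocking is allowed. */
		if (gfpflags_allow_blocking(gfp))
			page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
							 get_order(size),
							 gfp & __GFP_NOWARN);

		/* The behavior change: fall back to the page allocator. */
		if (!page)
			page = alloc_pages(gfp, get_order(size));
		return page;
	}

	/*
	 * Hypothetical helper condensing the matching release logic from
	 * the out_free_pages hunk above.
	 */
	static void iommu_dma_free_pages_sketch(struct device *dev,
						struct page *page, size_t size)
	{
		/*
		 * dma_release_from_contiguous() returns false when the page
		 * was not a CMA allocation, in which case it belongs to the
		 * buddy allocator and must be freed with __free_pages().
		 */
		if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
			__free_pages(page, get_order(size));
	}

[The release side works for either origin because
dma_release_from_contiguous() reports whether it actually freed the page,
which is exactly what the reworked out_free_pages error path relies on.]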