LKML Archive on lore.kernel.org
* [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
@ 2018-10-24  3:09 Joe Jin
  2018-10-24 13:02 ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 17+ messages in thread
From: Joe Jin @ 2018-10-24  3:09 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, DONGLI.ZHANG, konrad, Christoph Hellwig,
	John Sobecki, Boris Ostrovsky
  Cc: xen-devel, linux-kernel@vger.kernel.org

Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
xen_swiotlb_free_coherent" only fixed memory address check condition
on xen_swiotlb_free_coherent(), when memory was not physically
contiguous and tried to exchanged with Xen via 
xen_destroy_contiguous_region it will lead kernel panic.

The correct check condition should be memory is in DMA area and
physically contiguous.

Thank you Boris for pointing it out.

Signed-off-by: Joe Jin <joe.jin@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Dongli Zhang <dongli.zhang@oracle.com>
Cc: John Sobecki <john.sobecki@oracle.com>
---
 drivers/xen/swiotlb-xen.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index f5c1af4ce9ab..aed92fa019f9 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
-	if (((dev_addr + size - 1 <= dma_mask)) ||
-	    range_straddles_page_boundary(phys, size))
+	if ((dev_addr + size - 1 <= dma_mask) &&
+	    !range_straddles_page_boundary(phys, size))
 		xen_destroy_contiguous_region(phys, order);
 
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
-- 
2.17.1 (Apple Git-112)


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-24  3:09 [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous Joe Jin
@ 2018-10-24 13:02 ` Konrad Rzeszutek Wilk
  2018-10-24 13:57   ` Boris Ostrovsky
  2018-10-24 16:05   ` Joe Jin
  0 siblings, 2 replies; 17+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-10-24 13:02 UTC (permalink / raw)
  To: Joe Jin
  Cc: DONGLI.ZHANG, konrad, Christoph Hellwig, John Sobecki,
	Boris Ostrovsky, xen-devel, linux-kernel@vger.kernel.org

On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
> xen_swiotlb_free_coherent" only fixed memory address check condition
> on xen_swiotlb_free_coherent(), when memory was not physically
> contiguous and tried to exchanged with Xen via 
> xen_destroy_contiguous_region it will lead kernel panic.

s/it will lead/which lead to/?

> 
> The correct check condition should be memory is in DMA area and
> physically contiguous.

"The correct check condition to make Xen hypercall to revert the
memory back from its 32-bit pool is if it is:
 1) Above its DMA bit mask (for example 32-bit devices can only address
up to 4GB, and we may want 4GB+2K), and
 2) If it is not physically contiguous

N.B. The logic in the code is inverted, which leads to all sorts of
confusions."

Does that sound correct?
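Read as a standalone toy model (straddles() below is a deliberately
simplified stand-in for range_straddles_page_boundary(), and
needs_exchange() is a hypothetical name, not kernel code), the OR
combination of those two conditions is the negation of the fast-path
check in xen_swiotlb_alloc_coherent():

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define XEN_PAGE_SIZE 4096UL

/* Simplified stand-in for range_straddles_page_boundary(): true when the
 * buffer crosses a Xen page boundary.  (The real helper also checks that
 * the underlying machine frames are contiguous.) */
static bool straddles(uint64_t phys, size_t size)
{
	return ((phys & (XEN_PAGE_SIZE - 1)) + size) > XEN_PAGE_SIZE;
}

/* Hypothetical predicate: the region has to be exchanged back with Xen
 * when either condition failed at allocation time -- it reaches above
 * the device's DMA mask, or it is not contiguous. */
static bool needs_exchange(uint64_t dev_addr, uint64_t phys, size_t size,
			   uint64_t dma_mask)
{
	return (dev_addr + size - 1 > dma_mask) || straddles(phys, size);
}
```

Whether "and" or "or" is the right connective is exactly what the
follow-ups question.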

> 
> Thank you Boris for pointing it out.
> 

Fixes: 4855c92dbb7 ("xen-sw..") ?

> Signed-off-by: Joe Jin <joe.jin@oracle.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reported-by: Boris Ostrovs... ?
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Dongli Zhang <dongli.zhang@oracle.com>
> Cc: John Sobecki <john.sobecki@oracle.com>
> ---
>  drivers/xen/swiotlb-xen.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index f5c1af4ce9ab..aed92fa019f9 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>  	/* Convert the size to actually allocated. */
>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>  
> -	if (((dev_addr + size - 1 <= dma_mask)) ||
> -	    range_straddles_page_boundary(phys, size))
> +	if ((dev_addr + size - 1 <= dma_mask) &&
> +	    !range_straddles_page_boundary(phys, size))
>  		xen_destroy_contiguous_region(phys, order);
>  
>  	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
> -- 
> 2.17.1 (Apple Git-112)
> 


* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-24 13:02 ` Konrad Rzeszutek Wilk
@ 2018-10-24 13:57   ` Boris Ostrovsky
  2018-10-24 14:43     ` Joe Jin
  2018-10-24 16:05   ` Joe Jin
  1 sibling, 1 reply; 17+ messages in thread
From: Boris Ostrovsky @ 2018-10-24 13:57 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Joe Jin
  Cc: DONGLI.ZHANG, konrad, Christoph Hellwig, John Sobecki, xen-devel,
	linux-kernel@vger.kernel.org

On 10/24/18 9:02 AM, Konrad Rzeszutek Wilk wrote:
> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>> xen_swiotlb_free_coherent" only fixed memory address check condition
>> on xen_swiotlb_free_coherent(), when memory was not physically
>> contiguous and tried to exchanged with Xen via 
>> xen_destroy_contiguous_region it will lead kernel panic.
> s/it will lead/which lead to/?
>
>> The correct check condition should be memory is in DMA area and
>> physically contiguous.
> "The correct check condition to make Xen hypercall to revert the
> memory back from its 32-bit pool is if it is:
>  1) Above its DMA bit mask (for example 32-bit devices can only address
> up to 4GB, and we may want 4GB+2K), and

Is this 'and' or 'or'?

>  2) If it is not physically contiguous
>
> N.B. The logic in the code is inverted, which leads to all sorts of
> confusions."


I would, in fact, suggest to make the logic the same in both
xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() to avoid
this. This will involve swapping if and else in the former.
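As a sketch only (xen_dma_capable() is a hypothetical helper name,
straddles() a simplified stand-in for range_straddles_page_boundary();
this is a toy model, not kernel code), sharing one predicate between the
two paths could look like:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define XEN_PAGE_SIZE 4096UL

/* Simplified stand-in for range_straddles_page_boundary(). */
static bool straddles(uint64_t phys, size_t size)
{
	return ((phys & (XEN_PAGE_SIZE - 1)) + size) > XEN_PAGE_SIZE;
}

/*
 * Hypothetical shared helper: true when the buffer is already usable for
 * DMA as allocated.  Both call sites would branch on the same predicate,
 * so alloc and free cannot drift apart:
 *
 *   alloc: if (!xen_dma_capable(...)) xen_create_contiguous_region(...);
 *   free:  if (!xen_dma_capable(...)) xen_destroy_contiguous_region(...);
 */
static bool xen_dma_capable(uint64_t dev_addr, uint64_t phys, size_t size,
			    uint64_t dma_mask)
{
	return (dev_addr + size - 1 <= dma_mask) && !straddles(phys, size);
}
```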


>
> Does that sound correct?
>
>> Thank you Boris for pointing it out.
>>
> Fixes: 4855c92dbb7 ("xen-sw..") ?
>
>> Signed-off-by: Joe Jin <joe.jin@oracle.com>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reported-by: Boris Ostrovs... ?
>> Cc: Christoph Hellwig <hch@lst.de>
>> Cc: Dongli Zhang <dongli.zhang@oracle.com>
>> Cc: John Sobecki <john.sobecki@oracle.com>
>> ---
>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>> index f5c1af4ce9ab..aed92fa019f9 100644
>> --- a/drivers/xen/swiotlb-xen.c
>> +++ b/drivers/xen/swiotlb-xen.c
>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>  	/* Convert the size to actually allocated. */
>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>  
>> -	if (((dev_addr + size - 1 <= dma_mask)) ||
>> -	    range_straddles_page_boundary(phys, size))
>> +	if ((dev_addr + size - 1 <= dma_mask) &&
>> +	    !range_straddles_page_boundary(phys, size))
>>  		xen_destroy_contiguous_region(phys, order);


I don't think this is right.

if ((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size))

No?

-boris





>>  
>>  	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
>> -- 
>> 2.17.1 (Apple Git-112)
>>



* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-24 13:57   ` Boris Ostrovsky
@ 2018-10-24 14:43     ` Joe Jin
  2018-10-25 11:45       ` Boris Ostrovsky
  0 siblings, 1 reply; 17+ messages in thread
From: Joe Jin @ 2018-10-24 14:43 UTC (permalink / raw)
  To: Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: DONGLI.ZHANG, konrad, Christoph Hellwig, John Sobecki, xen-devel,
	linux-kernel@vger.kernel.org

On 10/24/18 6:57 AM, Boris Ostrovsky wrote:
> On 10/24/18 9:02 AM, Konrad Rzeszutek Wilk wrote:
>> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>>> xen_swiotlb_free_coherent" only fixed memory address check condition
>>> on xen_swiotlb_free_coherent(), when memory was not physically
>>> contiguous and tried to exchanged with Xen via 
>>> xen_destroy_contiguous_region it will lead kernel panic.
>> s/it will lead/which lead to/?
>>
>>> The correct check condition should be memory is in DMA area and
>>> physically contiguous.
>> "The correct check condition to make Xen hypercall to revert the
>> memory back from its 32-bit pool is if it is:
>>  1) Above its DMA bit mask (for example 32-bit devices can only address
>> up to 4GB, and we may want 4GB+2K), and
> 
> Is this 'and' or 'or'?
> 
>>  2) If it is not physically contiguous
>>
>> N.B. The logic in the code is inverted, which leads to all sorts of
>> confusions."
> 
> 
> I would, in fact, suggest to make the logic the same in both
> xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() to avoid
> this. This will involve swapping if and else in the former.
> 
> 
>>
>> Does that sound correct?
>>
>>> Thank you Boris for pointing it out.
>>>
>> Fixes: 4855c92dbb7 ("xen-sw..") ?
>>
>>> Signed-off-by: Joe Jin <joe.jin@oracle.com>
>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Reported-by: Boris Ostrovs... ?
>>> Cc: Christoph Hellwig <hch@lst.de>
>>> Cc: Dongli Zhang <dongli.zhang@oracle.com>
>>> Cc: John Sobecki <john.sobecki@oracle.com>
>>> ---
>>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>> index f5c1af4ce9ab..aed92fa019f9 100644
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>  	/* Convert the size to actually allocated. */
>>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>  
>>> -	if (((dev_addr + size - 1 <= dma_mask)) ||
>>> -	    range_straddles_page_boundary(phys, size))
>>> +	if ((dev_addr + size - 1 <= dma_mask) &&
>>> +	    !range_straddles_page_boundary(phys, size))
>>>  		xen_destroy_contiguous_region(phys, order);
> 
> 
> I don't think this is right.
> 
> if ((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size))
> 
> No?

No, this is not correct.

When allocating memory, it first tries a normal allocation from Dom0/the
guest, then checks whether the physical address is within the DMA mask and
physically contiguous; if not, the memory is exchanged with the hypervisor,
as in the code below:

	phys = *dma_handle;
	dev_addr = xen_phys_to_bus(phys);
	if (((dev_addr + size - 1 <= dma_mask)) &&
	    !range_straddles_page_boundary(phys, size))
		*dma_handle = dev_addr;
	else {
		if (xen_create_contiguous_region(phys, order,
						 fls64(dma_mask), dma_handle) != 0) {
			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
			return NULL;
		}
	}

On freeing, the memory needs to be returned to Xen, otherwise the DMA memory
will be used up (this is the issue the patch intends to fix), so when the
memory is DMAable and contiguous, xen_destroy_contiguous_region() is called
to return the DMA memory to Xen.
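The alloc-path decision in the excerpt reduces to one predicate; in a toy
model (straddles() is a simplified stand-in for
range_straddles_page_boundary(), and the function name is hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define XEN_PAGE_SIZE 4096UL

/* Simplified stand-in for range_straddles_page_boundary(). */
static bool straddles(uint64_t phys, size_t size)
{
	return ((phys & (XEN_PAGE_SIZE - 1)) + size) > XEN_PAGE_SIZE;
}

/* Hypothetical predicate mirroring the excerpt above: the else branch
 * (xen_create_contiguous_region()) runs only when the buffer is NOT
 * already under the mask and contiguous. */
static bool exchanged_at_alloc(uint64_t dev_addr, uint64_t phys,
			       size_t size, uint64_t dma_mask)
{
	return !((dev_addr + size - 1 <= dma_mask) && !straddles(phys, size));
}
```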

Thanks,
Joe


* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-24 13:02 ` Konrad Rzeszutek Wilk
  2018-10-24 13:57   ` Boris Ostrovsky
@ 2018-10-24 16:05   ` Joe Jin
  1 sibling, 0 replies; 17+ messages in thread
From: Joe Jin @ 2018-10-24 16:05 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: DONGLI.ZHANG, konrad, Christoph Hellwig, John Sobecki,
	Boris Ostrovsky, xen-devel, linux-kernel@vger.kernel.org

On 10/24/18 6:02 AM, Konrad Rzeszutek Wilk wrote:
> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>> xen_swiotlb_free_coherent" only fixed memory address check condition
>> on xen_swiotlb_free_coherent(), when memory was not physically
>> contiguous and tried to exchanged with Xen via 
>> xen_destroy_contiguous_region it will lead kernel panic.
> 
> s/it will lead/which lead to/?
> 
>>
>> The correct check condition should be memory is in DMA area and
>> physically contiguous.
> 
> "The correct check condition to make Xen hypercall to revert the
> memory back from its 32-bit pool is if it is:
>  1) Above its DMA bit mask (for example 32-bit devices can only address
> up to 4GB, and we may want 4GB+2K), and
>  2) If it is not physically contiguous

Should this be "physically contiguous"? On allocating, it asks for
physically contiguous memory for DMA; if the memory is not contiguous,
DMA cannot be done. Am I right?

And range_straddles_page_boundary() returning 0 means the memory is
physically contiguous, is that correct?

Thanks,
Joe

> 
> N.B. The logic in the code is inverted, which leads to all sorts of
> confusions."
> 
> Does that sound correct?
> 
>>
>> Thank you Boris for pointing it out.
>>
> 
> Fixes: 4855c92dbb7 ("xen-sw..") ?
> 
>> Signed-off-by: Joe Jin <joe.jin@oracle.com>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> 
> Reported-by: Boris Ostrovs... ?
>> Cc: Christoph Hellwig <hch@lst.de>
>> Cc: Dongli Zhang <dongli.zhang@oracle.com>
>> Cc: John Sobecki <john.sobecki@oracle.com>
>> ---
>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>> index f5c1af4ce9ab..aed92fa019f9 100644
>> --- a/drivers/xen/swiotlb-xen.c
>> +++ b/drivers/xen/swiotlb-xen.c
>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>  	/* Convert the size to actually allocated. */
>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>  
>> -	if (((dev_addr + size - 1 <= dma_mask)) ||
>> -	    range_straddles_page_boundary(phys, size))
>> +	if ((dev_addr + size - 1 <= dma_mask) &&
>> +	    !range_straddles_page_boundary(phys, size))
>>  		xen_destroy_contiguous_region(phys, order);
>>  
>>  	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
>> -- 
>> 2.17.1 (Apple Git-112)
>>


* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-24 14:43     ` Joe Jin
@ 2018-10-25 11:45       ` Boris Ostrovsky
  2018-10-25 14:23         ` Joe Jin
  0 siblings, 1 reply; 17+ messages in thread
From: Boris Ostrovsky @ 2018-10-25 11:45 UTC (permalink / raw)
  To: Joe Jin, Konrad Rzeszutek Wilk
  Cc: DONGLI.ZHANG, konrad, Christoph Hellwig, John Sobecki, xen-devel,
	linux-kernel@vger.kernel.org

On 10/24/18 10:43 AM, Joe Jin wrote:
> On 10/24/18 6:57 AM, Boris Ostrovsky wrote:
>> On 10/24/18 9:02 AM, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>>>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>>>> xen_swiotlb_free_coherent" only fixed memory address check condition
>>>> on xen_swiotlb_free_coherent(), when memory was not physically
>>>> contiguous and tried to exchanged with Xen via 
>>>> xen_destroy_contiguous_region it will lead kernel panic.
>>> s/it will lead/which lead to/?
>>>
>>>> The correct check condition should be memory is in DMA area and
>>>> physically contiguous.
>>> "The correct check condition to make Xen hypercall to revert the
>>> memory back from its 32-bit pool is if it is:
>>>  1) Above its DMA bit mask (for example 32-bit devices can only address
>>> up to 4GB, and we may want 4GB+2K), and
>> Is this 'and' or 'or'?
>>
>>>  2) If it is not physically contiguous
>>>
>>> N.B. The logic in the code is inverted, which leads to all sorts of
>>> confusions."
>>
>> I would, in fact, suggest to make the logic the same in both
>> xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() to avoid
>> this. This will involve swapping if and else in the former.
>>
>>
>>> Does that sound correct?
>>>
>>>> Thank you Boris for pointing it out.
>>>>
>>> Fixes: 4855c92dbb7 ("xen-sw..") ?
>>>
>>>> Signed-off-by: Joe Jin <joe.jin@oracle.com>
>>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>> Reported-by: Boris Ostrovs... ?
>>>> Cc: Christoph Hellwig <hch@lst.de>
>>>> Cc: Dongli Zhang <dongli.zhang@oracle.com>
>>>> Cc: John Sobecki <john.sobecki@oracle.com>
>>>> ---
>>>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>>> index f5c1af4ce9ab..aed92fa019f9 100644
>>>> --- a/drivers/xen/swiotlb-xen.c
>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>>  	/* Convert the size to actually allocated. */
>>>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>>  
>>>> -	if (((dev_addr + size - 1 <= dma_mask)) ||
>>>> -	    range_straddles_page_boundary(phys, size))
>>>> +	if ((dev_addr + size - 1 <= dma_mask) &&
>>>> +	    !range_straddles_page_boundary(phys, size))
>>>>  		xen_destroy_contiguous_region(phys, order);
>>
>> I don't think this is right.
>>
>> if ((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size))
>>
>> No?
> No, this is not correct.
>
> When allocating memory, it first tries a normal allocation from Dom0/the
> guest, then checks whether the physical address is within the DMA mask and
> physically contiguous; if not, the memory is exchanged with the hypervisor,
> as in the code below:
>
> 	phys = *dma_handle;
> 	dev_addr = xen_phys_to_bus(phys);
> 	if (((dev_addr + size - 1 <= dma_mask)) &&
> 	    !range_straddles_page_boundary(phys, size))
> 		*dma_handle = dev_addr;
> 	else {
> 		if (xen_create_contiguous_region(phys, order,
> 						 fls64(dma_mask), dma_handle) != 0) {
> 			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
> 			return NULL;
> 		}
> 	}
>
> On freeing, the memory needs to be returned to Xen, otherwise the DMA memory
> will be used up (this is the issue the patch intends to fix), so when the
> memory is DMAable and contiguous, xen_destroy_contiguous_region() is called
> to return the DMA memory to Xen.

So if you want to allocate 1 byte at address 0 (and dev_addr=phys),
xen_create_contiguous_region() will not be called. And yet you will call
xen_destroy_contiguous_region() in the free path.

Is this the expected behavior?
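The counterexample can be checked mechanically against the two predicates
as the patch leaves them (toy model; straddles() is a simplified stand-in
for range_straddles_page_boundary(), and the function names are
hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define XEN_PAGE_SIZE 4096UL

/* Simplified stand-in for range_straddles_page_boundary(). */
static bool straddles(uint64_t phys, size_t size)
{
	return ((phys & (XEN_PAGE_SIZE - 1)) + size) > XEN_PAGE_SIZE;
}

/* Alloc path (unchanged by the patch): a region is created only when the
 * buffer is not already DMA-capable. */
static bool created(uint64_t dev_addr, uint64_t phys, size_t size,
		    uint64_t dma_mask)
{
	return !((dev_addr + size - 1 <= dma_mask) && !straddles(phys, size));
}

/* Free path as the patch proposes: destroy when the buffer IS
 * DMA-capable -- the opposite condition. */
static bool destroyed(uint64_t dev_addr, uint64_t phys, size_t size,
		      uint64_t dma_mask)
{
	return (dev_addr + size - 1 <= dma_mask) && !straddles(phys, size);
}
```

For that buffer the create/destroy calls do not pair up.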

-boris


* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-25 11:45       ` Boris Ostrovsky
@ 2018-10-25 14:23         ` Joe Jin
  2018-10-25 16:10           ` Boris Ostrovsky
  0 siblings, 1 reply; 17+ messages in thread
From: Joe Jin @ 2018-10-25 14:23 UTC (permalink / raw)
  To: Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: DONGLI.ZHANG, konrad, Christoph Hellwig, John Sobecki, xen-devel,
	linux-kernel@vger.kernel.org

On 10/25/18 4:45 AM, Boris Ostrovsky wrote:
> On 10/24/18 10:43 AM, Joe Jin wrote:
>> On 10/24/18 6:57 AM, Boris Ostrovsky wrote:
>>> On 10/24/18 9:02 AM, Konrad Rzeszutek Wilk wrote:
>>>> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>>>>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>>>>> xen_swiotlb_free_coherent" only fixed memory address check condition
>>>>> on xen_swiotlb_free_coherent(), when memory was not physically
>>>>> contiguous and tried to exchanged with Xen via 
>>>>> xen_destroy_contiguous_region it will lead kernel panic.
>>>> s/it will lead/which lead to/?
>>>>
>>>>> The correct check condition should be memory is in DMA area and
>>>>> physically contiguous.
>>>> "The correct check condition to make Xen hypercall to revert the
>>>> memory back from its 32-bit pool is if it is:
>>>>  1) Above its DMA bit mask (for example 32-bit devices can only address
>>>> up to 4GB, and we may want 4GB+2K), and
>>> Is this 'and' or 'or'?
>>>
>>>>  2) If it is not physically contiguous
>>>>
>>>> N.B. The logic in the code is inverted, which leads to all sorts of
>>>> confusions."
>>>
>>> I would, in fact, suggest to make the logic the same in both
>>> xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() to avoid
>>> this. This will involve swapping if and else in the former.
>>>
>>>
>>>> Does that sound correct?
>>>>
>>>>> Thank you Boris for pointing it out.
>>>>>
>>>> Fixes: 4855c92dbb7 ("xen-sw..") ?
>>>>
>>>>> Signed-off-by: Joe Jin <joe.jin@oracle.com>
>>>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>> Reported-by: Boris Ostrovs... ?
>>>>> Cc: Christoph Hellwig <hch@lst.de>
>>>>> Cc: Dongli Zhang <dongli.zhang@oracle.com>
>>>>> Cc: John Sobecki <john.sobecki@oracle.com>
>>>>> ---
>>>>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>>>> index f5c1af4ce9ab..aed92fa019f9 100644
>>>>> --- a/drivers/xen/swiotlb-xen.c
>>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>>>  	/* Convert the size to actually allocated. */
>>>>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>>>  
>>>>> -	if (((dev_addr + size - 1 <= dma_mask)) ||
>>>>> -	    range_straddles_page_boundary(phys, size))
>>>>> +	if ((dev_addr + size - 1 <= dma_mask) &&
>>>>> +	    !range_straddles_page_boundary(phys, size))
>>>>>  		xen_destroy_contiguous_region(phys, order);
>>>
>>> I don't think this is right.
>>>
>>> if ((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size))
>>>
>>> No?
>> No, this is not correct.
>>
>> When allocating memory, it first tries a normal allocation from Dom0/the
>> guest, then checks whether the physical address is within the DMA mask and
>> physically contiguous; if not, the memory is exchanged with the hypervisor,
>> as in the code below:
>>
>> 	phys = *dma_handle;
>> 	dev_addr = xen_phys_to_bus(phys);
>> 	if (((dev_addr + size - 1 <= dma_mask)) &&
>> 	    !range_straddles_page_boundary(phys, size))
>> 		*dma_handle = dev_addr;
>> 	else {
>> 		if (xen_create_contiguous_region(phys, order,
>> 						 fls64(dma_mask), dma_handle) != 0) {
>> 			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
>> 			return NULL;
>> 		}
>> 	}
>>
>> On freeing, the memory needs to be returned to Xen, otherwise the DMA memory
>> will be used up (this is the issue the patch intends to fix), so when the
>> memory is DMAable and contiguous, xen_destroy_contiguous_region() is called
>> to return the DMA memory to Xen.
> 
> So if you want to allocate 1 byte at address 0 (and dev_addr=phys),
> xen_create_contiguous_region() will not be called. And yet you will call
> xen_destroy_contiguous_region() in the free path.
> 
> Is this the expected behavior?

I could not say it's expected behavior, but I think it's reasonable.

On allocating, __get_free_pages() is used to allocate memory; if we are lucky
the memory is DMAable and no exchange with the hypervisor is needed, but
obviously this is not guaranteed.

And on freeing, it cannot be identified whether the memory came from Dom0/guest
memory or from the hypervisor; if the memory is not given back to the
hypervisor, the hypervisor's DMA memory will eventually be used up, DMA
requests in Dom0/guests may then fail, and worse, no new guest can be started.

Thanks,
Joe

> 
> -boris
> 


* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-25 14:23         ` Joe Jin
@ 2018-10-25 16:10           ` Boris Ostrovsky
  2018-10-25 16:28             ` Joe Jin
  0 siblings, 1 reply; 17+ messages in thread
From: Boris Ostrovsky @ 2018-10-25 16:10 UTC (permalink / raw)
  To: Joe Jin, Konrad Rzeszutek Wilk
  Cc: DONGLI.ZHANG, konrad, Christoph Hellwig, John Sobecki, xen-devel,
	linux-kernel@vger.kernel.org

On 10/25/18 10:23 AM, Joe Jin wrote:
> On 10/25/18 4:45 AM, Boris Ostrovsky wrote:
>> On 10/24/18 10:43 AM, Joe Jin wrote:
>>> On 10/24/18 6:57 AM, Boris Ostrovsky wrote:
>>>> On 10/24/18 9:02 AM, Konrad Rzeszutek Wilk wrote:
>>>>> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>>>>>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>>>>>> xen_swiotlb_free_coherent" only fixed memory address check condition
>>>>>> on xen_swiotlb_free_coherent(), when memory was not physically
>>>>>> contiguous and tried to exchanged with Xen via 
>>>>>> xen_destroy_contiguous_region it will lead kernel panic.
>>>>> s/it will lead/which lead to/?
>>>>>
>>>>>> The correct check condition should be memory is in DMA area and
>>>>>> physically contiguous.
>>>>> "The correct check condition to make Xen hypercall to revert the
>>>>> memory back from its 32-bit pool is if it is:
>>>>>  1) Above its DMA bit mask (for example 32-bit devices can only address
>>>>> up to 4GB, and we may want 4GB+2K), and
>>>> Is this 'and' or 'or'?
>>>>
>>>>>  2) If it is not physically contiguous
>>>>>
>>>>> N.B. The logic in the code is inverted, which leads to all sorts of
>>>>> confusions."
>>>> I would, in fact, suggest to make the logic the same in both
>>>> xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() to avoid
>>>> this. This will involve swapping if and else in the former.
>>>>
>>>>
>>>>> Does that sound correct?
>>>>>
>>>>>> Thank you Boris for pointing it out.
>>>>>>
>>>>> Fixes: 4855c92dbb7 ("xen-sw..") ?
>>>>>
>>>>>> Signed-off-by: Joe Jin <joe.jin@oracle.com>
>>>>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>>> Reported-by: Boris Ostrovs... ?
>>>>>> Cc: Christoph Hellwig <hch@lst.de>
>>>>>> Cc: Dongli Zhang <dongli.zhang@oracle.com>
>>>>>> Cc: John Sobecki <john.sobecki@oracle.com>
>>>>>> ---
>>>>>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>>>>> index f5c1af4ce9ab..aed92fa019f9 100644
>>>>>> --- a/drivers/xen/swiotlb-xen.c
>>>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>>>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>>>>  	/* Convert the size to actually allocated. */
>>>>>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>>>>  
>>>>>> -	if (((dev_addr + size - 1 <= dma_mask)) ||
>>>>>> -	    range_straddles_page_boundary(phys, size))
>>>>>> +	if ((dev_addr + size - 1 <= dma_mask) &&
>>>>>> +	    !range_straddles_page_boundary(phys, size))
>>>>>>  		xen_destroy_contiguous_region(phys, order);
>>>> I don't think this is right.
>>>>
>>>> if ((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size))
>>>>
>>>> No?
>>> No, this is not correct.
>>>
>>> When allocating memory, it first tries a normal allocation from Dom0/the
>>> guest, then checks whether the physical address is within the DMA mask and
>>> physically contiguous; if not, the memory is exchanged with the hypervisor,
>>> as in the code below:
>>>
>>> 	phys = *dma_handle;
>>> 	dev_addr = xen_phys_to_bus(phys);
>>> 	if (((dev_addr + size - 1 <= dma_mask)) &&
>>> 	    !range_straddles_page_boundary(phys, size))
>>> 		*dma_handle = dev_addr;
>>> 	else {
>>> 		if (xen_create_contiguous_region(phys, order,
>>> 						 fls64(dma_mask), dma_handle) != 0) {
>>> 			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
>>> 			return NULL;
>>> 		}
>>> 	}
>>>
>>> On freeing, the memory needs to be returned to Xen, otherwise the DMA memory
>>> will be used up (this is the issue the patch intends to fix), so when the
>>> memory is DMAable and contiguous, xen_destroy_contiguous_region() is called
>>> to return the DMA memory to Xen.
>> So if you want to allocate 1 byte at address 0 (and dev_addr=phys),
>> xen_create_contiguous_region() will not be called. And yet you will call
>> xen_destroy_contiguous_region() in the free path.
>>
>> Is this the expected behavior?
> I could not say it's expected behavior, but I think it's reasonable.

I would expect xen_create_contiguous_region() and
xen_destroy_contiguous_region() to come in pairs. If a region is
created, it needs to be destroyed. And vice versa.


>
> On allocating, __get_free_pages() is used to allocate memory; if we are lucky
> the memory is DMAable and no exchange with the hypervisor is needed, but
> obviously this is not guaranteed.
>
> And on freeing, it cannot be identified whether the memory came from Dom0/guest
> memory or from the hypervisor


I think it can be. if (!(dev_addr + size - 1 <= dma_mask) ||
range_straddles_page_boundary()) then it must have come from the
hypervisor, because that's the check we make in
xen_swiotlb_alloc_coherent().
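In toy-model form (straddles() is a simplified stand-in for
range_straddles_page_boundary(), the function names are hypothetical, and
this assumes the free path sees the same dev_addr/phys the alloc path
computed):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define XEN_PAGE_SIZE 4096UL

/* Simplified stand-in for range_straddles_page_boundary(). */
static bool straddles(uint64_t phys, size_t size)
{
	return ((phys & (XEN_PAGE_SIZE - 1)) + size) > XEN_PAGE_SIZE;
}

/* Fast-path check made in xen_swiotlb_alloc_coherent(). */
static bool alloc_fast_path(uint64_t dev_addr, uint64_t phys, size_t size,
			    uint64_t dma_mask)
{
	return (dev_addr + size - 1 <= dma_mask) && !straddles(phys, size);
}

/* The identification proposed here: the memory must have come from the
 * hypervisor exactly when the alloc fast path was not taken. */
static bool came_from_hypervisor(uint64_t dev_addr, uint64_t phys,
				 size_t size, uint64_t dma_mask)
{
	return !(dev_addr + size - 1 <= dma_mask) || straddles(phys, size);
}
```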


-boris


> ; if the memory is not given back to the hypervisor, the hypervisor's DMA
> memory will eventually be used up, DMA requests in Dom0/guests may then fail,
> and worse, no new guest can be started.
>
> Thanks,
> Joe
>
>> -boris
>>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-25 16:10           ` Boris Ostrovsky
@ 2018-10-25 16:28             ` Joe Jin
  2018-10-25 18:56               ` Joe Jin
  0 siblings, 1 reply; 17+ messages in thread
From: Joe Jin @ 2018-10-25 16:28 UTC (permalink / raw)
  To: Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: DONGLI.ZHANG, konrad, Christoph Helwig, John Sobecki, xen-devel,
	linux-kernel@vger.kernel.org"

On 10/25/18 9:10 AM, Boris Ostrovsky wrote:
> On 10/25/18 10:23 AM, Joe Jin wrote:
>> On 10/25/18 4:45 AM, Boris Ostrovsky wrote:
>>> On 10/24/18 10:43 AM, Joe Jin wrote:
>>>> On 10/24/18 6:57 AM, Boris Ostrovsky wrote:
>>>>> On 10/24/18 9:02 AM, Konrad Rzeszutek Wilk wrote:
>>>>>> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>>>>>>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>>>>>>> xen_swiotlb_free_coherent" only fixed memory address check condition
>>>>>>> on xen_swiotlb_free_coherent(), when memory was not physically
>>>>>>> contiguous and tried to exchanged with Xen via 
>>>>>>> xen_destroy_contiguous_region it will lead kernel panic.
>>>>>> s/it will lead/which lead to/?
>>>>>>
>>>>>>> The correct check condition should be memory is in DMA area and
>>>>>>> physically contiguous.
>>>>>> "The correct check condition to make Xen hypercall to revert the
>>>>>> memory back from its 32-bit pool is if it is:
>>>>>>  1) Above its DMA bit mask (for example 32-bit devices can only address
>>>>>> up to 4GB, and we may want 4GB+2K), and
>>>>> Is this "and' or 'or'?
>>>>>
>>>>>>  2) If it not physically contingous
>>>>>>
>>>>>> N.B. The logic in the code is inverted, which leads to all sorts of
>>>>>> confusions."
>>>>> I would, in fact, suggest to make the logic the same in both
>>>>> xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() to avoid
>>>>> this. This will involve swapping if and else in the former.
>>>>>
>>>>>
>>>>>> Does that sound correct?
>>>>>>
>>>>>>> Thank you Boris for pointing it out.
>>>>>>>
>>>>>> Fixes: 4855c92dbb7 ("xen-sw..") ?
>>>>>>
>>>>>>> Signed-off-by: Joe Jin <joe.jin@oracle.com>
>>>>>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>>>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>>>> Reported-by: Boris Ostrovs... ?
>>>>>>> Cc: Christoph Helwig <hch@lst.de>
>>>>>>> Cc: Dongli Zhang <dongli.zhang@oracle.com>
>>>>>>> Cc: John Sobecki <john.sobecki@oracle.com>
>>>>>>> ---
>>>>>>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>>>>>> index f5c1af4ce9ab..aed92fa019f9 100644
>>>>>>> --- a/drivers/xen/swiotlb-xen.c
>>>>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>>>>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>>>>>  	/* Convert the size to actually allocated. */
>>>>>>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>>>>>  
>>>>>>> -	if (((dev_addr + size - 1 <= dma_mask)) ||
>>>>>>> -	    range_straddles_page_boundary(phys, size))
>>>>>>> +	if ((dev_addr + size - 1 <= dma_mask) &&
>>>>>>> +	    !range_straddles_page_boundary(phys, size))
>>>>>>>  		xen_destroy_contiguous_region(phys, order);
>>>>> I don't think this is right.
>>>>>
>>>>> if ((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size))
>>>>>
>>>>> No?
>>>> No this is not correct.
>>>>
>>>> When allocate memory, it tried to allocated from Dom0/Guest, then check if physical
>>>> address is DMA memory also contiguous, if no, exchange with Hypervisor, code as below:
>>>>
>>>> 326         phys = *dma_handle;                                                     
>>>> 327         dev_addr = xen_phys_to_bus(phys);                                       
>>>> 328         if (((dev_addr + size - 1 <= dma_mask)) &&                              
>>>> 329             !range_straddles_page_boundary(phys, size))                         
>>>> 330                 *dma_handle = dev_addr;                                         
>>>> 331         else {                                                                  
>>>> 332                 if (xen_create_contiguous_region(phys, order,                   
>>>> 333                                                  fls64(dma_mask), dma_handle) != 0) {
>>>> 334                         xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
>>>> 335                         return NULL;                                            
>>>> 336                 }                                                               
>>>> 337         }                                                                       
>>>>                                                                      
>>>>
>>>> On freeing, need to return the memory to Xen, otherwise DMA memory will be used
>>>> up(this is the issue the patch intend to fix), so when memory is DMAable and
>>>> contiguous then call xen_destroy_contiguous_region(), return DMA memory to Xen.
>>> So if you want to allocate 1 byte at address 0 (and dev_addr=phys),
>>> xen_create_contiguous_region() will not be called. And yet you will call
>>> xen_destroy_contiguous_region() in the free path.
>>>
>>> Is this the expected behavior?
>> I could not say it's expected behavior, but I think it's reasonable.
> 
> I would expect xen_create_contiguous_region() and
> xen_destroy_contiguous_region() to come in pairs. If a region is
> created, it needs to be destroyed. And vice versa.
> 
> 
>>
>> On allocating, it used __get_free_pages() to allocate memory, if lucky the memory is 
>> DMAable, will not exchange memory with hypervisor, obviously this is not guaranteed.
>>
>> And on freeing it could not be identified if memory from Dom0/guest own memory
>> or hypervisor
> 
> 
> I think it can be. if (!(dev_addr + size - 1 <= dma_mask) ||
> range_straddles_page_boundary()) then it must have come from the
> hypervisor, because that's the check we make in
> xen_swiotlb_alloc_coherent().

This is not true.

dev_addr came from dma_handle, and *dma_handle is changed after
xen_create_contiguous_region() is called:

2590 int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,        
2591                                  unsigned int address_bits,                     
2592                                  dma_addr_t *dma_handle)                        
2593 {                                                                               
......
2617         success = xen_exchange_memory(1UL << order, 0, in_frames,               
2618                                       1, order, &out_frame,                     
2619                                       address_bits);                            
2620                                                                                 
2621         /* 3. Map the new extent in place of old pages. */                      
2622         if (success)                                                            
2623                 xen_remap_exchanged_ptes(vstart, order, NULL, out_frame);       
2624         else                                                                    
2625                 xen_remap_exchanged_ptes(vstart, order, in_frames, 0);          
2626                                                                                 
2627         spin_unlock_irqrestore(&xen_reservation_lock, flags);                   
2628                                                                                 
2629         *dma_handle = virt_to_machine(vstart).maddr;                            
2630         return success ? 0 : -ENOMEM;                                           
2631 }                                                                               


So the dev_addr checked in xen_swiotlb_alloc_coherent() is not the same
address that is checked in xen_swiotlb_free_coherent().

Thanks,
Joe


> 
> 
> -boris
> 
> 
>> , if don't back memory to hypervisor which will lead hypervisor DMA 
>> memory be used up, then on Dom0/guest, DMA request maybe failed, the worse thing is
>> could not start any new guest.
>>
>> Thanks,
>> Joe
>>
>>> -boris
>>>
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-25 16:28             ` Joe Jin
@ 2018-10-25 18:56               ` Joe Jin
  2018-10-26  7:48                 ` Christoph Helwig
  2018-10-30  2:51                 ` Joe Jin
  0 siblings, 2 replies; 17+ messages in thread
From: Joe Jin @ 2018-10-25 18:56 UTC (permalink / raw)
  To: Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: DONGLI.ZHANG, konrad, Christoph Helwig, John Sobecki, xen-devel,
	linux-kernel@vger.kernel.org"

Hi all,

I just discussed this patch with Boris in private; his opinions (Boris,
please correct me if I misunderstood anything) are:

1. With or without the check, both are incorrect; he thought we need to
   prevent freeing unallocated regions here.
2. On freeing, if the upper layer has already checked that the memory is
   DMA-able, the checks here do not make sense, and we can remove them all.
3. xen_create_contiguous_region() and xen_destroy_contiguous_region()
   should come in pairs.

For #1 and #3, I think we need something to associate them, like a list:
on allocating, add the address to it; on freeing, check whether it is in
the list.

For #2, I did not find anywhere that validates the address on the
dma_free_coherent() call path, and not just in xen-swiotlb.

From my side, I think the checks make sense: they prevent exchanging
non-contiguous memory with Xen and help make sure Xen keeps enough DMA
memory, both for DMA requests and for guest creation. I'm not sure
whether we can merge this patch, to avoid exchanging non-contiguous
memory with Xen in the meantime?

Any input will be appreciated.

Thanks,
Joe 

On 10/25/18 9:28 AM, Joe Jin wrote:
> On 10/25/18 9:10 AM, Boris Ostrovsky wrote:
>> On 10/25/18 10:23 AM, Joe Jin wrote:
>>> On 10/25/18 4:45 AM, Boris Ostrovsky wrote:
>>>> On 10/24/18 10:43 AM, Joe Jin wrote:
>>>>> On 10/24/18 6:57 AM, Boris Ostrovsky wrote:
>>>>>> On 10/24/18 9:02 AM, Konrad Rzeszutek Wilk wrote:
>>>>>>> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>>>>>>>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>>>>>>>> xen_swiotlb_free_coherent" only fixed memory address check condition
>>>>>>>> on xen_swiotlb_free_coherent(), when memory was not physically
>>>>>>>> contiguous and tried to exchanged with Xen via 
>>>>>>>> xen_destroy_contiguous_region it will lead kernel panic.
>>>>>>> s/it will lead/which lead to/?
>>>>>>>
>>>>>>>> The correct check condition should be memory is in DMA area and
>>>>>>>> physically contiguous.
>>>>>>> "The correct check condition to make Xen hypercall to revert the
>>>>>>> memory back from its 32-bit pool is if it is:
>>>>>>>  1) Above its DMA bit mask (for example 32-bit devices can only address
>>>>>>> up to 4GB, and we may want 4GB+2K), and
>>>>>> Is this "and' or 'or'?
>>>>>>
>>>>>>>  2) If it not physically contingous
>>>>>>>
>>>>>>> N.B. The logic in the code is inverted, which leads to all sorts of
>>>>>>> confusions."
>>>>>> I would, in fact, suggest to make the logic the same in both
>>>>>> xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() to avoid
>>>>>> this. This will involve swapping if and else in the former.
>>>>>>
>>>>>>
>>>>>>> Does that sound correct?
>>>>>>>
>>>>>>>> Thank you Boris for pointing it out.
>>>>>>>>
>>>>>>> Fixes: 4855c92dbb7 ("xen-sw..") ?
>>>>>>>
>>>>>>>> Signed-off-by: Joe Jin <joe.jin@oracle.com>
>>>>>>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>>>>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>>>>> Reported-by: Boris Ostrovs... ?
>>>>>>>> Cc: Christoph Helwig <hch@lst.de>
>>>>>>>> Cc: Dongli Zhang <dongli.zhang@oracle.com>
>>>>>>>> Cc: John Sobecki <john.sobecki@oracle.com>
>>>>>>>> ---
>>>>>>>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>>>>>>> index f5c1af4ce9ab..aed92fa019f9 100644
>>>>>>>> --- a/drivers/xen/swiotlb-xen.c
>>>>>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>>>>>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>>>>>>  	/* Convert the size to actually allocated. */
>>>>>>>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>>>>>>  
>>>>>>>> -	if (((dev_addr + size - 1 <= dma_mask)) ||
>>>>>>>> -	    range_straddles_page_boundary(phys, size))
>>>>>>>> +	if ((dev_addr + size - 1 <= dma_mask) &&
>>>>>>>> +	    !range_straddles_page_boundary(phys, size))
>>>>>>>>  		xen_destroy_contiguous_region(phys, order);
>>>>>> I don't think this is right.
>>>>>>
>>>>>> if ((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size))
>>>>>>
>>>>>> No?
>>>>> No this is not correct.
>>>>>
>>>>> When allocate memory, it tried to allocated from Dom0/Guest, then check if physical
>>>>> address is DMA memory also contiguous, if no, exchange with Hypervisor, code as below:
>>>>>
>>>>> 326         phys = *dma_handle;                                                     
>>>>> 327         dev_addr = xen_phys_to_bus(phys);                                       
>>>>> 328         if (((dev_addr + size - 1 <= dma_mask)) &&                              
>>>>> 329             !range_straddles_page_boundary(phys, size))                         
>>>>> 330                 *dma_handle = dev_addr;                                         
>>>>> 331         else {                                                                  
>>>>> 332                 if (xen_create_contiguous_region(phys, order,                   
>>>>> 333                                                  fls64(dma_mask), dma_handle) != 0) {
>>>>> 334                         xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
>>>>> 335                         return NULL;                                            
>>>>> 336                 }                                                               
>>>>> 337         }                                                                       
>>>>>                                                                      
>>>>>
>>>>> On freeing, need to return the memory to Xen, otherwise DMA memory will be used
>>>>> up(this is the issue the patch intend to fix), so when memory is DMAable and
>>>>> contiguous then call xen_destroy_contiguous_region(), return DMA memory to Xen.
>>>> So if you want to allocate 1 byte at address 0 (and dev_addr=phys),
>>>> xen_create_contiguous_region() will not be called. And yet you will call
>>>> xen_destroy_contiguous_region() in the free path.
>>>>
>>>> Is this the expected behavior?
>>> I could not say it's expected behavior, but I think it's reasonable.
>>
>> I would expect xen_create_contiguous_region() and
>> xen_destroy_contiguous_region() to come in pairs. If a region is
>> created, it needs to be destroyed. And vice versa.
>>
>>
>>>
>>> On allocating, it used __get_free_pages() to allocate memory, if lucky the memory is 
>>> DMAable, will not exchange memory with hypervisor, obviously this is not guaranteed.
>>>
>>> And on freeing it could not be identified if memory from Dom0/guest own memory
>>> or hypervisor
>>
>>
>> I think it can be. if (!(dev_addr + size - 1 <= dma_mask) ||
>> range_straddles_page_boundary()) then it must have come from the
>> hypervisor, because that's the check we make in
>> xen_swiotlb_alloc_coherent().
> 
> This is not true.
> 
> dev_addr was came from dma_handle, *dma_handle will be changed  after called
> xen_create_contiguous_region():
> 
> 2590 int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,        
> 2591                                  unsigned int address_bits,                     
> 2592                                  dma_addr_t *dma_handle)                        
> 2593 {                                                                               
> ......
> 2617         success = xen_exchange_memory(1UL << order, 0, in_frames,               
> 2618                                       1, order, &out_frame,                     
> 2619                                       address_bits);                            
> 2620                                                                                 
> 2621         /* 3. Map the new extent in place of old pages. */                      
> 2622         if (success)                                                            
> 2623                 xen_remap_exchanged_ptes(vstart, order, NULL, out_frame);       
> 2624         else                                                                    
> 2625                 xen_remap_exchanged_ptes(vstart, order, in_frames, 0);          
> 2626                                                                                 
> 2627         spin_unlock_irqrestore(&xen_reservation_lock, flags);                   
> 2628                                                                                 
> 2629         *dma_handle = virt_to_machine(vstart).maddr;                            
> 2630         return success ? 0 : -ENOMEM;                                           
> 2631 }                                                                               
> 
> 
> So means dev_addr check on xen_swiotlb_alloc_coherent() is not same one on
> xen_swiotlb_free_coherent().
> 
> Thanks,
> Joe
> 
> 
>>
>>
>> -boris
>>
>>
>>> , if don't back memory to hypervisor which will lead hypervisor DMA 
>>> memory be used up, then on Dom0/guest, DMA request maybe failed, the worse thing is
>>> could not start any new guest.
>>>
>>> Thanks,
>>> Joe
>>>
>>>> -boris
>>>>
>>


-- 
Oracle <http://www.oracle.com>
Joe Jin | Software Development Director 
ORACLE | Linux and Virtualization
500 Oracle Parkway Redwood City, CA US 94065

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-25 18:56               ` Joe Jin
@ 2018-10-26  7:48                 ` Christoph Helwig
  2018-10-26  8:54                   ` Dongli Zhang
  2018-10-26 14:40                   ` Joe Jin
  2018-10-30  2:51                 ` Joe Jin
  1 sibling, 2 replies; 17+ messages in thread
From: Christoph Helwig @ 2018-10-26  7:48 UTC (permalink / raw)
  To: Joe Jin
  Cc: Boris Ostrovsky, Konrad Rzeszutek Wilk, DONGLI.ZHANG, konrad,
	Christoph Helwig, John Sobecki, xen-devel, linux-kernel

On Thu, Oct 25, 2018 at 11:56:02AM -0700, Joe Jin wrote:
> I just discussed this patch with Boris in private, his opinions(Boris,
> please correct me if any misunderstood) are:
> 
> 1. With/without the check, both are incorrect, he thought we need to
>    prevented unalloc'd free at here. 
> 2. On freeing, if upper layer already checked the memory was DMA-able,
>    the checking at here does not make sense, we can remove all checks.
> 3. xen_create_contiguous_region() and xen_destroy_contiguous_region()
>    to come in pairs.
> 
> For #1 and #3, I think we need something associate it, like a list, on
> allocating, add addr to it, on freeing, check if in the list.

Is there any way to figure out based on an address if the exchange
operation happened?

> For #2, I'm was not found anywhere validated the address on 
> dma_free_coherent() callpath, not just xen-swiotlb.

At least for simple direct mappings there is no easy way to verify that
without keeping a list, and for some of the ops that do vmap-like
operations we have basic sanity checks, but nothing that really catches
a wrong free.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-26  7:48                 ` Christoph Helwig
@ 2018-10-26  8:54                   ` Dongli Zhang
  2018-10-26 14:48                     ` Joe Jin
  2018-10-26 14:40                   ` Joe Jin
  1 sibling, 1 reply; 17+ messages in thread
From: Dongli Zhang @ 2018-10-26  8:54 UTC (permalink / raw)
  To: Joe Jin
  Cc: Christoph Helwig, Boris Ostrovsky, Konrad Rzeszutek Wilk, konrad,
	John Sobecki, xen-devel, linux-kernel

Hi Joe,

On 10/26/2018 03:48 PM, Christoph Helwig wrote:
> On Thu, Oct 25, 2018 at 11:56:02AM -0700, Joe Jin wrote:
>> I just discussed this patch with Boris in private, his opinions(Boris,
>> please correct me if any misunderstood) are:
>>
>> 1. With/without the check, both are incorrect, he thought we need to
>>    prevented unalloc'd free at here. 
>> 2. On freeing, if upper layer already checked the memory was DMA-able,
>>    the checking at here does not make sense, we can remove all checks.
>> 3. xen_create_contiguous_region() and xen_destroy_contiguous_region()
>>    to come in pairs.
>>
>> For #1 and #3, I think we need something associate it, like a list, on
>> allocating, add addr to it, on freeing, check if in the list.

Even if dom0 (or any domain) were happy to, it could only try to
exchange all of its contiguous DMA pages back to the Xen hypervisor;
from its own perspective, each domain always wants to keep as many
contiguous DMA pages as possible.

I am thinking of something different. If a malicious domU keeps
exchanging memory and allocating contiguous pages from the Xen
hypervisor, will the contiguous DMA pages be used up (a sort of DoS
attack)?

I am not sure whether there is anything in the Xen hypervisor to prevent
such behavior?

Dongli Zhang

> 
> Is there any way to figure out based on an address if the exchange
> operation happened?
> 
>> For #2, I'm was not found anywhere validated the address on 
>> dma_free_coherent() callpath, not just xen-swiotlb.
> 
> At least for simple direct mappings there is no easy way to verify that
> without keeping a list, and for some of the ops that do vmap like
> operations we have basic santiy checks, but nothing that really catches
> a wrong free.
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-26  7:48                 ` Christoph Helwig
  2018-10-26  8:54                   ` Dongli Zhang
@ 2018-10-26 14:40                   ` Joe Jin
  1 sibling, 0 replies; 17+ messages in thread
From: Joe Jin @ 2018-10-26 14:40 UTC (permalink / raw)
  To: Christoph Helwig
  Cc: Boris Ostrovsky, Konrad Rzeszutek Wilk, DONGLI.ZHANG, konrad,
	John Sobecki, xen-devel, linux-kernel

Hi Christoph,

On 10/26/18 12:48 AM, Christoph Helwig wrote:
> On Thu, Oct 25, 2018 at 11:56:02AM -0700, Joe Jin wrote:
>> I just discussed this patch with Boris in private, his opinions(Boris,
>> please correct me if any misunderstood) are:
>>
>> 1. With/without the check, both are incorrect, he thought we need to
>>    prevented unalloc'd free at here. 
>> 2. On freeing, if upper layer already checked the memory was DMA-able,
>>    the checking at here does not make sense, we can remove all checks.
>> 3. xen_create_contiguous_region() and xen_destroy_contiguous_region()
>>    to come in pairs.
>>
>> For #1 and #3, I think we need something associate it, like a list, on
>> allocating, add addr to it, on freeing, check if in the list.
> 
> Is there any way to figure out based on an address if the exchange
> operation happened?

I read the code path and did not find anywhere that stores the related
info. The current code assumes that if the memory is in the DMA area and
contiguous then it came from Xen. Most of the time that is true, but if
we are lucky and __get_free_pages() returns memory that is already
DMA-able, it is never exchanged with Xen at all; during my testing
I observed exactly this (the Xen DMA heap increased).

> 
>> For #2, I'm was not found anywhere validated the address on 
>> dma_free_coherent() callpath, not just xen-swiotlb.
> 
> At least for simple direct mappings there is no easy way to verify that
> without keeping a list, and for some of the ops that do vmap like
> operations we have basic santiy checks, but nothing that really catches
> a wrong free.

I agree with you: adding a list would help with this issue, but it may
introduce a performance penalty, especially on systems doing heavy DMA.

Having drivers use a DMA pool would help avoid this issue, but not all
kinds of DMA operations are suitable for creating a pool.

Thanks,
Joe

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-26  8:54                   ` Dongli Zhang
@ 2018-10-26 14:48                     ` Joe Jin
  0 siblings, 0 replies; 17+ messages in thread
From: Joe Jin @ 2018-10-26 14:48 UTC (permalink / raw)
  To: Dongli Zhang
  Cc: Christoph Helwig, Boris Ostrovsky, Konrad Rzeszutek Wilk, konrad,
	John Sobecki, xen-devel, linux-kernel

On 10/26/18 1:54 AM, Dongli Zhang wrote:
> If dom0 (or any domain) is happy, although it could try to exchange all its
> continuous dma pages back to xen hypervisor. From the perspective of each
> domain, they always would like to keep as much continuous dma page as possible.
> 
> I am thinking something different. If there is malicious domU keep exchanging
> memory and allocating continuous pages from xen hypervisor, will the
> continuously dma pages be used up (sort of DoS attack)?

This is a problem.

> 
> I am not sure if there is anything in xen hypervisor to prevent such behavior?

I'm not sure, but I guess it is hard to prevent: the Xen hypervisor
cannot tell whether a request is reasonable or not.

Maybe Xen could reserve some low memory for guest startup?

Thanks,
Joe

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
  2018-10-25 18:56               ` Joe Jin
  2018-10-26  7:48                 ` Christoph Helwig
@ 2018-10-30  2:51                 ` Joe Jin
       [not found]                   ` <57e5593233c64dc0a36c7d4c750a1ed4@AMSPEX02CL03.citrite.net>
  1 sibling, 1 reply; 17+ messages in thread
From: Joe Jin @ 2018-10-30  2:51 UTC (permalink / raw)
  To: Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: DONGLI.ZHANG, konrad, Christoph Helwig, John Sobecki, xen-devel,
	linux-kernel@vger.kernel.org"

On 10/25/18 11:56 AM, Joe Jin wrote:
> I just discussed this patch with Boris in private, his opinions(Boris,
> please correct me if any misunderstood) are:
> 
> 1. With/without the check, both are incorrect, he thought we need to
>    prevented unalloc'd free at here. 
> 2. On freeing, if upper layer already checked the memory was DMA-able,
>    the checking at here does not make sense, we can remove all checks.
> 3. xen_create_contiguous_region() and xen_destroy_contiguous_region()
>    to come in pairs.
I tried adding a radix tree to track allocating/freeing, and I found
some memory that was allocated but never freed; I guess that is caused
by drivers using dma_pool, which means that with lots of such requests
the list will consume a lot of memory. I will continue to work on it;
if anyone has a good idea for it, please let me know, I'd like to try it.

Thanks,
Joe

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
       [not found]                   ` <57e5593233c64dc0a36c7d4c750a1ed4@AMSPEX02CL03.citrite.net>
@ 2018-10-30 14:12                     ` Joe Jin
       [not found]                       ` <097f1f6f16f7415aa3a52a7c4f5e52dc@AMSPEX02CL03.citrite.net>
  0 siblings, 1 reply; 17+ messages in thread
From: Joe Jin @ 2018-10-30 14:12 UTC (permalink / raw)
  To: Paul Durrant, Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: John Sobecki, DONGLI.ZHANG, linux-kernel@vger.kernel.org",
	konrad, xen-devel, Christoph Helwig

On 10/30/18 1:59 AM, Paul Durrant wrote:
>> On 10/25/18 11:56 AM, Joe Jin wrote:
>>> I just discussed this patch with Boris in private, his opinions(Boris,
>>> please correct me if any misunderstood) are:
>>>
>>> 1. With/without the check, both are incorrect, he thought we need to
>>>    prevented unalloc'd free at here.
>>> 2. On freeing, if upper layer already checked the memory was DMA-able,
>>>    the checking at here does not make sense, we can remove all checks.
>>> 3. xen_create_contiguous_region() and xen_destroy_contiguous_region()
>>>    to come in pairs.
>> I tried to added radix_tree to track allocating/freeing and I found some
>> memory only allocated but was not freed, I guess it caused by driver used
>> dma_pool, that means if lots of such requests, the list will consume lot
>> of memory for it. Will continue to work on it, if anyone have good idea
>> for it please let me know, I'd like to try it.
>>
> FWIW, in my Xen PV-IOMMU test patches, I have also tried keeping a list of ranges mapped for DMA and have discovered apparent issues with some drivers, particularly tg3, that seem to free mappings that have not been allocated (or possibly double-free). I've never fully tracked down the issue.

Call trace of the first call to xen_swiotlb_alloc_coherent() (the pages
were never returned to Xen):

[   23.436333]  [<ffffffff814040c9>] xen_swiotlb_alloc_coherent+0x169/0x510
[   23.436623]  [<ffffffff811eb38d>] ? kmem_cache_alloc_trace+0x1ed/0x280
[   23.436900]  [<ffffffff811d72af>] dma_pool_alloc+0x11f/0x260
[   23.437190]  [<ffffffff81537442>] ehci_qh_alloc+0x52/0x120
[   23.437481]  [<ffffffff8153b80f>] ehci_setup+0x2bf/0x8e0
[   23.437760]  [<ffffffff81476d06>] ? __dev_printk+0x46/0xa0
[   23.438042]  [<ffffffff814770b3>] ? _dev_info+0x53/0x60
[   23.438327]  [<ffffffff8153f620>] ehci_pci_setup+0xc0/0x5f0
[   23.438615]  [<ffffffff81519fcd>] usb_add_hcd+0x25d/0xaf0
[   23.438901]  [<ffffffff8152c9a6>] usb_hcd_pci_probe+0x406/0x520
[   23.439177]  [<ffffffff8153f486>] ehci_pci_probe+0x36/0x40
[   23.439469]  [<ffffffff8136e99a>] local_pci_probe+0x4a/0xb0
[   23.439752]  [<ffffffff8136fba5>] ? pci_match_device+0xe5/0x110
[   23.440027]  [<ffffffff8136fce1>] pci_device_probe+0xd1/0x120
[   23.440320]  [<ffffffff8147b13c>] driver_probe_device+0x20c/0x4d0
[   23.440599]  [<ffffffff8147b4eb>] __driver_attach+0x9b/0xa0
[   23.440879]  [<ffffffff8147b450>] ? __device_attach+0x50/0x50

The trace above shows EHCI using a DMA pool to allocate DMA memory.

During my testing, ~1000 entries were not freed; if more PCI devices
use DMA pools, the tree/list will have even more entries, so using a
list to track them does not look like a good idea.

Thanks,
Joe

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
       [not found]                       ` <097f1f6f16f7415aa3a52a7c4f5e52dc@AMSPEX02CL03.citrite.net>
@ 2018-10-30 14:48                         ` Joe Jin
  0 siblings, 0 replies; 17+ messages in thread
From: Joe Jin @ 2018-10-30 14:48 UTC (permalink / raw)
  To: Paul Durrant, Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: John Sobecki, DONGLI.ZHANG, linux-kernel@vger.kernel.org",
	konrad, xen-devel, Christoph Helwig

On 10/30/18 7:21 AM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>> Of Joe Jin
>> Sent: 30 October 2018 14:13
>> To: Paul Durrant <Paul.Durrant@citrix.com>; Boris Ostrovsky
>> <boris.ostrovsky@oracle.com>; Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com>
>> Cc: John Sobecki <john.sobecki@oracle.com>; DONGLI.ZHANG
>> <dongli.zhang@oracle.com>; linux-kernel@vger.kernel.org" <linux-
>> kernel@vger.kernel.org>; konrad@kernel.org; xen-
>> devel@lists.xenproject.org; Christoph Helwig <hch@lst.de>
>> Subject: Re: [Xen-devel] [PATCH] xen-swiotlb: exchange memory with Xen
>> only when pages are contiguous
>>
>> On 10/30/18 1:59 AM, Paul Durrant wrote:
>>>> On 10/25/18 11:56 AM, Joe Jin wrote:
>>>>> I just discussed this patch with Boris in private; his opinions (Boris,
>>>>> please correct me if I misunderstood anything) are:
>>>>>
>>>>> 1. With or without the check, both are incorrect; he thought we need to
>>>>>    prevent unalloc'd frees here.
>>>>> 2. On freeing, if the upper layer already checked the memory was DMA-able,
>>>>>    the check here does not make sense; we can remove all checks.
>>>>> 3. xen_create_contiguous_region() and xen_destroy_contiguous_region()
>>>>>    need to come in pairs.
>>>> I tried adding a radix_tree to track allocating/freeing and I found
>>>> some memory that was only allocated but never freed; I guess it was
>>>> caused by a driver using dma_pool. That means with lots of such
>>>> requests, the list will consume a lot of memory. I will continue to
>>>> work on it; if anyone has a good idea, please let me know, I'd like
>>>> to try it.
>>>>
>>> FWIW, in my Xen PV-IOMMU test patches, I have also tried keeping a list
>>> of ranges mapped for DMA and have discovered apparent issues with some
>>> drivers, particularly tg3, that seem to free mappings that have not been
>>> allocated (or possibly double-free). I've never fully tracked down the
>>> issue.
>>
>> Call trace of the first call to xen_swiotlb_alloc_coherent() (the pages
>> were never given back to Xen):
>>
>> [   23.436333]  [<ffffffff814040c9>] xen_swiotlb_alloc_coherent+0x169/0x510
>> [   23.436623]  [<ffffffff811eb38d>] ? kmem_cache_alloc_trace+0x1ed/0x280
>> [   23.436900]  [<ffffffff811d72af>] dma_pool_alloc+0x11f/0x260
>> [   23.437190]  [<ffffffff81537442>] ehci_qh_alloc+0x52/0x120
>> [   23.437481]  [<ffffffff8153b80f>] ehci_setup+0x2bf/0x8e0
>> [   23.437760]  [<ffffffff81476d06>] ? __dev_printk+0x46/0xa0
>> [   23.438042]  [<ffffffff814770b3>] ? _dev_info+0x53/0x60
>> [   23.438327]  [<ffffffff8153f620>] ehci_pci_setup+0xc0/0x5f0
>> [   23.438615]  [<ffffffff81519fcd>] usb_add_hcd+0x25d/0xaf0
>> [   23.438901]  [<ffffffff8152c9a6>] usb_hcd_pci_probe+0x406/0x520
>> [   23.439177]  [<ffffffff8153f486>] ehci_pci_probe+0x36/0x40
>> [   23.439469]  [<ffffffff8136e99a>] local_pci_probe+0x4a/0xb0
>> [   23.439752]  [<ffffffff8136fba5>] ? pci_match_device+0xe5/0x110
>> [   23.440027]  [<ffffffff8136fce1>] pci_device_probe+0xd1/0x120
>> [   23.440320]  [<ffffffff8147b13c>] driver_probe_device+0x20c/0x4d0
>> [   23.440599]  [<ffffffff8147b4eb>] __driver_attach+0x9b/0xa0
>> [   23.440879]  [<ffffffff8147b450>] ? __device_attach+0x50/0x50
>>
>> The above shows EHCI using the DMA pool to allocate DMA memory.
>>
>> During my testing, ~1000 entries were not freed; if more PCI devices
>> use the DMA pool, the tree/list will accumulate even more entries, so it
>> does not look like a good idea to use a list to track them.
>>
> 
> Yes, it seems pools can hang onto a serious number of allocations, so a list is probably not wise.

I agree with you.

> What I was pointing out, though, is that it appears you can't even track mappings (as opposed to allocations) with a list.

Right.


> either because drivers apparently try to unmap things they have not mapped.

If this happens, it should be fixed in the driver :)
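As a hedged illustration of the point above — purely a sketch, not an
existing kernel API — a debug-time check could warn once when a driver
(e.g. tg3) unmaps an address that was never recorded as mapped, and
tell the caller to skip the Xen exchange for it. Here
mapping_is_tracked stands in for whatever lookup structure (list,
radix tree, ...) records live mappings; both names are hypothetical.

```c
#include <assert.h>
#include <stdio.h>

static int warned_unmatched;

/* Sketch: returns 0 for a matched unmap, -1 (warning once) for an
 * unmap of an address that was never tracked as mapped. */
static int check_unmap(int mapping_is_tracked, const char *drv)
{
	if (!mapping_is_tracked) {
		if (!warned_unmatched++)
			fprintf(stderr,
				"%s: unmap of untracked DMA address\n", drv);
		return -1;	/* caller should skip the Xen exchange */
	}
	return 0;
}
```

Such a check would surface the tg3-style unmatched frees mentioned
above without crashing, leaving the actual fix to the driver.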

Thanks,
Joe
> 
>   Paul
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2018-10-30 14:48 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-24  3:09 [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous Joe Jin
2018-10-24 13:02 ` Konrad Rzeszutek Wilk
2018-10-24 13:57   ` Boris Ostrovsky
2018-10-24 14:43     ` Joe Jin
2018-10-25 11:45       ` Boris Ostrovsky
2018-10-25 14:23         ` Joe Jin
2018-10-25 16:10           ` Boris Ostrovsky
2018-10-25 16:28             ` Joe Jin
2018-10-25 18:56               ` Joe Jin
2018-10-26  7:48                 ` Christoph Helwig
2018-10-26  8:54                   ` Dongli Zhang
2018-10-26 14:48                     ` Joe Jin
2018-10-26 14:40                   ` Joe Jin
2018-10-30  2:51                 ` Joe Jin
     [not found]                   ` <57e5593233c64dc0a36c7d4c750a1ed4@AMSPEX02CL03.citrite.net>
2018-10-30 14:12                     ` [Xen-devel] " Joe Jin
     [not found]                       ` <097f1f6f16f7415aa3a52a7c4f5e52dc@AMSPEX02CL03.citrite.net>
2018-10-30 14:48                         ` Joe Jin
2018-10-24 16:05   ` Joe Jin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).