LKML Archive on lore.kernel.org
* [PATCH rdma-rc] RDMA/mlx5: Delay emptying a cache entry when a new MR is added to it recently
@ 2021-07-27  7:16 Leon Romanovsky
  2021-08-03 15:56 ` Jason Gunthorpe
  0 siblings, 1 reply; 2+ messages in thread
From: Leon Romanovsky @ 2021-07-27  7:16 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Aharon Landau, linux-kernel, linux-rdma, Maor Gottlieb

From: Aharon Landau <aharonl@nvidia.com>

Fix a typo that caused a cache entry to shrink immediately after new MRs
were added to it, whenever the entry size exceeded the high limit.
As a result, the cache failed to serve its purpose of avoiding the
creation of new mkeys at runtime by reusing the cached ones.

Fixes: b9358bdbc713 ("RDMA/mlx5: Fix locking in MR cache work queue")
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/mr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 3263851ea574..3f1c5a4f158b 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -531,8 +531,8 @@ static void __cache_work_func(struct mlx5_cache_ent *ent)
 		 */
 		spin_unlock_irq(&ent->lock);
 		need_delay = need_resched() || someone_adding(cache) ||
-			     time_after(jiffies,
-					READ_ONCE(cache->last_add) + 300 * HZ);
+			     !time_after(jiffies,
+					 READ_ONCE(cache->last_add) + 300 * HZ);
 		spin_lock_irq(&ent->lock);
 		if (ent->disabled)
 			goto out;
-- 
2.31.1



* Re: [PATCH rdma-rc] RDMA/mlx5: Delay emptying a cache entry when a new MR is added to it recently
  2021-07-27  7:16 [PATCH rdma-rc] RDMA/mlx5: Delay emptying a cache entry when a new MR is added to it recently Leon Romanovsky
@ 2021-08-03 15:56 ` Jason Gunthorpe
  0 siblings, 0 replies; 2+ messages in thread
From: Jason Gunthorpe @ 2021-08-03 15:56 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Aharon Landau, linux-kernel, linux-rdma, Maor Gottlieb

On Tue, Jul 27, 2021 at 10:16:06AM +0300, Leon Romanovsky wrote:
> From: Aharon Landau <aharonl@nvidia.com>
> 
> Fix a typo that caused a cache entry to shrink immediately after new MRs
> were added to it, whenever the entry size exceeded the high limit.
> As a result, the cache failed to serve its purpose of avoiding the
> creation of new mkeys at runtime by reusing the cached ones.
> 
> Fixes: b9358bdbc713 ("RDMA/mlx5: Fix locking in MR cache work queue")
> Signed-off-by: Aharon Landau <aharonl@nvidia.com>
> Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  drivers/infiniband/hw/mlx5/mr.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Applied to for-rc, thanks

Jason
