LKML Archive on lore.kernel.org
* [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user()
@ 2021-03-09 21:21 ira.weiny
  2021-03-09 21:21 ` [PATCH 1/3] iov_iter: Lift memzero_page() to highmem.h ira.weiny
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: ira.weiny @ 2021-03-09 21:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Ira Weiny, David Sterba, Chris Mason, Josef Bacik, linux-btrfs,
	linux-kernel

From: Ira Weiny <ira.weiny@intel.com>

Previously this series was submitted to convert these call sites to
zero_user()[1].  zero_user() is not the same as memzero_page(), and in
fact some existing zero_user() calls may be better off as
memzero_page().  Regardless, it was incorrect to convert btrfs to
zero_user().

This series corrects that by lifting memzero_page() into highmem.h,
converting it to kmap_local_page(), and then using it in btrfs.

Thanks,
Ira

[1] https://lore.kernel.org/lkml/20210223192506.GY3014244@iweiny-DESK2.sc.intel.com/


Ira Weiny (3):
  iov_iter: Lift memzero_page() to highmem.h
  mm/highmem: Convert memzero_page() to kmap_local_page()
  btrfs: Use memzero_page() instead of open coded kmap pattern

 fs/btrfs/compression.c  |  5 +----
 fs/btrfs/extent_io.c    | 22 ++++------------------
 fs/btrfs/inode.c        | 33 ++++++++++-----------------------
 fs/btrfs/reflink.c      |  6 +-----
 fs/btrfs/zlib.c         |  5 +----
 fs/btrfs/zstd.c         |  5 +----
 include/linux/highmem.h |  7 +++++++
 lib/iov_iter.c          |  8 +-------
 8 files changed, 26 insertions(+), 65 deletions(-)

-- 
2.28.0.rc0.12.gb6a658bd00c9



* [PATCH 1/3] iov_iter: Lift memzero_page() to highmem.h
  2021-03-09 21:21 [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user() ira.weiny
@ 2021-03-09 21:21 ` ira.weiny
  2021-03-09 21:21 ` [PATCH 2/3] mm/highmem: Convert memzero_page() to kmap_local_page() ira.weiny
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: ira.weiny @ 2021-03-09 21:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Ira Weiny, Alexander Viro, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, linux-kernel

From: Ira Weiny <ira.weiny@intel.com>

memzero_page() can replace the kmap/memset/kunmap pattern in other
places in the code.  While zero_user() has the same interface, it is not
the same call, so its use should be limited; some existing zero_user()
calls may be better converted to memzero_page().[1]  That conversion is
not addressed in this series.

Lift memzero_page() from lib/iov_iter.c to highmem.h so that it can be
used elsewhere.

To: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: David Sterba <dsterba@suse.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>

[1] https://lore.kernel.org/lkml/CAHk-=wijdojzo56FzYqE5TOYw2Vws7ik3LEMGj9SPQaJJ+Z73Q@mail.gmail.com/
---
 include/linux/highmem.h | 7 +++++++
 lib/iov_iter.c          | 8 +-------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 44170f312ae7..832b49b50c7b 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -332,4 +332,11 @@ static inline void memcpy_to_page(struct page *page, size_t offset,
 	kunmap_local(to);
 }
 
+static inline void memzero_page(struct page *page, size_t offset, size_t len)
+{
+	char *addr = kmap_atomic(page);
+	memset(addr + offset, 0, len);
+	kunmap_atomic(addr);
+}
+
 #endif /* _LINUX_HIGHMEM_H */
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index f66c62aa7154..b0b1c8a01fae 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -5,6 +5,7 @@
 #include <linux/fault-inject-usercopy.h>
 #include <linux/uio.h>
 #include <linux/pagemap.h>
+#include <linux/highmem.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 #include <linux/splice.h>
@@ -464,13 +465,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
 }
 EXPORT_SYMBOL(iov_iter_init);
 
-static void memzero_page(struct page *page, size_t offset, size_t len)
-{
-	char *addr = kmap_atomic(page);
-	memset(addr + offset, 0, len);
-	kunmap_atomic(addr);
-}
-
 static inline bool allocated(struct pipe_buffer *buf)
 {
 	return buf->ops == &default_pipe_buf_ops;
-- 
2.28.0.rc0.12.gb6a658bd00c9



* [PATCH 2/3] mm/highmem: Convert memzero_page() to kmap_local_page()
  2021-03-09 21:21 [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user() ira.weiny
  2021-03-09 21:21 ` [PATCH 1/3] iov_iter: Lift memzero_page() to highmem.h ira.weiny
@ 2021-03-09 21:21 ` ira.weiny
  2021-03-09 21:21 ` [PATCH 3/3] btrfs: Use memzero_page() instead of open coded kmap pattern ira.weiny
  2021-03-10 23:58 ` [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user() Andrew Morton
  3 siblings, 0 replies; 8+ messages in thread
From: ira.weiny @ 2021-03-09 21:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Ira Weiny, Chaitanya Kulkarni, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, linux-kernel

From: Ira Weiny <ira.weiny@intel.com>

The memset() does not need to be performed atomically.  Use
kmap_local_page(), which will improve performance for this call.

Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Cc: David Sterba <dsterba@suse.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 include/linux/highmem.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 832b49b50c7b..0dc0451cf1d1 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -334,9 +334,9 @@ static inline void memcpy_to_page(struct page *page, size_t offset,
 
 static inline void memzero_page(struct page *page, size_t offset, size_t len)
 {
-	char *addr = kmap_atomic(page);
+	char *addr = kmap_local_page(page);
 	memset(addr + offset, 0, len);
-	kunmap_atomic(addr);
+	kunmap_local(addr);
 }
 
 #endif /* _LINUX_HIGHMEM_H */
-- 
2.28.0.rc0.12.gb6a658bd00c9



* [PATCH 3/3] btrfs: Use memzero_page() instead of open coded kmap pattern
  2021-03-09 21:21 [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user() ira.weiny
  2021-03-09 21:21 ` [PATCH 1/3] iov_iter: Lift memzero_page() to highmem.h ira.weiny
  2021-03-09 21:21 ` [PATCH 2/3] mm/highmem: Convert memzero_page() to kmap_local_page() ira.weiny
@ 2021-03-09 21:21 ` ira.weiny
  2021-03-12 11:23   ` David Sterba
  2021-03-10 23:58 ` [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user() Andrew Morton
  3 siblings, 1 reply; 8+ messages in thread
From: ira.weiny @ 2021-03-09 21:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Ira Weiny, David Sterba, Chris Mason, Josef Bacik, linux-btrfs,
	linux-kernel

From: Ira Weiny <ira.weiny@intel.com>

There are many places where the kmap/memset/kunmap pattern occurs.

Use the newly lifted memzero_page() to eliminate direct uses of kmap()
and leverage the new core function's use of kmap_local_page().

The development of this patch was aided by the following coccinelle
script:

// <smpl>
// SPDX-License-Identifier: GPL-2.0-only
// Find kmap/memset/kunmap pattern and replace with memset*page calls
//
// NOTE: Offsets and other expressions may be more complex than what the script
// will automatically generate.  Therefore a catch-all rule is provided to find
// the pattern, which must then be evaluated by hand.
//
// Confidence: Low
// Copyright: (C) 2021 Intel Corporation
// URL: http://coccinelle.lip6.fr/
// Comments:
// Options:

//
// The memset pattern
//
@ memset_rule1 @
expression page, V, L, Off;
identifier ptr;
type VP;
@@

(
-VP ptr = kmap(page);
|
-ptr = kmap(page);
|
-VP ptr = kmap_atomic(page);
|
-ptr = kmap_atomic(page);
)
<+...
(
-memset(ptr, 0, L);
+memzero_page(page, 0, L);
|
-memset(ptr + Off, 0, L);
+memzero_page(page, Off, L);
|
-memset(ptr, V, L);
+memset_page(page, V, 0, L);
|
-memset(ptr + Off, V, L);
+memset_page(page, V, Off, L);
)
...+>
(
-kunmap(page);
|
-kunmap_atomic(ptr);
)

// Remove any pointers left unused
@
depends on memset_rule1
@
identifier memset_rule1.ptr;
type VP, VP1;
@@

-VP ptr;
	... when != ptr;
? VP1 ptr;

//
// Catch all
//
@ memset_rule2 @
expression page;
identifier ptr;
expression GenTo, GenSize, GenValue;
type VP;
@@

(
-VP ptr = kmap(page);
|
-ptr = kmap(page);
|
-VP ptr = kmap_atomic(page);
|
-ptr = kmap_atomic(page);
)
<+...
(
//
// Some call sites have complex expressions within the memset/memcpy.
// The following are catch-alls which need to be evaluated by hand.
//
-memset(GenTo, 0, GenSize);
+memzero_pageExtra(page, GenTo, GenSize);
|
-memset(GenTo, GenValue, GenSize);
+memset_pageExtra(page, GenValue, GenTo, GenSize);
)
...+>
(
-kunmap(page);
|
-kunmap_atomic(ptr);
)

// Remove any pointers left unused
@
depends on memset_rule2
@
identifier memset_rule2.ptr;
type VP, VP1;
@@

-VP ptr;
	... when != ptr;
? VP1 ptr;

// </smpl>
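A script like the one above is applied with coccinelle's spatch tool; a
typical invocation (the .cocci file name here is hypothetical) might be:

```shell
# Save the SmPL script above as memzero.cocci, then rewrite the btrfs
# sources in place.  Matches from the catch-all rule come out as the
# placeholder names memzero_pageExtra()/memset_pageExtra() and must be
# fixed up by hand afterwards.
spatch --sp-file memzero.cocci --dir fs/btrfs --in-place
```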

Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
Changes from v2:
	Refactor to use memzero_page() per Linus

Changes from v1:
	Update commit message per David
		https://lore.kernel.org/lkml/20210209151442.GU1993@suse.cz/
---
 fs/btrfs/compression.c |  5 +----
 fs/btrfs/extent_io.c   | 22 ++++------------------
 fs/btrfs/inode.c       | 33 ++++++++++-----------------------
 fs/btrfs/reflink.c     |  6 +-----
 fs/btrfs/zlib.c        |  5 +----
 fs/btrfs/zstd.c        |  5 +----
 6 files changed, 18 insertions(+), 58 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 2600703fab83..b4ed708b0edb 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -586,16 +586,13 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 		free_extent_map(em);
 
 		if (page->index == end_index) {
-			char *userpage;
 			size_t zero_offset = offset_in_page(isize);
 
 			if (zero_offset) {
 				int zeros;
 				zeros = PAGE_SIZE - zero_offset;
-				userpage = kmap_atomic(page);
-				memset(userpage + zero_offset, 0, zeros);
+				memzero_page(page, zero_offset, zeros);
 				flush_dcache_page(page);
-				kunmap_atomic(userpage);
 			}
 		}
 
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 4dfb3ead1175..4aea921e33b3 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3381,15 +3381,12 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 	}
 
 	if (page->index == last_byte >> PAGE_SHIFT) {
-		char *userpage;
 		size_t zero_offset = offset_in_page(last_byte);
 
 		if (zero_offset) {
 			iosize = PAGE_SIZE - zero_offset;
-			userpage = kmap_atomic(page);
-			memset(userpage + zero_offset, 0, iosize);
+			memzero_page(page, zero_offset, iosize);
 			flush_dcache_page(page);
-			kunmap_atomic(userpage);
 		}
 	}
 	begin_page_read(fs_info, page);
@@ -3398,14 +3395,11 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 		u64 disk_bytenr;
 
 		if (cur >= last_byte) {
-			char *userpage;
 			struct extent_state *cached = NULL;
 
 			iosize = PAGE_SIZE - pg_offset;
-			userpage = kmap_atomic(page);
-			memset(userpage + pg_offset, 0, iosize);
+			memzero_page(page, pg_offset, iosize);
 			flush_dcache_page(page);
-			kunmap_atomic(userpage);
 			set_extent_uptodate(tree, cur, cur + iosize - 1,
 					    &cached, GFP_NOFS);
 			unlock_extent_cached(tree, cur,
@@ -3488,13 +3482,10 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 
 		/* we've found a hole, just zero and go on */
 		if (block_start == EXTENT_MAP_HOLE) {
-			char *userpage;
 			struct extent_state *cached = NULL;
 
-			userpage = kmap_atomic(page);
-			memset(userpage + pg_offset, 0, iosize);
+			memzero_page(page, pg_offset, iosize);
 			flush_dcache_page(page);
-			kunmap_atomic(userpage);
 
 			set_extent_uptodate(tree, cur, cur + iosize - 1,
 					    &cached, GFP_NOFS);
@@ -3805,12 +3796,7 @@ static int __extent_writepage(struct page *page, struct writeback_control *wbc,
 	}
 
 	if (page->index == end_index) {
-		char *userpage;
-
-		userpage = kmap_atomic(page);
-		memset(userpage + pg_offset, 0,
-		       PAGE_SIZE - pg_offset);
-		kunmap_atomic(userpage);
+		memzero_page(page, pg_offset, PAGE_SIZE - pg_offset);
 		flush_dcache_page(page);
 	}
 
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9ae1aa9166aa..a9db214c6397 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -641,17 +641,12 @@ static noinline int compress_file_range(struct async_chunk *async_chunk)
 		if (!ret) {
 			unsigned long offset = offset_in_page(total_compressed);
 			struct page *page = pages[nr_pages - 1];
-			char *kaddr;
 
 			/* zero the tail end of the last page, we might be
 			 * sending it down to disk
 			 */
-			if (offset) {
-				kaddr = kmap_atomic(page);
-				memset(kaddr + offset, 0,
-				       PAGE_SIZE - offset);
-				kunmap_atomic(kaddr);
-			}
+			if (offset)
+				memzero_page(page, offset, PAGE_SIZE - offset);
 			will_compress = 1;
 		}
 	}
@@ -4829,7 +4824,6 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
 	struct btrfs_ordered_extent *ordered;
 	struct extent_state *cached_state = NULL;
 	struct extent_changeset *data_reserved = NULL;
-	char *kaddr;
 	bool only_release_metadata = false;
 	u32 blocksize = fs_info->sectorsize;
 	pgoff_t index = from >> PAGE_SHIFT;
@@ -4921,15 +4915,13 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
 	if (offset != blocksize) {
 		if (!len)
 			len = blocksize - offset;
-		kaddr = kmap(page);
 		if (front)
-			memset(kaddr + (block_start - page_offset(page)),
-				0, offset);
+			memzero_page(page, (block_start - page_offset(page)),
+				     offset);
 		else
-			memset(kaddr + (block_start - page_offset(page)) +  offset,
-				0, len);
+			memzero_page(page, (block_start - page_offset(page)) + offset,
+				     len);
 		flush_dcache_page(page);
-		kunmap(page);
 	}
 	ClearPageChecked(page);
 	set_page_dirty(page);
@@ -6828,11 +6820,9 @@ static noinline int uncompress_inline(struct btrfs_path *path,
 	 * cover that region here.
 	 */
 
-	if (max_size + pg_offset < PAGE_SIZE) {
-		char *map = kmap(page);
-		memset(map + pg_offset + max_size, 0, PAGE_SIZE - max_size - pg_offset);
-		kunmap(page);
-	}
+	if (max_size + pg_offset < PAGE_SIZE)
+		memzero_page(page,  pg_offset + max_size,
+			     PAGE_SIZE - max_size - pg_offset);
 	kfree(tmp);
 	return ret;
 }
@@ -8498,7 +8488,6 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 	struct btrfs_ordered_extent *ordered;
 	struct extent_state *cached_state = NULL;
 	struct extent_changeset *data_reserved = NULL;
-	char *kaddr;
 	unsigned long zero_start;
 	loff_t size;
 	vm_fault_t ret;
@@ -8610,10 +8599,8 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 		zero_start = PAGE_SIZE;
 
 	if (zero_start != PAGE_SIZE) {
-		kaddr = kmap(page);
-		memset(kaddr + zero_start, 0, PAGE_SIZE - zero_start);
+		memzero_page(page, zero_start, PAGE_SIZE - zero_start);
 		flush_dcache_page(page);
-		kunmap(page);
 	}
 	ClearPageChecked(page);
 	set_page_dirty(page);
diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
index 762881b777b3..83126f0e952c 100644
--- a/fs/btrfs/reflink.c
+++ b/fs/btrfs/reflink.c
@@ -129,12 +129,8 @@ static int copy_inline_to_page(struct btrfs_inode *inode,
 	 * So what's in the range [500, 4095] corresponds to zeroes.
 	 */
 	if (datal < block_size) {
-		char *map;
-
-		map = kmap(page);
-		memset(map + datal, 0, block_size - datal);
+		memzero_page(page, datal, block_size - datal);
 		flush_dcache_page(page);
-		kunmap(page);
 	}
 
 	SetPageUptodate(page);
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index d524acf7b3e5..c3fa7d3fa770 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -375,7 +375,6 @@ int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 	unsigned long bytes_left;
 	unsigned long total_out = 0;
 	unsigned long pg_offset = 0;
-	char *kaddr;
 
 	destlen = min_t(unsigned long, destlen, PAGE_SIZE);
 	bytes_left = destlen;
@@ -455,9 +454,7 @@ int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 	 * end of the inline extent (destlen) to the end of the page
 	 */
 	if (pg_offset < destlen) {
-		kaddr = kmap_atomic(dest_page);
-		memset(kaddr + pg_offset, 0, destlen - pg_offset);
-		kunmap_atomic(kaddr);
+		memzero_page(dest_page, pg_offset, destlen - pg_offset);
 	}
 	return ret;
 }
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 8e9626d63976..3e26b466476a 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -631,7 +631,6 @@ int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 	size_t ret2;
 	unsigned long total_out = 0;
 	unsigned long pg_offset = 0;
-	char *kaddr;
 
 	stream = ZSTD_initDStream(
 			ZSTD_BTRFS_MAX_INPUT, workspace->mem, workspace->size);
@@ -696,9 +695,7 @@ int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 	ret = 0;
 finish:
 	if (pg_offset < destlen) {
-		kaddr = kmap_atomic(dest_page);
-		memset(kaddr + pg_offset, 0, destlen - pg_offset);
-		kunmap_atomic(kaddr);
+		memzero_page(dest_page, pg_offset, destlen - pg_offset);
 	}
 	return ret;
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9



* Re: [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user()
  2021-03-09 21:21 [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user() ira.weiny
                   ` (2 preceding siblings ...)
  2021-03-09 21:21 ` [PATCH 3/3] btrfs: Use memzero_page() instead of open coded kmap pattern ira.weiny
@ 2021-03-10 23:58 ` Andrew Morton
  2021-03-11 15:57   ` Ira Weiny
  3 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2021-03-10 23:58 UTC (permalink / raw)
  To: ira.weiny
  Cc: David Sterba, Chris Mason, Josef Bacik, linux-btrfs, linux-kernel

On Tue,  9 Mar 2021 13:21:34 -0800 ira.weiny@intel.com wrote:

> Previously this series was submitted to convert these call sites to
> zero_user()[1].  zero_user() is not the same as memzero_page(), and in
> fact some existing zero_user() calls may be better off as
> memzero_page().  Regardless, it was incorrect to convert btrfs to
> zero_user().
> 
> This series corrects that by lifting memzero_page() into highmem.h,
> converting it to kmap_local_page(), and then using it in btrfs.

This impacts btrfs more than MM.  I suggest the btrfs developers grab
it, with my

Acked-by: Andrew Morton <akpm@linux-foundation.org>



* Re: [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user()
  2021-03-10 23:58 ` [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user() Andrew Morton
@ 2021-03-11 15:57   ` Ira Weiny
  2021-03-12 11:57     ` David Sterba
  0 siblings, 1 reply; 8+ messages in thread
From: Ira Weiny @ 2021-03-11 15:57 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Chris Mason, Josef Bacik, linux-btrfs, linux-kernel

On Wed, Mar 10, 2021 at 03:58:36PM -0800, Andrew Morton wrote:
> On Tue,  9 Mar 2021 13:21:34 -0800 ira.weiny@intel.com wrote:
> 
> > Previously this series was submitted to convert these call sites to
> > zero_user()[1].  zero_user() is not the same as memzero_page(), and in
> > fact some existing zero_user() calls may be better off as
> > memzero_page().  Regardless, it was incorrect to convert btrfs to
> > zero_user().
> > 
> > This series corrects that by lifting memzero_page() into highmem.h,
> > converting it to kmap_local_page(), and then using it in btrfs.
> 
> This impacts btrfs more than MM.  I suggest the btrfs developers grab
> it, with my

I thought David wanted you to take these this time?

"I can play the messenger again but now it seems a round of review is needed
and with some testing it'll be possible in some -rc. At that point you may take
the patches via the mm tree, unless Linus is ok with a late pull."

	-- https://lore.kernel.org/lkml/20210224123049.GX1993@twin.jikos.cz/

But reading that again I'm not sure what he meant.

David?

Ira

> 
> Acked-by: Andrew Morton <akpm@linux-foundation.org>
> 


* Re: [PATCH 3/3] btrfs: Use memzero_page() instead of open coded kmap pattern
  2021-03-09 21:21 ` [PATCH 3/3] btrfs: Use memzero_page() instead of open coded kmap pattern ira.weiny
@ 2021-03-12 11:23   ` David Sterba
  0 siblings, 0 replies; 8+ messages in thread
From: David Sterba @ 2021-03-12 11:23 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, linux-kernel

On Tue, Mar 09, 2021 at 01:21:37PM -0800, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
> 
> There are many places where kmap/memset/kunmap patterns occur.
> 
> Use the newly lifted memzero_page() to eliminate direct uses of kmap and
> leverage the new core functions use of kmap_local_page().
> 
> [... commit message and coccinelle script quoted in full in the patch
> above; trimmed here ...]
> 
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>

Reviewed-by: David Sterba <dsterba@suse.com>


* Re: [PATCH 0/3] btrfs: Convert kmap/memset/kunmap to memzero_user()
  2021-03-11 15:57   ` Ira Weiny
@ 2021-03-12 11:57     ` David Sterba
  0 siblings, 0 replies; 8+ messages in thread
From: David Sterba @ 2021-03-12 11:57 UTC (permalink / raw)
  To: Ira Weiny
  Cc: Andrew Morton, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, linux-kernel

On Thu, Mar 11, 2021 at 07:57:48AM -0800, Ira Weiny wrote:
> On Wed, Mar 10, 2021 at 03:58:36PM -0800, Andrew Morton wrote:
> > On Tue,  9 Mar 2021 13:21:34 -0800 ira.weiny@intel.com wrote:
> > 
> > > Previously this series was submitted to convert these call sites to
> > > zero_user()[1].  zero_user() is not the same as memzero_page(), and in
> > > fact some existing zero_user() calls may be better off as
> > > memzero_page().  Regardless, it was incorrect to convert btrfs to
> > > zero_user().
> > > 
> > > This series corrects that by lifting memzero_page() into highmem.h,
> > > converting it to kmap_local_page(), and then using it in btrfs.
> > 
> > This impacts btrfs more than MM.  I suggest the btrfs developers grab
> > it, with my
> 
> I thought David wanted you to take these this time?
> 
> "I can play the messenger again but now it seems a round of review is needed
> and with some testing it'll be possible in some -rc. At that point you may take
> the patches via the mm tree, unless Linus is ok with a late pull."
> 
> 	-- https://lore.kernel.org/lkml/20210224123049.GX1993@twin.jikos.cz/
> 
> But reading that again I'm not sure what he meant.

As Linus had some objections I was not sure it was still feasible for
the merge window, but this is now sorted.  This new patchset makes
further changes in MM, and the btrfs part is a straightforward cleanup.
I've noticed Andrew added the patches to his queue, which I'd prefer, so
I've added my Reviewed-by to the third patch.  Thanks.

