* [PATCH 01/10] include/linux/pagemap.h: introduce attach/detach_page_private
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-17 21:47 ` [PATCH 02/10] md: remove __clear_page_buffers and use attach/detach_page_private Guoqing Jiang
` (8 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro
Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang,
Darrick J. Wong, William Kucharski, Kirill A. Shutemov,
Andreas Gruenbacher, Yang Shi, Yafang Shao, Song Liu, linux-raid,
Chris Mason, Josef Bacik, David Sterba, linux-btrfs, Jaegeuk Kim,
Chao Yu, linux-f2fs-devel, linux-xfs, Anton Altaparmakov,
linux-ntfs-dev, Mike Marshall, Martin Brandenburg, devel,
Thomas Gleixner, Sebastian Andrzej Siewior, Roman Gushchin,
Andreas Dilger
The logic in attach_page_buffers and __clear_page_buffers is paired, but:
1. they are located in different files.
2. attach_page_buffers is implemented in buffer_head.h, so it can be
used by other files, while __clear_page_buffers is a static function in
buffer.c that other potential users cannot call; md-bitmap even copied
the function.
So introduce the new pair attach/detach_page_private to replace them. With
the new pair of functions, the following patches remove the usage of
attach_page_buffers and __clear_page_buffers. Thanks to Alexander Viro,
Andreas Grünbacher, Christoph Hellwig and Matthew Wilcox for the
suggestions on the function names.
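For illustration, here is a minimal usage sketch of the new helpers in a
hypothetical filesystem; struct fs_private and the fs_attach_state/
fs_detach_state wrappers are invented names for this example only:

/*
 * Usage sketch only: the filesystem, struct fs_private and both
 * wrappers below are hypothetical.
 */
#include <linux/pagemap.h>
#include <linux/slab.h>

struct fs_private {
	unsigned long state;
};

/* Attach per-page state; attach_page_private() takes a page reference. */
static int fs_attach_state(struct page *page, gfp_t gfp)
{
	struct fs_private *priv = kzalloc(sizeof(*priv), gfp);

	if (!priv)
		return -ENOMEM;
	attach_page_private(page, priv);
	return 0;
}

/* Detach and free the state; drops the reference taken at attach time. */
static void fs_detach_state(struct page *page)
{
	kfree(detach_page_private(page));
}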
Suggested-by: Matthew Wilcox <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Yafang Shao <laoar.shao@gmail.com>
Cc: Song Liu <song@kernel.org>
Cc: linux-raid@vger.kernel.org
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Chao Yu <chao@kernel.org>
Cc: linux-f2fs-devel@lists.sourceforge.net
Cc: Christoph Hellwig <hch@infradead.org>
Cc: linux-xfs@vger.kernel.org
Cc: Anton Altaparmakov <anton@tuxera.com>
Cc: linux-ntfs-dev@lists.sourceforge.net
Cc: Mike Marshall <hubcap@omnibond.com>
Cc: Martin Brandenburg <martin@omnibond.com>
Cc: devel@lists.orangefs.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Roman Gushchin <guro@fb.com>
Cc: Andreas Dilger <adilger@dilger.ca>
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V3.
RFC V2 -> RFC V3:
1. rename clear_page_private to detach_page_private.
2. updated the comments for the two functions.
RFC -> RFC V2: Address the comments from Christoph Hellwig
1. change function names to attach/clear_page_private and add comments.
2. change the return type of attach_page_private.
include/linux/pagemap.h | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c6348c50136f..8e085713150c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -208,6 +208,43 @@ static inline int page_cache_add_speculative(struct page *page, int count)
return __page_cache_add_speculative(page, count);
}
+/**
+ * attach_page_private - Attach private data to a page.
+ * @page: Page to attach data to.
+ * @data: Data to attach to page.
+ *
+ * Attaching private data to a page increments the page's reference count.
+ * The data must be detached before the page will be freed.
+ */
+static inline void attach_page_private(struct page *page, void *data)
+{
+ get_page(page);
+ set_page_private(page, (unsigned long)data);
+ SetPagePrivate(page);
+}
+
+/**
+ * detach_page_private - Detach private data from a page.
+ * @page: Page to detach data from.
+ *
+ * Removes the data that was previously attached to the page and decrements
+ * the refcount on the page.
+ *
+ * Return: Data that was attached to the page.
+ */
+static inline void *detach_page_private(struct page *page)
+{
+ void *data = (void *)page_private(page);
+
+ if (!PagePrivate(page))
+ return NULL;
+ ClearPagePrivate(page);
+ set_page_private(page, 0);
+ put_page(page);
+
+ return data;
+}
+
#ifdef CONFIG_NUMA
extern struct page *__page_cache_alloc(gfp_t gfp);
#else
--
2.17.1
* [PATCH 02/10] md: remove __clear_page_buffers and use attach/detach_page_private
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
2020-05-17 21:47 ` [PATCH 01/10] include/linux/pagemap.h: introduce attach/detach_page_private Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-17 21:47 ` [PATCH 03/10] btrfs: " Guoqing Jiang
` (7 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro
Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang,
Song Liu, linux-raid
After introducing attach/detach_page_private in pagemap.h, we can remove
the duplicated code and call the new functions.
Cc: Song Liu <song@kernel.org>
Cc: linux-raid@vger.kernel.org
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V3.
RFC V2 -> RFC V3
1. rename clear_page_private to detach_page_private.
RFC -> RFC V2
1. change the name of new functions to attach/clear_page_private.
drivers/md/md-bitmap.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index b952bd45bd6a..95a5f3757fa3 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -324,14 +324,6 @@ static void end_bitmap_write(struct buffer_head *bh, int uptodate)
wake_up(&bitmap->write_wait);
}
-/* copied from buffer.c */
-static void
-__clear_page_buffers(struct page *page)
-{
- ClearPagePrivate(page);
- set_page_private(page, 0);
- put_page(page);
-}
static void free_buffers(struct page *page)
{
struct buffer_head *bh;
@@ -345,7 +337,7 @@ static void free_buffers(struct page *page)
free_buffer_head(bh);
bh = next;
}
- __clear_page_buffers(page);
+ detach_page_private(page);
put_page(page);
}
@@ -374,7 +366,7 @@ static int read_page(struct file *file, unsigned long index,
ret = -ENOMEM;
goto out;
}
- attach_page_buffers(page, bh);
+ attach_page_private(page, bh);
blk_cur = index << (PAGE_SHIFT - inode->i_blkbits);
while (bh) {
block = blk_cur;
--
2.17.1
* [PATCH 03/10] btrfs: use attach/detach_page_private
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
2020-05-17 21:47 ` [PATCH 01/10] include/linux/pagemap.h: introduce attach/detach_page_private Guoqing Jiang
2020-05-17 21:47 ` [PATCH 02/10] md: remove __clear_page_buffers and use attach/detach_page_private Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-17 21:47 ` [PATCH 04/10] fs/buffer.c: " Guoqing Jiang
` (6 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro
Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang,
Chris Mason, Josef Bacik, David Sterba, linux-btrfs
Since the new pair of functions is introduced, we can call them to clean
up the code in btrfs.
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V3.
RFC V2 -> RFC V3
1. rename clear_page_private to detach_page_private.
RFC -> RFC V2
1. change the name of new functions to attach/clear_page_private.
2. call attach_page_private(newpage, clear_page_private(page)) to
cleanup code further as suggested by Dave Chinner.
fs/btrfs/disk-io.c | 4 +---
fs/btrfs/extent_io.c | 21 ++++++---------------
fs/btrfs/inode.c | 23 +++++------------------
3 files changed, 12 insertions(+), 36 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 714b57553ed6..44745d5e05cf 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -976,9 +976,7 @@ static void btree_invalidatepage(struct page *page, unsigned int offset,
btrfs_warn(BTRFS_I(page->mapping->host)->root->fs_info,
"page private not zero on page %llu",
(unsigned long long)page_offset(page));
- ClearPagePrivate(page);
- set_page_private(page, 0);
- put_page(page);
+ detach_page_private(page);
}
}
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 451d17bfafd8..704239546093 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3099,22 +3099,16 @@ static int submit_extent_page(unsigned int opf,
static void attach_extent_buffer_page(struct extent_buffer *eb,
struct page *page)
{
- if (!PagePrivate(page)) {
- SetPagePrivate(page);
- get_page(page);
- set_page_private(page, (unsigned long)eb);
- } else {
+ if (!PagePrivate(page))
+ attach_page_private(page, eb);
+ else
WARN_ON(page->private != (unsigned long)eb);
- }
}
void set_page_extent_mapped(struct page *page)
{
- if (!PagePrivate(page)) {
- SetPagePrivate(page);
- get_page(page);
- set_page_private(page, EXTENT_PAGE_PRIVATE);
- }
+ if (!PagePrivate(page))
+ attach_page_private(page, (void *)EXTENT_PAGE_PRIVATE);
}
static struct extent_map *
@@ -4935,10 +4929,7 @@ static void btrfs_release_extent_buffer_pages(struct extent_buffer *eb)
* We need to make sure we haven't be attached
* to a new eb.
*/
- ClearPagePrivate(page);
- set_page_private(page, 0);
- /* One for the page private */
- put_page(page);
+ detach_page_private(page);
}
if (mapped)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 037efc25d993..6c2d6ecedd6a 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7927,11 +7927,8 @@ static void btrfs_readahead(struct readahead_control *rac)
static int __btrfs_releasepage(struct page *page, gfp_t gfp_flags)
{
int ret = try_release_extent_mapping(page, gfp_flags);
- if (ret == 1) {
- ClearPagePrivate(page);
- set_page_private(page, 0);
- put_page(page);
- }
+ if (ret == 1)
+ detach_page_private(page);
return ret;
}
@@ -7953,14 +7950,8 @@ static int btrfs_migratepage(struct address_space *mapping,
if (ret != MIGRATEPAGE_SUCCESS)
return ret;
- if (page_has_private(page)) {
- ClearPagePrivate(page);
- get_page(newpage);
- set_page_private(newpage, page_private(page));
- set_page_private(page, 0);
- put_page(page);
- SetPagePrivate(newpage);
- }
+ if (page_has_private(page))
+ attach_page_private(newpage, detach_page_private(page));
if (PagePrivate2(page)) {
ClearPagePrivate2(page);
@@ -8082,11 +8073,7 @@ static void btrfs_invalidatepage(struct page *page, unsigned int offset,
}
ClearPageChecked(page);
- if (PagePrivate(page)) {
- ClearPagePrivate(page);
- set_page_private(page, 0);
- put_page(page);
- }
+ detach_page_private(page);
}
/*
--
2.17.1
* [PATCH 04/10] fs/buffer.c: use attach/detach_page_private
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
` (2 preceding siblings ...)
2020-05-17 21:47 ` [PATCH 03/10] btrfs: " Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-17 21:47 ` [PATCH 05/10] f2fs: " Guoqing Jiang
` (5 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro; +Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang
Since the new pair of functions is introduced, we can call them to clean
up the code in buffer.c.
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V3.
RFC V2 -> RFC V3
1. rename clear_page_private to detach_page_private.
RFC -> RFC V2
1. change the name of new functions to attach/clear_page_private.
fs/buffer.c | 16 ++++------------
1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 85b4be1939ce..fc8831c392d7 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -123,14 +123,6 @@ void __wait_on_buffer(struct buffer_head * bh)
}
EXPORT_SYMBOL(__wait_on_buffer);
-static void
-__clear_page_buffers(struct page *page)
-{
- ClearPagePrivate(page);
- set_page_private(page, 0);
- put_page(page);
-}
-
static void buffer_io_error(struct buffer_head *bh, char *msg)
{
if (!test_bit(BH_Quiet, &bh->b_state))
@@ -906,7 +898,7 @@ link_dev_buffers(struct page *page, struct buffer_head *head)
bh = bh->b_this_page;
} while (bh);
tail->b_this_page = head;
- attach_page_buffers(page, head);
+ attach_page_private(page, head);
}
static sector_t blkdev_max_block(struct block_device *bdev, unsigned int size)
@@ -1624,7 +1616,7 @@ void create_empty_buffers(struct page *page,
bh = bh->b_this_page;
} while (bh != head);
}
- attach_page_buffers(page, head);
+ attach_page_private(page, head);
spin_unlock(&page->mapping->private_lock);
}
EXPORT_SYMBOL(create_empty_buffers);
@@ -2611,7 +2603,7 @@ static void attach_nobh_buffers(struct page *page, struct buffer_head *head)
bh->b_this_page = head;
bh = bh->b_this_page;
} while (bh != head);
- attach_page_buffers(page, head);
+ attach_page_private(page, head);
spin_unlock(&page->mapping->private_lock);
}
@@ -3276,7 +3268,7 @@ drop_buffers(struct page *page, struct buffer_head **buffers_to_free)
bh = next;
} while (bh != head);
*buffers_to_free = head;
- __clear_page_buffers(page);
+ detach_page_private(page);
return 1;
failed:
return 0;
--
2.17.1
* [PATCH 05/10] f2fs: use attach/detach_page_private
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
` (3 preceding siblings ...)
2020-05-17 21:47 ` [PATCH 04/10] fs/buffer.c: " Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-17 21:47 ` [PATCH 06/10] iomap: " Guoqing Jiang
` (4 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro
Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang,
Jaegeuk Kim, Chao Yu, linux-f2fs-devel
Since the new pair of functions is introduced, we can call them to clean
up the code in f2fs.h.
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Chao Yu <chao@kernel.org>
Cc: linux-f2fs-devel@lists.sourceforge.net
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V3.
RFC V2 -> RFC V3
1. rename clear_page_private to detach_page_private.
RFC -> RFC V2
1. change the name of new functions to attach/clear_page_private.
fs/f2fs/f2fs.h | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 3574629b75ba..a4d4a947f603 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -3128,19 +3128,12 @@ static inline void f2fs_set_page_private(struct page *page,
if (PagePrivate(page))
return;
- get_page(page);
- SetPagePrivate(page);
- set_page_private(page, data);
+ attach_page_private(page, (void *)data);
}
static inline void f2fs_clear_page_private(struct page *page)
{
- if (!PagePrivate(page))
- return;
-
- set_page_private(page, 0);
- ClearPagePrivate(page);
- f2fs_put_page(page, 0);
+ detach_page_private(page);
}
/*
--
2.17.1
* [PATCH 06/10] iomap: use attach/detach_page_private
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
` (4 preceding siblings ...)
2020-05-17 21:47 ` [PATCH 05/10] f2fs: " Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-17 21:47 ` [PATCH 07/10] ntfs: replace attach_page_buffers with attach_page_private Guoqing Jiang
` (3 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro
Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang,
Darrick J. Wong, linux-xfs
Since the new pair of functions is introduced, we can call them to clean
up the code in iomap.
Cc: Christoph Hellwig <hch@infradead.org>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V3.
RFC V2 -> RFC V3
1. rename clear_page_private to detach_page_private.
RFC -> RFC V2
1. change the name of new functions to attach/clear_page_private.
2. call attach_page_private(newpage, clear_page_private(page)) to
cleanup code further as suggested by Matthew Wilcox.
3. don't return attach_page_private in iomap_page_create per the
comment from Christoph Hellwig.
fs/iomap/buffered-io.c | 19 ++++---------------
1 file changed, 4 insertions(+), 15 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 890c8fcda4f3..a1ed7620fbac 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -59,24 +59,19 @@ iomap_page_create(struct inode *inode, struct page *page)
* migrate_page_move_mapping() assumes that pages with private data have
* their count elevated by 1.
*/
- get_page(page);
- set_page_private(page, (unsigned long)iop);
- SetPagePrivate(page);
+ attach_page_private(page, iop);
return iop;
}
static void
iomap_page_release(struct page *page)
{
- struct iomap_page *iop = to_iomap_page(page);
+ struct iomap_page *iop = detach_page_private(page);
if (!iop)
return;
WARN_ON_ONCE(atomic_read(&iop->read_count));
WARN_ON_ONCE(atomic_read(&iop->write_count));
- ClearPagePrivate(page);
- set_page_private(page, 0);
- put_page(page);
kfree(iop);
}
@@ -526,14 +521,8 @@ iomap_migrate_page(struct address_space *mapping, struct page *newpage,
if (ret != MIGRATEPAGE_SUCCESS)
return ret;
- if (page_has_private(page)) {
- ClearPagePrivate(page);
- get_page(newpage);
- set_page_private(newpage, page_private(page));
- set_page_private(page, 0);
- put_page(page);
- SetPagePrivate(newpage);
- }
+ if (page_has_private(page))
+ attach_page_private(newpage, detach_page_private(page));
if (mode != MIGRATE_SYNC_NO_COPY)
migrate_page_copy(newpage, page);
--
2.17.1
* [PATCH 07/10] ntfs: replace attach_page_buffers with attach_page_private
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
` (5 preceding siblings ...)
2020-05-17 21:47 ` [PATCH 06/10] iomap: " Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-17 21:47 ` [PATCH 08/10] orangefs: use attach/detach_page_private Guoqing Jiang
` (2 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro
Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang,
Anton Altaparmakov, linux-ntfs-dev
Call the new function since attach_page_buffers will be removed.
Cc: Anton Altaparmakov <anton@tuxera.com>
Cc: linux-ntfs-dev@lists.sourceforge.net
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V2.
RFC -> RFC V2
1. change the name of new function to attach_page_private.
fs/ntfs/aops.c | 2 +-
fs/ntfs/mft.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
index 554b744f41bf..bb0a43860ad2 100644
--- a/fs/ntfs/aops.c
+++ b/fs/ntfs/aops.c
@@ -1732,7 +1732,7 @@ void mark_ntfs_record_dirty(struct page *page, const unsigned int ofs) {
bh = bh->b_this_page;
} while (bh);
tail->b_this_page = head;
- attach_page_buffers(page, head);
+ attach_page_private(page, head);
} else
buffers_to_free = bh;
}
diff --git a/fs/ntfs/mft.c b/fs/ntfs/mft.c
index 3aac5c917afe..fbb9f1bc623d 100644
--- a/fs/ntfs/mft.c
+++ b/fs/ntfs/mft.c
@@ -504,7 +504,7 @@ int ntfs_sync_mft_mirror(ntfs_volume *vol, const unsigned long mft_no,
bh = bh->b_this_page;
} while (bh);
tail->b_this_page = head;
- attach_page_buffers(page, head);
+ attach_page_private(page, head);
}
bh = head = page_buffers(page);
BUG_ON(!bh);
--
2.17.1
* [PATCH 08/10] orangefs: use attach/detach_page_private
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
` (6 preceding siblings ...)
2020-05-17 21:47 ` [PATCH 07/10] ntfs: replace attach_page_buffers with attach_page_private Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-26 21:54 ` Mike Marshall
2020-05-17 21:47 ` [PATCH 09/10] buffer_head.h: remove attach_page_buffers Guoqing Jiang
2020-05-17 21:47 ` [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code Guoqing Jiang
9 siblings, 1 reply; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro
Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang,
Mike Marshall, Martin Brandenburg, devel
Since the new pair of functions is introduced, we can call them to clean
up the code in orangefs.
Cc: Mike Marshall <hubcap@omnibond.com>
Cc: Martin Brandenburg <martin@omnibond.com>
Cc: devel@lists.orangefs.org
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V3.
RFC V2 -> RFC V3
1. rename clear_page_private to detach_page_private.
RFC -> RFC V2
1. change the name of new functions to attach/clear_page_private.
2. avoid potential use-after-free as suggested by Dave Chinner.
fs/orangefs/inode.c | 32 ++++++--------------------------
1 file changed, 6 insertions(+), 26 deletions(-)
diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c
index 12ae630fbed7..48f0547d4850 100644
--- a/fs/orangefs/inode.c
+++ b/fs/orangefs/inode.c
@@ -62,12 +62,7 @@ static int orangefs_writepage_locked(struct page *page,
} else {
ret = 0;
}
- if (wr) {
- kfree(wr);
- set_page_private(page, 0);
- ClearPagePrivate(page);
- put_page(page);
- }
+ kfree(detach_page_private(page));
return ret;
}
@@ -409,9 +404,7 @@ static int orangefs_write_begin(struct file *file,
wr->len = len;
wr->uid = current_fsuid();
wr->gid = current_fsgid();
- SetPagePrivate(page);
- set_page_private(page, (unsigned long)wr);
- get_page(page);
+ attach_page_private(page, wr);
okay:
return 0;
}
@@ -459,18 +452,12 @@ static void orangefs_invalidatepage(struct page *page,
wr = (struct orangefs_write_range *)page_private(page);
if (offset == 0 && length == PAGE_SIZE) {
- kfree((struct orangefs_write_range *)page_private(page));
- set_page_private(page, 0);
- ClearPagePrivate(page);
- put_page(page);
+ kfree(detach_page_private(page));
return;
/* write range entirely within invalidate range (or equal) */
} else if (page_offset(page) + offset <= wr->pos &&
wr->pos + wr->len <= page_offset(page) + offset + length) {
- kfree((struct orangefs_write_range *)page_private(page));
- set_page_private(page, 0);
- ClearPagePrivate(page);
- put_page(page);
+ kfree(detach_page_private(page));
/* XXX is this right? only caller in fs */
cancel_dirty_page(page);
return;
@@ -535,12 +522,7 @@ static int orangefs_releasepage(struct page *page, gfp_t foo)
static void orangefs_freepage(struct page *page)
{
- if (PagePrivate(page)) {
- kfree((struct orangefs_write_range *)page_private(page));
- set_page_private(page, 0);
- ClearPagePrivate(page);
- put_page(page);
- }
+ kfree(detach_page_private(page));
}
static int orangefs_launder_page(struct page *page)
@@ -740,9 +722,7 @@ vm_fault_t orangefs_page_mkwrite(struct vm_fault *vmf)
wr->len = PAGE_SIZE;
wr->uid = current_fsuid();
wr->gid = current_fsgid();
- SetPagePrivate(page);
- set_page_private(page, (unsigned long)wr);
- get_page(page);
+ attach_page_private(page, wr);
okay:
file_update_time(vmf->vma->vm_file);
--
2.17.1
* Re: [PATCH 08/10] orangefs: use attach/detach_page_private
2020-05-17 21:47 ` [PATCH 08/10] orangefs: use attach/detach_page_private Guoqing Jiang
@ 2020-05-26 21:54 ` Mike Marshall
2020-05-28 8:39 ` Guoqing Jiang
0 siblings, 1 reply; 25+ messages in thread
From: Mike Marshall @ 2020-05-26 21:54 UTC (permalink / raw)
To: Guoqing Jiang
Cc: Andrew Morton, Al Viro, linux-fsdevel, LKML, Dave Chinner,
Christoph Hellwig, Matthew Wilcox, Martin Brandenburg, devel
I apologize for not mentioning that I ran this patch set
through orangefs xfstests at 5.7 rc5 with no problems
or regressions.
-Mike
On Sun, May 17, 2020 at 5:47 PM Guoqing Jiang
<guoqing.jiang@cloud.ionos.com> wrote:
>
> Since the new pair function is introduced, we can call them to clean the
> code in orangefs.
>
> Cc: Mike Marshall <hubcap@omnibond.com>
> Cc: Martin Brandenburg <martin@omnibond.com>
> Cc: devel@lists.orangefs.org
> Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
> ---
> No change since RFC V3.
>
> RFC V2 -> RFC V3
> 1. rename clear_page_private to detach_page_private.
>
> RFC -> RFC V2
> 1. change the name of new functions to attach/clear_page_private.
> 2. avoid potential use-after-free as suggested by Dave Chinner.
>
> fs/orangefs/inode.c | 32 ++++++--------------------------
> 1 file changed, 6 insertions(+), 26 deletions(-)
>
> diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c
> index 12ae630fbed7..48f0547d4850 100644
> --- a/fs/orangefs/inode.c
> +++ b/fs/orangefs/inode.c
> @@ -62,12 +62,7 @@ static int orangefs_writepage_locked(struct page *page,
> } else {
> ret = 0;
> }
> - if (wr) {
> - kfree(wr);
> - set_page_private(page, 0);
> - ClearPagePrivate(page);
> - put_page(page);
> - }
> + kfree(detach_page_private(page));
> return ret;
> }
>
> @@ -409,9 +404,7 @@ static int orangefs_write_begin(struct file *file,
> wr->len = len;
> wr->uid = current_fsuid();
> wr->gid = current_fsgid();
> - SetPagePrivate(page);
> - set_page_private(page, (unsigned long)wr);
> - get_page(page);
> + attach_page_private(page, wr);
> okay:
> return 0;
> }
> @@ -459,18 +452,12 @@ static void orangefs_invalidatepage(struct page *page,
> wr = (struct orangefs_write_range *)page_private(page);
>
> if (offset == 0 && length == PAGE_SIZE) {
> - kfree((struct orangefs_write_range *)page_private(page));
> - set_page_private(page, 0);
> - ClearPagePrivate(page);
> - put_page(page);
> + kfree(detach_page_private(page));
> return;
> /* write range entirely within invalidate range (or equal) */
> } else if (page_offset(page) + offset <= wr->pos &&
> wr->pos + wr->len <= page_offset(page) + offset + length) {
> - kfree((struct orangefs_write_range *)page_private(page));
> - set_page_private(page, 0);
> - ClearPagePrivate(page);
> - put_page(page);
> + kfree(detach_page_private(page));
> /* XXX is this right? only caller in fs */
> cancel_dirty_page(page);
> return;
> @@ -535,12 +522,7 @@ static int orangefs_releasepage(struct page *page, gfp_t foo)
>
> static void orangefs_freepage(struct page *page)
> {
> - if (PagePrivate(page)) {
> - kfree((struct orangefs_write_range *)page_private(page));
> - set_page_private(page, 0);
> - ClearPagePrivate(page);
> - put_page(page);
> - }
> + kfree(detach_page_private(page));
> }
>
> static int orangefs_launder_page(struct page *page)
> @@ -740,9 +722,7 @@ vm_fault_t orangefs_page_mkwrite(struct vm_fault *vmf)
> wr->len = PAGE_SIZE;
> wr->uid = current_fsuid();
> wr->gid = current_fsgid();
> - SetPagePrivate(page);
> - set_page_private(page, (unsigned long)wr);
> - get_page(page);
> + attach_page_private(page, wr);
> okay:
>
> file_update_time(vmf->vma->vm_file);
> --
> 2.17.1
>
* Re: [PATCH 08/10] orangefs: use attach/detach_page_private
2020-05-26 21:54 ` Mike Marshall
@ 2020-05-28 8:39 ` Guoqing Jiang
0 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-28 8:39 UTC (permalink / raw)
To: Mike Marshall
Cc: Andrew Morton, Al Viro, linux-fsdevel, LKML, Dave Chinner,
Christoph Hellwig, Matthew Wilcox, Martin Brandenburg, devel
On 5/26/20 11:54 PM, Mike Marshall wrote:
> I apologize for not mentioning that I ran this patch set
> through orangefs xfstests at 5.7 rc5 with no problems
> or regressions.
Glad to hear that, thanks for your effort.
Thanks,
Guoqing
* [PATCH 09/10] buffer_head.h: remove attach_page_buffers
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
` (7 preceding siblings ...)
2020-05-17 21:47 ` [PATCH 08/10] orangefs: use attach/detach_page_private Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-17 21:47 ` [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code Guoqing Jiang
9 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro
Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang,
Thomas Gleixner, Sebastian Andrzej Siewior, Roman Gushchin,
Andreas Dilger
All the callers have replaced attach_page_buffers with the new function
attach_page_private, so remove it.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Roman Gushchin <guro@fb.com>
Cc: Andreas Dilger <adilger@dilger.ca>
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC.
include/linux/buffer_head.h | 8 --------
1 file changed, 8 deletions(-)
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 15b765a181b8..22fb11e2d2e0 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -272,14 +272,6 @@ void buffer_init(void);
* inline definitions
*/
-static inline void attach_page_buffers(struct page *page,
- struct buffer_head *head)
-{
- get_page(page);
- SetPagePrivate(page);
- set_page_private(page, (unsigned long)head);
-}
-
static inline void get_bh(struct buffer_head *bh)
{
atomic_inc(&bh->b_count);
--
2.17.1
* [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-17 21:47 [PATCH 00/10] Introduce attach/detach_page_private to cleanup code Guoqing Jiang
` (8 preceding siblings ...)
2020-05-17 21:47 ` [PATCH 09/10] buffer_head.h: remove attach_page_buffers Guoqing Jiang
@ 2020-05-17 21:47 ` Guoqing Jiang
2020-05-19 5:12 ` Andrew Morton
2020-05-21 22:52 ` Dave Chinner
9 siblings, 2 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-17 21:47 UTC (permalink / raw)
To: akpm, viro; +Cc: linux-fsdevel, linux-kernel, david, hch, willy, Guoqing Jiang
We can clean up the code a little by calling detach_page_private here.
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V3.
mm/migrate.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 5fed0305d2ec..f99502bc113c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
if (rc != MIGRATEPAGE_SUCCESS)
goto unlock_buffers;
- ClearPagePrivate(page);
- set_page_private(newpage, page_private(page));
- set_page_private(page, 0);
- put_page(page);
+ set_page_private(newpage, detach_page_private(page));
get_page(newpage);
bh = head;
--
2.17.1
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-17 21:47 ` [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code Guoqing Jiang
@ 2020-05-19 5:12 ` Andrew Morton
2020-05-19 7:35 ` Guoqing Jiang
2020-05-21 22:52 ` Dave Chinner
1 sibling, 1 reply; 25+ messages in thread
From: Andrew Morton @ 2020-05-19 5:12 UTC (permalink / raw)
To: Guoqing Jiang; +Cc: viro, linux-fsdevel, linux-kernel, david, hch, willy
On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
> We can cleanup code a little by call detach_page_private here.
>
> ...
>
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
> if (rc != MIGRATEPAGE_SUCCESS)
> goto unlock_buffers;
>
> - ClearPagePrivate(page);
> - set_page_private(newpage, page_private(page));
> - set_page_private(page, 0);
> - put_page(page);
> + set_page_private(newpage, detach_page_private(page));
> get_page(newpage);
>
> bh = head;
mm/migrate.c: In function '__buffer_migrate_page':
./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
#define set_page_private(page, v) ((page)->private = (v))
^
mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
set_page_private(newpage, detach_page_private(page));
^~~~~~~~~~~~~~~~
The fact that set_page_private(detach_page_private()) generates a type
mismatch warning seems deeply wrong, surely.
Please let's get the types sorted out - either unsigned long or void *,
not half-one and half-the other. Whatever needs the least typecasting
at callsites, I suggest.
And can we please implement set_page_private() and page_private() with
inlined C code? There is no need for these to be macros.
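For reference, one way to make this call site compile cleanly is an explicit
cast, since detach_page_private() returns void * while set_page_private()
stores an unsigned long; this is only a sketch of one possible fix, not
necessarily the change that was eventually applied:

	/*
	 * Cast the void * returned by detach_page_private() to the
	 * unsigned long that set_page_private() expects.
	 */
	set_page_private(newpage, (unsigned long)detach_page_private(page));
	get_page(newpage);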
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-19 5:12 ` Andrew Morton
@ 2020-05-19 7:35 ` Guoqing Jiang
2020-05-19 10:06 ` Gao Xiang
2020-05-19 20:44 ` Guoqing Jiang
0 siblings, 2 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-19 7:35 UTC (permalink / raw)
To: Andrew Morton; +Cc: viro, linux-fsdevel, linux-kernel, david, hch, willy
On 5/19/20 7:12 AM, Andrew Morton wrote:
> On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
>
>> We can cleanup code a little by call detach_page_private here.
>>
>> ...
>>
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>> if (rc != MIGRATEPAGE_SUCCESS)
>> goto unlock_buffers;
>>
>> - ClearPagePrivate(page);
>> - set_page_private(newpage, page_private(page));
>> - set_page_private(page, 0);
>> - put_page(page);
>> + set_page_private(newpage, detach_page_private(page));
>> get_page(newpage);
>>
>> bh = head;
> mm/migrate.c: In function '__buffer_migrate_page':
> ./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
> #define set_page_private(page, v) ((page)->private = (v))
> ^
> mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
> set_page_private(newpage, detach_page_private(page));
> ^~~~~~~~~~~~~~~~
>
> The fact that set_page_private(detach_page_private()) generates a type
> mismatch warning seems deeply wrong, surely.
>
> Please let's get the types sorted out - either unsigned long or void *,
> not half-one and half-the other. Whatever needs the least typecasting
> at callsites, I suggest.
Sorry about that, I should have noticed the warning before. I will
double-check whether other places need the typecast, then send a new
version.
> And can we please implement set_page_private() and page_private() with
> inlined C code? There is no need for these to be macros.
Just did a quick change.
-#define page_private(page) ((page)->private)
-#define set_page_private(page, v) ((page)->private = (v))
+static inline unsigned long page_private(struct page *page)
+{
+ return page->private;
+}
+
+static inline void set_page_private(struct page *page, unsigned long
priv_data)
+{
+ page->private = priv_data;
+}
Then I get error like.
fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
u.v = &page_private(page);
^
I guess it is better to keep page_private as a macro; please correct me
in case I missed something.
Thanks,
Guoqing
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-19 7:35 ` Guoqing Jiang
@ 2020-05-19 10:06 ` Gao Xiang
2020-05-19 11:02 ` Guoqing Jiang
2020-05-19 15:16 ` Matthew Wilcox
2020-05-19 20:44 ` Guoqing Jiang
1 sibling, 2 replies; 25+ messages in thread
From: Gao Xiang @ 2020-05-19 10:06 UTC (permalink / raw)
To: Guoqing Jiang
Cc: Andrew Morton, viro, linux-fsdevel, linux-kernel, david, hch, willy
On Tue, May 19, 2020 at 09:35:59AM +0200, Guoqing Jiang wrote:
> On 5/19/20 7:12 AM, Andrew Morton wrote:
> > On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
> >
> > > We can cleanup code a little by call detach_page_private here.
> > >
> > > ...
> > >
> > > --- a/mm/migrate.c
> > > +++ b/mm/migrate.c
> > > @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
> > > if (rc != MIGRATEPAGE_SUCCESS)
> > > goto unlock_buffers;
> > > - ClearPagePrivate(page);
> > > - set_page_private(newpage, page_private(page));
> > > - set_page_private(page, 0);
> > > - put_page(page);
> > > + set_page_private(newpage, detach_page_private(page));
> > > get_page(newpage);
> > > bh = head;
> > mm/migrate.c: In function '__buffer_migrate_page':
> > ./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
> > #define set_page_private(page, v) ((page)->private = (v))
> > ^
> > mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
> > set_page_private(newpage, detach_page_private(page));
> > ^~~~~~~~~~~~~~~~
> >
> > The fact that set_page_private(detach_page_private()) generates a type
> > mismatch warning seems deeply wrong, surely.
> >
> > Please let's get the types sorted out - either unsigned long or void *,
> > not half-one and half-the other. Whatever needs the least typecasting
> > at callsites, I suggest.
>
> Sorry about that, I should notice the warning before. I will double check if
> other
> places need the typecast or not, then send a new version.
>
> > And can we please implement set_page_private() and page_private() with
> > inlined C code? There is no need for these to be macros.
>
> Just did a quick change.
>
> -#define page_private(page) ((page)->private)
> -#define set_page_private(page, v) ((page)->private = (v))
> +static inline unsigned long page_private(struct page *page)
> +{
> + return page->private;
> +}
> +
> +static inline void set_page_private(struct page *page, unsigned long
> priv_data)
> +{
> + page->private = priv_data;
> +}
>
> Then I get error like.
>
> fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
> fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
> u.v = &page_private(page);
> ^
>
> I guess it is better to keep page_private as macro, please correct me in
> case I
> missed something.
I guess that you could Cc me in the reply.
In that case, EROFS uses page->private as an atomic integer to track
two partial subpages in one page.
I think you could also use &page->private directly instead of
&page_private(page) here, since I don't see a strong reason to prefer
one form over the other.
In addition, I found some limitations of the new {attach,detach}_page_private
helpers (that is why I was interested in this series at the time [1] [2],
but I finally gave up), since many patterns (not all) in EROFS are:
io_submit (origin, page locked):
attach_page_private(page);
...
put_page(page);
end_io (page locked):
SetPageUptodate(page);
unlock_page(page);
Since the page is always locked, io_submit could be simplified to
set_page_private(page, ...);
SetPagePrivate(page);
, which saves both one temporary get_page(page) and one put_page(page),
as this can be regarded as safe with the page locked.
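A short sketch of that locked-page pattern; the helper names
fs_attach_locked/fs_detach_locked are hypothetical and only illustrate
skipping the extra page reference while the page lock is held:

/*
 * Illustrative only: no page reference is taken because the caller
 * holds the page lock for the whole lifetime of the private data.
 */
static void fs_attach_locked(struct page *page, void *data)
{
	VM_BUG_ON_PAGE(!PageLocked(page), page);
	set_page_private(page, (unsigned long)data);
	SetPagePrivate(page);
}

static void *fs_detach_locked(struct page *page)
{
	void *data = (void *)page_private(page);

	VM_BUG_ON_PAGE(!PageLocked(page), page);
	ClearPagePrivate(page);
	set_page_private(page, 0);
	return data;
}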
By the way, I noticed the patchset versions are PATCH [3], RFC PATCH [4],
RFC PATCH v2 [5], RFC PATCH v3 [6], PATCH [7]. I also noticed that the
patchset title was changed at one point, which makes it somewhat harder
to trace the whole discussion history.
[1] https://lore.kernel.org/linux-fsdevel/20200419051404.GA30986@hsiangkao-HP-ZHAN-66-Pro-G1/
[2] https://lore.kernel.org/linux-fsdevel/20200427025752.GA3979@hsiangkao-HP-ZHAN-66-Pro-G1/
[3] https://lore.kernel.org/linux-fsdevel/20200418225123.31850-1-guoqing.jiang@cloud.ionos.com/
[4] https://lore.kernel.org/linux-fsdevel/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/
[5] https://lore.kernel.org/linux-fsdevel/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
[6] https://lore.kernel.org/linux-fsdevel/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
[7] https://lore.kernel.org/linux-fsdevel/20200517214718.468-1-guoqing.jiang@cloud.ionos.com/
Thanks,
Gao Xiang
>
> Thanks,
> Guoqing
>
>
>
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-19 10:06 ` Gao Xiang
@ 2020-05-19 11:02 ` Guoqing Jiang
2020-05-19 11:31 ` Gao Xiang
2020-05-19 15:16 ` Matthew Wilcox
1 sibling, 1 reply; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-19 11:02 UTC (permalink / raw)
To: Gao Xiang
Cc: Andrew Morton, viro, linux-fsdevel, linux-kernel, david, hch, willy
On 5/19/20 12:06 PM, Gao Xiang wrote:
> On Tue, May 19, 2020 at 09:35:59AM +0200, Guoqing Jiang wrote:
>> On 5/19/20 7:12 AM, Andrew Morton wrote:
>>> On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
>>>
>>>> We can cleanup code a little by call detach_page_private here.
>>>>
>>>> ...
>>>>
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>>>> if (rc != MIGRATEPAGE_SUCCESS)
>>>> goto unlock_buffers;
>>>> - ClearPagePrivate(page);
>>>> - set_page_private(newpage, page_private(page));
>>>> - set_page_private(page, 0);
>>>> - put_page(page);
>>>> + set_page_private(newpage, detach_page_private(page));
>>>> get_page(newpage);
>>>> bh = head;
>>> mm/migrate.c: In function '__buffer_migrate_page':
>>> ./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
>>> #define set_page_private(page, v) ((page)->private = (v))
>>> ^
>>> mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
>>> set_page_private(newpage, detach_page_private(page));
>>> ^~~~~~~~~~~~~~~~
>>>
>>> The fact that set_page_private(detach_page_private()) generates a type
>>> mismatch warning seems deeply wrong, surely.
>>>
>>> Please let's get the types sorted out - either unsigned long or void *,
>>> not half-one and half-the other. Whatever needs the least typecasting
>>> at callsites, I suggest.
>> Sorry about that, I should notice the warning before. I will double check if
>> other
>> places need the typecast or not, then send a new version.
>>
>>> And can we please implement set_page_private() and page_private() with
>>> inlined C code? There is no need for these to be macros.
>> Just did a quick change.
>>
>> -#define page_private(page) ((page)->private)
>> -#define set_page_private(page, v) ((page)->private = (v))
>> +static inline unsigned long page_private(struct page *page)
>> +{
>> + return page->private;
>> +}
>> +
>> +static inline void set_page_private(struct page *page, unsigned long
>> priv_data)
>> +{
>> + page->private = priv_data;
>> +}
>>
>> Then I get error like.
>>
>> fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
>> fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
>> u.v = &page_private(page);
>> ^
>>
>> I guess it is better to keep page_private as macro, please correct me in
>> case I
>> missed something.
> I guess that you could Cc me in the reply.
Sorry for not doing that ...
> In that case, EROFS uses page->private as an atomic integer to
> trace 2 partial subpages in one page.
>
> I think that you could also use &page->private instead directly to
> replace &page_private(page) here since I didn't find some hint to
> pick &page_private(page) or &page->private.
Thanks for the input; I just did a quick test, so I need to investigate more.
And I think it is better to change those macros to inline functions in a
separate thread, then fix the related issues caused by the change.
> In addition, I found some limitation of new {attach,detach}_page_private
> helper (that is why I was interested in this series at that time [1] [2],
> but I gave up finally) since many patterns (not all) in EROFS are
>
> io_submit (origin, page locked):
> attach_page_private(page);
> ...
> put_page(page);
>
> end_io (page locked):
> SetPageUptodate(page);
> unlock_page(page);
>
> since the page is always locked, so io_submit could be simplified as
> set_page_private(page, ...);
> SetPagePrivate(page);
> , which can save both one temporary get_page(page) and one
> put_page(page) since it could be regarded as safe with page locked.
SetPageUptodate is not called inside {attach,detach}_page_private, so I
am probably misunderstanding your point; maybe you want the new pair to
handle the locked-page case? Care to elaborate more?
> btw, I noticed the patchset versions are PATCH [3], RFC PATCH [4],
> RFC PATCH v2 [5], RFC PATCH v3 [6], PATCH [7]. Although I also
> noticed the patchset title was once changed, but it could be some
> harder to trace the whole history discussion.
>
> [1] https://lore.kernel.org/linux-fsdevel/20200419051404.GA30986@hsiangkao-HP-ZHAN-66-Pro-G1/
> [2] https://lore.kernel.org/linux-fsdevel/20200427025752.GA3979@hsiangkao-HP-ZHAN-66-Pro-G1/
> [3] https://lore.kernel.org/linux-fsdevel/20200418225123.31850-1-guoqing.jiang@cloud.ionos.com/
> [4] https://lore.kernel.org/linux-fsdevel/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/
> [5] https://lore.kernel.org/linux-fsdevel/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
> [6] https://lore.kernel.org/linux-fsdevel/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
> [7] https://lore.kernel.org/linux-fsdevel/20200517214718.468-1-guoqing.jiang@cloud.ionos.com/
All the cover letters of those series are here.
RFC V3:https://lore.kernel.org/lkml/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
RFC V2:https://lore.kernel.org/lkml/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
RFC:https://lore.kernel.org/lkml/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/
And the latest one:
https://lore.kernel.org/lkml/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
Thanks,
Guoqing
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-19 11:02 ` Guoqing Jiang
@ 2020-05-19 11:31 ` Gao Xiang
0 siblings, 0 replies; 25+ messages in thread
From: Gao Xiang @ 2020-05-19 11:31 UTC (permalink / raw)
To: Guoqing Jiang
Cc: Andrew Morton, viro, linux-fsdevel, linux-kernel, david, hch, willy
On Tue, May 19, 2020 at 01:02:26PM +0200, Guoqing Jiang wrote:
> On 5/19/20 12:06 PM, Gao Xiang wrote:
> > On Tue, May 19, 2020 at 09:35:59AM +0200, Guoqing Jiang wrote:
> > > On 5/19/20 7:12 AM, Andrew Morton wrote:
> > > > On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
> > > >
> > > > > We can cleanup code a little by call detach_page_private here.
> > > > >
> > > > > ...
> > > > >
> > > > > --- a/mm/migrate.c
> > > > > +++ b/mm/migrate.c
> > > > > @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
> > > > > if (rc != MIGRATEPAGE_SUCCESS)
> > > > > goto unlock_buffers;
> > > > > - ClearPagePrivate(page);
> > > > > - set_page_private(newpage, page_private(page));
> > > > > - set_page_private(page, 0);
> > > > > - put_page(page);
> > > > > + set_page_private(newpage, detach_page_private(page));
> > > > > get_page(newpage);
> > > > > bh = head;
> > > > mm/migrate.c: In function '__buffer_migrate_page':
> > > > ./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
> > > > #define set_page_private(page, v) ((page)->private = (v))
> > > > ^
> > > > mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
> > > > set_page_private(newpage, detach_page_private(page));
> > > > ^~~~~~~~~~~~~~~~
> > > >
> > > > The fact that set_page_private(detach_page_private()) generates a type
> > > > mismatch warning seems deeply wrong, surely.
> > > >
> > > > Please let's get the types sorted out - either unsigned long or void *,
> > > > not half-one and half-the other. Whatever needs the least typecasting
> > > > at callsites, I suggest.
> > > Sorry about that, I should notice the warning before. I will double check if
> > > other
> > > places need the typecast or not, then send a new version.
> > >
> > > > And can we please implement set_page_private() and page_private() with
> > > > inlined C code? There is no need for these to be macros.
> > > Just did a quick change.
> > >
> > > -#define page_private(page) ((page)->private)
> > > -#define set_page_private(page, v) ((page)->private = (v))
> > > +static inline unsigned long page_private(struct page *page)
> > > +{
> > > + return page->private;
> > > +}
> > > +
> > > +static inline void set_page_private(struct page *page, unsigned long
> > > priv_data)
> > > +{
> > > + page->private = priv_data;
> > > +}
> > >
> > > Then I get error like.
> > >
> > > fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
> > > fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
> > > u.v = &page_private(page);
> > > ^
> > >
> > > I guess it is better to keep page_private as macro, please correct me in
> > > case I
> > > missed something.
> > I guess that you could Cc me in the reply.
>
> Sorry for not do that ...
>
> > In that case, EROFS uses page->private as an atomic integer to
> > trace 2 partial subpages in one page.
> >
> > I think that you could also use &page->private instead directly to
> > replace &page_private(page) here since I didn't find some hint to
> > pick &page_private(page) or &page->private.
>
> Thanks for the input, I just did a quick test, so need to investigate more.
> And I think it is better to have another thread to change those macros to
> inline function, then fix related issues due to the change.
I have no problem with that. Actually I did some type punning, but I'm
not sure whether it was done in a proper way. I'm very happy to improve
that as well.
>
> > In addition, I found some limitation of new {attach,detach}_page_private
> > helper (that is why I was interested in this series at that time [1] [2],
> > but I gave up finally) since many patterns (not all) in EROFS are
> >
> > io_submit (origin, page locked):
> > attach_page_private(page);
> > ...
> > put_page(page);
> >
> > end_io (page locked):
> > SetPageUptodate(page);
> > unlock_page(page);
> >
> > since the page is always locked, so io_submit could be simplified as
> > set_page_private(page, ...);
> > SetPagePrivate(page);
> > , which can save both one temporary get_page(page) and one
> > put_page(page) since it could be regarded as safe with page locked.
>
> The SetPageUptodate is not called inside {attach,detach}_page_private,
> I could probably misunderstand your point, maybe you want the new pairs
> can handle the locked page, care to elaborate more?
It doesn't relate to SetPageUptodate.
I just want to say that some patterns might not benefit.
These helpers are useful indeed.
>
> > btw, I noticed the patchset versions are PATCH [3], RFC PATCH [4],
> > RFC PATCH v2 [5], RFC PATCH v3 [6], PATCH [7]. Although I also
> > noticed the patchset title was once changed, but it could be some
> > harder to trace the whole history discussion.
> >
> > [1] https://lore.kernel.org/linux-fsdevel/20200419051404.GA30986@hsiangkao-HP-ZHAN-66-Pro-G1/
> > [2] https://lore.kernel.org/linux-fsdevel/20200427025752.GA3979@hsiangkao-HP-ZHAN-66-Pro-G1/
> > [3] https://lore.kernel.org/linux-fsdevel/20200418225123.31850-1-guoqing.jiang@cloud.ionos.com/
> > [4] https://lore.kernel.org/linux-fsdevel/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/
> > [5] https://lore.kernel.org/linux-fsdevel/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
> > [6] https://lore.kernel.org/linux-fsdevel/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
> > [7] https://lore.kernel.org/linux-fsdevel/20200517214718.468-1-guoqing.jiang@cloud.ionos.com/
>
> All the cover letter of those series are here.
>
> RFC V3:https://lore.kernel.org/lkml/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
> RFC V2:https://lore.kernel.org/lkml/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
> RFC:https://lore.kernel.org/lkml/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/
>
> And the latest one:
>
> https://lore.kernel.org/lkml/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
Yeah, I noticed these links in this cover letter as well.
I was just a little confused about these version numbers, especially
with the original patchset "[PATCH 0/5] export __clear_page_buffers
to cleanup code" included. That is minor as well.
Thanks for the explanation.
Thanks,
Gao Xiang
>
>
> Thanks,
> Guoqing
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-19 10:06 ` Gao Xiang
2020-05-19 11:02 ` Guoqing Jiang
@ 2020-05-19 15:16 ` Matthew Wilcox
2020-05-19 15:27 ` Gao Xiang
1 sibling, 1 reply; 25+ messages in thread
From: Matthew Wilcox @ 2020-05-19 15:16 UTC (permalink / raw)
To: Gao Xiang
Cc: Guoqing Jiang, Andrew Morton, viro, linux-fsdevel, linux-kernel,
david, hch
On Tue, May 19, 2020 at 06:06:19PM +0800, Gao Xiang wrote:
> In addition, I found some limitation of new {attach,detach}_page_private
> helper (that is why I was interested in this series at that time [1] [2],
> but I gave up finally) since many patterns (not all) in EROFS are
>
> io_submit (origin, page locked):
> attach_page_private(page);
> ...
> put_page(page);
>
> end_io (page locked):
> SetPageUptodate(page);
> unlock_page(page);
>
> since the page is always locked, so io_submit could be simplified as
> set_page_private(page, ...);
> SetPagePrivate(page);
> , which can save both one temporary get_page(page) and one
> put_page(page) since it could be regarded as safe with page locked.
It's fine to use page private like this without incrementing the refcount,
and I can't find any problematic cases in EROFS like those fixed by commit
8e47a457321ca1a74ad194ab5dcbca764bc70731
So I think the new helpers are not for you, and that's fine. They'll be
useful for other filesystems which are using page_private differently
from the way that you do.
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-19 15:16 ` Matthew Wilcox
@ 2020-05-19 15:27 ` Gao Xiang
0 siblings, 0 replies; 25+ messages in thread
From: Gao Xiang @ 2020-05-19 15:27 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Guoqing Jiang, Andrew Morton, viro, linux-fsdevel, linux-kernel,
david, hch
Hi Matthew,
On Tue, May 19, 2020 at 08:16:32AM -0700, Matthew Wilcox wrote:
> On Tue, May 19, 2020 at 06:06:19PM +0800, Gao Xiang wrote:
> > In addition, I found some limitation of new {attach,detach}_page_private
> > helper (that is why I was interested in this series at that time [1] [2],
> > but I gave up finally) since many patterns (not all) in EROFS are
> >
> > io_submit (origin, page locked):
> > attach_page_private(page);
> > ...
> > put_page(page);
> >
> > end_io (page locked):
> > SetPageUptodate(page);
> > unlock_page(page);
> >
> > since the page is always locked, so io_submit could be simplified as
> > set_page_private(page, ...);
> > SetPagePrivate(page);
> > , which can save both one temporary get_page(page) and one
> > put_page(page) since it could be regarded as safe with page locked.
>
> It's fine to use page private like this without incrementing the refcount,
> and I can't find any problematic cases in EROFS like those fixed by commit
> 8e47a457321ca1a74ad194ab5dcbca764bc70731
>
> So I think the new helpers are not for you, and that's fine. They'll be
> useful for other filesystems which are using page_private differently
> from the way that you do.
Yes, I agree. Although there is some dead code in EROFS to handle
some truncated cases, which I'd like to use in the future. Maybe I
can get rid of it temporarily... But let me get LZMA fixed-sized
output compression for EROFS into shape first, which seems useful
as a complement to LZ4...
Thanks,
Gao Xiang
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-19 7:35 ` Guoqing Jiang
2020-05-19 10:06 ` Gao Xiang
@ 2020-05-19 20:44 ` Guoqing Jiang
1 sibling, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-19 20:44 UTC (permalink / raw)
To: Andrew Morton; +Cc: viro, linux-fsdevel, linux-kernel, david, hch, willy
Hi Andrew,
On 5/19/20 9:35 AM, Guoqing Jiang wrote:
> On 5/19/20 7:12 AM, Andrew Morton wrote:
>> On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang
>> <guoqing.jiang@cloud.ionos.com> wrote:
>>
>>> We can cleanup code a little by call detach_page_private here.
>>>
>>> ...
>>>
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct
>>> address_space *mapping,
>>> if (rc != MIGRATEPAGE_SUCCESS)
>>> goto unlock_buffers;
>>> - ClearPagePrivate(page);
>>> - set_page_private(newpage, page_private(page));
>>> - set_page_private(page, 0);
>>> - put_page(page);
>>> + set_page_private(newpage, detach_page_private(page));
>>> get_page(newpage);
>>> bh = head;
>> mm/migrate.c: In function '__buffer_migrate_page':
>> ./include/linux/mm_types.h:243:52: warning: assignment makes integer
>> from pointer without a cast [-Wint-conversion]
>> #define set_page_private(page, v) ((page)->private = (v))
>> ^
>> mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
>> set_page_private(newpage, detach_page_private(page));
>> ^~~~~~~~~~~~~~~~
>>
>> The fact that set_page_private(detach_page_private()) generates a type
>> mismatch warning seems deeply wrong, surely.
>>
>> Please let's get the types sorted out - either unsigned long or void *,
>> not half-one and half-the other. Whatever needs the least typecasting
>> at callsites, I suggest.
>
> Sorry about that, I should have noticed the warning before. I will
> double check whether other places need the typecast or not, then
> send a new version.
Only this patch missed the typecast. I guess I just need to send an
updated patch to replace this one (I am fine with sending a new patch
set if you want), sorry again for the trouble.
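For reference, since detach_page_private() returns a void * while
set_page_private() takes an unsigned long, the updated patch presumably only
needs an explicit cast at this call site; a sketch of that one-line change,
not necessarily the exact hunk that was resent:

-	set_page_private(newpage, detach_page_private(page));
+	set_page_private(newpage, (unsigned long)detach_page_private(page));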
>
>> And can we please implement set_page_private() and page_private() with
>> inlined C code? There is no need for these to be macros.
>
> Just did a quick change.
>
> -#define page_private(page) ((page)->private)
> -#define set_page_private(page, v) ((page)->private = (v))
> +static inline unsigned long page_private(struct page *page)
> +{
> + return page->private;
> +}
> +
> +static inline void set_page_private(struct page *page, unsigned long
> priv_data)
> +{
> + page->private = priv_data;
> +}
>
> Then I get an error like:
>
> fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
> fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
> u.v = &page_private(page);
> ^
>
> I guess it is better to keep page_private as a macro; please correct
> me in case I missed something.
Lots of problems would need to be fixed if page_private were changed to an
inline function, so I think it is better to keep it as a macro and only
convert set_page_private.
mm/compaction.c: In function ‘isolate_migratepages_block’:
./include/linux/compiler.h:287:20: error: lvalue required as unary ‘&’
operand
__read_once_size(&(x), __u.__c, sizeof(x)); \
^
./include/linux/compiler.h:293:22: note: in expansion of macro ‘__READ_ONCE’
#define READ_ONCE(x) __READ_ONCE(x, 1)
^~~~~~~~~~~
mm/internal.h:293:34: note: in expansion of macro ‘READ_ONCE’
#define page_order_unsafe(page) READ_ONCE(page_private(page))
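For reference, a sketch of the partial conversion described above:
page_private() has to stay a macro because callers such as
&page_private(page) and READ_ONCE(page_private(page)) need an lvalue, while
set_page_private() is only ever used as a statement and can become an inline
function:

/* In include/linux/mm_types.h; a sketch, not the exact patch: */

#define page_private(page)		((page)->private)

static inline void set_page_private(struct page *page, unsigned long private)
{
	page->private = private;
}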
Thanks,
Guoqing
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-17 21:47 ` [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code Guoqing Jiang
2020-05-19 5:12 ` Andrew Morton
@ 2020-05-21 22:52 ` Dave Chinner
2020-05-22 7:18 ` Guoqing Jiang
1 sibling, 1 reply; 25+ messages in thread
From: Dave Chinner @ 2020-05-21 22:52 UTC (permalink / raw)
To: Guoqing Jiang; +Cc: akpm, viro, linux-fsdevel, linux-kernel, hch, willy
On Sun, May 17, 2020 at 11:47:18PM +0200, Guoqing Jiang wrote:
> We can cleanup code a little by call detach_page_private here.
>
> Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
> ---
> No change since RFC V3.
>
> mm/migrate.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5fed0305d2ec..f99502bc113c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
> if (rc != MIGRATEPAGE_SUCCESS)
> goto unlock_buffers;
>
> - ClearPagePrivate(page);
> - set_page_private(newpage, page_private(page));
> - set_page_private(page, 0);
> - put_page(page);
> + set_page_private(newpage, detach_page_private(page));
attach_page_private(newpage, detach_page_private(page));
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
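For reference, the one-liner Dave suggests expands to roughly the open-coded
sequence that the patch removes (going by the helpers introduced in patch 01),
which is also why the separate get_page(newpage) and the later
SetPagePrivate(newpage) become unnecessary. A sketch of the expansion:

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Illustration only; the function name is made up. */
static void move_page_private(struct page *page, struct page *newpage)
{
	/* detach_page_private(page), roughly (the real helper also bails
	 * out early if PG_private is not set): */
	void *data = (void *)page_private(page);

	ClearPagePrivate(page);
	set_page_private(page, 0);
	put_page(page);

	/* attach_page_private(newpage, data), roughly: */
	get_page(newpage);
	set_page_private(newpage, (unsigned long)data);
	SetPagePrivate(newpage);
}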
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-21 22:52 ` Dave Chinner
@ 2020-05-22 7:18 ` Guoqing Jiang
2020-05-22 23:53 ` Andrew Morton
0 siblings, 1 reply; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-22 7:18 UTC (permalink / raw)
To: Dave Chinner; +Cc: akpm, viro, linux-fsdevel, linux-kernel, hch, willy
Hi Dave,
On 5/22/20 12:52 AM, Dave Chinner wrote:
> On Sun, May 17, 2020 at 11:47:18PM +0200, Guoqing Jiang wrote:
>> We can cleanup code a little by call detach_page_private here.
>>
>> Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
>> ---
>> No change since RFC V3.
>>
>> mm/migrate.c | 5 +----
>> 1 file changed, 1 insertion(+), 4 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 5fed0305d2ec..f99502bc113c 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>> if (rc != MIGRATEPAGE_SUCCESS)
>> goto unlock_buffers;
>>
>> - ClearPagePrivate(page);
>> - set_page_private(newpage, page_private(page));
>> - set_page_private(page, 0);
>> - put_page(page);
>> + set_page_private(newpage, detach_page_private(page));
> attach_page_private(newpage, detach_page_private(page));
Matthew had suggested it as follows, but I am not sure if we can
reorder the setup of the bh and the setting of PagePrivate, so I didn't
want to break the original ordering.
@@ -797,11 +797,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
if (rc != MIGRATEPAGE_SUCCESS)
goto unlock_buffers;
- ClearPagePrivate(page);
- set_page_private(newpage, page_private(page));
- set_page_private(page, 0);
- put_page(page);
- get_page(newpage);
+ attach_page_private(newpage, detach_page_private(page));
bh = head;
do {
@@ -810,8 +806,6 @@ static int __buffer_migrate_page(struct address_space *mapping,
} while (bh != head);
- SetPagePrivate(newpage);
-
if (mode != MIGRATE_SYNC_NO_COPY)
[1].
https://lore.kernel.org/lkml/20200502004158.GD29705@bombadil.infradead.org/
[2].
https://lore.kernel.org/lkml/e4d5ddc0-877f-6499-f697-2b7c0ddbf386@cloud.ionos.com/
Thanks,
Guoqing
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-22 7:18 ` Guoqing Jiang
@ 2020-05-22 23:53 ` Andrew Morton
2020-05-24 19:02 ` Guoqing Jiang
0 siblings, 1 reply; 25+ messages in thread
From: Andrew Morton @ 2020-05-22 23:53 UTC (permalink / raw)
To: Guoqing Jiang; +Cc: Dave Chinner, viro, linux-fsdevel, linux-kernel, hch, willy
On Fri, 22 May 2020 09:18:25 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
> >> - ClearPagePrivate(page);
> >> - set_page_private(newpage, page_private(page));
> >> - set_page_private(page, 0);
> >> - put_page(page);
> >> + set_page_private(newpage, detach_page_private(page));
> > attach_page_private(newpage, detach_page_private(page));
>
> Mattew had suggested it as follows, but not sure if we can reorder of
> the setup of
> the bh and setting PagePrivate, so I didn't want to break the original
> syntax.
>
> @@ -797,11 +797,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
> if (rc != MIGRATEPAGE_SUCCESS)
> goto unlock_buffers;
>
> - ClearPagePrivate(page);
> - set_page_private(newpage, page_private(page));
> - set_page_private(page, 0);
> - put_page(page);
> - get_page(newpage);
> + attach_page_private(newpage, detach_page_private(page));
>
> bh = head;
> do {
> @@ -810,8 +806,6 @@ static int __buffer_migrate_page(struct address_space *mapping,
>
> } while (bh != head);
>
> - SetPagePrivate(newpage);
> -
> if (mode != MIGRATE_SYNC_NO_COPY)
This is OK - coherency between PG_private and the page's buffer
ring is maintained by holding lock_page().
I have (effectively) applied the above change.
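To spell out the locking argument: a consumer that takes the page lock before
looking at PG_private and page->private will see them change together, which
(as noted above) is what makes the reordering safe. A made-up illustration of
such a consumer, not actual mm/ or fs/ code:

#include <linux/buffer_head.h>
#include <linux/pagemap.h>

/* Illustration only; the function name is made up. */
static struct buffer_head *buffers_of_locked_page(struct page *page)
{
	struct buffer_head *head = NULL;

	lock_page(page);	/* serialises against __buffer_migrate_page() */
	if (PagePrivate(page))
		head = (struct buffer_head *)page_private(page);
	unlock_page(page);

	return head;
}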
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 10/10] mm/migrate.c: call detach_page_private to cleanup code
2020-05-22 23:53 ` Andrew Morton
@ 2020-05-24 19:02 ` Guoqing Jiang
0 siblings, 0 replies; 25+ messages in thread
From: Guoqing Jiang @ 2020-05-24 19:02 UTC (permalink / raw)
To: Andrew Morton; +Cc: Dave Chinner, viro, linux-fsdevel, linux-kernel, hch, willy
On 5/23/20 1:53 AM, Andrew Morton wrote:
> On Fri, 22 May 2020 09:18:25 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
>
>>>> - ClearPagePrivate(page);
>>>> - set_page_private(newpage, page_private(page));
>>>> - set_page_private(page, 0);
>>>> - put_page(page);
>>>> + set_page_private(newpage, detach_page_private(page));
>>> attach_page_private(newpage, detach_page_private(page));
>> Matthew had suggested it as follows, but I am not sure if we can
>> reorder the setup of the bh and the setting of PagePrivate, so I
>> didn't want to break the original ordering.
>>
>> @@ -797,11 +797,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>> if (rc != MIGRATEPAGE_SUCCESS)
>> goto unlock_buffers;
>>
>> - ClearPagePrivate(page);
>> - set_page_private(newpage, page_private(page));
>> - set_page_private(page, 0);
>> - put_page(page);
>> - get_page(newpage);
>> + attach_page_private(newpage, detach_page_private(page));
>>
>> bh = head;
>> do {
>> @@ -810,8 +806,6 @@ static int __buffer_migrate_page(struct address_space *mapping,
>>
>> } while (bh != head);
>>
>> - SetPagePrivate(newpage);
>> -
>> if (mode != MIGRATE_SYNC_NO_COPY)
> This is OK - coherency between PG_private and the page's buffer
> ring is maintained by holding lock_page().
I appreciate your explanation.
Thanks,
Guoqing
> I have (effectively) applied the above change.
^ permalink raw reply [flat|nested] 25+ messages in thread