From: Jeff Mahoney <>
To: Dave Chinner <>
Cc: Mark Fasheh <>, ...
Subject: [RFC][PATCH 0/76] vfs: 'views' for filesystems with more than one root
Date: Tue, 5 Jun 2018 16:17:04 -0400	[thread overview]
Message-ID: <>
In-Reply-To: <20180509064103.GP10363@dastard>

Sorry, just getting back to this.

On 5/9/18 2:41 AM, Dave Chinner wrote:
> On Tue, May 08, 2018 at 10:06:44PM -0400, Jeff Mahoney wrote:
>> On 5/8/18 7:38 PM, Dave Chinner wrote:
>>> On Tue, May 08, 2018 at 11:03:20AM -0700, Mark Fasheh wrote:
>>>> Hi,
>>>> The VFS's super_block covers a variety of filesystem functionality. In
>>>> particular we have a single structure representing both I/O and
>>>> namespace domains.
>>>> There are requirements to de-couple this functionality. For example,
>>>> filesystems with more than one root (such as btrfs subvolumes) can
>>>> have multiple inode namespaces. This starts to confuse userspace when
>>>> it notices multiple inodes with the same inode/device tuple on a
>>>> filesystem.
>>> Devil's Advocate - I'm not looking at the code, I'm commenting on
>>> architectural issues I see here.
>>> The XFS subvolume work I've been doing explicitly uses a superblock
>>> per subvolume. That's because subvolumes are designed to be
>>> completely independent of the backing storage - they know nothing
>>> about the underlying storage except to share a BDI for writeback
>>> purposes and write to whatever block device the remapping layer
>>> gives them at IO time.  Hence XFS subvolumes have (at this point)
>>> their own unique s_dev, on-disk format configuration, journal, space
>>> accounting, etc. i.e. They are fully independent filesystems in
>>> their own right, and as such we do not have multiple inode
>>> namespaces per superblock.
>> That's a fundamental difference between how your XFS subvolumes work and
>> how btrfs subvolumes do.
> Yup, you've just proved my point: this is not a "subvolume problem";
> but rather a "multiple namespace per root" problem.

Mark framed it as a namespace issue initially.  The btrfs subvolume
problem is the biggest one we're concerned with.  More below.

>> There is no independence among btrfs
>> subvolumes.  When a snapshot is created, it has a few new blocks but
>> otherwise shares the metadata of the source subvolume.  The metadata
>> trees are shared across all of the subvolumes and there are several
>> internal trees used to manage all of it.
> I don't need btrfs 101 stuff explained to me. :/

No offense meant.  This was crossposted to a few lists.

>> a single transaction engine.  There are housekeeping and maintenance
>> tasks that operate across the entire file system internally.  I
>> understand that there are several problems you need to solve at the VFS
>> layer to get your version of subvolumes up and running, but trying to
>> shoehorn one into the other is bound to fail.
> Actually, the VFS has provided everything I need for XFS subvolumes
> so far without requiring any sort of modifications. That's the
> perspective I'm approaching this from - if the VFS can do what we
> need for XFS subvolumes, as well as overlay (which are effectively
> VFS-based COW subvolumes), then lets see if we can make that work
> for btrfs too.

I'm glad that the VFS provides what you need for XFS subvolumes, but
you're effectively creating independent file systems using a backend
storage remapping technique.  That's pretty much exactly what the VFS is
already designed to do.  Overlayfs is definitely not just VFS-based CoW
subvolumes.  I ended up having to wade more deeply into overlayfs than I
would've liked to respond to your email.  I'd say it's better described
as: overlayfs abuses what the VFS offers just enough to work well in
common cases.  It has a similar disconnect between the superblock and
inode to btrfs except that it's not as easily resolved since the dentry
belongs to overlayfs and the inode belongs to the underlying file
system. There are still places getting this wrong in the kernel.  In
order to reliably present the right inode/dev pair for overlayfs, we
need the dentry to go with it.  We don't always have it and, even if we
did, we'd need both the dentry and inode.

At any rate, anything with a ->getattr that modifies dev or ino so that
it's different from inode->i_sb->s_dev or inode->i_ino will misbehave
somehow.  The question is just how badly, which depends on the context.
It could be something simple like /proc/pid/maps not showing the right
device/ino pair.  Many file systems that use 64-bit inode numbers
already remap them on 32-bit systems, which can already cause collisions
(even inside the kernel) when comparisons are made.  There's no excuse
not to use 64-bit inode numbers everywhere inside the kernel now,
especially since statx lets us export them to userspace as 64-bit
values even on 32-bit systems.

One thing is clear: If we want to solve the btrfs and overlayfs problems
in the same way, the view approach with a simple static mapping doesn't
work.  Sticking something between the inode and superblock doesn't get
the job done when the inode belongs to a different file system.  Overlayfs
needs a per-object remapper, which means a callback that takes a path.
Suddenly the way we do things in the SUSE kernel doesn't seem so hacky.

I'm not sure we need the same solution for btrfs and overlayfs.  It's
not the same problem.  Every object in overlayfs has a unique mapping
already.  If we report s_dev and i_ino from the inode, it still maps to
a unique user-visible object.  It may not map back to the overlayfs
name, but that's a separate issue that's more difficult to solve.  The
btrfs issue isn't one of presenting an alternative namespace to the
user.  Btrfs has multiple internal namespaces and no way to publish them
to the rest of the kernel.

>>> So this doesn't sound like a "subvolume problem" - it's a "how do we
>>> sanely support multiple independent namespaces per superblock"
>>> problem. AFAICT, this same problem exists with bind mounts and mount
>>> namespaces - they are effectively multiple roots on a single
>>> superblock, but it's done at the vfsmount level and so the
>>> superblock knows nothing about them.
>> In this case, you're talking about the user-visible file system
>> hierarchy namespace that has no bearing on the underlying file system
>> outside of per-mount flags.
> Except that it tracks and provides infrastructure that allows user
> visible  "multiple namespace per root" constructs. Subvolumes - as a
> user visible namespace construct - are little different to bind
> mounts in behaviour and functionality. 
> How the underlying filesystem implements subvolumes is really up to
> the filesystem, but we should be trying to implement a clean model
> for "multiple namespaces on a single root" at the VFS so we have
> consistent behaviour across all filesystems that implement similar
> functionality.
> FWIW, bind mounts and overlay also have similar inode number
> namespace problems to what Mark describes for btrfs subvolumes.
> e.g. overlay recently introduced the "xino" mount option to separate
> the user presented inode number namespace for overlay inode from the
> underlying parent filesystem inodes. How is that different to btrfs
> subvolumes needing to present different inode number namespaces from
> the underlying parent?
> This sort of "namespace shifting" is needed for several different
> pieces of information the kernel reports to userspace. The VFS
> replacement for shiftfs is an example of this. So is inode number
> remapping. I'm sure there's more.

Djalal's work?  That rightfully belongs above the file system layer.
The only bit that's needed within the file system is the part that sets
the super flag to enable it.  The different namespaces have nothing to
do with the file system below it.

> My point is that if we are talking about infrastructure to remap
> what userspace sees from different mountpoint views into a
> filesystem, then it should be done above the filesystem layers in
> the VFS so all filesystems behave the same way. And in this case,
> the vfsmount maps exactly to the "fs_view" that Mark has proposed we
> add to the superblock.....

It's proposed to sit above the superblock, with a default view embedded
in the superblock.  The view sits between the inode and the superblock,
so we have access to it anywhere we already have an inode.  That's the
main difference: we already have the inode everywhere it's needed.  Plumbing
a vfsmount everywhere needed means changing code that only requires an
inode and doesn't need a vfsmount.

The two biggest problem areas:
- Writeback tracepoints print a dev/inode pair.  Do we want to plumb a
vfsmount into __mark_inode_dirty, super_operations->write_inode,
__writeback_single_inode, writeback_sb_inodes, etc?
- Audit.  As it happens, most of audit has a path or file that can be
used.  We do run into problems with fsnotify.  fsnotify_move is called
from vfs_rename which turns into a can of worms pretty quickly.

>> It makes sense for that to be above the
>> superblock because the file system doesn't care about them.  We're
>> interested in the inode namespace, which for every other file system can
>> be described using an inode and a superblock pair, but btrfs has another
>> layer in the middle: inode -> btrfs_root -> superblock. 
> Which seems to me to be irrelevant if there's a vfsmount per
> subvolume that can hold per-subvolume information.

I disagree.  There are a ton of places where we only have access to an
inode and only need access to an inode.  It also doesn't solve the
overlayfs issue.

>>> So this kinda feels like there's still an impedance mismatch between
>>> btrfs subvolumes being mounted as subtrees on the underlying root
>>> vfsmount rather than being created as truly independent vfs
>>> namespaces that share a superblock. To put that as a question: why
>>> aren't btrfs subvolumes vfsmounts in their own right, with the
>>> unique subvolume information stored in (or obtained from) the
>>> vfsmount?
>> Those are two separate problems.   Using a vfsmount to export the
>> btrfs_root is on my roadmap.  I have a WIP patch set that automounts the
>> subvolumes when stepping into a new one, but it's to fix a longstanding
>> UX wart.
> IMO that's more than a UX wart - the lack of vfsmounts for internal
> subvolume mount point traversals could be considered the root cause
> of the issues we are discussing here. Extending the mounted
> namespace should trigger vfs mounts, not be hidden deep inside the
> filesystem. Hence I'd suggest this needs changing before anything
> else....

I disagree.  Yes, it would be nice if btrfs did this, but it doesn't get
us the dev/ino pair to places where we need one, like the depths of audit.

>>>> During the discussion, one question did come up - why can't
>>>> filesystems like Btrfs use a superblock per subvolume? There's a
>>>> couple of problems with that:
>>>> - It's common for a single Btrfs filesystem to have thousands of
>>>>   subvolumes. So keeping a superblock for each subvol in memory would
>>>>   get prohibitively expensive - imagine having 8000 copies of struct
>>>>   super_block for a file system just because we wanted some separation
>>>>   of say, s_dev.
>>> That's no different to using individual overlay mounts for the
>>> thousands of containers that are on the system. This doesn't seem to
>>> be a major problem...
>> Overlay mounts are independent of one another and don't need coordination
>> among them.  The memory usage is relatively unimportant.  The important
>> part is having a bunch of superblocks that all correspond to the same
>> resources and coordinating them at the VFS level.  Your assumptions
>> below follow how your XFS subvolumes work, where there's a clear
>> hierarchy between the subvolumes and the master filesystem with a
>> mapping layer between them.  Btrfs subvolumes have no such hierarchy.
>> Everything is shared. 
> Yup, that's the impedance mismatch between the VFS infrastructure
> and btrfs that I was talking about. What I'm trying to communicate
> is that I think the proposal is attacking the impedance mismatch
> from the wrong direction.
> i.e. The proposal is to modify btrfs code to propagate stuff that
> btrfs needs to know to deal with its internal "everything is
> shared" problems up into the VFS where it's probably not useful to
> anything other than btrfs. We already have the necessary construct
> in the VFS - I think we should be trying to use the information held
> by the generic VFS infrastructure to solve the specific
> btrfs issue at hand....

The many namespaces to one I/O domain relationship isn't a
btrfs-internal problem.  It's visible to the user and it's visible
inside the kernel.  The view implementation gives the VFS a generic
mechanism to allow it to represent that.  The plan isn't to only use the
view for that but to also allow separate security contexts for things
like containers.  Those have a similar issue to the inode availability
discussed above.  We have superblocks where we need them.  We could have
a view just as easily.  Plumbing a vfsmount is the wrong approach.

>> So while we could use a writeback hierarchy to
>> merge all of the inode lists before doing writeback on the 'master'
>> superblock, we'd gain nothing by it.  Handling anything involving
>> s_umount with a superblock per subvolume would be a nightmare.
>> Ultimately, it would be a ton of effort that amounts to working around
>> the VFS instead of with it.
> I'm not suggesting that btrfs needs to use a superblock per
> subvolume. Please don't confuse my statements along the lines of
> "this doesn't seem to be a problem for others" with "you must change
> btrfs to do this". I'm just saying that the problems arising from
> using a superblock per subvolume are not as dire as is being
> implied.

It's possible to solve the btrfs problem this way but it adds a bunch of
complexity for... what benefit, exactly? As I said, it's working around
the VFS instead of working with it.  As Mark initially stated, we handle
I/O and namespace domains.  Btrfs has many namespaces and only one I/O
domain.


Jeff Mahoney

