LKML Archive on lore.kernel.org
From: Kent Overstreet <kent.overstreet@gmail.com>
To: Randy Dunlap <rdunlap@infradead.org>
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-bcache@vger.kernel.org, Dave Chinner <dchinner@redhat.com>,
	"Darrick J . Wong" <darrick.wong@oracle.com>,
	hare@suse.com
Subject: Re: [PATCH 1/2] bcachefs: On disk data structures
Date: Sun, 13 May 2018 18:29:48 -0400
Message-ID: <20180513222948.GA20114@kmo-pixel>
In-Reply-To: <dea41935-a52c-e720-0ca3-eef4367d3641@infradead.org>

On Sun, May 13, 2018 at 01:30:06PM -0700, Randy Dunlap wrote:
> On 05/08/2018 03:17 PM, Kent Overstreet wrote:
> > + * The btree is the primary structure, most metadata exists as keys in the
> 
> s/,/;/

nitpicky, but ok :P

> > + * various btrees. There are only a small number of btrees, they're not
> > + * sharded - we have one btree for extents, another for inodes, et cetera.
> 
>    or shared?

No, I mean sharded - e.g. we're not splitting up the extents btree into one
btree per inode.

> > +/* Btree keys - all units are in sectors */
> 
> Are sectors fixed size?  I.e., can 2 different physical storage devices have
> different sized sectors?
> or is this just the "traditional" 512-byte sector?

512-byte sectors.

Multi-device filesystems must use the same block size for each device, but
actually it just occurred to me that it probably wouldn't be that difficult to
lift that restriction. The main reason we care about block size is that btree
node entries and journal entries don't record how much padding we wrote - to
figure out where to look for the next journal/btree node entry, we take the
size of the current entry and round it up to a multiple of the block size.

But adding the block size that particular entry was written with to the header
would be pretty easy.
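
To make the layout rule concrete, here's a rough sketch - the struct and field
names below are made up for illustration, not the actual bcachefs on-disk
structures:

#include <stdint.h>

/*
 * Hypothetical entry header - stands in for a journal entry / btree node
 * entry header; the real structs carry checksums, sequence numbers, etc.
 */
struct entry_hdr {
	uint32_t	bytes;	/* size of this entry, not counting padding */
};

/* Round x up to the next multiple of a power-of-two block size: */
static inline uint64_t block_round_up(uint64_t x, uint64_t block_size)
{
	return (x + block_size - 1) & ~(block_size - 1);
}

/*
 * Entries don't record their trailing padding, so the next entry starts at
 * the current entry's size rounded up to the block size:
 */
static inline uint64_t next_entry_offset(uint64_t cur_offset,
					 const struct entry_hdr *hdr,
					 uint64_t block_size)
{
	return cur_offset + block_round_up(hdr->bytes, block_size);
}

Lifting the restriction would then just mean recording, in each entry's header,
the block size that entry was written with, and passing that to
next_entry_offset() instead of a filesystem-wide constant.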


>                                          to the start of the start  [intentional?]

Whoops, thanks

> I know that you have already answered a few comments about endianness,
> so maybe you answered this and I missed it.
> 
> Can a bcachefs fs be shared, a la NFS?  I.e., can multiple different-endian
> clients be accessing the same bcachefs?

NFS works. I haven't tested NFS with different-endian clients - I wasn't aware
there was any reason to... are there potential issues there I'm not aware of?


Thread overview: 8+ messages
2018-05-08 22:17 [PATCH 0/2] bcachefs: on disk data structures, ioctl interface - review requested for upstreaming Kent Overstreet
2018-05-08 22:17 ` [PATCH 1/2] bcachefs: On disk data structures Kent Overstreet
2018-05-11  8:32   ` Dave Chinner
2018-05-11 22:04     ` Kent Overstreet
2018-05-13 20:30   ` Randy Dunlap
2018-05-13 22:29     ` Kent Overstreet [this message]
2018-05-13 22:49       ` Randy Dunlap
2018-05-08 22:18 ` [PATCH 2/2] bcachefs: Ioctl interface Kent Overstreet
