LKML Archive on lore.kernel.org
From: Todd Kjos <tkjos@google.com>
To: Dan Carpenter <dan.carpenter@oracle.com>
Cc: gregkh@linuxfoundation.org, christian@brauner.io,
	arve@android.com, devel@driverdev.osuosl.org,
	linux-kernel@vger.kernel.org, maco@google.com,
	joel@joelfernandes.org, kernel-team@android.com
Subject: Re: [PATCH 3/3] binder: defer copies of pre-patched txn data
Date: Wed, 24 Nov 2021 12:39:44 -0800
Message-ID: <CAHRSSEzPXwho9OBhbMFbizTgkJkFUvaQQbqN2RmK_KF8cXhofQ@mail.gmail.com>
In-Reply-To: <20211124110949.GF6514@kadam>

On Wed, Nov 24, 2021 at 3:10 AM Dan Carpenter <dan.carpenter@oracle.com> wrote:
>
> On Tue, Nov 23, 2021 at 11:17:37AM -0800, Todd Kjos wrote:
> > +/**
> > + * binder_do_deferred_txn_copies() - copy and fixup scatter-gather data
> > + * @alloc:   binder_alloc associated with @buffer
> > + * @buffer:  binder buffer in target process
> > + * @sgc_head:        list_head of scatter-gather copy list
> > + * @pf_head: list_head of pointer fixup list
> > + *
> > + * Processes all elements of @sgc_head, applying fixups from @pf_head
> > + * and copying the scatter-gather data from the source process' user
> > + * buffer to the target's buffer. It is expected that the list creation
> > + * and processing all occurs during binder_transaction() so these lists
> > + * are only accessed in local context.
> > + *
> > + * Return: 0=success, else -errno
>       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> This function is supposed to return negatives on error.
>
> > + */
> > +static int binder_do_deferred_txn_copies(struct binder_alloc *alloc,
> > +                                      struct binder_buffer *buffer,
> > +                                      struct list_head *sgc_head,
> > +                                      struct list_head *pf_head)
> > +{
> > +     int ret = 0;
> > +     struct list_head *entry, *tmp;
> > +     struct binder_ptr_fixup *pf =
> > +             list_first_entry_or_null(pf_head, struct binder_ptr_fixup,
> > +                                      node);
> > +
> > +     list_for_each_safe(entry, tmp, sgc_head) {
> > +             size_t bytes_copied = 0;
> > +             struct binder_sg_copy *sgc =
> > +                     container_of(entry, struct binder_sg_copy, node);
> > +
> > +             while (bytes_copied < sgc->length) {
> > +                     size_t copy_size;
> > +                     size_t bytes_left = sgc->length - bytes_copied;
> > +                     size_t offset = sgc->offset + bytes_copied;
> > +
> > +                     /*
> > +                      * We copy up to the fixup (pointed to by pf)
> > +                      */
> > +                     copy_size = pf ? min(bytes_left, (size_t)pf->offset - offset)
> > +                                    : bytes_left;
> > +                     if (!ret && copy_size)
> > +                             ret = binder_alloc_copy_user_to_buffer(
>                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> "ret" is the number of bytes remaining to be copied.

Good catch. Thanks. Will fix.
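
Since binder_alloc_copy_user_to_buffer() returns the number of bytes
it could *not* copy rather than an -errno, the result needs an
intermediate rather than going straight into "ret". An untested sketch
of one way to keep the documented 0/-errno contract (the actual v2 may
do this differently):

			if (!ret && copy_size) {
				unsigned long bytes_left_uncopied;

				bytes_left_uncopied =
					binder_alloc_copy_user_to_buffer(
						alloc, buffer, offset,
						sgc->uaddr + bytes_copied,
						copy_size);
				/*
				 * a short copy means we faulted on the
				 * source user buffer
				 */
				if (bytes_left_uncopied)
					ret = -EFAULT;
			}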

>
>
> > +                                             alloc, buffer,
> > +                                             offset,
> > +                                             sgc->uaddr + bytes_copied,
> > +                                             copy_size);
> > +                     bytes_copied += copy_size;
> > +                     if (copy_size != bytes_left) {
> > +                             BUG_ON(!pf);
> > +                             /* we stopped at a fixup offset */
> > +                             if (pf->skip_size) {
> > +                                     /*
> > +                                      * we are just skipping. This is for
> > +                                      * BINDER_TYPE_FDA where the translated
> > +                                      * fds will be fixed up when we get
> > +                                      * to target context.
> > +                                      */
> > +                                     bytes_copied += pf->skip_size;
> > +                             } else {
> > +                                     /* apply the fixup indicated by pf */
> > +                                     if (!ret)
> > +                                             ret = binder_alloc_copy_to_buffer(
> > +                                                     alloc, buffer,
> > +                                                     pf->offset,
> > +                                                     &pf->fixup_data,
> > +                                                     sizeof(pf->fixup_data));
> > +                                     bytes_copied += sizeof(pf->fixup_data);
> > +                             }
> > +                             list_del(&pf->node);
> > +                             kfree(pf);
> > +                             pf = list_first_entry_or_null(pf_head,
> > +                                             struct binder_ptr_fixup, node);
> > +                     }
> > +             }
> > +             list_del(&sgc->node);
> > +             kfree(sgc);
> > +     }
> > +     BUG_ON(!list_empty(pf_head));
> > +     BUG_ON(!list_empty(sgc_head));
> > +
> > +     return ret;
> > +}
> > +
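
For reference, the two list node types used above are defined earlier
in this patch. Reconstructed from the field accesses in this hunk (see
the full patch for the exact definitions):

struct binder_sg_copy {
	binder_size_t offset;		/* binder buffer offset in target */
	const void __user *uaddr;	/* user address in source process */
	size_t length;			/* bytes to copy */
	struct list_head node;
};

struct binder_ptr_fixup {
	binder_size_t offset;		/* fixup location in target buffer */
	size_t skip_size;		/* bytes to skip (BINDER_TYPE_FDA) */
	binder_uintptr_t fixup_data;	/* translated value to write */
	struct list_head node;
};
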
> > +/**
> > + * binder_cleanup_deferred_txn_lists() - free specified lists
> > + * @sgc_head:        list_head of scatter-gather copy list
> > + * @pf_head: list_head of pointer fixup list
> > + *
> > + * Called to clean up @sgc_head and @pf_head if there is an
> > + * error.
> > + */
> > +static void binder_cleanup_deferred_txn_lists(struct list_head *sgc_head,
> > +                                           struct list_head *pf_head)
> > +{
> > +     struct list_head *entry, *tmp;
> > +
> > +     list_for_each_safe(entry, tmp, sgc_head) {
> > +             struct binder_sg_copy *sgc =
> > +                     container_of(entry, struct binder_sg_copy, node);
> > +             list_del(&sgc->node);
> > +             kfree(sgc);
> > +     }
> > +     list_for_each_safe(entry, tmp, pf_head) {
> > +             struct binder_ptr_fixup *pf =
> > +                     container_of(entry, struct binder_ptr_fixup, node);
> > +             list_del(&pf->node);
> > +             kfree(pf);
> > +     }
> > +}
> > +
> > +/**
> > + * binder_defer_copy() - queue a scatter-gather buffer for copy
> > + * @sgc_head:        list_head of scatter-gather copy list
> > + * @offset:  binder buffer offset in target process
> > + * @uaddr:   user address in source process
> > + * @length:  bytes to copy
> > + *
> > + * Specify a scatter-gather block to be copied. The actual copy must
> > + * be deferred until all the needed fixups are identified and queued.
> > + * Then the copy and fixups are done together so un-translated values
> > + * from the source are never visible in the target buffer.
> > + *
> > + * We are guaranteed that repeated calls to this function will have
> > + * monotonically increasing @offset values so the list will naturally
> > + * be ordered.
> > + *
> > + * Return: 0=success, else -errno
> > + */
> > +static int binder_defer_copy(struct list_head *sgc_head, binder_size_t offset,
> > +                          const void __user *uaddr, size_t length)
> > +{
> > +     struct binder_sg_copy *bc = kzalloc(sizeof(*bc), GFP_KERNEL);
> > +
> > +     if (!bc)
> > +             return -ENOMEM;
> > +
> > +     bc->offset = offset;
> > +     bc->uaddr = uaddr;
> > +     bc->length = length;
> > +     INIT_LIST_HEAD(&bc->node);
> > +
> > +     /*
> > +      * We are guaranteed that the deferred copies are in-order
> > +      * so just add to the tail.
> > +      */
> > +     list_add_tail(&bc->node, sgc_head);
> > +
> > +     return 0;
> > +}
> > +
> > +/**
> > + * binder_add_fixup() - queue a fixup to be applied to sg copy
> > + * @pf_head: list_head of binder ptr fixup list
> > + * @offset:  binder buffer offset in target process
> > + * @fixup:   bytes to be copied for fixup
> > + * @skip_size:       bytes to skip when copying (fixup will be applied later)
> > + *
> > + * Add the specified fixup to a list ordered by @offset. When copying
> > + * the scatter-gather buffers, the fixup will be copied instead of
> > + * data from the source buffer. For BINDER_TYPE_FDA fixups, the fixup
> > + * will be applied later (in target process context), so we just skip
> > + * the bytes specified by @skip_size. If @skip_size is 0, we copy the
> > + * value in @fixup.
> > + *
> > + * This function is called *mostly* in @offset order, but there are
> > + * exceptions. Since out-of-order inserts are relatively uncommon,
> > + * we insert the new element by searching backward from the tail of
> > + * the list.
> > + *
> > + * Return: 0=success, else -errno
> > + */
> > +static int binder_add_fixup(struct list_head *pf_head, binder_size_t offset,
> > +                         binder_uintptr_t fixup, size_t skip_size)
> > +{
> > +     struct binder_ptr_fixup *pf = kzalloc(sizeof(*pf), GFP_KERNEL);
> > +     struct list_head *tmp;
> > +
> > +     if (!pf)
> > +             return -ENOMEM;
> > +
> > +     pf->offset = offset;
> > +     pf->fixup_data = fixup;
> > +     pf->skip_size = skip_size;
> > +     INIT_LIST_HEAD(&pf->node);
> > +
> > +     /* Fixups are *mostly* added in-order, but there are some
> > +      * exceptions. Look backwards through list for insertion point.
> > +      */
> > +     if (!list_empty(pf_head)) {
>
> This condition is not required.  If the list is empty, the loop body
> never runs and we fall through to the final list_add() at the head,
> which on an empty list is the same as adding to the tail.

Good point.
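
Since list_for_each_prev() simply does not iterate when the list is
empty, dropping the check collapses the two insert paths into one. An
untested sketch of the simplified tail of binder_add_fixup():

	/* Fixups are *mostly* added in-order, but there are some
	 * exceptions. Look backwards through list for insertion point.
	 */
	list_for_each_prev(tmp, pf_head) {
		struct binder_ptr_fixup *tmppf =
			list_entry(tmp, struct binder_ptr_fixup, node);

		if (tmppf->offset < pf->offset) {
			list_add(&pf->node, tmp);
			return 0;
		}
	}
	/*
	 * if we get here, the new offset is the lowest, so insert at
	 * the head (for an empty list this is also the tail)
	 */
	list_add(&pf->node, pf_head);
	return 0;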

>
> > +             list_for_each_prev(tmp, pf_head) {
> > +                     struct binder_ptr_fixup *tmppf =
> > +                             list_entry(tmp, struct binder_ptr_fixup, node);
> > +
> > +                     if (tmppf->offset < pf->offset) {
> > +                             list_add(&pf->node, tmp);
> > +                             return 0;
> > +                     }
> > +             }
> > +             /*
> > +              * if we get here, then the new offset is the lowest so
> > +              * insert at the head
> > +              */
> > +             list_add(&pf->node, pf_head);
> > +             return 0;
> > +     }
> > +     list_add_tail(&pf->node, pf_head);
> > +     return 0;
> > +}
> > +
>
> regards,
> dan carpenter

Dan, thanks for the detailed review on all 3 patches.
