LKML Archive on lore.kernel.org
From: Stefano Garzarella <sgarzare@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	"Michael S . Tsirkin" <mst@redhat.com>
Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
Date: Wed, 29 May 2019 12:58:32 +0200	[thread overview]
Message-ID: <20190529105832.oz3sagbne5teq3nt@steredhat> (raw)
In-Reply-To: <9ac9fc4b-5c39-2503-dfbb-660a7bdcfbfd@redhat.com>

On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> 
> On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > We flush all pending works before calling vdev->config->reset(vdev),
> > but other works can be queued before vdev->config->del_vqs(vdev),
> > so we add another flush after it to avoid a use-after-free.
> > 
> > Suggested-by: Michael S. Tsirkin <mst@redhat.com>
> > Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> > ---
> >   net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
> >   1 file changed, 17 insertions(+), 6 deletions(-)
> > 
> > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > index e694df10ab61..ad093ce96693 100644
> > --- a/net/vmw_vsock/virtio_transport.c
> > +++ b/net/vmw_vsock/virtio_transport.c
> > @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
> >   	return ret;
> >   }
> > +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
> > +{
> > +	flush_work(&vsock->loopback_work);
> > +	flush_work(&vsock->rx_work);
> > +	flush_work(&vsock->tx_work);
> > +	flush_work(&vsock->event_work);
> > +	flush_work(&vsock->send_pkt_work);
> > +}
> > +
> >   static void virtio_vsock_remove(struct virtio_device *vdev)
> >   {
> >   	struct virtio_vsock *vsock = vdev->priv;
> > @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> >   	mutex_lock(&the_virtio_vsock_mutex);
> >   	the_virtio_vsock = NULL;
> > -	flush_work(&vsock->loopback_work);
> > -	flush_work(&vsock->rx_work);
> > -	flush_work(&vsock->tx_work);
> > -	flush_work(&vsock->event_work);
> > -	flush_work(&vsock->send_pkt_work);
> > -
> >   	/* Reset all connected sockets when the device disappear */
> >   	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
> > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> >   	vsock->event_run = false;
> >   	mutex_unlock(&vsock->event_lock);
> > +	/* Flush all pending works */
> > +	virtio_vsock_flush_works(vsock);
> > +
> >   	/* Flush all device writes and interrupts, device will not use any
> >   	 * more buffers.
> >   	 */
> > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> >   	/* Delete virtqueues and flush outstanding callbacks if any */
> >   	vdev->config->del_vqs(vdev);
> > +	/* Other works can be queued before 'config->del_vqs()', so we flush
> > +	 * all works before freeing the vsock object to avoid a use-after-free.
> > +	 */
> > +	virtio_vsock_flush_works(vsock);
> 
> 
> Some questions after a quick glance:
> 
> 1) It looks to me that the work could be queued from the path of
> vsock_transport_cancel_pkt() . Is that synchronized here?
>

Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
queue work from the upper layer (socket).

Setting the_virtio_vsock to NULL should synchronize them, but after a more
careful look a rare issue could happen:
we set the_virtio_vsock to NULL at the start of .remove() and we free the
object it points to at the end of .remove(), so
virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
running, accessing the object that we have just freed.
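
To make the window concrete, the send path today looks roughly like this
(simplified sketch, error handling and details omitted):

    static int virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
    {
        struct virtio_vsock *vsock;
        int len = pkt->len;

        vsock = virtio_vsock_get();     /* plain read of the_virtio_vsock */
        if (!vsock) {
            virtio_transport_free_pkt(pkt);
            return -ENODEV;
        }

        /* If .remove() frees 'vsock' after the check above, everything
         * below is a use after free.
         */
        spin_lock_bh(&vsock->send_pkt_list_lock);
        list_add_tail(&pkt->list, &vsock->send_pkt_list);
        spin_unlock_bh(&vsock->send_pkt_list_lock);

        queue_work(virtio_vsock_workqueue, &vsock->send_pkt_work);

        return len;
    }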

Should I use something like RCU to prevent this issue?

    virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
    {
        rcu_read_lock();
        vsock = rcu_dereference(the_virtio_vsock);
        ...
        rcu_read_unlock();
    }

    virtio_vsock_remove()
    {
        rcu_assign_pointer(the_virtio_vsock, NULL);
        synchronize_rcu();

        ...

        kfree(vsock);
    }

Could there be a better approach?


> 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
> needed? It looks to me we've already done that, except that we need to flush
> rx_work at the end, since send_pkt_work can requeue rx_work.

The main reason for tx_run/rx_run/event_run is to prevent a worker
function from running while we are calling config->reset().

E.g. if an interrupt arrives between virtio_vsock_flush_works() and
config->reset(), it can queue new works that access the device while
we are in config->reset().

IMHO they are still needed.
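
To recap the guard pattern from patch 2/4, the rx worker now does roughly
the following (simplified sketch, processing details omitted):

    static void virtio_transport_rx_work(struct work_struct *work)
    {
        struct virtio_vsock *vsock =
            container_of(work, struct virtio_vsock, rx_work);

        mutex_lock(&vsock->rx_lock);

        /* .remove() clears rx_run under rx_lock before config->reset(),
         * so a work queued too late bails out here and never touches
         * the virtqueues while the device is being reset.
         */
        if (!vsock->rx_run)
            goto out;

        /* ... process buffers from the rx virtqueue ... */

    out:
        mutex_unlock(&vsock->rx_lock);
    }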

What do you think?


Thanks for your questions,
Stefano

Thread overview: 19+ messages
2019-05-28 10:56 [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefano Garzarella
2019-05-28 10:56 ` [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock' Stefano Garzarella
2019-05-30  4:28   ` David Miller
2019-05-30 10:27     ` Stefano Garzarella
2019-05-28 10:56 ` [PATCH 2/4] vsock/virtio: stop workers during the .remove() Stefano Garzarella
2019-05-28 10:56 ` [PATCH 3/4] vsock/virtio: fix flush of works " Stefano Garzarella
2019-05-29  3:22   ` Jason Wang
2019-05-29 10:58     ` Stefano Garzarella [this message]
2019-05-30  9:46       ` Jason Wang
2019-05-30 10:10         ` Stefano Garzarella
2019-05-30 11:59           ` Jason Wang
2019-05-31  8:18             ` Stefano Garzarella
2019-05-31  9:56               ` Jason Wang
2019-06-06  8:11                 ` Stefano Garzarella
2019-06-13  8:57                   ` Jason Wang
2019-06-27 10:26                     ` Stefano Garzarella
2019-05-28 10:56 ` [PATCH 4/4] vsock/virtio: free used buffers " Stefano Garzarella
2019-06-10 13:09 ` [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefan Hajnoczi
2019-06-27 10:05   ` Stefano Garzarella
