LKML Archive on lore.kernel.org
From: Stefano Garzarella <sgarzare@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
Date: Thu, 6 Jun 2019 10:11:09 +0200
Message-ID: <20190606081109.gdx4rsly5i6gtg57@steredhat>
In-Reply-To: <dbc9964c-65b1-0993-488b-cb44aea55e90@redhat.com>

On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
> On 2019/5/31 4:18 PM, Stefano Garzarella wrote:
> > On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
> > > On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> > > > On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> > > > > On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> > > > > > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > > > > > On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > > > > > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > >      	vsock->event_run = false;
> > > > > > > >      	mutex_unlock(&vsock->event_lock);
> > > > > > > > +	/* Flush all pending works */
> > > > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > > > > +
> > > > > > > >      	/* Flush all device writes and interrupts, device will not use any
> > > > > > > >      	 * more buffers.
> > > > > > > >      	 */
> > > > > > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > >      	/* Delete virtqueues and flush outstanding callbacks if any */
> > > > > > > >      	vdev->config->del_vqs(vdev);
> > > > > > > > +	/* Other works can be queued before 'config->del_vqs()', so we flush
> > > > > > > > +	 * all works before freeing the vsock object to avoid a use after free.
> > > > > > > > +	 */
> > > > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > > > Some questions after a quick glance:
> > > > > > > 
> > > > > > > 1) It looks to me that the work could be queued from the path of
> > > > > > > vsock_transport_cancel_pkt(). Is that synchronized here?
> > > > > > > 
> > > > > > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > > > > > queue work from the upper layer (socket).
> > > > > > 
> > > > > > Setting the_virtio_vsock to NULL should synchronize, but after a careful
> > > > > > look a rare issue could happen:
> > > > > > we set the_virtio_vsock to NULL at the start of .remove() and we
> > > > > > free the object it points to at the end of .remove(), so
> > > > > > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > > > > > running and access the object that we have just freed.
> > > > > Yes, that's my point.
> > > > > 
> > > > > 
> > > > > > Should I use something like RCU to prevent this issue?
> > > > > > 
> > > > > >        virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> > > > > >        {
> > > > > >            rcu_read_lock();
> > > > > >            vsock = rcu_dereference(the_virtio_vsock);
> > > > > RCU is probably the way to go (like what vhost_transport_send_pkt() did).
> > > > > 
> > > > Okay, I'm going this way.
> > > > 
> > > > > >            ...
> > > > > >            rcu_read_unlock();
> > > > > >        }
> > > > > > 
> > > > > >        virtio_vsock_remove()
> > > > > >        {
> > > > > >            rcu_assign_pointer(the_virtio_vsock, NULL);
> > > > > >            synchronize_rcu();
> > > > > > 
> > > > > >            ...
> > > > > > 
> > > > > >            free(vsock);
> > > > > >        }
> > > > > > 
> > > > > > Could there be a better approach?
> > > > > > 
> > > > > > 
> > > > > > > 2) If we decide to flush after del_vqs(), are tx_run/rx_run/event_run still
> > > > > > > needed? It looks to me we've already done that, except that we need to flush
> > > > > > > rx_work at the end, since send_pkt_work can requeue rx_work.
> > > > > > The main reason for tx_run/rx_run/event_run is to prevent a worker
> > > > > > function from running while we are calling config->reset().
> > > > > > 
> > > > > > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > > > > > config->reset(), it can queue new works that can access the device while
> > > > > > we are in config->reset().
> > > > > > 
> > > > > > IMHO they are still needed.
> > > > > > 
> > > > > > What do you think?
> > > > > I mean, could we simply flush once after reset, without the tx_run/rx_run
> > > > > tricks?
> > > > > 
> > > > > reset();
> > > > > 
> > > > > virtio_vsock_flush_works();
> > > > > 
> > > > > virtio_vsock_free_buf();
> > > > My only doubt is:
> > > > is it safe to call config->reset() while a worker function could access
> > > > the device?
> > > > 
> > > > I had this doubt after reading Michael's advice [1] and looking at
> > > > virtnet_remove(), where these lines appear before config->reset():
> > > > 
> > > > 	/* Make sure no work handler is accessing the device. */
> > > > 	flush_work(&vi->config_work);
> > > > 
> > > > Thanks,
> > > > Stefano
> > > > 
> > > > [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org
> > > 
> > > Good point. Then I agree with you. But if we can use RCU to detect the
> > > detach of the device from the socket for these, it would be even better.
> > > 
> > What about checking 'the_virtio_vsock' in the worker functions, inside an RCU
> > critical section?
> > That way, I can remove rx_run/tx_run/event_run.
> > 
> > Do you think it's cleaner?
> 
> 
> Yes, I think so.
> 

Hi Jason,
while I was trying to use RCU for the workers as well, I discovered that it
cannot be used in sections that can sleep (the workers take mutexes, allocate
memory, etc.).
There is SRCU, but I think the rx_run/tx_run/event_run approach is cleaner.

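The problem in a nutshell (just a sketch, not the actual worker code; the
names are the ones used in the driver):

	rcu_read_lock();
	vsock = rcu_dereference(the_virtio_vsock);
	if (vsock)
		mutex_lock(&vsock->rx_lock);	/* may sleep: not allowed
						 * inside an RCU read-side
						 * critical section */
	rcu_read_unlock();
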
So, if you agree, I'd send a v2 using RCU only for
virtio_transport_send_pkt() and vsock_transport_cancel_pkt(), and leave
this patch as is to make sure that no one is accessing the device while we
call config->reset().
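
To make it concrete, this is roughly the pattern I have in mind for the v2
(just a sketch, assuming 'the_virtio_vsock' gets annotated __rcu; the parts
that stay the same as today are summarized in comments):

	static struct virtio_vsock __rcu *the_virtio_vsock;

	static int
	virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
	{
		struct virtio_vsock *vsock;
		int len = pkt->len;

		rcu_read_lock();
		vsock = rcu_dereference(the_virtio_vsock);
		if (!vsock) {
			virtio_transport_free_pkt(pkt);
			len = -ENODEV;
			goto out_rcu;
		}

		/* queue the packet and kick send_pkt_work, as today */

	out_rcu:
		rcu_read_unlock();
		return len;
	}

	static void virtio_vsock_remove(struct virtio_device *vdev)
	{
		struct virtio_vsock *vsock = vdev->priv;

		/* New callers of send_pkt()/cancel_pkt() will see NULL ... */
		rcu_assign_pointer(the_virtio_vsock, NULL);
		/* ... and here we wait for the ones already running. */
		synchronize_rcu();

		/* Then, as in this patch: stop the workers, flush, reset the
		 * device, free the buffers, del_vqs(), flush again, and only
		 * at the end free the vsock object.
		 */
		kfree(vsock);
	}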

Thanks,
Stefano

Thread overview: 19+ messages
2019-05-28 10:56 [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefano Garzarella
2019-05-28 10:56 ` [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock' Stefano Garzarella
2019-05-30  4:28   ` David Miller
2019-05-30 10:27     ` Stefano Garzarella
2019-05-28 10:56 ` [PATCH 2/4] vsock/virtio: stop workers during the .remove() Stefano Garzarella
2019-05-28 10:56 ` [PATCH 3/4] vsock/virtio: fix flush of works " Stefano Garzarella
2019-05-29  3:22   ` Jason Wang
2019-05-29 10:58     ` Stefano Garzarella
2019-05-30  9:46       ` Jason Wang
2019-05-30 10:10         ` Stefano Garzarella
2019-05-30 11:59           ` Jason Wang
2019-05-31  8:18             ` Stefano Garzarella
2019-05-31  9:56               ` Jason Wang
2019-06-06  8:11                 ` Stefano Garzarella [this message]
2019-06-13  8:57                   ` Jason Wang
2019-06-27 10:26                     ` Stefano Garzarella
2019-05-28 10:56 ` [PATCH 4/4] vsock/virtio: free used buffers " Stefano Garzarella
2019-06-10 13:09 ` [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove() Stefan Hajnoczi
2019-06-27 10:05   ` Stefano Garzarella
