LKML Archive on lore.kernel.org
From: Long Li <firstname.lastname@example.org>
To: Greg Kroah-Hartman <email@example.com>,
Cc: Bart Van Assche <firstname.lastname@example.org>,
Jonathan Corbet <email@example.com>,
KY Srinivasan <firstname.lastname@example.org>,
Haiyang Zhang <email@example.com>,
Stephen Hemminger <firstname.lastname@example.org>,
Wei Liu <email@example.com>, Dexuan Cui <firstname.lastname@example.org>,
Bjorn Andersson <email@example.com>,
Hans de Goede <firstname.lastname@example.org>,
"Williams, Dan J" <email@example.com>,
Maximilian Luz <firstname.lastname@example.org>,
Mike Rapoport <email@example.com>,
Ben Widawsky <firstname.lastname@example.org>,
Jiri Slaby <email@example.com>,
Andra Paraschiv <firstname.lastname@example.org>,
Siddharth Gupta <email@example.com>,
Hannes Reinecke <firstname.lastname@example.org>
Subject: RE: [Patch v5 0/3] Introduce a driver to support host accelerated access to Microsoft Azure Blob for Azure VM
Date: Mon, 11 Oct 2021 17:55:41 +0000 [thread overview]
Message-ID: <BY5PR21MB1506118F97D27E3D402A5102CEB59@BY5PR21MB1506.namprd21.prod.outlook.com> (raw)
> Subject: Re: [Patch v5 0/3] Introduce a driver to support host accelerated access
> to Microsoft Azure Blob for Azure VM
> On Fri, Oct 08, 2021 at 01:11:02PM +0200, Vitaly Kuznetsov wrote:
> > Greg Kroah-Hartman <email@example.com> writes:
> > ...
> > >
> > > Not to mention the whole crazy idea of "let's implement our REST api
> > > that used to go over a network connection over an ioctl instead!"
> > > That's the main problem that you need to push back on here.
> > >
> > > What is forcing you to put all of this into the kernel in the first
> > > place? What's wrong with the userspace network connection/protocol
> > > that you have today?
> > >
> > > Does this mean that we now have to implement all REST apis that
> > > people dream up as ioctl interfaces over a hyperv transport? That
> > > would be insane.
> > As far as I understand, the purpose of the driver is to replace a "slow"
> > network connection to API endpoint with a "fast" transport over Vmbus.
> Given that the network connection is already over vmbus, how is this "slow"
> today? I have yet to see any benchmark numbers anywhere :(
The problem statement and benchmark numbers are in this patch; they may have gotten lost in the long discussion. I'm pasting them again here:
Azure Blob storage is Microsoft's object storage solution for the cloud. Users or client applications can access objects in Blob storage via HTTP from anywhere in the world. Objects in Blob storage are accessible via the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage client library. The Blob storage interface is not designed to be POSIX-compliant.
Problem: When a client accesses Blob storage via HTTP, the request must cross the Azure Blob storage boundary and traverse multiple intermediate servers before reaching the backend storage server. This is also true for an Azure VM.
Solution: For an Azure VM, Blob storage access can be accelerated by having the Azure host execute the Blob storage requests against the backend storage server directly.
This driver implements a VSC (Virtual Service Client) that accelerates Blob storage access for an Azure VM by communicating with a VSP (Virtual Service Provider) on the Azure host. Instead of using HTTP to access Blob storage, the Azure VM passes the Blob storage request to the VSP on the Azure host. The Azure host then uses its native network to perform the Blob storage request against the backend server directly.
This driver doesn't implement the Blob storage APIs themselves. It acts as a fast channel that passes user-mode Blob storage requests to the Azure host. The user-mode program using this driver implements the Blob storage APIs and packages each request as structured data for the VSC. The request is modeled as three user-provided buffers (request, response, and data buffers), patterned on the HTTP model used by existing Azure Blob clients. The VSC passes those buffers to the VSP for execution.
The driver optimizes Blob storage access for an Azure VM in two ways:
1. The Blob storage requests are performed by the Azure host to the Azure Blob backend storage server directly.
2. It allows the Azure host to use transport technologies (e.g. RDMA) that are available to the host but not to the VM to reach the Azure Blob backend servers.
Test results using this driver for an Azure VM:
100 Blob clients running on an Azure VM, each reading 100GB Block Blobs.
(10 TB total read data)
With REST API over HTTP: 94.4 mins
Using this driver: 72.5 mins
Performance (measured in throughput) gain: 30%.
> > So what if instead of implementing this new driver we just use Hyper-V
> > Vsock and move API endpoint to the host?
> What is running on the host in the hypervisor that is supposed to be handling
> these requests? Isn't that really on some other guest?
The requests are handled by Hyper-V via a dedicated Blob service on behalf of the VM. This service runs in the Hyper-V root partition and serves all the VMs on that Hyper-V server. The request to the Blob server is sent by this service over the native TCP or RDMA transport used by the Azure backend.
> greg k-h