Netdev Archive on lore.kernel.org
From: Michael Kelley <mikelley@microsoft.com>
To: Boqun Feng <boqun.feng@gmail.com>
Cc: "linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"linux-input@vger.kernel.org" <linux-input@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	KY Srinivasan <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Jiri Kosina <jikos@kernel.org>,
	Benjamin Tissoires <benjamin.tissoires@redhat.com>,
	Dmitry Torokhov <dmitry.torokhov@gmail.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	"James E.J. Bottomley" <jejb@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>
Subject: RE: [RFC v2 11/11] scsi: storvsc: Support PAGE_SIZE larger than 4K
Date: Sat, 5 Sep 2020 15:37:47 +0000	[thread overview]
Message-ID: <MW2PR2101MB10526293A0A67DA6716BE615D72A0@MW2PR2101MB1052.namprd21.prod.outlook.com> (raw)
In-Reply-To: <20200905141503.GD7503@debian-boqun.qqnc3lrjykvubdpftowmye0fmh.lx.internal.cloudapp.net>

From: Boqun Feng <boqun.feng@gmail.com> Sent: Saturday, September 5, 2020 7:15 AM
> 
> On Sat, Sep 05, 2020 at 02:55:48AM +0000, Michael Kelley wrote:
> > From: Boqun Feng <boqun.feng@gmail.com> Sent: Tuesday, September 1, 2020 8:01 PM
> > >
> > > Hyper-V always uses a 4k page size (HV_HYP_PAGE_SIZE), so when
> > > communicating with Hyper-V, a guest should always use HV_HYP_PAGE_SIZE
> > > as the unit for page-related data. For storvsc, the data is
> > > vmbus_packet_mpb_array. And since scsi_cmnd uses a sglist of pages (in
> > > units of PAGE_SIZE), we need to convert the pages in the sglist of
> > > scsi_cmnd into Hyper-V pages in vmbus_packet_mpb_array.
> > >
> > > This patch does the conversion by dividing pages in the sglist into
> > > Hyper-V pages; the offset and indexes in vmbus_packet_mpb_array are
> > > recalculated accordingly.
> > >
> > > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > > ---
> > >  drivers/scsi/storvsc_drv.c | 60 ++++++++++++++++++++++++++++++++++----
> > >  1 file changed, 54 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> > > index 8f5f5dc863a4..3f6610717d4e 100644
> > > --- a/drivers/scsi/storvsc_drv.c
> > > +++ b/drivers/scsi/storvsc_drv.c
> > > @@ -1739,23 +1739,71 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
> > >  	payload_sz = sizeof(cmd_request->mpb);
> > >
> > >  	if (sg_count) {
> > > -		if (sg_count > MAX_PAGE_BUFFER_COUNT) {
> > > +		unsigned int hvpg_idx = 0;
> > > +		unsigned int j = 0;
> > > +		unsigned long hvpg_offset = sgl->offset & ~HV_HYP_PAGE_MASK;
> > > +		unsigned int hvpg_count = HVPFN_UP(hvpg_offset + length);
> > >
> > > -			payload_sz = (sg_count * sizeof(u64) +
> > > +		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
> > > +
> > > +			payload_sz = (hvpg_count * sizeof(u64) +
> > >  				      sizeof(struct vmbus_packet_mpb_array));
> > >  			payload = kzalloc(payload_sz, GFP_ATOMIC);
> > >  			if (!payload)
> > >  				return SCSI_MLQUEUE_DEVICE_BUSY;
> > >  		}
> > >
> > > +		/*
> > > +		 * sgl is a list of PAGEs, and payload->range.pfn_array
> > > +		 * expects the page number in units of HV_HYP_PAGE_SIZE (the
> > > +		 * page size that Hyper-V uses), so here we need to divide
> > > +		 * PAGEs into HV_HYP_PAGEs in case PAGE_SIZE > HV_HYP_PAGE_SIZE.
> > > +		 */
> > >  		payload->range.len = length;
> > > -		payload->range.offset = sgl[0].offset;
> > > +		payload->range.offset = sgl[0].offset & ~HV_HYP_PAGE_MASK;
> > > +		hvpg_idx = sgl[0].offset >> HV_HYP_PAGE_SHIFT;
> > >
> > >  		cur_sgl = sgl;
> > > -		for (i = 0; i < sg_count; i++) {
> > > -			payload->range.pfn_array[i] =
> > > -				page_to_pfn(sg_page((cur_sgl)));
> > > +		for (i = 0, j = 0; i < sg_count; i++) {
> > > +			/*
> > > +			 * "PAGE_SIZE / HV_HYP_PAGE_SIZE - hvpg_idx" is the #
> > > +			 * of HV_HYP_PAGEs in the current PAGE.
> > > +			 *
> > > +			 * "hvpg_count - j" is the # of unhandled HV_HYP_PAGEs.
> > > +			 *
> > > +			 * As shown in the following, the minimum of the two
> > > +			 * is the # of HV_HYP_PAGEs we need to handle in this
> > > +			 * PAGE.
> > > +			 *
> > > +			 * |------------------ PAGE ----------------------|
> > > +			 * |   PAGE_SIZE / HV_HYP_PAGE_SIZE in total      |
> > > +			 * |hvpg|hvpg| ...                 |hvpg|... |hvpg|
> > > +			 *           ^                     ^
> > > +			 *         hvpg_idx                |
> > > +			 *           ^                     |
> > > +			 *           +---(hvpg_count - j)--+
> > > +			 *
> > > +			 * or
> > > +			 *
> > > +			 * |------------------ PAGE ----------------------|
> > > +			 * |   PAGE_SIZE / HV_HYP_PAGE_SIZE in total      |
> > > +			 * |hvpg|hvpg| ...                 |hvpg|... |hvpg|
> > > +			 *           ^                                           ^
> > > +			 *         hvpg_idx                                      |
> > > +			 *           ^                                           |
> > > +			 *           +---(hvpg_count - j)------------------------+
> > > +			 */
> > > +			unsigned int nr_hvpg = min((unsigned int)(PAGE_SIZE / HV_HYP_PAGE_SIZE) - hvpg_idx,
> > > +						   hvpg_count - j);
> > > +			unsigned int k;
> > > +
> > > +			for (k = 0; k < nr_hvpg; k++) {
> > > +				payload->range.pfn_array[j] =
> > > +					page_to_hvpfn(sg_page((cur_sgl))) + hvpg_idx + k;
> > > +				j++;
> > > +			}
> > >  			cur_sgl = sg_next(cur_sgl);
> > > +			hvpg_idx = 0;
> > >  		}
> >
> > This code works; I don't see any errors.  But I think it can be made simpler
> > by doing two things:
> > 1)  Rather than iterating over sg_count and having to calculate nr_hvpg on
> > each iteration, base the exit decision on having filled up the pfn_array[].  You've
> > already calculated the exact size of the array that is needed given the data
> > length, so it's easy to exit when the array is full.
> > 2) In the inner loop, iterate from hvpg_idx to PAGE_SIZE/HV_HYP_PAGE_SIZE
> > rather than from 0 to a calculated value.
> >
> > Also, as an optimization, pull page_to_hvpfn(sg_page(cur_sgl)) out of the
> > inner loop.
> >
> > I think this code does it (though I haven't tested it):
> >
> >                 for (j = 0; ; sgl = sg_next(sgl)) {
> >                         unsigned int k;
> >                         unsigned long pfn;
> >
> >                         pfn = page_to_hvpfn(sg_page(sgl));
> >                         for (k = hvpg_idx; k < (unsigned int)(PAGE_SIZE / HV_HYP_PAGE_SIZE); k++) {
> >                                 payload->range.pfn_array[j] = pfn + k;
> >                                 if (++j == hvpg_count)
> >                                         goto done;
> >                         }
> >                         hvpg_idx = 0;
> >                 }
> > done:
> >
> > This approach also makes the limit of the inner loop a constant, and that
> > constant will be 1 when page size is 4K.  So the compiler should be able to
> > optimize away the loop in that case.
> >
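> > For anyone who wants to sanity-check the indexing, here is a quick
> > user-space model of that loop (illustrative only -- not the driver code;
> > the page sizes and stand-in pfn values below are made up, and plain
> > arrays replace the sgl):
> >
> > #include <stdio.h>
> >
> > #define PAGE_SIZE        16384UL        /* pretend guest page size: 16K */
> > #define HV_HYP_PAGE_SIZE  4096UL        /* Hyper-V page size: 4K */
> >
> > int main(void)
> > {
> >         /* stand-ins for page_to_hvpfn(sg_page(sgl)) of three sg entries */
> >         unsigned long sg_pfn[] = { 0x100, 0x200, 0x300 };
> >         unsigned long offset = 0x3000, length = 0x9000;
> >         unsigned int hvpg_idx = offset / HV_HYP_PAGE_SIZE;
> >         unsigned int hvpg_count = (offset % HV_HYP_PAGE_SIZE + length +
> >                                    HV_HYP_PAGE_SIZE - 1) / HV_HYP_PAGE_SIZE;
> >         unsigned long pfn_array[16];
> >         unsigned int i = 0, j = 0, k;
> >
> >         for (;; i++) {                  /* i plays the role of sg_next() */
> >                 unsigned long pfn = sg_pfn[i];
> >
> >                 for (k = hvpg_idx; k < PAGE_SIZE / HV_HYP_PAGE_SIZE; k++) {
> >                         pfn_array[j] = pfn + k;
> >                         if (++j == hvpg_count)
> >                                 goto done;
> >                 }
> >                 hvpg_idx = 0;
> >         }
> > done:
> >         /* expect 0x103, 0x200..0x203, 0x300..0x303 (9 entries) */
> >         for (k = 0; k < hvpg_count; k++)
> >                 printf("pfn_array[%u] = 0x%lx\n", k, pfn_array[k]);
> >         return 0;
> > }
> >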
> 
> Good point! I like your suggestion, and after thinking a bit harder
> based on your approach, I came up with the following:
> 
> #define HV_HYP_PAGES_IN_PAGE ((unsigned int)(PAGE_SIZE / HV_HYP_PAGE_SIZE))
> 
> 		for (j = 0; j < hvpg_count; j++) {
> 			unsigned int k = (j + hvpg_idx) % HV_HYP_PAGES_IN_PAGE;
> 
> 			/*
> 			 * Two cases where we need to fetch a page:
> 			 * a) j == 0: the first iteration, or
> 			 * b) k == 0: when we cross the boundary of a
> 			 * PAGE.
> 			 */
> 			if (k == 0 || j == 0) {
> 				pfn = page_to_hvpfn(sg_page(cur_sgl));
> 				cur_sgl = sg_next(cur_sgl);
> 			}
> 
> 			payload->range.pfn_array[j] = pfn + k;
> 		}
> 
> Given that HV_HYP_PAGES_IN_PAGE is always a power of 2, I think
> compilers can easily optimize the "%" into a bit-masking operation. And
> when HV_HYP_PAGES_IN_PAGE is 1, compilers can easily figure out that k
> is always zero, so the if-statement is always taken and its test can be
> optimized away. And that gives us the same code as before ;-)
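> 
> As a quick user-space illustration of that strength reduction (the
> constant below is a made-up stand-in for HV_HYP_PAGES_IN_PAGE, e.g. for
> a 64K-page guest):
> 
> #include <assert.h>
> 
> #define PAGES_IN_PAGE 16U       /* stand-in: 64K / 4K */
> 
> int main(void)
> {
>         unsigned int x;
> 
>         /* for any power-of-2 N, x % N == x & (N - 1) */
>         for (x = 0; x < 4096; x++)
>                 assert(x % PAGES_IN_PAGE == (x & (PAGES_IN_PAGE - 1U)));
>         return 0;
> }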
> 
> Thoughts? I will try with a test to see if I'm missing something subtle.
> 
> Thanks for looking into this!
> 

Your newest version looks right to me -- very clever!  I like it even better 
than my version.

Michael


Thread overview: 21+ messages
2020-09-02  3:00 [RFC v2 00/11] Hyper-V: " Boqun Feng
2020-09-02  3:00 ` [RFC v2 01/11] Drivers: hv: vmbus: Always use HV_HYP_PAGE_SIZE for gpadl Boqun Feng
2020-09-02  3:00 ` [RFC v2 02/11] Drivers: hv: vmbus: Move __vmbus_open() Boqun Feng
2020-09-05  0:04   ` Michael Kelley
2020-09-02  3:00 ` [RFC v2 03/11] Drivers: hv: vmbus: Introduce types of GPADL Boqun Feng
2020-09-05  0:19   ` Michael Kelley
2020-09-06  4:51     ` Boqun Feng
2020-09-02  3:01 ` [RFC v2 04/11] Drivers: hv: Use HV_HYP_PAGE in hv_synic_enable_regs() Boqun Feng
2020-09-02  3:01 ` [RFC v2 05/11] Drivers: hv: vmbus: Move virt_to_hvpfn() to hyperv header Boqun Feng
2020-09-02  3:01 ` [RFC v2 06/11] hv: hyperv.h: Introduce some hvpfn helper functions Boqun Feng
2020-09-02  3:01 ` [RFC v2 07/11] hv_netvsc: Use HV_HYP_PAGE_SIZE for Hyper-V communication Boqun Feng
2020-09-05  0:30   ` Michael Kelley
2020-09-07  0:09     ` Boqun Feng
2020-09-02  3:01 ` [RFC v2 08/11] Input: hyperv-keyboard: Make ringbuffer at least take two pages Boqun Feng
2020-09-02  3:01 ` [RFC v2 09/11] HID: hyperv: " Boqun Feng
2020-09-02  8:18   ` Jiri Kosina
2020-09-02  3:01 ` [RFC v2 10/11] Driver: hv: util: " Boqun Feng
2020-09-02  3:01 ` [RFC v2 11/11] scsi: storvsc: Support PAGE_SIZE larger than 4K Boqun Feng
2020-09-05  2:55   ` Michael Kelley
2020-09-05 14:15     ` Boqun Feng
2020-09-05 15:37       ` Michael Kelley [this message]
