Netdev Archive on lore.kernel.org
From: John Fastabend <john.fastabend@gmail.com>
To: Lorenzo Bianconi <lorenzo@kernel.org>, bpf@vger.kernel.org
Cc: netdev@vger.kernel.org, lorenzo.bianconi@redhat.com,
	davem@davemloft.net, kuba@kernel.org, ast@kernel.org,
	daniel@iogearbox.net, alexander.duyck@gmail.com,
	brouer@redhat.com, echaudro@redhat.com,
	magnus.karlsson@intel.com, tirthendu.sarkar@intel.com,
	toke@redhat.com
Subject: RE: [PATCH bpf-next 2/2] net: xdp: add xdp_update_skb_shared_info utility routine
Date: Mon, 12 Jul 2021 11:46:22 -0700	[thread overview]
Message-ID: <60ec8dfeb42aa_50e1d20857@john-XPS-13-9370.notmuch> (raw)
In-Reply-To: <16f4244f5a506143f5becde501f1ecb120255b42.1625828537.git.lorenzo@kernel.org>

Lorenzo Bianconi wrote:
> Introduce xdp_update_skb_shared_info routine to update frags array
> metadata from a given xdp_buffer/xdp_frame. We do not need to reset
> frags array since it is already initialized by the driver.
> Rely on xdp_update_skb_shared_info in mvneta driver.
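
For anyone else reading along, here is my rough mental model of the new
helper, pieced together from the series. This is a paraphrase, not the
patch's exact code; the function and parameter names here are mine:

 #include <linux/skbuff.h>

 /* Sketch only: update skb metadata from the frags[] array the driver
  * already filled in, then do the pfmemalloc walk quoted further down.
  */
 static void update_skb_shared_info_sketch(struct sk_buff *skb, u8 nr_frags,
					   unsigned int size,
					   unsigned int truesize)
 {
	struct skb_shared_info *sinfo = skb_shinfo(skb);
	int i;

	sinfo->nr_frags = nr_frags;	/* frags[] initialized by the driver */
	skb->len += size;
	skb->data_len += size;
	skb->truesize += truesize;

	/* walk the frags once to propagate pfmemalloc to the skb */
	for (i = 0; i < nr_frags; i++) {
		struct page *page = skb_frag_page(&sinfo->frags[i]);

		page = compound_head(page);
		if (page_is_pfmemalloc(page)) {
			skb->pfmemalloc = true;
			break;
		}
	}
 }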

Some more context here would really help. I had to jump into the mvneta
driver to see what is happening.

So as I read this we have a loop processing the descriptors in
mvneta_rx_swbm(), roughly like this:

 mvneta_rx_swbm()
   while (rx_proc < budget && rx_proc < rx_todo) {
     if (rx_status & MVNETA_RXD_FIRST_DESC) ...
     else {
       mvneta_swbm_add_rx_fragment()
     }
     ..
     if (!(rx_status & MVNETA_RXD_LAST_DESC))
         continue;
     ..
     if (xdp_prog)
       mvneta_run_xdp(...)
   }

First question: do you ever hit !MVNETA_RXD_LAST_DESC today? I assume
this is avoided by hardware setup when XDP is enabled, otherwise
_run_xdp() would be broken, correct? Next question: given the
last-descriptor logic, what's the condition to hit the code added in
this patch? Wouldn't we need more than one descriptor, and then we
would skip the xdp_run path? Sorry, you lost me; it's probably easier
to let you give the flow vs. me spending an hour trying to track it
down.

But, in theory, as you handle the hardware descriptors you can build
up a set of pages and use them to create a single skb rather than one
skb per descriptor. Don't we already know whether pfmemalloc applies
while we are building the frag list, though? Can't we just set it
there (see the sketch after the quoted hunk below), instead of this
for loop in xdp_update_skb_shared_info(),

> +	for (i = 0; i < nr_frags; i++) {
> +		struct page *page = skb_frag_page(&sinfo->frags[i]);
> +
> +		page = compound_head(page);
> +		if (page_is_pfmemalloc(page)) {
> +			skb->pfmemalloc = true;
> +			break;
> +		}
> +	}
> +}
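
i.e. something like this in mvneta_swbm_add_rx_fragment() (untested
sketch; the 'pfmemalloc' bit on the driver-side shared info is my
invention to illustrate the idea, it doesn't exist today):

	if (data_len > 0 && xdp_sinfo->nr_frags < MAX_SKB_FRAGS) {
		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags];

		skb_frag_off_set(frag, pp->rx_offset_correction);
		skb_frag_size_set(frag, data_len);
		__skb_frag_set_page(frag, page);

		/* hypothetical: record pfmemalloc as each frag is added,
		 * so the skb build path copies one bit instead of
		 * re-walking the whole frags array
		 */
		if (page_is_pfmemalloc(compound_head(page)))
			xdp_sinfo->pfmemalloc = true;
		...
	}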

...

> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 361bc4fbe20b..abf2e50880e0 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -2294,18 +2294,29 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
>  	rx_desc->buf_phys_addr = 0;
>  
>  	if (data_len > 0 && xdp_sinfo->nr_frags < MAX_SKB_FRAGS) {
> -		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags++];
> +		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags];
>  
>  		skb_frag_off_set(frag, pp->rx_offset_correction);
>  		skb_frag_size_set(frag, data_len);
>  		__skb_frag_set_page(frag, page);
> +		/* We don't need to reset pp_recycle here. It's already set, so
> +		 * just mark fragments for recycling.
> +		 */
> +		page_pool_store_mem_info(page, rxq->page_pool);
> +
> +		/* first fragment */
> +		if (!xdp_sinfo->nr_frags)
> +			xdp_sinfo->gso_type = *size;

Would be nice to also change 'int size' -> 'unsigned int size' so the
types match. Presumably you really can't have a negative size.

Also, how about giving gso_type a better name? xdp_sinfo->size maybe?
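
Roughly, illustrating both suggestions (assuming the current prototype
takes 'int *size'; a 'size' field would have to be added to
skb_shared_info, alongside the xdp_frags_tsize field from patch 1/2):

 -mvneta_swbm_add_rx_fragment(..., int *size, ...)
 +mvneta_swbm_add_rx_fragment(..., unsigned int *size, ...)
 ...
 -		xdp_sinfo->gso_type = *size;
 +		xdp_sinfo->size = *size;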


> +		xdp_sinfo->nr_frags++;
>  
>  		/* last fragment */
>  		if (len == *size) {
>  			struct skb_shared_info *sinfo;
>  
>  			sinfo = xdp_get_shared_info_from_buff(xdp);
> +			sinfo->xdp_frags_tsize = xdp_sinfo->nr_frags * PAGE_SIZE;
>  			sinfo->nr_frags = xdp_sinfo->nr_frags;
> +			sinfo->gso_type = xdp_sinfo->gso_type;
>  			memcpy(sinfo->frags, xdp_sinfo->frags,
>  			       sinfo->nr_frags * sizeof(skb_frag_t));
>  		}

Thanks,
John


Thread overview: 8+ messages
2021-07-09 11:10 [PATCH bpf-next 0/2] Add " Lorenzo Bianconi
2021-07-09 11:10 ` [PATCH bpf-next 1/2] net: skbuff: add xdp_frags_tsize field to skb_shared_info Lorenzo Bianconi
2021-07-09 11:10 ` [PATCH bpf-next 2/2] net: xdp: add xdp_update_skb_shared_info utility routine Lorenzo Bianconi
2021-07-12 18:46   ` John Fastabend [this message]
2021-07-12 20:22     ` Lorenzo Bianconi
2021-07-14 23:44       ` John Fastabend
2021-07-15 23:27         ` Lorenzo Bianconi
2021-07-16  3:05           ` John Fastabend
