Netdev Archive on lore.kernel.org
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: Jason Xing <kerneljasonxing@gmail.com>
Cc: "Jesper Dangaard Brouer" <jbrouer@redhat.com>,
"Jesse Brandeburg" <jesse.brandeburg@intel.com>,
"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
"David Miller" <davem@davemloft.net>,
kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
hawk@kernel.org, john.fastabend@gmail.com, andrii@kernel.org,
kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
kpsingh@kernel.org, brouer@redhat.com,
intel-wired-lan@lists.osuosl.org, netdev <netdev@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
bpf@vger.kernel.org, "Jason Xing" <xingwanli@kuaishou.com>,
"Shujin Li" <lishujin@kuaishou.com>,
"Íñigo Huguet" <ihuguet@redhat.com>
Subject: Re: [PATCH] ixgbe: let the xdpdrv work with more than 64 cpus
Date: Tue, 24 Aug 2021 17:32:25 +0200 [thread overview]
Message-ID: <20210824153225.GA16546@ranger.igk.intel.com> (raw)
In-Reply-To: <CAL+tcoDERDZqtjK1BCc0vYYwYtvgRtb8H6z2FTVbGqr+N7bVmA@mail.gmail.com>
On Tue, Aug 24, 2021 at 11:23:29PM +0800, Jason Xing wrote:
> On Tue, Aug 24, 2021 at 9:32 PM Jesper Dangaard Brouer
> <jbrouer@redhat.com> wrote:
> >
> >
> >
> > On 24/08/2021 12.49, kerneljasonxing@gmail.com wrote:
> > > From: Jason Xing <xingwanli@kuaishou.com>
> > >
> > > Originally, the ixgbe driver doesn't allow xdpdrv to be attached if the
> > > server has more than 64 CPUs online, so loading xdpdrv fails with
> > > -ENOMEM.
> > >
> > > Actually, we can adjust the algorithm and make it work, which does no
> > > harm at all, as long as we cap the maximum number of XDP queues.
> >
> > This is not true, it can cause harm, because XDP transmission queues are
> > used without locking. See the driver's ndo_xdp_xmit function,
> > ixgbe_xdp_xmit(). The driver's assumption is that each CPU has its own
> > XDP TX queue.
Thanks Jesper for chiming in.
> >
>
> Point taken. I indeed missed that part, which would cause bad behavior if
> it happened.
>
> At this point, I think I should find all the allocations and uses of the
> XDP-related queues and rings, then adjust them all?
>
> Let's say the server ships with 128 CPUs: we could map the 128 CPUs to 64
> rings in ixgbe_xdp_xmit(). However, that sounds a little bit odd.
>
> Do you think that it makes any sense?
We need a fallback path for ixgbe. I did the following for ice:
https://x-lore.kernel.org/bpf/20210819120004.34392-9-maciej.fijalkowski@intel.com/T/#u
>
> Thanks,
> Jason
>
> > This patch is not a proper fix.
> >
> > I do think we need a proper fix for this issue on ixgbe.
> >
> >
> > > Fixes: 33fdc82f08 ("ixgbe: add support for XDP_TX action")
> > > Co-developed-by: Shujin Li <lishujin@kuaishou.com>
> > > Signed-off-by: Shujin Li <lishujin@kuaishou.com>
> > > Signed-off-by: Jason Xing <xingwanli@kuaishou.com>
> > > ---
> > > drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 2 +-
> > > drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 3 ---
> > > 2 files changed, 1 insertion(+), 4 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > > index 0218f6c..5953996 100644
> > > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > > @@ -299,7 +299,7 @@ static void ixgbe_cache_ring_register(struct ixgbe_adapter *adapter)
> > >
> > > static int ixgbe_xdp_queues(struct ixgbe_adapter *adapter)
> > > {
> > > - return adapter->xdp_prog ? nr_cpu_ids : 0;
> > > + return adapter->xdp_prog ? min_t(int, MAX_XDP_QUEUES, nr_cpu_ids) : 0;
> > > }
> > >
> > > #define IXGBE_RSS_64Q_MASK 0x3F
> > > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> > > index 14aea40..b36d16b 100644
> > > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> > > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> > > @@ -10130,9 +10130,6 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
> > > return -EINVAL;
> > > }
> > >
> > > - if (nr_cpu_ids > MAX_XDP_QUEUES)
> > > - return -ENOMEM;
> > > -
> > > old_prog = xchg(&adapter->xdp_prog, prog);
> > > need_reset = (!!prog != !!old_prog);
> > >
> > >
> >
Thread overview: 10+ messages
2021-08-24 10:49 kerneljasonxing
2021-08-24 13:32 ` Jesper Dangaard Brouer
2021-08-24 15:23 ` Jason Xing
2021-08-24 15:32 ` Maciej Fijalkowski [this message]
2021-08-25 11:59 ` Jason Xing
2021-08-25 12:02 ` [PATCH v2] " kerneljasonxing
2021-08-26 8:51 ` Maciej Fijalkowski
2021-08-26 10:56 ` Jason Xing
2021-08-26 14:01 ` [PATCH v3] " kerneljasonxing
2021-08-26 14:04 ` Jason Xing