Netdev Archive on lore.kernel.org
From: Jussi Maki <joamaki@gmail.com>
To: Jay Vosburgh <jay.vosburgh@canonical.com>
Cc: bpf <bpf@vger.kernel.org>,
	Network Development <netdev@vger.kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andy Gospodarek <andy@greyhouse.net>,
	vfalico@gmail.com, Andrii Nakryiko <andrii@kernel.org>,
	Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
	"Karlsson, Magnus" <magnus.karlsson@intel.com>
Subject: Re: [PATCH bpf-next v2 0/4] XDP bonding support
Date: Mon, 5 Jul 2021 13:32:47 +0300	[thread overview]
Message-ID: <CAHn8xcn0CpcmX7fzkWwHWJpXTxVD=9XScCsDYT-bg3+d+5M5zA@mail.gmail.com> (raw)
In-Reply-To: <31459.1625163601@famine>

On Thu, Jul 1, 2021 at 9:20 PM Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>
> joamaki@gmail.com wrote:
>
> >From: Jussi Maki <joamaki@gmail.com>
> >
> >This patchset introduces XDP support to the bonding driver.
> >
> >The motivation for this change is to enable use of bonding (and
> >802.3ad) in hairpinning L4 load-balancers such as [1] implemented with
> >XDP and also to transparently support bond devices for projects that
> >use XDP given most modern NICs have dual port adapters.  An alternative
> >to this approach would be to implement 802.3ad in user-space and
> >implement the bonding load-balancing in the XDP program itself, but
> >this is rather a cumbersome endeavor in terms of slave device management
> >(e.g. by watching netlink) and requires separate programs for the native
> >vs. bond cases for the orchestrator. A native in-kernel implementation
> >overcomes these issues and provides more flexibility.
> >
> >Below are benchmark results from two machines with 100Gbit
> >Intel E810 (ice) NICs, a 32-core 3970X on the sending machine and a
> >16-core 3950X on the receiving machine. 64-byte packets were sent with
> >pktgen-dpdk at full rate. Two issues [2, 3] were identified with the
> >ice driver, so the tests were performed with iommu=off and patch [2]
> >applied. Additionally, the bonding round-robin algorithm was modified
> >to use per-CPU tx counters, as the shared rr_tx_counter was causing
> >high CPU load (50% vs 10%) and a high rate of cache misses. A fix for
> >this has already been merged into net-next. The statistics were
> >collected using "sar -n dev -u 1 10".
> >
> > -----------------------|  CPU  |--| rxpck/s |--| txpck/s |----
> > without patch (1 dev):
> >   XDP_DROP:              3.15%      48.6Mpps
> >   XDP_TX:                3.12%      18.3Mpps     18.3Mpps
> >   XDP_DROP (RSS):        9.47%      116.5Mpps
> >   XDP_TX (RSS):          9.67%      25.3Mpps     24.2Mpps
> > -----------------------
> > with patch, bond (1 dev):
> >   XDP_DROP:              3.14%      46.7Mpps
> >   XDP_TX:                3.15%      13.9Mpps     13.9Mpps
> >   XDP_DROP (RSS):        10.33%     117.2Mpps
> >   XDP_TX (RSS):          10.64%     25.1Mpps     24.0Mpps
> > -----------------------
> > with patch, bond (2 devs):
> >   XDP_DROP:              6.27%      92.7Mpps
> >   XDP_TX:                6.26%      17.6Mpps     17.5Mpps
> >   XDP_DROP (RSS):       11.38%      117.2Mpps
> >   XDP_TX (RSS):         14.30%      28.7Mpps     27.4Mpps
> > --------------------------------------------------------------
>
>         To be clear, the fact that the performance numbers for XDP_DROP
> and XDP_TX are lower for "with patch, bond (1 dev)" than "without patch
> (1 dev)" is expected, correct?

Yes, that is correct. With the patch the ndo callback for choosing the
slave device is invoked, which in this test (mode=xor) hashes the L2 and
L3 headers (I seem to have failed to mention this in the original
message). In round-robin mode I recall it being about 16Mpps versus
the 18Mpps without the patch. I also tried "INDIRECT_CALL" to avoid
going via ndo_ops, but that had no discernible effect.
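
To make the slave selection above concrete, here is a minimal sketch of how
a layer2+3 style hash could be computed directly from an xdp_buff and used to
pick the transmit slave. This is only an illustration of the idea, not the
actual patch code; the function and field names (xdp_l23_hash, slaves->arr,
and so on) are made up for this example.

/* Minimal sketch, not the actual patch code: a layer2+3 hash taken
 * straight from an xdp_buff. Names here are illustrative only. */
#include <linux/etherdevice.h>
#include <linux/ip.h>
#include <linux/jhash.h>
#include <net/xdp.h>

static u32 xdp_l23_hash(struct xdp_buff *xdp)
{
	struct ethhdr *eth = xdp->data;
	u32 hash;

	if ((void *)(eth + 1) > xdp->data_end)
		return 0;

	/* Layer 2: destination and source MAC addresses. */
	hash = jhash(eth, 2 * ETH_ALEN, 0);

	if (eth->h_proto == htons(ETH_P_IP)) {
		struct iphdr *iph = (void *)(eth + 1);

		/* Layer 3: fold in the IPv4 addresses when present. */
		if ((void *)(iph + 1) <= xdp->data_end)
			hash = jhash_2words(iph->saddr, iph->daddr, hash);
	}
	return hash;
}

/* The transmit slave would then be picked along the lines of:
 *   slave = slaves->arr[xdp_l23_hash(xdp) % slaves->count];
 * where "slaves" stands in for whatever slave array the bond maintains.
 */

The per-CPU rr_tx_counter change mentioned in the quoted cover letter is a
similar story: instead of every CPU bumping one shared counter (and bouncing
its cache line around), each CPU advances its own copy. A hedged sketch of
that idea, again with made-up names:

#include <linux/percpu.h>

struct bond_rr_state {
	u32 __percpu *rr_tx_counter;	/* one counter per CPU */
};

static u32 bond_rr_next_slave_id(struct bond_rr_state *s, unsigned int nslaves)
{
	/* Each CPU increments its own counter, so round-robin TX from
	 * many cores no longer contends on a single cache line. */
	return (this_cpu_inc_return(*s->rr_tx_counter) - 1) % nslaves;
}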

Thread overview: 71+ messages
2021-06-09 13:55 [PATCH bpf-next 0/3] " Jussi Maki
2021-06-09 13:55 ` [PATCH bpf-next 1/3] net: bonding: Add XDP support to the bonding driver Jussi Maki
2021-06-09 22:29   ` Maciej Fijalkowski
2021-06-09 23:29   ` Jay Vosburgh
2021-06-14  8:02     ` Jussi Maki
2021-06-17  3:40   ` kernel test robot
2021-06-17  6:35   ` kernel test robot
2021-06-22  7:24   ` kernel test robot
2021-06-09 13:55 ` [PATCH bpf-next 2/3] net: bonding: Use per-cpu rr_tx_counter Jussi Maki
2021-06-10  0:04   ` Jay Vosburgh
2021-06-14  7:54     ` Jussi Maki
2021-06-09 13:55 ` [PATCH bpf-next 3/3] selftests/bpf: Add tests for XDP bonding Jussi Maki
2021-06-09 22:07   ` Maciej Fijalkowski
2021-06-14  8:08     ` Jussi Maki
2021-06-14  8:48       ` Magnus Karlsson
2021-06-14 12:20         ` Jussi Maki
2021-06-10 17:24 ` [PATCH bpf-next 0/3] XDP bonding support Andrii Nakryiko
2021-06-14 12:25   ` Jussi Maki
2021-06-14 15:37     ` Jay Vosburgh
2021-06-15  5:34     ` Andrii Nakryiko
2021-06-24  9:18 ` [PATCH bpf-next v2 0/4] " joamaki
2021-06-24  9:18   ` [PATCH bpf-next v2 1/4] net: bonding: Refactor bond_xmit_hash for use with xdp_buff joamaki
2021-06-24  9:18   ` [PATCH bpf-next v2 2/4] net: core: Add support for XDP redirection to slave device joamaki
2021-06-24  9:18   ` [PATCH bpf-next v2 3/4] net: bonding: Add XDP support to the bonding driver joamaki
2021-06-24  9:18   ` [PATCH bpf-next v2 4/4] devmap: Exclude XDP broadcast to master device joamaki
2021-07-01 18:12     ` Jay Vosburgh
2021-07-05 11:44       ` Jussi Maki
2021-07-01 18:20   ` [PATCH bpf-next v2 0/4] XDP bonding support Jay Vosburgh
2021-07-05 10:32     ` Jussi Maki [this message]
2021-07-07 11:25 ` [PATCH bpf-next v3 0/5] " Jussi Maki
2021-07-07 11:25   ` [PATCH bpf-next v3 1/5] net: bonding: Refactor bond_xmit_hash for use with xdp_buff Jussi Maki
2021-07-07 11:25   ` [PATCH bpf-next v3 2/5] net: core: Add support for XDP redirection to slave device Jussi Maki
2021-07-07 11:25   ` [PATCH bpf-next v3 3/5] net: bonding: Add XDP support to the bonding driver Jussi Maki
2021-07-13  7:14     ` kernel test robot
2021-07-07 11:25   ` [PATCH bpf-next v3 4/5] devmap: Exclude XDP broadcast to master device Jussi Maki
2021-07-07 11:25   ` [PATCH bpf-next v3 5/5] net: core: Allow netdev_lower_get_next_private_rcu in bh context Jussi Maki
2021-07-28 23:43 ` [PATCH bpf-next v4 0/6] XDP bonding support joamaki
2021-07-28 23:43   ` [PATCH bpf-next v4 1/6] net: bonding: Refactor bond_xmit_hash for use with xdp_buff joamaki
2021-07-28 23:43   ` [PATCH bpf-next v4 2/6] net: core: Add support for XDP redirection to slave device joamaki
2021-07-28 23:43   ` [PATCH bpf-next v4 3/6] net: bonding: Add XDP support to the bonding driver joamaki
2021-07-28 23:43   ` [PATCH bpf-next v4 4/6] devmap: Exclude XDP broadcast to master device joamaki
2021-07-28 23:43   ` [PATCH bpf-next v4 5/6] net: core: Allow netdev_lower_get_next_private_rcu in bh context joamaki
2021-07-28 23:43   ` [PATCH bpf-next v4 6/6] selftests/bpf: Add tests for XDP bonding joamaki
2021-08-03  0:19     ` Andrii Nakryiko
2021-08-03  9:40       ` Jussi Maki
2021-07-30  6:18 ` [PATCH bpf-next v5 0/7] XDP bonding support Jussi Maki
2021-07-30  6:18   ` [PATCH bpf-next v5 1/7] net: bonding: Refactor bond_xmit_hash for use with xdp_buff Jussi Maki
2021-07-30  6:18   ` [PATCH bpf-next v5 2/7] net: core: Add support for XDP redirection to slave device Jussi Maki
2021-07-30  6:18   ` [PATCH bpf-next v5 3/7] net: bonding: Add XDP support to the bonding driver Jussi Maki
2021-07-30  6:18   ` [PATCH bpf-next v5 4/7] devmap: Exclude XDP broadcast to master device Jussi Maki
2021-07-30  6:18   ` [PATCH bpf-next v5 5/7] net: core: Allow netdev_lower_get_next_private_rcu in bh context Jussi Maki
2021-07-30  6:18   ` [PATCH bpf-next v5 6/7] selftests/bpf: Fix xdp_tx.c prog section name Jussi Maki
2021-08-04 23:35     ` Andrii Nakryiko
2021-07-30  6:18   ` [PATCH bpf-next v5 7/7] selftests/bpf: Add tests for XDP bonding Jussi Maki
2021-08-04 23:33     ` Andrii Nakryiko
2021-07-31  5:57 ` [PATCH bpf-next v6 0/7]: XDP bonding support Jussi Maki
2021-07-31  5:57   ` [PATCH bpf-next v6 1/7] net: bonding: Refactor bond_xmit_hash for use with xdp_buff Jussi Maki
2021-08-11  1:52     ` Jonathan Toppins
2021-08-11  8:22       ` Jussi Maki
2021-08-11 14:05         ` Jonathan Toppins
2021-08-16  9:05           ` Jussi Maki
2021-07-31  5:57   ` [PATCH bpf-next v6 2/7] net: core: Add support for XDP redirection to slave device Jussi Maki
2021-07-31  5:57   ` [PATCH bpf-next v6 3/7] net: bonding: Add XDP support to the bonding driver Jussi Maki
2021-07-31  5:57   ` [PATCH bpf-next v6 4/7] devmap: Exclude XDP broadcast to master device Jussi Maki
2021-07-31  5:57   ` [PATCH bpf-next v6 5/7] net: core: Allow netdev_lower_get_next_private_rcu in bh context Jussi Maki
2021-07-31  5:57   ` [PATCH bpf-next v6 6/7] selftests/bpf: Fix xdp_tx.c prog section name Jussi Maki
2021-08-06 22:53     ` Andrii Nakryiko
2021-07-31  5:57   ` [PATCH bpf-next v6 7/7] selftests/bpf: Add tests for XDP bonding Jussi Maki
2021-08-06 22:50     ` Andrii Nakryiko
2021-08-09 14:24       ` Jussi Maki
2021-08-09 21:41         ` Daniel Borkmann
