Netdev Archive on lore.kernel.org
From: Jakub Kicinski <kuba@kernel.org>
To: David Thompson <davthompson@nvidia.com>
Cc: David Thompson <dthompson@mellanox.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"davem@davemloft.net" <davem@davemloft.net>,
	Jiri Pirko <jiri@mellanox.com>, Asmaa Mnebhi <Asmaa@mellanox.com>
Subject: Re: [PATCH net-next v2] Add Mellanox BlueField Gigabit Ethernet driver
Date: Thu, 20 Aug 2020 12:25:14 -0700
Message-ID: <20200820122514.5b552e42@kicinski-fedora-PC1C0HJN>
In-Reply-To: <MN2PR12MB2975DAA7292C27DEB0B518A8C75A0@MN2PR12MB2975.namprd12.prod.outlook.com>

On Thu, 20 Aug 2020 18:51:39 +0000 David Thompson wrote:
> > > +	for (i = 0; i < priv->rx_q_entries; i++) {
> > > +		/* Allocate a receive buffer for this RX WQE. The DMA
> > > +		 * form (dma_addr_t) of the receive buffer address is
> > > +		 * stored in the RX WQE array (via 'rx_wqe_ptr') where
> > > +		 * it is accessible by the GigE device. The VA form of
> > > +		 * the receive buffer is stored in 'rx_buf[]' array in
> > > +		 * the driver private storage for housekeeping.
> > > +		 */
> > > +		priv->rx_buf[i] = dma_alloc_coherent(priv->dev,
> > > +						     MLXBF_GIGE_DEFAULT_BUF_SZ,
> > > +						     &rx_buf_dma,
> > > +						     GFP_KERNEL);  
> > 
> > Do the buffers have to be in coherent memory? That's kinda strange.
> >   
> 
> Yes, the mlxbf_gige silicon block needs to be programmed with the
> buffer's physical address so that the silicon logic can DMA incoming
> packet data into the buffer.  The kernel API dma_alloc_coherent()
> meets the driver's requirements in that it returns a CPU-usable
> address as well as a bus/physical address (used by the silicon).

It's highly unusual; all drivers I know use the streaming DMA
interface. IDK what the performance implications of using coherent
mappings on your platforms are, but I'd prefer if you took the more
common approach.
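
Something along these lines is the usual pattern -- an untested
sketch reusing the names from your patch ('free_bufs' is a
hypothetical unwind label, and I'm assuming 'rx_wqe_ptr' walks the
WQE array; needs <linux/slab.h> and <linux/dma-mapping.h>):

	for (i = 0; i < priv->rx_q_entries; i++) {
		/* plain kernel memory for the CPU side */
		priv->rx_buf[i] = kzalloc(MLXBF_GIGE_DEFAULT_BUF_SZ,
					  GFP_KERNEL);
		if (!priv->rx_buf[i])
			goto free_bufs;

		/* streaming mapping provides the device-visible address */
		rx_buf_dma = dma_map_single(priv->dev, priv->rx_buf[i],
					    MLXBF_GIGE_DEFAULT_BUF_SZ,
					    DMA_FROM_DEVICE);
		if (dma_mapping_error(priv->dev, rx_buf_dma))
			goto free_bufs;

		/* hand the bus address to the HW via the RX WQE array */
		*rx_wqe_ptr++ = rx_buf_dma;
	}

On completion you dma_unmap_single() (or dma_sync_single_for_cpu())
before the CPU touches the packet data, which is exactly where
coherent mappings can cost you on non-coherent platforms.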

> > > +static void mlxbf_gige_get_ethtool_stats(struct net_device *netdev,
> > > +					 struct ethtool_stats *estats,
> > > +					 u64 *data)
> > > +{
> > > +	struct mlxbf_gige *priv = netdev_priv(netdev);
> > > +	unsigned long flags;
> > > +
> > > +	spin_lock_irqsave(&priv->lock, flags);  
> > 
> > Why do you take a lock around stats?
> 
> I wrote the logic with a lock so that it implements an atomic "snapshot"
> of the driver's statistics.  This is useful since the standard TX/RX stats
> are being incremented in packet completion logic triggered by the 
> NAPI framework.  Do you see a disadvantage to using a lock here?

The Linux APIs don't provide any "snapshot" guarantees, and you're
stalling the datapath to read stats.
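
The common lockless pattern is u64_stats_sync: the NAPI path wraps
its increments in u64_stats_update_begin()/end(), and readers simply
retry on a torn read instead of stalling the datapath. A rough
sketch, assuming a 'stats_sync' member and the counters are added to
struct mlxbf_gige (illustrative names; needs
<linux/u64_stats_sync.h>):

	/* writer side, in the RX/TX completion handlers */
	u64_stats_update_begin(&priv->stats_sync);
	priv->stats.rx_packets++;
	priv->stats.rx_bytes += len;
	u64_stats_update_end(&priv->stats_sync);

static void mlxbf_gige_get_ethtool_stats(struct net_device *netdev,
					 struct ethtool_stats *estats,
					 u64 *data)
{
	struct mlxbf_gige *priv = netdev_priv(netdev);
	unsigned int start;

	/* reader side: no lock, retry if a writer raced us */
	do {
		start = u64_stats_fetch_begin(&priv->stats_sync);
		data[0] = priv->stats.rx_packets;
		data[1] = priv->stats.rx_bytes;
		/* ... remaining counters ... */
	} while (u64_stats_fetch_retry(&priv->stats_sync, start));
}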

> > > +static const struct net_device_ops mlxbf_gige_netdev_ops = {
> > > +	.ndo_open		= mlxbf_gige_open,
> > > +	.ndo_stop		= mlxbf_gige_stop,
> > > +	.ndo_start_xmit		= mlxbf_gige_start_xmit,
> > > +	.ndo_set_mac_address	= eth_mac_addr,
> > > +	.ndo_validate_addr	= eth_validate_addr,
> > > +	.ndo_do_ioctl		= mlxbf_gige_do_ioctl,
> > > +	.ndo_set_rx_mode        = mlxbf_gige_set_rx_mode,  
> > 
> > You must report standard stats.
> 
> Are you referring to the three possible methods a driver may use
> to implement standard stats reporting:
> 
> From include/linux/netdevice.h -->
>  * void (*ndo_get_stats64)(struct net_device *dev,
>  *                         struct rtnl_link_stats64 *storage);
>  * struct net_device_stats* (*ndo_get_stats)(struct net_device *dev);
>  *      Called when a user wants to get the network device usage
>  *      statistics. Drivers must do one of the following:
>  *      1. Define @ndo_get_stats64 to fill in a zero-initialised
>  *         rtnl_link_stats64 structure passed by the caller.
>  *      2. Define @ndo_get_stats to update a net_device_stats structure
>  *         (which should normally be dev->stats) and return a pointer to
>  *         it. The structure may be changed asynchronously only if each
>  *         field is written atomically.
>  *      3. Update dev->stats asynchronously and atomically, and define
>  *         neither operation.
> 
> The mlxbf_gige driver implements #3 above: there is logic in the
> RX and TX completion handlers that increments the RX/TX packet
> and byte counts in the net_device->stats structure.  Is that
> sufficient to support standard stats?

You only update the basic stats. Please take a look at all the members
of rtnl_link_stats64; some of the HW stats will probably fit there, and
other HW errors need to be added to the cumulative rx/tx error stats.
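
E.g. a sketch of option 1 from the comment you quoted, folding HW
error counters into the standard fields ('rx_fifo_full' and
'rx_mac_errors' are hypothetical counters, use whatever your HW
actually exposes):

static void mlxbf_gige_get_stats64(struct net_device *netdev,
				   struct rtnl_link_stats64 *stats)
{
	struct mlxbf_gige *priv = netdev_priv(netdev);

	stats->rx_packets = priv->stats.rx_packets;
	stats->rx_bytes = priv->stats.rx_bytes;
	stats->tx_packets = priv->stats.tx_packets;
	stats->tx_bytes = priv->stats.tx_bytes;

	/* fold device-specific error counters into the cumulative ones */
	stats->rx_fifo_errors = priv->stats.rx_fifo_full;
	stats->rx_errors = stats->rx_fifo_errors +
			   priv->stats.rx_mac_errors;
}

and hook it up in the ops:

static const struct net_device_ops mlxbf_gige_netdev_ops = {
	...
	.ndo_get_stats64	= mlxbf_gige_get_stats64,
};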

