Netdev Archive on lore.kernel.org
From: Boris Pismenny <borisp@nvidia.com>
To: <dsahern@gmail.com>, <kuba@kernel.org>, <davem@davemloft.net>,
	<saeedm@nvidia.com>, <hch@lst.de>, <sagi@grimberg.me>, <axboe@fb.com>,
	<kbusch@kernel.org>, <viro@zeniv.linux.org.uk>, <edumazet@google.com>,
	<smalin@marvell.com>
Cc: <boris.pismenny@gmail.com>, <linux-nvme@lists.infradead.org>,
	<netdev@vger.kernel.org>, <benishay@nvidia.com>, <ogerlitz@nvidia.com>,
	<yorayz@nvidia.com>
Subject: [PATCH v5 net-next 23/36] net: Add to ulp_ddp support for fallback flow
Date: Thu, 22 Jul 2021 14:03:12 +0300	[thread overview]
Message-ID: <20210722110325.371-24-borisp@nvidia.com> (raw)
In-Reply-To: <20210722110325.371-1-borisp@nvidia.com>

From: Yoray Zack <yorayz@nvidia.com>

Add the ddp_ddgst_fallback() callback and use ulp_ddp_get_pdu_info() in
the ULP DDP fallback flow.

During DDP CRC Tx offload, the HW is responsible for calculating the
CRC, so the SW does not calculate it. If the HW changes for some
reason, the SW should fall back from the offload and calculate the CRC
itself. This condition is checked in ulp_ddp_validate_xmit_skb(), which
performs the fallback when needed.

Signed-off-by: Yoray Zack <yorayz@nvidia.com>
---
 include/net/ulp_ddp.h |  7 +++++
 net/core/ulp_ddp.c    | 69 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 76 insertions(+)

diff --git a/include/net/ulp_ddp.h b/include/net/ulp_ddp.h
index 8f48fc121c3a..40bfcfe94cef 100644
--- a/include/net/ulp_ddp.h
+++ b/include/net/ulp_ddp.h
@@ -77,6 +77,7 @@ struct ulp_ddp_io {
  * @hdr_len:	the size (in bytes) of the pdu header.
  * @hdr:	pdu header.
  * @req:	the ulp request for the original pdu.
+ * @ddgst:	pdu data digest.
  */
 struct ulp_ddp_pdu_info {
 	struct list_head	list;
@@ -86,6 +87,7 @@ struct ulp_ddp_pdu_info {
 	u32		hdr_len;
 	void		*hdr;
 	struct request	*req;
+	__le32		ddgst;
 };
 
 /* struct ulp_ddp_dev_ops - operations used by an upper layer protocol to configure ddp offload
@@ -129,6 +131,8 @@ struct ulp_ddp_ulp_ops {
 	bool (*resync_request)(struct sock *sk, u32 seq, u32 flags);
 	/* NIC driver informs the ulp that ddp teardown is done - used for async completions*/
 	void (*ddp_teardown_done)(void *ddp_ctx);
+	/* NIC requests the ulp to calculate the ddgst and store it in pdu_info->ddgst */
+	void (*ddp_ddgst_fallback)(struct ulp_ddp_pdu_info *pdu_info);
 };
 
 /**
@@ -182,4 +186,7 @@ int ulp_ddp_map_pdu_info(struct sock *sk, u32 start_seq, void *hdr,
 void ulp_ddp_close_pdu_info(struct sock *sk);
 bool ulp_ddp_need_map(struct sock *sk);
 struct ulp_ddp_pdu_info *ulp_ddp_get_pdu_info(struct sock *sk, u32 seq);
+struct sk_buff *ulp_ddp_validate_xmit_skb(struct sock *sk,
+					  struct net_device *dev,
+					  struct sk_buff *skb);
 #endif	//_ULP_DDP_H
diff --git a/net/core/ulp_ddp.c b/net/core/ulp_ddp.c
index 06ed4ad59e88..80366c7840a8 100644
--- a/net/core/ulp_ddp.c
+++ b/net/core/ulp_ddp.c
@@ -164,3 +164,72 @@ struct ulp_ddp_pdu_info *ulp_ddp_get_pdu_info(struct sock *sk, u32 seq)
 	return info;
 }
 EXPORT_SYMBOL(ulp_ddp_get_pdu_info);
+static void ulp_ddp_ddgst_recalc(const struct ulp_ddp_ulp_ops *ulp_ops,
+				 struct ulp_ddp_pdu_info *pdu_info)
+{
+	if (pdu_info->ddgst)
+		return;
+
+	ulp_ops->ddp_ddgst_fallback(pdu_info);
+}
+
+static struct sk_buff *ulp_ddp_fallback_skb(struct ulp_ddp_ctx *ctx,
+					    struct sk_buff *skb,
+					    struct sock *sk)
+{
+	const struct ulp_ddp_ulp_ops *ulp_ops = inet_csk(sk)->icsk_ulp_ddp_ops;
+	int datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+	struct ulp_ddp_pdu_info *pdu_info = NULL;
+	int ddgst_start, ddgst_offset, ddgst_len;
+	u32 seq = ntohl(tcp_hdr(skb)->seq);
+	u32 end_skb_seq = seq + datalen;
+	u32 first_seq = seq;
+
+	if (!(ulp_ops && ulp_ops->ddp_ddgst_fallback))
+		return skb;
+
+again:
+	/* check if we can't use the last pdu_info
+	 * Reasons we can't use it:
+	 * 1. first time and then pdu_info is NULL.
+	 * 2. seq doesn't map to this pdu_info (out of bounds).
+	 */
+	if (!pdu_info || !between(seq, pdu_info->start_seq, pdu_info->end_seq - 1)) {
+		pdu_info = ulp_ddp_get_pdu_info(sk, seq);
+		if (!pdu_info)
+			return skb;
+	}
+
+	ddgst_start = pdu_info->end_seq - ctx->ddgst_len;
+	//check if this skb contains ddgst field
+	if (between(ddgst_start, seq, end_skb_seq - 1) && pdu_info->data_len) {
+		ulp_ddp_ddgst_recalc(ulp_ops, pdu_info);
+		ddgst_offset = ddgst_start - first_seq + skb_headlen(skb);
+		ddgst_len = min_t(int, ctx->ddgst_len, end_skb_seq - ddgst_start);
+		skb_store_bits(skb, ddgst_offset, &pdu_info->ddgst, ddgst_len);
+	}
+
+	//check if there are more PDUs in this skb
+	if (between(pdu_info->end_seq, seq + 1, end_skb_seq - 1)) {
+		seq = pdu_info->end_seq;
+		goto again;
+	}
+
+	return skb;
+}
+
+struct sk_buff *ulp_ddp_validate_xmit_skb(struct sock *sk,
+					  struct net_device *dev,
+					  struct sk_buff *skb)
+{
+	struct ulp_ddp_ctx *ctx = ulp_ddp_get_ctx(sk);
+
+	if (!ctx)
+		return skb;
+
+	if (dev == ctx->netdev)
+		return skb;
+
+	return ulp_ddp_fallback_skb(ctx, skb, sk);
+}
+EXPORT_SYMBOL(ulp_ddp_validate_xmit_skb);
-- 
2.24.1
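[Editor's illustration, not part of the posted patch: a minimal sketch of how a ULP whose PDU payload sits in one linear buffer could back the new ddp_ddgst_fallback() callback with a plain software CRC32C (the NVMe/TCP data digest is a standard CRC32C). The my_ulp_pdu_payload()/my_ulp_pdu_payload_len() helpers are hypothetical stand-ins for however the ULP reaches its data; the real nvme-tcp fallback elsewhere in this series must handle non-linear request data instead.]

/* Illustrative sketch only -- assumes a linear payload buffer reachable
 * via the hypothetical my_ulp_pdu_payload()/my_ulp_pdu_payload_len()
 * helpers; scatterlist walking and error handling are omitted.
 */
#include <linux/types.h>
#include <linux/crc32c.h>
#include <asm/byteorder.h>
#include <net/ulp_ddp.h>

static void my_ulp_ddp_ddgst_fallback(struct ulp_ddp_pdu_info *pdu_info)
{
	const void *payload = my_ulp_pdu_payload(pdu_info);	/* hypothetical */
	u32 len = my_ulp_pdu_payload_len(pdu_info);		/* hypothetical */

	/* Standard CRC32C over the PDU data; ulp_ddp_fallback_skb() later
	 * copies this value into the skb at the digest offset via
	 * skb_store_bits().
	 */
	pdu_info->ddgst = cpu_to_le32(~crc32c(~0, payload, len));
}

static const struct ulp_ddp_ulp_ops my_ulp_ddp_ops = {
	/* .resync_request, .ddp_teardown_done, ... */
	.ddp_ddgst_fallback	= my_ulp_ddp_ddgst_fallback,
};

Note that the core caches the result in pdu_info->ddgst, and ulp_ddp_ddgst_recalc() skips PDUs whose digest is already non-zero, so the callback runs at most once per PDU even if the PDU is spread across several skbs.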
Thread overview: 62+ messages in thread

2021-07-22 11:02 [PATCH v5 net-next 00/36] nvme-tcp receive and tarnsmit offloads Boris Pismenny
2021-07-22 11:02 ` [PATCH v5 net-next 01/36] net: Introduce direct data placement tcp offload Boris Pismenny
2021-07-22 11:26   ` Eric Dumazet
2021-07-22 12:18   ` Boris Pismenny
2021-07-22 13:10   ` Eric Dumazet
2021-07-22 13:33   ` Boris Pismenny
2021-07-22 13:39   ` Eric Dumazet
2021-07-22 14:02   ` Boris Pismenny
2021-07-22 11:02 ` [PATCH v5 net-next 02/36] iov_iter: DDP copy to iter/pages Boris Pismenny
2021-07-22 13:31   ` Christoph Hellwig
2021-07-22 20:23   ` Boris Pismenny
2021-07-23  5:03   ` Christoph Hellwig
2021-07-23  5:21   ` Al Viro
2021-08-04 14:13   ` Or Gerlitz
2021-08-10 13:29   ` Or Gerlitz
2021-07-22 20:55   ` Al Viro
2021-07-22 11:02 ` [PATCH v5 net-next 03/36] net: skb copy(+hash) iterators for DDP offloads Boris Pismenny
2021-07-22 11:02 ` [PATCH v5 net-next 04/36] net/tls: expose get_netdev_for_sock Boris Pismenny
2021-07-23  6:06   ` Christoph Hellwig
2021-08-04 13:26   ` Or Gerlitz
     [not found]   ` <20210804072918.17ba9cff@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
2021-08-04 15:07   ` Or Gerlitz
2021-08-10 13:25   ` Or Gerlitz
2021-07-22 11:02 ` [PATCH v5 net-next 05/36] nvme-tcp: Add DDP offload control path Boris Pismenny
2021-07-22 11:02 ` [PATCH v5 net-next 06/36] nvme-tcp: Add DDP data-path Boris Pismenny
2021-07-22 11:02 ` [PATCH v5 net-next 07/36] nvme-tcp: RX DDGST offload Boris Pismenny
2021-07-22 11:02 ` [PATCH v5 net-next 08/36] nvme-tcp: Deal with netdevice DOWN events Boris Pismenny
2021-07-22 11:02 ` [PATCH v5 net-next 09/36] net/mlx5: Header file changes for nvme-tcp offload Boris Pismenny
2021-07-22 11:02 ` [PATCH v5 net-next 10/36] net/mlx5: Add 128B CQE for NVMEoTCP offload Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 11/36] net/mlx5e: TCP flow steering for nvme-tcp Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 12/36] net/mlx5e: NVMEoTCP offload initialization Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 13/36] net/mlx5e: KLM UMR helper macros Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 14/36] net/mlx5e: NVMEoTCP use KLM UMRs Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 15/36] net/mlx5e: NVMEoTCP queue init/teardown Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 16/36] net/mlx5e: NVMEoTCP async ddp invalidation Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 17/36] net/mlx5e: NVMEoTCP ddp setup and resync Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 18/36] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 19/36] net/mlx5e: NVMEoTCP statistics Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 20/36] Documentation: add ULP DDP offload documentation Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 21/36] net: drop ULP DDP HW offload feature if no CSUM offload feature Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 22/36] net: Add ulp_ddp_pdu_info struct Boris Pismenny
2021-07-23 19:42   ` Sagi Grimberg
2021-07-22 11:03 ` Boris Pismenny [this message]
2021-07-23  6:09   ` [PATCH v5 net-next 23/36] net: Add to ulp_ddp support for fallback flow Christoph Hellwig
2021-07-22 11:03 ` [PATCH v5 net-next 24/36] net: Add MSG_DDP_CRC flag Boris Pismenny
2021-07-22 14:23   ` Eric Dumazet
2021-07-22 11:03 ` [PATCH v5 net-next 25/36] nvme-tcp: TX DDGST offload Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 26/36] nvme-tcp: Mapping between Tx NVMEoTCP pdu and TCP sequence Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 27/36] mlx5e: make preparation in TLS code for NVMEoTCP CRC Tx offload Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 28/36] mlx5: Add sq state test bit for nvmeotcp Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 29/36] mlx5: Add support to NETIF_F_HW_TCP_DDP_CRC_TX feature Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 30/36] net/mlx5e: NVMEoTCP DDGST TX offload TIS Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 31/36] net/mlx5e: NVMEoTCP DDGST Tx offload queue init/teardown Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 32/36] net/mlx5e: NVMEoTCP DDGST TX BSF and PSV Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 33/36] net/mlx5e: NVMEoTCP DDGST TX Data path Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 34/36] net/mlx5e: NVMEoTCP DDGST TX handle OOO packets Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 35/36] net/mlx5e: NVMEoTCP DDGST TX offload optimization Boris Pismenny
2021-07-22 11:03 ` [PATCH v5 net-next 36/36] net/mlx5e: NVMEoTCP DDGST TX statistics Boris Pismenny
2021-07-23  5:56 ` [PATCH v5 net-next 00/36] nvme-tcp receive and tarnsmit offloads Christoph Hellwig
2021-07-23 19:58   ` Sagi Grimberg
2021-08-04 13:51   ` Or Gerlitz
2021-08-06 19:46   ` Sagi Grimberg
2021-08-10 13:37   ` Or Gerlitz