LKML Archive on
From: Iuliana Prodan <>
To: Herbert Xu <>,
	Baolin Wang <>,
	Ard Biesheuvel <>,
	Corentin Labbe <>,
	Horia Geanta <>,
	Maxime Coquelin <>,
	Alexandre Torgue <>,
	Maxime Ripard <>
Cc: Aymen Sghaier <>,
	"David S. Miller" <>,
	Silvano Di Ninno <>,
	Franck Lenormand <>,
	linux-imx <>,
	Iuliana Prodan <>
Subject: [PATCH v4 0/2] crypto: engine - support for parallel and batch requests
Date: Mon,  9 Mar 2020 00:51:31 +0200	[thread overview]
Message-ID: <> (raw)

Added support for executing multiple requests, dependent or independent,
in the crypto engine. This is based on the return value of
do_one_request(), which is expected to be:
> 0: the hardware still has space in its queue. A driver can implement
do_one_request() to return the number of free entries in the
hardware queue;
0: the request was accepted by the hardware with success, but the
hardware either does not support multiple requests or has no space
left in its queue.
This keeps the backward compatibility of crypto-engine.
< 0: error.

If the hardware supports batch requests, crypto-engine can handle this
use case through the do_batch_requests callback.

Since these new features cannot be supported by all hardware,
the crypto-engine framework remains backward compatible:
- when crypto-engine is initialized with the crypto_engine_alloc_init
function, the new callback is NULL and, as long as do_one_request()
returns 0, crypto-engine works as before these changes;
- to support multiple requests in parallel, do_one_request()
needs to be updated to return > 0.
In crypto_pump_requests(), if do_one_request() returns > 0,
a new request is sent to the hardware, until there is no space
left and do_one_request() returns 0;
- to support batch requests, the do_batch_requests callback must be
implemented in the driver, to execute a batch of requests. Linking the
requests together is expected to be done in the driver, in
do_batch_requests().

Changes since V3:
- removed the can_enqueue_hardware callback and added a start-stop
mechanism based on the return value of do_one_request().

Changes since V2:
- re-added cur_req in crypto-engine, to keep the exact behavior as before
these changes if can_enqueue_more is not implemented: send requests
to hardware _one-by-one_ on crypto_pump_requests, complete each one
on crypto_finalize_request, and so on.
- do_batch_requests is available only with can_enqueue_more.

Changes since V1:
- changed the name of the can_enqueue_hardware callback to can_enqueue_more,
and the argument of this callback to the crypto_engine structure (for cases
when more than one crypto-engine is used).
- added a new patch with support for batch requests.

Changes since V0 (RFC):
- removed max_no_req and no_req, as the number of requests that can be
processed in parallel;
- added a new callback, can_enqueue_more, to check whether the hardware
can process a new request.

Iuliana Prodan (2):
  crypto: engine - support for parallel requests
  crypto: engine - support for batch requests

 crypto/crypto_engine.c  | 150 ++++++++++++++++++++++++++++++++++++------------
 include/crypto/engine.h |  15 +++--
 2 files changed, 124 insertions(+), 41 deletions(-)



Thread overview: 13+ messages
2020-03-08 22:51 Iuliana Prodan [this message]
2020-03-08 22:51 ` [PATCH v4 1/2] crypto: engine - support for parallel requests Iuliana Prodan
2020-03-12  3:25   ` Herbert Xu
2020-03-12 11:05     ` Horia Geantă
2020-03-17  3:22       ` Herbert Xu
2020-03-12 12:45     ` Iuliana Prodan
2020-03-12 12:52       ` Iuliana Prodan
2020-03-17  3:29       ` Herbert Xu
2020-03-17 13:08         ` Iuliana Prodan
2020-03-27  4:44           ` Herbert Xu
2020-03-27 10:44             ` Iuliana Prodan
2020-04-03  6:19               ` Herbert Xu
2020-03-08 22:51 ` [PATCH v4 2/2] crypto: engine - support for batch requests Iuliana Prodan
