LKML Archive on
From: Cristian Marussi <>
To: Floris Westermann <>
Subject: Re: [PATCH v6 00/17] Introduce SCMI transport based on VirtIO
Date: Wed, 11 Aug 2021 16:26:41 +0100	[thread overview]
Message-ID: <20210811152641.GX6592@e120937-lin> (raw)
In-Reply-To: <>

On Wed, Aug 11, 2021 at 09:31:21AM +0000, Floris Westermann wrote:
> Hi Cristian,

Hi Floris,

> I am currently working on an interface for VMs to communicate their
> performance requirements to the hosts by passing through cpu frequency
> adjustments.

So something like intercepting SCMI requests from the VMs in the
hypervisor and acting on the VMs' underlying hardware accordingly?
Where is the SCMI server meant to live?

> Your patch looks very interesting but I have some questions:

Happy to hear that. A new V7 (with minor cleanups), which is hopefully
being pulled these days, is at:
> On Mon, Jul 12, 2021 at 03:18:16PM +0100, Cristian Marussi wrote:
> >
> > The series has been tested using an emulated fake SCMI device and also a
> > proper SCP-fw stack running through QEMU vhost-users, with the SCMI stack
> > compiled, in both cases, as builtin and as a loadable module, running tests
> > against mocked SCMI Sensors using HWMON and IIO interfaces to check the
> > functionality of notifications and sync/async commands.
> >
> > Virtio-scmi support has been exercised in the following testing scenario
> > on a JUNO board:
> >
> >  - normal sync/async command transfers
> >  - notifications
> >  - concurrent delivery of correlated response and delayed responses
> >  - out-of-order delivery of delayed responses before related responses
> >  - unexpected delayed response delivery for sync commands
> >  - late delivery of timed-out responses and delayed responses
> >
> > Some basic regression testing against mailbox transport has been performed
> > for commands and notifications too.
> >
> > No sensible overhead in total handling time of commands and notifications
> > has been observed, even though this series do indeed add a considerable
> > amount of code to execute on TX path.
> > More test and measurements could be needed in these regards.
> >
> Can you share any data and benchmarks using you fake SCMI device.
> Also, could you provide the emulated device code so that the results can
> be reproduced.

Not really, because the testing based on the fake SCMI VirtIO device was
purely functional, just to exercise some rare limit conditions not
easily reproducible with a regular SCMI stack. I've made no benchmarks
using the fake emulated SCMI virtio device because, while it mimics
VirtIO transfers, there is not even a real host/guest boundary in my
fake emulation. Moreover, it's a hacked driver plus a userspace blob,
not really in a state to be shared :P

While developing this series I needed some way to let the Kernel
SCMI-agent in the guest "speak" some basic SCMI commands and notifications
to an SCMI server platform sitting somewhere across the new VirtIO SCMI
transport. This basically means that it's not enough to create an SCMI
VirtIO device reachable from the VirtIO layer: the SCMI stack itself
(or part of it) must live behind such a device somehow/somewhere so
that the agent can receive meaningful replies.

One proper way to do that is to use some QEMU/SCP-fw vhost-users support
cooked up by Linaro (not upstream for now), so as to basically run a
full proper SCP SCMI-platform fw stack in host userspace and let it speak
to a guest through the new scmi virtio transport and the vhost-users magic.
The drawback of this kind of approach is that it makes it hard to test
limit conditions like stale or out-of-order SCMI replies, because to do
so you have to patch the official SCP/SCMI stack to behave badly and out
of spec, which is not something it is designed to do (and also the
vhost-users QEMU/SCP-fw solution only became available later during
development). Hence the emulation hack for testing rare limit conditions.

Now, the emulation hack, besides being really ugly, is clearly a totally
fake VirtIO environment (there's not even a real guest...) and, as said,
it just served the need to exercise the SCMI virtio transport code enough
to test anomalous and badly-behaving SCMI command flows and notifications;
as such it makes no sense to use it as a performance testbed.

In fact, what I was really meaning (poorly) while saying:

> > No sensible overhead in total handling time of commands and notifications
> > has been observed, even though this series do indeed add a considerable
> > amount of code to execute on TX path.

is that I have NOT seen any noticeable overhead/slowdown in the context of
OTHER real SCMI transports (like mailboxes). The virtio series contains
a number of preliminary SCMI common core changes unrelated to virtio
(monotonic tokens, handling of concurrent and out-of-order replies) that,
even though they ease the development of SCMI virtio, are really needed
and used by every other existing SCMI transport, so my fear was to
introduce some common slowdown in the core. In those regards only, I said
that I have not seen (looking at command traces) any slowdown with these
additional core changes, even though more code is clearly now run in the
TX path. (Contention can indeed only happen under very rare limit
conditions.)
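As an aside, the "monotonically increasing tokens" idea mentioned above
can be sketched in a few lines. This is just an illustration in Python of
the allocation policy, not the kernel code; the class name and the tiny
token-space size are made up for the example (the real transport uses a
much larger sequence space):

```python
# Sketch of monotonic token allocation: instead of always reusing the
# lowest free sequence number, pick the next one after the last
# allocated, skipping any still in flight. A late/stale reply carrying
# an old token is then far less likely to match a fresh transfer.

MAX_TOKENS = 10  # tiny on purpose; the real sequence space is larger


class TokenAllocator:
    def __init__(self):
        self.next_token = 0
        self.in_flight = set()

    def alloc(self):
        # Walk at most once around the token space looking for a free slot.
        for _ in range(MAX_TOKENS):
            tok = self.next_token
            self.next_token = (self.next_token + 1) % MAX_TOKENS
            if tok not in self.in_flight:
                self.in_flight.add(tok)
                return tok
        raise RuntimeError("no free tokens")

    def release(self, tok):
        self.in_flight.discard(tok)
```

With this policy, releasing token 0 and allocating again hands out the
next unused token (3) rather than immediately recycling 0, which is the
point of the scheme.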

Having said that (sorry for the flood of words), what I can give you are
a few traces (probably not statistically significant) showing the typical
round-trip time for some plain SCMI sensor command requests on an idle
system (JUNO).

In both cases the sensors being read are mocked, so the time is purely
related to SCMI core stack and virtio exchanges (i.e. there's no delay
introduced by reading real hw sensors)

-> Using a proper SCP-fw/QEMU vhost-users stack:

root@deb-guest:~# cat /sys/class/hwmon/hwmon0/temp1_input 
             cat-195     [000] ....  7044.614295: scmi_xfer_begin: transfer_id=27 msg_id=6 protocol_id=21 seq=27 poll=0
          <idle>-0       [000] d.h3  7044.615342: scmi_rx_done: transfer_id=27 msg_id=6 protocol_id=21 seq=27 msg_type=0
             cat-195     [000] ....  7044.615420: scmi_xfer_end: transfer_id=27 msg_id=6 protocol_id=21 seq=27 status=0

root@deb-guest:~# cat /sys/class/hwmon/hwmon0/temp1_input 
             cat-196     [000] ....  7049.200349: scmi_xfer_begin: transfer_id=28 msg_id=6 protocol_id=21 seq=28 poll=0
          <idle>-0       [000] d.h3  7049.202053: scmi_rx_done: transfer_id=28 msg_id=6 protocol_id=21 seq=28 msg_type=0
             cat-196     [000] ....  7049.202152: scmi_xfer_end: transfer_id=28 msg_id=6 protocol_id=21 seq=28 status=0

root@deb-guest:~# cat /sys/class/hwmon/hwmon0/temp1_input 
             cat-197     [000] ....  7053.699713: scmi_xfer_begin: transfer_id=29 msg_id=6 protocol_id=21 seq=29 poll=0
          <idle>-0       [000] d.H3  7053.700366: scmi_rx_done: transfer_id=29 msg_id=6 protocol_id=21 seq=29 msg_type=0
             cat-197     [000] ....  7053.700468: scmi_xfer_end: transfer_id=29 msg_id=6 protocol_id=21 seq=29 status=0

root@deb-guest:~# cat /sys/class/hwmon/hwmon0/temp1_input 
             cat-198     [001] ....  7058.944442: scmi_xfer_begin: transfer_id=30 msg_id=6 protocol_id=21 seq=30 poll=0
             cat-173     [000] d.h2  7058.944959: scmi_rx_done: transfer_id=30 msg_id=6 protocol_id=21 seq=30 msg_type=0
             cat-198     [001] ....  7058.945500: scmi_xfer_end: transfer_id=30 msg_id=6 protocol_id=21 seq=30 status=0

root@deb-guest:~# cat /sys/class/hwmon/hwmon0/temp1_input 
             cat-199     [000] ....  7064.598797: scmi_xfer_begin: transfer_id=31 msg_id=6 protocol_id=21 seq=31 poll=0
          <idle>-0       [000] d.h3  7064.599710: scmi_rx_done: transfer_id=31 msg_id=6 protocol_id=21 seq=31 msg_type=0
             cat-199     [000] ....  7064.599787: scmi_xfer_end: transfer_id=31 msg_id=6 protocol_id=21 seq=31 status=0

-> Using the fake hack SCMI device that relays packets to userspace:

             cat-1306    [000] ....  7614.373161: scmi_xfer_begin: transfer_id=78 msg_id=6 protocol_id=21 seq=78 poll=0
 scmi_sniffer_ng-342     [000] d.h2  7614.373699: scmi_rx_done: transfer_id=78 msg_id=6 protocol_id=21 seq=78 msg_type=0
             cat-1306    [000] ....  7614.377653: scmi_xfer_end: transfer_id=78 msg_id=6 protocol_id=21 seq=78 status=0

             cat-1308    [004] ....  7626.677176: scmi_xfer_begin: transfer_id=79 msg_id=6 protocol_id=21 seq=79 poll=0
 scmi_sniffer_ng-342     [000] d.h2  7626.677653: scmi_rx_done: transfer_id=79 msg_id=6 protocol_id=21 seq=79 msg_type=0
             cat-1308    [004] ....  7626.677705: scmi_xfer_end: transfer_id=79 msg_id=6 protocol_id=21 seq=79 status=0

             cat-1309    [004] ....  7631.249412: scmi_xfer_begin: transfer_id=80 msg_id=6 protocol_id=21 seq=80 poll=0
 scmi_sniffer_ng-342     [000] d.h2  7631.250182: scmi_rx_done: transfer_id=80 msg_id=6 protocol_id=21 seq=80 msg_type=0
             cat-1309    [004] ....  7631.250237: scmi_xfer_end: transfer_id=80 msg_id=6 protocol_id=21 seq=80 status=0

             cat-1312    [004] ....  7642.210034: scmi_xfer_begin: transfer_id=81 msg_id=6 protocol_id=21 seq=81 poll=0
 scmi_sniffer_ng-342     [000] d.h2  7642.210514: scmi_rx_done: transfer_id=81 msg_id=6 protocol_id=21 seq=81 msg_type=0
             cat-1312    [004] ....  7642.210567: scmi_xfer_end: transfer_id=81 msg_id=6 protocol_id=21 seq=81 status=0

             cat-1314    [003] ....  7645.810775: scmi_xfer_begin: transfer_id=82 msg_id=6 protocol_id=21 seq=82 poll=0
 scmi_sniffer_ng-342     [000] d.h2  7645.811255: scmi_rx_done: transfer_id=82 msg_id=6 protocol_id=21 seq=82 msg_type=0
             cat-1314    [003] ....  7645.811307: scmi_xfer_end: transfer_id=82 msg_id=6 protocol_id=21 seq=82 status=0

In both cases SCMI requests are effectively relayed to userspace, so
that's probably the reason the timings are similar (despite the hackish
internals of the latter solution).
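If it helps, the round-trip times can be pulled out of ftrace output like
the above with a trivial script. This is a hypothetical helper, not part
of the series; the regex just matches the trace line format shown above:

```python
import re

# Pair scmi_xfer_begin/scmi_xfer_end ftrace lines by transfer_id and
# compute the command round-trip time (in seconds) for each transfer.
LINE_RE = re.compile(
    r'\s(\d+\.\d+): (scmi_xfer_begin|scmi_xfer_end): transfer_id=(\d+)')


def round_trip_times(trace_lines):
    begin = {}   # transfer_id -> begin timestamp
    rtts = {}    # transfer_id -> round-trip time
    for line in trace_lines:
        m = LINE_RE.search(line)
        if not m:
            continue  # skip scmi_rx_done and unrelated lines
        ts, event, xfer_id = float(m.group(1)), m.group(2), int(m.group(3))
        if event == 'scmi_xfer_begin':
            begin[xfer_id] = ts
        elif xfer_id in begin:
            rtts[xfer_id] = ts - begin.pop(xfer_id)
    return rtts
```

For transfer_id=27 in the first trace above this yields about 1.1 ms
(7044.615420 - 7044.614295), in line with the other vhost-users samples.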

Not sure if all the above madness helped you at all :D


