LKML Archive on lore.kernel.org
From: "Winiarska, Iwona" <iwona.winiarska@intel.com>
To: "zweiss@equinix.com" <zweiss@equinix.com>
Cc: "corbet@lwn.net" <corbet@lwn.net>,
	"jae.hyun.yoo@linux.intel.com" <jae.hyun.yoo@linux.intel.com>,
	"linux-hwmon@vger.kernel.org" <linux-hwmon@vger.kernel.org>,
	"Lutomirski, Andy" <luto@kernel.org>,
	"Luck, Tony" <tony.luck@intel.com>,
	"andrew@aj.id.au" <andrew@aj.id.au>,
	"mchehab@kernel.org" <mchehab@kernel.org>,
	"jdelvare@suse.com" <jdelvare@suse.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"linux-aspeed@lists.ozlabs.org" <linux-aspeed@lists.ozlabs.org>,
	"yazen.ghannam@amd.com" <yazen.ghannam@amd.com>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"linux@roeck-us.net" <linux@roeck-us.net>,
	"robh+dt@kernel.org" <robh+dt@kernel.org>,
	"openbmc@lists.ozlabs.org" <openbmc@lists.ozlabs.org>,
	"bp@alien8.de" <bp@alien8.de>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>,
	"pierre-louis.bossart@linux.intel.com" 
	<pierre-louis.bossart@linux.intel.com>,
	"andriy.shevchenko@linux.intel.com" 
	<andriy.shevchenko@linux.intel.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
Subject: Re: [PATCH 08/14] peci: Add device detection
Date: Fri, 30 Jul 2021 20:10:22 +0000	[thread overview]
Message-ID: <e09a84385be48304d01584c6d629c0f56caad8a1.camel@intel.com> (raw)
In-Reply-To: <20210729205013.GW8018@packtop>

On Thu, 2021-07-29 at 20:50 +0000, Zev Weiss wrote:
> On Thu, Jul 29, 2021 at 01:55:19PM CDT, Winiarska, Iwona wrote:
> > On Tue, 2021-07-27 at 17:49 +0000, Zev Weiss wrote:
> > > On Mon, Jul 12, 2021 at 05:04:41PM CDT, Iwona Winiarska wrote:
> > > > 
> > > > +
> > > > +static int peci_detect(struct peci_controller *controller, u8 addr)
> > > > +{
> > > > +       struct peci_request *req;
> > > > +       int ret;
> > > > +
> > > > +       req = peci_request_alloc(NULL, 0, 0);
> > > > +       if (!req)
> > > > +               return -ENOMEM;
> > > > +
> > > 
> > > Might be worth a brief comment here noting that an empty request happens
> > > to be the format of a PECI ping command (and/or change the name of the
> > > function to peci_ping()).
> > 
> > I'll add a comment:
> > "We are using PECI Ping command to detect presence of PECI devices."
> > 
> 
> Well, what I was more aiming to get at was that to someone not
> intimately familiar with the PECI protocol it's not immediately obvious
> from the code that it in fact implements a ping (there's no 'msg->cmd =
> PECI_CMD_PING' or anything), so I was hoping for something that would
> just make that slightly more explicit.

/*
 * PECI Ping is a command encoded by tx_len = 0, rx_len = 0.
 * We expect a correct Write FCS if the device at the target address is
 * able to respond.
 */

I would like to avoid adding a peci_ping wrapper that doesn't operate on a
peci_device - note that at this point we don't have a struct peci_device yet;
we're using ping to figure out whether we should create one.
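To make the "empty request is a ping" point concrete, here is a minimal
userspace sketch of the detection idea (purely illustrative: xfer_ok() is a
hypothetical stand-in for the controller's transfer op, and the "only 0x30
responds" behavior is an assumption for the example, not the driver's API):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical stand-in for the controller xfer op. In the real driver,
 * presence is inferred from whether the target acks the empty request
 * with a correct Write FCS. Here we pretend only address 0x30 responds.
 */
bool xfer_ok(uint8_t addr)
{
	return addr == 0x30;
}

/* A ping is simply a transfer with no command bytes: tx_len = rx_len = 0. */
int peci_detect_sketch(uint8_t addr)
{
	return xfer_ok(addr) ? 0 : -1;	/* -ENODEV in the kernel */
}
```

The point is that nothing in the request itself says "ping"; the zero-length
buffers are the encoding.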

> > > > +
> > > > +/**
> > > > + * peci_request_alloc() - allocate &struct peci_request with buffers with given lengths
> > > > + * @device: PECI device to which request is going to be sent
> > > > + * @tx_len: requested TX buffer length
> > > > + * @rx_len: requested RX buffer length
> > > > + *
> > > > + * Return: A pointer to a newly allocated &struct peci_request on success or NULL otherwise.
> > > > + */
> > > > +struct peci_request *peci_request_alloc(struct peci_device *device, u8 tx_len, u8 rx_len)
> > > > +{
> > > > +       struct peci_request *req;
> > > > +       u8 *tx_buf, *rx_buf;
> > > > +
> > > > +       req = kzalloc(sizeof(*req), GFP_KERNEL);
> > > > +       if (!req)
> > > > +               return NULL;
> > > > +
> > > > +       req->device = device;
> > > > +
> > > > +       /*
> > > > +        * PECI controllers that we are using now don't support DMA, this
> > > > +        * should be converted to DMA API once support for controllers that
> > > > +        * do allow it is added to avoid an extra copy.
> > > > +        */
> > > > +       if (tx_len) {
> > > > +               tx_buf = kzalloc(tx_len, GFP_KERNEL);
> > > > +               if (!tx_buf)
> > > > +                       goto err_free_req;
> > > > +
> > > > +               req->tx.buf = tx_buf;
> > > > +               req->tx.len = tx_len;
> > > > +       }
> > > > +
> > > > +       if (rx_len) {
> > > > +               rx_buf = kzalloc(rx_len, GFP_KERNEL);
> > > > +               if (!rx_buf)
> > > > +                       goto err_free_tx;
> > > > +
> > > > +               req->rx.buf = rx_buf;
> > > > +               req->rx.len = rx_len;
> > > > +       }
> > > > +
> > > 
> > > As long as we're punting on DMA support, could we do the whole thing in
> > > a single allocation instead of three?  It'd add some pointer arithmetic,
> > > but would also simplify the error-handling/deallocation paths a bit.
> > > 
> > > Or, given that the one controller we're currently supporting has a
> > > hardware limit of 32 bytes per transfer anyway, maybe just inline
> > > fixed-size rx/tx buffers into struct peci_request and have callers keep
> > > them on the stack instead of kmalloc()-ing them?
> > 
> > I disagree on error handling (it's not complicated) - however, one argument
> > for doing a single alloc (or moving the buffers as fixed-size arrays inside
> > struct peci_request) is that a single kzalloc is going to be faster than
> > three. But I don't expect it to show up on any perf profiles for now (since
> > the peci-wire interface is not a speed demon).
> > 
> > I wanted to avoid defining max size for TX and RX in peci-core.
> > Do you have a strong opinion against multiple alloc? If yes, I can go with
> > fixed-size arrays inside struct peci_request.
> > 
> 
> As is it's certainly not terribly complicated in an absolute sense, but
> comparatively speaking the cleanup path for a single allocation is still
> simpler, no?
> 
> Making it more efficient would definitely be a nice benefit too (perhaps
> a more significant one) -- in a typical deployment I'd guess this code
> path will see roughly socket_count + total_core_count executions per
> second?  On a big multi-socket system that could end up being a
> reasonably large number (>100), so while it may not end up as a major
> hot spot in a system-wide profile, it seems like it might be worth
> having it do 1/3 as many allocations if it's reasonably easy to do.
> (And while I don't think the kernel is generally at fault for this, from
> what I've seen of OpenBMC as a whole I think it might benefit from a bit
> more overall frugality with CPU cycles.)
> 
> As for a fixed max request size and inlined buffers, I definitely
> understand not wanting to put a cap on that in the generic PECI core --
> and actually, looking at the peci-npcm code from previous iterations of
> the PECI patchset, it looks like the Nuvoton hardware has significantly
> larger size limits (127 bytes if I'm reading things right) that might be
> a bit bulky for on-stack allocation.  So while that's appealing
> efficiency-wise and (IMO) aesthetically, perhaps it's not ultimately
> really viable.
> 
> Hmm, though (thinking out loud) I suppose we could also get down to a
> zero-allocation common case by having the driver hold on to a request
> struct and reuse it across transfers, given that they're all serialized
> by a mutex anyway?

Even in the "zero-allocation" case we still need some memory to copy the
necessary data out of the "request area" (which would now be "global", i.e.
per-controller).

After more consideration, I think this doesn't have to rely on controller
capabilities; we can just define a max value based on the commands we're using
and use that with a single alloc (with rx and tx as fixed-size arrays).
I'll change it in v2.
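Roughly, the v2 layout would look like the sketch below (a userspace
approximation: the name PECI_REQUEST_MAX_BUF_SIZE and the value 32 are
assumptions for illustration; the kernel version would use kzalloc()/kfree()
rather than calloc()/free()):

```c
#include <stdlib.h>
#include <stdint.h>

/* Assumed max, sized for the commands in use (name and value hypothetical). */
#define PECI_REQUEST_MAX_BUF_SIZE 32

struct peci_request {
	void *device;	/* stand-in for struct peci_device * */
	struct {
		uint8_t buf[PECI_REQUEST_MAX_BUF_SIZE];
		uint8_t len;
	} rx, tx;
};

/* One allocation now covers the request and both buffers. */
struct peci_request *peci_request_alloc(void *device, uint8_t tx_len, uint8_t rx_len)
{
	struct peci_request *req;

	if (tx_len > PECI_REQUEST_MAX_BUF_SIZE || rx_len > PECI_REQUEST_MAX_BUF_SIZE)
		return NULL;

	req = calloc(1, sizeof(*req));	/* kzalloc() in the kernel */
	if (!req)
		return NULL;

	req->device = device;
	req->tx.len = tx_len;
	req->rx.len = rx_len;
	return req;
}
```

Note that tx_len = 0, rx_len = 0 (the ping encoding) still works naturally,
and the error path collapses to a single free.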

Thank you
-Iwona

