LKML Archive on lore.kernel.org
From: Rob Herring <firstname.lastname@example.org>
To: Rishabh Bhatnagar <email@example.com>
Cc: "moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE"
Trilok Soni <firstname.lastname@example.org>,
Kyle Yan <email@example.com>,
firstname.lastname@example.org, Evan Green <email@example.com>
Subject: Re: [PATCH v5 1/2] dt-bindings: Documentation for qcom, llcc
Date: Mon, 30 Apr 2018 09:33:25 -0500 [thread overview]
Message-ID: <CAL_JsqJEQU-BysQ7LmQRJ5ySPpUXYCMPAQ1JqeboGoQL-SyAqg@mail.gmail.com> (raw)
On Fri, Apr 27, 2018 at 5:57 PM, <firstname.lastname@example.org> wrote:
> On 2018-04-27 07:21, Rob Herring wrote:
>> On Mon, Apr 23, 2018 at 04:09:31PM -0700, Rishabh Bhatnagar wrote:
>>> Documentation for last level cache controller device tree bindings,
>>> client bindings usage examples.
>>> Signed-off-by: Channagoud Kadabi <email@example.com>
>>> Signed-off-by: Rishabh Bhatnagar <firstname.lastname@example.org>
>>> .../devicetree/bindings/arm/msm/qcom,llcc.txt | 60
>>> 1 file changed, 60 insertions(+)
>>> create mode 100644
>> My comments on v4 still apply.
> Hi Rob,
> Reposting our replies to your comments on v4:
> This is partially true: a number of SoCs would support this design,
> but client IDs are not expected to change, so ideally client drivers
> could hard-code these IDs.
> However, I have other concerns about moving the client IDs into the
> driver. The APIs as implemented today work as follows:
> #1. The client calls into the system cache driver to get a cache
> slice handle, with the usecase ID as input.
> #2. The system cache driver gets the phandle of the system cache
> instance from the client device to obtain the private data.
> #3. Based on the usecase ID, it performs a lookup in the private data
> to get the cache slice handle.
> #4. It returns the cache slice handle to the client.
> If we don't have the connection between the client & the system
> cache, then the private data needs to be declared as a static global
> in the system cache driver, which limits us to just one instance of
> the system cache block.
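The four quoted steps could be sketched in plain C roughly as below. All names here (llcc_slice_lookup, slice_table, the slice IDs and sizes) are hypothetical illustrations of the lookup in step #3, not the actual driver API:

```c
#include <stddef.h>

/* A cache slice handle as the driver might describe it. */
struct llcc_slice_desc {
	unsigned int slice_id; /* hardware slice ID */
	size_t slice_size;     /* slice size in KB */
};

/* Per-instance private data: the usecase-ID -> slice table whose
 * location (DT vs. driver) is what this thread is debating. The
 * entries are made-up examples. */
static const struct llcc_slice_desc slice_table[] = {
	{ .slice_id = 1, .slice_size = 3072 }, /* usecase 0 */
	{ .slice_id = 2, .slice_size = 512 },  /* usecase 1 */
};

/* Steps #1/#4: the client asks for a handle by usecase ID; the driver
 * validates the ID and returns a pointer into its private data. */
const struct llcc_slice_desc *llcc_slice_lookup(unsigned int uid)
{
	if (uid >= sizeof(slice_table) / sizeof(slice_table[0]))
		return NULL;
	return &slice_table[uid];
}
```

In the real driver, step #2 would first resolve the system cache instance's private data from the client's phandle; the static table here stands in for exactly the "static global" fallback the quoted mail warns about.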
How many instances do you have?
It is easier to put the data into the kernel and move it to DT later
than vice-versa. I don't think it is a good idea to do a custom
binding here and one that only addresses caches and nothing else in
the interconnect. So either we define an extensible and future-proof
binding or put the data into the kernel for now.
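The "put the data into the kernel" option could look roughly like the sketch below: a per-SoC table in the driver, selected by compatible string, in the spirit of how of_device_id match data is normally used. The struct, table contents, and llcc_get_cfg are all hypothetical:

```c
#include <stddef.h>
#include <string.h>

/* Per-SoC LLCC configuration kept in the driver rather than in DT.
 * The fields and the example entry are illustrative only. */
struct llcc_drv_data {
	const char *compatible;  /* SoC this table applies to */
	unsigned int num_slices; /* number of slices in the LLCC */
	size_t llcc_size;        /* total cache size in KB */
};

static const struct llcc_drv_data llcc_configs[] = {
	{ "qcom,sdm845-llcc", 32, 3072 }, /* made-up example entry */
};

/* Stand-in for match-data lookup: return the config for a given
 * compatible string, or NULL if the SoC is unknown. */
const struct llcc_drv_data *llcc_get_cfg(const char *compat)
{
	for (size_t i = 0;
	     i < sizeof(llcc_configs) / sizeof(llcc_configs[0]); i++)
		if (strcmp(llcc_configs[i].compatible, compat) == 0)
			return &llcc_configs[i];
	return NULL;
}
```

Moving the data to DT later would then only require changing where llcc_get_cfg reads from, without touching the client-facing API.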
Thread overview: 10+ messages
2018-04-23 23:09 [PATCH v5 0/2] SDM845 System Cache Driver Rishabh Bhatnagar
2018-04-23 23:09 ` [PATCH v5 1/2] dt-bindings: Documentation for qcom, llcc Rishabh Bhatnagar
2018-04-27 14:21 ` Rob Herring
2018-04-27 22:57 ` rishabhb
2018-04-30 14:33 ` Rob Herring [this message]
2018-05-01 0:37 ` rishabhb
2018-05-08 15:35 ` Rob Herring
2018-04-23 23:09 ` [PATCH v5 2/2] drivers: soc: Add LLCC driver Rishabh Bhatnagar
2018-04-24 3:25 ` Randy Dunlap
2018-04-26 18:32 ` Evan Green