LKML Archive
From: Ulf Hansson <>
To: Eugeniy Paltsev <>
Cc: "" <>,
	"" <>,
	"" <>,
	"" <>,
	"" <>,
	"" <>,
	"" <>,
Subject: Re: [RFC 0/2] dw_mmc: add multislot support
Date: Mon, 23 Apr 2018 08:47:09 +0200	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On 20 April 2018 at 17:53, Eugeniy Paltsev <> wrote:
> Hi Ulf,
> On Fri, 2018-04-20 at 09:35 +0200, Ulf Hansson wrote:
>> [...]
>> >
>> > 2. Add missing stuff to support multislot mode in DesignWare MMC driver.
>> >  * Add missing slot switch to __dw_mci_start_request() function.
>> >  * Refactor set_ios function:
>> >    a) Calculate a common clock which is
>> >       suitable for all slots instead of directly using the clock
>> >       value provided by the mmc core. We calculate the common
>> >       clock as the minimum among the clocks of all used slots.
>> >       This clock is calculated in the dw_mci_calc_common_clock()
>> >       function, which is called from set_ios().
>> >    b) Disable clock only if no other slots are ON.
>> >    c) Setup clock directly in set_ios() only if no other slots
>> >       are ON. Otherwise adjust clock in __dw_mci_start_request()
>> >       function before slot switch.
>> >    d) Move timings and bus_width setup to separate functions.
>> >  * Use timing field in each slot structure instead of common field in
>> >    host structure.
>> >  * Add locks to serialize access to registers.
>> Sorry, but this is a hack to *try* to make multi-slot work and this
>> isn't sufficient. There were good reasons to why the earlier
>> non-working multi slot support was removed from dw_mmc.
> The previous multi-slot implementation was removed because nobody used
> it and nobody tested it. There are lots of mistakes in the previous
> implementation which are not related to request serialization, like the
> lack of a slot switch, the lack of adding the slot id to CIU commands, etc.
> So obviously it was never tested or used on real multi-slot hardware.
>> Let me elaborate a bit for your understanding. The core uses a host
>> lock (mmc_claim|release_host()) to serialize operations and commands,
>> so as to conform to the SD/SDIO/(e)MMC specs. The above changes give
>> no guarantees for this. To make that work, we would need an "mmc bus
>> lock" to be managed by the core.
> In current implementation data transfers and commands to different
> hosts (slots) are serialized internally in the dw_mmc driver. We have
> request queue and when .request() is called we add new request to the
> queue. We take new request from the queue only if the previous one
> has already finished.

That isn't sufficient. The core expects all calls to *any* of the host
ops to be serialized for one host. It does so to conform to the specs.

For example it may call:

> So although hosts (slots) have separate locks (mmc_claim|release_host())
> the requests to different slots are serialized by driver.
> Isn't that enough?

Sorry, but no.

> I'm not very familiar with SD/SDIO/(e)MMC specs so my assumptions might be wrong
> in that case please correct me.

Well, that kind of explains your simplistic approach.

I would suggest that you study the specs and the behavior of the mmc
core a bit more carefully; that should give you a better understanding
of the problems.

>> However, inventing a "mmc bus lock" would lead to other problems
>> related to I/O scheduling for upper layers - it simply breaks. For
>> example, I/O requests for one card/slot can then starve I/O requests
>> reaching another card/slot.
> Nevertheless, we have to deal somehow with existing hardware which
> has a multislot dw mmc controller with both slots in use...
> This patch at least shouldn't break anything for current users (which
> use it in single-slot mode).
> Moreover, we tested this dual-slot implementation and haven't caught
> any problems (probably yet), except for a bus performance decrease in
> dual-slot mode (which is quite expected).

Honestly, I don't think the effort of implementing this is worth it!

Even if we were able to solve the problem from an mmc subsystem
point of view, we would still have the I/O scheduling problem to
address. To solve that, we would need to be able to configure the
upper block layer code to run one scheduling instance over multiple
block devices.

Kind regards


Thread overview: 11+ messages
     [not found] <>
2018-04-17 12:11 ` Eugeniy Paltsev
2018-04-17 12:11   ` [RFC 1/2] dw_mmc: revert removal " Eugeniy Paltsev
2018-04-17 12:11   ` [RFC 2/2] dw_mmc: add " Eugeniy Paltsev
2018-04-20  7:35   ` [RFC 0/2] " Ulf Hansson
2018-04-20  7:42     ` Alexey Brodkin
2018-04-20  8:56       ` Ulf Hansson
2018-04-20 15:53     ` Eugeniy Paltsev
2018-04-23  6:47       ` Ulf Hansson [this message]
2018-04-25 13:53         ` Eugeniy Paltsev
2018-04-26  6:28           ` Ulf Hansson
2018-04-26 10:30   ` Jaehoon Chung
