LKML Archive on lore.kernel.org
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
[not found] <1182373258.30574.30.camel@caritas-dev.intel.com>
@ 2007-06-20 15:09 ` Stefan Richter
2007-06-20 15:14 ` Stefan Richter
2007-06-21 9:38 ` Huang, Ying
2007-06-24 7:06 ` Greg KH
1 sibling, 2 replies; 14+ messages in thread
From: Stefan Richter @ 2007-06-20 15:09 UTC (permalink / raw)
To: Huang, Ying
Cc: Greg K-H, Cornelia Huck, Adrian Bunk, david@lang.hm,
David Miller, Duncan Sands, Phillip Susi, linux-kernel
Huang, Ying wrote:
> This is a new version of multithreaded probing patch, with more
> parallelism control added.
Thanks. (I'd like to try it out but will probably be busy with other
stuff during the next few weeks.)
...
> A field named "probe_queue_no" is added to "struct device", which
I'd call it probe_queue_number or maybe probe_queue_id. The term "no"
is ambiguous.
> indicates probing queue No. on which the probing of the device will be
> done. The subsystem can control the parallelism through this field.
Is the queue number kernel-global or per subsystem?
...
> + * schedule_probe - schedule a probing to be done later
> + * @probe_queue_no: probing queue No. on which the probing will be done
> + * @probe: probing infromation include probing function and parameter
^^^^^^^^^^^
typo: information
--
Stefan Richter
-=====-=-=== -==- =-=--
http://arcgraph.de/sr/
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-20 15:09 ` [PATCH] driver core: multithreaded probing - more parallelism control Stefan Richter
@ 2007-06-20 15:14 ` Stefan Richter
2007-06-21 9:38 ` Huang, Ying
1 sibling, 0 replies; 14+ messages in thread
From: Stefan Richter @ 2007-06-20 15:14 UTC (permalink / raw)
To: Huang, Ying
Cc: Greg K-H, Cornelia Huck, Adrian Bunk, david@lang.hm,
David Miller, Duncan Sands, Phillip Susi, linux-kernel
> Huang, Ying wrote:
...
>> + * @probe: probing infromation include probing function and parameter
> ^^^^^^^^^^^
> typo: information
Also, the meaning of the rest of the sentence is unclear.
--
Stefan Richter
-=====-=-=== -==- =-=--
http://arcgraph.de/sr/
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-21 9:38 ` Huang, Ying
@ 2007-06-21 8:49 ` Stefan Richter
2007-06-21 13:51 ` Huang, Ying
0 siblings, 1 reply; 14+ messages in thread
From: Stefan Richter @ 2007-06-21 8:49 UTC (permalink / raw)
To: Huang, Ying
Cc: Greg K-H, Cornelia Huck, Adrian Bunk, david, David Miller,
Duncan Sands, Phillip Susi, linux-kernel
Huang, Ying wrote:
>> Is the queue number kernel-global or per subsystem?
>
> The queue number is kernel-global.
Then there is an API required to allocate and deallocate exclusive queue
IDs. This feels strange as a mechanism for (de-)serialization, and
might introduce some bulk WRT code and data.
Really, I don't believe there is anything else required from subsystems'
point of view than
- the possibility to keep plain serial driver matching/probing,
- to allow unrestricted parallelism,
- to mark devices whose child devices shall be matched/probed
serially.
Should there be a subsystem which has more special demands on mixture of
parallelism and serialization, it can easily use private means to
serialize certain parts of driver probes, for example with the familiar
mechanism of mutexes.
Or if need be, such a subsystem can implement its own threading model.
The FireWire subsystem for example first fetches so-called configuration
ROMs from each node on a bus by means of asynchronous split
transactions. The ROMs are scanned for device properties and
capabilities, and then drivers are matched/probed. The new FireWire
subsystem currently uses workqueue jobs to read the ROMs. The old
FireWire subsystem uses one kernel thread per bus. Before the new
FireWire subsystem was announced, I planned to let the bus kthread spawn
node kthreads which (1) fetch and scan ROMs and (2) match and probe
drivers for each unit in a node.
If the old FireWire subsystem had a future, I would most certainly not
use your mechanism but implement what you described. I am not sure
about the new FireWire subsystem; there isn't much practical experience
with it yet.
--
Stefan Richter
-=====-=-=== -==- =-=-=
http://arcgraph.de/sr/
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-20 15:09 ` [PATCH] driver core: multithreaded probing - more parallelism control Stefan Richter
2007-06-20 15:14 ` Stefan Richter
@ 2007-06-21 9:38 ` Huang, Ying
2007-06-21 8:49 ` Stefan Richter
1 sibling, 1 reply; 14+ messages in thread
From: Huang, Ying @ 2007-06-21 9:38 UTC (permalink / raw)
To: Stefan Richter
Cc: Greg K-H, Cornelia Huck, Adrian Bunk, david, David Miller,
Duncan Sands, Phillip Susi, linux-kernel
On Wed, 2007-06-20 at 17:09 +0200, Stefan Richter wrote:
> I'd call it probe_queue_number or maybe probe_queue_id. The term "no"
> is ambiguous.
Yes, I think probe_queue_id is better.
> Is the queue number kernel-global or per subsystem?
The queue number is kernel-global. I think this is easy to implement,
and the serialization demands between subsystems can be satisfied too.
> > + * @probe: probing infromation include probing function and parameter
> ^^^^^^^^^^^
> typo: information
Sorry, I will correct it in the next version.
Best Regards,
Huang Ying
* RE: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-21 8:49 ` Stefan Richter
@ 2007-06-21 13:51 ` Huang, Ying
2007-06-21 16:21 ` Stefan Richter
0 siblings, 1 reply; 14+ messages in thread
From: Huang, Ying @ 2007-06-21 13:51 UTC (permalink / raw)
To: Stefan Richter
Cc: Greg K-H, Cornelia Huck, Adrian Bunk, david, David Miller,
Duncan Sands, Phillip Susi, linux-kernel
>-----Original Message-----
>From: Stefan Richter [mailto:stefanr@s5r6.in-berlin.de]
>> The queue number is kernel-global.
>
>Then there is an API required to allocate and deallocate exclusive
>queue IDs. This feels strange as a mechanism for (de-)serialization,
>and might introduce some bulk WRT code and data.
I think the queue IDs of different subsystems need not be exclusive.
A subsystem can allocate queue IDs arbitrarily. If one queue ID is
shared between several subsystems, the corresponding probing will be
serialized. This will slow down the probing unnecessarily, but there
will be no race condition.
The benefit of the mechanism is that the maximum parallelism of probing
can be controlled.
>Really, I don't believe there is anything else required from
>subsystems' point of view than
> - the possibility to keep plain serial driver matching/probing,
> - to allow unrestricted parallelism,
> - to mark devices whose child devices shall be matched/probed
> serially.
Maybe.
>Should there be a subsystem which has more special demands on mixture
>of parallelism and serialization, it can easily use private means to
>serialize certain parts of driver probes, for example with the
>familiar mechanism of mutexes.
>
>Or if need be, such a subsystem can implement its own threading model.
>The FireWire subsystem for example first fetches so-called
>configuration ROMs from each node on a bus by means of asynchronous
>split transactions. The ROMs are scanned for device properties and
>capabilities, and then drivers are matched/probed. The new FireWire
>subsystem currently uses workqueue jobs to read the ROMs. The old
>FireWire subsystem uses one kernel thread per bus. Before the new
>FireWire subsystem was announced, I planned to let the bus kthread
>spawn node kthreads which (1) fetch and scan ROMs and (2) match and
>probe drivers for each unit in a node.
I know nothing about FireWire. If I say some nonsense, please just
ignore it.
Is it possible that a new unit is inserted while the probing kthread
is running? If so, the probing kthread may ignore the newly inserted
unit, so a kthread for the new unit should be created upon insertion,
and some synchronization mechanism must be provided. So I think
something like the probing queue may be better than raw kthreads for
this. And while with raw kthreads the bus scanning thread may need to
exist for as long as the system runs, with the probing queue the bus
scanning thread can be created and destroyed on demand too.
Thank you very much for your comment.
Best Regards,
Huang Ying
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-21 13:51 ` Huang, Ying
@ 2007-06-21 16:21 ` Stefan Richter
2007-06-22 9:52 ` Huang, Ying
0 siblings, 1 reply; 14+ messages in thread
From: Stefan Richter @ 2007-06-21 16:21 UTC (permalink / raw)
To: Huang, Ying
Cc: Greg K-H, Cornelia Huck, Adrian Bunk, david, David Miller,
Duncan Sands, Phillip Susi, linux-kernel
Huang, Ying wrote:
> I think the queue IDs of different subsystems need not be exclusive.
> A subsystem can allocate queue IDs arbitrarily. If one queue ID is
> shared between several subsystems, the corresponding probing will be
> serialized. This will slow down the probing unnecessarily, but there
> will be no race condition.
Parallelism between subsystems may be interesting during boot ==
"coldplug", /if/ the machine has time-consuming devices to probe on
/different/ types of buses. Of course some machines do the really
time-consuming stuff on only one type of bus. Granted, parallelism
betwen subsystems is not very interesting anymore later after boot ==
"hotplug".
[...]
> Is it possible that a new unit is inserted while the probing
> kthread is running?
Nodes and units on nodes may come and go at arbitrary points in time,
and I'm sure similar things can be said about the majority of other bus
architectures or network architectures. We take this into account.
(The old FireWire stack will re-enter the main loop of the bus scanning
thread sometime after a bus reset event signaled that nodes or units may
have appeared or disappeared. The new FireWire stack will schedule
respective scanning workqueue jobs after such an event.)
--
Stefan Richter
-=====-=-=== -==- =-=-=
http://arcgraph.de/sr/
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-21 16:21 ` Stefan Richter
@ 2007-06-22 9:52 ` Huang, Ying
2007-07-03 15:04 ` Cornelia Huck
0 siblings, 1 reply; 14+ messages in thread
From: Huang, Ying @ 2007-06-22 9:52 UTC (permalink / raw)
To: Stefan Richter
Cc: Greg K-H, Cornelia Huck, Adrian Bunk, david, David Miller,
Duncan Sands, Phillip Susi, linux-kernel
On Thu, 2007-06-21 at 18:21 +0200, Stefan Richter wrote:
> Parallelism between subsystems may be interesting during boot ==
> "coldplug", /if/ the machine has time-consuming devices to probe on
> /different/ types of buses. Of course some machines do the really
> time-consuming stuff on only one type of bus. Granted, parallelism
> between subsystems is not very interesting anymore later after boot ==
> "hotplug".
Yes. So I think there are two possible solutions.
1. Creating one set of probing queues for each subsystem (maybe just
the subsystems that need it), so the probing queue IDs are local to
each subsystem.
2. There is only one set of probing queues in the whole system. The
probing queue IDs are shared between subsystems. The subsystem can
select a random starting queue ID (maybe named start_queue_id), and
allocate queue IDs from that point on (start_queue_id +
private_queue_id). So the probability of queue ID sharing will be
reduced.
> (The old FireWire stack will re-enter the main loop of the bus scanning
> thread sometime after a bus reset event signaled that nodes or units may
> have appeared or disappeared. The new FireWire stack will schedule
> respective scanning workqueue jobs after such an event.)
I think the workqueue is better than a kernel thread here. With a
kernel thread, the nodes and units may need to be scanned again and
again if many units/nodes appear at almost the same time, while with a
workqueue, just the needed jobs are scheduled.
And a workqueue like the probing queue, whose thread can be
created/destroyed on demand, will save more resources than an ordinary
workqueue. :)
Best Regards,
Huang Ying
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
[not found] <1182373258.30574.30.camel@caritas-dev.intel.com>
2007-06-20 15:09 ` [PATCH] driver core: multithreaded probing - more parallelism control Stefan Richter
@ 2007-06-24 7:06 ` Greg KH
2007-06-24 9:38 ` Stefan Richter
2007-06-24 15:04 ` [PATCH] driver core: multithreaded probing - more parallelism control Huang, Ying
1 sibling, 2 replies; 14+ messages in thread
From: Greg KH @ 2007-06-24 7:06 UTC (permalink / raw)
To: Huang, Ying
Cc: Stefan Richter, Cornelia Huck, Adrian Bunk, david, David Miller,
bunk@stusta.de, Duncan Sands, Phillip Susi, linux-kernel
On Wed, Jun 20, 2007 at 09:00:58PM +0000, Huang, Ying wrote:
> Hi,
>
> This is a new version of multithreaded probing patch, with more
> parallelism control added.
>
> There is more control over which devices and drivers will be probed
> in parallel or serially. For example, in the IEEE1394 subsystem, the
> different "units" in one "node" can be probed serially while the
> different "nodes" can be probed in parallel.
>
> The number of threads can be controlled through a kernel command line
> parameter.
>
> The patch is against 2.6.22-rc5. The "wait_for_probes" function in the
> patch comes from the original multithreaded probing patch. If I need
> to do anything because of it, please let me know.
>
> Any comment is welcome.
I'm still not convinced that we need to add this kind of complexity to
the driver core, instead of just letting the individual driver
subsystems do this, if they want to do it.
Especially as no subsystem wants to do this today :)
thanks,
greg k-h
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-24 7:06 ` Greg KH
@ 2007-06-24 9:38 ` Stefan Richter
2007-06-24 15:04 ` [PATCH] driver core: multithreaded probing - more parallelism control Huang, Ying
1 sibling, 0 replies; 14+ messages in thread
From: Stefan Richter @ 2007-06-24 9:38 UTC (permalink / raw)
To: Greg KH
Cc: Huang, Ying, Cornelia Huck, Adrian Bunk, david, David Miller,
bunk@stusta.de, Duncan Sands, Phillip Susi, linux-kernel
Greg KH wrote:
> I'm still not convinced that we need to add this kind of complexity to
> the driver core, instead of just letting the individual driver
> subsystems do this, if they want to do it.
>
> Especially as no subsystem wants to do this today :)
Yes, it should first be shown (with subsystem conversions and runtime
tests with, say, at least two different bus architectures) that features
like this really are appropriately implemented as a driver core abstraction.
I would lend a hand to put this to test, but I won't do this anymore
with the old ieee1394 subsystem, and work on the new firewire subsystem
will be focused on stabilization and feature completion in the short to
mid term.
--
Stefan Richter
-=====-=-=== -==- ==---
http://arcgraph.de/sr/
* RE: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-24 7:06 ` Greg KH
2007-06-24 9:38 ` Stefan Richter
@ 2007-06-24 15:04 ` Huang, Ying
2007-06-25 8:16 ` Greg KH
1 sibling, 1 reply; 14+ messages in thread
From: Huang, Ying @ 2007-06-24 15:04 UTC (permalink / raw)
To: Greg KH
Cc: Stefan Richter, Cornelia Huck, Adrian Bunk, david, David Miller,
bunk@stusta.de, Duncan Sands, Phillip Susi, linux-kernel
>From: Greg KH [mailto:greg@kroah.com]
>I'm still not convinced that we need to add this kind of complexity to
>the driver core, instead of just letting the individual driver
>subsystems do this, if they want to do it.
It may appear unnecessary to provide more multithreaded device probing
in the driver core, but it seems more necessary to provide more
parallelism control in the driver core, to make some device probing
more single-threaded.
There does exist multithreaded device probing in the current driver
core implementation, supposing two devices are hot-plugged at the same
time. But many device drivers are written without taking this into
account. I think it may be better to make the default device probing
process more single-threaded in the driver core. A single-threaded
workqueue, or some customized version of a workqueue like the one
implemented by my patch, can be used for this. The parallelism control
mechanism can be used to implement multithreaded device probing in the
subsystems that need it, too.
Best Regards,
Huang, Ying
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-24 15:04 ` [PATCH] driver core: multithreaded probing - more parallelism control Huang, Ying
@ 2007-06-25 8:16 ` Greg KH
2007-07-03 9:33 ` Cornelia Huck
0 siblings, 1 reply; 14+ messages in thread
From: Greg KH @ 2007-06-25 8:16 UTC (permalink / raw)
To: Huang, Ying
Cc: Stefan Richter, Cornelia Huck, Adrian Bunk, david, David Miller,
bunk@stusta.de, Duncan Sands, Phillip Susi, linux-kernel
On Sun, Jun 24, 2007 at 11:04:13PM +0800, Huang, Ying wrote:
> >From: Greg KH [mailto:greg@kroah.com]
> >I'm still not convinced that we need to add this kind of complexity to
> >the driver core, instead of just letting the individual driver
> >subsystems do this, if they want to do it.
>
> >It may appear unnecessary to provide more multithreaded device
> >probing in the driver core, but it seems more necessary to provide
> >more parallelism control in the driver core, to make some device
> >probing more single-threaded.
>
> There does exist multithreaded device probing in the current driver
> core implementation, supposing two devices are hot-plugged at the
> same time.
No, that is a bus-specific thing, and no bus that I know of supports
that at this time.
> But many device drivers are written without taking this into account.
That's why no bus does this :)
> I think it may be better to make the default device probing process
> more single-threaded in the driver core. A single-threaded workqueue,
> or some customized version of a workqueue like the one implemented by
> my patch, can be used for this. The parallelism control mechanism can
> be used to implement multithreaded device probing in the subsystems
> that need it, too.
But remember, the individual busses already do this all in a single
thread anyway, nothing is needed in the driver core to do this.
thanks,
greg k-h
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-25 8:16 ` Greg KH
@ 2007-07-03 9:33 ` Cornelia Huck
0 siblings, 0 replies; 14+ messages in thread
From: Cornelia Huck @ 2007-07-03 9:33 UTC (permalink / raw)
To: Greg KH
Cc: Huang, Ying, Stefan Richter, Adrian Bunk, david, David Miller,
Duncan Sands, Phillip Susi, linux-kernel
On Mon, 25 Jun 2007 01:16:24 -0700,
Greg KH <greg@kroah.com> wrote:
[I'm a bit late to the party, but...]
> On Sun, Jun 24, 2007 at 11:04:13PM +0800, Huang, Ying wrote:
> > There does exist multithreaded device probing in the current driver
> > core implementation, supposing two devices are hot-plugged at the
> > same time.
>
> No, that is a bus-specific thing, and no bus that I know of supports
> that at this time.
The s390 channel subsystem busses should be fine with any parallelism,
especially as the css bus kicks off tons of probes (device recognition)
at the same time. Any ccw driver must be able to handle being called
for many devices in parallel as well (like when someone attaches their
shiny new storage subsystem to the LPAR and some thousands of dasds
become available).
>
> > But many device drivers are written without taking this into account.
>
> That's why no bus does this :)
It is possible for busses with a small set of device drivers (like the
s390 busses; maybe there are others). It looks like a bad idea to try
this for PCI :)
>
> > I think it may be better to make the default device probing process
> > more single-threaded in the driver core. A single-threaded
> > workqueue, or some customized version of a workqueue like the one
> > implemented by my patch, can be used for this. The parallelism
> > control mechanism can be used to implement multithreaded device
> > probing in the subsystems that need it, too.
>
> But remember, the individual busses already do this all in a single
> thread anyway, nothing is needed in the driver core to do this.
I think I could make good use of some more parallelism control (for
throttling or so). Not sure if it should really sit at the driver
core level, but that would avoid reinventing the wheel.
[Goes reading the original patch]
Cornelia
* Re: [PATCH] driver core: multithreaded probing - more parallelism control
2007-06-22 9:52 ` Huang, Ying
@ 2007-07-03 15:04 ` Cornelia Huck
0 siblings, 0 replies; 14+ messages in thread
From: Cornelia Huck @ 2007-07-03 15:04 UTC (permalink / raw)
To: Huang, Ying
Cc: Stefan Richter, Greg K-H, Adrian Bunk, david, David Miller,
Duncan Sands, Phillip Susi, linux-kernel
On Fri, 22 Jun 2007 09:52:38 +0000,
"Huang, Ying" <ying.huang@intel.com> wrote:
> On Thu, 2007-06-21 at 18:21 +0200, Stefan Richter wrote:
> > Parallelism between subsystems may be interesting during boot ==
> > "coldplug", /if/ the machine has time-consuming devices to probe on
> > /different/ types of buses. Of course some machines do the really
> > time-consuming stuff on only one type of bus. Granted, parallelism
> > between subsystems is not very interesting anymore later after boot ==
> > "hotplug".
>
> Yes. So I think there are two possible solutions.
>
> 1. Creating one set of probing queues for each subsystem (maybe just
> the subsystems that need it), so the probing queue IDs are local to
> each subsystem.
> 2. There is only one set of probing queues in the whole system. The
> probing queue IDs are shared between subsystems. The subsystem can
> select a random starting queue ID (maybe named start_queue_id), and
> allocate queue IDs from that point on (start_queue_id +
> private_queue_id). So the probability of queue ID sharing will be
> reduced.
What should also be considered here is that we may want to have
different numbers of queues per subsystem (fewer for those where
probing is resource-heavy), but we may want to restrict the total
number of queues as well. Some throttling mechanism may be helpful
here (so that a single subsystem cannot hog all queues while another
is stuck with a single queue, and to avoid double usage of queues). In
fact, throttling may be interesting for any subsystem using
parallelism, especially if the number of devices may be huge and/or
probing is resource hungry.
Cornelia
* [PATCH] driver core: multithreaded probing - more parallelism control
@ 2007-06-21 10:17 Huang, Ying
0 siblings, 0 replies; 14+ messages in thread
From: Huang, Ying @ 2007-06-21 10:17 UTC (permalink / raw)
To: linux-kernel
Hi,
This is a new version of multithreaded probing patch, with more
parallelism control added.
There is more control over which devices and drivers will be probed
in parallel or serially. For example, in the IEEE1394 subsystem, the
different "units" in one "node" can be probed serially while the
different "nodes" can be probed in parallel.
The number of threads can be controlled through a kernel command line
parameter.
The patch is against 2.6.22-rc5. The "wait_for_probes" function in the
patch comes from the original multithreaded probing patch. If I need
to do anything because of it, please let me know.
Any comment is welcome.
Best Regards,
Huang Ying
---
This patch adds multithreaded probing with more parallelism control.
The device/driver probing is done on a probing queue, which is a
customized version of a work queue, where the thread of the probing
queue comes and goes on demand. There is a queue No. for each probing
queue. The queue No. ranges from 0 to (unsigned short) ~0U; any queue
No. can be used for probing independent of the underlying thread
number.
The device/driver probing is submitted to a probing queue through the
"schedule_probe" function, with a probing queue No., a probing function
and the corresponding parameter specified. Probing with the same
probing queue No. will be done serially, while probing with different
probing queue Nos. may be done in parallel.
"schedule_probe" is a general interface of multithreaded probing. While
convenient interfaces are as follow.
A field named "probe_queue_no" is added to "struct device", which
indicates probing queue No. on which the probing of the device will be
done. The subsystem can control the parallelism through this field. For
example, in IEEE1394 subsystems, the same probe_queue_no can be assigned
for different "units" in same "node" while different probe_queue_no can
be assigned to different "nodes".
Fields named "has_probe_queue" and "probe_queue_no" are added to "struct
device_driver". This let the probing queue can be controlled through
driver side too. If "has_probe_queue" is set, the "probe_queue_no" of
"struct device_driver" is used, otherwise that of "struct device" is
used.
There are other rules to control the parallelism.
1. A child device will not start probing until the probing of its
parent has been done.
2. The different drivers of the same device will be probed serially.
If it is intended to separate the probing process of a device/driver
into a synchronous part and an asynchronous part, the ".probe" of
"struct device_driver" is used as the synchronous interface of device
probing, while the asynchronous part can be submitted through
"schedule_probe" in the ".probe" function. All the synchronous parts
will be submitted with the same probing queue number.
More strict serialization can be obtained through this mechanism if
needed. For example, if two USB drivers are inserted into the kernel
simultaneously, in the original driver probing code two USB devices
may be probed simultaneously, while they will be probed serially if
the two devices have the same probing queue No. in the probing queue
based probing code.
There is a problem with this multithreaded probing mechanism. There is
some code in the kernel doing hardware access outside the driver
".probe" function, which may conflict with the multithreaded probing
driver code. A possible mechanism is executing that hardware access
code in a probing queue through "schedule_probe".
Signed-off-by: Huang Ying <ying.huang@intel.com>
---
drivers/base/dd.c | 182 ++++++++++++++++++++++++++++++++++++++++++++++---
include/linux/device.h | 27 ++++++-
2 files changed, 199 insertions(+), 10 deletions(-)
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index b0088b0..a5cd629 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -23,6 +23,162 @@
#include "base.h"
#include "power/power.h"
+struct probe_queue {
+ spinlock_t lock;
+ struct list_head probe_list;
+ unsigned has_thread : 1;
+};
+
+struct simple_probe {
+ struct probe probe;
+ struct device *dev;
+ struct device_driver *drv;
+};
+
+static struct probe_queue *probe_queues;
+static atomic_t probe_count = ATOMIC_INIT(0);
+static DECLARE_WAIT_QUEUE_HEAD(probe_waitqueue);
+
+static int probe_thread_num = 1;
+
+static int __init set_probe_thread_num(char *str)
+{
+ get_option(&str, &probe_thread_num);
+ if (probe_thread_num <= 0 || probe_thread_num > (unsigned short) ~0U)
+ probe_thread_num = 1;
+ return 1;
+}
+
+__setup("probe_thread_num=", set_probe_thread_num);
+
+static int really_probe(struct probe *probe);
+
+static void init_probe_queue(struct probe_queue *probe_queue)
+{
+ spin_lock_init(&probe_queue->lock);
+ INIT_LIST_HEAD(&probe_queue->probe_list);
+ probe_queue->has_thread = 0;
+}
+
+static int probe_thread(void *data)
+{
+ struct probe_queue *probe_queue = data;
+ struct probe *probe;
+ int ret = 0;
+
+ while (1) {
+ spin_lock(&probe_queue->lock);
+ if (list_empty(&probe_queue->probe_list)) {
+ probe_queue->has_thread = 0;
+ spin_unlock(&probe_queue->lock);
+ break;
+ }
+ probe = list_entry(probe_queue->probe_list.next,
+ struct probe, entry);
+ list_del_init(probe_queue->probe_list.next);
+ spin_unlock(&probe_queue->lock);
+ ret = probe->probe_func(probe);
+ atomic_dec(&probe_count);
+ wake_up(&probe_waitqueue);
+ }
+ return ret;
+}
+
+/**
+ * schedule_probe - schedule a probing to be done later
+ * @probe_queue_no: probing queue No. on which the probing will be done
+ * @probe: probing information, including probing function and parameter
+ *
+ * Schedule a probing to be done later on specified probing queue. The
+ * probing with same probing queue No. will be probed serially while
+ * the probing with different probing queue No. may be probed
+ * simultaneously.
+ */
+
+int schedule_probe(unsigned short probe_queue_no, struct probe *probe)
+{
+ int ret = 0;
+ struct task_struct *thread;
+ struct probe_queue *probe_queue =
+ &probe_queues[probe_queue_no%probe_thread_num];
+
+ atomic_inc(&probe_count);
+ probe->probe_queue_no = probe_queue_no;
+ spin_lock(&probe_queue->lock);
+ list_add_tail(&probe->entry, &probe_queue->probe_list);
+ if (probe_queue->has_thread) {
+ spin_unlock(&probe_queue->lock);
+ return ret;
+ }
+ probe_queue->has_thread = 1;
+ spin_unlock(&probe_queue->lock);
+ thread = kthread_run(probe_thread, probe_queue,
+ "probe-%u", probe_queue_no);
+ if (IS_ERR(thread))
+ ret = probe_thread(probe_queue);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(schedule_probe);
+
+static int simple_schedule_probe(struct device *dev,
+ struct device_driver *drv)
+{
+ struct simple_probe *simple_probe;
+ unsigned short probe_queue_no;
+
+ pr_debug("simple_schedule:%s,%s\n", dev->bus_id, drv->name);
+ simple_probe = kmalloc(sizeof(struct simple_probe), GFP_KERNEL);
+ if (!simple_probe)
+ return -ENOMEM;
+ INIT_PROBE(&simple_probe->probe, really_probe);
+ simple_probe->dev = get_device(dev);
+ simple_probe->drv = get_driver(drv);
+ dev->is_probing = 1;
+ if (drv->has_probe_queue)
+ probe_queue_no = drv->probe_queue_no;
+ else
+ probe_queue_no = dev->probe_queue_no;
+ return schedule_probe(probe_queue_no, &simple_probe->probe);
+}
+
+static int __init init_probe_queues(void)
+{
+ int i;
+
+ probe_queues = kmalloc(probe_thread_num * sizeof(struct probe_queue),
+ GFP_KERNEL);
+ if (!probe_queues)
+ return -ENOMEM;
+ for (i = 0; i < probe_thread_num; i++)
+ init_probe_queue(&probe_queues[i]);
+ return 0;
+}
+
+core_initcall_sync(init_probe_queues);
+
+static int __init wait_for_probes(void)
+{
+ DEFINE_WAIT(wait);
+
+ if (!atomic_read(&probe_count))
+ return 0;
+ while (atomic_read(&probe_count)) {
+ prepare_to_wait(&probe_waitqueue, &wait, TASK_UNINTERRUPTIBLE);
+ if (atomic_read(&probe_count))
+ schedule();
+ }
+ finish_wait(&probe_waitqueue, &wait);
+ return 0;
+}
+
+core_initcall_sync(wait_for_probes);
+postcore_initcall_sync(wait_for_probes);
+arch_initcall_sync(wait_for_probes);
+subsys_initcall_sync(wait_for_probes);
+fs_initcall_sync(wait_for_probes);
+device_initcall_sync(wait_for_probes);
+late_initcall_sync(wait_for_probes);
+
#define to_drv(node) container_of(node, struct device_driver, kobj.entry)
@@ -94,18 +250,25 @@ int device_bind_driver(struct device *dev)
return ret;
}
-static atomic_t probe_count = ATOMIC_INIT(0);
-static DECLARE_WAIT_QUEUE_HEAD(probe_waitqueue);
-
-static int really_probe(struct device *dev, struct device_driver *drv)
+static int really_probe(struct probe *probe)
{
+ struct device *dev, *parent;
+ struct device_driver *drv;
+ struct simple_probe *simple_probe;
int ret = 0;
- atomic_inc(&probe_count);
+ simple_probe = container_of(probe, struct simple_probe, probe);
+ dev = simple_probe->dev;
+ drv = simple_probe->drv;
+ parent = get_device(dev->parent);
+ while (parent && parent->is_probing)
+ yield();
+ put_device(parent);
pr_debug("%s: Probing driver %s with device %s\n",
drv->bus->name, drv->name, dev->bus_id);
WARN_ON(!list_empty(&dev->devres_head));
+ down(&dev->sem);
dev->driver = drv;
if (driver_sysfs_add(dev)) {
printk(KERN_ERR "%s: driver_sysfs_add(%s) failed\n",
@@ -146,8 +309,11 @@ probe_failed:
*/
ret = 0;
done:
- atomic_dec(&probe_count);
- wake_up(&probe_waitqueue);
+ dev->is_probing = 0;
+ up(&dev->sem);
+ put_device(dev);
+ put_driver(drv);
+ kfree(simple_probe);
return ret;
}
@@ -195,7 +361,7 @@ int driver_probe_device(struct device_driver * drv, struct device * dev)
pr_debug("%s: Matched Device %s with Driver %s\n",
drv->bus->name, dev->bus_id, drv->name);
- ret = really_probe(dev, drv);
+ ret = simple_schedule_probe(dev, drv);
done:
return ret;
diff --git a/include/linux/device.h b/include/linux/device.h
index 2e1a298..9e63602 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -133,6 +133,9 @@ struct device_driver {
const char * mod_name; /* used for built-in modules */
struct module_kobject * mkobj;
+ unsigned short has_probe_queue:1;
+ unsigned short probe_queue_no;
+
int (*probe) (struct device * dev);
int (*remove) (struct device * dev);
void (*shutdown) (struct device * dev);
@@ -417,8 +420,10 @@ struct device {
struct kobject kobj;
char bus_id[BUS_ID_SIZE]; /* position on parent bus */
struct device_type *type;
- unsigned is_registered:1;
- unsigned uevent_suppress:1;
+ unsigned short is_registered:1;
+ unsigned short uevent_suppress:1;
+ unsigned short is_probing:1;
+ unsigned short probe_queue_no;
struct device_attribute uevent_attr;
struct device_attribute *devt_attr;
@@ -526,6 +531,24 @@ extern int __must_check device_attach(struct device * dev);
extern int __must_check driver_attach(struct device_driver *drv);
extern int __must_check device_reprobe(struct device *dev);
+struct probe {
+ struct list_head entry;
+ int (*probe_func)(struct probe *probe);
+ unsigned short probe_queue_no;
+};
+
+/**
+ * INIT_PROBE - initialize a probing item
+ * @_probe: probing item to initialize
+ * @_probe_func: probing function to run on the scheduled queue
+ */
+#define INIT_PROBE(_probe, _probe_func) \
+ do { \
+ INIT_LIST_HEAD(&(_probe)->entry); \
+ (_probe)->probe_func = (_probe_func); \
+ (_probe)->probe_queue_no = 0; \
+ } while (0)
+
+int schedule_probe(unsigned short probe_queue_no, struct probe *probe);
+
/*
* Easy functions for dynamically creating devices on the fly
*/
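For reference, a driver built on this interface would embed a struct probe, initialize it with INIT_PROBE(), and hand it to schedule_probe() with its chosen queue number. The fragment below is only an illustrative sketch against the API declared above, not part of the patch; foo_dev, foo_probe_func, foo_hw_init, and the queue number 2 are all made up:

```c
struct foo_dev {
	struct device dev;
	struct probe probe;
};

static int foo_probe_func(struct probe *probe)
{
	struct foo_dev *fd = container_of(probe, struct foo_dev, probe);

	/* Slow hardware bring-up runs on probe queue 2, serialized
	 * with every other probe scheduled on that same queue but in
	 * parallel with probes on other queues. */
	return foo_hw_init(fd);
}

static int foo_schedule(struct foo_dev *fd)
{
	INIT_PROBE(&fd->probe, foo_probe_func);
	return schedule_probe(2, &fd->probe);
}
```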
Thread overview: 14+ messages
[not found] <1182373258.30574.30.camel@caritas-dev.intel.com>
2007-06-20 15:09 ` [PATCH] driver core: multithreaded probing - more parallelism control Stefan Richter
2007-06-20 15:14 ` Stefan Richter
2007-06-21 9:38 ` Huang, Ying
2007-06-21 8:49 ` Stefan Richter
2007-06-21 13:51 ` Huang, Ying
2007-06-21 16:21 ` Stefan Richter
2007-06-22 9:52 ` Huang, Ying
2007-07-03 15:04 ` Cornelia Huck
2007-06-24 7:06 ` Greg KH
2007-06-24 9:38 ` Stefan Richter
2007-06-24 15:04 ` [PATCH] driver core: multithreaded probing - more parallelism control Huang, Ying
2007-06-25 8:16 ` Greg KH
2007-07-03 9:33 ` Cornelia Huck
2007-06-21 10:17 [PATCH] driver core: multithreaded probing - more parallelism control Huang, Ying