LKML Archive on lore.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: svaidy@linux.vnet.ibm.com
Cc: Trinabh Gupta <trinabh@linux.vnet.ibm.com>,
	arjan@linux.intel.com, lenb@kernel.org,
	suresh.b.siddha@intel.com, benh@kernel.crashing.org,
	venki@google.com, ak@linux.intel.com,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH V3 2/3] cpuidle: list based cpuidle driver registration and selection
Date: Thu, 10 Feb 2011 10:53:30 +0100
Message-ID: <1297331610.5226.4.camel@laptop>
In-Reply-To: <20110210070031.GA5448@dirshya.in.ibm.com>

On Thu, 2011-02-10 at 12:30 +0530, Vaidyanathan Srinivasan wrote:

> We discussed this in the previous posts.  On ppc64 we would like to
> have a single global registration, but for x86 Arjan recommended
> that we keep the per-cpu registration, or else we may break legacy
> or buggy devices.
> 
> Ref: http://lkml.org/lkml/2009/10/7/210
> 
> One corner case that argues against using the lowest common C-state
> is that cores/packages at a thermal limit could start sporting a
> new, lower C-state and want the OS to go to a much lower C-state and
> sacrifice performance.  Our global design would prevent that cpu
> from going to a state lower than the rest of the system.
> 
> This is only a remote possibility, but it could happen on
> battery-operated devices, under a low-battery mode, etc.  Basically
> we would have to keep our design open to allow individual CPUs to go
> to 'their' deepest allowed sleep state even in an asymmetric case.
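
For reference, the per-cpu model under discussion registers one
cpuidle_device per CPU, each device carrying its own state table,
while a global model would fill one shared table once.  A minimal
sketch against the 2.6.38-era cpuidle API (the "my_idle" names are
made up; error handling and state setup are elided):

#include <linux/cpu.h>
#include <linux/cpuidle.h>
#include <linux/module.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(struct cpuidle_device, my_idle_dev);

static struct cpuidle_driver my_idle_driver = {
	.name	= "my_idle",
	.owner	= THIS_MODULE,
};

static int __init my_idle_init(void)
{
	int cpu;

	cpuidle_register_driver(&my_idle_driver);

	for_each_online_cpu(cpu) {
		struct cpuidle_device *dev = &per_cpu(my_idle_dev, cpu);

		dev->cpu = cpu;
		/* each CPU advertises only the states firmware reported for it */
		cpuidle_register_device(dev);
	}
	return 0;
}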


But but but, it's a stupid ACPI bug if it reports different C-states
for different CPUs.

Len, does intel_idle also suffer from this, or does it simply ignore
what ACPI has to say?

Also, suppose that for some daft reason it doesn't report C2 as
available on one of the CPUs: what happens if we use it anyway?
(i.e., use the union of the reported states)
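
Concretely, "use the union" would mean merging every CPU's reported
states into one global set before registration.  A hypothetical
sketch (these masks are illustrative, not real cpuidle fields):

/* bit N set when firmware reported C-state N for this CPU */
static DEFINE_PER_CPU(unsigned long, reported_cstate_mask);

/* any CPU may then enter any state that at least one CPU reported */
static unsigned long cstate_union_mask;

static void build_cstate_union(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		cstate_union_mask |= per_cpu(reported_cstate_mask, cpu);
}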




Thread overview: 11+ messages
2011-02-08 10:51 [RFC PATCH V3 0/3] cpuidle: Cleanup pm_idle and include driver/cpuidle.c in-kernel Trinabh Gupta
2011-02-08 10:51 ` [RFC PATCH V3 1/3] cpuidle: Remove pm_idle pointer for x86 Trinabh Gupta
2011-02-08 10:52 ` [RFC PATCH V3 2/3] cpuidle: list based cpuidle driver registration and selection Trinabh Gupta
2011-02-09 11:17   ` Peter Zijlstra
2011-02-10  7:00     ` Vaidyanathan Srinivasan
2011-02-10  9:53       ` Peter Zijlstra [this message]
2011-02-10 17:16         ` Vaidyanathan Srinivasan
2011-02-08 10:52 ` [RFC PATCH V3 3/3] cpuidle: default idle driver for x86 Trinabh Gupta
2011-02-09 11:19   ` Peter Zijlstra
2011-02-09 11:21 ` [RFC PATCH V3 0/3] cpuidle: Cleanup pm_idle and include driver/cpuidle.c in-kernel Peter Zijlstra
2011-02-10 15:10   ` Trinabh Gupta
