LKML Archive on lore.kernel.org
* [PATCH] sched/deadline: always enqueue on previous rq when dl_task_timer fires
@ 2015-03-31  8:53 Juri Lelli
  2015-03-31  8:53 ` [PATCH] sched/core: check for available -dl bandwidth in cpuset_cpu_inactive Juri Lelli
  2015-04-02 18:46 ` [tip:sched/core] sched/deadline: Always enqueue on previous rq when dl_task_timer() fires tip-bot for Juri Lelli
  0 siblings, 2 replies; 7+ messages in thread
From: Juri Lelli @ 2015-03-31  8:53 UTC (permalink / raw)
  To: peterz; +Cc: linux-kernel, Juri Lelli, Ingo Molnar, Kirill Tkhai, Juri Lelli

dl_task_timer() may fire on a different rq from where a task was removed
after throttling. Since the call path is:

  dl_task_timer() ->
    enqueue_task_dl() ->
      enqueue_dl_entity() ->
        replenish_dl_entity()

and replenish_dl_entity() uses dl_se's rq, we can't use current's rq
in dl_task_timer(); we must lock the task's previous rq instead.

Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Acked-by: Kirill Tkhai <ktkhai@parallels.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Kirill Tkhai <ktkhai@parallels.com>
Cc: Juri Lelli <juri.lelli@gmail.com>
Cc: linux-kernel@vger.kernel.org
Fixes: 3960c8c0c789 ("sched: Make dl_task_time() use task_rq_lock()")
---
 kernel/sched/deadline.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3fa8fa6..f670cbb 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -514,7 +514,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 	unsigned long flags;
 	struct rq *rq;
 
-	rq = task_rq_lock(current, &flags);
+	rq = task_rq_lock(p, &flags);
 
 	/*
 	 * We need to take care of several possible races here:
@@ -569,7 +569,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 		push_dl_task(rq);
 #endif
 unlock:
-	task_rq_unlock(rq, current, &flags);
+	task_rq_unlock(rq, p, &flags);
 
 	return HRTIMER_NORESTART;
 }
-- 
2.2.2


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH] sched/core: check for available -dl bandwidth in cpuset_cpu_inactive
  2015-03-31  8:53 [PATCH] sched/deadline: always enqueue on previous rq when dl_task_timer fires Juri Lelli
@ 2015-03-31  8:53 ` Juri Lelli
  2015-04-02 18:47   ` [tip:sched/core] sched/core: Check for available DL bandwidth in cpuset_cpu_inactive() tip-bot for Juri Lelli
  2015-04-02 18:46 ` [tip:sched/core] sched/deadline: Always enqueue on previous rq when dl_task_timer() fires tip-bot for Juri Lelli
  1 sibling, 1 reply; 7+ messages in thread
From: Juri Lelli @ 2015-03-31  8:53 UTC (permalink / raw)
  To: peterz; +Cc: linux-kernel, Juri Lelli, Ingo Molnar, Juri Lelli

Hotplug operations are destructive w.r.t. cpusets. In case such an
operation is performed on a CPU belonging to an exclusive cpuset, the
-dl bandwidth information associated with the corresponding root
domain is gone even if the operation fails (in sched_cpu_inactive()).

For this reason we need to move the check we currently have in
sched_cpu_inactive() to cpuset_cpu_inactive() to prevent useless
cpusets reconfiguration in the CPU_DOWN_FAILED path.

Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@gmail.com>
Cc: linux-kernel@vger.kernel.org
---
 kernel/sched/core.c | 56 ++++++++++++++++++++++++++---------------------------
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 62671f5..213d26d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5320,36 +5320,13 @@ static int sched_cpu_active(struct notifier_block *nfb,
 static int sched_cpu_inactive(struct notifier_block *nfb,
 					unsigned long action, void *hcpu)
 {
-	unsigned long flags;
-	long cpu = (long)hcpu;
-	struct dl_bw *dl_b;
-
 	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_DOWN_PREPARE:
-		set_cpu_active(cpu, false);
-
-		/* explicitly allow suspend */
-		if (!(action & CPU_TASKS_FROZEN)) {
-			bool overflow;
-			int cpus;
-
-			rcu_read_lock_sched();
-			dl_b = dl_bw_of(cpu);
-
-			raw_spin_lock_irqsave(&dl_b->lock, flags);
-			cpus = dl_bw_cpus(cpu);
-			overflow = __dl_overflow(dl_b, cpus, 0, 0);
-			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-
-			rcu_read_unlock_sched();
-
-			if (overflow)
-				return notifier_from_errno(-EBUSY);
-		}
+		set_cpu_active((long)hcpu, false);
 		return NOTIFY_OK;
+	default:
+		return NOTIFY_DONE;
 	}
-
-	return NOTIFY_DONE;
 }
 
 static int __init migration_init(void)
@@ -7000,7 +6977,6 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
 		 */
 
 	case CPU_ONLINE:
-	case CPU_DOWN_FAILED:
 		cpuset_update_active_cpus(true);
 		break;
 	default:
@@ -7012,8 +6988,32 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
 static int cpuset_cpu_inactive(struct notifier_block *nfb, unsigned long action,
 			       void *hcpu)
 {
-	switch (action) {
+	unsigned long flags;
+	long cpu = (long)hcpu;
+	struct dl_bw *dl_b;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_DOWN_PREPARE:
+		/* explicitly allow suspend */
+		if (!(action & CPU_TASKS_FROZEN)) {
+			bool overflow;
+			int cpus;
+
+			rcu_read_lock_sched();
+			dl_b = dl_bw_of(cpu);
+
+			raw_spin_lock_irqsave(&dl_b->lock, flags);
+			cpus = dl_bw_cpus(cpu);
+			overflow = __dl_overflow(dl_b, cpus, 0, 0);
+			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+
+			rcu_read_unlock_sched();
+
+			if (overflow) {
+				trace_printk("hotplug failed for cpu %lu", cpu);
+				return notifier_from_errno(-EBUSY);
+			}
+		}
 		cpuset_update_active_cpus(false);
 		break;
 	case CPU_DOWN_PREPARE_FROZEN:
-- 
2.2.2


* [tip:sched/core] sched/deadline: Always enqueue on previous rq when dl_task_timer() fires
  2015-03-31  8:53 [PATCH] sched/deadline: always enqueue on previous rq when dl_task_timer fires Juri Lelli
  2015-03-31  8:53 ` [PATCH] sched/core: check for available -dl bandwidth in cpuset_cpu_inactive Juri Lelli
@ 2015-04-02 18:46 ` tip-bot for Juri Lelli
  1 sibling, 0 replies; 7+ messages in thread
From: tip-bot for Juri Lelli @ 2015-04-02 18:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, wanpeng.li, ktkhai, mingo, juri.lelli, linux-kernel, hpa,
	tglx, juri.lelli

Commit-ID:  4cd57f97135840f637431c92380c8da3edbe44ed
Gitweb:     http://git.kernel.org/tip/4cd57f97135840f637431c92380c8da3edbe44ed
Author:     Juri Lelli <juri.lelli@arm.com>
AuthorDate: Tue, 31 Mar 2015 09:53:36 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 2 Apr 2015 17:42:56 +0200

sched/deadline: Always enqueue on previous rq when dl_task_timer() fires

dl_task_timer() may fire on a different rq from where a task was removed
after throttling. Since the call path is:

  dl_task_timer() ->
    enqueue_task_dl() ->
      enqueue_dl_entity() ->
        replenish_dl_entity()

and replenish_dl_entity() uses dl_se's rq, we can't use current's rq
in dl_task_timer(); we must lock the task's previous rq instead.

Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Kirill Tkhai <ktkhai@parallels.com>
Cc: Juri Lelli <juri.lelli@gmail.com>
Fixes: 3960c8c0c789 ("sched: Make dl_task_time() use task_rq_lock()")
Link: http://lkml.kernel.org/r/1427792017-7356-1-git-send-email-juri.lelli@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/deadline.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5e2f99b..9d3ad64 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -514,7 +514,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 	unsigned long flags;
 	struct rq *rq;
 
-	rq = task_rq_lock(current, &flags);
+	rq = task_rq_lock(p, &flags);
 
 	/*
 	 * We need to take care of several possible races here:
@@ -569,7 +569,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 		push_dl_task(rq);
 #endif
 unlock:
-	task_rq_unlock(rq, current, &flags);
+	task_rq_unlock(rq, p, &flags);
 
 	return HRTIMER_NORESTART;
 }

* [tip:sched/core] sched/core: Check for available DL bandwidth in cpuset_cpu_inactive()
  2015-03-31  8:53 ` [PATCH] sched/core: check for available -dl bandwidth in cpuset_cpu_inactive Juri Lelli
@ 2015-04-02 18:47   ` tip-bot for Juri Lelli
  2015-04-03  8:42     ` Borislav Petkov
  0 siblings, 1 reply; 7+ messages in thread
From: tip-bot for Juri Lelli @ 2015-04-02 18:47 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, juri.lelli, peterz, tglx, juri.lelli, mingo, hpa

Commit-ID:  3c18d447b3b36a8d3c90dc37dfbd363cdb685d0a
Gitweb:     http://git.kernel.org/tip/3c18d447b3b36a8d3c90dc37dfbd363cdb685d0a
Author:     Juri Lelli <juri.lelli@arm.com>
AuthorDate: Tue, 31 Mar 2015 09:53:37 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 2 Apr 2015 17:42:56 +0200

sched/core: Check for available DL bandwidth in cpuset_cpu_inactive()

Hotplug operations are destructive w.r.t. cpusets. In case such an
operation is performed on a CPU belonging to an exclusive cpuset, the
DL bandwidth information associated with the corresponding root
domain is gone even if the operation fails (in sched_cpu_inactive()).

For this reason we need to move the check we currently have in
sched_cpu_inactive() to cpuset_cpu_inactive() to prevent useless
cpusets reconfiguration in the CPU_DOWN_FAILED path.

Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@gmail.com>
Link: http://lkml.kernel.org/r/1427792017-7356-2-git-send-email-juri.lelli@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c | 56 ++++++++++++++++++++++++++---------------------------
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4c49e75..28b0d75 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5337,36 +5337,13 @@ static int sched_cpu_active(struct notifier_block *nfb,
 static int sched_cpu_inactive(struct notifier_block *nfb,
 					unsigned long action, void *hcpu)
 {
-	unsigned long flags;
-	long cpu = (long)hcpu;
-	struct dl_bw *dl_b;
-
 	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_DOWN_PREPARE:
-		set_cpu_active(cpu, false);
-
-		/* explicitly allow suspend */
-		if (!(action & CPU_TASKS_FROZEN)) {
-			bool overflow;
-			int cpus;
-
-			rcu_read_lock_sched();
-			dl_b = dl_bw_of(cpu);
-
-			raw_spin_lock_irqsave(&dl_b->lock, flags);
-			cpus = dl_bw_cpus(cpu);
-			overflow = __dl_overflow(dl_b, cpus, 0, 0);
-			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-
-			rcu_read_unlock_sched();
-
-			if (overflow)
-				return notifier_from_errno(-EBUSY);
-		}
+		set_cpu_active((long)hcpu, false);
 		return NOTIFY_OK;
+	default:
+		return NOTIFY_DONE;
 	}
-
-	return NOTIFY_DONE;
 }
 
 static int __init migration_init(void)
@@ -7006,7 +6983,6 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
 		 */
 
 	case CPU_ONLINE:
-	case CPU_DOWN_FAILED:
 		cpuset_update_active_cpus(true);
 		break;
 	default:
@@ -7018,8 +6994,32 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
 static int cpuset_cpu_inactive(struct notifier_block *nfb, unsigned long action,
 			       void *hcpu)
 {
-	switch (action) {
+	unsigned long flags;
+	long cpu = (long)hcpu;
+	struct dl_bw *dl_b;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_DOWN_PREPARE:
+		/* explicitly allow suspend */
+		if (!(action & CPU_TASKS_FROZEN)) {
+			bool overflow;
+			int cpus;
+
+			rcu_read_lock_sched();
+			dl_b = dl_bw_of(cpu);
+
+			raw_spin_lock_irqsave(&dl_b->lock, flags);
+			cpus = dl_bw_cpus(cpu);
+			overflow = __dl_overflow(dl_b, cpus, 0, 0);
+			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+
+			rcu_read_unlock_sched();
+
+			if (overflow) {
+				trace_printk("hotplug failed for cpu %lu", cpu);
+				return notifier_from_errno(-EBUSY);
+			}
+		}
 		cpuset_update_active_cpus(false);
 		break;
 	case CPU_DOWN_PREPARE_FROZEN:

* Re: [tip:sched/core] sched/core: Check for available DL bandwidth in cpuset_cpu_inactive()
  2015-04-02 18:47   ` [tip:sched/core] sched/core: Check for available DL bandwidth in cpuset_cpu_inactive() tip-bot for Juri Lelli
@ 2015-04-03  8:42     ` Borislav Petkov
  2015-04-03 14:21       ` Peter Zijlstra
  0 siblings, 1 reply; 7+ messages in thread
From: Borislav Petkov @ 2015-04-03  8:42 UTC (permalink / raw)
  To: peterz, tglx, juri.lelli, linux-kernel, mingo, hpa, juri.lelli
  Cc: linux-tip-commits

On Thu, Apr 02, 2015 at 11:47:04AM -0700, tip-bot for Juri Lelli wrote:
> Commit-ID:  3c18d447b3b36a8d3c90dc37dfbd363cdb685d0a
> Gitweb:     http://git.kernel.org/tip/3c18d447b3b36a8d3c90dc37dfbd363cdb685d0a
> Author:     Juri Lelli <juri.lelli@arm.com>
> AuthorDate: Tue, 31 Mar 2015 09:53:37 +0100
> Committer:  Ingo Molnar <mingo@kernel.org>
> CommitDate: Thu, 2 Apr 2015 17:42:56 +0200
> 
> sched/core: Check for available DL bandwidth in cpuset_cpu_inactive()
> 
> Hotplug operations are destructive w.r.t. cpusets. In case such an
> operation is performed on a CPU belonging to an exclusive cpuset, the
> DL bandwidth information associated with the corresponding root
> domain is gone even if the operation fails (in sched_cpu_inactive()).
> 
> For this reason we need to move the check we currently have in
> sched_cpu_inactive() to cpuset_cpu_inactive() to prevent useless
> cpusets reconfiguration in the CPU_DOWN_FAILED path.
> 
> Signed-off-by: Juri Lelli <juri.lelli@arm.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Juri Lelli <juri.lelli@gmail.com>
> Link: http://lkml.kernel.org/r/1427792017-7356-2-git-send-email-juri.lelli@arm.com
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
>  kernel/sched/core.c | 56 ++++++++++++++++++++++++++---------------------------
>  1 file changed, 28 insertions(+), 28 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 4c49e75..28b0d75 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5337,36 +5337,13 @@ static int sched_cpu_active(struct notifier_block *nfb,
>  static int sched_cpu_inactive(struct notifier_block *nfb,
>  					unsigned long action, void *hcpu)
>  {
> -	unsigned long flags;
> -	long cpu = (long)hcpu;
> -	struct dl_bw *dl_b;
> -
>  	switch (action & ~CPU_TASKS_FROZEN) {
>  	case CPU_DOWN_PREPARE:
> -		set_cpu_active(cpu, false);
> -
> -		/* explicitly allow suspend */
> -		if (!(action & CPU_TASKS_FROZEN)) {
> -			bool overflow;
> -			int cpus;
> -
> -			rcu_read_lock_sched();
> -			dl_b = dl_bw_of(cpu);
> -
> -			raw_spin_lock_irqsave(&dl_b->lock, flags);
> -			cpus = dl_bw_cpus(cpu);
> -			overflow = __dl_overflow(dl_b, cpus, 0, 0);
> -			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
> -
> -			rcu_read_unlock_sched();
> -
> -			if (overflow)
> -				return notifier_from_errno(-EBUSY);
> -		}
> +		set_cpu_active((long)hcpu, false);
>  		return NOTIFY_OK;
> +	default:
> +		return NOTIFY_DONE;
>  	}
> -
> -	return NOTIFY_DONE;
>  }
>  
>  static int __init migration_init(void)
> @@ -7006,7 +6983,6 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
>  		 */
>  
>  	case CPU_ONLINE:
> -	case CPU_DOWN_FAILED:
>  		cpuset_update_active_cpus(true);
>  		break;
>  	default:
> @@ -7018,8 +6994,32 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
>  static int cpuset_cpu_inactive(struct notifier_block *nfb, unsigned long action,
>  			       void *hcpu)
>  {
> -	switch (action) {
> +	unsigned long flags;
> +	long cpu = (long)hcpu;
> +	struct dl_bw *dl_b;
> +
> +	switch (action & ~CPU_TASKS_FROZEN) {
>  	case CPU_DOWN_PREPARE:
> +		/* explicitly allow suspend */
> +		if (!(action & CPU_TASKS_FROZEN)) {
> +			bool overflow;
> +			int cpus;
> +
> +			rcu_read_lock_sched();
> +			dl_b = dl_bw_of(cpu);
> +
> +			raw_spin_lock_irqsave(&dl_b->lock, flags);
> +			cpus = dl_bw_cpus(cpu);
> +			overflow = __dl_overflow(dl_b, cpus, 0, 0);
> +			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
> +
> +			rcu_read_unlock_sched();
> +
> +			if (overflow) {
> +				trace_printk("hotplug failed for cpu %lu", cpu);

Look ma,

someone forgot debugging code:

...
[    0.000000] 	RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
[    0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[    0.000000] 
[    0.000000] **********************************************************
[    0.000000] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.000000] **                                                      **
[    0.000000] ** trace_printk() being used. Allocating extra memory.  **
[    0.000000] **                                                      **
[    0.000000] ** This means that this is a DEBUG kernel and it is     **
[    0.000000] ** unsafe for production use.                           **
[    0.000000] **                                                      **
[    0.000000] ** If you see this message and you are not debugging    **
[    0.000000] ** the kernel, report this immediately to your vendor!  **
[    0.000000] **                                                      **
[    0.000000] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.000000] **********************************************************

Fix coming up.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

* Re: [tip:sched/core] sched/core: Check for available DL bandwidth in cpuset_cpu_inactive()
  2015-04-03  8:42     ` Borislav Petkov
@ 2015-04-03 14:21       ` Peter Zijlstra
  2015-04-03 15:26         ` Borislav Petkov
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2015-04-03 14:21 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: tglx, juri.lelli, linux-kernel, mingo, hpa, juri.lelli,
	linux-tip-commits

On Fri, Apr 03, 2015 at 10:42:17AM +0200, Borislav Petkov wrote:
> > +			if (overflow) {
> > +				trace_printk("hotplug failed for cpu %lu", cpu);
> 
> Look ma,
> 
> someone forgot debugging code:
> 

*groan*, I actually saw that when Juri sent the patch and thought: I
should not forget to take that out... :/

* Re: [tip:sched/core] sched/core: Check for available DL bandwidth in cpuset_cpu_inactive()
  2015-04-03 14:21       ` Peter Zijlstra
@ 2015-04-03 15:26         ` Borislav Petkov
  0 siblings, 0 replies; 7+ messages in thread
From: Borislav Petkov @ 2015-04-03 15:26 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: tglx, juri.lelli, linux-kernel, mingo, hpa, juri.lelli,
	linux-tip-commits

On Fri, Apr 03, 2015 at 04:21:25PM +0200, Peter Zijlstra wrote:
> *groan*, I actually saw that when Juri sent the patch and thought: I
> should not forget to take that out... :/

Look at the bright side - we tested rostedt's banner. It works. :-)

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

