LKML Archive on lore.kernel.org
* [PATCH RT 00/25] Linux v4.14.170-rt75-rc1
@ 2020-02-21 21:24 zanussi
2020-02-21 21:24 ` [PATCH RT 01/25] Fix wrong-variable use in irq_set_affinity_notifier zanussi
` (24 more replies)
0 siblings, 25 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Tom Zanussi <zanussi@kernel.org>
Dear RT Folks,
This is the RT stable review cycle of patch 4.14.170-rt75-rc1.
Please scream at me if I messed something up. Please test the patches
too.
The -rc release will be uploaded to kernel.org and will be deleted
when the final release is out. This is just a review release (or
release candidate).
The pre-releases will not be pushed to the git repository; only the
final release will be.
If all goes well, this patch will be converted to the next main
release on 2020-02-28.
To build 4.14.170-rt75-rc1 directly, the following patches should be applied:
https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.14.tar.xz
https://www.kernel.org/pub/linux/kernel/v4.x/patch-4.14.170.xz
https://www.kernel.org/pub/linux/kernel/projects/rt/4.14/patch-4.14.170-rt75-rc1.patch.xz
You can also build from 4.14.170-rt74 by applying the incremental patch:
https://www.kernel.org/pub/linux/kernel/projects/rt/4.14/incr/patch-4.14.170-rt74-rt75-rc1.patch.xz
Enjoy,
-- Tom
Joe Korty (1):
Fix wrong-variable use in irq_set_affinity_notifier
Julien Grall (1):
lib/ubsan: Don't serialize UBSAN report
Juri Lelli (1):
sched/deadline: Ensure inactive_timer runs in hardirq context
Liu Haitao (1):
kmemleak: Change the lock of kmemleak_object to raw_spinlock_t
Matt Fleming (1):
mm/memcontrol: Move misplaced local_unlock_irqrestore()
Peter Zijlstra (1):
locking/rtmutex: Clean ->pi_blocked_on in the error case
Scott Wood (5):
sched: migrate_dis/enable: Use sleeping_lock…() to annotate sleeping
points
sched: __set_cpus_allowed_ptr: Check cpus_mask, not cpus_ptr
sched: Remove dead __migrate_disabled() check
sched: migrate disable: Protect cpus_ptr with lock
sched: migrate_enable: Use select_fallback_rq()
Sebastian Andrzej Siewior (11):
x86: preempt: Check preemption level before looking at lazy-preempt
i2c: hix5hd2: Remove IRQF_ONESHOT
i2c: exynos5: Remove IRQF_ONESHOT
futex: Make the futex_hash_bucket spinlock_t again and bring back its
old state
Revert "ARM: Initialize split page table locks for vector page"
x86/fpu: Don't cache access to fpu_fpregs_owner_ctx
locking: Make spinlock_t and rwlock_t a RCU section on RT
userfaultfd: Use a seqlock instead of seqcount
kmemleak: Cosmetic changes
smp: Use smp_cond_func_t as type for the conditional function
locallock: Include header for the `current' macro
Thomas Gleixner (1):
sched: Provide migrate_disable/enable() inlines
Tom Zanussi (1):
Linux 4.14.170-rt75-rc1
Waiman Long (1):
lib/smp_processor_id: Don't use cpumask_equal()
arch/arm/kernel/process.c | 24 ----
arch/x86/include/asm/fpu/internal.h | 2 +-
arch/x86/include/asm/preempt.h | 2 +
drivers/i2c/busses/i2c-exynos5.c | 4 +-
drivers/i2c/busses/i2c-hix5hd2.c | 3 +-
fs/userfaultfd.c | 12 +-
include/linux/locallock.h | 1 +
include/linux/preempt.h | 26 +++-
include/linux/smp.h | 6 +-
kernel/cpu.c | 2 +
kernel/futex.c | 231 ++++++++++++++++++++----------------
kernel/irq/manage.c | 2 +-
kernel/locking/rtmutex.c | 114 ++++++++++++++----
kernel/locking/rtmutex_common.h | 3 +
kernel/locking/rwlock-rt.c | 6 +
kernel/sched/core.c | 43 +++----
kernel/sched/deadline.c | 4 +-
kernel/smp.c | 5 +-
kernel/up.c | 5 +-
lib/smp_processor_id.c | 2 +-
lib/ubsan.c | 76 +++++-------
localversion-rt | 2 +-
mm/kmemleak.c | 90 +++++++-------
mm/memcontrol.c | 2 +-
24 files changed, 370 insertions(+), 297 deletions(-)
--
2.14.1
* [PATCH RT 01/25] Fix wrong-variable use in irq_set_affinity_notifier
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 02/25] x86: preempt: Check preemption level before looking at lazy-preempt zanussi
` (23 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Joe Korty <joe.korty@concurrent-rt.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Fixes upstream commit 3e4242082f0384311f15ab9c93e2620268c6257f,
which erroneously switched old_notify->work to notify->work when
fixing a merge conflict ]
4.14-rt: Fix wrong-variable use in irq_set_affinity_notifier.
The bug was introduced in the 4.14-rt patch
0461-genirq-Handle-missing-work_struct-in-irq_set_affinit.patch
The symptom is a NULL pointer panic in the i40e driver on
system shutdown.
Rebooting.
BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
IP: __kthread_cancel_work_sync+0x12/0xa0
CPU: 15 PID: 6274 Comm: reboot Not tainted 4.14.155-rt70-RedHawk-8.0.2-prt-trace #1
task: ffff9ef0d1a58000 task.stack: ffffbe540c038000
RIP: 0010:__kthread_cancel_work_sync+0x12/0xa0
RSP: 0018:ffffbe540c03bbd8 EFLAGS: 00010296
RAX: 0000084000000020 RBX: 0000000000000000 RCX: 0000000000000034
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000008
RBP: ffffbe540c03bc00 R08: ffff9ee8ccdc3800 R09: ffff9ef0d8c0c000
R10: ffff9ef0d8c0c028 R11: 0000000000000040 R12: ffff9ee8ccdc3800
R13: 0000000000000000 R14: ffff9ee8ccdc3960 R15: 0000000000000074
FS: 00007ffff7fcf380(0000) GS:ffff9ef0ffdc0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000020 CR3: 000000104b428003 CR4: 00000000005606e0
DR0: 00000000006040e0 DR1: 00000000006040e8 DR2: 00000000006040f0
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
PKRU: 55555554
Call Trace:
kthread_cancel_work_sync+0xb/0x10
irq_set_affinity_notifier+0x8e/0xc0
i40e_vsi_free_irq+0xbc/0x230 [i40e]
i40e_vsi_close+0x24/0xa0 [i40e]
i40e_close+0x10/0x20 [i40e]
i40e_quiesce_vsi.part.40+0x30/0x40 [i40e]
i40e_pf_quiesce_all_vsi.isra.41+0x34/0x50 [i40e]
i40e_prep_for_reset+0x67/0x110 [i40e]
i40e_shutdown+0x39/0x220 [i40e]
pci_device_shutdown+0x2b/0x50
device_shutdown+0x147/0x1f0
kernel_restart_prepare+0x71/0x74
kernel_restart+0xd/0x4e
SyS_reboot.cold.1+0x9/0x34
do_syscall_64+0x7c/0x150
4.19-rt and above do not have this problem due to a refactoring.
Signed-off-by: Joe Korty <Joe.Korty@concurrent-rt.com>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
kernel/irq/manage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 071691963f7b..12702d48aaa3 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -353,7 +353,7 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
if (old_notify) {
#ifdef CONFIG_PREEMPT_RT_BASE
- kthread_cancel_work_sync(¬ify->work);
+ kthread_cancel_work_sync(&old_notify->work);
#else
cancel_work_sync(&old_notify->work);
#endif
--
2.14.1
* [PATCH RT 02/25] x86: preempt: Check preemption level before looking at lazy-preempt
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
2020-02-21 21:24 ` [PATCH RT 01/25] Fix wrong-variable use in irq_set_affinity_notifier zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 03/25] sched/deadline: Ensure inactive_timer runs in hardirq context zanussi
` (22 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 19fc8557f2323c52b26561651ed4d51fc688a740 ]
Before evaluating the lazy-preempt state it must be ensured that the
preempt-count is zero.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
arch/x86/include/asm/preempt.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index f66708779274..afa0e42ccdd1 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -96,6 +96,8 @@ static __always_inline bool __preempt_count_dec_and_test(void)
if (____preempt_count_dec_and_test())
return true;
#ifdef CONFIG_PREEMPT_LAZY
+ if (preempt_count())
+ return false;
if (current_thread_info()->preempt_lazy_count)
return false;
return test_thread_flag(TIF_NEED_RESCHED_LAZY);
--
2.14.1
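A minimal standalone sketch of the check ordering the hunk above establishes
(a simplified model with illustrative names, not the kernel implementation):
a non-zero preempt count must win before any lazy-preempt state is consulted,
otherwise the caller could be told to reschedule while still inside a
preempt_disable() section.

#include <stdbool.h>

static inline bool should_resched_after_dec(int preempt_count,
					    int preempt_lazy_count,
					    bool need_resched_lazy)
{
	if (preempt_count)		/* still inside preempt_disable() */
		return false;
	if (preempt_lazy_count)		/* lazy preemption explicitly held off */
		return false;
	return need_resched_lazy;	/* only now honour TIF_NEED_RESCHED_LAZY */
}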
* [PATCH RT 03/25] sched/deadline: Ensure inactive_timer runs in hardirq context
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
2020-02-21 21:24 ` [PATCH RT 01/25] Fix wrong-variable use in irq_set_affinity_notifier zanussi
2020-02-21 21:24 ` [PATCH RT 02/25] x86: preempt: Check preemption level before looking at lazy-preempt zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-24 8:33 ` Sebastian Andrzej Siewior
2020-02-21 21:24 ` [PATCH RT 04/25] i2c: hix5hd2: Remove IRQF_ONESHOT zanussi
` (21 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Juri Lelli <juri.lelli@redhat.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit ba94e7aed7405c58251b1380e6e7d73aa8284b41 ]
The SCHED_DEADLINE inactive timer needs to run in hardirq context (as
dl_task_timer already does) on PREEMPT_RT.
Change the mode to HRTIMER_MODE_REL_HARD.
[ tglx: Fixed up the start site, so mode debugging works ]
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190731103715.4047-1-juri.lelli@redhat.com
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
kernel/sched/deadline.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index eb68f7fb8a36..7b04e54bea01 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -252,7 +252,7 @@ static void task_non_contending(struct task_struct *p)
dl_se->dl_non_contending = 1;
get_task_struct(p);
- hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL);
+ hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL_HARD);
}
static void task_contending(struct sched_dl_entity *dl_se, int flags)
@@ -1234,7 +1234,7 @@ void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se)
{
struct hrtimer *timer = &dl_se->inactive_timer;
- hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
timer->function = inactive_task_timer;
}
--
2.14.1
* [PATCH RT 04/25] i2c: hix5hd2: Remove IRQF_ONESHOT
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (2 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 03/25] sched/deadline: Ensure inactive_timer runs in hardirq context zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 05/25] i2c: exynos5: " zanussi
` (20 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit e88b481f3f86f11e3243e0808a830e5ca5782a9d ]
The driver sets IRQF_ONESHOT and passes only a primary handler. The IRQ
is masked while the primary handler is invoked, independently of
IRQF_ONESHOT.
With IRQF_ONESHOT the core code will not force-thread the interrupt, and
this is probably not intended. I *assume* that the original author copied
the IRQ registration from another driver which passed a primary and
secondary handler, removed the secondary handler, but kept the
ONESHOT flag.
Remove IRQF_ONESHOT.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
drivers/i2c/busses/i2c-hix5hd2.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/i2c/busses/i2c-hix5hd2.c b/drivers/i2c/busses/i2c-hix5hd2.c
index bb68957d3da5..76c1a207ccc1 100644
--- a/drivers/i2c/busses/i2c-hix5hd2.c
+++ b/drivers/i2c/busses/i2c-hix5hd2.c
@@ -464,8 +464,7 @@ static int hix5hd2_i2c_probe(struct platform_device *pdev)
hix5hd2_i2c_init(priv);
ret = devm_request_irq(&pdev->dev, irq, hix5hd2_i2c_irq,
- IRQF_NO_SUSPEND | IRQF_ONESHOT,
- dev_name(&pdev->dev), priv);
+ IRQF_NO_SUSPEND, dev_name(&pdev->dev), priv);
if (ret != 0) {
dev_err(&pdev->dev, "cannot request HS-I2C IRQ %d\n", irq);
goto err_clk;
--
2.14.1
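For contrast, a sketch of the primary-plus-threaded registration that
IRQF_ONESHOT is meant for, using a hypothetical driver "foo" (illustrative
only; the driver above registers a primary handler alone, so on RT the flag
merely suppressed force-threading):

#include <linux/interrupt.h>
#include <linux/device.h>

/* Hard IRQ handler: only acknowledge and defer the real work. */
static irqreturn_t foo_hardirq(int irq, void *dev_id)
{
	return IRQ_WAKE_THREAD;
}

/* Threaded handler: may sleep; the line stays masked until it returns. */
static irqreturn_t foo_thread_fn(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int foo_setup_irq(struct device *dev, int irq, void *priv)
{
	/* Primary + threaded handler: IRQF_ONESHOT is appropriate here. */
	return devm_request_threaded_irq(dev, irq, foo_hardirq, foo_thread_fn,
					 IRQF_ONESHOT, dev_name(dev), priv);
}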
* [PATCH RT 05/25] i2c: exynos5: Remove IRQF_ONESHOT
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (3 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 04/25] i2c: hix5hd2: Remove IRQF_ONESHOT zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 06/25] sched: migrate_dis/enable: Use sleeping_lock…() to annotate sleeping points zanussi
` (19 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 4b217df0ab3f7910c96e42091cc7d9f221d05f01 ]
The driver sets IRQF_ONESHOT and passes only a primary handler. The IRQ
is masked while the primary handler is invoked, independently of
IRQF_ONESHOT.
With IRQF_ONESHOT the core code will not force-thread the interrupt, and
this is probably not intended. I *assume* that the original author copied
the IRQ registration from another driver which passed a primary and
secondary handler, removed the secondary handler, but kept the
ONESHOT flag.
Remove IRQF_ONESHOT.
Reported-by: Benjamin Rouxel <benjamin.rouxel@uva.nl>
Tested-by: Benjamin Rouxel <benjamin.rouxel@uva.nl>
Cc: Kukjin Kim <kgene@kernel.org>
Cc: Krzysztof Kozlowski <krzk@kernel.org>
Cc: linux-samsung-soc@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
drivers/i2c/busses/i2c-exynos5.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/i2c/busses/i2c-exynos5.c b/drivers/i2c/busses/i2c-exynos5.c
index 3855e0b11877..ec490eaac6f7 100644
--- a/drivers/i2c/busses/i2c-exynos5.c
+++ b/drivers/i2c/busses/i2c-exynos5.c
@@ -758,9 +758,7 @@ static int exynos5_i2c_probe(struct platform_device *pdev)
}
ret = devm_request_irq(&pdev->dev, i2c->irq, exynos5_i2c_irq,
- IRQF_NO_SUSPEND | IRQF_ONESHOT,
- dev_name(&pdev->dev), i2c);
-
+ IRQF_NO_SUSPEND, dev_name(&pdev->dev), i2c);
if (ret != 0) {
dev_err(&pdev->dev, "cannot request HS-I2C IRQ %d\n", i2c->irq);
goto err_clk;
--
2.14.1
* [PATCH RT 06/25] sched: migrate_dis/enable: Use sleeping_lock…() to annotate sleeping points
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (4 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 05/25] i2c: exynos5: " zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 07/25] sched: __set_cpus_allowed_ptr: Check cpus_mask, not cpus_ptr zanussi
` (18 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Scott Wood <swood@redhat.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 4230dd3824c3e1785504e6f757ce79a4b55651fa ]
Without this, rcu_note_context_switch() will complain if an RCU read lock
is held when migrate_enable() calls stop_one_cpu(). Likewise when
migrate_disable() calls pin_current_cpu(), which calls __read_rt_lock() and
thereby bypasses the part of the mutex code that calls sleeping_lock_inc().
Signed-off-by: Scott Wood <swood@redhat.com>
[bigeasy: use sleeping_lock_…() ]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
Conflicts:
kernel/sched/core.c
---
kernel/cpu.c | 2 ++
kernel/sched/core.c | 3 +++
2 files changed, 5 insertions(+)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 05b93cfa6fd9..9be794896d87 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -314,7 +314,9 @@ void pin_current_cpu(void)
preempt_lazy_enable();
preempt_enable();
+ sleeping_lock_inc();
__read_rt_lock(cpuhp_pin);
+ sleeping_lock_dec();
preempt_disable();
preempt_lazy_disable();
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fde47216af94..fcff75934bdc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7045,7 +7045,10 @@ void migrate_enable(void)
unpin_current_cpu();
preempt_lazy_enable();
preempt_enable();
+
+ sleeping_lock_inc();
stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
+ sleeping_lock_dec();
tlb_migrate_finish(p->mm);
return;
--
2.14.1
* [PATCH RT 07/25] sched: __set_cpus_allowed_ptr: Check cpus_mask, not cpus_ptr
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (5 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 06/25] sched: migrate_dis/enable: Use sleeping_lock…() to annotate sleeping points zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 08/25] sched: Remove dead __migrate_disabled() check zanussi
` (17 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Scott Wood <swood@redhat.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit e5606fb7b042db634ed62b4dd733d62e050e468f ]
This function is concerned with the long-term cpu mask, not the
transitory mask the task might have while migrate disabled. Before
this patch, if a task was migrate disabled at the time
__set_cpus_allowed_ptr() was called, and the new mask happened to be
equal to the cpu that the task was running on, then the mask update
would be lost.
Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fcff75934bdc..8d6badac9225 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1192,7 +1192,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
goto out;
}
- if (cpumask_equal(p->cpus_ptr, new_mask))
+ if (cpumask_equal(&p->cpus_mask, new_mask))
goto out;
dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
--
2.14.1
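A small userspace model of the lost-update scenario described above (field
names mirror the RT-patched task_struct, but everything here is simplified
and illustrative, not kernel code):

#include <stdbool.h>
#include <stdio.h>

struct task_model {
	unsigned long cpus_mask;	/* long-term affinity bitmask */
	unsigned long *cpus_ptr;	/* effective mask; points at a one-CPU
					 * mask while migrate-disabled */
};

static bool set_cpus_allowed(struct task_model *p, unsigned long new_mask,
			     bool check_long_term)
{
	unsigned long cur = check_long_term ? p->cpus_mask : *p->cpus_ptr;

	if (cur == new_mask)		/* "nothing to change" fast path */
		return false;		/* update silently skipped */
	p->cpus_mask = new_mask;
	return true;
}

int main(void)
{
	unsigned long pinned = 1UL << 2;	/* migrate-disabled on CPU 2 */
	struct task_model p = { .cpus_mask = 0xf, .cpus_ptr = &pinned };

	/* Requested mask == current CPU only: checking cpus_ptr wrongly
	 * matches and the long-term mask is never updated. */
	printf("check cpus_ptr:  updated=%d cpus_mask=%#lx\n",
	       set_cpus_allowed(&p, 1UL << 2, false), p.cpus_mask);
	printf("check cpus_mask: updated=%d cpus_mask=%#lx\n",
	       set_cpus_allowed(&p, 1UL << 2, true), p.cpus_mask);
	return 0;
}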
* [PATCH RT 08/25] sched: Remove dead __migrate_disabled() check
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (6 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 07/25] sched: __set_cpus_allowed_ptr: Check cpus_mask, not cpus_ptr zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 09/25] sched: migrate disable: Protect cpus_ptr with lock zanussi
` (16 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Scott Wood <swood@redhat.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 14d9272d534ea91262e15db99443fc5995c7c016 ]
This code was unreachable given the __migrate_disabled() branch
to "out" immediately beforehand.
Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
Conflicts:
kernel/sched/core.c
---
kernel/sched/core.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8d6badac9225..4708129e8df1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1217,13 +1217,6 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
goto out;
-#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
- if (__migrate_disabled(p)) {
- p->migrate_disable_update = 1;
- goto out;
- }
-#endif
-
if (task_running(rq, p) || p->state == TASK_WAKING) {
struct migration_arg arg = { p, dest_cpu };
/* Need help from migration thread: drop lock and wait. */
--
2.14.1
* [PATCH RT 09/25] sched: migrate disable: Protect cpus_ptr with lock
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (7 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 08/25] sched: Remove dead __migrate_disabled() check zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 10/25] lib/smp_processor_id: Don't use cpumask_equal() zanussi
` (15 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Scott Wood <swood@redhat.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 27ee52a891ed2c7e2e2c8332ccae0de7c2674b09 ]
Various places assume that cpus_ptr is protected by rq/pi locks,
so don't change it before grabbing those locks.
Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
kernel/sched/core.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4708129e8df1..189e6f08575e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6923,9 +6923,8 @@ migrate_disable_update_cpus_allowed(struct task_struct *p)
struct rq *rq;
struct rq_flags rf;
- p->cpus_ptr = cpumask_of(smp_processor_id());
-
rq = task_rq_lock(p, &rf);
+ p->cpus_ptr = cpumask_of(smp_processor_id());
update_nr_migratory(p, -1);
p->nr_cpus_allowed = 1;
task_rq_unlock(rq, p, &rf);
@@ -6937,9 +6936,8 @@ migrate_enable_update_cpus_allowed(struct task_struct *p)
struct rq *rq;
struct rq_flags rf;
- p->cpus_ptr = &p->cpus_mask;
-
rq = task_rq_lock(p, &rf);
+ p->cpus_ptr = &p->cpus_mask;
p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
update_nr_migratory(p, 1);
task_rq_unlock(rq, p, &rf);
--
2.14.1
* [PATCH RT 10/25] lib/smp_processor_id: Don't use cpumask_equal()
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (8 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 09/25] sched: migrate disable: Protect cpus_ptr with lock zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 11/25] futex: Make the futex_hash_bucket spinlock_t again and bring back its old state zanussi
` (14 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Waiman Long <longman@redhat.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 659252061477862f45b79e1de169e6030f5c8918 ]
The check_preemption_disabled() function uses cpumask_equal() to see
if the task is bound to the current CPU only. cpumask_equal() calls
memcmp() to do the comparison. As x86 doesn't have __HAVE_ARCH_MEMCMP,
the slow memcmp() function in lib/string.c is used.
On an RT kernel that calls check_preemption_disabled() very frequently,
below is the perf-record output of a certain microbenchmark:
42.75% 2.45% testpmd [kernel.kallsyms] [k] check_preemption_disabled
40.01% 39.97% testpmd [kernel.kallsyms] [k] memcmp
We should avoid calling memcmp() in a performance-critical path, so the
cpumask_equal() call is now replaced with an equivalent, simpler check.
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
lib/smp_processor_id.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c
index 6f4a4ae881c8..9f3c8bb62e57 100644
--- a/lib/smp_processor_id.c
+++ b/lib/smp_processor_id.c
@@ -23,7 +23,7 @@ notrace static unsigned int check_preemption_disabled(const char *what1,
* Kernel threads bound to a single CPU can safely use
* smp_processor_id():
*/
- if (cpumask_equal(current->cpus_ptr, cpumask_of(this_cpu)))
+ if (current->nr_cpus_allowed == 1)
goto out;
/*
--
2.14.1
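A userspace sketch of the rationale (mask size and field names are
assumptions for illustration, not the kernel's): a byte-wise memcmp() over
the whole CPU mask does far more work than testing the cached
nr_cpus_allowed counter, which already answers the only question asked
here -- is the task bound to exactly one CPU?

#include <string.h>
#include <stdio.h>

#define MASK_BYTES (512 / 8)	/* e.g. a 512-CPU mask -> 64 bytes */

struct task_model {
	unsigned char cpus_mask[MASK_BYTES];
	int nr_cpus_allowed;	/* maintained whenever affinity changes */
};

/* old check: compare the full mask against cpumask_of(this_cpu) */
static int bound_to_this_cpu_slow(const struct task_model *t,
				  const unsigned char *this_cpu_mask)
{
	return memcmp(t->cpus_mask, this_cpu_mask, MASK_BYTES) == 0;
}

/* new check: a single integer comparison */
static int bound_to_one_cpu_fast(const struct task_model *t)
{
	return t->nr_cpus_allowed == 1;
}

int main(void)
{
	struct task_model t = { .nr_cpus_allowed = 1 };
	unsigned char this_cpu[MASK_BYTES] = { 0 };

	t.cpus_mask[0] = this_cpu[0] = 0x04;	/* only CPU 2 set */
	printf("slow=%d fast=%d\n",
	       bound_to_this_cpu_slow(&t, this_cpu),
	       bound_to_one_cpu_fast(&t));
	return 0;
}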
* [PATCH RT 11/25] futex: Make the futex_hash_bucket spinlock_t again and bring back its old state
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (9 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 10/25] lib/smp_processor_id: Don't use cpumask_equal() zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 12/25] locking/rtmutex: Clean ->pi_blocked_on in the error case zanussi
` (13 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 954ad80c23edfe71f4e8ce70b961eac884320c3a ]
This is an all-in-one patch that reverts the patches:
futex: Make the futex_hash_bucket lock raw
futex: Delay deallocation of pi_state
and adds back the old patches we had:
futex: workaround migrate_disable/enable in different context
rtmutex: Handle the various new futex race conditions
futex: Fix bug on when a requeued RT task times out
futex: Ensure lock/unlock symetry versus pi_lock and hash bucket lock
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
Conflicts:
kernel/futex.c
---
kernel/futex.c | 231 +++++++++++++++++++++++-----------------
kernel/locking/rtmutex.c | 65 +++++++++--
kernel/locking/rtmutex_common.h | 3 +
3 files changed, 194 insertions(+), 105 deletions(-)
diff --git a/kernel/futex.c b/kernel/futex.c
index bcef01354d5c..581d40ee22a8 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -243,7 +243,7 @@ struct futex_q {
struct plist_node list;
struct task_struct *task;
- raw_spinlock_t *lock_ptr;
+ spinlock_t *lock_ptr;
union futex_key key;
struct futex_pi_state *pi_state;
struct rt_mutex_waiter *rt_waiter;
@@ -264,7 +264,7 @@ static const struct futex_q futex_q_init = {
*/
struct futex_hash_bucket {
atomic_t waiters;
- raw_spinlock_t lock;
+ spinlock_t lock;
struct plist_head chain;
} ____cacheline_aligned_in_smp;
@@ -831,13 +831,13 @@ static void get_pi_state(struct futex_pi_state *pi_state)
* Drops a reference to the pi_state object and frees or caches it
* when the last reference is gone.
*/
-static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state)
+static void put_pi_state(struct futex_pi_state *pi_state)
{
if (!pi_state)
- return NULL;
+ return;
if (!atomic_dec_and_test(&pi_state->refcount))
- return NULL;
+ return;
/*
* If pi_state->owner is NULL, the owner is most probably dying
@@ -857,7 +857,9 @@ static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state)
raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
}
- if (!current->pi_state_cache) {
+ if (current->pi_state_cache) {
+ kfree(pi_state);
+ } else {
/*
* pi_state->list is already empty.
* clear pi_state->owner.
@@ -866,30 +868,6 @@ static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state)
pi_state->owner = NULL;
atomic_set(&pi_state->refcount, 1);
current->pi_state_cache = pi_state;
- pi_state = NULL;
- }
- return pi_state;
-}
-
-static void put_pi_state(struct futex_pi_state *pi_state)
-{
- kfree(__put_pi_state(pi_state));
-}
-
-static void put_pi_state_atomic(struct futex_pi_state *pi_state,
- struct list_head *to_free)
-{
- if (__put_pi_state(pi_state))
- list_add(&pi_state->list, to_free);
-}
-
-static void free_pi_state_list(struct list_head *to_free)
-{
- struct futex_pi_state *p, *next;
-
- list_for_each_entry_safe(p, next, to_free, list) {
- list_del(&p->list);
- kfree(p);
}
}
@@ -924,7 +902,6 @@ static void exit_pi_state_list(struct task_struct *curr)
struct futex_pi_state *pi_state;
struct futex_hash_bucket *hb;
union futex_key key = FUTEX_KEY_INIT;
- LIST_HEAD(to_free);
if (!futex_cmpxchg_enabled)
return;
@@ -958,7 +935,7 @@ static void exit_pi_state_list(struct task_struct *curr)
}
raw_spin_unlock_irq(&curr->pi_lock);
- raw_spin_lock(&hb->lock);
+ spin_lock(&hb->lock);
raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
raw_spin_lock(&curr->pi_lock);
/*
@@ -968,8 +945,10 @@ static void exit_pi_state_list(struct task_struct *curr)
if (head->next != next) {
/* retain curr->pi_lock for the loop invariant */
raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
- raw_spin_unlock(&hb->lock);
- put_pi_state_atomic(pi_state, &to_free);
+ raw_spin_unlock_irq(&curr->pi_lock);
+ spin_unlock(&hb->lock);
+ raw_spin_lock_irq(&curr->pi_lock);
+ put_pi_state(pi_state);
continue;
}
@@ -980,7 +959,7 @@ static void exit_pi_state_list(struct task_struct *curr)
raw_spin_unlock(&curr->pi_lock);
raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
- raw_spin_unlock(&hb->lock);
+ spin_unlock(&hb->lock);
rt_mutex_futex_unlock(&pi_state->pi_mutex);
put_pi_state(pi_state);
@@ -988,8 +967,6 @@ static void exit_pi_state_list(struct task_struct *curr)
raw_spin_lock_irq(&curr->pi_lock);
}
raw_spin_unlock_irq(&curr->pi_lock);
-
- free_pi_state_list(&to_free);
}
#else
static inline void exit_pi_state_list(struct task_struct *curr) { }
@@ -1530,7 +1507,7 @@ static void __unqueue_futex(struct futex_q *q)
{
struct futex_hash_bucket *hb;
- if (WARN_ON_SMP(!q->lock_ptr || !raw_spin_is_locked(q->lock_ptr))
+ if (WARN_ON_SMP(!q->lock_ptr || !spin_is_locked(q->lock_ptr))
|| WARN_ON(plist_node_empty(&q->list)))
return;
@@ -1658,21 +1635,21 @@ static inline void
double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
{
if (hb1 <= hb2) {
- raw_spin_lock(&hb1->lock);
+ spin_lock(&hb1->lock);
if (hb1 < hb2)
- raw_spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
+ spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
} else { /* hb1 > hb2 */
- raw_spin_lock(&hb2->lock);
- raw_spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
+ spin_lock(&hb2->lock);
+ spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
}
}
static inline void
double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
{
- raw_spin_unlock(&hb1->lock);
+ spin_unlock(&hb1->lock);
if (hb1 != hb2)
- raw_spin_unlock(&hb2->lock);
+ spin_unlock(&hb2->lock);
}
/*
@@ -1700,7 +1677,7 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
if (!hb_waiters_pending(hb))
goto out_put_key;
- raw_spin_lock(&hb->lock);
+ spin_lock(&hb->lock);
plist_for_each_entry_safe(this, next, &hb->chain, list) {
if (match_futex (&this->key, &key)) {
@@ -1719,7 +1696,7 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
}
}
- raw_spin_unlock(&hb->lock);
+ spin_unlock(&hb->lock);
wake_up_q(&wake_q);
out_put_key:
put_futex_key(&key);
@@ -2032,7 +2009,6 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
struct futex_hash_bucket *hb1, *hb2;
struct futex_q *this, *next;
DEFINE_WAKE_Q(wake_q);
- LIST_HEAD(to_free);
if (nr_wake < 0 || nr_requeue < 0)
return -EINVAL;
@@ -2271,6 +2247,16 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
requeue_pi_wake_futex(this, &key2, hb2);
drop_count++;
continue;
+ } else if (ret == -EAGAIN) {
+ /*
+ * Waiter was woken by timeout or
+ * signal and has set pi_blocked_on to
+ * PI_WAKEUP_INPROGRESS before we
+ * tried to enqueue it on the rtmutex.
+ */
+ this->pi_state = NULL;
+ put_pi_state(pi_state);
+ continue;
} else if (ret) {
/*
* rt_mutex_start_proxy_lock() detected a
@@ -2281,7 +2267,7 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
* object.
*/
this->pi_state = NULL;
- put_pi_state_atomic(pi_state, &to_free);
+ put_pi_state(pi_state);
/*
* We stop queueing more waiters and let user
* space deal with the mess.
@@ -2298,7 +2284,7 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
* in futex_proxy_trylock_atomic() or in lookup_pi_state(). We
* need to drop it here again.
*/
- put_pi_state_atomic(pi_state, &to_free);
+ put_pi_state(pi_state);
out_unlock:
double_unlock_hb(hb1, hb2);
@@ -2319,7 +2305,6 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
out_put_key1:
put_futex_key(&key1);
out:
- free_pi_state_list(&to_free);
return ret ? ret : task_count;
}
@@ -2343,8 +2328,7 @@ static inline struct futex_hash_bucket *queue_lock(struct futex_q *q)
q->lock_ptr = &hb->lock;
- raw_spin_lock(&hb->lock);
-
+ spin_lock(&hb->lock);
return hb;
}
@@ -2352,7 +2336,7 @@ static inline void
queue_unlock(struct futex_hash_bucket *hb)
__releases(&hb->lock)
{
- raw_spin_unlock(&hb->lock);
+ spin_unlock(&hb->lock);
hb_waiters_dec(hb);
}
@@ -2391,7 +2375,7 @@ static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
__releases(&hb->lock)
{
__queue_me(q, hb);
- raw_spin_unlock(&hb->lock);
+ spin_unlock(&hb->lock);
}
/**
@@ -2407,41 +2391,41 @@ static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
*/
static int unqueue_me(struct futex_q *q)
{
- raw_spinlock_t *lock_ptr;
+ spinlock_t *lock_ptr;
int ret = 0;
/* In the common case we don't take the spinlock, which is nice. */
retry:
/*
- * q->lock_ptr can change between this read and the following
- * raw_spin_lock. Use READ_ONCE to forbid the compiler from reloading
- * q->lock_ptr and optimizing lock_ptr out of the logic below.
+ * q->lock_ptr can change between this read and the following spin_lock.
+ * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
+ * optimizing lock_ptr out of the logic below.
*/
lock_ptr = READ_ONCE(q->lock_ptr);
if (lock_ptr != NULL) {
- raw_spin_lock(lock_ptr);
+ spin_lock(lock_ptr);
/*
* q->lock_ptr can change between reading it and
- * raw_spin_lock(), causing us to take the wrong lock. This
+ * spin_lock(), causing us to take the wrong lock. This
* corrects the race condition.
*
* Reasoning goes like this: if we have the wrong lock,
* q->lock_ptr must have changed (maybe several times)
- * between reading it and the raw_spin_lock(). It can
- * change again after the raw_spin_lock() but only if it was
- * already changed before the raw_spin_lock(). It cannot,
+ * between reading it and the spin_lock(). It can
+ * change again after the spin_lock() but only if it was
+ * already changed before the spin_lock(). It cannot,
* however, change back to the original value. Therefore
* we can detect whether we acquired the correct lock.
*/
if (unlikely(lock_ptr != q->lock_ptr)) {
- raw_spin_unlock(lock_ptr);
+ spin_unlock(lock_ptr);
goto retry;
}
__unqueue_futex(q);
BUG_ON(q->pi_state);
- raw_spin_unlock(lock_ptr);
+ spin_unlock(lock_ptr);
ret = 1;
}
@@ -2457,16 +2441,13 @@ static int unqueue_me(struct futex_q *q)
static void unqueue_me_pi(struct futex_q *q)
__releases(q->lock_ptr)
{
- struct futex_pi_state *ps;
-
__unqueue_futex(q);
BUG_ON(!q->pi_state);
- ps = __put_pi_state(q->pi_state);
+ put_pi_state(q->pi_state);
q->pi_state = NULL;
- raw_spin_unlock(q->lock_ptr);
- kfree(ps);
+ spin_unlock(q->lock_ptr);
}
static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
@@ -2599,7 +2580,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
*/
handle_err:
raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
- raw_spin_unlock(q->lock_ptr);
+ spin_unlock(q->lock_ptr);
switch (err) {
case -EFAULT:
@@ -2617,7 +2598,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
break;
}
- raw_spin_lock(q->lock_ptr);
+ spin_lock(q->lock_ptr);
raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
/*
@@ -2713,7 +2694,7 @@ static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
/*
* The task state is guaranteed to be set before another task can
* wake it. set_current_state() is implemented using smp_store_mb() and
- * queue_me() calls raw_spin_unlock() upon completion, both serializing
+ * queue_me() calls spin_unlock() upon completion, both serializing
* access to the hash list and forcing another memory barrier.
*/
set_current_state(TASK_INTERRUPTIBLE);
@@ -3013,7 +2994,15 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
* before __rt_mutex_start_proxy_lock() is done.
*/
raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
- raw_spin_unlock(q.lock_ptr);
+ /*
+ * the migrate_disable() here disables migration in the in_atomic() fast
+ * path which is enabled again in the following spin_unlock(). We have
+ * one migrate_disable() pending in the slow-path which is reversed
+ * after the raw_spin_unlock_irq() where we leave the atomic context.
+ */
+ migrate_disable();
+
+ spin_unlock(q.lock_ptr);
/*
* __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
* such that futex_unlock_pi() is guaranteed to observe the waiter when
@@ -3021,6 +3010,7 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
*/
ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current);
raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock);
+ migrate_enable();
if (ret) {
if (ret == 1)
@@ -3034,7 +3024,7 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags,
ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);
cleanup:
- raw_spin_lock(q.lock_ptr);
+ spin_lock(q.lock_ptr);
/*
* If we failed to acquire the lock (deadlock/signal/timeout), we must
* first acquire the hb->lock before removing the lock from the
@@ -3135,7 +3125,7 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
return ret;
hb = hash_futex(&key);
- raw_spin_lock(&hb->lock);
+ spin_lock(&hb->lock);
/*
* Check waiters first. We do not trust user space values at
@@ -3169,10 +3159,19 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
* rt_waiter. Also see the WARN in wake_futex_pi().
*/
raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
- raw_spin_unlock(&hb->lock);
+ /*
+ * Magic trickery for now to make the RT migrate disable
+ * logic happy. The following spin_unlock() happens with
+ * interrupts disabled so the internal migrate_enable()
+ * won't undo the migrate_disable() which was issued when
+ * locking hb->lock.
+ */
+ migrate_disable();
+ spin_unlock(&hb->lock);
/* drops pi_state->pi_mutex.wait_lock */
ret = wake_futex_pi(uaddr, uval, pi_state);
+ migrate_enable();
put_pi_state(pi_state);
@@ -3208,7 +3207,7 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
* owner.
*/
if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) {
- raw_spin_unlock(&hb->lock);
+ spin_unlock(&hb->lock);
switch (ret) {
case -EFAULT:
goto pi_faulted;
@@ -3228,7 +3227,7 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
ret = (curval == uval) ? 0 : -EAGAIN;
out_unlock:
- raw_spin_unlock(&hb->lock);
+ spin_unlock(&hb->lock);
out_putkey:
put_futex_key(&key);
return ret;
@@ -3344,7 +3343,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
struct hrtimer_sleeper timeout, *to = NULL;
struct futex_pi_state *pi_state = NULL;
struct rt_mutex_waiter rt_waiter;
- struct futex_hash_bucket *hb;
+ struct futex_hash_bucket *hb, *hb2;
union futex_key key2 = FUTEX_KEY_INIT;
struct futex_q q = futex_q_init;
int res, ret;
@@ -3402,20 +3401,55 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
/* Queue the futex_q, drop the hb lock, wait for wakeup. */
futex_wait_queue_me(hb, &q, to);
- raw_spin_lock(&hb->lock);
- ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
- raw_spin_unlock(&hb->lock);
- if (ret)
- goto out_put_keys;
+ /*
+ * On RT we must avoid races with requeue and trying to block
+ * on two mutexes (hb->lock and uaddr2's rtmutex) by
+ * serializing access to pi_blocked_on with pi_lock.
+ */
+ raw_spin_lock_irq(¤t->pi_lock);
+ if (current->pi_blocked_on) {
+ /*
+ * We have been requeued or are in the process of
+ * being requeued.
+ */
+ raw_spin_unlock_irq(¤t->pi_lock);
+ } else {
+ /*
+ * Setting pi_blocked_on to PI_WAKEUP_INPROGRESS
+ * prevents a concurrent requeue from moving us to the
+ * uaddr2 rtmutex. After that we can safely acquire
+ * (and possibly block on) hb->lock.
+ */
+ current->pi_blocked_on = PI_WAKEUP_INPROGRESS;
+ raw_spin_unlock_irq(¤t->pi_lock);
+
+ spin_lock(&hb->lock);
+
+ /*
+ * Clean up pi_blocked_on. We might leak it otherwise
+ * when we succeeded with the hb->lock in the fast
+ * path.
+ */
+ raw_spin_lock_irq(¤t->pi_lock);
+ current->pi_blocked_on = NULL;
+ raw_spin_unlock_irq(¤t->pi_lock);
+
+ ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
+ spin_unlock(&hb->lock);
+ if (ret)
+ goto out_put_keys;
+ }
/*
- * In order for us to be here, we know our q.key == key2, and since
- * we took the hb->lock above, we also know that futex_requeue() has
- * completed and we no longer have to concern ourselves with a wakeup
- * race with the atomic proxy lock acquisition by the requeue code. The
- * futex_requeue dropped our key1 reference and incremented our key2
- * reference count.
+ * In order to be here, we have either been requeued, are in
+ * the process of being requeued, or requeue successfully
+ * acquired uaddr2 on our behalf. If pi_blocked_on was
+ * non-null above, we may be racing with a requeue. Do not
+ * rely on q->lock_ptr to be hb2->lock until after blocking on
+ * hb->lock or hb2->lock. The futex_requeue dropped our key1
+ * reference and incremented our key2 reference count.
*/
+ hb2 = hash_futex(&key2);
/* Check if the requeue code acquired the second futex for us. */
if (!q.rt_waiter) {
@@ -3424,9 +3458,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
* did a lock-steal - fix up the PI-state in that case.
*/
if (q.pi_state && (q.pi_state->owner != current)) {
- struct futex_pi_state *ps_free;
-
- raw_spin_lock(q.lock_ptr);
+ spin_lock(&hb2->lock);
+ BUG_ON(&hb2->lock != q.lock_ptr);
ret = fixup_pi_state_owner(uaddr2, &q, current);
if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
pi_state = q.pi_state;
@@ -3436,9 +3469,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
* Drop the reference to the pi state which
* the requeue_pi() code acquired for us.
*/
- ps_free = __put_pi_state(q.pi_state);
- raw_spin_unlock(q.lock_ptr);
- kfree(ps_free);
+ put_pi_state(q.pi_state);
+ spin_unlock(&hb2->lock);
}
} else {
struct rt_mutex *pi_mutex;
@@ -3452,7 +3484,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
pi_mutex = &q.pi_state->pi_mutex;
ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter);
- raw_spin_lock(q.lock_ptr);
+ spin_lock(&hb2->lock);
+ BUG_ON(&hb2->lock != q.lock_ptr);
if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
ret = 0;
@@ -4225,7 +4258,7 @@ static int __init futex_init(void)
for (i = 0; i < futex_hashsize; i++) {
atomic_set(&futex_queues[i].waiters, 0);
plist_head_init(&futex_queues[i].chain);
- raw_spin_lock_init(&futex_queues[i].lock);
+ spin_lock_init(&futex_queues[i].lock);
}
return 0;
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index e1497623780b..1177f2815040 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -142,6 +142,12 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock)
WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS);
}
+static int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter)
+{
+ return waiter && waiter != PI_WAKEUP_INPROGRESS &&
+ waiter != PI_REQUEUE_INPROGRESS;
+}
+
/*
* We can speed up the acquire/release, if there's no debugging state to be
* set up.
@@ -415,7 +421,8 @@ int max_lock_depth = 1024;
static inline struct rt_mutex *task_blocked_on_lock(struct task_struct *p)
{
- return p->pi_blocked_on ? p->pi_blocked_on->lock : NULL;
+ return rt_mutex_real_waiter(p->pi_blocked_on) ?
+ p->pi_blocked_on->lock : NULL;
}
/*
@@ -551,7 +558,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
* reached or the state of the chain has changed while we
* dropped the locks.
*/
- if (!waiter)
+ if (!rt_mutex_real_waiter(waiter))
goto out_unlock_pi;
/*
@@ -1334,6 +1341,22 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
return -EDEADLK;
raw_spin_lock(&task->pi_lock);
+ /*
+ * In the case of futex requeue PI, this will be a proxy
+ * lock. The task will wake unaware that it is enqueueed on
+ * this lock. Avoid blocking on two locks and corrupting
+ * pi_blocked_on via the PI_WAKEUP_INPROGRESS
+ * flag. futex_wait_requeue_pi() sets this when it wakes up
+ * before requeue (due to a signal or timeout). Do not enqueue
+ * the task if PI_WAKEUP_INPROGRESS is set.
+ */
+ if (task != current && task->pi_blocked_on == PI_WAKEUP_INPROGRESS) {
+ raw_spin_unlock(&task->pi_lock);
+ return -EAGAIN;
+ }
+
+ BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on));
+
waiter->task = task;
waiter->lock = lock;
waiter->prio = task->prio;
@@ -1357,7 +1380,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
rt_mutex_enqueue_pi(owner, waiter);
rt_mutex_adjust_prio(owner);
- if (owner->pi_blocked_on)
+ if (rt_mutex_real_waiter(owner->pi_blocked_on))
chain_walk = 1;
} else if (rt_mutex_cond_detect_deadlock(waiter, chwalk)) {
chain_walk = 1;
@@ -1457,7 +1480,7 @@ static void remove_waiter(struct rt_mutex *lock,
{
bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock));
struct task_struct *owner = rt_mutex_owner(lock);
- struct rt_mutex *next_lock;
+ struct rt_mutex *next_lock = NULL;
lockdep_assert_held(&lock->wait_lock);
@@ -1483,7 +1506,8 @@ static void remove_waiter(struct rt_mutex *lock,
rt_mutex_adjust_prio(owner);
/* Store the lock on which owner is blocked or NULL */
- next_lock = task_blocked_on_lock(owner);
+ if (rt_mutex_real_waiter(owner->pi_blocked_on))
+ next_lock = task_blocked_on_lock(owner);
raw_spin_unlock(&owner->pi_lock);
@@ -1519,7 +1543,8 @@ void rt_mutex_adjust_pi(struct task_struct *task)
raw_spin_lock_irqsave(&task->pi_lock, flags);
waiter = task->pi_blocked_on;
- if (!waiter || rt_mutex_waiter_equal(waiter, task_to_waiter(task))) {
+ if (!rt_mutex_real_waiter(waiter) ||
+ rt_mutex_waiter_equal(waiter, task_to_waiter(task))) {
raw_spin_unlock_irqrestore(&task->pi_lock, flags);
return;
}
@@ -2333,6 +2358,34 @@ int __rt_mutex_start_proxy_lock(struct rt_mutex *lock,
if (try_to_take_rt_mutex(lock, task, NULL))
return 1;
+#ifdef CONFIG_PREEMPT_RT_FULL
+ /*
+ * In PREEMPT_RT there's an added race.
+ * If the task, that we are about to requeue, times out,
+ * it can set the PI_WAKEUP_INPROGRESS. This tells the requeue
+ * to skip this task. But right after the task sets
+ * its pi_blocked_on to PI_WAKEUP_INPROGRESS it can then
+ * block on the spin_lock(&hb->lock), which in RT is an rtmutex.
+ * This will replace the PI_WAKEUP_INPROGRESS with the actual
+ * lock that it blocks on. We *must not* place this task
+ * on this proxy lock in that case.
+ *
+ * To prevent this race, we first take the task's pi_lock
+ * and check if it has updated its pi_blocked_on. If it has,
+ * we assume that it woke up and we return -EAGAIN.
+ * Otherwise, we set the task's pi_blocked_on to
+ * PI_REQUEUE_INPROGRESS, so that if the task is waking up
+ * it will know that we are in the process of requeuing it.
+ */
+ raw_spin_lock(&task->pi_lock);
+ if (task->pi_blocked_on) {
+ raw_spin_unlock(&task->pi_lock);
+ return -EAGAIN;
+ }
+ task->pi_blocked_on = PI_REQUEUE_INPROGRESS;
+ raw_spin_unlock(&task->pi_lock);
+#endif
+
/* We enforce deadlock detection for futexes */
ret = task_blocks_on_rt_mutex(lock, waiter, task,
RT_MUTEX_FULL_CHAINWALK);
diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h
index 2f6662d052d6..2a157c78e18c 100644
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -131,6 +131,9 @@ enum rtmutex_chainwalk {
/*
* PI-futex support (proxy locking functions, etc.):
*/
+#define PI_WAKEUP_INPROGRESS ((struct rt_mutex_waiter *) 1)
+#define PI_REQUEUE_INPROGRESS ((struct rt_mutex_waiter *) 2)
+
extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock);
extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
struct task_struct *proxy_owner);
--
2.14.1
* [PATCH RT 12/25] locking/rtmutex: Clean ->pi_blocked_on in the error case
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (10 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 11/25] futex: Make the futex_hash_bucket spinlock_t again and bring back its old state zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 13/25] lib/ubsan: Don't serialize UBSAN report zanussi
` (12 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Peter Zijlstra <peterz@infradead.org>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 0be4ea6e3ce693101be0fbd55a0cc7ce238ab2eb ]
The function rt_mutex_wait_proxy_lock() cleans ->pi_blocked_on in case
of failure (timeout, signal). The same cleanup is required in
__rt_mutex_start_proxy_lock().
In both cases the task was interrupted by a signal or timeout while
acquiring the lock, and after the interruption it no longer blocks on the
lock.
Fixes: 1a1fb985f2e2b ("futex: Handle early deadlock return correctly")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
kernel/locking/rtmutex.c | 43 +++++++++++++++++++++++++------------------
1 file changed, 25 insertions(+), 18 deletions(-)
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 1177f2815040..4bc01a2a9a88 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -2328,6 +2328,26 @@ void rt_mutex_proxy_unlock(struct rt_mutex *lock,
rt_mutex_set_owner(lock, NULL);
}
+static void fixup_rt_mutex_blocked(struct rt_mutex *lock)
+{
+ struct task_struct *tsk = current;
+ /*
+ * RT has a problem here when the wait got interrupted by a timeout
+ * or a signal. task->pi_blocked_on is still set. The task must
+ * acquire the hash bucket lock when returning from this function.
+ *
+ * If the hash bucket lock is contended then the
+ * BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on)) in
+ * task_blocks_on_rt_mutex() will trigger. This can be avoided by
+ * clearing task->pi_blocked_on which removes the task from the
+ * boosting chain of the rtmutex. That's correct because the task
+ * is not longer blocked on it.
+ */
+ raw_spin_lock(&tsk->pi_lock);
+ tsk->pi_blocked_on = NULL;
+ raw_spin_unlock(&tsk->pi_lock);
+}
+
/**
* __rt_mutex_start_proxy_lock() - Start lock acquisition for another task
* @lock: the rt_mutex to take
@@ -2400,6 +2420,9 @@ int __rt_mutex_start_proxy_lock(struct rt_mutex *lock,
ret = 0;
}
+ if (ret)
+ fixup_rt_mutex_blocked(lock);
+
debug_rt_mutex_print_deadlock(waiter);
return ret;
@@ -2480,7 +2503,6 @@ int rt_mutex_wait_proxy_lock(struct rt_mutex *lock,
struct hrtimer_sleeper *to,
struct rt_mutex_waiter *waiter)
{
- struct task_struct *tsk = current;
int ret;
raw_spin_lock_irq(&lock->wait_lock);
@@ -2492,23 +2514,8 @@ int rt_mutex_wait_proxy_lock(struct rt_mutex *lock,
* have to fix that up.
*/
fixup_rt_mutex_waiters(lock);
- /*
- * RT has a problem here when the wait got interrupted by a timeout
- * or a signal. task->pi_blocked_on is still set. The task must
- * acquire the hash bucket lock when returning from this function.
- *
- * If the hash bucket lock is contended then the
- * BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on)) in
- * task_blocks_on_rt_mutex() will trigger. This can be avoided by
- * clearing task->pi_blocked_on which removes the task from the
- * boosting chain of the rtmutex. That's correct because the task
- * is not longer blocked on it.
- */
- if (ret) {
- raw_spin_lock(&tsk->pi_lock);
- tsk->pi_blocked_on = NULL;
- raw_spin_unlock(&tsk->pi_lock);
- }
+ if (ret)
+ fixup_rt_mutex_blocked(lock);
raw_spin_unlock_irq(&lock->wait_lock);
--
2.14.1
* [PATCH RT 13/25] lib/ubsan: Don't serialize UBSAN report
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (11 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 12/25] locking/rtmutex: Clean ->pi_blocked_on in the error case zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 14/25] kmemleak: Change the lock of kmemleak_object to raw_spinlock_t zanussi
` (11 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Julien Grall <julien.grall@arm.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 4702c28ac777b27acb499cbd5e8e787ce1a7d82d ]
At the moment, UBSAN reports are serialized using a spin_lock(). On
RT systems, spinlocks are turned into rt_spin_lock and may sleep. This will
result in the following splat if the undefined behavior is in a context
that can sleep:
| BUG: sleeping function called from invalid context at /src/linux/kernel/locking/rtmutex.c:968
| in_atomic(): 1, irqs_disabled(): 128, pid: 3447, name: make
| 1 lock held by make/3447:
| #0: 000000009a966332 (&mm->mmap_sem){++++}, at: do_page_fault+0x140/0x4f8
| Preemption disabled at:
| [<ffff000011324a4c>] rt_mutex_futex_unlock+0x4c/0xb0
| CPU: 3 PID: 3447 Comm: make Tainted: G W 5.2.14-rt7-01890-ge6e057589653 #911
| Call trace:
| dump_backtrace+0x0/0x148
| show_stack+0x14/0x20
| dump_stack+0xbc/0x104
| ___might_sleep+0x154/0x210
| rt_spin_lock+0x68/0xa0
| ubsan_prologue+0x30/0x68
| handle_overflow+0x64/0xe0
| __ubsan_handle_add_overflow+0x10/0x18
| __lock_acquire+0x1c28/0x2a28
| lock_acquire+0xf0/0x370
| _raw_spin_lock_irqsave+0x58/0x78
| rt_mutex_futex_unlock+0x4c/0xb0
| rt_spin_unlock+0x28/0x70
| get_page_from_freelist+0x428/0x2b60
| __alloc_pages_nodemask+0x174/0x1708
| alloc_pages_vma+0x1ac/0x238
| __handle_mm_fault+0x4ac/0x10b0
| handle_mm_fault+0x1d8/0x3b0
| do_page_fault+0x1c8/0x4f8
| do_translation_fault+0xb8/0xe0
| do_mem_abort+0x3c/0x98
| el0_da+0x20/0x24
The spin_lock() protects against multiple CPUs printing a report at the
same time, presumably to keep the reports from being interleaved. However,
they can still interleave with other messages (and even with the splat
from __might_sleep()), so the lock's usefulness seems pretty limited.
Rather than trying to accommodate RT systems by switching to a
raw_spin_lock(), the lock is now dropped completely.
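As an illustration (a minimal sketch, not part of the patch; the function
name is made up), this is the pattern that trips the check on RT:

/*
 * On PREEMPT_RT_FULL, spin_lock_irqsave() maps to a sleeping rt_spin_lock(),
 * so taking report_lock from a preemption- or interrupt-disabled path
 * triggers the might_sleep() splat quoted above.
 */
static DEFINE_SPINLOCK(report_lock);

static void report_from_atomic_context(void)
{
	unsigned long flags;

	preempt_disable();				/* e.g. deep in a page-fault path */
	spin_lock_irqsave(&report_lock, flags);		/* sleeps on RT -> splat */
	pr_err("UBSAN: undefined behaviour detected\n");
	spin_unlock_irqrestore(&report_lock, flags);
	preempt_enable();
}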
Link: https://lkml.kernel.org/r/20190920100835.14999-1-julien.grall@arm.com
Reported-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
Conflicts:
lib/ubsan.c
---
lib/ubsan.c | 76 ++++++++++++++++++++++---------------------------------------
1 file changed, 27 insertions(+), 49 deletions(-)
diff --git a/lib/ubsan.c b/lib/ubsan.c
index c652b4a820cc..f94cfb3a41ed 100644
--- a/lib/ubsan.c
+++ b/lib/ubsan.c
@@ -147,26 +147,21 @@ static bool location_is_valid(struct source_location *loc)
{
return loc->file_name != NULL;
}
-
-static DEFINE_SPINLOCK(report_lock);
-
-static void ubsan_prologue(struct source_location *location,
- unsigned long *flags)
+static void ubsan_prologue(struct source_location *location)
{
current->in_ubsan++;
- spin_lock_irqsave(&report_lock, *flags);
pr_err("========================================"
"========================================\n");
print_source_location("UBSAN: Undefined behaviour in", location);
}
-static void ubsan_epilogue(unsigned long *flags)
+static void ubsan_epilogue(void)
{
dump_stack();
pr_err("========================================"
"========================================\n");
- spin_unlock_irqrestore(&report_lock, *flags);
+
current->in_ubsan--;
}
@@ -175,14 +170,13 @@ static void handle_overflow(struct overflow_data *data, void *lhs,
{
struct type_descriptor *type = data->type;
- unsigned long flags;
char lhs_val_str[VALUE_LENGTH];
char rhs_val_str[VALUE_LENGTH];
if (suppress_report(&data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(&data->location);
val_to_string(lhs_val_str, sizeof(lhs_val_str), type, lhs);
val_to_string(rhs_val_str, sizeof(rhs_val_str), type, rhs);
@@ -194,7 +188,7 @@ static void handle_overflow(struct overflow_data *data, void *lhs,
rhs_val_str,
type->type_name);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
void __ubsan_handle_add_overflow(struct overflow_data *data,
@@ -222,20 +216,19 @@ EXPORT_SYMBOL(__ubsan_handle_mul_overflow);
void __ubsan_handle_negate_overflow(struct overflow_data *data,
void *old_val)
{
- unsigned long flags;
char old_val_str[VALUE_LENGTH];
if (suppress_report(&data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(&data->location);
val_to_string(old_val_str, sizeof(old_val_str), data->type, old_val);
pr_err("negation of %s cannot be represented in type %s:\n",
old_val_str, data->type->type_name);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
EXPORT_SYMBOL(__ubsan_handle_negate_overflow);
@@ -243,13 +236,12 @@ EXPORT_SYMBOL(__ubsan_handle_negate_overflow);
void __ubsan_handle_divrem_overflow(struct overflow_data *data,
void *lhs, void *rhs)
{
- unsigned long flags;
char rhs_val_str[VALUE_LENGTH];
if (suppress_report(&data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(&data->location);
val_to_string(rhs_val_str, sizeof(rhs_val_str), data->type, rhs);
@@ -259,58 +251,52 @@ void __ubsan_handle_divrem_overflow(struct overflow_data *data,
else
pr_err("division by zero\n");
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
EXPORT_SYMBOL(__ubsan_handle_divrem_overflow);
static void handle_null_ptr_deref(struct type_mismatch_data_common *data)
{
- unsigned long flags;
-
if (suppress_report(data->location))
return;
- ubsan_prologue(data->location, &flags);
+ ubsan_prologue(data->location);
pr_err("%s null pointer of type %s\n",
type_check_kinds[data->type_check_kind],
data->type->type_name);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
static void handle_misaligned_access(struct type_mismatch_data_common *data,
unsigned long ptr)
{
- unsigned long flags;
-
if (suppress_report(data->location))
return;
- ubsan_prologue(data->location, &flags);
+ ubsan_prologue(data->location);
pr_err("%s misaligned address %p for type %s\n",
type_check_kinds[data->type_check_kind],
(void *)ptr, data->type->type_name);
pr_err("which requires %ld byte alignment\n", data->alignment);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
static void handle_object_size_mismatch(struct type_mismatch_data_common *data,
unsigned long ptr)
{
- unsigned long flags;
-
if (suppress_report(data->location))
return;
- ubsan_prologue(data->location, &flags);
+ ubsan_prologue(data->location);
pr_err("%s address %p with insufficient space\n",
type_check_kinds[data->type_check_kind],
(void *) ptr);
pr_err("for an object of type %s\n", data->type->type_name);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
static void ubsan_type_mismatch_common(struct type_mismatch_data_common *data,
@@ -356,12 +342,10 @@ EXPORT_SYMBOL(__ubsan_handle_type_mismatch_v1);
void __ubsan_handle_nonnull_return(struct nonnull_return_data *data)
{
- unsigned long flags;
-
if (suppress_report(&data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(&data->location);
pr_err("null pointer returned from function declared to never return null\n");
@@ -369,49 +353,46 @@ void __ubsan_handle_nonnull_return(struct nonnull_return_data *data)
print_source_location("returns_nonnull attribute specified in",
&data->attr_location);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
EXPORT_SYMBOL(__ubsan_handle_nonnull_return);
void __ubsan_handle_vla_bound_not_positive(struct vla_bound_data *data,
void *bound)
{
- unsigned long flags;
char bound_str[VALUE_LENGTH];
if (suppress_report(&data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(&data->location);
val_to_string(bound_str, sizeof(bound_str), data->type, bound);
pr_err("variable length array bound value %s <= 0\n", bound_str);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
EXPORT_SYMBOL(__ubsan_handle_vla_bound_not_positive);
void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data, void *index)
{
- unsigned long flags;
char index_str[VALUE_LENGTH];
if (suppress_report(&data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(&data->location);
val_to_string(index_str, sizeof(index_str), data->index_type, index);
pr_err("index %s is out of range for type %s\n", index_str,
data->array_type->type_name);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
EXPORT_SYMBOL(__ubsan_handle_out_of_bounds);
void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data,
void *lhs, void *rhs)
{
- unsigned long flags;
struct type_descriptor *rhs_type = data->rhs_type;
struct type_descriptor *lhs_type = data->lhs_type;
char rhs_str[VALUE_LENGTH];
@@ -420,7 +401,7 @@ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data,
if (suppress_report(&data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(&data->location);
val_to_string(rhs_str, sizeof(rhs_str), rhs_type, rhs);
val_to_string(lhs_str, sizeof(lhs_str), lhs_type, lhs);
@@ -443,18 +424,16 @@ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data,
lhs_str, rhs_str,
lhs_type->type_name);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds);
void __ubsan_handle_builtin_unreachable(struct unreachable_data *data)
{
- unsigned long flags;
-
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(&data->location);
pr_err("calling __builtin_unreachable()\n");
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
panic("can't return from __builtin_unreachable()");
}
EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable);
@@ -462,19 +441,18 @@ EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable);
void __ubsan_handle_load_invalid_value(struct invalid_value_data *data,
void *val)
{
- unsigned long flags;
char val_str[VALUE_LENGTH];
if (suppress_report(&data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(&data->location);
val_to_string(val_str, sizeof(val_str), data->type, val);
pr_err("load of value %s is not a valid value for type %s\n",
val_str, data->type->type_name);
- ubsan_epilogue(&flags);
+ ubsan_epilogue();
}
EXPORT_SYMBOL(__ubsan_handle_load_invalid_value);
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 14/25] kmemleak: Change the lock of kmemleak_object to raw_spinlock_t
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (12 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 13/25] lib/ubsan: Don't seralize UBSAN report zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 15/25] sched: migrate_enable: Use select_fallback_rq() zanussi
` (10 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Liu Haitao <haitao.liu@windriver.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 217847f57119b5fdd377bfa3d344613ddb98d9fc ]
The commit ("kmemleak: Turn kmemleak_lock to raw spinlock on RT")
changed the kmemleak_lock to raw spinlock. However the
kmemleak_object->lock is held after the kmemleak_lock is held in
scan_block().
Make the object->lock a raw_spinlock_t.
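As a sketch of the nesting constraint (simplified from scan_block(), not a
literal excerpt):

/*
 * On RT a sleeping spinlock_t must not be acquired while a raw_spinlock_t
 * is held, so once kmemleak_lock is raw, object->lock has to be raw too.
 */
raw_spin_lock_irqsave(&kmemleak_lock, flags);
/* ... look up the object covering the scanned pointer ... */
raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);	/* inner lock */
update_refs(object);
raw_spin_unlock(&object->lock);
raw_spin_unlock_irqrestore(&kmemleak_lock, flags);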
Cc: stable-rt@vger.kernel.org
Link: https://lkml.kernel.org/r/20190927082230.34152-1-yongxin.liu@windriver.com
Signed-off-by: Liu Haitao <haitao.liu@windriver.com>
Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
mm/kmemleak.c | 72 +++++++++++++++++++++++++++++------------------------------
1 file changed, 36 insertions(+), 36 deletions(-)
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index c18e23619f95..17718a11782b 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -148,7 +148,7 @@ struct kmemleak_scan_area {
* (use_count) and freed using the RCU mechanism.
*/
struct kmemleak_object {
- spinlock_t lock;
+ raw_spinlock_t lock;
unsigned int flags; /* object status flags */
struct list_head object_list;
struct list_head gray_list;
@@ -562,7 +562,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
INIT_LIST_HEAD(&object->object_list);
INIT_LIST_HEAD(&object->gray_list);
INIT_HLIST_HEAD(&object->area_list);
- spin_lock_init(&object->lock);
+ raw_spin_lock_init(&object->lock);
atomic_set(&object->use_count, 1);
object->flags = OBJECT_ALLOCATED;
object->pointer = ptr;
@@ -643,9 +643,9 @@ static void __delete_object(struct kmemleak_object *object)
* Locking here also ensures that the corresponding memory block
* cannot be freed when it is being scanned.
*/
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
object->flags &= ~OBJECT_ALLOCATED;
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
put_object(object);
}
@@ -717,9 +717,9 @@ static void paint_it(struct kmemleak_object *object, int color)
{
unsigned long flags;
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
__paint_it(object, color);
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
}
static void paint_ptr(unsigned long ptr, int color)
@@ -779,7 +779,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
goto out;
}
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
if (size == SIZE_MAX) {
size = object->pointer + object->size - ptr;
} else if (ptr + size > object->pointer + object->size) {
@@ -795,7 +795,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
hlist_add_head(&area->node, &object->area_list);
out_unlock:
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
out:
put_object(object);
}
@@ -818,9 +818,9 @@ static void object_set_excess_ref(unsigned long ptr, unsigned long excess_ref)
return;
}
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
object->excess_ref = excess_ref;
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
put_object(object);
}
@@ -840,9 +840,9 @@ static void object_no_scan(unsigned long ptr)
return;
}
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
object->flags |= OBJECT_NO_SCAN;
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
put_object(object);
}
@@ -903,11 +903,11 @@ static void early_alloc(struct early_log *log)
log->min_count, GFP_ATOMIC);
if (!object)
goto out;
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
for (i = 0; i < log->trace_len; i++)
object->trace[i] = log->trace[i];
object->trace_len = log->trace_len;
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
out:
rcu_read_unlock();
}
@@ -1097,9 +1097,9 @@ void __ref kmemleak_update_trace(const void *ptr)
return;
}
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
object->trace_len = __save_stack_trace(object->trace);
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
put_object(object);
}
@@ -1335,7 +1335,7 @@ static void scan_block(void *_start, void *_end,
* previously acquired in scan_object(). These locks are
* enclosed by scan_mutex.
*/
- spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
+ raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
/* only pass surplus references (object already gray) */
if (color_gray(object)) {
excess_ref = object->excess_ref;
@@ -1344,7 +1344,7 @@ static void scan_block(void *_start, void *_end,
excess_ref = 0;
update_refs(object);
}
- spin_unlock(&object->lock);
+ raw_spin_unlock(&object->lock);
if (excess_ref) {
object = lookup_object(excess_ref, 0);
@@ -1353,9 +1353,9 @@ static void scan_block(void *_start, void *_end,
if (object == scanned)
/* circular reference, ignore */
continue;
- spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
+ raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
update_refs(object);
- spin_unlock(&object->lock);
+ raw_spin_unlock(&object->lock);
}
}
raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
@@ -1391,7 +1391,7 @@ static void scan_object(struct kmemleak_object *object)
* Once the object->lock is acquired, the corresponding memory block
* cannot be freed (the same lock is acquired in delete_object).
*/
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
if (object->flags & OBJECT_NO_SCAN)
goto out;
if (!(object->flags & OBJECT_ALLOCATED))
@@ -1410,9 +1410,9 @@ static void scan_object(struct kmemleak_object *object)
if (start >= end)
break;
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
cond_resched();
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
} while (object->flags & OBJECT_ALLOCATED);
} else
hlist_for_each_entry(area, &object->area_list, node)
@@ -1420,7 +1420,7 @@ static void scan_object(struct kmemleak_object *object)
(void *)(area->start + area->size),
object);
out:
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
}
/*
@@ -1473,7 +1473,7 @@ static void kmemleak_scan(void)
/* prepare the kmemleak_object's */
rcu_read_lock();
list_for_each_entry_rcu(object, &object_list, object_list) {
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
#ifdef DEBUG
/*
* With a few exceptions there should be a maximum of
@@ -1490,7 +1490,7 @@ static void kmemleak_scan(void)
if (color_gray(object) && get_object(object))
list_add_tail(&object->gray_list, &gray_list);
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
}
rcu_read_unlock();
@@ -1555,14 +1555,14 @@ static void kmemleak_scan(void)
*/
rcu_read_lock();
list_for_each_entry_rcu(object, &object_list, object_list) {
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
if (color_white(object) && (object->flags & OBJECT_ALLOCATED)
&& update_checksum(object) && get_object(object)) {
/* color it gray temporarily */
object->count = object->min_count;
list_add_tail(&object->gray_list, &gray_list);
}
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
}
rcu_read_unlock();
@@ -1582,13 +1582,13 @@ static void kmemleak_scan(void)
*/
rcu_read_lock();
list_for_each_entry_rcu(object, &object_list, object_list) {
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
if (unreferenced_object(object) &&
!(object->flags & OBJECT_REPORTED)) {
object->flags |= OBJECT_REPORTED;
new_leaks++;
}
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
}
rcu_read_unlock();
@@ -1740,10 +1740,10 @@ static int kmemleak_seq_show(struct seq_file *seq, void *v)
struct kmemleak_object *object = v;
unsigned long flags;
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
if ((object->flags & OBJECT_REPORTED) && unreferenced_object(object))
print_unreferenced(seq, object);
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
return 0;
}
@@ -1773,9 +1773,9 @@ static int dump_str_object_info(const char *str)
return -EINVAL;
}
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
dump_object_info(object);
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
put_object(object);
return 0;
@@ -1794,11 +1794,11 @@ static void kmemleak_clear(void)
rcu_read_lock();
list_for_each_entry_rcu(object, &object_list, object_list) {
- spin_lock_irqsave(&object->lock, flags);
+ raw_spin_lock_irqsave(&object->lock, flags);
if ((object->flags & OBJECT_REPORTED) &&
unreferenced_object(object))
__paint_it(object, KMEMLEAK_GREY);
- spin_unlock_irqrestore(&object->lock, flags);
+ raw_spin_unlock_irqrestore(&object->lock, flags);
}
rcu_read_unlock();
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 15/25] sched: migrate_enable: Use select_fallback_rq()
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (13 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 14/25] kmemleak: Change the lock of kmemleak_object to raw_spinlock_t zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-24 9:43 ` Sebastian Andrzej Siewior
2020-02-21 21:24 ` [PATCH RT 16/25] Revert "ARM: Initialize split page table locks for vector page" zanussi
` (9 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Scott Wood <swood@redhat.com>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit adfa969d4cfcc995a9d866020124e50f1827d2d1 ]
migrate_enable() currently open-codes a variant of select_fallback_rq().
However, it does not have the "No more Mr. Nice Guy" fallback and thus
it will pass an invalid CPU to the migration thread if cpus_mask only
contains a CPU that is !active.
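A sketch of the difference (simplified; not the exact code of either
version):

/* Open-coded pick: returns >= nr_cpu_ids if no CPU in cpus_mask is active,
 * and that bogus value was handed to the migration thread. */
dest_cpu = cpumask_any_and(cpu_active_mask, &p->cpus_mask);

/* select_fallback_rq() falls back to any allowed and, ultimately, any
 * possible CPU ("No more Mr. Nice Guy"), so it always returns a usable CPU. */
dest_cpu = select_fallback_rq(task_cpu(p), p);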
Signed-off-by: Scott Wood <swood@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
kernel/sched/core.c | 25 ++++++++++---------------
1 file changed, 10 insertions(+), 15 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 189e6f08575e..46324d2099e3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7008,6 +7008,7 @@ void migrate_enable(void)
if (p->migrate_disable_update) {
struct rq *rq;
struct rq_flags rf;
+ int cpu = task_cpu(p);
rq = task_rq_lock(p, &rf);
update_rq_clock(rq);
@@ -7017,21 +7018,15 @@ void migrate_enable(void)
p->migrate_disable_update = 0;
- WARN_ON(smp_processor_id() != task_cpu(p));
- if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
- const struct cpumask *cpu_valid_mask = cpu_active_mask;
- struct migration_arg arg;
- unsigned int dest_cpu;
-
- if (p->flags & PF_KTHREAD) {
- /*
- * Kernel threads are allowed on online && !active CPUs
- */
- cpu_valid_mask = cpu_online_mask;
- }
- dest_cpu = cpumask_any_and(cpu_valid_mask, &p->cpus_mask);
- arg.task = p;
- arg.dest_cpu = dest_cpu;
+ WARN_ON(smp_processor_id() != cpu);
+ if (!cpumask_test_cpu(cpu, &p->cpus_mask)) {
+ struct migration_arg arg = { p };
+ struct rq_flags rf;
+
+ rq = task_rq_lock(p, &rf);
+ update_rq_clock(rq);
+ arg.dest_cpu = select_fallback_rq(cpu, p);
+ task_rq_unlock(rq, p, &rf);
unpin_current_cpu();
preempt_lazy_enable();
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 16/25] Revert "ARM: Initialize split page table locks for vector page"
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (14 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 15/25] sched: migrate_enable: Use select_fallback_rq() zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 17/25] x86/fpu: Don't cache access to fpu_fpregs_owner_ctx zanussi
` (8 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 247074c44d8c3e619dfde6404a52295d8d671d38 ]
I'm dropping this patch, with its original description:
|ARM: Initialize split page table locks for vector page
|
|Without this patch, ARM can not use SPLIT_PTLOCK_CPUS if
|PREEMPT_RT_FULL=y because vectors_user_mapping() creates a
|VM_ALWAYSDUMP mapping of the vector page (address 0xffff0000), but no
|ptl->lock has been allocated for the page. An attempt to coredump
|that page will result in a kernel NULL pointer dereference when
|follow_page() attempts to lock the page.
|
|The call tree to the NULL pointer dereference is:
|
| do_notify_resume()
| get_signal_to_deliver()
| do_coredump()
| elf_core_dump()
| get_dump_page()
| __get_user_pages()
| follow_page()
| pte_offset_map_lock() <----- a #define
| ...
| rt_spin_lock()
|
|The underlying problem is exposed by mm-shrink-the-page-frame-to-rt-size.patch.
The patch named mm-shrink-the-page-frame-to-rt-size.patch was dropped
from the RT queue once the SPLIT_PTLOCK_CPUS feature (in a slightly
different shape) went upstream (somewhere between v3.12 and v3.14).
I can see that the patch still allocates a lock which wasn't there
before. However, I can't trigger a kernel oops as described in the
patch by forcing a coredump.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
arch/arm/kernel/process.c | 24 ------------------------
1 file changed, 24 deletions(-)
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index cf4e1452d4b4..d96714e1858c 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -325,30 +325,6 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
}
#ifdef CONFIG_MMU
-/*
- * CONFIG_SPLIT_PTLOCK_CPUS results in a page->ptl lock. If the lock is not
- * initialized by pgtable_page_ctor() then a coredump of the vector page will
- * fail.
- */
-static int __init vectors_user_mapping_init_page(void)
-{
- struct page *page;
- unsigned long addr = 0xffff0000;
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
-
- pgd = pgd_offset_k(addr);
- pud = pud_offset(pgd, addr);
- pmd = pmd_offset(pud, addr);
- page = pmd_page(*(pmd));
-
- pgtable_page_ctor(page);
-
- return 0;
-}
-late_initcall(vectors_user_mapping_init_page);
-
#ifdef CONFIG_KUSER_HELPERS
/*
* The vectors page is always readable from user space for the
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 17/25] x86/fpu: Don't cache access to fpu_fpregs_owner_ctx
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (15 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 16/25] Revert "ARM: Initialize split page table locks for vector page" zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-24 8:55 ` Sebastian Andrzej Siewior
2020-02-21 21:24 ` [PATCH RT 18/25] locking: Make spinlock_t and rwlock_t a RCU section on RT zanussi
` (7 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit eb46d70e4455e49928f136f768f1e54646ab4ff7 ]
The state/owner of the FPU is saved to fpu_fpregs_owner_ctx by pointing
to the context that is currently loaded. It never changed during the
lifetime of a task - it remained stable/constant.
After deferred loading of the FPU registers until return to userland was
implemented, the content of fpu_fpregs_owner_ctx may change during
preemption and must not be cached.
This went unnoticed for some time but has now become visible, in
particular since gcc 9 caches that load in copy_fpstate_to_sigframe()
and reuses it in the retry loop:
copy_fpstate_to_sigframe()
load fpu_fpregs_owner_ctx and save on stack
fpregs_lock()
copy_fpregs_to_sigframe() /* failed */
fpregs_unlock()
*** PREEMPTION, another uses FPU, changes fpu_fpregs_owner_ctx ***
fault_in_pages_writeable() /* succeed, retry */
fpregs_lock()
__fpregs_load_activate()
fpregs_state_valid() /* uses fpu_fpregs_owner_ctx from stack */
copy_fpregs_to_sigframe() /* succeeds, random FPU content */
This is a comparison of the assembly produced by gcc 9, without vs with this
patch:
| # arch/x86/kernel/fpu/signal.c:173: if (!access_ok(buf, size))
| cmpq %rdx, %rax # tmp183, _4
| jb .L190 #,
|-# arch/x86/include/asm/fpu/internal.h:512: return fpu == this_cpu_read_stable(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
|-#APP
|-# 512 "arch/x86/include/asm/fpu/internal.h" 1
|- movq %gs:fpu_fpregs_owner_ctx,%rax #, pfo_ret__
|-# 0 "" 2
|-#NO_APP
|- movq %rax, -88(%rbp) # pfo_ret__, %sfp
…
|-# arch/x86/include/asm/fpu/internal.h:512: return fpu == this_cpu_read_stable(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
|- movq -88(%rbp), %rcx # %sfp, pfo_ret__
|- cmpq %rcx, -64(%rbp) # pfo_ret__, %sfp
|+# arch/x86/include/asm/fpu/internal.h:512: return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
|+#APP
|+# 512 "arch/x86/include/asm/fpu/internal.h" 1
|+ movq %gs:fpu_fpregs_owner_ctx(%rip),%rax # fpu_fpregs_owner_ctx, pfo_ret__
|+# 0 "" 2
|+# arch/x86/include/asm/fpu/internal.h:512: return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
|+#NO_APP
|+ cmpq %rax, -64(%rbp) # pfo_ret__, %sfp
Use this_cpu_read() instead of this_cpu_read_stable() to avoid caching
fpu_fpregs_owner_ctx across preemption points.
The Fixes: tag points to the commit that added deferred FPU loading.
Since that commit, the compiler must not move the load of
fpu_fpregs_owner_ctx outside of the locked section: a task preemption
will change its value and stale content would be observed.
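As a rough illustration of the difference between the two accessors
(hypothetical snippet, not taken from signal.c):

/*
 * this_cpu_read_stable() lets the compiler treat the per-CPU value as
 * invariant, so it may reuse the first load even across a preemption
 * point.  this_cpu_read() forces a fresh load every time.
 */
struct fpu *before = this_cpu_read_stable(fpu_fpregs_owner_ctx);
/* ... preemption: another task loads its FPU state ... */
struct fpu *after  = this_cpu_read_stable(fpu_fpregs_owner_ctx);	/* may equal 'before' */

struct fpu *fresh  = this_cpu_read(fpu_fpregs_owner_ctx);		/* always reloaded */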
[ bp: Massage. ]
Debugged-by: Austin Clements <austin@google.com>
Debugged-by: David Chase <drchase@golang.org>
Debugged-by: Ian Lance Taylor <ian@airs.com>
Fixes: 5f409e20b7945 ("x86/fpu: Defer FPU state load until return to userspace")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Rik van Riel <riel@surriel.com>
Tested-by: Borislav Petkov <bp@suse.de>
Cc: Aubrey Li <aubrey.li@intel.com>
Cc: Austin Clements <austin@google.com>
Cc: Barret Rhoden <brho@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Chase <drchase@golang.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: ian@airs.com
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Bleecher Snyder <josharian@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Cc: stable-rt@vger.kernel.org
Link: https://lkml.kernel.org/r/20191128085306.hxfa2o3knqtu4wfn@linutronix.de
Link: https://bugzilla.kernel.org/show_bug.cgi?id=205663
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
arch/x86/include/asm/fpu/internal.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index fa2c93cb42a2..92e12f5d0d64 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -498,7 +498,7 @@ static inline void __fpu_invalidate_fpregs_state(struct fpu *fpu)
static inline int fpregs_state_valid(struct fpu *fpu, unsigned int cpu)
{
- return fpu == this_cpu_read_stable(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
+ return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
}
/*
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 18/25] locking: Make spinlock_t and rwlock_t a RCU section on RT
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (16 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 17/25] x86/fpu: Don't cache access to fpu_fpregs_owner_ctx zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 19/25] userfaultfd: Use a seqlock instead of seqcount zanussi
` (6 subsequent siblings)
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 84440022a0e1c8c936d61f8f97593674a295d409 ]
On !RT, a locked spinlock_t or rwlock_t disables preemption, which
implies an RCU read-side section. There is code that relies on that
behaviour. Add an explicit RCU read section on RT while a sleeping lock
(a lock which would disable preemption on !RT) is acquired.
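A sketch of the kind of caller this keeps working (hypothetical;
cache_lock and cached_obj are made up):

/*
 * On !RT the spin_lock() below disables preemption and therefore already
 * acts as an RCU read-side section; on RT the lock sleeps, so the RCU
 * section is now taken explicitly inside the rt_spin_lock()/unlock paths.
 */
spin_lock(&cache_lock);
obj = rcu_dereference(cached_obj);	/* relies on being in an RCU section */
if (obj)
	obj->hits++;
spin_unlock(&cache_lock);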
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
kernel/locking/rtmutex.c | 6 ++++++
kernel/locking/rwlock-rt.c | 6 ++++++
2 files changed, 12 insertions(+)
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 4bc01a2a9a88..848d9ed6f053 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1142,6 +1142,7 @@ void __sched rt_spin_lock_slowunlock(struct rt_mutex *lock)
void __lockfunc rt_spin_lock(spinlock_t *lock)
{
sleeping_lock_inc();
+ rcu_read_lock();
migrate_disable();
spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock);
@@ -1157,6 +1158,7 @@ void __lockfunc __rt_spin_lock(struct rt_mutex *lock)
void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass)
{
sleeping_lock_inc();
+ rcu_read_lock();
migrate_disable();
spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock);
@@ -1170,6 +1172,7 @@ void __lockfunc rt_spin_unlock(spinlock_t *lock)
spin_release(&lock->dep_map, 1, _RET_IP_);
rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock);
migrate_enable();
+ rcu_read_unlock();
sleeping_lock_dec();
}
EXPORT_SYMBOL(rt_spin_unlock);
@@ -1201,6 +1204,7 @@ int __lockfunc rt_spin_trylock(spinlock_t *lock)
ret = __rt_mutex_trylock(&lock->lock);
if (ret) {
spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
+ rcu_read_lock();
} else {
migrate_enable();
sleeping_lock_dec();
@@ -1217,6 +1221,7 @@ int __lockfunc rt_spin_trylock_bh(spinlock_t *lock)
ret = __rt_mutex_trylock(&lock->lock);
if (ret) {
sleeping_lock_inc();
+ rcu_read_lock();
migrate_disable();
spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
} else
@@ -1233,6 +1238,7 @@ int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
ret = __rt_mutex_trylock(&lock->lock);
if (ret) {
sleeping_lock_inc();
+ rcu_read_lock();
migrate_disable();
spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
}
diff --git a/kernel/locking/rwlock-rt.c b/kernel/locking/rwlock-rt.c
index c3b91205161c..0ae8c62ea832 100644
--- a/kernel/locking/rwlock-rt.c
+++ b/kernel/locking/rwlock-rt.c
@@ -310,6 +310,7 @@ int __lockfunc rt_read_trylock(rwlock_t *rwlock)
ret = do_read_rt_trylock(rwlock);
if (ret) {
rwlock_acquire_read(&rwlock->dep_map, 0, 1, _RET_IP_);
+ rcu_read_lock();
} else {
migrate_enable();
sleeping_lock_dec();
@@ -327,6 +328,7 @@ int __lockfunc rt_write_trylock(rwlock_t *rwlock)
ret = do_write_rt_trylock(rwlock);
if (ret) {
rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
+ rcu_read_lock();
} else {
migrate_enable();
sleeping_lock_dec();
@@ -338,6 +340,7 @@ EXPORT_SYMBOL(rt_write_trylock);
void __lockfunc rt_read_lock(rwlock_t *rwlock)
{
sleeping_lock_inc();
+ rcu_read_lock();
migrate_disable();
rwlock_acquire_read(&rwlock->dep_map, 0, 0, _RET_IP_);
do_read_rt_lock(rwlock);
@@ -347,6 +350,7 @@ EXPORT_SYMBOL(rt_read_lock);
void __lockfunc rt_write_lock(rwlock_t *rwlock)
{
sleeping_lock_inc();
+ rcu_read_lock();
migrate_disable();
rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
do_write_rt_lock(rwlock);
@@ -358,6 +362,7 @@ void __lockfunc rt_read_unlock(rwlock_t *rwlock)
rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
do_read_rt_unlock(rwlock);
migrate_enable();
+ rcu_read_unlock();
sleeping_lock_dec();
}
EXPORT_SYMBOL(rt_read_unlock);
@@ -367,6 +372,7 @@ void __lockfunc rt_write_unlock(rwlock_t *rwlock)
rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
do_write_rt_unlock(rwlock);
migrate_enable();
+ rcu_read_unlock();
sleeping_lock_dec();
}
EXPORT_SYMBOL(rt_write_unlock);
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 19/25] userfaultfd: Use a seqlock instead of seqcount
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (17 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 18/25] locking: Make spinlock_t and rwlock_t a RCU section on RT zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-24 9:03 ` Sebastian Andrzej Siewior
2020-02-21 21:24 ` [PATCH RT 20/25] kmemleak: Cosmetic changes zanussi
` (5 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit dc952a564d02997330654be9628bbe97ba2a05d3 ]
On RT, write_seqcount_begin() disables preemption, which leads to a
warning in add_wait_queue() while the spinlock_t is acquired.
The waitqueue can't be converted to a swait_queue because
userfaultfd_wake_function() is used as a custom wake function.
Use a seqlock instead of a seqcount to avoid the preempt_disable()
section during add_wait_queue().
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
fs/userfaultfd.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index e2b2196fd942..71886a8e8f71 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -51,7 +51,7 @@ struct userfaultfd_ctx {
/* waitqueue head for events */
wait_queue_head_t event_wqh;
/* a refile sequence protected by fault_pending_wqh lock */
- struct seqcount refile_seq;
+ seqlock_t refile_seq;
/* pseudo fd refcounting */
atomic_t refcount;
/* userfaultfd syscall flags */
@@ -1047,7 +1047,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
* waitqueue could become empty if this is the
* only userfault.
*/
- write_seqcount_begin(&ctx->refile_seq);
+ write_seqlock(&ctx->refile_seq);
/*
* The fault_pending_wqh.lock prevents the uwq
@@ -1073,7 +1073,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
list_del(&uwq->wq.entry);
__add_wait_queue(&ctx->fault_wqh, &uwq->wq);
- write_seqcount_end(&ctx->refile_seq);
+ write_sequnlock(&ctx->refile_seq);
/* careful to always initialize msg if ret == 0 */
*msg = uwq->msg;
@@ -1246,11 +1246,11 @@ static __always_inline void wake_userfault(struct userfaultfd_ctx *ctx,
* sure we've userfaults to wake.
*/
do {
- seq = read_seqcount_begin(&ctx->refile_seq);
+ seq = read_seqbegin(&ctx->refile_seq);
need_wakeup = waitqueue_active(&ctx->fault_pending_wqh) ||
waitqueue_active(&ctx->fault_wqh);
cond_resched();
- } while (read_seqcount_retry(&ctx->refile_seq, seq));
+ } while (read_seqretry(&ctx->refile_seq, seq));
if (need_wakeup)
__wake_userfault(ctx, range);
}
@@ -1915,7 +1915,7 @@ static void init_once_userfaultfd_ctx(void *mem)
init_waitqueue_head(&ctx->fault_wqh);
init_waitqueue_head(&ctx->event_wqh);
init_waitqueue_head(&ctx->fd_wqh);
- seqcount_init(&ctx->refile_seq);
+ seqlock_init(&ctx->refile_seq);
}
/**
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 20/25] kmemleak: Cosmetic changes
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (18 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 19/25] userfaultfd: Use a seqlock instead of seqcount zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-24 9:12 ` Sebastian Andrzej Siewior
2020-02-21 21:24 ` [PATCH RT 21/25] smp: Use smp_cond_func_t as type for the conditional function zanussi
` (4 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 65a387a0b45cdd6844b7c6269e6333c9f0113410 ]
Align with the patch that was sent upstream for review. Only cosmetic
changes.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
mm/kmemleak.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 17718a11782b..d7925ee4b052 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -26,7 +26,7 @@
*
* The following locks and mutexes are used by kmemleak:
*
- * - kmemleak_lock (raw spinlock): protects the object_list modifications and
+ * - kmemleak_lock (raw_spinlock_t): protects the object_list modifications and
* accesses to the object_tree_root. The object_list is the main list
* holding the metadata (struct kmemleak_object) for the allocated memory
* blocks. The object_tree_root is a red black tree used to look-up
@@ -35,13 +35,13 @@
* object_tree_root in the create_object() function called from the
* kmemleak_alloc() callback and removed in delete_object() called from the
* kmemleak_free() callback
- * - kmemleak_object.lock (spinlock): protects a kmemleak_object. Accesses to
- * the metadata (e.g. count) are protected by this lock. Note that some
- * members of this structure may be protected by other means (atomic or
- * kmemleak_lock). This lock is also held when scanning the corresponding
- * memory block to avoid the kernel freeing it via the kmemleak_free()
- * callback. This is less heavyweight than holding a global lock like
- * kmemleak_lock during scanning
+ * - kmemleak_object.lock (raw_spinlock_t): protects a kmemleak_object.
+ * Accesses to the metadata (e.g. count) are protected by this lock. Note
+ * that some members of this structure may be protected by other means
+ * (atomic or kmemleak_lock). This lock is also held when scanning the
+ * corresponding memory block to avoid the kernel freeing it via the
+ * kmemleak_free() callback. This is less heavyweight than holding a global
+ * lock like kmemleak_lock during scanning.
* - scan_mutex (mutex): ensures that only one thread may scan the memory for
* unreferenced objects at a time. The gray_list contains the objects which
* are already referenced or marked as false positives and need to be
@@ -197,7 +197,7 @@ static LIST_HEAD(object_list);
static LIST_HEAD(gray_list);
/* search tree for object boundaries */
static struct rb_root object_tree_root = RB_ROOT;
-/* rw_lock protecting the access to object_list and object_tree_root */
+/* protecting the access to object_list and object_tree_root */
static DEFINE_RAW_SPINLOCK(kmemleak_lock);
/* allocation caches for kmemleak internal data */
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 21/25] smp: Use smp_cond_func_t as type for the conditional function
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (19 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 20/25] kmemleak: Cosmetic changes zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-24 9:52 ` Sebastian Andrzej Siewior
2020-02-21 21:24 ` [PATCH RT 22/25] mm/memcontrol: Move misplaced local_unlock_irqrestore() zanussi
` (3 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 0c2799d2b9cd2e314298c68b81fbdc478f552ad4 ]
Use a typedef for the conditional function instead of spelling it out
each time in the function prototype.
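A hypothetical caller showing the typedef in use (all names below are
illustrative only):

static DEFINE_PER_CPU(int, pending_work);

static bool cpu_has_pending_work(int cpu, void *info)	/* smp_cond_func_t */
{
	return per_cpu(pending_work, cpu) != 0;
}

static void flush_pending_work(void *info)		/* smp_call_func_t */
{
	this_cpu_write(pending_work, 0);
}

static void flush_all_pending(void)
{
	/* Runs flush_pending_work() on every CPU for which the condition holds. */
	on_each_cpu_cond(cpu_has_pending_work, flush_pending_work, NULL, true,
			 GFP_KERNEL);
}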
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
Conflicts:
include/linux/smp.h
kernel/smp.c
kernel/up.c
---
include/linux/smp.h | 6 +++---
kernel/smp.c | 5 ++---
kernel/up.c | 5 ++---
3 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 5801e516ba63..af05a63e4c06 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -15,6 +15,7 @@
#include <linux/llist.h>
typedef void (*smp_call_func_t)(void *info);
+typedef bool (*smp_cond_func_t)(int cpu, void *info);
struct __call_single_data {
struct llist_node llist;
smp_call_func_t func;
@@ -49,9 +50,8 @@ void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
* cond_func returns a positive value. This may include the local
* processor.
*/
-void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
- smp_call_func_t func, void *info, bool wait,
- gfp_t gfp_flags);
+void on_each_cpu_cond(smp_cond_func_t cond_func, smp_call_func_t func,
+ void *info, bool wait, gfp_t gfp_flags);
int smp_call_function_single_async(int cpu, call_single_data_t *csd);
diff --git a/kernel/smp.c b/kernel/smp.c
index c94dd85c8d41..00fbb6aa948a 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -667,9 +667,8 @@ EXPORT_SYMBOL(on_each_cpu_mask);
* You must not call this function with disabled interrupts or
* from a hardware interrupt handler or from a bottom half handler.
*/
-void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
- smp_call_func_t func, void *info, bool wait,
- gfp_t gfp_flags)
+void on_each_cpu_cond(smp_cond_func_t cond_func, smp_call_func_t func,
+ void *info, bool wait, gfp_t gfp_flags)
{
cpumask_var_t cpus;
int cpu, ret;
diff --git a/kernel/up.c b/kernel/up.c
index 42c46bf3e0a5..a0276ba75270 100644
--- a/kernel/up.c
+++ b/kernel/up.c
@@ -68,9 +68,8 @@ EXPORT_SYMBOL(on_each_cpu_mask);
* Preemption is disabled here to make sure the cond_func is called under the
* same condtions in UP and SMP.
*/
-void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
- smp_call_func_t func, void *info, bool wait,
- gfp_t gfp_flags)
+void on_each_cpu_cond(smp_cond_func_t cond_func, smp_call_func_t func,
+ void *info, bool wait, gfp_t gfp_flags)
{
unsigned long flags;
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 22/25] mm/memcontrol: Move misplaced local_unlock_irqrestore()
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (20 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 21/25] smp: Use smp_cond_func_t as type for the conditional function zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-24 9:55 ` Sebastian Andrzej Siewior
2020-02-21 21:24 ` [PATCH RT 23/25] locallock: Include header for the `current' macro zanussi
` (2 subsequent siblings)
24 siblings, 1 reply; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Matt Fleming <matt@codeblueprint.co.uk>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 071a1d6a6e14d0dec240a8c67b425140d7f92f6a ]
The comment about local_lock_irqsave() mentions just the counters, and
css_put_many()'s callback only invokes a worker, so it is safe to move the
unlock after memcg_check_events(). That way css_put_many() can be invoked
without the lock held.
Cc: Daniel Wagner <wagi@monom.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
[bigeasy: rewrote the patch description]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0503b31e2a87..a359a24ebd9f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6102,10 +6102,10 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
mem_cgroup_charge_statistics(memcg, page, PageTransHuge(page),
-nr_entries);
memcg_check_events(memcg, page);
+ local_unlock_irqrestore(event_lock, flags);
if (!mem_cgroup_is_root(memcg))
css_put_many(&memcg->css, nr_entries);
- local_unlock_irqrestore(event_lock, flags);
}
/**
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 23/25] locallock: Include header for the `current' macro
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (21 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 22/25] mm/memcontrol: Move misplaced local_unlock_irqrestore() zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 24/25] sched: Provide migrate_disable/enable() inlines zanussi
2020-02-21 21:24 ` [PATCH RT 25/25] Linux 4.14.170-rt75-rc1 zanussi
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit e693075a5fd852043fa8d2b0467e078d9e5cb782 ]
Include the header for the `current' macro so that
CONFIG_KERNEL_HEADER_TEST=y passes.
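For context, a simplified sketch of why locallock.h needs `current'
(abridged, not the exact code):

static inline void __local_lock(struct local_irq_lock *lv)
{
	if (lv->owner != current) {
		spin_lock(&lv->lock);
		lv->owner = current;	/* `current' needs <asm/current.h> */
	}
	lv->nestcnt++;
}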
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
include/linux/locallock.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index 921eab83cd34..81c89d87723b 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -3,6 +3,7 @@
#include <linux/percpu.h>
#include <linux/spinlock.h>
+#include <asm/current.h>
#ifdef CONFIG_PREEMPT_RT_BASE
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 24/25] sched: Provide migrate_disable/enable() inlines
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (22 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 23/25] locallock: Include header for the `current' macro zanussi
@ 2020-02-21 21:24 ` zanussi
2020-02-21 21:24 ` [PATCH RT 25/25] Linux 4.14.170-rt75-rc1 zanussi
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Thomas Gleixner <tglx@linutronix.de>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
[ Upstream commit 87d447be4100447b42229cce5e9b33c7915871eb ]
Currently code which solely needs to prevent migration of a task uses
preempt_disable()/enable() pairs. This is the only reliable way to do so as
setting the task affinity to a single CPU can be undone by a setaffinity
operation from a different task/process. It's also significantly faster.
RT provides a separate migrate_disable/enable() mechanism which does not
disable preemption, in order to achieve the semantic requirements of an
(almost) fully preemptible kernel.
As it is unclear from looking at a given code path whether the intention is
to disable preemption or migration, introduce migrate_disable/enable()
inline functions which can be used to annotate code which merely needs to
disable migration. Map them to preempt_disable/enable() for now. The RT
substitution will be provided later.
Code which is annotated that way documents that it has no requirement to
protect against reentrancy of a preempting task. Either this is not
required at all or the call sites are already serialized by other means.
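A hypothetical user of the new annotation (the per-CPU counter is made
up):

static DEFINE_PER_CPU(unsigned long, my_event_count);

static void count_local_event(void)
{
	/*
	 * Only migration to another CPU has to be prevented here; being
	 * preempted and resumed on the same CPU is harmless.  For now this
	 * still maps to preempt_disable()/preempt_enable() on !RT.
	 */
	migrate_disable();
	this_cpu_inc(my_event_count);
	migrate_enable();
}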
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
include/linux/preempt.h | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 6728662a81e8..2e15fbc01eda 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -241,8 +241,30 @@ static inline int __migrate_disabled(struct task_struct *p)
}
#else
-#define migrate_disable() preempt_disable()
-#define migrate_enable() preempt_enable()
+/**
+ * migrate_disable - Prevent migration of the current task
+ *
+ * Maps to preempt_disable() which also disables preemption. Use
+ * migrate_disable() to annotate that the intent is to prevent migration
+ * but not necessarily preemption.
+ *
+ * Can be invoked nested like preempt_disable() and needs the corresponding
+ * number of migrate_enable() invocations.
+ */
+#define migrate_disable() preempt_disable()
+
+/**
+ * migrate_enable - Allow migration of the current task
+ *
+ * Counterpart to migrate_disable().
+ *
+ * As migrate_disable() can be invoked nested only the uttermost invocation
+ * reenables migration.
+ *
+ * Currently mapped to preempt_enable().
+ */
+#define migrate_enable() preempt_enable()
+
static inline int __migrate_disabled(struct task_struct *p)
{
return 0;
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH RT 25/25] Linux 4.14.170-rt75-rc1
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
` (23 preceding siblings ...)
2020-02-21 21:24 ` [PATCH RT 24/25] sched: Provide migrate_disable/enable() inlines zanussi
@ 2020-02-21 21:24 ` zanussi
24 siblings, 0 replies; 43+ messages in thread
From: zanussi @ 2020-02-21 21:24 UTC (permalink / raw)
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
Daniel Wagner, Tom Zanussi
From: Tom Zanussi <zanussi@kernel.org>
v4.14.170-rt75-rc1 stable review patch.
If anyone has any objections, please let me know.
-----------
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
localversion-rt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/localversion-rt b/localversion-rt
index 7d028f4a9e56..5449cd22a524 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt74
+-rt75-rc1
--
2.14.1
^ permalink raw reply related [flat|nested] 43+ messages in thread
* Re: [PATCH RT 03/25] sched/deadline: Ensure inactive_timer runs in hardirq context
2020-02-21 21:24 ` [PATCH RT 03/25] sched/deadline: Ensure inactive_timer runs in hardirq context zanussi
@ 2020-02-24 8:33 ` Sebastian Andrzej Siewior
2020-02-25 14:50 ` Juri Lelli
0 siblings, 1 reply; 43+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 8:33 UTC (permalink / raw)
To: zanussi, Juri Lelli
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On 2020-02-21 15:24:31 [-0600], zanussi@kernel.org wrote:
> [ Upstream commit ba94e7aed7405c58251b1380e6e7d73aa8284b41 ]
>
> SCHED_DEADLINE inactive timer needs to run in hardirq context (as
> dl_task_timer already does) on PREEMPT_RT
The message says "dl_task_timer already does" but this is not true for
v4.14 as it still runs in softirq context on RT. v4.19 has this either
via
https://lkml.kernel.org/r/20190731103715.4047-1-juri.lelli@redhat.com
or the patch which got merged upstream.
Juri, I guess we want this for v4.14, too?
> Change the mode to HRTIMER_MODE_REL_HARD.
>
> [ tglx: Fixed up the start site, so mode debugging works ]
>
> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Link: https://lkml.kernel.org/r/20190731103715.4047-1-juri.lelli@redhat.com
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Tom Zanussi <zanussi@kernel.org>
> ---
> kernel/sched/deadline.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index eb68f7fb8a36..7b04e54bea01 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -252,7 +252,7 @@ static void task_non_contending(struct task_struct *p)
>
> dl_se->dl_non_contending = 1;
> get_task_struct(p);
> - hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL);
> + hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL_HARD);
> }
>
> static void task_contending(struct sched_dl_entity *dl_se, int flags)
> @@ -1234,7 +1234,7 @@ void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se)
> {
> struct hrtimer *timer = &dl_se->inactive_timer;
>
> - hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> + hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
> timer->function = inactive_task_timer;
> }
>
Sebastian
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH RT 17/25] x86/fpu: Don't cache access to fpu_fpregs_owner_ctx
2020-02-21 21:24 ` [PATCH RT 17/25] x86/fpu: Don't cache access to fpu_fpregs_owner_ctx zanussi
@ 2020-02-24 8:55 ` Sebastian Andrzej Siewior
2020-02-24 15:12 ` Tom Zanussi
0 siblings, 1 reply; 43+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 8:55 UTC (permalink / raw)
To: zanussi
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On 2020-02-21 15:24:45 [-0600], zanussi@kernel.org wrote:
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
> v4.14.170-rt75-rc1 stable review patch.
> If anyone has any objections, please let me know.
Please don't apply this for the reasons I mentioned in
https://lkml.kernel.org/r/20200122084352.nyqnlfaumjgnvgih@linutronix.de
I guess they still apply (haven't checked).
Sebastian
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH RT 19/25] userfaultfd: Use a seqlock instead of seqcount
2020-02-21 21:24 ` [PATCH RT 19/25] userfaultfd: Use a seqlock instead of seqcount zanussi
@ 2020-02-24 9:03 ` Sebastian Andrzej Siewior
2020-02-24 15:14 ` Tom Zanussi
2020-02-24 16:17 ` Steven Rostedt
0 siblings, 2 replies; 43+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 9:03 UTC (permalink / raw)
To: zanussi
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On 2020-02-21 15:24:47 [-0600], zanussi@kernel.org wrote:
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
> v4.14.170-rt75-rc1 stable review patch.
> If anyone has any objections, please let me know.
This is required but it is not part of the next "higher" tree
(v4.19-RT). Which means if someone moves from v4.14-RT to the next tree
(v4.19-RT in this case) that someone would have the bug again.
Could you please wait with such patches, or did I miss the v4.19-RT
tree with this change?
Sebastian
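As background, a generic sketch of the seqcount versus seqlock pattern being discussed; the structure and field names are made up and this is not the userfaultfd code. A seqlock_t pairs the sequence counter with its own spinlock, so writers are serialized by a proper lock (a sleeping, PI-aware lock on RT), while readers keep the usual lockless retry loop.

/* Illustrative sketch; "sample_ctx" and its helpers are hypothetical. */
#include <linux/seqlock.h>

struct sample_ctx {
        seqlock_t lock;         /* sequence counter plus embedded spinlock */
        unsigned long state;
};

static void sample_ctx_init(struct sample_ctx *c)
{
        seqlock_init(&c->lock);
        c->state = 0;
}

static void sample_ctx_update(struct sample_ctx *c, unsigned long v)
{
        write_seqlock(&c->lock);        /* writer side takes the spinlock */
        c->state = v;
        write_sequnlock(&c->lock);
}

static unsigned long sample_ctx_read(struct sample_ctx *c)
{
        unsigned long v;
        unsigned int seq;

        do {                            /* readers retry if a write raced */
                seq = read_seqbegin(&c->lock);
                v = c->state;
        } while (read_seqretry(&c->lock, seq));

        return v;
}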
* Re: [PATCH RT 20/25] kmemleak: Cosmetic changes
2020-02-21 21:24 ` [PATCH RT 20/25] kmemleak: Cosmetic changes zanussi
@ 2020-02-24 9:12 ` Sebastian Andrzej Siewior
2020-02-24 15:18 ` Tom Zanussi
0 siblings, 1 reply; 43+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 9:12 UTC (permalink / raw)
To: zanussi
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On 2020-02-21 15:24:48 [-0600], zanussi@kernel.org wrote:
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
> v4.14.170-rt75-rc1 stable review patch.
> If anyone has any objections, please let me know.
It makes no sense to apply this. I updated my patch in the RT queue to
what has been sent (and later merged) upstream. Then I was forced to
sync the non-rebase branch with the rebase branch. This is the result.
What should be applied instead is
fb2c57edcb943 ("kmemleak: Change the lock of kmemleak_object to raw_spinlock_t")
from the v4.19-RT branch.
Sebastian
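As background, a generic sketch of why such conversions happen on RT; the names below are made up and this is not the kmemleak code. On PREEMPT_RT a spinlock_t becomes a sleeping lock, so it must not be taken from contexts that cannot sleep, whereas a raw_spinlock_t keeps the classic busy-waiting behaviour.

/* Illustrative sketch; "sample_object" and its users are hypothetical. */
#include <linux/spinlock.h>

struct sample_object {
        raw_spinlock_t lock;    /* stays a real spinning lock on PREEMPT_RT */
        unsigned long state;
};

static void sample_object_init(struct sample_object *obj)
{
        raw_spin_lock_init(&obj->lock);
        obj->state = 0;
}

static void sample_object_mark(struct sample_object *obj)
{
        unsigned long flags;

        /* Safe even in atomic context on RT; a spinlock_t here could sleep. */
        raw_spin_lock_irqsave(&obj->lock, flags);
        obj->state |= 1UL;
        raw_spin_unlock_irqrestore(&obj->lock, flags);
}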
* Re: [PATCH RT 15/25] sched: migrate_enable: Use select_fallback_rq()
2020-02-21 21:24 ` [PATCH RT 15/25] sched: migrate_enable: Use select_fallback_rq() zanussi
@ 2020-02-24 9:43 ` Sebastian Andrzej Siewior
2020-02-24 15:31 ` Tom Zanussi
0 siblings, 1 reply; 43+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 9:43 UTC (permalink / raw)
To: zanussi
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On 2020-02-21 15:24:43 [-0600], zanussi@kernel.org wrote:
> From: Scott Wood <swood@redhat.com>
>
> v4.14.170-rt75-rc1 stable review patch.
> If anyone has any objections, please let me know.
This creates a bug which is fixed later via
sched: migrate_enable: Busy loop until the migration request is completed
So if you apply this, please take the bug fix, too. This is Steven's queue
for reference:
|[PATCH RT 22/30] sched: migrate_enable: Use select_fallback_rq()
^^ bug introduced
|[PATCH RT 23/30] sched: Lazy migrate_disable processing
|[PATCH RT 24/30] sched: migrate_enable: Use stop_one_cpu_nowait()
|[PATCH RT 25/30] Revert "ARM: Initialize split page table locks for vector page"
|[PATCH RT 26/30] locking: Make spinlock_t and rwlock_t a RCU section on RT
|[PATCH RT 27/30] sched/core: migrate_enable() must access takedown_cpu_task on !HOTPLUG_CPU
|[PATCH RT 28/30] lib/smp_processor_id: Adjust check_preemption_disabled()
|[PATCH RT 29/30] sched: migrate_enable: Busy loop until the migration request is completed
^^ bug fixed
Sebastian
* Re: [PATCH RT 21/25] smp: Use smp_cond_func_t as type for the conditional function
2020-02-21 21:24 ` [PATCH RT 21/25] smp: Use smp_cond_func_t as type for the conditional function zanussi
@ 2020-02-24 9:52 ` Sebastian Andrzej Siewior
2020-02-24 15:34 ` Tom Zanussi
0 siblings, 1 reply; 43+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 9:52 UTC (permalink / raw)
To: zanussi
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On 2020-02-21 15:24:49 [-0600], zanussi@kernel.org wrote:
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
> v4.14.170-rt75-rc1 stable review patch.
> If anyone has any objections, please let me know.
This alone makes no sense. I had this in the devel tree as part of a
three patch series to remove a limitation in on_each_cpu_cond_mask().
This does not apply to the v4.14 series due to lack of the
on_each_cpu_cond_mask() function.
Sebastian
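For readers unfamiliar with the typedef in question, a sketch of roughly what it names; the helper functions are made up, and the on_each_cpu_cond() signature shown is the v4.14-era one (still taking a gfp_t), so treat the details as an assumption rather than a reference.

/* Illustrative sketch; "sample_cpu_needs_work" and "sample_do_work" are hypothetical. */
#include <linux/smp.h>
#include <linux/gfp.h>

/* The typedef the patch introduces for the per-CPU condition callback: */
typedef bool (*smp_cond_func_t)(int cpu, void *info);

static bool sample_cpu_needs_work(int cpu, void *info)
{
        return cpu_online(cpu); /* decide per CPU whether the IPI callback runs */
}

static void sample_do_work(void *info)
{
        /* runs on each CPU for which the condition returned true */
}

static void sample_run_where_needed(void)
{
        on_each_cpu_cond(sample_cpu_needs_work, sample_do_work,
                         NULL, true, GFP_KERNEL);
}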
* Re: [PATCH RT 22/25] mm/memcontrol: Move misplaced local_unlock_irqrestore()
2020-02-21 21:24 ` [PATCH RT 22/25] mm/memcontrol: Move misplaced local_unlock_irqrestore() zanussi
@ 2020-02-24 9:55 ` Sebastian Andrzej Siewior
0 siblings, 0 replies; 43+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 9:55 UTC (permalink / raw)
To: zanussi
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On 2020-02-21 15:24:50 [-0600], zanussi@kernel.org wrote:
> From: Matt Fleming <matt@codeblueprint.co.uk>
>
> v4.14.170-rt75-rc1 stable review patch.
> If anyone has any objections, please let me know.
For this and the three following patches: please skip them if they are not
(yet) part of v4.19-RT.
Sebastian
* Re: [PATCH RT 17/25] x86/fpu: Don't cache access to fpu_fpregs_owner_ctx
2020-02-24 8:55 ` Sebastian Andrzej Siewior
@ 2020-02-24 15:12 ` Tom Zanussi
0 siblings, 0 replies; 43+ messages in thread
From: Tom Zanussi @ 2020-02-24 15:12 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
Hi Sebastian,
Thanks for reviewing these..
On Mon, 2020-02-24 at 09:55 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-02-21 15:24:45 [-0600], zanussi@kernel.org wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> >
> > v4.14.170-rt75-rc1 stable review patch.
> > If anyone has any objections, please let me know.
>
> Please don't apply this for the reasons I mentioned in
> https://lkml.kernel.org/r/20200122084352.nyqnlfaumjgnvgih@linutronix.de
>
Yeah, I missed this comment on the 4.19 series (somehow the patch
itself shows as 'not found' in the archives).
Will drop.
Thanks,
Tom
> I guess they still apply (haven't checked).
>
> Sebastian
* Re: [PATCH RT 19/25] userfaultfd: Use a seqlock instead of seqcount
2020-02-24 9:03 ` Sebastian Andrzej Siewior
@ 2020-02-24 15:14 ` Tom Zanussi
2020-02-24 16:17 ` Steven Rostedt
1 sibling, 0 replies; 43+ messages in thread
From: Tom Zanussi @ 2020-02-24 15:14 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On Mon, 2020-02-24 at 10:03 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-02-21 15:24:47 [-0600], zanussi@kernel.org wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> >
> > v4.14.170-rt75-rc1 stable review patch.
> > If anyone has any objections, please let me know.
>
> This is required but it is not part of the next "higher" tree
> (v4.19-RT). Which means if someone moves from v4.14-RT to the next tree
> (v4.19-RT in this case) that someone would have the bug again.
>
> Could you please wait with such patches, or did I miss the v4.19-RT
> tree with this change?
>
No, you didn't miss the 4.19 tree with this change - I got a little
ahead of 4.19 this time. Will drop all the patches ahead of 4.19.
Thanks,
Tom
> Sebastian
* Re: [PATCH RT 20/25] kmemleak: Cosmetic changes
2020-02-24 9:12 ` Sebastian Andrzej Siewior
@ 2020-02-24 15:18 ` Tom Zanussi
2020-02-24 15:52 ` Sebastian Andrzej Siewior
0 siblings, 1 reply; 43+ messages in thread
From: Tom Zanussi @ 2020-02-24 15:18 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On Mon, 2020-02-24 at 10:12 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-02-21 15:24:48 [-0600], zanussi@kernel.org wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> >
> > v4.14.170-rt75-rc1 stable review patch.
> > If anyone has any objections, please let me know.
>
> It makes no sense to apply this. I updated my patch in the RT queue to
> what has been sent (and later merged) upstream. Then I was forced to
> sync the non-rebase branch with the rebase branch. This is the result.
>
> What should be applied instead is
> fb2c57edcb943 ("kmemleak: Change the lock of kmemleak_object to
> raw_spinlock_t")
>
I did apply that patch (as patch 14/25 of this series). This patch
seemed like it was adding some comment bits missed for that one, which
is all it does.
Thanks,
Tom
> from the v4.19-RT branch.
>
> Sebastian
* Re: [PATCH RT 15/25] sched: migrate_enable: Use select_fallback_rq()
2020-02-24 9:43 ` Sebastian Andrzej Siewior
@ 2020-02-24 15:31 ` Tom Zanussi
2020-02-24 16:05 ` Sebastian Andrzej Siewior
0 siblings, 1 reply; 43+ messages in thread
From: Tom Zanussi @ 2020-02-24 15:31 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On Mon, 2020-02-24 at 10:43 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-02-21 15:24:43 [-0600], zanussi@kernel.org wrote:
> > From: Scott Wood <swood@redhat.com>
> >
> > v4.14.170-rt75-rc1 stable review patch.
> > If anyone has any objections, please let me know.
>
> This creates a bug which is fixed later via
> sched: migrate_enable: Busy loop until the migration request is
> completed
>
> So if you apply this, please take the bug fix, too. This is Steven's queue
> for reference:
> > [PATCH RT 22/30] sched: migrate_enable: Use select_fallback_rq()
>
> ^^ bug introduced
>
Hmm, it seemed from the comment on the 4.19 series that it was '24/32
sched: migrate_enable: Use stop_one_cpu_nowait()' that required 'sched:
migrate_enable: Busy loop until the migration request is
completed' as a bug fix.
https://lore.kernel.org/linux-rt-users/20200122083130.kuu3yppckhyjrr4u@linutronix.de/#t
I didn't take the stop_one_cpu_nowait() one, so didn't take the busy
loop one either.
Thanks,
Tom
> > [PATCH RT 23/30] sched: Lazy migrate_disable processing
> > [PATCH RT 24/30] sched: migrate_enable: Use stop_one_cpu_nowait()
> > [PATCH RT 25/30] Revert "ARM: Initialize split page table locks for vector page"
> > [PATCH RT 26/30] locking: Make spinlock_t and rwlock_t a RCU section on RT
> > [PATCH RT 27/30] sched/core: migrate_enable() must access takedown_cpu_task on !HOTPLUG_CPU
> > [PATCH RT 28/30] lib/smp_processor_id: Adjust check_preemption_disabled()
> > [PATCH RT 29/30] sched: migrate_enable: Busy loop until the migration request is completed
>
> ^^ bug fixed
>
> Sebastian
* Re: [PATCH RT 21/25] smp: Use smp_cond_func_t as type for the conditional function
2020-02-24 9:52 ` Sebastian Andrzej Siewior
@ 2020-02-24 15:34 ` Tom Zanussi
0 siblings, 0 replies; 43+ messages in thread
From: Tom Zanussi @ 2020-02-24 15:34 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On Mon, 2020-02-24 at 10:52 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-02-21 15:24:49 [-0600], zanussi@kernel.org wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> >
> > v4.14.170-rt75-rc1 stable review patch.
> > If anyone has any objections, please let me know.
>
> This alone makes no sense. I had this in the devel tree as part of a
> three patch series to remove a limitation in on_each_cpu_cond_mask().
> This does not apply to the v4.14 series due to lack of the
> on_each_cpu_cond_mask() function.
>
Yeah, I did drop the on_each_cpu_cond_mask() versions, but the typedef
itself seemed like a nice innocuous cleanup. Will drop though as it's
really unnecessary.
Thanks,
Tom
> Sebastian
* Re: [PATCH RT 20/25] kmemleak: Cosmetic changes
2020-02-24 15:18 ` Tom Zanussi
@ 2020-02-24 15:52 ` Sebastian Andrzej Siewior
0 siblings, 0 replies; 43+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 15:52 UTC (permalink / raw)
To: Tom Zanussi
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On 2020-02-24 09:18:23 [-0600], Tom Zanussi wrote:
> > What should be applied instead is
> > fb2c57edcb943 ("kmemleak: Change the lock of kmemleak_object to
> > raw_spinlock_t")
> >
>
> I did apply that patch (as patch 14/25 of this series). This patch
> seemed like it was adding some comment bits mised for that one, which
> is all it does.
Bah. So I saw that (#14/25) and considered it okay, but while looking at this
patch I was comparing against v4.14.164-rt73 and forgot about it…
> Thanks,
>
> Tom
Sebastian.
* Re: [PATCH RT 15/25] sched: migrate_enable: Use select_fallback_rq()
2020-02-24 15:31 ` Tom Zanussi
@ 2020-02-24 16:05 ` Sebastian Andrzej Siewior
2020-02-24 22:15 ` Scott Wood
0 siblings, 1 reply; 43+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-02-24 16:05 UTC (permalink / raw)
To: Tom Zanussi
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On 2020-02-24 09:31:06 [-0600], Tom Zanussi wrote:
> On Mon, 2020-02-24 at 10:43 +0100, Sebastian Andrzej Siewior wrote:
> > On 2020-02-21 15:24:43 [-0600], zanussi@kernel.org wrote:
> > > From: Scott Wood <swood@redhat.com>
> > >
> > > v4.14.170-rt75-rc1 stable review patch.
> > > If anyone has any objections, please let me know.
> >
> > This creates a bug which is fixed later via
> > sched: migrate_enable: Busy loop until the migration request is
> > completed
> >
> > So if you apply this, please take the bug fix, too. This is Steven's queue
> > for reference:
> > > [PATCH RT 22/30] sched: migrate_enable: Use select_fallback_rq()
> >
> > ^^ bug introduced
> >
>
> Hmm, it seemed from the comment on the 4.19 series that it was '24/32
> sched: migrate_enable: Use stop_one_cpu_nowait()' that required 'sched:
> migrate_enable: Busy loop until the migration request is
> completed' as a bug fix.
>
> https://lore.kernel.org/linux-rt-users/20200122083130.kuu3yppckhyjrr4u@linutronix.de/#t
>
> I didn't take the stop_one_cpu_nowait() one, so didn't take the busy
> loop one either.
Ach, it was the different WARN_ON() then. So this might not introduce
any bug then. *Might*.
Steven backported the whole pile and you took just this one patch. The
whole set was tested in devel and uncovered a problem which was fixed
later. Taking only a part *may* expose other problems, or it *may* be fine.
Steven, any opinion on your side?
> Thanks,
>
> Tom
Sebastian
* Re: [PATCH RT 19/25] userfaultfd: Use a seqlock instead of seqcount
2020-02-24 9:03 ` Sebastian Andrzej Siewior
2020-02-24 15:14 ` Tom Zanussi
@ 2020-02-24 16:17 ` Steven Rostedt
1 sibling, 0 replies; 43+ messages in thread
From: Steven Rostedt @ 2020-02-24 16:17 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: zanussi, LKML, linux-rt-users, Thomas Gleixner, Carsten Emde,
John Kacur, Daniel Wagner
On Mon, 24 Feb 2020 10:03:17 +0100
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> On 2020-02-21 15:24:47 [-0600], zanussi@kernel.org wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> >
> > v4.14.170-rt75-rc1 stable review patch.
> > If anyone has any objections, please let me know.
>
> This is required but it is not part of the next "higher" tree
> (v4.19-RT). Which means if someone moves from v4.14-RT to the next tree
> (v4.19-RT in this case) that someone would have the bug again.
>
> Could you please wait with such patches, or did I miss the v4.19-RT
> tree with this change?
>
No, I'm just behind in backporting patches.
We need to work on synchronizing better what gets backported. :-/
-- Steve
* Re: [PATCH RT 15/25] sched: migrate_enable: Use select_fallback_rq()
2020-02-24 16:05 ` Sebastian Andrzej Siewior
@ 2020-02-24 22:15 ` Scott Wood
0 siblings, 0 replies; 43+ messages in thread
From: Scott Wood @ 2020-02-24 22:15 UTC (permalink / raw)
To: Sebastian Andrzej Siewior, Tom Zanussi
Cc: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
On Mon, 2020-02-24 at 17:05 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-02-24 09:31:06 [-0600], Tom Zanussi wrote:
> > On Mon, 2020-02-24 at 10:43 +0100, Sebastian Andrzej Siewior wrote:
> > > On 2020-02-21 15:24:43 [-0600], zanussi@kernel.org wrote:
> > > > From: Scott Wood <swood@redhat.com>
> > > >
> > > > v4.14.170-rt75-rc1 stable review patch.
> > > > If anyone has any objections, please let me know.
> > >
> > > This creates a bug which is fixed later via
> > > sched: migrate_enable: Busy loop until the migration request is
> > > completed
> > >
> > > So if you apply this, please take the bug fix, too. This is Steven's queue
> > > for reference:
> > > > [PATCH RT 22/30] sched: migrate_enable: Use select_fallback_rq()
> > >
> > > ^^ bug introduced
> > >
> >
> > Hmm, it seemed from the comment on the 4.19 series that it was '24/32
> > sched: migrate_enable: Use stop_one_cpu_nowait()' that required 'sched:
> > migrate_enable: Busy loop until the migration request is
> > completed' as a bug fix.
> >
> >
> > https://lore.kernel.org/linux-rt-users/20200122083130.kuu3yppckhyjrr4u@linutronix.de/#t
> >
> > I didn't take the stop_one_cpu_nowait() one, so didn't take the busy
> > loop one either.
>
> Ach, it was the different WARN_ON() then. So this might not introduce
> any bug then. *Might*.
> Steven backported the whole pile and you took just this one patch. The
> whole set was tested in devel and uncovered a problem which was fixed
> later. Taking only a part *may* expose other problems, or it *may* be fine.
Taking up to this patch should be OK (well, you still have the
current->state clobbering, but it shouldn't introduce any new known bugs).
The busy loop patch itself has a follow-up fix though (in theory the busy
loop could deadlock): 2dcd94b443c5dcbc ("sched: migrate_enable: Use per-cpu
cpu_stop_work"), which should be considered for the v4.19-rt stable tree
since it has the busy loop patch.
-Scott
* Re: [PATCH RT 03/25] sched/deadline: Ensure inactive_timer runs in hardirq context
2020-02-24 8:33 ` Sebastian Andrzej Siewior
@ 2020-02-25 14:50 ` Juri Lelli
0 siblings, 0 replies; 43+ messages in thread
From: Juri Lelli @ 2020-02-25 14:50 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: zanussi, LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
Carsten Emde, John Kacur, Daniel Wagner
Hi,
On 24/02/20 09:33, Sebastian Andrzej Siewior wrote:
> On 2020-02-21 15:24:31 [-0600], zanussi@kernel.org wrote:
> > [ Upstream commit ba94e7aed7405c58251b1380e6e7d73aa8284b41 ]
> >
> > SCHED_DEADLINE inactive timer needs to run in hardirq context (as
> > dl_task_timer already does) on PREEMPT_RT
>
> The message says "dl_task_timer already does" but this is not true for
> v4.14 as it still runs in softirq context on RT. v4.19 has this either
> via
> https://lkml.kernel.org/r/20190731103715.4047-1-juri.lelli@redhat.com
>
> or the patch which got merged upstream.
>
> Juri, I guess we want this for v4.14, too?
Indeed. v4.14 needs this as well.
Thanks for noticing!
Best,
Juri
Thread overview: 43+ messages
2020-02-21 21:24 [PATCH RT 00/25] Linux v4.14.170-rt75-rc1 zanussi
2020-02-21 21:24 ` [PATCH RT 01/25] Fix wrong-variable use in irq_set_affinity_notifier zanussi
2020-02-21 21:24 ` [PATCH RT 02/25] x86: preempt: Check preemption level before looking at lazy-preempt zanussi
2020-02-21 21:24 ` [PATCH RT 03/25] sched/deadline: Ensure inactive_timer runs in hardirq context zanussi
2020-02-24 8:33 ` Sebastian Andrzej Siewior
2020-02-25 14:50 ` Juri Lelli
2020-02-21 21:24 ` [PATCH RT 04/25] i2c: hix5hd2: Remove IRQF_ONESHOT zanussi
2020-02-21 21:24 ` [PATCH RT 05/25] i2c: exynos5: " zanussi
2020-02-21 21:24 ` [PATCH RT 06/25] sched: migrate_dis/enable: Use sleeping_lock…() to annotate sleeping points zanussi
2020-02-21 21:24 ` [PATCH RT 07/25] sched: __set_cpus_allowed_ptr: Check cpus_mask, not cpus_ptr zanussi
2020-02-21 21:24 ` [PATCH RT 08/25] sched: Remove dead __migrate_disabled() check zanussi
2020-02-21 21:24 ` [PATCH RT 09/25] sched: migrate disable: Protect cpus_ptr with lock zanussi
2020-02-21 21:24 ` [PATCH RT 10/25] lib/smp_processor_id: Don't use cpumask_equal() zanussi
2020-02-21 21:24 ` [PATCH RT 11/25] futex: Make the futex_hash_bucket spinlock_t again and bring back its old state zanussi
2020-02-21 21:24 ` [PATCH RT 12/25] locking/rtmutex: Clean ->pi_blocked_on in the error case zanussi
2020-02-21 21:24 ` [PATCH RT 13/25] lib/ubsan: Don't seralize UBSAN report zanussi
2020-02-21 21:24 ` [PATCH RT 14/25] kmemleak: Change the lock of kmemleak_object to raw_spinlock_t zanussi
2020-02-21 21:24 ` [PATCH RT 15/25] sched: migrate_enable: Use select_fallback_rq() zanussi
2020-02-24 9:43 ` Sebastian Andrzej Siewior
2020-02-24 15:31 ` Tom Zanussi
2020-02-24 16:05 ` Sebastian Andrzej Siewior
2020-02-24 22:15 ` Scott Wood
2020-02-21 21:24 ` [PATCH RT 16/25] Revert "ARM: Initialize split page table locks for vector page" zanussi
2020-02-21 21:24 ` [PATCH RT 17/25] x86/fpu: Don't cache access to fpu_fpregs_owner_ctx zanussi
2020-02-24 8:55 ` Sebastian Andrzej Siewior
2020-02-24 15:12 ` Tom Zanussi
2020-02-21 21:24 ` [PATCH RT 18/25] locking: Make spinlock_t and rwlock_t a RCU section on RT zanussi
2020-02-21 21:24 ` [PATCH RT 19/25] userfaultfd: Use a seqlock instead of seqcount zanussi
2020-02-24 9:03 ` Sebastian Andrzej Siewior
2020-02-24 15:14 ` Tom Zanussi
2020-02-24 16:17 ` Steven Rostedt
2020-02-21 21:24 ` [PATCH RT 20/25] kmemleak: Cosmetic changes zanussi
2020-02-24 9:12 ` Sebastian Andrzej Siewior
2020-02-24 15:18 ` Tom Zanussi
2020-02-24 15:52 ` Sebastian Andrzej Siewior
2020-02-21 21:24 ` [PATCH RT 21/25] smp: Use smp_cond_func_t as type for the conditional function zanussi
2020-02-24 9:52 ` Sebastian Andrzej Siewior
2020-02-24 15:34 ` Tom Zanussi
2020-02-21 21:24 ` [PATCH RT 22/25] mm/memcontrol: Move misplaced local_unlock_irqrestore() zanussi
2020-02-24 9:55 ` Sebastian Andrzej Siewior
2020-02-21 21:24 ` [PATCH RT 23/25] locallock: Include header for the `current' macro zanussi
2020-02-21 21:24 ` [PATCH RT 24/25] sched: Provide migrate_disable/enable() inlines zanussi
2020-02-21 21:24 ` [PATCH RT 25/25] Linux 4.14.170-rt75-rc1 zanussi