LKML Archive on lore.kernel.org
help / color / mirror / Atom feed
* [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection*
@ 2018-10-30 18:49 Tim Chen
  2018-10-30 18:49 ` [Patch v4 01/18] x86/speculation: Clean up spectre_v2_parse_cmdline() Tim Chen
                   ` (17 more replies)
  0 siblings, 18 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Thanks to the feedback from all reviewers.

The major change in this version is that I've updated defenses of all
threads of a process when there's a dumpability change on the process.

I've also added a patch to signal the need of a forced update of SPEC_CTRL
MSR on remote CPU. We could change the TIF flag affecting speculation
on a task running on remote CPU but not immediately update the MSR.
So we need to force the MSR update on the next context switch to make
sure that the MSR is in sync with the running task's flags.

Wonder if this needs to be extended to SSBD update.  It seems like SSBD's
update via prctl is on current task so it is not affected by the
remote CPU update issue, but I can be wrong.

Thomas also suggested grouping the x86 TIF_* flags according to their
functionality and I made a stab at it.  But this is not essential to
this patchset.  I can drop this if there are compatability reason not
to change the bit position associated with the TIF_* flags.

Patch 1 to 3 are clean up patches.
Patch 4 and 5 disable STIBP for enhacned IBRS.
Patch 6 to 12 reorganizes and clean up the code without affecting
 functionality for easier modification later.
Patch 13 introduces security hook to update defense mechanisms
for a process.
Patch 14 introduces the STIBP flag on a process to dynamically
 enable STIBP for that process.
Patch 15 introduces different modes to protect
 processes against Spectre v2 user space attack.
Patch 16 to 17 adds Spectre v2 user mode defenses on a per task basis. 
Patch 18 introduces prctl interface to restrict indirect
 branch speculation via prctl.
	      
Tim

Changes:
v4:
1. Extend STIBP update to all threads of a process changing
it dumpability.
2. Add logic to update SPEC_CTRL MSR on a remote CPU when TIF flags
affecting speculation changes for task running on the remote CPU.
3. Regroup x86 TIF_* flags according to their functions.
4. Various code clean up.

v3:
1. Add logic to skip STIBP when Enhanced IBRS is used.
2. Break up v2 patches into smaller logical patches. 
3. Fix bug in arch_set_dumpable that did not update SPEC_CTRL
MSR right away when according to task's STIBP flag clearing which
caused SITBP to be left on.
4. Various code clean up. 

v2:
1. Extend per process STIBP to AMD cpus
2. Add prctl option to control per process indirect branch speculation
3. Bug fixes and cleanups 

Jiri's patchset to harden Spectre v2 user space mitigation makes IBPB
and STIBP in use for Spectre v2 mitigation on all processes.  IBPB will
be issued for switching to an application that's not ptraceable by the
previous application and STIBP will be always turned on.

However, leaving STIBP on all the time is expensive for certain
applications that have frequent indirect branches. One such application
is perlbench in the SpecInt Rate 2006 test suite which shows a
21% reduction in throughput.  Other application like bzip2 in
the same test suite with  minimal indirct branches have
only a 0.7% reduction in throughput. IBPB will also impose
overhead during context switches.

Users may not wish to incur performance overhead from IBPB and STIBP for
general non security sensitive processes and use these mitigations only
for security sensitive processes.

This patchset provides a process property based lite protection mode.
In this mode, IBPB and STIBP mitigation are applied only to security
sensitive non-dumpable processes and processes that users want to protect
by having indirect branch speculation disabled via PRCTL.  So the overhead
from IBPB and STIBP are avoided for low security processes that don't
require extra protection.


Tim Chen (18):
  x86/speculation: Clean up spectre_v2_parse_cmdline()
  x86/speculation: Remove unnecessary ret variable in cpu_show_common()
  x86/speculation: Reorganize cpu_show_common()
  x86/speculation: Add X86_FEATURE_USE_IBRS_ENHANCED
  x86/speculation: Disable STIBP when enhanced IBRS is in use
  smt: Create cpu_smt_enabled static key for SMT specific code
  x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key
  sched: Deprecate sched_smt_present and use cpu_smt_enabled static key
  x86/speculation: Rename SSBD update functions
  x86/speculation: Reorganize speculation control MSRs update
  x86/speculation: Update comment on TIF_SSBD
  x86: Group thread info flags by functionality
  security: Update security level of a process when modifying its
    dumpability
  x86/speculation: Turn on or off STIBP according to a task's TIF_STIBP
  x86/speculation: Add Spectre v2 app to app protection modes
  x86/speculation: Enable STIBP to protect security sensitive tasks
  x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  x86/speculation: Create PRCTL interface to restrict indirect branch
    speculation

 Documentation/admin-guide/kernel-parameters.txt |  21 ++
 Documentation/userspace-api/spec_ctrl.rst       |   9 +
 arch/x86/include/asm/cpufeatures.h              |   1 +
 arch/x86/include/asm/msr-index.h                |   6 +-
 arch/x86/include/asm/nospec-branch.h            |   9 +
 arch/x86/include/asm/spec-ctrl.h                |  18 +-
 arch/x86/include/asm/thread_info.h              | 104 ++++----
 arch/x86/kernel/cpu/bugs.c                      | 309 +++++++++++++++++++++---
 arch/x86/kernel/process.c                       |  76 +++++-
 arch/x86/kvm/vmx.c                              |   2 +-
 arch/x86/mm/tlb.c                               |  23 +-
 fs/exec.c                                       |   2 +
 include/linux/cpu.h                             |   1 +
 include/linux/sched.h                           |   9 +
 include/linux/security.h                        |   6 +
 include/uapi/linux/prctl.h                      |   1 +
 kernel/cpu.c                                    |  12 +-
 kernel/cred.c                                   |   5 +-
 kernel/sched/core.c                             |  12 -
 kernel/sched/fair.c                             |   6 +-
 kernel/sched/sched.h                            |   4 +-
 kernel/sys.c                                    |   1 +
 security/security.c                             |  31 +++
 tools/include/uapi/linux/prctl.h                |   1 +
 24 files changed, 553 insertions(+), 116 deletions(-)

-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 01/18] x86/speculation: Clean up spectre_v2_parse_cmdline()
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 02/18] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Tim Chen
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Remove unnecessary else statement in spectre_v2_parse_cmdline()
to save a indentation level.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index fe32103..bbda9b6 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -283,22 +283,21 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 
 	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
 		return SPECTRE_V2_CMD_NONE;
-	else {
-		ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
-		if (ret < 0)
-			return SPECTRE_V2_CMD_AUTO;
 
-		for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
-			if (!match_option(arg, ret, mitigation_options[i].option))
-				continue;
-			cmd = mitigation_options[i].cmd;
-			break;
-		}
+	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_CMD_AUTO;
 
-		if (i >= ARRAY_SIZE(mitigation_options)) {
-			pr_err("unknown option (%s). Switching to AUTO select\n", arg);
-			return SPECTRE_V2_CMD_AUTO;
-		}
+	for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
+		if (!match_option(arg, ret, mitigation_options[i].option))
+			continue;
+		cmd = mitigation_options[i].cmd;
+		break;
+	}
+
+	if (i >= ARRAY_SIZE(mitigation_options)) {
+		pr_err("unknown option (%s). Switching to AUTO select\n", arg);
+		return SPECTRE_V2_CMD_AUTO;
 	}
 
 	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 02/18] x86/speculation: Remove unnecessary ret variable in cpu_show_common()
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
  2018-10-30 18:49 ` [Patch v4 01/18] x86/speculation: Clean up spectre_v2_parse_cmdline() Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 03/18] x86/speculation: Reorganize cpu_show_common() Tim Chen
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Remove unnecessary ret variable in cpu_show_common() to make the
code more concise.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bbda9b6..6021b17 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -854,8 +854,6 @@ static ssize_t l1tf_show_state(char *buf)
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
-	int ret;
-
 	if (!boot_cpu_has_bug(bug))
 		return sprintf(buf, "Not affected\n");
 
@@ -873,13 +871,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 
 	case X86_BUG_SPECTRE_V2:
-		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
 			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
-		return ret;
 
 	case X86_BUG_SPEC_STORE_BYPASS:
 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 03/18] x86/speculation: Reorganize cpu_show_common()
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
  2018-10-30 18:49 ` [Patch v4 01/18] x86/speculation: Clean up spectre_v2_parse_cmdline() Tim Chen
  2018-10-30 18:49 ` [Patch v4 02/18] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-11-03 18:07   ` Thomas Gleixner
  2018-10-30 18:49 ` [Patch v4 04/18] x86/speculation: Add X86_FEATURE_USE_IBRS_ENHANCED Tim Chen
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Extract the logic to show IBPB, STIBP usages in cpu_show_common()
into helper functions.

Later patches will add other userspace Spectre v2 mitigation modes.
This patch makes it easy to show IBPB and STIBP
usage scenario according to the mitigation mode.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6021b17..af456f4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -851,6 +851,22 @@ static ssize_t l1tf_show_state(char *buf)
 }
 #endif
 
+static char *stibp_state(void)
+{
+	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
+		return ", STIBP";
+	else
+		return "";
+}
+
+static char *ibpb_state(void)
+{
+	if (boot_cpu_has(X86_FEATURE_USE_IBPB))
+		return ", IBPB";
+	else
+		return "";
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -872,9 +888,8 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 
 	case X86_BUG_SPECTRE_V2:
 		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
-			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
-			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+			       ibpb_state(), stibp_state(),
 			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
 
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 04/18] x86/speculation: Add X86_FEATURE_USE_IBRS_ENHANCED
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (2 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 03/18] x86/speculation: Reorganize cpu_show_common() Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 05/18] x86/speculation: Disable STIBP when enhanced IBRS is in use Tim Chen
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Add X86_FEATURE_USE_IBRS_ENHANCED feature bit to indicate
enhanced IBRS is used for Spectre V2 mitigation.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/include/asm/cpufeatures.h | 1 +
 arch/x86/kernel/cpu/bugs.c         | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 89a048c..612c2a3 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -221,6 +221,7 @@
 #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
 #define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
 #define X86_FEATURE_IBRS_ENHANCED	( 7*32+30) /* Enhanced IBRS */
+#define X86_FEATURE_USE_IBRS_ENHANCED	( 7*32+31) /* "" Enhanced IBRS enabled */
 
 /* Virtualization flags: Linux defined, word 8 */
 #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index af456f4..6095c9d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -386,6 +386,7 @@ static void __init spectre_v2_select_mitigation(void)
 			/* Force it so VMEXIT will restore correctly */
 			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+			setup_force_cpu_cap(X86_FEATURE_USE_IBRS_ENHANCED);
 			goto specv2_set_mode;
 		}
 		if (IS_ENABLED(CONFIG_RETPOLINE))
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 05/18] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (3 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 04/18] x86/speculation: Add X86_FEATURE_USE_IBRS_ENHANCED Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 06/18] smt: Create cpu_smt_enabled static key for SMT specific code Tim Chen
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

If enhanced IBRS is in use, a malicious application running on a CPU is
unable to exploit Spectre v2 vulnerability on applications running on
its hyperthread sibling.  Using STIBP to mitigate Spectre v2 against
exploit from hyperthread sibling is redundant.

This patch disables STIBP when enhanced IBRS is used for migitigating
Spectre v2.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6095c9d..eb07ab6 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -324,9 +324,17 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 
 static bool stibp_needed(void)
 {
+	/*
+	 * Determine if STIBP should be always on.
+	 * Using enhanced IBRS makes using STIBP unnecessary.
+	 */
+
 	if (spectre_v2_enabled == SPECTRE_V2_NONE)
 		return false;
 
+	if (static_cpu_has(X86_FEATURE_USE_IBRS_ENHANCED))
+		return false;
+
 	if (!boot_cpu_has(X86_FEATURE_STIBP))
 		return false;
 
@@ -854,6 +862,9 @@ static ssize_t l1tf_show_state(char *buf)
 
 static char *stibp_state(void)
 {
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return "";
+
 	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
 		return ", STIBP";
 	else
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 06/18] smt: Create cpu_smt_enabled static key for SMT specific code
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (4 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 05/18] x86/speculation: Disable STIBP when enhanced IBRS is in use Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 07/18] x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key Tim Chen
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Create cpu_smt_enabled static key used for enabling
SMT specific code paths.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 include/linux/cpu.h |  1 +
 kernel/cpu.c        | 12 ++++++++++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 218df7f..b54f085 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -188,5 +188,6 @@ static inline void cpu_smt_disable(bool force) { }
 static inline void cpu_smt_check_topology_early(void) { }
 static inline void cpu_smt_check_topology(void) { }
 #endif
+DECLARE_STATIC_KEY_TRUE(cpu_smt_enabled);
 
 #endif /* _LINUX_CPU_H_ */
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 3adecda..03170c8 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -349,6 +349,8 @@ EXPORT_SYMBOL_GPL(cpu_hotplug_enable);
 #ifdef CONFIG_HOTPLUG_SMT
 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
 EXPORT_SYMBOL_GPL(cpu_smt_control);
+DEFINE_STATIC_KEY_TRUE(cpu_smt_enabled);
+EXPORT_SYMBOL_GPL(cpu_smt_enabled);
 
 static bool cpu_smt_available __read_mostly;
 
@@ -364,6 +366,7 @@ void __init cpu_smt_disable(bool force)
 	} else {
 		cpu_smt_control = CPU_SMT_DISABLED;
 	}
+	static_branch_disable(&cpu_smt_enabled);
 }
 
 /*
@@ -373,8 +376,10 @@ void __init cpu_smt_disable(bool force)
  */
 void __init cpu_smt_check_topology_early(void)
 {
-	if (!topology_smt_supported())
+	if (!topology_smt_supported()) {
 		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
+		static_branch_disable(&cpu_smt_enabled);
+	}
 }
 
 /*
@@ -386,8 +391,10 @@ void __init cpu_smt_check_topology_early(void)
  */
 void __init cpu_smt_check_topology(void)
 {
-	if (!cpu_smt_available)
+	if (!cpu_smt_available) {
 		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
+		static_branch_disable(&cpu_smt_enabled);
+	}
 }
 
 static int __init smt_cmdline_disable(char *str)
@@ -2072,6 +2079,7 @@ static int cpuhp_smt_enable(void)
 
 	cpu_maps_update_begin();
 	cpu_smt_control = CPU_SMT_ENABLED;
+	static_branch_enable(&cpu_smt_enabled);
 	arch_smt_update();
 	for_each_present_cpu(cpu) {
 		/* Skip online CPUs and CPUs on offline nodes */
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 07/18] x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (5 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 06/18] smt: Create cpu_smt_enabled static key for SMT specific code Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-11-03 18:29   ` Thomas Gleixner
  2018-10-30 18:49 ` [Patch v4 08/18] sched: Deprecate sched_smt_present and use " Tim Chen
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Change the SMT code paths check from using cpu_smt_control to
cpu_smt_enabled static key.  This saves a branching check.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 arch/x86/kvm/vmx.c         | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index eb07ab6..32d962e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -355,7 +355,7 @@ void arch_smt_update(void)
 
 	mutex_lock(&spec_ctrl_mutex);
 	mask = x86_spec_ctrl_base;
-	if (cpu_smt_control == CPU_SMT_ENABLED)
+	if (static_branch_likely(&cpu_smt_enabled))
 		mask |= SPEC_CTRL_STIBP;
 	else
 		mask &= ~SPEC_CTRL_STIBP;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 533a327..8ec0ea3 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -11072,7 +11072,7 @@ static int vmx_vm_init(struct kvm *kvm)
 			 * Warn upon starting the first VM in a potentially
 			 * insecure environment.
 			 */
-			if (cpu_smt_control == CPU_SMT_ENABLED)
+			if (static_branch_likely(&cpu_smt_enabled))
 				pr_warn_once(L1TF_MSG_SMT);
 			if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER)
 				pr_warn_once(L1TF_MSG_L1D);
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 08/18] sched: Deprecate sched_smt_present and use cpu_smt_enabled static key
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (6 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 07/18] x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-11-03 18:20   ` Thomas Gleixner
  2018-10-30 18:49 ` [Patch v4 09/18] x86/speculation: Rename SSBD update functions Tim Chen
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

The cpu_smt_enabled static key serves identical purpose as cpu_smt_enabled
to enable SMT specific code.

This patch replaces sched_smt_present in the scheduler with
cpu_smt_enabled and deprecate sched_smt_present.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 kernel/sched/core.c  | 12 ------------
 kernel/sched/fair.c  |  6 ++----
 kernel/sched/sched.h |  4 +---
 3 files changed, 3 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 625bc98..a06b157 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5739,18 +5739,6 @@ int sched_cpu_activate(unsigned int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	struct rq_flags rf;
 
-#ifdef CONFIG_SCHED_SMT
-	/*
-	 * The sched_smt_present static key needs to be evaluated on every
-	 * hotplug event because at boot time SMT might be disabled when
-	 * the number of booted CPUs is limited.
-	 *
-	 * If then later a sibling gets hotplugged, then the key would stay
-	 * off and SMT scheduling would never be functional.
-	 */
-	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
-		static_branch_enable_cpuslocked(&sched_smt_present);
-#endif
 	set_cpu_active(cpu, true);
 
 	if (sched_smp_initialized) {
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b39fb59..4899bb1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5839,8 +5839,6 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 }
 
 #ifdef CONFIG_SCHED_SMT
-DEFINE_STATIC_KEY_FALSE(sched_smt_present);
-
 static inline void set_idle_cores(int cpu, int val)
 {
 	struct sched_domain_shared *sds;
@@ -5900,7 +5898,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	int core, cpu;
 
-	if (!static_branch_likely(&sched_smt_present))
+	if (!static_branch_likely(&cpu_smt_enabled))
 		return -1;
 
 	if (!test_idle_cores(target, false))
@@ -5936,7 +5934,7 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
 {
 	int cpu;
 
-	if (!static_branch_likely(&sched_smt_present))
+	if (!static_branch_likely(&cpu_smt_enabled))
 		return -1;
 
 	for_each_cpu(cpu, cpu_smt_mask(target)) {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4a2e8ca..5adba57 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -928,13 +928,11 @@ static inline int cpu_of(struct rq *rq)
 
 #ifdef CONFIG_SCHED_SMT
 
-extern struct static_key_false sched_smt_present;
-
 extern void __update_idle_core(struct rq *rq);
 
 static inline void update_idle_core(struct rq *rq)
 {
-	if (static_branch_unlikely(&sched_smt_present))
+	if (static_branch_likely(&cpu_smt_enabled))
 		__update_idle_core(rq);
 }
 
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 09/18] x86/speculation: Rename SSBD update functions
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (7 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 08/18] sched: Deprecate sched_smt_present and use " Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 10/18] x86/speculation: Reorganize speculation control MSRs update Tim Chen
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

During context switch, the SSBD bit in SPEC_CTRL MSR is set depending
on the TIF_SSBD flag in the current running task.

In later patches, update of additional bits in SPEC_CTRL MSR are added.
The SPEC_CTRL MSR update functions should get rid of the speculative
store names as they will not be limited to SSBD update.

This patch renames the speculative store update functions to a more
generic name.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/include/asm/spec-ctrl.h |  6 +++---
 arch/x86/kernel/cpu/bugs.c       |  4 ++--
 arch/x86/kernel/process.c        | 12 ++++++------
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index ae7c2c5..8e2f841 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -70,11 +70,11 @@ extern void speculative_store_bypass_ht_init(void);
 static inline void speculative_store_bypass_ht_init(void) { }
 #endif
 
-extern void speculative_store_bypass_update(unsigned long tif);
+extern void speculation_ctrl_update(unsigned long tif);
 
-static inline void speculative_store_bypass_update_current(void)
+static inline void speculation_ctrl_update_current(void)
 {
-	speculative_store_bypass_update(current_thread_info()->flags);
+	speculation_ctrl_update(current_thread_info()->flags);
 }
 
 #endif
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 32d962e..190d6eb 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -202,7 +202,7 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 		tif = setguest ? ssbd_spec_ctrl_to_tif(guestval) :
 				 ssbd_spec_ctrl_to_tif(hostval);
 
-		speculative_store_bypass_update(tif);
+		speculation_ctrl_update(tif);
 	}
 }
 EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);
@@ -644,7 +644,7 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 	 * mitigation until it is next scheduled.
 	 */
 	if (task == current && update)
-		speculative_store_bypass_update_current();
+		speculation_ctrl_update_current();
 
 	return 0;
 }
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index c93fcfd..8aa4960 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -395,27 +395,27 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
 	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
 }
 
-static __always_inline void intel_set_ssb_state(unsigned long tifn)
+static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 {
 	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
 
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+static __always_inline void __speculation_ctrl_update(unsigned long tifn)
 {
 	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
 		amd_set_ssb_virt_state(tifn);
 	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 		amd_set_core_ssb_state(tifn);
 	else
-		intel_set_ssb_state(tifn);
+		spec_ctrl_update_msr(tifn);
 }
 
-void speculative_store_bypass_update(unsigned long tif)
+void speculation_ctrl_update(unsigned long tif)
 {
 	preempt_disable();
-	__speculative_store_bypass_update(tif);
+	__speculation_ctrl_update(tif);
 	preempt_enable();
 }
 
@@ -452,7 +452,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
 	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculative_store_bypass_update(tifn);
+		__speculation_ctrl_update(tifn);
 }
 
 /*
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 10/18] x86/speculation: Reorganize speculation control MSRs update
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (8 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 09/18] x86/speculation: Rename SSBD update functions Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 11/18] x86/speculation: Update comment on TIF_SSBD Tim Chen
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

This patch consolidates checks for prev and next task's flags change
relevant for speculation control MSRs update to __speculation_ctrl_update
function.

This makes it easy to pick the right speculation control MSR,
and the bits in the MSRs that need to be updated based on
TIF_* flags changes.

Originally-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/process.c | 44 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 8aa4960..74bef48 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -397,25 +397,48 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
 
 static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 {
-	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
+	u64 msr = x86_spec_ctrl_base;
+
+	/*
+	 * If X86_FEATURE_SSBD is not set, the SSBD
+	 * bit is not to be touched.
+	 */
+	if (static_cpu_has(X86_FEATURE_SSBD))
+		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculation_ctrl_update(unsigned long tifn)
-{
-	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
-		amd_set_ssb_virt_state(tifn);
-	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
-		amd_set_core_ssb_state(tifn);
-	else
+/*
+ * Update the MSRs managing speculation control during context switch.
+ *
+ * tifp: previous task's thread flags
+ * tifn: next task's thread flags
+ */
+static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+						      unsigned long tifn)
+{
+	bool updmsr = false;
+
+	/* If TIF_SSBD is different, select the proper mitigation method */
+	if ((tifp ^ tifn) & _TIF_SSBD) {
+		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
+			amd_set_ssb_virt_state(tifn);
+		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+			amd_set_core_ssb_state(tifn);
+		else if (static_cpu_has(X86_FEATURE_SSBD))
+			updmsr  = true;
+	}
+
+	if (updmsr)
 		spec_ctrl_update_msr(tifn);
 }
 
 void speculation_ctrl_update(unsigned long tif)
 {
+	/* Forced update. Make sure all relevant TIF flags are different */
 	preempt_disable();
-	__speculation_ctrl_update(tif);
+	__speculation_ctrl_update(~tif, tif);
 	preempt_enable();
 }
 
@@ -451,8 +474,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
-	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculation_ctrl_update(tifn);
+	__speculation_ctrl_update(tifp, tifn);
 }
 
 /*
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 11/18] x86/speculation: Update comment on TIF_SSBD
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (9 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 10/18] x86/speculation: Reorganize speculation control MSRs update Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 12/18] x86: Group thread info flags by functionality Tim Chen
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Update comment on TIF_SSBD to use the correct name
"Speculative store bypass disable".

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/include/asm/thread_info.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 2ff2a30..523c69e 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -79,7 +79,7 @@ struct thread_info {
 #define TIF_SIGPENDING		2	/* signal pending */
 #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
 #define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
-#define TIF_SSBD			5	/* Reduced data speculation */
+#define TIF_SSBD		5	/* Speculative store bypass disable */
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 12/18] x86: Group thread info flags by functionality
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (10 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 11/18] x86/speculation: Update comment on TIF_SSBD Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 13/18] security: Update security level of a process when modifying its dumpability Tim Chen
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

The thread info flags are currently randomly distributed in the header
file arch/x86/include/asm/thread_info.h.

This patch groups the TIF flags according to these functionalities:
operation mode, syscall mode, pending work, task status and security
status.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/include/asm/thread_info.h | 93 ++++++++++++++++++++++----------------
 1 file changed, 54 insertions(+), 39 deletions(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 523c69e..60798a0 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -74,58 +74,73 @@ struct thread_info {
  * - these are process state flags that various assembly files
  *   may need to access
  */
-#define TIF_SYSCALL_TRACE	0	/* syscall trace active */
-#define TIF_NOTIFY_RESUME	1	/* callback before returning to user */
-#define TIF_SIGPENDING		2	/* signal pending */
-#define TIF_NEED_RESCHED	3	/* rescheduling necessary */
-#define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
-#define TIF_SSBD		5	/* Speculative store bypass disable */
-#define TIF_SYSCALL_EMU		6	/* syscall emulation active */
-#define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
-#define TIF_SECCOMP		8	/* secure computing */
-#define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
-#define TIF_UPROBE		12	/* breakpointed or singlestepping */
-#define TIF_PATCH_PENDING	13	/* pending live patching update */
-#define TIF_NOCPUID		15	/* CPUID is not accessible in userland */
-#define TIF_NOTSC		16	/* TSC is not accessible in userland */
-#define TIF_IA32		17	/* IA32 compatibility process */
-#define TIF_NOHZ		19	/* in adaptive nohz mode */
-#define TIF_MEMDIE		20	/* is terminating due to OOM killer */
-#define TIF_POLLING_NRFLAG	21	/* idle is polling for TIF_NEED_RESCHED */
-#define TIF_IO_BITMAP		22	/* uses I/O bitmap */
-#define TIF_FORCED_TF		24	/* true if TF in eflags artificially */
-#define TIF_BLOCKSTEP		25	/* set when we want DEBUGCTLMSR_BTF */
-#define TIF_LAZY_MMU_UPDATES	27	/* task is updating the mmu lazily */
-#define TIF_SYSCALL_TRACEPOINT	28	/* syscall tracepoint instrumentation */
-#define TIF_ADDR32		29	/* 32-bit address space on 64 bits */
-#define TIF_X32			30	/* 32-bit native x86-64 binary */
-#define TIF_FSCHECK		31	/* Check FS is USER_DS on return */
+
+/* Operation mode */
+#define TIF_NOCPUID		0	/* CPUID is not accessible in userland */
+#define TIF_NOTSC		1	/* TSC is not accessible in userland */
+#define TIF_IA32		2	/* IA32 compatibility process */
+#define TIF_NOHZ		3	/* In adaptive nohz mode */
+#define TIF_ADDR32		4	/* 32-bit address space on 64 bits */
+#define TIF_X32			5	/* 32-bit native x86-64 binary */
+
+/* Syscall mode */
+#define TIF_SYSCALL_TRACE	6	/* Syscall trace active */
+#define TIF_SYSCALL_EMU		7	/* Syscall emulation active */
+#define TIF_SYSCALL_AUDIT	8	/* Syscall auditing active */
+#define TIF_SYSCALL_TRACEPOINT	9	/* Syscall tracepoint instrumentation */
+
+/* Pending work */
+#define TIF_NOTIFY_RESUME	10	/* Callback before returning to user */
+#define TIF_SIGPENDING		11	/* Signal pending */
+#define TIF_NEED_RESCHED	12	/* Rescheduling necessary */
+#define TIF_SINGLESTEP		13	/* Reenable singlestep on user return*/
+#define TIF_USER_RETURN_NOTIFY	14	/* Notify kernel of userspace return */
+#define TIF_PATCH_PENDING	15	/* Pending live patching update */
+#define TIF_FSCHECK		16	/* Check FS is USER_DS on return */
+
+/* Task status */
+#define TIF_UPROBE		18	/* Breakpointed or singlestepping */
+#define TIF_MEMDIE		19	/* Is terminating due to OOM killer */
+#define TIF_POLLING_NRFLAG	20	/* Idle is polling for TIF_NEED_RESCHED */
+#define TIF_IO_BITMAP		21	/* Uses I/O bitmap */
+#define TIF_FORCED_TF		22	/* True if TF in eflags artificially */
+#define TIF_BLOCKSTEP		23	/* Set when we want DEBUGCTLMSR_BTF */
+#define TIF_LAZY_MMU_UPDATES	24	/* Task is updating the mmu lazily */
+
+/* Security mode */
+#define TIF_SECCOMP		25	/* Secure computing */
+#define TIF_SSBD		26	/* Speculative store bypass disable */
+
+#define _TIF_NOCPUID		(1 << TIF_NOCPUID)
+#define _TIF_NOTSC		(1 << TIF_NOTSC)
+#define _TIF_IA32		(1 << TIF_IA32)
+#define _TIF_NOHZ		(1 << TIF_NOHZ)
+#define _TIF_ADDR32		(1 << TIF_ADDR32)
+#define _TIF_X32		(1 << TIF_X32)
 
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+#define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
+#define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
+#define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
+
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
-#define _TIF_SSBD		(1 << TIF_SSBD)
-#define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
-#define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
-#define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
-#define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
-#define _TIF_NOCPUID		(1 << TIF_NOCPUID)
-#define _TIF_NOTSC		(1 << TIF_NOTSC)
-#define _TIF_IA32		(1 << TIF_IA32)
-#define _TIF_NOHZ		(1 << TIF_NOHZ)
+#define _TIF_FSCHECK		(1 << TIF_FSCHECK)
+
+#define _TIF_UPROBE		(1 << TIF_UPROBE)
+#define _TIF_MEMDIE		(1 << TIF_MEMDIE)
 #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
 #define _TIF_IO_BITMAP		(1 << TIF_IO_BITMAP)
 #define _TIF_FORCED_TF		(1 << TIF_FORCED_TF)
 #define _TIF_BLOCKSTEP		(1 << TIF_BLOCKSTEP)
 #define _TIF_LAZY_MMU_UPDATES	(1 << TIF_LAZY_MMU_UPDATES)
-#define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
-#define _TIF_ADDR32		(1 << TIF_ADDR32)
-#define _TIF_X32		(1 << TIF_X32)
-#define _TIF_FSCHECK		(1 << TIF_FSCHECK)
+
+#define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+#define _TIF_SSBD		(1 << TIF_SSBD)
 
 /*
  * work to do in syscall_trace_enter().  Also includes TIF_NOHZ for
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 13/18] security: Update security level of a process when modifying its dumpability
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (11 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 12/18] x86: Group thread info flags by functionality Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 20:57   ` Schaufler, Casey
  2018-10-30 18:49 ` [Patch v4 14/18] x86/speculation: Turn on or off STIBP according to a task's TIF_STIBP Tim Chen
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

When a process is made non-dumpable, the action implies a higher level
of security implicitly as its memory is imposed with access restriction.

A call to update_process_security() is added to update security defenses
according to a process's dumpability and its implied security level.

Architecture specific defenses is erected for threads in the process
by calling arch_set_security(task, SECURITY_LEVEL_HIGH) or the defenses
relaxed via arch_set_security(task, SECURITY_LEVEL_NORMAL).  Such defenses
may incur extra overhead and is reserved for tasks needing high security.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 fs/exec.c                |  2 ++
 include/linux/security.h |  6 ++++++
 kernel/cred.c            |  5 ++++-
 kernel/sys.c             |  1 +
 security/security.c      | 31 +++++++++++++++++++++++++++++++
 5 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/fs/exec.c b/fs/exec.c
index 1ebf6e5..e70c8a7 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1366,6 +1366,8 @@ void setup_new_exec(struct linux_binprm * bprm)
 	else
 		set_dumpable(current->mm, SUID_DUMP_USER);
 
+	update_process_security(current);
+
 	arch_setup_new_exec();
 	perf_event_exec();
 	__set_task_comm(current, kbasename(bprm->filename), true);
diff --git a/include/linux/security.h b/include/linux/security.h
index 75f4156..469d05f 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -61,6 +61,12 @@ struct mm_struct;
 /* LSM Agnostic defines for sb_set_mnt_opts */
 #define SECURITY_LSM_NATIVE_LABELS	1
 
+/* Security level */
+#define SECURITY_NORMAL	0
+#define SECURITY_HIGH	1
+
+extern int update_process_security(struct task_struct *task);
+
 struct ctl_table;
 struct audit_krule;
 struct user_namespace;
diff --git a/kernel/cred.c b/kernel/cred.c
index ecf0365..0806a74 100644
--- a/kernel/cred.c
+++ b/kernel/cred.c
@@ -19,6 +19,7 @@
 #include <linux/security.h>
 #include <linux/binfmts.h>
 #include <linux/cn_proc.h>
+#include <linux/security.h>
 
 #if 0
 #define kdebug(FMT, ...)						\
@@ -445,8 +446,10 @@ int commit_creds(struct cred *new)
 	    !uid_eq(old->fsuid, new->fsuid) ||
 	    !gid_eq(old->fsgid, new->fsgid) ||
 	    !cred_cap_issubset(old, new)) {
-		if (task->mm)
+		if (task->mm) {
 			set_dumpable(task->mm, suid_dumpable);
+			update_process_security(task);
+		}
 		task->pdeath_signal = 0;
 		smp_wmb();
 	}
diff --git a/kernel/sys.c b/kernel/sys.c
index cf5c675..c6f179a 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2293,6 +2293,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 			break;
 		}
 		set_dumpable(me->mm, arg2);
+		update_process_security(me);
 		break;
 
 	case PR_SET_UNALIGN:
diff --git a/security/security.c b/security/security.c
index 736e78d..12460f2 100644
--- a/security/security.c
+++ b/security/security.c
@@ -28,6 +28,8 @@
 #include <linux/personality.h>
 #include <linux/backing-dev.h>
 #include <linux/string.h>
+#include <linux/coredump.h>
+#include <linux/sched/signal.h>
 #include <net/flow.h>
 
 #include <trace/events/initcall.h>
@@ -1353,6 +1355,35 @@ int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen)
 }
 EXPORT_SYMBOL(security_inode_getsecctx);
 
+void __weak arch_set_security(struct task_struct *task,
+			      unsigned int security_level)
+{
+}
+
+int update_process_security(struct task_struct *task)
+{
+	unsigned long flags;
+	struct task_struct *t;
+	int security_level;
+
+	if (!task->mm)
+		return -EINVAL;
+
+	if (!lock_task_sighand(task, &flags))
+		return -ESRCH;
+
+	if (get_dumpable(task->mm) != SUID_DUMP_USER)
+		security_level = SECURITY_HIGH;
+	else
+		security_level = SECURITY_NORMAL;
+
+	for_each_thread(task, t)
+		arch_set_security(task, security_level);
+
+	unlock_task_sighand(task, &flags);
+	return 0;
+}
+
 #ifdef CONFIG_SECURITY_NETWORK
 
 int security_unix_stream_connect(struct sock *sock, struct sock *other, struct sock *newsk)
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 14/18] x86/speculation: Turn on or off STIBP according to a task's TIF_STIBP
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (12 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 13/18] security: Update security level of a process when modifying its dumpability Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 15/18] x86/speculation: Add Spectre v2 app to app protection modes Tim Chen
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

This patch creates the mechanism to apply STIBP protection on
a per task basis.

A new TIF_STIBP flag is created.  Tasks needing STIBP would have the
TIF_STIBP flag applied.  During context switch time, this flag is checked
and the STIBP bit in SPEC_CTRL MSR is updated according to changes in
this flag between previous and next tasks.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/include/asm/msr-index.h   |  6 +++++-
 arch/x86/include/asm/spec-ctrl.h   | 12 ++++++++++++
 arch/x86/include/asm/thread_info.h |  5 ++++-
 arch/x86/kernel/process.c          | 10 +++++++++-
 4 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 4731f0c..bd19452 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -41,7 +41,11 @@
 
 #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
 #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
-#define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
+#define SPEC_CTRL_STIBP_SHIFT		1          /*
+						    * Single Thread Indirect Branch
+						    * Predictor (STIBP) bit
+						    */
+#define SPEC_CTRL_STIBP			(1 << SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */
 #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
 #define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)   /* Speculative Store Bypass Disable */
 
diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index 8e2f841..b593779 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -53,12 +53,24 @@ static inline u64 ssbd_tif_to_spec_ctrl(u64 tifn)
 	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
+{
+	BUILD_BUG_ON(TIF_STIBP < SPEC_CTRL_STIBP_SHIFT);
+	return (tifn & _TIF_STIBP) >> (TIF_STIBP - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
 {
 	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
 	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl)
+{
+	BUILD_BUG_ON(TIF_STIBP < SPEC_CTRL_STIBP_SHIFT);
+	return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_STIBP - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
 {
 	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 60798a0..4f6a7a9 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -110,6 +110,7 @@ struct thread_info {
 /* Security mode */
 #define TIF_SECCOMP		25	/* Secure computing */
 #define TIF_SSBD		26	/* Speculative store bypass disable */
+#define TIF_STIBP		27	/* Single thread indirect branch speculation */
 
 #define _TIF_NOCPUID		(1 << TIF_NOCPUID)
 #define _TIF_NOTSC		(1 << TIF_NOTSC)
@@ -141,6 +142,7 @@ struct thread_info {
 
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_SSBD		(1 << TIF_SSBD)
+#define _TIF_STIBP		(1 << TIF_STIBP)
 
 /*
  * work to do in syscall_trace_enter().  Also includes TIF_NOHZ for
@@ -161,7 +163,8 @@ struct thread_info {
 
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW							\
-	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
+	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
+	 _TIF_SSBD|_TIF_STIBP)
 
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 74bef48..943e90d 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -406,6 +406,14 @@ static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 	if (static_cpu_has(X86_FEATURE_SSBD))
 		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
+	/*
+	 * Need STIBP defense against Spectre v2 attack
+	 * if SMT is in use and enhanced IBRS is unsupported.
+	 */
+	if (static_branch_likely(&cpu_smt_enabled) &&
+	    !static_cpu_has(X86_FEATURE_USE_IBRS_ENHANCED))
+		msr |= stibp_tif_to_spec_ctrl(tifn);
+
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
@@ -418,7 +426,7 @@ static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 						      unsigned long tifn)
 {
-	bool updmsr = false;
+	bool updmsr = !!((tifp ^ tifn) & _TIF_STIBP);
 
 	/* If TIF_SSBD is different, select the proper mitigation method */
 	if ((tifp ^ tifn) & _TIF_SSBD) {
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 15/18] x86/speculation: Add Spectre v2 app to app protection modes
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (13 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 14/18] x86/speculation: Turn on or off STIBP according to a task's TIF_STIBP Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 18:49 ` [Patch v4 16/18] x86/speculation: Enable STIBP to protect security sensitive tasks Tim Chen
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

This patch adds protection modes for Spectre v2 mitigations
against attacks on user processes.  There are three modes:

	strict mode:
	In this mode, IBPB and STIBP are deployed full
	time to protect all processes.

	lite mode:
	In this mode, IBPB and STIBP are only deployed on
	processes marked with TIF_STIBP flag.

	none mode:
	In this mode, no mitigations are deployed.

The protection mode can be specified by the spectre_v2_app2app
boot parameter with the following semantics:

spectre_v2_app2app=
	off    - Turn off mitigation
	lite   - Protect processes which are marked non-dumpable
	strict - Protect all processes
	auto   - Kernel selects the mode

	Not specifying this option is equivalent to
	spectre_v2_app2app=auto.

	Setting spectre_v2=off will also turn off this mitigation.

	Setting spectre_v2=on implies unconditionally enabling
	this mitigation.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  18 +++
 arch/x86/include/asm/nospec-branch.h            |   9 ++
 arch/x86/kernel/cpu/bugs.c                      | 153 +++++++++++++++++++++---
 arch/x86/mm/tlb.c                               |  23 +++-
 4 files changed, 183 insertions(+), 20 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 64a3bf5..5e7028e 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4186,6 +4186,24 @@
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
 
+	spectre_v2_app2app=
+			[X86] Control mitigation of Spectre variant 2
+		        application to application (indirect branch speculation)
+			vulnerability.
+
+			off    - Unconditionally disable mitigations
+			lite   - Protect processes which are marked non-dumpable
+			strict - Protect all processes
+			auto   - Kernel selects the mode
+
+			Not specifying this option is equivalent to
+			spectre_v2_app2app=auto.
+
+			Setting spectre_v2=off will also turn off this mitigation.
+
+			Setting spectre_v2=on implies unconditionally enabling
+			this mitigation.
+
 	spec_store_bypass_disable=
 			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
 			(Speculative Store Bypass vulnerability)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index fd2a8c1..c59a6c4 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -3,6 +3,7 @@
 #ifndef _ASM_X86_NOSPEC_BRANCH_H_
 #define _ASM_X86_NOSPEC_BRANCH_H_
 
+#include <linux/static_key.h>
 #include <asm/alternative.h>
 #include <asm/alternative-asm.h>
 #include <asm/cpufeatures.h>
@@ -217,6 +218,12 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS_ENHANCED,
 };
 
+enum spectre_v2_app2app_mitigation {
+	SPECTRE_V2_APP2APP_NONE,
+	SPECTRE_V2_APP2APP_LITE,
+	SPECTRE_V2_APP2APP_STRICT,
+};
+
 /* The Speculative Store Bypass disable variants */
 enum ssb_mitigation {
 	SPEC_STORE_BYPASS_NONE,
@@ -228,6 +235,8 @@ enum ssb_mitigation {
 extern char __indirect_thunk_start[];
 extern char __indirect_thunk_end[];
 
+DECLARE_STATIC_KEY_FALSE(spectre_v2_app_lite);
+
 /*
  * On VMEXIT we must ensure that no RSB predictions learned in the guest
  * can be followed in the host, by overwriting the RSB completely. Both
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 190d6eb..54f4675 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -133,6 +133,14 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_RETPOLINE_AMD,
 };
 
+enum spectre_v2_app2app_mitigation_cmd {
+	SPECTRE_V2_APP2APP_CMD_NONE,
+	SPECTRE_V2_APP2APP_CMD_AUTO,
+	SPECTRE_V2_APP2APP_CMD_FORCE,
+	SPECTRE_V2_APP2APP_CMD_LITE,
+	SPECTRE_V2_APP2APP_CMD_STRICT,
+};
+
 static const char *spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
 	[SPECTRE_V2_RETPOLINE_MINIMAL]		= "Vulnerable: Minimal generic ASM retpoline",
@@ -142,12 +150,24 @@ static const char *spectre_v2_strings[] = {
 	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
 };
 
+static const char *spectre_v2_app2app_strings[] = {
+	[SPECTRE_V2_APP2APP_NONE]   = "App-App Vulnerable",
+	[SPECTRE_V2_APP2APP_LITE]   = "App-App Mitigation: Protect non-dumpable process",
+	[SPECTRE_V2_APP2APP_STRICT] = "App-App Mitigation: Full app to app attack protection",
+};
+
+DEFINE_STATIC_KEY_FALSE(spectre_v2_app_lite);
+EXPORT_SYMBOL_GPL(spectre_v2_app_lite);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V2 : " fmt
 
 static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
 	SPECTRE_V2_NONE;
 
+static enum spectre_v2_app2app_mitigation
+	spectre_v2_app2app_enabled __ro_after_init = SPECTRE_V2_APP2APP_NONE;
+
 void
 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 {
@@ -169,6 +189,9 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 		    static_cpu_has(X86_FEATURE_AMD_SSBD))
 			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
 
+		if (static_branch_unlikely(&spectre_v2_app_lite))
+			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
+
 		if (hostval != guestval) {
 			msrval = setguest ? guestval : hostval;
 			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
@@ -275,6 +298,66 @@ static const struct {
 	{ "auto",              SPECTRE_V2_CMD_AUTO,              false },
 };
 
+static const struct {
+	const char *option;
+	enum spectre_v2_app2app_mitigation_cmd cmd;
+	bool secure;
+} app2app_options[] = {
+	{ "off",	SPECTRE_V2_APP2APP_CMD_NONE,   false },
+	{ "lite",	SPECTRE_V2_APP2APP_CMD_LITE,   false },
+	{ "strict",	SPECTRE_V2_APP2APP_CMD_STRICT, false },
+	{ "auto",	SPECTRE_V2_APP2APP_CMD_AUTO,   false },
+	/*
+	 * The "on" option is kept as last entry. It is implied by
+	 * spectre_v2=on boot parameter and it is not checked
+	 * in spectre_v2_app2app boot parameter.
+	 */
+	{ "on",		SPECTRE_V2_APP2APP_CMD_FORCE,  true  },
+};
+
+static enum spectre_v2_app2app_mitigation_cmd __init
+	    spectre_v2_parse_app2app_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
+{
+	enum spectre_v2_app2app_mitigation_cmd cmd;
+	char arg[20];
+	int ret, i;
+
+	if (v2_cmd == SPECTRE_V2_CMD_FORCE) {
+		cmd = SPECTRE_V2_APP2APP_CMD_FORCE;
+		goto show_cmd;
+	}
+
+	cmd = SPECTRE_V2_APP2APP_CMD_AUTO;
+	ret = cmdline_find_option(boot_command_line, "spectre_v2_app2app",
+				  arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_APP2APP_CMD_AUTO;
+
+	/*
+	 * Don't check the last entry for forced mitigation. It is infered from
+	 * v2_cmd == SPECTRE_V2_CMD_FORCE
+	 */
+	for (i = 0; i < ARRAY_SIZE(app2app_options)-1; i++) {
+		if (!match_option(arg, ret, app2app_options[i].option))
+			continue;
+		cmd = app2app_options[i].cmd;
+		break;
+	}
+
+	if (i >= ARRAY_SIZE(app2app_options)) {
+		pr_err("unknown app to app protection option (%s). Switching to AUTO select\n", arg);
+		return SPECTRE_V2_APP2APP_CMD_AUTO;
+	}
+
+show_cmd:
+	if (app2app_options[i].secure)
+		spec2_print_if_secure(app2app_options[i].option);
+	else
+		spec2_print_if_insecure(app2app_options[i].option);
+
+	return cmd;
+}
+
 static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 {
 	char arg[20];
@@ -327,14 +410,19 @@ static bool stibp_needed(void)
 	/*
 	 * Determine if STIBP should be always on.
 	 * Using enhanced IBRS makes using STIBP unnecessary.
+	 * For lite option, STIBP is used only for task with
+	 * TIF_STIBP flag. STIBP is not always on for that case.
 	 */
 
-	if (spectre_v2_enabled == SPECTRE_V2_NONE)
+	if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
 		return false;
 
 	if (static_cpu_has(X86_FEATURE_USE_IBRS_ENHANCED))
 		return false;
 
+	if (static_branch_unlikely(&spectre_v2_app_lite))
+		return false;
+
 	if (!boot_cpu_has(X86_FEATURE_STIBP))
 		return false;
 
@@ -374,6 +462,8 @@ static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
 	enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
+	enum spectre_v2_app2app_mitigation_cmd app2app_cmd;
+	enum spectre_v2_app2app_mitigation app2app_mode;
 
 	/*
 	 * If the CPU is not affected and the command line mode is NONE or AUTO
@@ -449,12 +539,6 @@ static void __init spectre_v2_select_mitigation(void)
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	/* Initialize Indirect Branch Prediction Barrier if supported */
-	if (boot_cpu_has(X86_FEATURE_IBPB)) {
-		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
-		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
-	}
-
 	/*
 	 * Retpoline means the kernel is safe because it has no indirect
 	 * branches. Enhanced IBRS protects firmware too, so, enable restricted
@@ -471,6 +555,43 @@ static void __init spectre_v2_select_mitigation(void)
 		pr_info("Enabling Restricted Speculation for firmware calls\n");
 	}
 
+	app2app_mode = SPECTRE_V2_APP2APP_NONE;
+	if (!boot_cpu_has(X86_FEATURE_IBPB) ||
+	    !boot_cpu_has(X86_FEATURE_STIBP))
+		goto set_app2app_mode;
+
+	app2app_cmd = spectre_v2_parse_app2app_cmdline(cmd);
+
+	switch (app2app_cmd) {
+	case SPECTRE_V2_APP2APP_CMD_NONE:
+		break;
+
+	case SPECTRE_V2_APP2APP_CMD_LITE:
+	case SPECTRE_V2_APP2APP_CMD_AUTO:
+		app2app_mode = SPECTRE_V2_APP2APP_LITE;
+		break;
+
+	case SPECTRE_V2_APP2APP_CMD_STRICT:
+	case SPECTRE_V2_APP2APP_CMD_FORCE:
+		app2app_mode = SPECTRE_V2_APP2APP_STRICT;
+		break;
+	}
+
+	/*
+	 * Initialize Indirect Branch Prediction Barrier if supported
+	 * and not disabled explicitly
+	 */
+	if (app2app_mode != SPECTRE_V2_APP2APP_NONE) {
+		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
+	}
+
+set_app2app_mode:
+	spectre_v2_app2app_enabled = app2app_mode;
+	pr_info("%s\n", spectre_v2_app2app_strings[app2app_mode]);
+	if (app2app_mode == SPECTRE_V2_APP2APP_LITE)
+		static_branch_enable(&spectre_v2_app_lite);
+
 	/* Enable STIBP if appropriate */
 	arch_smt_update();
 }
@@ -862,21 +983,23 @@ static ssize_t l1tf_show_state(char *buf)
 
 static char *stibp_state(void)
 {
-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ||
+	    spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
 		return "";
-
-	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
-		return ", STIBP";
+	else if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_LITE)
+		return ", STIBP-lite";
 	else
-		return "";
+		return ", STIBP-strict";
 }
 
 static char *ibpb_state(void)
 {
-	if (boot_cpu_has(X86_FEATURE_USE_IBPB))
-		return ", IBPB";
-	else
+	if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
 		return "";
+	else if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_LITE)
+		return ", IBPB-lite";
+	else
+		return ", IBPB-strict";
 }
 
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 073b8df..65d8c1c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -184,14 +184,27 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
 static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
 {
 	/*
-	 * Check if the current (previous) task has access to the memory
-	 * of the @tsk (next) task. If access is denied, make sure to
-	 * issue a IBPB to stop user->user Spectre-v2 attacks.
+	 * Don't issue IBPB when switching to kernel threads or staying in the
+	 * same mm context.
+	 */
+	if (!tsk || !tsk->mm || tsk->mm->context.ctx_id == last_ctx_id)
+		return false;
+
+	/*
+	 * If lite protection mode is enabled, check the STIBP thread flag.
+	 *
+	 * Otherwise check if the current (previous) task has access to the
+	 * the memory of the @tsk (next) task for strict app to app protection.
+	 * If access is denied, make sure to issue a IBPB to stop user->user
+	 * Spectre-v2 attacks.
 	 *
 	 * Note: __ptrace_may_access() returns 0 or -ERRNO.
 	 */
-	return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
-		ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));
+
+	if (static_branch_unlikely(&spectre_v2_app_lite))
+		return test_tsk_thread_flag(tsk, TIF_STIBP);
+	else
+		return ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB);
 }
 
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 16/18] x86/speculation: Enable STIBP to protect security sensitive tasks
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (14 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 15/18] x86/speculation: Add Spectre v2 app to app protection modes Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-10-30 21:07   ` Schaufler, Casey
  2018-10-30 18:49 ` [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs Tim Chen
  2018-10-30 18:49 ` [Patch v4 18/18] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Tim Chen
  17 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Enable STIBP defense on high security tasks.

For normal tasks, STIBP is unused so they are not impacted by overhead
from STIBP in lite protection mode.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 54f4675..b402b96 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -14,6 +14,8 @@
 #include <linux/module.h>
 #include <linux/nospec.h>
 #include <linux/prctl.h>
+#include <linux/sched/coredump.h>
+#include <linux/security.h>
 
 #include <asm/spec-ctrl.h>
 #include <asm/cmdline.h>
@@ -770,6 +772,37 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 	return 0;
 }
 
+static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
+{
+	bool update = false;
+
+	if (!static_branch_unlikely(&spectre_v2_app_lite))
+		return;
+
+	if (stibp_on)
+		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
+	else
+		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
+
+	if (!update)
+		return;
+
+	if (tsk == current)
+		speculation_ctrl_update_current();
+}
+
+void arch_set_security(struct task_struct *tsk, unsigned int value)
+{
+	if (value > SECURITY_HIGH)
+		return;
+
+	/* Update STIBP defenses */
+	if (value == SECURITY_HIGH)
+		set_task_stibp(tsk, true);
+	else
+		set_task_stibp(tsk, false);
+}
+
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl)
 {
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (15 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 16/18] x86/speculation: Enable STIBP to protect security sensitive tasks Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  2018-11-04 19:49   ` Thomas Gleixner
  2018-10-30 18:49 ` [Patch v4 18/18] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Tim Chen
  17 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

The SPEC_CTRL MSR of a remote CPU cannot be updated immediately when
TIF_STIBP flag is changed on a task running on the remote CPU.

If next task's TIF_STIBP flag happened to be the same as the updated
TIF_STIBP on the previous task on the next context switch, the SPEC_CTRL
MSR update is missed as the SPEC_CTRL MSR update occurs only on flag
changes, and update of the SPEC_CTRL MSR did not happen while previous
task was running.

This patch creates TIF_UPDATE_SPEC_CTRL bit and set it along with
TIF_STIBP bit update for tasks running on remote CPU. This signals that
the SPEC_CTRL MSR has a pending forced update on the next context
switch.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/include/asm/thread_info.h |  6 +++++-
 arch/x86/kernel/cpu/bugs.c         |  2 ++
 arch/x86/kernel/process.c          | 22 +++++++++++++++++++++-
 3 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 4f6a7a9..7bdd097 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -97,6 +97,7 @@ struct thread_info {
 #define TIF_USER_RETURN_NOTIFY	14	/* Notify kernel of userspace return */
 #define TIF_PATCH_PENDING	15	/* Pending live patching update */
 #define TIF_FSCHECK		16	/* Check FS is USER_DS on return */
+#define TIF_UPDATE_SPEC_CTRL    17	/* Pending update of speculation control MSR */
 
 /* Task status */
 #define TIF_UPROBE		18	/* Breakpointed or singlestepping */
@@ -131,6 +132,7 @@ struct thread_info {
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
 #define _TIF_FSCHECK		(1 << TIF_FSCHECK)
+#define _TIF_UPDATE_SPEC_CTRL	(1 << TIF_UPDATE_SPEC_CTRL)
 
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
@@ -166,7 +168,9 @@ struct thread_info {
 	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
 	 _TIF_SSBD|_TIF_STIBP)
 
-#define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
+#define _TIF_WORK_CTXSW_PREV \
+	(_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY|_TIF_UPDATE_SPEC_CTRL)
+
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
 
 #define STACK_WARN		(THREAD_SIZE/8)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b402b96..1ba9cb5 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -789,6 +789,8 @@ static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
 
 	if (tsk == current)
 		speculation_ctrl_update_current();
+	else if (task_cpu(tsk) != smp_processor_id())
+		set_tsk_thread_flag(tsk, TIF_UPDATE_SPEC_CTRL);
 }
 
 void arch_set_security(struct task_struct *tsk, unsigned int value)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 943e90d..048b7f4b 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -426,7 +426,19 @@ static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 						      unsigned long tifn)
 {
-	bool updmsr = !!((tifp ^ tifn) & _TIF_STIBP);
+	/*
+	 * If TIF_UPDATE_SPEC_CTRL bit is set in tifp, speculation related
+	 * TIF flags have changed when previous task was running, but
+	 * SPEC_CTRL MSR has not been synchronized with TIF flag changes.
+	 * SPEC_CTRL MSR value can be out of date.
+	 *
+	 * Need to force update SPEC_CTRL MSR if TIF_UPDATE_SPEC_CTRL
+	 * bit in tifp is set.
+	 *
+	 * The TIF_UPDATE_SPEC_CTRL bit in tifn was cleared before calling
+	 * this function.
+	 */
+	bool updmsr = !!((tifp ^ tifn) & (_TIF_STIBP|_TIF_UPDATE_SPEC_CTRL));
 
 	/* If TIF_SSBD is different, select the proper mitigation method */
 	if ((tifp ^ tifn) & _TIF_SSBD) {
@@ -482,6 +494,14 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
+	if (tifp & _TIF_UPDATE_SPEC_CTRL)
+		clear_tsk_thread_flag(prev_p, TIF_UPDATE_SPEC_CTRL);
+
+	if (tifn & _TIF_UPDATE_SPEC_CTRL) {
+		clear_tsk_thread_flag(next_p, TIF_UPDATE_SPEC_CTRL);
+		tifn &= ~_TIF_UPDATE_SPEC_CTRL;
+	}
+
 	__speculation_ctrl_update(tifp, tifn);
 }
 
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Patch v4 18/18] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
                   ` (16 preceding siblings ...)
  2018-10-30 18:49 ` [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs Tim Chen
@ 2018-10-30 18:49 ` Tim Chen
  17 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-10-30 18:49 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Create PRCTL interface to restrict an application's indirect branch
speculation.  This will protect the application against spectre v2 attack
from another application.

Invocations:
Check indirect branch speculation status with
- prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, 0, 0, 0);

Enable indirect branch speculation with
- prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);

Disable indirect branch speculation with
- prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);

Force disable indirect branch speculation with
- prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);

See Documentation/userspace-api/spec_ctrl.rst.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  3 ++
 Documentation/userspace-api/spec_ctrl.rst       |  9 ++++
 arch/x86/kernel/cpu/bugs.c                      | 72 ++++++++++++++++++++++++-
 include/linux/sched.h                           |  9 ++++
 include/uapi/linux/prctl.h                      |  1 +
 tools/include/uapi/linux/prctl.h                |  1 +
 6 files changed, 93 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5e7028e..3c91805 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4193,6 +4193,9 @@
 
 			off    - Unconditionally disable mitigations
 			lite   - Protect processes which are marked non-dumpable
+				 and processes which have requested restricted
+				 indirect branch specuation via the
+				 PR_SET_SPECULATION_CTRL prctl().
 			strict - Protect all processes
 			auto   - Kernel selects the mode
 
diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst
index 32f3d55..8a4e268 100644
--- a/Documentation/userspace-api/spec_ctrl.rst
+++ b/Documentation/userspace-api/spec_ctrl.rst
@@ -92,3 +92,12 @@ Speculation misfeature controls
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0);
+
+- PR_SPEC_INDIR_BRANCH: Indirect Branch Speculation in User Processes
+                        (Mitigate Spectre V2 style attacks against user processes)
+
+  Invocations:
+   * prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, 0, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1ba9cb5..3834338 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -154,7 +154,7 @@ static const char *spectre_v2_strings[] = {
 
 static const char *spectre_v2_app2app_strings[] = {
 	[SPECTRE_V2_APP2APP_NONE]   = "App-App Vulnerable",
-	[SPECTRE_V2_APP2APP_LITE]   = "App-App Mitigation: Protect non-dumpable process",
+	[SPECTRE_V2_APP2APP_LITE]   = "App-App Mitigation: Protect non-dumpable and branch speculation restricted tasks",
 	[SPECTRE_V2_APP2APP_STRICT] = "App-App Mitigation: Full app to app attack protection",
 };
 
@@ -781,7 +781,7 @@ static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
 
 	if (stibp_on)
 		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
-	else
+	else if (!task_spec_indir_branch_disable(tsk))
 		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
 
 	if (!update)
@@ -805,12 +805,57 @@ void arch_set_security(struct task_struct *tsk, unsigned int value)
 		set_task_stibp(tsk, false);
 }
 
+static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
+	switch (ctrl) {
+	case PR_SPEC_ENABLE:
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
+			return 0;
+		/*
+		 * Indirect branch speculation is always disabled in
+		 * strict mode.
+		 */
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
+			return -EPERM;
+		task_clear_spec_indir_branch_disable(task);
+		set_task_stibp(task, false);
+		break;
+	case PR_SPEC_DISABLE:
+		/*
+		 * Indirect branch speculation is always allowed when
+		 * mitigation is force disabled.
+		 */
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
+			return -EPERM;
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
+			return 0;
+		task_set_spec_indir_branch_disable(task);
+		set_task_stibp(task, true);
+		break;
+	case PR_SPEC_FORCE_DISABLE:
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
+			return -EPERM;
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
+			return 0;
+		task_set_spec_indir_branch_disable(task);
+		task_set_spec_indir_branch_force_disable(task);
+		set_task_stibp(task, true);
+		break;
+	default:
+		return -ERANGE;
+	}
+
+	return 0;
+}
+
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_set(task, ctrl);
+	case PR_SPEC_INDIR_BRANCH:
+		return indir_branch_prctl_set(task, ctrl);
 	default:
 		return -ENODEV;
 	}
@@ -843,11 +888,34 @@ static int ssb_prctl_get(struct task_struct *task)
 	}
 }
 
+static int indir_branch_prctl_get(struct task_struct *task)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		return PR_SPEC_NOT_AFFECTED;
+
+	switch (spectre_v2_app2app_enabled) {
+	case SPECTRE_V2_APP2APP_NONE:
+		return PR_SPEC_ENABLE;
+	case SPECTRE_V2_APP2APP_LITE:
+		if (task_spec_indir_branch_force_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
+		if (test_tsk_thread_flag(task, TIF_STIBP))
+			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+	case SPECTRE_V2_APP2APP_STRICT:
+		return PR_SPEC_DISABLE;
+	default:
+		return PR_SPEC_NOT_AFFECTED;
+	}
+}
+
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_get(task);
+	case PR_SPEC_INDIR_BRANCH:
+		return indir_branch_prctl_get(task);
 	default:
 		return -ENODEV;
 	}
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 977cb57..bec1442 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1439,6 +1439,8 @@ static inline bool is_percpu_thread(void)
 #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
 #define PFA_SPEC_SSB_DISABLE		3	/* Speculative Store Bypass disabled */
 #define PFA_SPEC_SSB_FORCE_DISABLE	4	/* Speculative Store Bypass force disabled*/
+#define PFA_SPEC_INDIR_BRANCH_DISABLE	5	/* Indirect branch speculation restricted in apps */
+#define PFA_SPEC_INDIR_BRANCH_FORCE_DISABLE 6	/* Indirect branch speculation restricted in apps forced */
 
 #define TASK_PFA_TEST(name, func)					\
 	static inline bool task_##func(struct task_struct *p)		\
@@ -1470,6 +1472,13 @@ TASK_PFA_CLEAR(SPEC_SSB_DISABLE, spec_ssb_disable)
 TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 
+TASK_PFA_TEST(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+TASK_PFA_SET(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+TASK_PFA_CLEAR(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+
+TASK_PFA_TEST(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
+TASK_PFA_SET(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
+
 static inline void
 current_restore_flags(unsigned long orig_flags, unsigned long flags)
 {
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index c0d7ea0..577f2ca 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIR_BRANCH		1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)
diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index c0d7ea0..577f2ca 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIR_BRANCH		1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)
-- 
2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [Patch v4 13/18] security: Update security level of a process when modifying its dumpability
  2018-10-30 18:49 ` [Patch v4 13/18] security: Update security level of a process when modifying its dumpability Tim Chen
@ 2018-10-30 20:57   ` Schaufler, Casey
  2018-10-30 21:30     ` Tim Chen
  0 siblings, 1 reply; 43+ messages in thread
From: Schaufler, Casey @ 2018-10-30 20:57 UTC (permalink / raw)
  To: Tim Chen, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Hansen, Dave,
	Mallick, Asit K, Arjan van de Ven, Jon Masters, Waiman Long,
	linux-kernel, x86, Schaufler, Casey, linux-security-module

> -----Original Message-----
> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> Sent: Tuesday, October 30, 2018 11:49 AM
> To: Jiri Kosina <jikos@kernel.org>; Thomas Gleixner <tglx@linutronix.de>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>; Tom Lendacky
> <thomas.lendacky@amd.com>; Ingo Molnar <mingo@redhat.com>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@redhat.com>;
> Andrea Arcangeli <aarcange@redhat.com>; David Woodhouse
> <dwmw@amazon.co.uk>; Andi Kleen <ak@linux.intel.com>; Hansen, Dave
> <dave.hansen@intel.com>; Schaufler, Casey <casey.schaufler@intel.com>;
> Mallick, Asit K <asit.k.mallick@intel.com>; Arjan van de Ven
> <arjan@linux.intel.com>; Jon Masters <jcm@redhat.com>; Waiman Long
> <longman9394@gmail.com>; linux-kernel@vger.kernel.org; x86@kernel.org

Added LSM mail list to the CC:

> Subject: [Patch v4 13/18] security: Update security level of a process when
> modifying its dumpability

"Level" isn't a good description of security characteristics
as it has been overloaded in too many ways already. More
on that later.

> When a process is made non-dumpable, the action implies a higher level
> of security implicitly as its memory is imposed with access restriction.

How is this "higher level" manifest?

> A call to update_process_security() is added to update security defenses
> according to a process's dumpability and its implied security level.

Can you describe the set of security attributes that are implied by the
process' dumpability? I would think that the dumpability would be an
artifact of these attributes, not the other way around.

> Architecture specific defenses is erected for threads in the process

nit: s/is/are/

> by calling arch_set_security(task, SECURITY_LEVEL_HIGH) or the defenses
> relaxed via arch_set_security(task, SECURITY_LEVEL_NORMAL).  Such defenses
> may incur extra overhead and is reserved for tasks needing high security.

nit: s/is/are/

> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  fs/exec.c                |  2 ++
>  include/linux/security.h |  6 ++++++
>  kernel/cred.c            |  5 ++++-
>  kernel/sys.c             |  1 +
>  security/security.c      | 31 +++++++++++++++++++++++++++++++
>  5 files changed, 44 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/exec.c b/fs/exec.c
> index 1ebf6e5..e70c8a7 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -1366,6 +1366,8 @@ void setup_new_exec(struct linux_binprm * bprm)
>  	else
>  		set_dumpable(current->mm, SUID_DUMP_USER);
> 
> +	update_process_security(current);
> +
>  	arch_setup_new_exec();
>  	perf_event_exec();
>  	__set_task_comm(current, kbasename(bprm->filename), true);
> diff --git a/include/linux/security.h b/include/linux/security.h
> index 75f4156..469d05f 100644
> --- a/include/linux/security.h
> +++ b/include/linux/security.h
> @@ -61,6 +61,12 @@ struct mm_struct;
>  /* LSM Agnostic defines for sb_set_mnt_opts */
>  #define SECURITY_LSM_NATIVE_LABELS	1
> 
> +/* Security level */
> +#define SECURITY_NORMAL	0
> +#define SECURITY_HIGH	1

NAK: "NORMAL" and "HIGH" are meaningless in this context.
If you intend to differentiate between "dumpable" and "undumpable"
you could use those, although I would recommend something that
describes the reason the task is dumpable.

> +
> +extern int update_process_security(struct task_struct *task);
> +
>  struct ctl_table;
>  struct audit_krule;
>  struct user_namespace;
> diff --git a/kernel/cred.c b/kernel/cred.c
> index ecf0365..0806a74 100644
> --- a/kernel/cred.c
> +++ b/kernel/cred.c
> @@ -19,6 +19,7 @@
>  #include <linux/security.h>
>  #include <linux/binfmts.h>
>  #include <linux/cn_proc.h>
> +#include <linux/security.h>
> 
>  #if 0
>  #define kdebug(FMT, ...)						\
> @@ -445,8 +446,10 @@ int commit_creds(struct cred *new)
>  	    !uid_eq(old->fsuid, new->fsuid) ||
>  	    !gid_eq(old->fsgid, new->fsgid) ||
>  	    !cred_cap_issubset(old, new)) {
> -		if (task->mm)
> +		if (task->mm) {
>  			set_dumpable(task->mm, suid_dumpable);
> +			update_process_security(task);
> +		}
>  		task->pdeath_signal = 0;
>  		smp_wmb();
>  	}
> diff --git a/kernel/sys.c b/kernel/sys.c
> index cf5c675..c6f179a 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -2293,6 +2293,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long,
> arg2, unsigned long, arg3,
>  			break;
>  		}
>  		set_dumpable(me->mm, arg2);
> +		update_process_security(me);
>  		break;
> 
>  	case PR_SET_UNALIGN:
> diff --git a/security/security.c b/security/security.c
> index 736e78d..12460f2 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -28,6 +28,8 @@
>  #include <linux/personality.h>
>  #include <linux/backing-dev.h>
>  #include <linux/string.h>
> +#include <linux/coredump.h>
> +#include <linux/sched/signal.h>
>  #include <net/flow.h>
> 
>  #include <trace/events/initcall.h>
> @@ -1353,6 +1355,35 @@ int security_inode_getsecctx(struct inode *inode,
> void **ctx, u32 *ctxlen)
>  }
>  EXPORT_SYMBOL(security_inode_getsecctx);
> 
> +void __weak arch_set_security(struct task_struct *task,
> +			      unsigned int security_level)
> +{
> +}

This isn't an LSM hook and hence does not belong in this file.
arch_set_security() isn't descriptive, and is in fact a bad choice
as task_struct has a field "security". This function has nothing
to do with the task->security field, which is what I would expect
based on the name.

> +
> +int update_process_security(struct task_struct *task)

Again, this isn't an LSM hook and does not belong in this file.
Also again, "security" isn't descriptive in the name.

> +{
> +	unsigned long flags;
> +	struct task_struct *t;
> +	int security_level;
> +
> +	if (!task->mm)
> +		return -EINVAL;
> +
> +	if (!lock_task_sighand(task, &flags))
> +		return -ESRCH;
> +
> +	if (get_dumpable(task->mm) != SUID_DUMP_USER)
> +		security_level = SECURITY_HIGH;
> +	else
> +		security_level = SECURITY_NORMAL;
> +
> +	for_each_thread(task, t)
> +		arch_set_security(task, security_level);
> +
> +	unlock_task_sighand(task, &flags);
> +	return 0;
> +}
> +
>  #ifdef CONFIG_SECURITY_NETWORK
> 
>  int security_unix_stream_connect(struct sock *sock, struct sock *other, struct
> sock *newsk)
> --
> 2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [Patch v4 16/18] x86/speculation: Enable STIBP to protect security sensitive tasks
  2018-10-30 18:49 ` [Patch v4 16/18] x86/speculation: Enable STIBP to protect security sensitive tasks Tim Chen
@ 2018-10-30 21:07   ` Schaufler, Casey
  2018-10-30 21:34     ` Tim Chen
  0 siblings, 1 reply; 43+ messages in thread
From: Schaufler, Casey @ 2018-10-30 21:07 UTC (permalink / raw)
  To: Tim Chen, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Hansen, Dave,
	Mallick, Asit K, Arjan van de Ven, Jon Masters, Waiman Long,
	linux-kernel, x86

> -----Original Message-----
> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> Sent: Tuesday, October 30, 2018 11:49 AM
> To: Jiri Kosina <jikos@kernel.org>; Thomas Gleixner <tglx@linutronix.de>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>; Tom Lendacky
> <thomas.lendacky@amd.com>; Ingo Molnar <mingo@redhat.com>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@redhat.com>;
> Andrea Arcangeli <aarcange@redhat.com>; David Woodhouse
> <dwmw@amazon.co.uk>; Andi Kleen <ak@linux.intel.com>; Hansen, Dave
> <dave.hansen@intel.com>; Schaufler, Casey <casey.schaufler@intel.com>;
> Mallick, Asit K <asit.k.mallick@intel.com>; Arjan van de Ven
> <arjan@linux.intel.com>; Jon Masters <jcm@redhat.com>; Waiman Long
> <longman9394@gmail.com>; linux-kernel@vger.kernel.org; x86@kernel.org
> Subject: [Patch v4 16/18] x86/speculation: Enable STIBP to protect security
> sensitive tasks
> 
> Enable STIBP defense on high security tasks.
> 
> For normal tasks, STIBP is unused so they are not impacted by overhead
> from STIBP in lite protection mode.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/bugs.c | 33 +++++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 54f4675..b402b96 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -14,6 +14,8 @@
>  #include <linux/module.h>
>  #include <linux/nospec.h>
>  #include <linux/prctl.h>
> +#include <linux/sched/coredump.h>
> +#include <linux/security.h>
> 
>  #include <asm/spec-ctrl.h>
>  #include <asm/cmdline.h>
> @@ -770,6 +772,37 @@ static int ssb_prctl_set(struct task_struct *task,
> unsigned long ctrl)
>  	return 0;
>  }
> 
> +static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
> +{
> +	bool update = false;
> +
> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
> +		return;
> +
> +	if (stibp_on)
> +		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
> +	else
> +		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
> +
> +	if (!update)
> +		return;
> +
> +	if (tsk == current)
> +		speculation_ctrl_update_current();
> +}
> +
> +void arch_set_security(struct task_struct *tsk, unsigned int value)

In this context "security" isn't descriptive. arch_set_stibp_defenses()
would be better.

Since "value" should only ever have one of two values, and those
map directly to "true" or "false" this should be a bool, making the
code trivial:

void arch_set_stibp_defenses(struct task_struct *task, bool stibp)
{
	set_task_stibp(task, stibp);
}

Or perhaps arch_set_security() should go away, and the calling
code would call set_task_stibp() directly. Unless there is some compelling
reason for the abstractions.

> +{
> +	if (value > SECURITY_HIGH)
> +		return;
> +
> +	/* Update STIBP defenses */
> +	if (value == SECURITY_HIGH)
> +		set_task_stibp(tsk, true);
> +	else
> +		set_task_stibp(tsk, false);
> +}
> +
>  int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
>  			     unsigned long ctrl)
>  {
> --
> 2.9.4


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 13/18] security: Update security level of a process when modifying its dumpability
  2018-10-30 20:57   ` Schaufler, Casey
@ 2018-10-30 21:30     ` Tim Chen
  2018-10-30 21:53       ` Schaufler, Casey
  0 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-10-30 21:30 UTC (permalink / raw)
  To: Schaufler, Casey, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Hansen, Dave,
	Mallick, Asit K, Arjan van de Ven, Jon Masters, Waiman Long,
	linux-kernel, x86, linux-security-module

On 10/30/2018 01:57 PM, Schaufler, Casey wrote:

> 
> This isn't an LSM hook and hence does not belong in this file.
> arch_set_security() isn't descriptive, and is in fact a bad choice
> as task_struct has a field "security". This function has nothing
> to do with the task->security field, which is what I would expect
> based on the name.
> 

What file will be a logical place for this function?

>> +
>> +int update_process_security(struct task_struct *task)
> 
> Again, this isn't an LSM hook and does not belong in this file.
> Also again, "security" isn't descriptive in the name.
> 

Thanks.

Tim

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 16/18] x86/speculation: Enable STIBP to protect security sensitive tasks
  2018-10-30 21:07   ` Schaufler, Casey
@ 2018-10-30 21:34     ` Tim Chen
  2018-10-30 22:02       ` Schaufler, Casey
  0 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-10-30 21:34 UTC (permalink / raw)
  To: Schaufler, Casey, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Hansen, Dave,
	Mallick, Asit K, Arjan van de Ven, Jon Masters, Waiman Long,
	linux-kernel, x86

On 10/30/2018 02:07 PM, Schaufler, Casey wrote:
>> -----Original Message-----
>> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
>> Sent: Tuesday, October 30, 2018 11:49 AM
>> To: Jiri Kosina <jikos@kernel.org>; Thomas Gleixner <tglx@linutronix.de>
>> Cc: Tim Chen <tim.c.chen@linux.intel.com>; Tom Lendacky
>> <thomas.lendacky@amd.com>; Ingo Molnar <mingo@redhat.com>; Peter
>> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@redhat.com>;
>> Andrea Arcangeli <aarcange@redhat.com>; David Woodhouse
>> <dwmw@amazon.co.uk>; Andi Kleen <ak@linux.intel.com>; Hansen, Dave
>> <dave.hansen@intel.com>; Schaufler, Casey <casey.schaufler@intel.com>;
>> Mallick, Asit K <asit.k.mallick@intel.com>; Arjan van de Ven
>> <arjan@linux.intel.com>; Jon Masters <jcm@redhat.com>; Waiman Long
>> <longman9394@gmail.com>; linux-kernel@vger.kernel.org; x86@kernel.org
>> Subject: [Patch v4 16/18] x86/speculation: Enable STIBP to protect security
>> sensitive tasks
>>
>> Enable STIBP defense on high security tasks.
>>
>> For normal tasks, STIBP is unused so they are not impacted by overhead
>> from STIBP in lite protection mode.
>>
>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>> ---
>>  arch/x86/kernel/cpu/bugs.c | 33 +++++++++++++++++++++++++++++++++
>>  1 file changed, 33 insertions(+)
>>
>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> index 54f4675..b402b96 100644
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -14,6 +14,8 @@
>>  #include <linux/module.h>
>>  #include <linux/nospec.h>
>>  #include <linux/prctl.h>
>> +#include <linux/sched/coredump.h>
>> +#include <linux/security.h>
>>
>>  #include <asm/spec-ctrl.h>
>>  #include <asm/cmdline.h>
>> @@ -770,6 +772,37 @@ static int ssb_prctl_set(struct task_struct *task,
>> unsigned long ctrl)
>>  	return 0;
>>  }
>>
>> +static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
>> +{
>> +	bool update = false;
>> +
>> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
>> +		return;
>> +
>> +	if (stibp_on)
>> +		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
>> +	else
>> +		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
>> +
>> +	if (!update)
>> +		return;
>> +
>> +	if (tsk == current)
>> +		speculation_ctrl_update_current();
>> +}
>> +
>> +void arch_set_security(struct task_struct *tsk, unsigned int value)
> 
> In this context "security" isn't descriptive. arch_set_stibp_defenses()
> would be better.

A more generic name decoupled from STIBP will be preferable.  There
can other kind of security defenses to be erected in
the future.

Perhaps arch_set_mitigation?

Thanks.

Tim

> 
> Since "value" should only ever have one of two values, and those
> map directly to "true" or "false" this should be a bool, making the
> code trivial:
> 
> void arch_set_stibp_defenses(struct task_struct *task, bool stibp)
> {
> 	set_task_stibp(task, stibp);
> }
> 
> Or perhaps arch_set_security() should go away, and the calling
> code would call set_task_stibp() directly. Unless there is some compelling
> reason for the abstractions.
> 
>> +{
>> +	if (value > SECURITY_HIGH)
>> +		return;
>> +
>> +	/* Update STIBP defenses */
>> +	if (value == SECURITY_HIGH)
>> +		set_task_stibp(tsk, true);
>> +	else
>> +		set_task_stibp(tsk, false);
>> +}
>> +
>>  int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
>>  			     unsigned long ctrl)
>>  {
>> --
>> 2.9.4
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [Patch v4 13/18] security: Update security level of a process when modifying its dumpability
  2018-10-30 21:30     ` Tim Chen
@ 2018-10-30 21:53       ` Schaufler, Casey
  0 siblings, 0 replies; 43+ messages in thread
From: Schaufler, Casey @ 2018-10-30 21:53 UTC (permalink / raw)
  To: Tim Chen, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Hansen, Dave,
	Mallick, Asit K, Arjan van de Ven, Jon Masters, Waiman Long,
	linux-kernel, x86, Schaufler, Casey, linux-security-module

> -----Original Message-----
> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> Sent: Tuesday, October 30, 2018 2:31 PM
> To: Schaufler, Casey <casey.schaufler@intel.com>; Jiri Kosina
> <jikos@kernel.org>; Thomas Gleixner <tglx@linutronix.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>; Ingo Molnar
> <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Josh Poimboeuf
> <jpoimboe@redhat.com>; Andrea Arcangeli <aarcange@redhat.com>; David
> Woodhouse <dwmw@amazon.co.uk>; Andi Kleen <ak@linux.intel.com>;
> Hansen, Dave <dave.hansen@intel.com>; Mallick, Asit K
> <asit.k.mallick@intel.com>; Arjan van de Ven <arjan@linux.intel.com>; Jon
> Masters <jcm@redhat.com>; Waiman Long <longman9394@gmail.com>;
> linux-kernel@vger.kernel.org; x86@kernel.org; linux-security-module <linux-
> security-module@vger.kernel.org>
> Subject: Re: [Patch v4 13/18] security: Update security level of a process when
> modifying its dumpability
> 
> On 10/30/2018 01:57 PM, Schaufler, Casey wrote:
> 
> >
> > This isn't an LSM hook and hence does not belong in this file.
> > arch_set_security() isn't descriptive, and is in fact a bad choice
> > as task_struct has a field "security". This function has nothing
> > to do with the task->security field, which is what I would expect
> > based on the name.
> >
> 
> What file will be a logical place for this function?

kernel/cpu.c ? You're working with CPU localized mitigations, right?

You don't want it under security/ as that's all supposed to
be bits of the LSM infrastructure.

> >> +
> >> +int update_process_security(struct task_struct *task)
> >
> > Again, this isn't an LSM hook and does not belong in this file.
> > Also again, "security" isn't descriptive in the name.
> >
> 
> Thanks.
> 
> Tim

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [Patch v4 16/18] x86/speculation: Enable STIBP to protect security sensitive tasks
  2018-10-30 21:34     ` Tim Chen
@ 2018-10-30 22:02       ` Schaufler, Casey
  0 siblings, 0 replies; 43+ messages in thread
From: Schaufler, Casey @ 2018-10-30 22:02 UTC (permalink / raw)
  To: Tim Chen, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Hansen, Dave,
	Mallick, Asit K, Arjan van de Ven, Jon Masters, Waiman Long,
	linux-kernel, x86, Schaufler, Casey

> -----Original Message-----
> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> Sent: Tuesday, October 30, 2018 2:35 PM
> To: Schaufler, Casey <casey.schaufler@intel.com>; Jiri Kosina
> <jikos@kernel.org>; Thomas Gleixner <tglx@linutronix.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>; Ingo Molnar
> <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Josh Poimboeuf
> <jpoimboe@redhat.com>; Andrea Arcangeli <aarcange@redhat.com>; David
> Woodhouse <dwmw@amazon.co.uk>; Andi Kleen <ak@linux.intel.com>;
> Hansen, Dave <dave.hansen@intel.com>; Mallick, Asit K
> <asit.k.mallick@intel.com>; Arjan van de Ven <arjan@linux.intel.com>; Jon
> Masters <jcm@redhat.com>; Waiman Long <longman9394@gmail.com>;
> linux-kernel@vger.kernel.org; x86@kernel.org
> Subject: Re: [Patch v4 16/18] x86/speculation: Enable STIBP to protect security
> sensitive tasks
> 
> On 10/30/2018 02:07 PM, Schaufler, Casey wrote:
> >> -----Original Message-----
> >> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> >> Sent: Tuesday, October 30, 2018 11:49 AM
> >> To: Jiri Kosina <jikos@kernel.org>; Thomas Gleixner <tglx@linutronix.de>
> >> Cc: Tim Chen <tim.c.chen@linux.intel.com>; Tom Lendacky
> >> <thomas.lendacky@amd.com>; Ingo Molnar <mingo@redhat.com>; Peter
> >> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@redhat.com>;
> >> Andrea Arcangeli <aarcange@redhat.com>; David Woodhouse
> >> <dwmw@amazon.co.uk>; Andi Kleen <ak@linux.intel.com>; Hansen, Dave
> >> <dave.hansen@intel.com>; Schaufler, Casey <casey.schaufler@intel.com>;
> >> Mallick, Asit K <asit.k.mallick@intel.com>; Arjan van de Ven
> >> <arjan@linux.intel.com>; Jon Masters <jcm@redhat.com>; Waiman Long
> >> <longman9394@gmail.com>; linux-kernel@vger.kernel.org; x86@kernel.org
> >> Subject: [Patch v4 16/18] x86/speculation: Enable STIBP to protect security
> >> sensitive tasks
> >>
> >> Enable STIBP defense on high security tasks.
> >>
> >> For normal tasks, STIBP is unused so they are not impacted by overhead
> >> from STIBP in lite protection mode.
> >>
> >> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> >> ---
> >>  arch/x86/kernel/cpu/bugs.c | 33 +++++++++++++++++++++++++++++++++
> >>  1 file changed, 33 insertions(+)
> >>
> >> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> >> index 54f4675..b402b96 100644
> >> --- a/arch/x86/kernel/cpu/bugs.c
> >> +++ b/arch/x86/kernel/cpu/bugs.c
> >> @@ -14,6 +14,8 @@
> >>  #include <linux/module.h>
> >>  #include <linux/nospec.h>
> >>  #include <linux/prctl.h>
> >> +#include <linux/sched/coredump.h>
> >> +#include <linux/security.h>
> >>
> >>  #include <asm/spec-ctrl.h>
> >>  #include <asm/cmdline.h>
> >> @@ -770,6 +772,37 @@ static int ssb_prctl_set(struct task_struct *task,
> >> unsigned long ctrl)
> >>  	return 0;
> >>  }
> >>
> >> +static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
> >> +{
> >> +	bool update = false;
> >> +
> >> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
> >> +		return;
> >> +
> >> +	if (stibp_on)
> >> +		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
> >> +	else
> >> +		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
> >> +
> >> +	if (!update)
> >> +		return;
> >> +
> >> +	if (tsk == current)
> >> +		speculation_ctrl_update_current();
> >> +}
> >> +
> >> +void arch_set_security(struct task_struct *tsk, unsigned int value)
> >
> > In this context "security" isn't descriptive. arch_set_stibp_defenses()
> > would be better.
> 
> A more generic name decoupled from STIBP will be preferable.  There
> can other kind of security defenses to be erected in
> the future.
> 
> Perhaps arch_set_mitigation?

Better. On the other hand, adding function call layers just in case leads
to cascades of functions that do nothing but call other functions, and that
makes code hard to understand. I would leave generalization for the 2nd
person who wants to add mitigations. 

> 
> Thanks.
> 
> Tim
> 
> >
> > Since "value" should only ever have one of two values, and those
> > map directly to "true" or "false" this should be a bool, making the
> > code trivial:
> >
> > void arch_set_stibp_defenses(struct task_struct *task, bool stibp)
> > {
> > 	set_task_stibp(task, stibp);
> > }
> >
> > Or perhaps arch_set_security() should go away, and the calling
> > code would call set_task_stibp() directly. Unless there is some compelling
> > reason for the abstractions.
> >
> >> +{
> >> +	if (value > SECURITY_HIGH)
> >> +		return;
> >> +
> >> +	/* Update STIBP defenses */
> >> +	if (value == SECURITY_HIGH)
> >> +		set_task_stibp(tsk, true);
> >> +	else
> >> +		set_task_stibp(tsk, false);
> >> +}
> >> +
> >>  int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
> >>  			     unsigned long ctrl)
> >>  {
> >> --
> >> 2.9.4
> >


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 03/18] x86/speculation: Reorganize cpu_show_common()
  2018-10-30 18:49 ` [Patch v4 03/18] x86/speculation: Reorganize cpu_show_common() Tim Chen
@ 2018-11-03 18:07   ` Thomas Gleixner
  2018-11-05 19:12     ` Tim Chen
  0 siblings, 1 reply; 43+ messages in thread
From: Thomas Gleixner @ 2018-11-03 18:07 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Tim,

On Tue, 30 Oct 2018, Tim Chen wrote:
> Extract the logic to show IBPB, STIBP usages in cpu_show_common()
> into helper functions.
> 
> Later patches will add other userspace Spectre v2 mitigation modes.
> This patch makes it easy to show IBPB and STIBP
> usage scenario according to the mitigation mode.

First of all, I asked you before to do:

# git grep 'This patch' Documentation/process

This leads you to:

 "Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
  instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
  to do frotz", as if you are giving orders to the codebase to change
  its behaviour."

Documentation is there for a reason.

Aside of that, I'd really have a hard time to figure out what you are
trying to say, if I didn't have the context already. Change logs need to
make sense on their own. So something like this:

  The Spectre V2 printout in cpu_show_common() handles conditionals for the
  various mitigation methods directly in the sprintf() argument list. That's
  hard to read and will become unreadable if more complex decisions need to
  be made for a particular method.

  Move the conditionals for STIBP and IBPB string selection into helper
  functions, so they can be extended later on.

follows the obvious ordering for change logs:

  1) Describe context and problem
  
  2) Describe the solution

and is understandable without needing to know about the context in which
this change was developed.

Hmm? This is a suggestion, feel free to rewrite it in you own words. The
same applies to other change logs as well. I won't comment on those.
 
>  static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
>  			       char *buf, unsigned int bug)
>  {
> @@ -872,9 +888,8 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>  
>  	case X86_BUG_SPECTRE_V2:
>  		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
> -			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
>  			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
> -			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
> +			       ibpb_state(), stibp_state(),
>  			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
>  			       spectre_v2_module_string());

Any particular reason for changing the output ordering here? If yes, then
the changelog should mention it. If no, why?

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 08/18] sched: Deprecate sched_smt_present and use cpu_smt_enabled static key
  2018-10-30 18:49 ` [Patch v4 08/18] sched: Deprecate sched_smt_present and use " Tim Chen
@ 2018-11-03 18:20   ` Thomas Gleixner
  2018-11-09 22:08     ` Tim Chen
  0 siblings, 1 reply; 43+ messages in thread
From: Thomas Gleixner @ 2018-11-03 18:20 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Tim,

On Tue, 30 Oct 2018, Tim Chen wrote:

> The cpu_smt_enabled static key serves identical purpose as cpu_smt_enabled

That doesn't make any sense.

> to enable SMT specific code.
> 
> This patch replaces sched_smt_present in the scheduler with
> cpu_smt_enabled and deprecate sched_smt_present.

It's not deprecating it, it's replacing and removing it and thereby
breaking all architectures which select SCHED_SMT except x86.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 07/18] x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key
  2018-10-30 18:49 ` [Patch v4 07/18] x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key Tim Chen
@ 2018-11-03 18:29   ` Thomas Gleixner
  2018-11-08  1:43     ` Tim Chen
  0 siblings, 1 reply; 43+ messages in thread
From: Thomas Gleixner @ 2018-11-03 18:29 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Tim,

On Tue, 30 Oct 2018, Tim Chen wrote:

> Change the SMT code paths check from using cpu_smt_control to
> cpu_smt_enabled static key.  This saves a branching check.

and adds extra size to the kernel for the patching. The only reason why it
would make sense for kvm is that then the EXPORT of cpu_smt_control can go
away, which takes more space than the patch data.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-10-30 18:49 ` [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs Tim Chen
@ 2018-11-04 19:49   ` Thomas Gleixner
  2018-11-05 22:02     ` Tim Chen
  0 siblings, 1 reply; 43+ messages in thread
From: Thomas Gleixner @ 2018-11-04 19:49 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Tim,

On Tue, 30 Oct 2018, Tim Chen wrote:
>  void arch_set_security(struct task_struct *tsk, unsigned int value)
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 943e90d..048b7f4b 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -426,7 +426,19 @@ static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
>  static __always_inline void __speculation_ctrl_update(unsigned long tifp,
>  						      unsigned long tifn)
>  {
> -	bool updmsr = !!((tifp ^ tifn) & _TIF_STIBP);
> +	/*
> +	 * If TIF_UPDATE_SPEC_CTRL bit is set in tifp, speculation related
> +	 * TIF flags have changed when previous task was running, but
> +	 * SPEC_CTRL MSR has not been synchronized with TIF flag changes.
> +	 * SPEC_CTRL MSR value can be out of date.
> +	 *
> +	 * Need to force update SPEC_CTRL MSR if TIF_UPDATE_SPEC_CTRL
> +	 * bit in tifp is set.
> +	 *
> +	 * The TIF_UPDATE_SPEC_CTRL bit in tifn was cleared before calling
> +	 * this function.
> +	 */
> +	bool updmsr = !!((tifp ^ tifn) & (_TIF_STIBP|_TIF_UPDATE_SPEC_CTRL));
>  
>  	/* If TIF_SSBD is different, select the proper mitigation method */
>  	if ((tifp ^ tifn) & _TIF_SSBD) {
> @@ -482,6 +494,14 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
>  	if ((tifp ^ tifn) & _TIF_NOCPUID)
>  		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>  
> +	if (tifp & _TIF_UPDATE_SPEC_CTRL)
> +		clear_tsk_thread_flag(prev_p, TIF_UPDATE_SPEC_CTRL);
> +
> +	if (tifn & _TIF_UPDATE_SPEC_CTRL) {
> +		clear_tsk_thread_flag(next_p, TIF_UPDATE_SPEC_CTRL);
> +		tifn &= ~_TIF_UPDATE_SPEC_CTRL;
> +	}

I'm really unhappy about adding yet more conditionals into this code
path. We really need to find some better solution for that.

There are basically two options:

 1) Restrict the PRCTL control so it is only possible to modify it at the
    point where the application is still single threaded.

 2) Add _TIF_UPDATE_SPEC_CTRL to the SYSCALL_EXIT_WORK_FLAGS and handle it
    in the slow work path.

    The KVM side can be handled in x86_virt_spec_ctrl().

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 03/18] x86/speculation: Reorganize cpu_show_common()
  2018-11-03 18:07   ` Thomas Gleixner
@ 2018-11-05 19:12     ` Tim Chen
  2018-11-05 19:17       ` Thomas Gleixner
  0 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-11-05 19:12 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

On 11/03/2018 11:07 AM, Thomas Gleixner wrote:
> Tim,
> 
> On Tue, 30 Oct 2018, Tim Chen wrote:
>> Extract the logic to show IBPB, STIBP usages in cpu_show_common()
>> into helper functions.
>>
>> Later patches will add other userspace Spectre v2 mitigation modes.
>> This patch makes it easy to show IBPB and STIBP
>> usage scenario according to the mitigation mode.
> 
> First of all, I asked you before to do:
> 
> # git grep 'This patch' Documentation/process
> 
> This leads you to:
> 
>  "Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
>   instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
>   to do frotz", as if you are giving orders to the codebase to change
>   its behaviour."
> 
> Documentation is there for a reason.
> 
> Aside of that, I'd really have a hard time to figure out what you are
> trying to say, if I didn't have the context already. Change logs need to
> make sense on their own. So something like this:
> 
>   The Spectre V2 printout in cpu_show_common() handles conditionals for the
>   various mitigation methods directly in the sprintf() argument list. That's
>   hard to read and will become unreadable if more complex decisions need to
>   be made for a particular method.
> 
>   Move the conditionals for STIBP and IBPB string selection into helper
>   functions, so they can be extended later on.
> 
> follows the obvious ordering for change logs:
> 
>   1) Describe context and problem
>   
>   2) Describe the solution
> 
> and is understandable without needing to know about the context in which
> this change was developed.
> 
> Hmm? This is a suggestion, feel free to rewrite it in you own words. The
> same applies to other change logs as well. I won't comment on those.

Thanks for the suggestion.  Will update.

>  
>>  static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
>>  			       char *buf, unsigned int bug)
>>  {
>> @@ -872,9 +888,8 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>>  
>>  	case X86_BUG_SPECTRE_V2:
>>  		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
>> -			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
>>  			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
>> -			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
>> +			       ibpb_state(), stibp_state(),
>>  			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
>>  			       spectre_v2_module_string());
> 
> Any particular reason for changing the output ordering here? If yes, then
> the changelog should mention it. If no, why?
> 

I was putting the features related to user application protection together. It
was not necessary and I can leave it at the same place.

Tim

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 03/18] x86/speculation: Reorganize cpu_show_common()
  2018-11-05 19:12     ` Tim Chen
@ 2018-11-05 19:17       ` Thomas Gleixner
  0 siblings, 0 replies; 43+ messages in thread
From: Thomas Gleixner @ 2018-11-05 19:17 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Tim,

On Mon, 5 Nov 2018, Tim Chen wrote:
> On 11/03/2018 11:07 AM, Thomas Gleixner wrote:
> >>  	case X86_BUG_SPECTRE_V2:
> >>  		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
> >> -			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
> >>  			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
> >> -			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
> >> +			       ibpb_state(), stibp_state(),
> >>  			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
> >>  			       spectre_v2_module_string());
> > 
> > Any particular reason for changing the output ordering here? If yes, then
> > the changelog should mention it. If no, why?
> > 
> I was putting the features related to user application protection together. It
> was not necessary and I can leave it at the same place.

I have no strong opinion either way and changing it should not confuse user
space tools, but please mention it in the changelog if you decide to group it.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-11-04 19:49   ` Thomas Gleixner
@ 2018-11-05 22:02     ` Tim Chen
  2018-11-05 23:04       ` Thomas Gleixner
  0 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-11-05 22:02 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86


Thomas,

>> +	if (tifp & _TIF_UPDATE_SPEC_CTRL)
>> +		clear_tsk_thread_flag(prev_p, TIF_UPDATE_SPEC_CTRL);
>> +
>> +	if (tifn & _TIF_UPDATE_SPEC_CTRL) {
>> +		clear_tsk_thread_flag(next_p, TIF_UPDATE_SPEC_CTRL);
>> +		tifn &= ~_TIF_UPDATE_SPEC_CTRL;
>> +	}
> 
> I'm really unhappy about adding yet more conditionals into this code
> path. We really need to find some better solution for that.
> 
> There are basically two options:
> 
>  1) Restrict the PRCTL control so it is only possible to modify it at the
>     point where the application is still single threaded.
> 
>  2) Add _TIF_UPDATE_SPEC_CTRL to the SYSCALL_EXIT_WORK_FLAGS and handle it
>     in the slow work path.
> 
>     The KVM side can be handled in x86_virt_spec_ctrl().
> 

How about sending an IPI if a remote CPU needs to have its SPEC_CTRL MSR
updated?

Something like the following to replace this patch?

Tim

---

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b8103991..7505925 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -772,9 +772,15 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 	return 0;
 }
 
+static void spec_ctrl_update_func(void *info)
+{
+	speculation_ctrl_update(task_thread_info(current)->flags);
+}
+
 static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
 {
 	bool update = false;
+	int cpu;
 
 	if (!static_branch_unlikely(&spectre_v2_app_lite))
 		return;
@@ -789,6 +795,12 @@ static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
 
 	if (tsk == current)
 		speculation_ctrl_update_current();
+	else {
+		cpu = task_cpu(tsk);
+		if (cpu != smp_processor_id())
+			smp_call_function_single(cpu, spec_ctrl_update_func,
+						 NULL, false); 
+	}
 }
 
 void arch_set_security(struct task_struct *tsk, unsigned int value)

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-11-05 22:02     ` Tim Chen
@ 2018-11-05 23:04       ` Thomas Gleixner
  2018-11-05 23:59         ` Tim Chen
  0 siblings, 1 reply; 43+ messages in thread
From: Thomas Gleixner @ 2018-11-05 23:04 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, LKML, x86, Kees Cook

Tim,

On Mon, 5 Nov 2018, Tim Chen wrote:
> How about sending an IPI if a remote CPU needs to have its SPEC_CTRL MSR
> updated?
> 
> Something like the following to replace this patch?

...

> +static void spec_ctrl_update_func(void *info)
> +{
> +	speculation_ctrl_update(task_thread_info(current)->flags);
> +}
> +
>  static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
>  {
>  	bool update = false;
> +	int cpu;
>  
>  	if (!static_branch_unlikely(&spectre_v2_app_lite))
>  		return;
> @@ -789,6 +795,12 @@ static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
>  
>  	if (tsk == current)
>  		speculation_ctrl_update_current();
> +	else {
> +		cpu = task_cpu(tsk);
> +		if (cpu != smp_processor_id())
> +			smp_call_function_single(cpu, spec_ctrl_update_func,
> +						 NULL, false); 
> +	}


Aside of the condition being pointless in that case, that issues an IPI
whether the task is running or not. So this allows a task to issue tons of
async IPIs disturbing others by toggling the control.

I'm less and less convinced that piggybacking this on dumpable is a good
idea. It's lots of extra code and the security people are not really happy
about the whole thing either.

Can we please start out with the SSBD model and make use of the PRCTL and
the seccomp mitigation control?

Kees?

Thanks,

	tglx



^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-11-05 23:04       ` Thomas Gleixner
@ 2018-11-05 23:59         ` Tim Chen
  2018-11-06  7:46           ` Thomas Gleixner
  0 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-11-05 23:59 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, LKML, x86, Kees Cook

Thomas,

> 
> 
> Aside of the condition being pointless in that case, that issues an IPI
> whether the task is running or not. So this allows a task to issue tons of
> async IPIs disturbing others by toggling the control.

I'm not crazy about sending IPIs too.  Hence the original implementation
using TIF_UPDATE_SPEC_CTRL flag.

I've thought a bit about the options you suggested and was unclear
on a few things.

> 1) Restrict the PRCTL control so it is only possible to modify it at the
>    point where the application is still single threaded.
>

My understanding is PRCTL applied on the task only. Should it be extended
to other task threads?  In that case, it seems like we didn't do that for SSBD?

> 2) Add _TIF_UPDATE_SPEC_CTRL to the SYSCALL_EXIT_WORK_FLAGS and handle it
>    in the slow work path.

There can be tasks that don't do any syscalls, and it seems like we can have MSRs getting
out of sync?

> 
> I'm less and less convinced that piggybacking this on dumpable is a good
> idea. 

There are daemons like sshd that are non-dumpable.  So I think protecting
them is desired.  Otherwise all those daemons will need to be updated to
use PRCTL.  In the original implementation, IBPB is issued for
non-dumpable task.  It will be nice to retain that.

Thanks.

Tim



^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-11-05 23:59         ` Tim Chen
@ 2018-11-06  7:46           ` Thomas Gleixner
  2018-11-07  0:18             ` Tim Chen
  0 siblings, 1 reply; 43+ messages in thread
From: Thomas Gleixner @ 2018-11-06  7:46 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, LKML, x86, Kees Cook

Tim,

On Mon, 5 Nov 2018, Tim Chen wrote:
> > Aside of the condition being pointless in that case, that issues an IPI
> > whether the task is running or not. So this allows a task to issue tons of
> > async IPIs disturbing others by toggling the control.
> 
> I'm not crazy about sending IPIs too.  Hence the original implementation
> using TIF_UPDATE_SPEC_CTRL flag.

which has it's own problems of bloating the context switch code.

> I've thought a bit about the options you suggested and was unclear
> on a few things.
> 
> > 1) Restrict the PRCTL control so it is only possible to modify it at the
> >    point where the application is still single threaded.
> 
> My understanding is PRCTL applied on the task only. Should it be extended
> to other task threads?  In that case, it seems like we didn't do that for SSBD?

Right, we did not.

This was discussed back then already and we agreed on avoiding the
complexity for this and only allow task scope. Though the child tasks will
inherit the TIF flag.

> > 2) Add _TIF_UPDATE_SPEC_CTRL to the SYSCALL_EXIT_WORK_FLAGS and handle it
> >    in the slow work path.
> 
> There can be tasks that don't do any syscalls, and it seems like we can
> have MSRs getting out of sync?

Setting the TIF flag directly in a remote task is wrong. It needs to be
handled when the _TIF_UPDATE_SPEC_CTRL is evaluated, i.e. the information
needs to be stored process wide e.g. in task->mm.

But yes, if the remote task runs in user space forever, it won't
help. Though the point is that dumpable is usually set when the process
starts, so it's probably mostly a theoretical issue.

> > I'm less and less convinced that piggybacking this on dumpable is a good
> > idea. 
> 
> There are daemons like sshd that are non-dumpable.  So I think protecting
> them is desired.  Otherwise all those daemons will need to be updated to
> use PRCTL.  In the original implementation, IBPB is issued for
> non-dumpable task.  It will be nice to retain that.

A lot of things are nice to have, but you really have to carefully weigh
the tradeoff between the conveniance and the complexity plus the fast path
bloat it creates.

IBPB is a different thing as it writes into a different MSR and there is no
requirement to have the MSR access coordinated with the SSB handling in
switch_to(). 

As dumpable is evaluated in switch_mm() already it might be worthwhile to
evaluate whether the STIBP propagation can be tied to that. That does not
solve the problem of tasks running in user space forever, but we don't care
about that for IBPB either and as I said above it's mostly a theoretical
problem.

Thanks,

	tglx


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-11-06  7:46           ` Thomas Gleixner
@ 2018-11-07  0:18             ` Tim Chen
  2018-11-07 18:33               ` Waiman Long
  2018-11-07 23:03               ` Thomas Gleixner
  0 siblings, 2 replies; 43+ messages in thread
From: Tim Chen @ 2018-11-07  0:18 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, LKML, x86, Kees Cook

Thomas,

> 
>>> 2) Add _TIF_UPDATE_SPEC_CTRL to the SYSCALL_EXIT_WORK_FLAGS and handle it
>>>    in the slow work path.
>>
>> There can be tasks that don't do any syscalls, and it seems like we can
>> have MSRs getting out of sync?
> 
> Setting the TIF flag directly in a remote task is wrong. It needs to be
> handled when the _TIF_UPDATE_SPEC_CTRL is evaluated, i.e. the information
> needs to be stored process wide e.g. in task->mm.
> 
> But yes, if the remote task runs in user space forever, it won't
> help. Though the point is that dumpable is usually set when the process
> starts, so it's probably mostly a theoretical issue.
> 

I took a crack to implement what you suggested to update
remote task's flag and remote SPEC_CTRL MSR on the syscall exit slow path.

This looks reasobale?

Tim


------------

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 3b2490b..614594a 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -216,7 +216,7 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 
 #define SYSCALL_EXIT_WORK_FLAGS				\
 	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT |	\
-	 _TIF_SINGLESTEP | _TIF_SYSCALL_TRACEPOINT)
+	 _TIF_SINGLESTEP | _TIF_SYSCALL_TRACEPOINT | _TIF_UPDATE_SPEC_CTRL)
 
 static void syscall_slow_exit_work(struct pt_regs *regs, u32 cached_flags)
 {
@@ -227,6 +227,8 @@ static void syscall_slow_exit_work(struct pt_regs *regs, u32 cached_flags)
 	if (cached_flags & _TIF_SYSCALL_TRACEPOINT)
 		trace_sys_exit(regs, regs->ax);
 
+	if (cached_flags & _TIF_UPDATE_SPEC_CTRL)
+		spec_ctrl_do_pending_update();
 	/*
 	 * If TIF_SYSCALL_EMU is set, we only get here because of
 	 * TIF_SINGLESTEP (i.e. this is PTRACE_SYSEMU_SINGLESTEP).
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c59a6c4..f124597 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -276,6 +276,8 @@ static inline void indirect_branch_prediction_barrier(void)
 	alternative_msr_write(MSR_IA32_PRED_CMD, val, X86_FEATURE_USE_IBPB);
 }
 
+void spec_ctrl_do_pending_update(void);
+
 /* The Intel SPEC CTRL MSR base value cache */
 extern u64 x86_spec_ctrl_base;
 
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 4f6a7a9..b78db59 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -97,6 +97,7 @@ struct thread_info {
 #define TIF_USER_RETURN_NOTIFY	14	/* Notify kernel of userspace return */
 #define TIF_PATCH_PENDING	15	/* Pending live patching update */
 #define TIF_FSCHECK		16	/* Check FS is USER_DS on return */
+#define TIF_UPDATE_SPEC_CTRL	17	/* Pending update of speculation control */
 
 /* Task status */
 #define TIF_UPROBE		18	/* Breakpointed or singlestepping */
@@ -131,6 +132,7 @@ struct thread_info {
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
 #define _TIF_FSCHECK		(1 << TIF_FSCHECK)
+#define _TIF_UPDATE_SPEC_CTRL	(1 << TIF_UPDATE_SPEC_CTRL)
 
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4c15c86..d82d3f8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -14,6 +14,8 @@
 #include <linux/module.h>
 #include <linux/nospec.h>
 #include <linux/prctl.h>
+#include <linux/sched/coredump.h>
+#include <linux/sched/signal.h>
 
 #include <asm/spec-ctrl.h>
 #include <asm/cmdline.h>
@@ -770,6 +772,69 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 	return 0;
 }
 
+static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
+{
+	bool update = false;
+
+	if (stibp_on)
+		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
+	else
+		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
+
+	if (tsk == current && update)
+		speculation_ctrl_update_current();
+}
+
+void spec_ctrl_do_pending_update(void)
+{
+	if (!static_branch_unlikely(&spectre_v2_app_lite))
+		return;
+
+	if (!current->mm)
+		return;
+
+	if (get_dumpable(current->mm) != SUID_DUMP_USER)
+		set_tsk_thread_flag(current, TIF_STIBP);
+	else
+		clear_tsk_thread_flag(current, TIF_STIBP);
+
+	clear_tsk_thread_flag(current, TIF_UPDATE_SPEC_CTRL);
+	speculation_ctrl_update_current();
+}
+ 
+int arch_update_spec_ctrl_restriction(struct task_struct *task)
+{
+	unsigned long flags;
+	struct task_struct *t;
+	bool stibp_on = false;
+
+	if (!static_branch_unlikely(&spectre_v2_app_lite))
+		return 0;
+
+	if (!task->mm)
+		return -EINVAL;
+
+	if (!lock_task_sighand(task, &flags))
+		return -ESRCH;
+
+	if (get_dumpable(task->mm) != SUID_DUMP_USER)
+		stibp_on = true;
+
+	for_each_thread(task, t) {
+		if (task_cpu(task) == smp_processor_id())
+			set_task_stibp(task, stibp_on);
+		else if (test_tsk_thread_flag(task, TIF_STIBP) != stibp_on) 
+			set_tsk_thread_flag(task, TIF_UPDATE_SPEC_CTRL);
+	}
+
+	unlock_task_sighand(task, &flags);
+	return 0;
+}
+
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl)
 {

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-11-07  0:18             ` Tim Chen
@ 2018-11-07 18:33               ` Waiman Long
  2018-11-07 23:15                 ` Tim Chen
  2018-11-07 23:03               ` Thomas Gleixner
  1 sibling, 1 reply; 43+ messages in thread
From: Waiman Long @ 2018-11-07 18:33 UTC (permalink / raw)
  To: Tim Chen, Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, LKML, x86, Kees Cook

On 11/06/2018 07:18 PM, Tim Chen wrote:
> Thomas,
>
>>>> 2) Add _TIF_UPDATE_SPEC_CTRL to the SYSCALL_EXIT_WORK_FLAGS and handle it
>>>>    in the slow work path.
>>> There can be tasks that don't do any syscalls, and it seems like we can
>>> have MSRs getting out of sync?
>> Setting the TIF flag directly in a remote task is wrong. It needs to be
>> handled when the _TIF_UPDATE_SPEC_CTRL is evaluated, i.e. the information
>> needs to be stored process wide e.g. in task->mm.
>>
>> But yes, if the remote task runs in user space forever, it won't
>> help. Though the point is that dumpable is usually set when the process
>> starts, so it's probably mostly a theoretical issue.
>>
> I took a crack to implement what you suggested to update
> remote task's flag and remote SPEC_CTRL MSR on the syscall exit slow path.
>
> This looks reasobale?
>
> Tim
>
>
> ------------
>
> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> index 3b2490b..614594a 100644
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -216,7 +216,7 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
>  
>  #define SYSCALL_EXIT_WORK_FLAGS				\
>  	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT |	\
> -	 _TIF_SINGLESTEP | _TIF_SYSCALL_TRACEPOINT)
> +	 _TIF_SINGLESTEP | _TIF_SYSCALL_TRACEPOINT | _TIF_UPDATE_SPEC_CTRL)
>  
>  static void syscall_slow_exit_work(struct pt_regs *regs, u32 cached_flags)
>  {
> @@ -227,6 +227,8 @@ static void syscall_slow_exit_work(struct pt_regs *regs, u32 cached_flags)
>  	if (cached_flags & _TIF_SYSCALL_TRACEPOINT)
>  		trace_sys_exit(regs, regs->ax);
>  
> +	if (cached_flags & _TIF_UPDATE_SPEC_CTRL)
> +		spec_ctrl_do_pending_update();
>  	/*
>  	 * If TIF_SYSCALL_EMU is set, we only get here because of
>  	 * TIF_SINGLESTEP (i.e. this is PTRACE_SYSEMU_SINGLESTEP).
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index c59a6c4..f124597 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -276,6 +276,8 @@ static inline void indirect_branch_prediction_barrier(void)
>  	alternative_msr_write(MSR_IA32_PRED_CMD, val, X86_FEATURE_USE_IBPB);
>  }
>  
> +void spec_ctrl_do_pending_update(void);
> +
>  /* The Intel SPEC CTRL MSR base value cache */
>  extern u64 x86_spec_ctrl_base;
>  
> diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
> index 4f6a7a9..b78db59 100644
> --- a/arch/x86/include/asm/thread_info.h
> +++ b/arch/x86/include/asm/thread_info.h
> @@ -97,6 +97,7 @@ struct thread_info {
>  #define TIF_USER_RETURN_NOTIFY	14	/* Notify kernel of userspace return */
>  #define TIF_PATCH_PENDING	15	/* Pending live patching update */
>  #define TIF_FSCHECK		16	/* Check FS is USER_DS on return */
> +#define TIF_UPDATE_SPEC_CTRL	17	/* Pending update of speculation control */
>  
>  /* Task status */
>  #define TIF_UPROBE		18	/* Breakpointed or singlestepping */
> @@ -131,6 +132,7 @@ struct thread_info {
>  #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
>  #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
>  #define _TIF_FSCHECK		(1 << TIF_FSCHECK)
> +#define _TIF_UPDATE_SPEC_CTRL	(1 << TIF_UPDATE_SPEC_CTRL)
>  
>  #define _TIF_UPROBE		(1 << TIF_UPROBE)
>  #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 4c15c86..d82d3f8 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -14,6 +14,8 @@
>  #include <linux/module.h>
>  #include <linux/nospec.h>
>  #include <linux/prctl.h>
> +#include <linux/sched/coredump.h>
> +#include <linux/sched/signal.h>
>  
>  #include <asm/spec-ctrl.h>
>  #include <asm/cmdline.h>
> @@ -770,6 +772,69 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
>  	return 0;
>  }
>  
> +static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
> +{
> +	bool update = false;
> +
> +	if (stibp_on)
> +		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
> +	else
> +		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
> +
> +	if (tsk == current && update)
> +		speculation_ctrl_update_current();
> +}
> +
> +void spec_ctrl_do_pending_update(void)
> +{
> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
> +		return;
> +
> +	if (!current->mm)
> +		return;
> +
> +	if (get_dumpable(current->mm) != SUID_DUMP_USER)
> +		set_tsk_thread_flag(current, TIF_STIBP);
> +	else
> +		clear_tsk_thread_flag(current, TIF_STIBP);
> +
> +	clear_tsk_thread_flag(current, TIF_UPDATE_SPEC_CTRL);
> +	speculation_ctrl_update_current();
> +}
> + 
> +int arch_update_spec_ctrl_restriction(struct task_struct *task)
> +{
> +	unsigned long flags;
> +	struct task_struct *t;
> +	bool stibp_on = false;
> +
> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
> +		return 0;
> +
> +	if (!task->mm)
> +		return -EINVAL;
> +
> +	if (!lock_task_sighand(task, &flags))
> +		return -ESRCH;
> +
> +	if (get_dumpable(task->mm) != SUID_DUMP_USER)
> +		stibp_on = true;
> +
> +	for_each_thread(task, t) {
> +		if (task_cpu(task) == smp_processor_id())
> +			set_task_stibp(task, stibp_on);

I think "t" is the iterator, not "task". BTW, a thread is on the same
CPU doesn't mean it is running. Should you just check "(t == current)" here?

> +		else if (test_tsk_thread_flag(task, TIF_STIBP) != stibp_on) 
> +			set_tsk_thread_flag(task, TIF_UPDATE_SPEC_CTRL);
> +	}
> +
> +	unlock_task_sighand(task, &flags);
> +	return 0;
> +}
> +
>  int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
>  			     unsigned long ctrl)
>  {

Cheers,
Longman


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-11-07  0:18             ` Tim Chen
  2018-11-07 18:33               ` Waiman Long
@ 2018-11-07 23:03               ` Thomas Gleixner
  2018-11-08  0:22                 ` Tim Chen
  1 sibling, 1 reply; 43+ messages in thread
From: Thomas Gleixner @ 2018-11-07 23:03 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, LKML, x86, Kees Cook

Tim,

On Tue, 6 Nov 2018, Tim Chen wrote:
> >>> 2) Add _TIF_UPDATE_SPEC_CTRL to the SYSCALL_EXIT_WORK_FLAGS and handle it
> >>>    in the slow work path.
> >>
> >> There can be tasks that don't do any syscalls, and it seems like we can
> >> have MSRs getting out of sync?
> > 
> > Setting the TIF flag directly in a remote task is wrong. It needs to be
> > handled when the _TIF_UPDATE_SPEC_CTRL is evaluated, i.e. the information
> > needs to be stored process wide e.g. in task->mm.
> > 
> > But yes, if the remote task runs in user space forever, it won't
> > help. Though the point is that dumpable is usually set when the process
> > starts, so it's probably mostly a theoretical issue.
> > 
> 
> I took a crack to implement what you suggested to update
> remote task's flag and remote SPEC_CTRL MSR on the syscall exit slow path.

can we please take a step back and look at the problem again?

The goal is to glue the mitigation decision on dumpable. That's debatable,
but understandable, so let's not go into that discussion.

There are two ways to modify dumpable:

  1) prctl(PR_SET_DUMPABLE)

  2) operations which change UID/GID and end up in commit_creds()

Now the really interesting question is _when_ is any of the above invoked:

  1) When the process is still single threaded ?

  2) When the process has already spawned threads ?

If #2 is just the oddball exception, then why on earth should we go through
loops and hoops to make it work and add overhead in hot paths for no real
value?

So what we really want before discussing any potential solutions and their
pro and cons is an answer to that.

If we don't have that answer, then we might just create a purely academic
solution which creates more problems than it solves.

Thanks,

	tglx







^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-11-07 18:33               ` Waiman Long
@ 2018-11-07 23:15                 ` Tim Chen
  0 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-11-07 23:15 UTC (permalink / raw)
  To: Waiman Long, Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, LKML, x86, Kees Cook

On 11/07/2018 10:33 AM, Waiman Long wrote:
> On 11/06/2018 07:18 PM, Tim Chen wrote:
>> Thomas,
>>
>>>>> 2) Add _TIF_UPDATE_SPEC_CTRL to the SYSCALL_EXIT_WORK_FLAGS and handle it
>>>>>    in the slow work path.
>>>> There can be tasks that don't do any syscalls, and it seems like we can
>>>> have MSRs getting out of sync?
>>> Setting the TIF flag directly in a remote task is wrong. It needs to be
>>> handled when the _TIF_UPDATE_SPEC_CTRL is evaluated, i.e. the information
>>> needs to be stored process wide e.g. in task->mm.
>>>
>>> But yes, if the remote task runs in user space forever, it won't
>>> help. Though the point is that dumpable is usually set when the process
>>> starts, so it's probably mostly a theoretical issue.
>>>
>> I took a crack to implement what you suggested to update
>> remote task's flag and remote SPEC_CTRL MSR on the syscall exit slow path.
>>
>> This looks reasobale?
>>
>> Tim
>>
>>
>> ------------
>>
>> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
>> index 3b2490b..614594a 100644
>> --- a/arch/x86/entry/common.c
>> +++ b/arch/x86/entry/common.c
>> @@ -216,7 +216,7 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
>>  
>>  #define SYSCALL_EXIT_WORK_FLAGS				\
>>  	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT |	\
>> -	 _TIF_SINGLESTEP | _TIF_SYSCALL_TRACEPOINT)
>> +	 _TIF_SINGLESTEP | _TIF_SYSCALL_TRACEPOINT | _TIF_UPDATE_SPEC_CTRL)
>>  
>>  static void syscall_slow_exit_work(struct pt_regs *regs, u32 cached_flags)
>>  {
>> @@ -227,6 +227,8 @@ static void syscall_slow_exit_work(struct pt_regs *regs, u32 cached_flags)
>>  	if (cached_flags & _TIF_SYSCALL_TRACEPOINT)
>>  		trace_sys_exit(regs, regs->ax);
>>  
>> +	if (cached_flags & _TIF_UPDATE_SPEC_CTRL)
>> +		spec_ctrl_do_pending_update();
>>  	/*
>>  	 * If TIF_SYSCALL_EMU is set, we only get here because of
>>  	 * TIF_SINGLESTEP (i.e. this is PTRACE_SYSEMU_SINGLESTEP).
>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
>> index c59a6c4..f124597 100644
>> --- a/arch/x86/include/asm/nospec-branch.h
>> +++ b/arch/x86/include/asm/nospec-branch.h
>> @@ -276,6 +276,8 @@ static inline void indirect_branch_prediction_barrier(void)
>>  	alternative_msr_write(MSR_IA32_PRED_CMD, val, X86_FEATURE_USE_IBPB);
>>  }
>>  
>> +void spec_ctrl_do_pending_update(void);
>> +
>>  /* The Intel SPEC CTRL MSR base value cache */
>>  extern u64 x86_spec_ctrl_base;
>>  
>> diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
>> index 4f6a7a9..b78db59 100644
>> --- a/arch/x86/include/asm/thread_info.h
>> +++ b/arch/x86/include/asm/thread_info.h
>> @@ -97,6 +97,7 @@ struct thread_info {
>>  #define TIF_USER_RETURN_NOTIFY	14	/* Notify kernel of userspace return */
>>  #define TIF_PATCH_PENDING	15	/* Pending live patching update */
>>  #define TIF_FSCHECK		16	/* Check FS is USER_DS on return */
>> +#define TIF_UPDATE_SPEC_CTRL	17	/* Pending update of speculation control */
>>  
>>  /* Task status */
>>  #define TIF_UPROBE		18	/* Breakpointed or singlestepping */
>> @@ -131,6 +132,7 @@ struct thread_info {
>>  #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
>>  #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
>>  #define _TIF_FSCHECK		(1 << TIF_FSCHECK)
>> +#define _TIF_UPDATE_SPEC_CTRL	(1 << TIF_UPDATE_SPEC_CTRL)
>>  
>>  #define _TIF_UPROBE		(1 << TIF_UPROBE)
>>  #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> index 4c15c86..d82d3f8 100644
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -14,6 +14,8 @@
>>  #include <linux/module.h>
>>  #include <linux/nospec.h>
>>  #include <linux/prctl.h>
>> +#include <linux/sched/coredump.h>
>> +#include <linux/sched/signal.h>
>>  
>>  #include <asm/spec-ctrl.h>
>>  #include <asm/cmdline.h>
>> @@ -770,6 +772,69 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
>>  	return 0;
>>  }
>>  
>> +static void set_task_stibp(struct task_struct *tsk, bool stibp_on)
>> +{
>> +	bool update = false;
>> +
>> +	if (stibp_on)
>> +		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
>> +	else
>> +		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
>> +
>> +	if (tsk == current && update)
>> +		speculation_ctrl_update_current();
>> +}
>> +
>> +void spec_ctrl_do_pending_update(void)
>> +{
>> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
>> +		return;
>> +
>> +	if (!current->mm)
>> +		return;
>> +
>> +	if (get_dumpable(current->mm) != SUID_DUMP_USER)
>> +		set_tsk_thread_flag(current, TIF_STIBP);
>> +	else
>> +		clear_tsk_thread_flag(current, TIF_STIBP);
>> +
>> +	clear_tsk_thread_flag(current, TIF_UPDATE_SPEC_CTRL);
>> +	speculation_ctrl_update_current();
>> +}
>> + 
>> +int arch_update_spec_ctrl_restriction(struct task_struct *task)
>> +{
>> +	unsigned long flags;
>> +	struct task_struct *t;
>> +	bool stibp_on = false;
>> +
>> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
>> +		return 0;
>> +
>> +	if (!task->mm)
>> +		return -EINVAL;
>> +
>> +	if (!lock_task_sighand(task, &flags))
>> +		return -ESRCH;
>> +
>> +	if (get_dumpable(task->mm) != SUID_DUMP_USER)
>> +		stibp_on = true;
>> +
>> +	for_each_thread(task, t) {
>> +		if (task_cpu(task) == smp_processor_id())
>> +			set_task_stibp(task, stibp_on);
> 
> I think "t" is the iterator, not "task". BTW, a thread is on the same
> CPU doesn't mean it is running. Should you just check "(t == current)" here?

Ah yes, should be t.  t==current is checked in set_task_stibp.

Tim

> 
>> +		else if (test_tsk_thread_flag(task, TIF_STIBP) != stibp_on) 
>> +			set_tsk_thread_flag(task, TIF_UPDATE_SPEC_CTRL);
>> +	}
>> +
>> +	unlock_task_sighand(task, &flags);
>> +	return 0;
>> +}
>> +
>>  int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
>>  			     unsigned long ctrl)
>>  {
> 
> Cheers,
> Longman
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
  2018-11-07 23:03               ` Thomas Gleixner
@ 2018-11-08  0:22                 ` Tim Chen
  0 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-11-08  0:22 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, LKML, x86, Kees Cook

Thomas,

> 
> The goal is to glue the mitigation decision on dumpable. That's debatable,
> but understandable, so let's not go into that discussion.
> 
> There are two ways to modify dumpable:
> 
>   1) prctl(PR_SET_DUMPABLE)
> 
>   2) operations which change UID/GID and end up in commit_creds()
> 
> Now the really interesting question is _when_ is any of the above invoked:
> 
>   1) When the process is still single threaded ?
> 
>   2) When the process has already spawned threads ?
> 


The UID/GID changes in setuid and setgid syscalls are applied to the
thread only.  There's no propagation of the cred change to other 
related threads.  So if the user process expect the change to propagate
to other related threads, they should do setuid/setgid before they spawn
other threads.

I think similar logic should apply for prctl(PR_SET_DUMPABLE) that
the user expects the change to be for the current task and the
propagation of this change to other threads is a side effect.

Will anyone mind if we put in restriction that prctl(PR_SET_DUMPABLE)
should only be done when a task is single threaded?

Otherwise the user process should only expect the current thread to get the 
benefit of STIBP protection when setting it to non-dumpable.  I can
add comments in the code to explain this.


> If #2 is just the oddball exception, then why on earth should we go through
> loops and hoops to make it work and add overhead in hot paths for no real
> value?
> 

I tend to agree with you on this.

> So what we really want before discussing any potential solutions and their
> pro and cons is an answer to that.
> 
> If we don't have that answer, then we might just create a purely academic
> solution which creates more problems than it solves.
> 

Thanks.

Tim

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 07/18] x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key
  2018-11-03 18:29   ` Thomas Gleixner
@ 2018-11-08  1:43     ` Tim Chen
  2018-11-08 11:18       ` Thomas Gleixner
  0 siblings, 1 reply; 43+ messages in thread
From: Tim Chen @ 2018-11-08  1:43 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

On 11/03/2018 11:29 AM, Thomas Gleixner wrote:
> Tim,
> 
> On Tue, 30 Oct 2018, Tim Chen wrote:
> 
>> Change the SMT code paths check from using cpu_smt_control to
>> cpu_smt_enabled static key.  This saves a branching check.
> 
> and adds extra size to the kernel for the patching. The only reason why it
> would make sense for kvm is that then the EXPORT of cpu_smt_control can go
> away, which takes more space than the patch data.
> 

Should I just drop this patch then and only replace
sched_smt_present with cpu_smt_enabled?

Tim

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 07/18] x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key
  2018-11-08  1:43     ` Tim Chen
@ 2018-11-08 11:18       ` Thomas Gleixner
  0 siblings, 0 replies; 43+ messages in thread
From: Thomas Gleixner @ 2018-11-08 11:18 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Tim,

On Wed, 7 Nov 2018, Tim Chen wrote:
> On 11/03/2018 11:29 AM, Thomas Gleixner wrote:
> > Tim,
> > 
> > On Tue, 30 Oct 2018, Tim Chen wrote:
> > 
> >> Change the SMT code paths check from using cpu_smt_control to
> >> cpu_smt_enabled static key.  This saves a branching check.
> > 
> > and adds extra size to the kernel for the patching. The only reason why it
> > would make sense for kvm is that then the EXPORT of cpu_smt_control can go
> > away, which takes more space than the patch data.
> > 
> 
> Should I just drop this patch then and only replace
> sched_smt_present with cpu_smt_enabled?

You have to decide which of the exports to drop. No strong opinion.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Patch v4 08/18] sched: Deprecate sched_smt_present and use cpu_smt_enabled static key
  2018-11-03 18:20   ` Thomas Gleixner
@ 2018-11-09 22:08     ` Tim Chen
  0 siblings, 0 replies; 43+ messages in thread
From: Tim Chen @ 2018-11-09 22:08 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, linux-kernel, x86

Thomas,

> 
>> The cpu_smt_enabled static key serves identical purpose as cpu_smt_enabled
> 
> That doesn't make any sense.
> 
>> to enable SMT specific code.
>>
>> This patch replaces sched_smt_present in the scheduler with
>> cpu_smt_enabled and deprecate sched_smt_present.
> 
> It's not deprecating it, it's replacing and removing it and thereby
> breaking all architectures which select SCHED_SMT except x86.

But the cpu_smt_enabled key is not x86 specific.  It is set and
defined in generic kernel/cpu.c and include/linux/cpu.h and available
for all architectures.

Why would it break other architectures?  I am missing something?

Thanks.

Tim

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2018-11-09 22:08 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-30 18:49 [Patch v4 00/18] Provide process property based options to enable Spectre v2 userspace-userspace protection* Tim Chen
2018-10-30 18:49 ` [Patch v4 01/18] x86/speculation: Clean up spectre_v2_parse_cmdline() Tim Chen
2018-10-30 18:49 ` [Patch v4 02/18] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Tim Chen
2018-10-30 18:49 ` [Patch v4 03/18] x86/speculation: Reorganize cpu_show_common() Tim Chen
2018-11-03 18:07   ` Thomas Gleixner
2018-11-05 19:12     ` Tim Chen
2018-11-05 19:17       ` Thomas Gleixner
2018-10-30 18:49 ` [Patch v4 04/18] x86/speculation: Add X86_FEATURE_USE_IBRS_ENHANCED Tim Chen
2018-10-30 18:49 ` [Patch v4 05/18] x86/speculation: Disable STIBP when enhanced IBRS is in use Tim Chen
2018-10-30 18:49 ` [Patch v4 06/18] smt: Create cpu_smt_enabled static key for SMT specific code Tim Chen
2018-10-30 18:49 ` [Patch v4 07/18] x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key Tim Chen
2018-11-03 18:29   ` Thomas Gleixner
2018-11-08  1:43     ` Tim Chen
2018-11-08 11:18       ` Thomas Gleixner
2018-10-30 18:49 ` [Patch v4 08/18] sched: Deprecate sched_smt_present and use " Tim Chen
2018-11-03 18:20   ` Thomas Gleixner
2018-11-09 22:08     ` Tim Chen
2018-10-30 18:49 ` [Patch v4 09/18] x86/speculation: Rename SSBD update functions Tim Chen
2018-10-30 18:49 ` [Patch v4 10/18] x86/speculation: Reorganize speculation control MSRs update Tim Chen
2018-10-30 18:49 ` [Patch v4 11/18] x86/speculation: Update comment on TIF_SSBD Tim Chen
2018-10-30 18:49 ` [Patch v4 12/18] x86: Group thread info flags by functionality Tim Chen
2018-10-30 18:49 ` [Patch v4 13/18] security: Update security level of a process when modifying its dumpability Tim Chen
2018-10-30 20:57   ` Schaufler, Casey
2018-10-30 21:30     ` Tim Chen
2018-10-30 21:53       ` Schaufler, Casey
2018-10-30 18:49 ` [Patch v4 14/18] x86/speculation: Turn on or off STIBP according to a task's TIF_STIBP Tim Chen
2018-10-30 18:49 ` [Patch v4 15/18] x86/speculation: Add Spectre v2 app to app protection modes Tim Chen
2018-10-30 18:49 ` [Patch v4 16/18] x86/speculation: Enable STIBP to protect security sensitive tasks Tim Chen
2018-10-30 21:07   ` Schaufler, Casey
2018-10-30 21:34     ` Tim Chen
2018-10-30 22:02       ` Schaufler, Casey
2018-10-30 18:49 ` [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs Tim Chen
2018-11-04 19:49   ` Thomas Gleixner
2018-11-05 22:02     ` Tim Chen
2018-11-05 23:04       ` Thomas Gleixner
2018-11-05 23:59         ` Tim Chen
2018-11-06  7:46           ` Thomas Gleixner
2018-11-07  0:18             ` Tim Chen
2018-11-07 18:33               ` Waiman Long
2018-11-07 23:15                 ` Tim Chen
2018-11-07 23:03               ` Thomas Gleixner
2018-11-08  0:22                 ` Tim Chen
2018-10-30 18:49 ` [Patch v4 18/18] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Tim Chen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).