LKML Archive on lore.kernel.org
* [PATCH v3 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask
@ 2015-03-22  4:42 Shreyas B. Prabhu
  2015-03-22  4:42 ` [PATCH v3 2/3] powerpc/powernv: Move cpuidle related code from setup.c to new file Shreyas B. Prabhu
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Shreyas B. Prabhu @ 2015-03-22  4:42 UTC (permalink / raw)
  To: linux-kernel; +Cc: linuxppc-dev, mpe, benh, Shreyas B. Prabhu

Currently, cpu_online_cores_map returns a mask, which for every core
that has at least one online thread, has the first-cpu-of-that-core's bit
set. But the first cpu itself may not always be online. In such cases, if
the returned mask is used for IPIs, then it'll cause IPIs to be skipped
on cores where the first thread is offline.

Fix this by setting first-online-cpu-of-the-core's bit in the mask.
This is done by fixing this in the underlying function
cpu_thread_mask_to_cores.

Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
---
This patch is new in v3

In an example scenario where all the threads of the 1st core are offline
and the argument to cpu_thread_mask_to_cores is cpu_possible_mask,
with this implementation the return value will not have any bit
corresponding to the 1st core set. I think that should be okay. Any thoughts?
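
For illustration, the new semantics can be modelled outside the kernel with
plain 64-bit masks (a standalone sketch with hypothetical names, not the
kernel implementation — real code uses struct cpumask and threads_core_mask):

```c
#include <stdint.h>

#define THREADS_PER_CORE 8
#define NR_CPUS 64

/*
 * Model of the fixed cpu_thread_mask_to_cores(): bit i of a mask
 * represents thread i. For each core hit by @threads, set the bit
 * of the lowest *online* thread of that core, instead of
 * unconditionally using thread 0 of the core.
 */
static uint64_t thread_mask_to_cores(uint64_t threads, uint64_t online)
{
	uint64_t res = 0;
	int i;

	for (i = 0; i < NR_CPUS; i += THREADS_PER_CORE) {
		uint64_t core = ((1ULL << THREADS_PER_CORE) - 1) << i;

		if (threads & core) {
			uint64_t cand = core & online;

			/* x & -x isolates the lowest set bit */
			if (cand)
				res |= cand & -cand;
		}
	}
	return res;
}
```

In the scenario above (all threads of the 1st core offline), the core-0 byte
of the result is simply 0, matching the proposed behaviour.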

 arch/powerpc/include/asm/cputhreads.h | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
index 2bf8e93..9e8485c 100644
--- a/arch/powerpc/include/asm/cputhreads.h
+++ b/arch/powerpc/include/asm/cputhreads.h
@@ -31,9 +31,9 @@ extern cpumask_t threads_core_mask;
 /* cpu_thread_mask_to_cores - Return a cpumask of one per cores
  *                            hit by the argument
  *
- * @threads:	a cpumask of threads
+ * @threads:	a cpumask of online threads
  *
- * This function returns a cpumask which will have one "cpu" (or thread)
+ * This function returns a cpumask which will have one online cpu's
  * bit set for each core that has at least one thread set in the argument.
  *
  * This can typically be used for things like IPI for tlb invalidations
@@ -42,13 +42,16 @@ extern cpumask_t threads_core_mask;
 static inline cpumask_t cpu_thread_mask_to_cores(const struct cpumask *threads)
 {
 	cpumask_t	tmp, res;
-	int		i;
+	int		i, cpu;
 
 	cpumask_clear(&res);
 	for (i = 0; i < NR_CPUS; i += threads_per_core) {
 		cpumask_shift_left(&tmp, &threads_core_mask, i);
-		if (cpumask_intersects(threads, &tmp))
-			cpumask_set_cpu(i, &res);
+		if (cpumask_intersects(threads, &tmp)) {
+			cpu = cpumask_next_and(-1, &tmp, cpu_online_mask);
+			if (cpu < nr_cpu_ids)
+				cpumask_set_cpu(cpu, &res);
+		}
 	}
 	return res;
 }
-- 
1.9.3



* [PATCH v3 2/3] powerpc/powernv: Move cpuidle related code from setup.c to new file
  2015-03-22  4:42 [PATCH v3 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask Shreyas B. Prabhu
@ 2015-03-22  4:42 ` Shreyas B. Prabhu
  2015-03-30  9:40   ` [v3, " Michael Ellerman
  2015-03-22  4:42 ` [PATCH v3 3/3] powerpc/powernv: Introduce sysfs control for fastsleep workaround behavior Shreyas B. Prabhu
  2015-03-30  9:36 ` [v3, 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask Michael Ellerman
  2 siblings, 1 reply; 9+ messages in thread
From: Shreyas B. Prabhu @ 2015-03-22  4:42 UTC (permalink / raw)
  To: linux-kernel; +Cc: linuxppc-dev, mpe, benh, Shreyas B. Prabhu

This is a cleanup patch; it doesn't change any functionality. It moves
all cpuidle-related code from setup.c to a new file.

Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
---
This patch is new in v3

 arch/powerpc/platforms/powernv/Makefile |   2 +-
 arch/powerpc/platforms/powernv/idle.c   | 186 ++++++++++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/setup.c  | 166 ----------------------------
 3 files changed, 187 insertions(+), 167 deletions(-)
 create mode 100644 arch/powerpc/platforms/powernv/idle.c

diff --git a/arch/powerpc/platforms/powernv/Makefile b/arch/powerpc/platforms/powernv/Makefile
index 6f3c5d3..560ee54 100644
--- a/arch/powerpc/platforms/powernv/Makefile
+++ b/arch/powerpc/platforms/powernv/Makefile
@@ -1,4 +1,4 @@
-obj-y			+= setup.o opal-wrappers.o opal.o opal-async.o
+obj-y			+= setup.o opal-wrappers.o opal.o opal-async.o idle.o
 obj-y			+= opal-rtc.o opal-nvram.o opal-lpc.o opal-flash.o
 obj-y			+= rng.o opal-elog.o opal-dump.o opal-sysparam.o opal-sensor.o
 obj-y			+= opal-msglog.o opal-hmi.o opal-power.o
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
new file mode 100644
index 0000000..77992f6
--- /dev/null
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -0,0 +1,186 @@
+/*
+ * PowerNV cpuidle code
+ *
+ * Copyright 2015 IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/of.h>
+
+#include <asm/firmware.h>
+#include <asm/opal.h>
+#include <asm/cputhreads.h>
+#include <asm/cpuidle.h>
+#include <asm/code-patching.h>
+
+#include "powernv.h"
+#include "subcore.h"
+
+static u32 supported_cpuidle_states;
+
+int pnv_save_sprs_for_winkle(void)
+{
+	int cpu;
+	int rc;
+
+	/*
+	 * hid0, hid1, hid4, hid5, hmeer and lpcr values are symmetric accross
+	 * all cpus at boot. Get these reg values of current cpu and use the
+	 * same accross all cpus.
+	 */
+	uint64_t lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
+	uint64_t hid0_val = mfspr(SPRN_HID0);
+	uint64_t hid1_val = mfspr(SPRN_HID1);
+	uint64_t hid4_val = mfspr(SPRN_HID4);
+	uint64_t hid5_val = mfspr(SPRN_HID5);
+	uint64_t hmeer_val = mfspr(SPRN_HMEER);
+
+	for_each_possible_cpu(cpu) {
+		uint64_t pir = get_hard_smp_processor_id(cpu);
+		uint64_t hsprg0_val = (uint64_t)&paca[cpu];
+
+		/*
+		 * HSPRG0 is used to store the cpu's pointer to paca. Hence last
+		 * 3 bits are guaranteed to be 0. Program slw to restore HSPRG0
+		 * with 63rd bit set, so that when a thread wakes up at 0x100 we
+		 * can use this bit to distinguish between fastsleep and
+		 * deep winkle.
+		 */
+		hsprg0_val |= 1;
+
+		rc = opal_slw_set_reg(pir, SPRN_HSPRG0, hsprg0_val);
+		if (rc != 0)
+			return rc;
+
+		rc = opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
+		if (rc != 0)
+			return rc;
+
+		/* HIDs are per core registers */
+		if (cpu_thread_in_core(cpu) == 0) {
+
+			rc = opal_slw_set_reg(pir, SPRN_HMEER, hmeer_val);
+			if (rc != 0)
+				return rc;
+
+			rc = opal_slw_set_reg(pir, SPRN_HID0, hid0_val);
+			if (rc != 0)
+				return rc;
+
+			rc = opal_slw_set_reg(pir, SPRN_HID1, hid1_val);
+			if (rc != 0)
+				return rc;
+
+			rc = opal_slw_set_reg(pir, SPRN_HID4, hid4_val);
+			if (rc != 0)
+				return rc;
+
+			rc = opal_slw_set_reg(pir, SPRN_HID5, hid5_val);
+			if (rc != 0)
+				return rc;
+		}
+	}
+
+	return 0;
+}
+
+static void pnv_alloc_idle_core_states(void)
+{
+	int i, j;
+	int nr_cores = cpu_nr_cores();
+	u32 *core_idle_state;
+
+	/*
+	 * core_idle_state - First 8 bits track the idle state of each thread
+	 * of the core. The 8th bit is the lock bit. Initially all thread bits
+	 * are set. They are cleared when the thread enters deep idle state
+	 * like sleep and winkle. Initially the lock bit is cleared.
+	 * The lock bit has 2 purposes
+	 * a. While the first thread is restoring core state, it prevents
+	 * other threads in the core from switching to process context.
+	 * b. While the last thread in the core is saving the core state, it
+	 * prevents a different thread from waking up.
+	 */
+	for (i = 0; i < nr_cores; i++) {
+		int first_cpu = i * threads_per_core;
+		int node = cpu_to_node(first_cpu);
+
+		core_idle_state = kmalloc_node(sizeof(u32), GFP_KERNEL, node);
+		*core_idle_state = PNV_CORE_IDLE_THREAD_BITS;
+
+		for (j = 0; j < threads_per_core; j++) {
+			int cpu = first_cpu + j;
+
+			paca[cpu].core_idle_state_ptr = core_idle_state;
+			paca[cpu].thread_idle_state = PNV_THREAD_RUNNING;
+			paca[cpu].thread_mask = 1 << j;
+		}
+	}
+
+	update_subcore_sibling_mask();
+
+	if (supported_cpuidle_states & OPAL_PM_WINKLE_ENABLED)
+		pnv_save_sprs_for_winkle();
+}
+
+u32 pnv_get_supported_cpuidle_states(void)
+{
+	return supported_cpuidle_states;
+}
+EXPORT_SYMBOL_GPL(pnv_get_supported_cpuidle_states);
+
+static int __init pnv_init_idle_states(void)
+{
+	struct device_node *power_mgt;
+	int dt_idle_states;
+	const __be32 *idle_state_flags;
+	u32 len_flags, flags;
+	int i;
+
+	supported_cpuidle_states = 0;
+
+	if (cpuidle_disable != IDLE_NO_OVERRIDE)
+		return 0;
+
+	if (!firmware_has_feature(FW_FEATURE_OPALv3))
+		return 0;
+
+	power_mgt = of_find_node_by_path("/ibm,opal/power-mgt");
+	if (!power_mgt) {
+		pr_warn("opal: PowerMgmt Node not found\n");
+		return 0;
+	}
+
+	idle_state_flags = of_get_property(power_mgt,
+			"ibm,cpu-idle-state-flags", &len_flags);
+	if (!idle_state_flags) {
+		pr_warn("DT-PowerMgmt: missing ibm,cpu-idle-state-flags\n");
+		return 0;
+	}
+
+	dt_idle_states = len_flags / sizeof(u32);
+
+	for (i = 0; i < dt_idle_states; i++) {
+		flags = be32_to_cpu(idle_state_flags[i]);
+		supported_cpuidle_states |= flags;
+	}
+	if (!(supported_cpuidle_states & OPAL_PM_SLEEP_ENABLED_ER1)) {
+		patch_instruction(
+			(unsigned int *)pnv_fastsleep_workaround_at_entry,
+			PPC_INST_NOP);
+		patch_instruction(
+			(unsigned int *)pnv_fastsleep_workaround_at_exit,
+			PPC_INST_NOP);
+	}
+	pnv_alloc_idle_core_states();
+	return 0;
+}
+
+subsys_initcall(pnv_init_idle_states);
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index d2de7d5..30b1c3e 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -36,12 +36,8 @@
 #include <asm/opal.h>
 #include <asm/kexec.h>
 #include <asm/smp.h>
-#include <asm/cputhreads.h>
-#include <asm/cpuidle.h>
-#include <asm/code-patching.h>
 
 #include "powernv.h"
-#include "subcore.h"
 
 static void __init pnv_setup_arch(void)
 {
@@ -292,168 +288,6 @@ static void __init pnv_setup_machdep_rtas(void)
 }
 #endif /* CONFIG_PPC_POWERNV_RTAS */
 
-static u32 supported_cpuidle_states;
-
-int pnv_save_sprs_for_winkle(void)
-{
-	int cpu;
-	int rc;
-
-	/*
-	 * hid0, hid1, hid4, hid5, hmeer and lpcr values are symmetric accross
-	 * all cpus at boot. Get these reg values of current cpu and use the
-	 * same accross all cpus.
-	 */
-	uint64_t lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
-	uint64_t hid0_val = mfspr(SPRN_HID0);
-	uint64_t hid1_val = mfspr(SPRN_HID1);
-	uint64_t hid4_val = mfspr(SPRN_HID4);
-	uint64_t hid5_val = mfspr(SPRN_HID5);
-	uint64_t hmeer_val = mfspr(SPRN_HMEER);
-
-	for_each_possible_cpu(cpu) {
-		uint64_t pir = get_hard_smp_processor_id(cpu);
-		uint64_t hsprg0_val = (uint64_t)&paca[cpu];
-
-		/*
-		 * HSPRG0 is used to store the cpu's pointer to paca. Hence last
-		 * 3 bits are guaranteed to be 0. Program slw to restore HSPRG0
-		 * with 63rd bit set, so that when a thread wakes up at 0x100 we
-		 * can use this bit to distinguish between fastsleep and
-		 * deep winkle.
-		 */
-		hsprg0_val |= 1;
-
-		rc = opal_slw_set_reg(pir, SPRN_HSPRG0, hsprg0_val);
-		if (rc != 0)
-			return rc;
-
-		rc = opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
-		if (rc != 0)
-			return rc;
-
-		/* HIDs are per core registers */
-		if (cpu_thread_in_core(cpu) == 0) {
-
-			rc = opal_slw_set_reg(pir, SPRN_HMEER, hmeer_val);
-			if (rc != 0)
-				return rc;
-
-			rc = opal_slw_set_reg(pir, SPRN_HID0, hid0_val);
-			if (rc != 0)
-				return rc;
-
-			rc = opal_slw_set_reg(pir, SPRN_HID1, hid1_val);
-			if (rc != 0)
-				return rc;
-
-			rc = opal_slw_set_reg(pir, SPRN_HID4, hid4_val);
-			if (rc != 0)
-				return rc;
-
-			rc = opal_slw_set_reg(pir, SPRN_HID5, hid5_val);
-			if (rc != 0)
-				return rc;
-		}
-	}
-
-	return 0;
-}
-
-static void pnv_alloc_idle_core_states(void)
-{
-	int i, j;
-	int nr_cores = cpu_nr_cores();
-	u32 *core_idle_state;
-
-	/*
-	 * core_idle_state - First 8 bits track the idle state of each thread
-	 * of the core. The 8th bit is the lock bit. Initially all thread bits
-	 * are set. They are cleared when the thread enters deep idle state
-	 * like sleep and winkle. Initially the lock bit is cleared.
-	 * The lock bit has 2 purposes
-	 * a. While the first thread is restoring core state, it prevents
-	 * other threads in the core from switching to process context.
-	 * b. While the last thread in the core is saving the core state, it
-	 * prevents a different thread from waking up.
-	 */
-	for (i = 0; i < nr_cores; i++) {
-		int first_cpu = i * threads_per_core;
-		int node = cpu_to_node(first_cpu);
-
-		core_idle_state = kmalloc_node(sizeof(u32), GFP_KERNEL, node);
-		*core_idle_state = PNV_CORE_IDLE_THREAD_BITS;
-
-		for (j = 0; j < threads_per_core; j++) {
-			int cpu = first_cpu + j;
-
-			paca[cpu].core_idle_state_ptr = core_idle_state;
-			paca[cpu].thread_idle_state = PNV_THREAD_RUNNING;
-			paca[cpu].thread_mask = 1 << j;
-		}
-	}
-
-	update_subcore_sibling_mask();
-
-	if (supported_cpuidle_states & OPAL_PM_WINKLE_ENABLED)
-		pnv_save_sprs_for_winkle();
-}
-
-u32 pnv_get_supported_cpuidle_states(void)
-{
-	return supported_cpuidle_states;
-}
-EXPORT_SYMBOL_GPL(pnv_get_supported_cpuidle_states);
-
-static int __init pnv_init_idle_states(void)
-{
-	struct device_node *power_mgt;
-	int dt_idle_states;
-	const __be32 *idle_state_flags;
-	u32 len_flags, flags;
-	int i;
-
-	supported_cpuidle_states = 0;
-
-	if (cpuidle_disable != IDLE_NO_OVERRIDE)
-		return 0;
-
-	if (!firmware_has_feature(FW_FEATURE_OPALv3))
-		return 0;
-
-	power_mgt = of_find_node_by_path("/ibm,opal/power-mgt");
-	if (!power_mgt) {
-		pr_warn("opal: PowerMgmt Node not found\n");
-		return 0;
-	}
-
-	idle_state_flags = of_get_property(power_mgt,
-			"ibm,cpu-idle-state-flags", &len_flags);
-	if (!idle_state_flags) {
-		pr_warn("DT-PowerMgmt: missing ibm,cpu-idle-state-flags\n");
-		return 0;
-	}
-
-	dt_idle_states = len_flags / sizeof(u32);
-
-	for (i = 0; i < dt_idle_states; i++) {
-		flags = be32_to_cpu(idle_state_flags[i]);
-		supported_cpuidle_states |= flags;
-	}
-	if (!(supported_cpuidle_states & OPAL_PM_SLEEP_ENABLED_ER1)) {
-		patch_instruction(
-			(unsigned int *)pnv_fastsleep_workaround_at_entry,
-			PPC_INST_NOP);
-		patch_instruction(
-			(unsigned int *)pnv_fastsleep_workaround_at_exit,
-			PPC_INST_NOP);
-	}
-	pnv_alloc_idle_core_states();
-	return 0;
-}
-
-subsys_initcall(pnv_init_idle_states);
-
 static int __init pnv_probe(void)
 {
 	unsigned long root = of_get_flat_dt_root();
-- 
1.9.3



* [PATCH v3 3/3] powerpc/powernv: Introduce sysfs control for fastsleep workaround behavior
  2015-03-22  4:42 [PATCH v3 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask Shreyas B. Prabhu
  2015-03-22  4:42 ` [PATCH v3 2/3] powerpc/powernv: Move cpuidle related code from setup.c to new file Shreyas B. Prabhu
@ 2015-03-22  4:42 ` Shreyas B. Prabhu
  2015-03-30 10:21   ` [v3, " Michael Ellerman
  2015-03-30  9:36 ` [v3, 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask Michael Ellerman
  2 siblings, 1 reply; 9+ messages in thread
From: Shreyas B. Prabhu @ 2015-03-22  4:42 UTC (permalink / raw)
  To: linux-kernel; +Cc: linuxppc-dev, mpe, benh, Shreyas B. Prabhu

Fastsleep is one of the idle states which the cpuidle subsystem currently
uses on power8 machines. In this state the L2 cache is brought down to a
threshold voltage. Therefore, when the core is in fastsleep, the
communication between L2 and L3 needs to be fenced. But there is a bug
in the current power8 chips surrounding this fencing.

OPAL provides a workaround which precludes the possibility of hitting
this bug. But running with this workaround applied causes checkstop
if any correctable error in L2 cache directory is detected. Hence OPAL
also provides a way to undo the workaround.

In the existing implementation, the workaround is applied by the last thread
of the core entering fastsleep and undone by the first thread waking up.
But this has a performance cost. These OPAL calls account for roughly
4000 cycles every time the core has to enter or wake up from fastsleep.

This patch introduces a sysfs attribute (fastsleep_workaround_state)
to choose the behavior of this workaround.

By default, fastsleep_workaround_state = 0. In this case, the workaround
is applied/undone every time the core enters/exits fastsleep.

fastsleep_workaround_state = 1. In this case the workaround is applied
once on all the cores and never undone. This can be triggered by
echo 1 > /sys/devices/system/cpu/fastsleep_workaround_state

For simplicity this attribute can be modified only once. That is, once
fastsleep_workaround_state has been changed to 1, it cannot be reverted to
the default state.

Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
---
Changes in V3-
Kernel parameter changed to sysfs attribute
Modified commit message

 arch/powerpc/include/asm/opal.h                |  8 +++
 arch/powerpc/platforms/powernv/idle.c          | 83 +++++++++++++++++++++++++-
 arch/powerpc/platforms/powernv/opal-wrappers.S |  1 +
 3 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index 9ee0a30..8bea8fc 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -180,6 +180,13 @@ struct opal_sg_list {
 #define OPAL_PM_WINKLE_ENABLED	0x00040000
 #define OPAL_PM_SLEEP_ENABLED_ER1	0x00080000
 
+/*
+ * OPAL_CONFIG_CPU_IDLE_STATE parameters
+ */
+#define OPAL_CONFIG_IDLE_FASTSLEEP	1
+#define OPAL_CONFIG_IDLE_UNDO		0
+#define OPAL_CONFIG_IDLE_APPLY		1
+
 #ifndef __ASSEMBLY__
 
 #include <linux/notifier.h>
@@ -924,6 +931,7 @@ int64_t opal_handle_hmi(void);
 int64_t opal_register_dump_region(uint32_t id, uint64_t start, uint64_t end);
 int64_t opal_unregister_dump_region(uint32_t id);
 int64_t opal_slw_set_reg(uint64_t cpu_pir, uint64_t sprn, uint64_t val);
+int64_t opal_config_cpu_idle_state(uint64_t state, uint64_t flag);
 int64_t opal_pci_set_phb_cxl_mode(uint64_t phb_id, uint64_t mode, uint64_t pe_number);
 int64_t opal_ipmi_send(uint64_t interface, struct opal_ipmi_msg *msg,
 		uint64_t msg_len);
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 77992f6..79157b9 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -13,6 +13,8 @@
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/of.h>
+#include <linux/device.h>
+#include <linux/cpu.h>
 
 #include <asm/firmware.h>
 #include <asm/opal.h>
@@ -136,6 +138,77 @@ u32 pnv_get_supported_cpuidle_states(void)
 }
 EXPORT_SYMBOL_GPL(pnv_get_supported_cpuidle_states);
 
+static void pnv_fastsleep_workaround_apply(void *info)
+{
+	opal_config_cpu_idle_state(OPAL_CONFIG_IDLE_FASTSLEEP,
+					OPAL_CONFIG_IDLE_APPLY);
+}
+
+/*
+ * Used to store fastsleep workaround state
+ * 0 - Workaround applied/undone at fastsleep entry/exit path (Default)
+ * 1 - Workaround applied once, never undone.
+ */
+static u8 fastsleep_workaround_state;
+
+static ssize_t show_fastsleep_workaround_state(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%u\n", fastsleep_workaround_state);
+}
+
+static ssize_t store_fastsleep_workaround_state(struct device *dev,
+		struct device_attribute *attr, const char *buf,
+		size_t count)
+{
+	u32 val;
+	cpumask_t primary_thread_mask;
+
+	/*
+	 * fastsleep_workaround_state is write-once parameter.
+	 * Once it has been set to 1, it cannot be undone.
+	 */
+	if (fastsleep_workaround_state == 1)
+		return -EINVAL;
+
+	if (kstrtou32(buf, 0, &val))
+		return -EINVAL;
+
+	if (val > 1)
+		return -EINVAL;
+
+	fastsleep_workaround_state = 1;
+	/*
+	 * fastsleep_workaround_state = 1 implies fastsleep workaround needs to
+	 * be left in 'applied' state on all the cores. Do this by-
+	 * 1. Patching out the call to 'undo' workaround in fastsleep exit path
+	 * 2. Sending ipi to all the cores which have atleast one online thread
+	 * 3. Patching out the call to 'apply' workaround in fastsleep entry
+	 * path
+	 * There is no need to send ipi to cores which have all threads
+	 * offlined, as last thread of the core entering fastsleep or deeper
+	 * state would have applied workaround.
+	 */
+	patch_instruction(
+		(unsigned int *)pnv_fastsleep_workaround_at_exit,
+		PPC_INST_NOP);
+
+	primary_thread_mask = cpu_online_cores_map();
+	on_each_cpu_mask(&primary_thread_mask,
+				pnv_fastsleep_workaround_apply,
+				NULL, 1);
+
+	patch_instruction(
+		(unsigned int *)pnv_fastsleep_workaround_at_entry,
+		PPC_INST_NOP);
+
+	return count;
+}
+
+static DEVICE_ATTR(fastsleep_workaround_state, 0600,
+			show_fastsleep_workaround_state,
+			store_fastsleep_workaround_state);
+
 static int __init pnv_init_idle_states(void)
 {
 	struct device_node *power_mgt;
@@ -178,7 +251,15 @@ static int __init pnv_init_idle_states(void)
 		patch_instruction(
 			(unsigned int *)pnv_fastsleep_workaround_at_exit,
 			PPC_INST_NOP);
-	}
+	} else
+	/*
+	 * OPAL_PM_SLEEP_ENABLED_ER1 is set. It indicates that workaround is
+	 * needed to use fastsleep. Provide sysfs control to choose how this
+	 * workaround has to be applied.
+	 */
+		device_create_file(cpu_subsys.dev_root,
+					&dev_attr_fastsleep_workaround_state);
+
 	pnv_alloc_idle_core_states();
 	return 0;
 }
diff --git a/arch/powerpc/platforms/powernv/opal-wrappers.S b/arch/powerpc/platforms/powernv/opal-wrappers.S
index 0509bca..84a20bb 100644
--- a/arch/powerpc/platforms/powernv/opal-wrappers.S
+++ b/arch/powerpc/platforms/powernv/opal-wrappers.S
@@ -283,6 +283,7 @@ OPAL_CALL(opal_sensor_read,			OPAL_SENSOR_READ);
 OPAL_CALL(opal_get_param,			OPAL_GET_PARAM);
 OPAL_CALL(opal_set_param,			OPAL_SET_PARAM);
 OPAL_CALL(opal_handle_hmi,			OPAL_HANDLE_HMI);
+OPAL_CALL(opal_config_cpu_idle_state,		OPAL_CONFIG_CPU_IDLE_STATE);
 OPAL_CALL(opal_slw_set_reg,			OPAL_SLW_SET_REG);
 OPAL_CALL(opal_register_dump_region,		OPAL_REGISTER_DUMP_REGION);
 OPAL_CALL(opal_unregister_dump_region,		OPAL_UNREGISTER_DUMP_REGION);
-- 
1.9.3



* Re: [v3, 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask
  2015-03-22  4:42 [PATCH v3 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask Shreyas B. Prabhu
  2015-03-22  4:42 ` [PATCH v3 2/3] powerpc/powernv: Move cpuidle related code from setup.c to new file Shreyas B. Prabhu
  2015-03-22  4:42 ` [PATCH v3 3/3] powerpc/powernv: Introduce sysfs control for fastsleep workaround behavior Shreyas B. Prabhu
@ 2015-03-30  9:36 ` Michael Ellerman
  2015-03-30 17:00   ` Shreyas B Prabhu
  2 siblings, 1 reply; 9+ messages in thread
From: Michael Ellerman @ 2015-03-30  9:36 UTC (permalink / raw)
  To: Shreyas B. Prabhu, linux-kernel; +Cc: Shreyas B. Prabhu, linuxppc-dev

On Sun, 2015-03-22 at 04:42:57 UTC, "Shreyas B. Prabhu" wrote:
> Currently, cpu_online_cores_map returns a mask, which for every core
> that has at least one online thread, has the first-cpu-of-that-core's bit
> set. 

  ... which for every core with at least one online thread, has the bit for
  thread 0 of the core set to 1, and the bits for all other threads of the core
  set to 0.

Maybe that's clearer?

> But the first cpu itself may not be online always. In such cases, if
                   ^
		   of the core

> the returned mask is used for IPI, then it'll cause IPIs to be skipped
> on cores where the first thread is offline.

  .. because the IPI code refuses to send IPIs to offline threads, right?

> Fix this by setting first-online-cpu-of-the-core's bit in the mask.

  .. by setting the bit of the first online thread in the core.

> This is done by fixing this in the underlying function
> cpu_thread_mask_to_cores.


The result has the property that for all cores with online threads, there is
one bit set in the returned map. And further, all bits that are set in the
returned map correspond to online threads.


> Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
> ---
> This patch is new in v3
> 
> In an example scenario where all the threads of 1st core are offline
> and argument to cpu_thread_mask_to_cores is cpu_possible_mask,
> with this implementation, return value will not have any bit
> corresponding to 1st core set. I think that should be okay. Any thoughts?

Looking at linux-next:

  $ git grep cpu_thread_mask_to_cores
  arch/powerpc/include/asm/cputhreads.h:/* cpu_thread_mask_to_cores - Return a cpumask of one per cores
  arch/powerpc/include/asm/cputhreads.h:static inline cpumask_t cpu_thread_mask_to_cores(const struct cpumask *threads)
  arch/powerpc/include/asm/cputhreads.h:  return cpu_thread_mask_to_cores(cpu_online_mask);
  $ git grep cpu_online_cores_map
  arch/powerpc/include/asm/cputhreads.h:static inline cpumask_t cpu_online_cores_map(void)

ie. There are no users.

So yeah I think we can change the semantics of this, and the semantics you
describe make sense.

If you agree with my changelog comments I'm happy to fix that up and merge
this, or you can send a v4 if you like.

cheers


* Re: [v3, 2/3] powerpc/powernv: Move cpuidle related code from setup.c to new file
  2015-03-22  4:42 ` [PATCH v3 2/3] powerpc/powernv: Move cpuidle related code from setup.c to new file Shreyas B. Prabhu
@ 2015-03-30  9:40   ` Michael Ellerman
  0 siblings, 0 replies; 9+ messages in thread
From: Michael Ellerman @ 2015-03-30  9:40 UTC (permalink / raw)
  To: Shreyas B. Prabhu, linux-kernel; +Cc: Shreyas B. Prabhu, linuxppc-dev

On Sun, 2015-03-22 at 04:42:58 UTC, "Shreyas B. Prabhu" wrote:
> This is a cleanup patch; it doesn't change any functionality. It moves
> all cpuidle-related code from setup.c to a new file.
> 
> Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
> ---
> This patch is new in v3
> 
>  arch/powerpc/platforms/powernv/Makefile |   2 +-
>  arch/powerpc/platforms/powernv/idle.c   | 186 ++++++++++++++++++++++++++++++++
>  arch/powerpc/platforms/powernv/setup.c  | 166 ----------------------------

Sorry this no longer applies, because I merged some changes from Preeti to
__init pnv_init_idle_states().

Can you please rebase on top of linux-next, or my next.

cheers


* Re: [v3, 3/3] powerpc/powernv: Introduce sysfs control for fastsleep workaround behavior
  2015-03-22  4:42 ` [PATCH v3 3/3] powerpc/powernv: Introduce sysfs control for fastsleep workaround behavior Shreyas B. Prabhu
@ 2015-03-30 10:21   ` Michael Ellerman
  2015-03-30 17:15     ` Shreyas B Prabhu
  0 siblings, 1 reply; 9+ messages in thread
From: Michael Ellerman @ 2015-03-30 10:21 UTC (permalink / raw)
  To: Shreyas B. Prabhu, linux-kernel; +Cc: linuxppc-dev

On Sun, 2015-03-22 at 04:42:59 UTC, "Shreyas B. Prabhu" wrote:
> Fastsleep is one of the idle states which the cpuidle subsystem currently
> uses on power8 machines. In this state the L2 cache is brought down to a
> threshold voltage. Therefore, when the core is in fastsleep, the
> communication between L2 and L3 needs to be fenced. But there is a bug
> in the current power8 chips surrounding this fencing.
> 
> OPAL provides a workaround which precludes the possibility of hitting
> this bug. But running with this workaround applied causes checkstop
> if any correctable error in L2 cache directory is detected. Hence OPAL
> also provides a way to undo the workaround.
> 
> In the existing implementation, the workaround is applied by the last thread
> of the core entering fastsleep and undone by the first thread waking up.
> But this has a performance cost. These OPAL calls account for roughly
> 4000 cycles every time the core has to enter or wake up from fastsleep.
> 
> This patch introduces a sysfs attribute (fastsleep_workaround_state)
> to choose the behavior of this workaround.
> 
> By default, fastsleep_workaround_state = 0. In this case, the workaround
> is applied/undone every time the core enters/exits fastsleep.
> 
> fastsleep_workaround_state = 1. In this case the workaround is applied
> once on all the cores and never undone. This can be triggered by
> echo 1 > /sys/devices/system/cpu/fastsleep_workaround_state
> 
> For simplicity this attribute can be modified only once. That is, once
> fastsleep_workaround_state has been changed to 1, it cannot be reverted to
> the default state.

This sounds good, although the name is a bit vague.

Just calling it "state" doesn't make it clear what 0 and 1 mean.
I think better would be "fastsleep_workaround_active" ?

Though even that is a bit wrong, because 0 doesn't really mean it's not active,
it means it's not *permanently* active.

So another option would be to make it a string attribute, with the initial
state being eg. "dynamic" and then maybe "applied" for the applied state?

I won't say you have to do that, but think about it, seeing as I'm going to ask
for a v4 anyway (see comments below).
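
For what it's worth, the string variant could look something like this as a
userspace sketch (hypothetical names; plain strcmp() stands in for the
kernel's sysfs_streq(), which also tolerates a trailing newline):

```c
#include <string.h>

enum wa_state { WA_DYNAMIC, WA_APPLIED };

static enum wa_state cur_state = WA_DYNAMIC;

/* Map a written string to a state; -1 on anything unrecognised. */
static int parse_wa_state(const char *buf)
{
	if (strcmp(buf, "dynamic") == 0)
		return WA_DYNAMIC;
	if (strcmp(buf, "applied") == 0)
		return WA_APPLIED;
	return -1;
}

/* What show() would print for the current state. */
static const char *show_wa_state(void)
{
	return cur_state == WA_APPLIED ? "applied" : "dynamic";
}
```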

> diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
> index 9ee0a30..8bea8fc 100644
> --- a/arch/powerpc/include/asm/opal.h
> +++ b/arch/powerpc/include/asm/opal.h
> @@ -180,6 +180,13 @@ struct opal_sg_list {
>  #define OPAL_PM_WINKLE_ENABLED	0x00040000
>  #define OPAL_PM_SLEEP_ENABLED_ER1	0x00080000
>  
> +/*
> + * OPAL_CONFIG_CPU_IDLE_STATE parameters
> + */
> +#define OPAL_CONFIG_IDLE_FASTSLEEP	1
> +#define OPAL_CONFIG_IDLE_UNDO		0
> +#define OPAL_CONFIG_IDLE_APPLY		1

The OPAL defines have moved to opal-api.h in Linux.

They should also be made #defines in skiboot.

> diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
> index 77992f6..79157b9 100644
> --- a/arch/powerpc/platforms/powernv/idle.c
> +++ b/arch/powerpc/platforms/powernv/idle.c
> @@ -13,6 +13,8 @@
>  #include <linux/mm.h>
>  #include <linux/slab.h>
>  #include <linux/of.h>
> +#include <linux/device.h>
> +#include <linux/cpu.h>
>  
>  #include <asm/firmware.h>
>  #include <asm/opal.h>
> @@ -136,6 +138,77 @@ u32 pnv_get_supported_cpuidle_states(void)
>  }
>  EXPORT_SYMBOL_GPL(pnv_get_supported_cpuidle_states);
>  
> +static void pnv_fastsleep_workaround_apply(void *info)
> +{
> +	opal_config_cpu_idle_state(OPAL_CONFIG_IDLE_FASTSLEEP,
> +					OPAL_CONFIG_IDLE_APPLY);

This can fail, please check the return. You'll need to report the result via
*info.

> +}
> +
> +/*
> + * Used to store fastsleep workaround state
> + * 0 - Workaround applied/undone at fastsleep entry/exit path (Default)
> + * 1 - Workaround applied once, never undone.
> + */
> +static u8 fastsleep_workaround_state;
> +
> +static ssize_t show_fastsleep_workaround_state(struct device *dev,
> +		struct device_attribute *attr, char *buf)
> +{
> +	return sprintf(buf, "%u\n", fastsleep_workaround_state);
> +}
> +
> +static ssize_t store_fastsleep_workaround_state(struct device *dev,
> +		struct device_attribute *attr, const char *buf,
> +		size_t count)
> +{
> +	u32 val;
> +	cpumask_t primary_thread_mask;
> +
> +	/*
> +	 * fastsleep_workaround_state is write-once parameter.
> +	 * Once it has been set to 1, it cannot be undone.
> +	 */
> +	if (fastsleep_workaround_state == 1)
> +		return -EINVAL;

Better behaviour here is to delay this check until after you've done the
kstrtou32(), and if they are asking to set it to 1 (again) then you just return
count (OK).

That way scripts can just "echo 1 > .." and if the workaround is already
applied then there is no error.
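
A minimal userspace sketch of that ordering (hypothetical names and
simplified errors; apply_workaround() stands in for the whole
patch_instruction()/IPI sequence):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

static int workaround_state;	/* 0 = dynamic (default), 1 = applied */

/* Stand-in for the patch_instruction() + IPI sequence. */
static int apply_workaround(void)
{
	return 0;	/* pretend success */
}

/* Model of store(): parse first, be idempotent, latch state last. */
static long store_state(const char *buf)
{
	char *end;
	unsigned long val = strtoul(buf, &end, 0);

	if (end == buf || val > 1)
		return -EINVAL;
	if (val == (unsigned long)workaround_state)
		return (long)strlen(buf);	/* "echo 1" twice: OK */
	if (val == 0)
		return -EINVAL;			/* write-once, no undo */
	if (apply_workaround())
		return -EIO;			/* don't latch on failure */
	workaround_state = 1;
	return (long)strlen(buf);
}
```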

> +	if (kstrtou32(buf, 0, &val))
> +		return -EINVAL;

You use a u8 above, so why not a u8 here?

> +	if (val > 1)
> +		return -EINVAL;
> +
> +	fastsleep_workaround_state = 1;

You should delay setting this until below, when you know it has succeeded.

> +	/*
> +	 * fastsleep_workaround_state = 1 implies fastsleep workaround needs to
> +	 * be left in 'applied' state on all the cores. Do this by-
> +	 * 1. Patching out the call to 'undo' workaround in fastsleep exit path
> +	 * 2. Sending ipi to all the cores which have atleast one online thread
> +	 * 3. Patching out the call to 'apply' workaround in fastsleep entry
> +	 * path
> +	 * There is no need to send ipi to cores which have all threads
> +	 * offlined, as last thread of the core entering fastsleep or deeper
> +	 * state would have applied workaround.
> +	 */
> +	patch_instruction(
> +		(unsigned int *)pnv_fastsleep_workaround_at_exit,
> +		PPC_INST_NOP);

This can fail.

> +	primary_thread_mask = cpu_online_cores_map();
> +	on_each_cpu_mask(&primary_thread_mask,
> +				pnv_fastsleep_workaround_apply,
> +				NULL, 1);
> +
> +	patch_instruction(
> +		(unsigned int *)pnv_fastsleep_workaround_at_entry,
> +		PPC_INST_NOP);

And so can this.

> +	return count;
> +}
> +
> +static DEVICE_ATTR(fastsleep_workaround_state, 0600,
> +			show_fastsleep_workaround_state,
> +			store_fastsleep_workaround_state);
> +
>  static int __init pnv_init_idle_states(void)
>  {
>  	struct device_node *power_mgt;
> @@ -178,7 +251,15 @@ static int __init pnv_init_idle_states(void)
>  		patch_instruction(
>  			(unsigned int *)pnv_fastsleep_workaround_at_exit,
>  			PPC_INST_NOP);
> -	}
> +	} else

I know the coding style says not to bracket a single-statement else block, but
I disagree in cases like this. Because the comment is so big, it's preferable
to bracket it IMHO.

> +	/*
> +	 * OPAL_PM_SLEEP_ENABLED_ER1 is set. It indicates that workaround is
> +	 * needed to use fastsleep. Provide sysfs control to choose how this
> +	 * workaround has to be applied.
> +	 */

And the comment should be indented to match the code.

> +		device_create_file(cpu_subsys.dev_root,
> +					&dev_attr_fastsleep_workaround_state);
> +
>  	pnv_alloc_idle_core_states();
>  	return 0;
>  }

cheers

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [v3, 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask
  2015-03-30  9:36 ` [v3, 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask Michael Ellerman
@ 2015-03-30 17:00   ` Shreyas B Prabhu
  0 siblings, 0 replies; 9+ messages in thread
From: Shreyas B Prabhu @ 2015-03-30 17:00 UTC (permalink / raw)
  To: Michael Ellerman, linux-kernel; +Cc: linuxppc-dev



On Monday 30 March 2015 03:06 PM, Michael Ellerman wrote:
> On Sun, 2015-22-03 at 04:42:57 UTC, "Shreyas B. Prabhu" wrote:
>> Currently, cpu_online_cores_map returns a mask, which for every core
>> that has atleast one online thread, has the first-cpu-of-that-core's bit
>> set. 
> 
>   ... which for every core with at least one online thread, has the bit for
>   thread 0 of the core set to 1, and the bits for all other threads of the core
>   set to 0.
> 
> Maybe that's clearer?
> 
>> But the first cpu itself may not be online always. In such cases, if
>                    ^
> 		   of the core
> 
>> the returned mask is used for IPI, then it'll cause IPIs to be skipped
>> on cores where the first thread is offline.
> 
>   .. because the IPI code refuses to send IPIs to offline threads, right?

Yes.
> 
>> Fix this by setting first-online-cpu-of-the-core's bit in the mask.
> 
>   .. by setting the bit of the first online thread in the core.
> 
>> This is done by fixing this in the underlying function
>> cpu_thread_mask_to_cores.
> 
> 
> The result has the property that for all cores with online threads, there is
> one bit set in the returned map. And further, all bits that are set in the
> returned map correspond to online threads.
> 
> 
>> Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
>> ---
>> This patch is new in v3
>>
>> In an example scenario where all the threads of 1st core are offline
>> and argument to cpu_thread_mask_to_cores is cpu_possible_mask,
>> with this implementation, return value will not have any bit
>> corresponding to 1st core set. I think that should be okay. Any thoughts?
> 
> Looking at linux-next:
> 
>   $ git grep cpu_thread_mask_to_cores
>   arch/powerpc/include/asm/cputhreads.h:/* cpu_thread_mask_to_cores - Return a cpumask of one per cores
>   arch/powerpc/include/asm/cputhreads.h:static inline cpumask_t cpu_thread_mask_to_cores(const struct cpumask *threads)
>   arch/powerpc/include/asm/cputhreads.h:  return cpu_thread_mask_to_cores(cpu_online_mask);
>   $ git grep cpu_online_cores_map
>   arch/powerpc/include/asm/cputhreads.h:static inline cpumask_t cpu_online_cores_map(void)
> 
> ie. There are no users.
> 
> So yeah I think we can change the semantics of this, and the semantics you
> describe make sense.
> 
> If you agree with my changelog comments I'm happy to fix that up and merge
> this, or you can send a v4 if you like.
> 

I'll fix the changelog in v4.
> cheers
> 


* Re: [v3, 3/3] powerpc/powernv: Introduce sysfs control for fastsleep workaround behavior
  2015-03-30 10:21   ` [v3, " Michael Ellerman
@ 2015-03-30 17:15     ` Shreyas B Prabhu
  2015-03-31  4:24       ` Michael Ellerman
  0 siblings, 1 reply; 9+ messages in thread
From: Shreyas B Prabhu @ 2015-03-30 17:15 UTC (permalink / raw)
  To: Michael Ellerman, linux-kernel; +Cc: linuxppc-dev



On Monday 30 March 2015 03:51 PM, Michael Ellerman wrote:
> On Sun, 2015-22-03 at 04:42:59 UTC, "Shreyas B. Prabhu" wrote:
>> Fastsleep is one of the idle state which cpuidle subsystem currently
>> uses on power8 machines. In this state L2 cache is brought down to a
>> threshold voltage. Therefore when the core is in fastsleep, the
>> communication between L2 and L3 needs to be fenced. But there is a bug
>> in the current power8 chips surrounding this fencing.
>>
>> OPAL provides a workaround which precludes the possibility of hitting
>> this bug. But running with this workaround applied causes checkstop
>> if any correctable error in L2 cache directory is detected. Hence OPAL
>> also provides a way to undo the workaround.
>>
>> In the existing implementation, workaround is applied by the last thread
>> of the core entering fastsleep and undone by the first thread waking up.
>> But this has a performance cost. These OPAL calls account for roughly
>> 4000 cycles everytime the core has to enter or wakeup from fastsleep.
>>
>> This patch introduces a sysfs attribute (fastsleep_workaround_state)
>> to choose the behavior of this workaround.
>>
>> By default, fastsleep_workaround_state = 0. In this case, workaround
>> is applied/undone everytime the core enters/exits fastsleep.
>>
>> fastsleep_workaround_state = 1. In this case the workaround is applied
>> once on all the cores and never undone. This can be triggered by
>> echo 1 > /sys/devices/system/cpu/fastsleep_workaround_state
>>
>> For simplicity this attribute can be modified only once. Implying, once
>> fastsleep_workaround_state is changed to 1, it cannot be reverted to
>> the default state.
> 
> This sounds good, although the name is a bit vague.
> 
> Just calling it "state" doesn't make it clear what 0 and 1 mean.
> I think better would be "fastsleep_workaround_active" ?
> 
> Though even that is a bit wrong, because 0 doesn't really mean it's not active,
> it means it's not *permanently* active.
> 
> So another option would be to make it a string attribute, with the initial
> state being eg. "dynamic" and then maybe "applied" for the applied state?
> 
How about "fastsleep_workaround_permanent", with a default value of 0? The
user can make the workaround permanent by echoing 1 to it.

I'll post out V4 with the suggested changes.


Thanks,
Shreyas


* Re: [v3, 3/3] powerpc/powernv: Introduce sysfs control for fastsleep workaround behavior
  2015-03-30 17:15     ` Shreyas B Prabhu
@ 2015-03-31  4:24       ` Michael Ellerman
  0 siblings, 0 replies; 9+ messages in thread
From: Michael Ellerman @ 2015-03-31  4:24 UTC (permalink / raw)
  To: Shreyas B Prabhu; +Cc: linux-kernel, linuxppc-dev

On Mon, 2015-03-30 at 22:45 +0530, Shreyas B Prabhu wrote:
> On Monday 30 March 2015 03:51 PM, Michael Ellerman wrote:
> > 
> > This sounds good, although the name is a bit vague.
 
> How about "fastsleep_workaround_permanent", with default value = 0. User
> can make workaround permanent by echoing 1 to it.

Yeah that's OK.

cheers



Thread overview: 9+ messages
2015-03-22  4:42 [PATCH v3 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask Shreyas B. Prabhu
2015-03-22  4:42 ` [PATCH v3 2/3] powerpc/powernv: Move cpuidle related code from setup.c to new file Shreyas B. Prabhu
2015-03-30  9:40   ` [v3, " Michael Ellerman
2015-03-22  4:42 ` [PATCH v3 3/3] powerpc/powernv: Introduce sysfs control for fastsleep workaround behavior Shreyas B. Prabhu
2015-03-30 10:21   ` [v3, " Michael Ellerman
2015-03-30 17:15     ` Shreyas B Prabhu
2015-03-31  4:24       ` Michael Ellerman
2015-03-30  9:36 ` [v3, 1/3] powerpc: Fix cpu_online_cores_map to return only online threads mask Michael Ellerman
2015-03-30 17:00   ` Shreyas B Prabhu
