LKML Archive on lore.kernel.org
* [0/2] vmalloc: Add /proc/vmallocinfo to display mappings
@ 2008-03-18 22:27 Christoph Lameter
  2008-03-18 22:27 ` [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo Christoph Lameter
                   ` (2 more replies)
  0 siblings, 3 replies; 28+ messages in thread
From: Christoph Lameter @ 2008-03-18 22:27 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel

The following two patches implement /proc/vmallocinfo. /proc/vmallocinfo
displays data about the current vmalloc allocations. The second patch adds
a tracing feature that makes it possible to display the function that
allocated each vmalloc area.

Example:

cat /proc/vmallocinfo

0xffffc20000000000-0xffffc20000801000 8392704 alloc_large_system_hash+0x127/0x246 pages=2048 vmalloc vpages
0xffffc20000801000-0xffffc20000806000   20480 alloc_large_system_hash+0x127/0x246 pages=4 vmalloc
0xffffc20000806000-0xffffc20000c07000 4198400 alloc_large_system_hash+0x127/0x246 pages=1024 vmalloc vpages
0xffffc20000c07000-0xffffc20000c0a000   12288 alloc_large_system_hash+0x127/0x246 pages=2 vmalloc
0xffffc20000c0a000-0xffffc20000c0c000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c0c000-0xffffc20000c0f000   12288 acpi_os_map_memory+0x13/0x1c phys=cff64000 ioremap
0xffffc20000c10000-0xffffc20000c15000   20480 acpi_os_map_memory+0x13/0x1c phys=cff65000 ioremap
0xffffc20000c16000-0xffffc20000c18000    8192 acpi_os_map_memory+0x13/0x1c phys=cff69000 ioremap
0xffffc20000c18000-0xffffc20000c1a000    8192 acpi_os_map_memory+0x13/0x1c phys=fed1f000 ioremap
0xffffc20000c1a000-0xffffc20000c1c000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c1c000-0xffffc20000c1e000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c1e000-0xffffc20000c20000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c20000-0xffffc20000c22000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c22000-0xffffc20000c24000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c24000-0xffffc20000c26000    8192 acpi_os_map_memory+0x13/0x1c phys=e0081000 ioremap
0xffffc20000c26000-0xffffc20000c28000    8192 acpi_os_map_memory+0x13/0x1c phys=e0080000 ioremap
0xffffc20000c28000-0xffffc20000c2d000   20480 alloc_large_system_hash+0x127/0x246 pages=4 vmalloc
0xffffc20000c2d000-0xffffc20000c31000   16384 tcp_init+0xd5/0x31c pages=3 vmalloc
0xffffc20000c31000-0xffffc20000c34000   12288 alloc_large_system_hash+0x127/0x246 pages=2 vmalloc
0xffffc20000c34000-0xffffc20000c36000    8192 init_vdso_vars+0xde/0x1f1
0xffffc20000c36000-0xffffc20000c38000    8192 pci_iomap+0x8a/0xb4 phys=d8e00000 ioremap
0xffffc20000c38000-0xffffc20000c3a000    8192 usb_hcd_pci_probe+0x139/0x295 [usbcore] phys=d8e00000 ioremap
0xffffc20000c3a000-0xffffc20000c3e000   16384 sys_swapon+0x509/0xa15 pages=3 vmalloc
0xffffc20000c40000-0xffffc20000c61000  135168 e1000_probe+0x1c4/0xa32 phys=d8a20000 ioremap
0xffffc20000c61000-0xffffc20000c6a000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20000c6a000-0xffffc20000c73000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20000c73000-0xffffc20000c7c000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20000c7c000-0xffffc20000c7f000   12288 e1000e_setup_tx_resources+0x29/0xbe pages=2 vmalloc
0xffffc20000c80000-0xffffc20001481000 8392704 pci_mmcfg_arch_init+0x90/0x118 phys=e0000000 ioremap
0xffffc20001481000-0xffffc20001682000 2101248 alloc_large_system_hash+0x127/0x246 pages=512 vmalloc
0xffffc20001682000-0xffffc20001e83000 8392704 alloc_large_system_hash+0x127/0x246 pages=2048 vmalloc vpages
0xffffc20001e83000-0xffffc20002204000 3674112 alloc_large_system_hash+0x127/0x246 pages=896 vmalloc vpages
0xffffc20002204000-0xffffc2000220d000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc2000220d000-0xffffc20002216000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20002216000-0xffffc2000221f000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc2000221f000-0xffffc20002228000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20002228000-0xffffc20002231000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20002231000-0xffffc20002234000   12288 e1000e_setup_rx_resources+0x35/0x122 pages=2 vmalloc
0xffffc20002240000-0xffffc20002261000  135168 e1000_probe+0x1c4/0xa32 phys=d8a60000 ioremap
0xffffc20002261000-0xffffc2000270c000 4894720 sys_swapon+0x509/0xa15 pages=1194 vmalloc vpages
0xffffffffa0000000-0xffffffffa0022000  139264 module_alloc+0x4f/0x55 pages=33 vmalloc
0xffffffffa0022000-0xffffffffa0029000   28672 module_alloc+0x4f/0x55 pages=6 vmalloc
0xffffffffa002b000-0xffffffffa0034000   36864 module_alloc+0x4f/0x55 pages=8 vmalloc
0xffffffffa0034000-0xffffffffa003d000   36864 module_alloc+0x4f/0x55 pages=8 vmalloc
0xffffffffa003d000-0xffffffffa0049000   49152 module_alloc+0x4f/0x55 pages=11 vmalloc
0xffffffffa0049000-0xffffffffa0050000   28672 module_alloc+0x4f/0x55 pages=6 vmalloc
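
Each line shows the mapped address range and the size in bytes, then (with
patch 2/2 applied) the caller that set up the mapping, then the per-mapping
attributes (pages=, phys= and the mapping type). As a rough illustration
only (this program is not part of the patches and simply assumes the output
layout shown above), a small userspace tool can total the vmalloc space
used per caller:

#include <stdio.h>
#include <string.h>

#define MAX_CALLERS 1024

struct total {
	char sym[128];
	long bytes;
};

int main(void)
{
	static struct total t[MAX_CALLERS];
	char line[256], range[64], sym[128];
	long size;
	int i, n = 0;
	FILE *f = fopen("/proc/vmallocinfo", "r");

	if (!f) {
		perror("/proc/vmallocinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* "<range> <size> <caller> ...": the third field is the
		   caller only when patch 2/2 is applied */
		int rc = sscanf(line, "%63s %ld %127s", range, &size, sym);

		if (rc < 2)
			continue;
		if (rc < 3)
			strcpy(sym, "(unknown)");
		for (i = 0; i < n && strcmp(t[i].sym, sym); i++)
			;
		if (i == n) {
			if (n == MAX_CALLERS)
				continue;
			strncpy(t[n].sym, sym, sizeof(t[n].sym) - 1);
			n++;
		}
		t[i].bytes += size;
	}
	fclose(f);
	for (i = 0; i < n; i++)
		printf("%10ld %s\n", t[i].bytes, t[i].sym);
	return 0;
}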


-- 


* [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo
  2008-03-18 22:27 [0/2] vmalloc: Add /proc/vmallocinfo to display mappings Christoph Lameter
@ 2008-03-18 22:27 ` Christoph Lameter
  2008-03-20  4:04   ` Arjan van de Ven
  2008-03-18 22:27 ` [2/2] vmallocinfo: Add caller information Christoph Lameter
  2008-03-19  2:23 ` [0/2] vmalloc: Add /proc/vmallocinfo to display mappings KOSAKI Motohiro
  2 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2008-03-18 22:27 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel

[-- Attachment #1: vmalloc_status --]
[-- Type: text/plain, Size: 3806 bytes --]

Implement a new proc file that allows the display of the currently allocated vmalloc
memory.

Signed-off-by: Christoph Lameter <clameter@sgi.com>

---
 fs/proc/proc_misc.c     |   14 ++++++++
 include/linux/vmalloc.h |    2 +
 mm/vmalloc.c            |   76 +++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 91 insertions(+), 1 deletion(-)

Index: linux-2.6.25-rc5-mm1/fs/proc/proc_misc.c
===================================================================
--- linux-2.6.25-rc5-mm1.orig/fs/proc/proc_misc.c	2008-03-17 15:42:00.731811666 -0700
+++ linux-2.6.25-rc5-mm1/fs/proc/proc_misc.c	2008-03-18 12:11:19.104438620 -0700
@@ -456,6 +456,18 @@ static const struct file_operations proc
 #endif
 #endif
 
+static int vmalloc_open(struct inode *inode, struct file *file)
+{
+	return seq_open(file, &vmalloc_op);
+}
+
+static const struct file_operations proc_vmalloc_operations = {
+	.open		= vmalloc_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= seq_release,
+};
+
 static int show_stat(struct seq_file *p, void *v)
 {
 	int i;
@@ -990,6 +1002,8 @@ void __init proc_misc_init(void)
 	proc_create("slab_allocators", 0, NULL, &proc_slabstats_operations);
 #endif
 #endif
+	proc_create("vmallocinfo",S_IWUSR|S_IRUGO, NULL,
+						&proc_vmalloc_operations);
 	proc_create("buddyinfo", S_IRUGO, NULL, &fragmentation_file_operations);
 	proc_create("pagetypeinfo", S_IRUGO, NULL, &pagetypeinfo_file_ops);
 	proc_create("vmstat", S_IRUGO, NULL, &proc_vmstat_file_operations);
Index: linux-2.6.25-rc5-mm1/include/linux/vmalloc.h
===================================================================
--- linux-2.6.25-rc5-mm1.orig/include/linux/vmalloc.h	2008-03-09 22:22:27.000000000 -0700
+++ linux-2.6.25-rc5-mm1/include/linux/vmalloc.h	2008-03-18 12:08:41.507241390 -0700
@@ -87,4 +87,6 @@ extern void free_vm_area(struct vm_struc
 extern rwlock_t vmlist_lock;
 extern struct vm_struct *vmlist;
 
+extern const struct seq_operations vmalloc_op;
+
 #endif /* _LINUX_VMALLOC_H */
Index: linux-2.6.25-rc5-mm1/mm/vmalloc.c
===================================================================
--- linux-2.6.25-rc5-mm1.orig/mm/vmalloc.c	2008-03-09 22:22:27.000000000 -0700
+++ linux-2.6.25-rc5-mm1/mm/vmalloc.c	2008-03-18 12:10:15.995956807 -0700
@@ -14,7 +14,7 @@
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/interrupt.h>
-
+#include <linux/seq_file.h>
 #include <linux/vmalloc.h>
 
 #include <asm/uaccess.h>
@@ -871,3 +871,77 @@ void free_vm_area(struct vm_struct *area
 	kfree(area);
 }
 EXPORT_SYMBOL_GPL(free_vm_area);
+
+
+#ifdef CONFIG_PROC_FS
+static void *s_start(struct seq_file *m, loff_t *pos)
+{
+	loff_t n = *pos;
+	struct vm_struct *v;
+
+	read_lock(&vmlist_lock);
+	v = vmlist;
+	while (n > 0 && v) {
+		n--;
+		v = v->next;
+	}
+	if (!n)
+		return v;
+
+	return NULL;
+
+}
+
+static void *s_next(struct seq_file *m, void *p, loff_t *pos)
+{
+	struct vm_struct *v = p;
+
+	++*pos;
+	return v->next;
+}
+
+static void s_stop(struct seq_file *m, void *p)
+{
+	read_unlock(&vmlist_lock);
+}
+
+static int s_show(struct seq_file *m, void *p)
+{
+	struct vm_struct *v = p;
+
+	seq_printf(m, "0x%p-0x%p %7ld",
+		v->addr, v->addr + v->size, v->size);
+
+	if (v->nr_pages)
+		seq_printf(m, " pages=%d", v->nr_pages);
+
+	if (v->phys_addr)
+		seq_printf(m, " phys=%lx", v->phys_addr);
+
+	if (v->flags & VM_IOREMAP)
+		seq_printf(m, " ioremap");
+
+	if (v->flags & VM_ALLOC)
+		seq_printf(m, " vmalloc");
+
+	if (v->flags & VM_MAP)
+		seq_printf(m, " vmap");
+
+	if (v->flags & VM_USERMAP)
+		seq_printf(m, " user");
+
+	if (v->flags & VM_VPAGES)
+		seq_printf(m, " vpages");
+
+	seq_putc(m, '\n');
+	return 0;
+}
+
+const struct seq_operations vmalloc_op = {
+	.start = s_start,
+	.next = s_next,
+	.stop = s_stop,
+	.show = s_show,
+};
+#endif
+

-- 


* [2/2] vmallocinfo: Add caller information
  2008-03-18 22:27 [0/2] vmalloc: Add /proc/vmallocinfo to display mappings Christoph Lameter
  2008-03-18 22:27 ` [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo Christoph Lameter
@ 2008-03-18 22:27 ` Christoph Lameter
  2008-03-19 21:42   ` Ingo Molnar
  2008-04-29  8:48   ` Ingo Molnar
  2008-03-19  2:23 ` [0/2] vmalloc: Add /proc/vmallocinfo to display mappings KOSAKI Motohiro
  2 siblings, 2 replies; 28+ messages in thread
From: Christoph Lameter @ 2008-03-18 22:27 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel

[-- Attachment #1: trace --]
[-- Type: text/plain, Size: 13351 bytes --]

Add caller information so that /proc/vmallocinfo shows where the allocation
request for a slice of vmalloc memory originated.

This results in output like the following:

0xffffc20000000000-0xffffc20000801000 8392704 alloc_large_system_hash+0x127/0x246 pages=2048 vmalloc vpages
0xffffc20000801000-0xffffc20000806000   20480 alloc_large_system_hash+0x127/0x246 pages=4 vmalloc
0xffffc20000806000-0xffffc20000c07000 4198400 alloc_large_system_hash+0x127/0x246 pages=1024 vmalloc vpages
0xffffc20000c07000-0xffffc20000c0a000   12288 alloc_large_system_hash+0x127/0x246 pages=2 vmalloc
0xffffc20000c0a000-0xffffc20000c0c000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c0c000-0xffffc20000c0f000   12288 acpi_os_map_memory+0x13/0x1c phys=cff64000 ioremap
0xffffc20000c10000-0xffffc20000c15000   20480 acpi_os_map_memory+0x13/0x1c phys=cff65000 ioremap
0xffffc20000c16000-0xffffc20000c18000    8192 acpi_os_map_memory+0x13/0x1c phys=cff69000 ioremap
0xffffc20000c18000-0xffffc20000c1a000    8192 acpi_os_map_memory+0x13/0x1c phys=fed1f000 ioremap
0xffffc20000c1a000-0xffffc20000c1c000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c1c000-0xffffc20000c1e000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c1e000-0xffffc20000c20000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c20000-0xffffc20000c22000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c22000-0xffffc20000c24000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
0xffffc20000c24000-0xffffc20000c26000    8192 acpi_os_map_memory+0x13/0x1c phys=e0081000 ioremap
0xffffc20000c26000-0xffffc20000c28000    8192 acpi_os_map_memory+0x13/0x1c phys=e0080000 ioremap
0xffffc20000c28000-0xffffc20000c2d000   20480 alloc_large_system_hash+0x127/0x246 pages=4 vmalloc
0xffffc20000c2d000-0xffffc20000c31000   16384 tcp_init+0xd5/0x31c pages=3 vmalloc
0xffffc20000c31000-0xffffc20000c34000   12288 alloc_large_system_hash+0x127/0x246 pages=2 vmalloc
0xffffc20000c34000-0xffffc20000c36000    8192 init_vdso_vars+0xde/0x1f1
0xffffc20000c36000-0xffffc20000c38000    8192 pci_iomap+0x8a/0xb4 phys=d8e00000 ioremap
0xffffc20000c38000-0xffffc20000c3a000    8192 usb_hcd_pci_probe+0x139/0x295 [usbcore] phys=d8e00000 ioremap
0xffffc20000c3a000-0xffffc20000c3e000   16384 sys_swapon+0x509/0xa15 pages=3 vmalloc
0xffffc20000c40000-0xffffc20000c61000  135168 e1000_probe+0x1c4/0xa32 phys=d8a20000 ioremap
0xffffc20000c61000-0xffffc20000c6a000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20000c6a000-0xffffc20000c73000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20000c73000-0xffffc20000c7c000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20000c7c000-0xffffc20000c7f000   12288 e1000e_setup_tx_resources+0x29/0xbe pages=2 vmalloc
0xffffc20000c80000-0xffffc20001481000 8392704 pci_mmcfg_arch_init+0x90/0x118 phys=e0000000 ioremap
0xffffc20001481000-0xffffc20001682000 2101248 alloc_large_system_hash+0x127/0x246 pages=512 vmalloc
0xffffc20001682000-0xffffc20001e83000 8392704 alloc_large_system_hash+0x127/0x246 pages=2048 vmalloc vpages
0xffffc20001e83000-0xffffc20002204000 3674112 alloc_large_system_hash+0x127/0x246 pages=896 vmalloc vpages
0xffffc20002204000-0xffffc2000220d000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc2000220d000-0xffffc20002216000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20002216000-0xffffc2000221f000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc2000221f000-0xffffc20002228000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20002228000-0xffffc20002231000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
0xffffc20002231000-0xffffc20002234000   12288 e1000e_setup_rx_resources+0x35/0x122 pages=2 vmalloc
0xffffc20002240000-0xffffc20002261000  135168 e1000_probe+0x1c4/0xa32 phys=d8a60000 ioremap
0xffffc20002261000-0xffffc2000270c000 4894720 sys_swapon+0x509/0xa15 pages=1194 vmalloc vpages
0xffffffffa0000000-0xffffffffa0022000  139264 module_alloc+0x4f/0x55 pages=33 vmalloc
0xffffffffa0022000-0xffffffffa0029000   28672 module_alloc+0x4f/0x55 pages=6 vmalloc
0xffffffffa002b000-0xffffffffa0034000   36864 module_alloc+0x4f/0x55 pages=8 vmalloc
0xffffffffa0034000-0xffffffffa003d000   36864 module_alloc+0x4f/0x55 pages=8 vmalloc
0xffffffffa003d000-0xffffffffa0049000   49152 module_alloc+0x4f/0x55 pages=11 vmalloc
0xffffffffa0049000-0xffffffffa0050000   28672 module_alloc+0x4f/0x55 pages=6 vmalloc

Signed-off-by: Christoph Lameter <clameter@sgi.com>

---
 arch/x86/mm/ioremap.c   |   12 +++++----
 include/linux/vmalloc.h |    3 ++
 mm/vmalloc.c            |   61 +++++++++++++++++++++++++++++++++++-------------
 3 files changed, 55 insertions(+), 21 deletions(-)

Index: linux-2.6.25-rc5-mm1/include/linux/vmalloc.h
===================================================================
--- linux-2.6.25-rc5-mm1.orig/include/linux/vmalloc.h	2008-03-18 12:20:12.283837064 -0700
+++ linux-2.6.25-rc5-mm1/include/linux/vmalloc.h	2008-03-18 12:20:12.295837331 -0700
@@ -31,6 +31,7 @@ struct vm_struct {
 	struct page		**pages;
 	unsigned int		nr_pages;
 	unsigned long		phys_addr;
+	void			*caller;
 };
 
 /*
@@ -66,6 +67,8 @@ static inline size_t get_vm_area_size(co
 }
 
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
+extern struct vm_struct *get_vm_area_caller(unsigned long size,
+					unsigned long flags, void *caller);
 extern struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags,
 					unsigned long start, unsigned long end);
 extern struct vm_struct *get_vm_area_node(unsigned long size,
Index: linux-2.6.25-rc5-mm1/mm/vmalloc.c
===================================================================
--- linux-2.6.25-rc5-mm1.orig/mm/vmalloc.c	2008-03-18 12:20:12.283837064 -0700
+++ linux-2.6.25-rc5-mm1/mm/vmalloc.c	2008-03-18 13:48:56.344025498 -0700
@@ -16,6 +16,7 @@
 #include <linux/interrupt.h>
 #include <linux/seq_file.h>
 #include <linux/vmalloc.h>
+#include <linux/kallsyms.h>
 
 #include <asm/uaccess.h>
 #include <asm/tlbflush.h>
@@ -25,7 +26,7 @@ DEFINE_RWLOCK(vmlist_lock);
 struct vm_struct *vmlist;
 
 static void *__vmalloc_node(unsigned long size, gfp_t gfp_mask, pgprot_t prot,
-			    int node);
+			    int node, void *caller);
 
 static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end)
 {
@@ -206,7 +207,7 @@ EXPORT_SYMBOL(vmalloc_to_pfn);
 
 static struct vm_struct *__get_vm_area_node(unsigned long size, unsigned long flags,
 					    unsigned long start, unsigned long end,
-					    int node, gfp_t gfp_mask)
+					    int node, gfp_t gfp_mask, void *caller)
 {
 	struct vm_struct **p, *tmp, *area;
 	unsigned long align = 1;
@@ -269,6 +270,7 @@ found:
 	area->pages = NULL;
 	area->nr_pages = 0;
 	area->phys_addr = 0;
+	area->caller = caller;
 	write_unlock(&vmlist_lock);
 
 	return area;
@@ -284,7 +286,8 @@ out:
 struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags,
 				unsigned long start, unsigned long end)
 {
-	return __get_vm_area_node(size, flags, start, end, -1, GFP_KERNEL);
+	return __get_vm_area_node(size, flags, start, end, -1, GFP_KERNEL,
+						__builtin_return_address(0));
 }
 EXPORT_SYMBOL_GPL(__get_vm_area);
 
@@ -299,14 +302,22 @@ EXPORT_SYMBOL_GPL(__get_vm_area);
  */
 struct vm_struct *get_vm_area(unsigned long size, unsigned long flags)
 {
-	return __get_vm_area(size, flags, VMALLOC_START, VMALLOC_END);
+	return __get_vm_area_node(size, flags, VMALLOC_START, VMALLOC_END,
+				-1, GFP_KERNEL, __builtin_return_address(0));
+}
+
+struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags,
+				void *caller)
+{
+	return __get_vm_area_node(size, flags, VMALLOC_START, VMALLOC_END,
+						-1, GFP_KERNEL, caller);
 }
 
 struct vm_struct *get_vm_area_node(unsigned long size, unsigned long flags,
 				   int node, gfp_t gfp_mask)
 {
 	return __get_vm_area_node(size, flags, VMALLOC_START, VMALLOC_END, node,
-				  gfp_mask);
+				  gfp_mask, __builtin_return_address(0));
 }
 
 /* Caller must hold vmlist_lock */
@@ -455,9 +466,11 @@ void *vmap(struct page **pages, unsigned
 	if (count > num_physpages)
 		return NULL;
 
-	area = get_vm_area((count << PAGE_SHIFT), flags);
+	area = get_vm_area_caller((count << PAGE_SHIFT), flags,
+					__builtin_return_address(0));
 	if (!area)
 		return NULL;
+
 	if (map_vm_area(area, prot, &pages)) {
 		vunmap(area->addr);
 		return NULL;
@@ -468,7 +481,7 @@ void *vmap(struct page **pages, unsigned
 EXPORT_SYMBOL(vmap);
 
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
-				 pgprot_t prot, int node)
+				 pgprot_t prot, int node, void *caller)
 {
 	struct page **pages;
 	unsigned int nr_pages, array_size, i;
@@ -480,7 +493,7 @@ static void *__vmalloc_area_node(struct 
 	/* Please note that the recursion is strictly bounded. */
 	if (array_size > PAGE_SIZE) {
 		pages = __vmalloc_node(array_size, gfp_mask | __GFP_ZERO,
-					PAGE_KERNEL, node);
+				PAGE_KERNEL, node, caller);
 		area->flags |= VM_VPAGES;
 	} else {
 		pages = kmalloc_node(array_size,
@@ -488,6 +501,7 @@ static void *__vmalloc_area_node(struct 
 				node);
 	}
 	area->pages = pages;
+	area->caller = caller;
 	if (!area->pages) {
 		remove_vm_area(area->addr);
 		kfree(area);
@@ -521,7 +535,8 @@ fail:
 
 void *__vmalloc_area(struct vm_struct *area, gfp_t gfp_mask, pgprot_t prot)
 {
-	return __vmalloc_area_node(area, gfp_mask, prot, -1);
+	return __vmalloc_area_node(area, gfp_mask, prot, -1,
+					__builtin_return_address(0));
 }
 
 /**
@@ -536,7 +551,7 @@ void *__vmalloc_area(struct vm_struct *a
  *	kernel virtual space, using a pagetable protection of @prot.
  */
 static void *__vmalloc_node(unsigned long size, gfp_t gfp_mask, pgprot_t prot,
-			    int node)
+						int node, void *caller)
 {
 	struct vm_struct *area;
 
@@ -544,16 +559,19 @@ static void *__vmalloc_node(unsigned lon
 	if (!size || (size >> PAGE_SHIFT) > num_physpages)
 		return NULL;
 
-	area = get_vm_area_node(size, VM_ALLOC, node, gfp_mask);
+	area = __get_vm_area_node(size, VM_ALLOC, VMALLOC_START, VMALLOC_END,
+						node, gfp_mask, caller);
+
 	if (!area)
 		return NULL;
 
-	return __vmalloc_area_node(area, gfp_mask, prot, node);
+	return __vmalloc_area_node(area, gfp_mask, prot, node, caller);
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
 {
-	return __vmalloc_node(size, gfp_mask, prot, -1);
+	return __vmalloc_node(size, gfp_mask, prot, -1,
+				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(__vmalloc);
 
@@ -568,7 +586,8 @@ EXPORT_SYMBOL(__vmalloc);
  */
 void *vmalloc(unsigned long size)
 {
-	return __vmalloc(size, GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL);
+	return __vmalloc_node(size, GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL,
+					-1, __builtin_return_address(0));
 }
 EXPORT_SYMBOL(vmalloc);
 
@@ -608,7 +627,8 @@ EXPORT_SYMBOL(vmalloc_user);
  */
 void *vmalloc_node(unsigned long size, int node)
 {
-	return __vmalloc_node(size, GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL, node);
+	return __vmalloc_node(size, GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL,
+					node, __builtin_return_address(0));
 }
 EXPORT_SYMBOL(vmalloc_node);
 
@@ -841,7 +861,8 @@ struct vm_struct *alloc_vm_area(size_t s
 {
 	struct vm_struct *area;
 
-	area = get_vm_area(size, VM_IOREMAP);
+	area = get_vm_area_caller(size, VM_IOREMAP,
+				__builtin_return_address(0));
 	if (area == NULL)
 		return NULL;
 
@@ -912,6 +933,14 @@ static int s_show(struct seq_file *m, vo
 	seq_printf(m, "0x%p-0x%p %7ld",
 		v->addr, v->addr + v->size, v->size);
 
+	if (v->caller) {
+		char buff[2 * KSYM_NAME_LEN];
+
+		seq_putc(m, ' ');
+		sprint_symbol(buff, (unsigned long)v->caller);
+		seq_puts(m, buff);
+	}
+
 	if (v->nr_pages)
 		seq_printf(m, " pages=%d", v->nr_pages);
 
Index: linux-2.6.25-rc5-mm1/arch/x86/mm/ioremap.c
===================================================================
--- linux-2.6.25-rc5-mm1.orig/arch/x86/mm/ioremap.c	2008-03-18 12:20:10.803827969 -0700
+++ linux-2.6.25-rc5-mm1/arch/x86/mm/ioremap.c	2008-03-18 12:22:09.744570798 -0700
@@ -118,8 +118,8 @@ static int ioremap_change_attr(unsigned 
  * have to convert them into an offset in a page-aligned mapping, but the
  * caller shouldn't need to know that small detail.
  */
-static void __iomem *__ioremap(unsigned long phys_addr, unsigned long size,
-			       enum ioremap_mode mode)
+static void __iomem *__ioremap_caller(unsigned long phys_addr,
+	unsigned long size, enum ioremap_mode mode, void *caller)
 {
 	unsigned long pfn, offset, last_addr, vaddr;
 	struct vm_struct *area;
@@ -176,7 +176,7 @@ static void __iomem *__ioremap(unsigned 
 	/*
 	 * Ok, go for it..
 	 */
-	area = get_vm_area(size, VM_IOREMAP);
+	area = get_vm_area_caller(size, VM_IOREMAP, caller);
 	if (!area)
 		return NULL;
 	area->phys_addr = phys_addr;
@@ -217,13 +217,15 @@ static void __iomem *__ioremap(unsigned 
  */
 void __iomem *ioremap_nocache(unsigned long phys_addr, unsigned long size)
 {
-	return __ioremap(phys_addr, size, IOR_MODE_UNCACHED);
+	return __ioremap_caller(phys_addr, size, IOR_MODE_UNCACHED,
+						__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_nocache);
 
 void __iomem *ioremap_cache(unsigned long phys_addr, unsigned long size)
 {
-	return __ioremap(phys_addr, size, IOR_MODE_CACHED);
+	return __ioremap_caller(phys_addr, size, IOR_MODE_CACHED,
+						__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_cache);
 

-- 


* Re: [0/2] vmalloc: Add /proc/vmallocinfo to display mappings
  2008-03-18 22:27 [0/2] vmalloc: Add /proc/vmallocinfo to display mappings Christoph Lameter
  2008-03-18 22:27 ` [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo Christoph Lameter
  2008-03-18 22:27 ` [2/2] vmallocinfo: Add caller information Christoph Lameter
@ 2008-03-19  2:23 ` KOSAKI Motohiro
  2008-03-19 22:07   ` Andrew Morton
  2 siblings, 1 reply; 28+ messages in thread
From: KOSAKI Motohiro @ 2008-03-19  2:23 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: kosaki.motohiro, akpm, linux-mm, linux-kernel

Hi

Great. It seems very useful, and I found no bugs.

Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>



> The following two patches implement /proc/vmallocinfo. /proc/vmallocinfo
> displays data about the vmalloc allocations. The second patch introduces
> a tracing feature that allows to display the function that allocated the
> vmalloc area.
> 
> Example:
> 
> cat /proc/vmallocinfo
> 
> 0xffffc20000000000-0xffffc20000801000 8392704 alloc_large_system_hash+0x127/0x246 pages=2048 vmalloc vpages
> 0xffffc20000801000-0xffffc20000806000   20480 alloc_large_system_hash+0x127/0x246 pages=4 vmalloc
> 0xffffc20000806000-0xffffc20000c07000 4198400 alloc_large_system_hash+0x127/0x246 pages=1024 vmalloc vpages
> 0xffffc20000c07000-0xffffc20000c0a000   12288 alloc_large_system_hash+0x127/0x246 pages=2 vmalloc
> 0xffffc20000c0a000-0xffffc20000c0c000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
> 0xffffc20000c0c000-0xffffc20000c0f000   12288 acpi_os_map_memory+0x13/0x1c phys=cff64000 ioremap
> 0xffffc20000c10000-0xffffc20000c15000   20480 acpi_os_map_memory+0x13/0x1c phys=cff65000 ioremap
> 0xffffc20000c16000-0xffffc20000c18000    8192 acpi_os_map_memory+0x13/0x1c phys=cff69000 ioremap
> 0xffffc20000c18000-0xffffc20000c1a000    8192 acpi_os_map_memory+0x13/0x1c phys=fed1f000 ioremap
> 0xffffc20000c1a000-0xffffc20000c1c000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
> 0xffffc20000c1c000-0xffffc20000c1e000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
> 0xffffc20000c1e000-0xffffc20000c20000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
> 0xffffc20000c20000-0xffffc20000c22000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
> 0xffffc20000c22000-0xffffc20000c24000    8192 acpi_os_map_memory+0x13/0x1c phys=cff68000 ioremap
> 0xffffc20000c24000-0xffffc20000c26000    8192 acpi_os_map_memory+0x13/0x1c phys=e0081000 ioremap
> 0xffffc20000c26000-0xffffc20000c28000    8192 acpi_os_map_memory+0x13/0x1c phys=e0080000 ioremap
> 0xffffc20000c28000-0xffffc20000c2d000   20480 alloc_large_system_hash+0x127/0x246 pages=4 vmalloc
> 0xffffc20000c2d000-0xffffc20000c31000   16384 tcp_init+0xd5/0x31c pages=3 vmalloc
> 0xffffc20000c31000-0xffffc20000c34000   12288 alloc_large_system_hash+0x127/0x246 pages=2 vmalloc
> 0xffffc20000c34000-0xffffc20000c36000    8192 init_vdso_vars+0xde/0x1f1
> 0xffffc20000c36000-0xffffc20000c38000    8192 pci_iomap+0x8a/0xb4 phys=d8e00000 ioremap
> 0xffffc20000c38000-0xffffc20000c3a000    8192 usb_hcd_pci_probe+0x139/0x295 [usbcore] phys=d8e00000 ioremap
> 0xffffc20000c3a000-0xffffc20000c3e000   16384 sys_swapon+0x509/0xa15 pages=3 vmalloc
> 0xffffc20000c40000-0xffffc20000c61000  135168 e1000_probe+0x1c4/0xa32 phys=d8a20000 ioremap
> 0xffffc20000c61000-0xffffc20000c6a000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
> 0xffffc20000c6a000-0xffffc20000c73000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
> 0xffffc20000c73000-0xffffc20000c7c000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
> 0xffffc20000c7c000-0xffffc20000c7f000   12288 e1000e_setup_tx_resources+0x29/0xbe pages=2 vmalloc
> 0xffffc20000c80000-0xffffc20001481000 8392704 pci_mmcfg_arch_init+0x90/0x118 phys=e0000000 ioremap
> 0xffffc20001481000-0xffffc20001682000 2101248 alloc_large_system_hash+0x127/0x246 pages=512 vmalloc
> 0xffffc20001682000-0xffffc20001e83000 8392704 alloc_large_system_hash+0x127/0x246 pages=2048 vmalloc vpages
> 0xffffc20001e83000-0xffffc20002204000 3674112 alloc_large_system_hash+0x127/0x246 pages=896 vmalloc vpages
> 0xffffc20002204000-0xffffc2000220d000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
> 0xffffc2000220d000-0xffffc20002216000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
> 0xffffc20002216000-0xffffc2000221f000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
> 0xffffc2000221f000-0xffffc20002228000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
> 0xffffc20002228000-0xffffc20002231000   36864 _xfs_buf_map_pages+0x8e/0xc0 vmap
> 0xffffc20002231000-0xffffc20002234000   12288 e1000e_setup_rx_resources+0x35/0x122 pages=2 vmalloc
> 0xffffc20002240000-0xffffc20002261000  135168 e1000_probe+0x1c4/0xa32 phys=d8a60000 ioremap
> 0xffffc20002261000-0xffffc2000270c000 4894720 sys_swapon+0x509/0xa15 pages=1194 vmalloc vpages
> 0xffffffffa0000000-0xffffffffa0022000  139264 module_alloc+0x4f/0x55 pages=33 vmalloc
> 0xffffffffa0022000-0xffffffffa0029000   28672 module_alloc+0x4f/0x55 pages=6 vmalloc
> 0xffffffffa002b000-0xffffffffa0034000   36864 module_alloc+0x4f/0x55 pages=8 vmalloc
> 0xffffffffa0034000-0xffffffffa003d000   36864 module_alloc+0x4f/0x55 pages=8 vmalloc
> 0xffffffffa003d000-0xffffffffa0049000   49152 module_alloc+0x4f/0x55 pages=11 vmalloc
> 0xffffffffa0049000-0xffffffffa0050000   28672 module_alloc+0x4f/0x55 pages=6 vmalloc




* Re: [2/2] vmallocinfo: Add caller information
  2008-03-18 22:27 ` [2/2] vmallocinfo: Add caller information Christoph Lameter
@ 2008-03-19 21:42   ` Ingo Molnar
  2008-03-20  0:03     ` Christoph Lameter
  2008-04-29  8:48   ` Ingo Molnar
  1 sibling, 1 reply; 28+ messages in thread
From: Ingo Molnar @ 2008-03-19 21:42 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: akpm, linux-mm, linux-kernel


* Christoph Lameter <clameter@sgi.com> wrote:

> Add caller information so that /proc/vmallocinfo shows where the 
> allocation request for a slice of vmalloc memory originated.

please use one simple save_stack_trace() instead of polluting a dozen 
architectures with:

> -	return __ioremap(phys_addr, size, IOR_MODE_UNCACHED);
> +	return __ioremap_caller(phys_addr, size, IOR_MODE_UNCACHED,
> +						__builtin_return_address(0));

	Ingo


* Re: [0/2] vmalloc: Add /proc/vmallocinfo to display mappings
  2008-03-19  2:23 ` [0/2] vmalloc: Add /proc/vmallocinfo to display mappings KOSAKI Motohiro
@ 2008-03-19 22:07   ` Andrew Morton
  2008-03-19 23:33     ` Christoph Lameter
  2008-03-20  7:43     ` KOSAKI Motohiro
  0 siblings, 2 replies; 28+ messages in thread
From: Andrew Morton @ 2008-03-19 22:07 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: clameter, kosaki.motohiro, linux-mm, linux-kernel

On Wed, 19 Mar 2008 11:23:30 +0900
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:
>
> > The following two patches implement /proc/vmallocinfo. /proc/vmallocinfo
> > displays data about the vmalloc allocations. The second patch introduces
> > a tracing feature that allows to display the function that allocated the
> > vmalloc area.
> > 
> > Example:
> > 
> > cat /proc/vmallocinfo

argh, please don't top-post.

(undoes it)

>
> Hi
> 
> Great.
> it seems very useful.
> and, I found no bug.
> 
> Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>


I was just about to ask whether we actually need the feature - I don't
recall ever having needed it, nor do I recall seeing anyone else need it.

Why is it useful?


* Re: [0/2] vmalloc: Add /proc/vmallocinfo to display mappings
  2008-03-19 22:07   ` Andrew Morton
@ 2008-03-19 23:33     ` Christoph Lameter
  2008-03-20  7:43     ` KOSAKI Motohiro
  1 sibling, 0 replies; 28+ messages in thread
From: Christoph Lameter @ 2008-03-19 23:33 UTC (permalink / raw)
  To: Andrew Morton; +Cc: KOSAKI Motohiro, linux-mm, linux-kernel

On Wed, 19 Mar 2008, Andrew Morton wrote:

> I was just about to ask whether we actually need the feature - I don't
> recall ever having needed it, nor do I recall seeing anyone else need it.
> 
> Why is it useful?

It allows us to see the users of vmalloc. That is important if vmalloc space
is scarce (on i386, for example).

It is also going to be important for the compound page fallback to vmalloc.
Many of the current users can be switched to compound pages with fallback,
which reduces the number of vmalloc users and means page tables are no
longer necessary to access the memory. /proc/vmallocinfo allows us to
review how that reduction occurs.

If memory becomes fragmented and larger-order allocations are no longer
possible, then /proc/vmallocinfo lets us see which compound page
allocations fell back to virtual compound pages. That is important for new
users of virtual compound pages, such as order-1 stack allocations and the
like that may fall back to virtual compound pages in the future.



* Re: [2/2] vmallocinfo: Add caller information
  2008-03-19 21:42   ` Ingo Molnar
@ 2008-03-20  0:03     ` Christoph Lameter
  2008-03-21 11:00       ` Ingo Molnar
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2008-03-20  0:03 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: akpm, linux-mm, linux-kernel

On Wed, 19 Mar 2008, Ingo Molnar wrote:

> 
> * Christoph Lameter <clameter@sgi.com> wrote:
> 
> > Add caller information so that /proc/vmallocinfo shows where the 
> > allocation request for a slice of vmalloc memory originated.
> 
> please use one simple save_stack_trace() instead of polluting a dozen 
> architectures with:

save_stack_trace() depends on CONFIG_STACKTRACE, which is only available
when debugging support is compiled in. I was thinking of this more as a
generally available feature.




* Re: [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo
  2008-03-18 22:27 ` [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo Christoph Lameter
@ 2008-03-20  4:04   ` Arjan van de Ven
  2008-03-20 19:22     ` Christoph Lameter
  0 siblings, 1 reply; 28+ messages in thread
From: Arjan van de Ven @ 2008-03-20  4:04 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: akpm, linux-mm, linux-kernel

On Tue, 18 Mar 2008 15:27:02 -0700
Christoph Lameter <clameter@sgi.com> wrote:

> Implement a new proc file that allows the display of the currently
> allocated vmalloc memory.

> +	proc_create("vmallocinfo",S_IWUSR|S_IRUGO, NULL,


why should non-root be able to read this? sounds like a security issue (info leak) to me...




-- 
If you want to reach me at my work email, use arjan@linux.intel.com
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org


* Re: [0/2] vmalloc: Add /proc/vmallocinfo to display mappings
  2008-03-19 22:07   ` Andrew Morton
  2008-03-19 23:33     ` Christoph Lameter
@ 2008-03-20  7:43     ` KOSAKI Motohiro
  1 sibling, 0 replies; 28+ messages in thread
From: KOSAKI Motohiro @ 2008-03-20  7:43 UTC (permalink / raw)
  To: Andrew Morton; +Cc: clameter, linux-mm, linux-kernel

Hi Andrew,

>  > > Example:
>  > >
>  > > cat /proc/vmallocinfo
>
>  argh, please don't top-post.
>
>  (undoes it)

sorry, I won't do that next time.

>  > Great.
>  > it seems very useful.
>  > and, I found no bug.
>  >
>  > Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
>
>  I was just about to ask whether we actually need the feature - I don't
>  recall ever having needed it, nor do I recall seeing anyone else need it.
>
>  Why is it useful?

to be honest, I thought this was merely a good debug feature,
but Christoph-san has already explained that it is useful for more than that.

Thanks.


* Re: [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo
  2008-03-20  4:04   ` Arjan van de Ven
@ 2008-03-20 19:22     ` Christoph Lameter
  2008-03-21 22:19       ` Andrew Morton
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2008-03-20 19:22 UTC (permalink / raw)
  To: Arjan van de Ven; +Cc: akpm, linux-mm, linux-kernel

On Wed, 19 Mar 2008, Arjan van de Ven wrote:

> > +	proc_create("vmallocinfo",S_IWUSR|S_IRUGO, NULL,
> why should non-root be able to read this? sounds like a security issue (info leak) to me...

Well, I copied this from the slabinfo logic (is leaking info for slabs okay?).

Let's restrict it to root, then:



Subject: vmallocinfo: Only allow root to read /proc/vmallocinfo

Change permissions for /proc/vmallocinfo to only allow read
for root.

Signed-off-by: Christoph Lameter <clameter@sgi.com>

---
 fs/proc/proc_misc.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Index: linux-2.6.25-rc5-mm1/fs/proc/proc_misc.c
===================================================================
--- linux-2.6.25-rc5-mm1.orig/fs/proc/proc_misc.c	2008-03-20 12:14:20.215358835 -0700
+++ linux-2.6.25-rc5-mm1/fs/proc/proc_misc.c	2008-03-20 12:23:01.920887750 -0700
@@ -1002,8 +1002,7 @@ void __init proc_misc_init(void)
 	proc_create("slab_allocators", 0, NULL, &proc_slabstats_operations);
 #endif
 #endif
-	proc_create("vmallocinfo",S_IWUSR|S_IRUGO, NULL,
-						&proc_vmalloc_operations);
+	proc_create("vmallocinfo",S_IRUSR, NULL, &proc_vmalloc_operations);
 	proc_create("buddyinfo", S_IRUGO, NULL, &fragmentation_file_operations);
 	proc_create("pagetypeinfo", S_IRUGO, NULL, &pagetypeinfo_file_ops);
 	proc_create("vmstat", S_IRUGO, NULL, &proc_vmstat_file_operations);


* Re: [2/2] vmallocinfo: Add caller information
  2008-03-20  0:03     ` Christoph Lameter
@ 2008-03-21 11:00       ` Ingo Molnar
  2008-03-21 17:35         ` Christoph Lameter
  0 siblings, 1 reply; 28+ messages in thread
From: Ingo Molnar @ 2008-03-21 11:00 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: akpm, linux-mm, linux-kernel


* Christoph Lameter <clameter@sgi.com> wrote:

> On Wed, 19 Mar 2008, Ingo Molnar wrote:
> 
> > 
> > * Christoph Lameter <clameter@sgi.com> wrote:
> > 
> > > Add caller information so that /proc/vmallocinfo shows where the 
> > > allocation request for a slice of vmalloc memory originated.
> > 
> > please use one simple save_stack_trace() instead of polluting a dozen 
> > architectures with:
> 
> save_stack_trace() depends on CONFIG_STACKTRACE which is only 
> available when debugging is compiled it. I was more thinking about 
> this as a generally available feature.

then make STACKTRACE available generally via the patch below.

	Ingo

------------------------------------------->
Subject: debugging: always enable stacktrace
From: Ingo Molnar <mingo@elte.hu>
Date: Fri Mar 21 11:48:32 CET 2008

Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 lib/Kconfig.debug |    1 -
 1 file changed, 1 deletion(-)

Index: linux-x86.q/lib/Kconfig.debug
===================================================================
--- linux-x86.q.orig/lib/Kconfig.debug
+++ linux-x86.q/lib/Kconfig.debug
@@ -387,7 +387,6 @@ config DEBUG_LOCKING_API_SELFTESTS
 
 config STACKTRACE
 	bool
-	depends on DEBUG_KERNEL
 	depends on STACKTRACE_SUPPORT
 
 config DEBUG_KOBJECT


* Re: [2/2] vmallocinfo: Add caller information
  2008-03-21 11:00       ` Ingo Molnar
@ 2008-03-21 17:35         ` Christoph Lameter
  2008-03-21 18:45           ` Ingo Molnar
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2008-03-21 17:35 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: akpm, linux-mm, linux-kernel

On Fri, 21 Mar 2008, Ingo Molnar wrote:

> then make STACKTRACE available generally via the patch below.

How do I figure out which nesting level to display if we'd do this?


* Re: [2/2] vmallocinfo: Add caller information
  2008-03-21 17:35         ` Christoph Lameter
@ 2008-03-21 18:45           ` Ingo Molnar
  2008-03-21 19:16             ` Christoph Lameter
  0 siblings, 1 reply; 28+ messages in thread
From: Ingo Molnar @ 2008-03-21 18:45 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: akpm, linux-mm, linux-kernel


* Christoph Lameter <clameter@sgi.com> wrote:

> On Fri, 21 Mar 2008, Ingo Molnar wrote:
> 
> > then make STACKTRACE available generally via the patch below.
> 
> How do I figure out which nesting level to display if we'd do this?

the best i found for lockdep was to include a fair number of them, and
to skip the top 3. the struct vm_struct that vmalloc uses isn't
space-critical, so 4-8 entries with a skip of 3 would be quite ok. (but it
can be more than that as well)
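
As a minimal sketch of that approach (illustrative only; the trace_entries[]
array is a hypothetical field that would have to be added to struct
vm_struct, it is not part of the posted patches):

#include <linux/stacktrace.h>

#define VMALLOC_TRACE_DEPTH	8

static void vmalloc_save_trace(struct vm_struct *area)
{
	struct stack_trace trace = {
		.nr_entries	= 0,
		.max_entries	= VMALLOC_TRACE_DEPTH,
		.entries	= area->trace_entries,
		.skip		= 3,	/* drop the vmalloc-internal frames */
	};

	save_stack_trace(&trace);
}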

	Ingo


* Re: [2/2] vmallocinfo: Add caller information
  2008-03-21 18:45           ` Ingo Molnar
@ 2008-03-21 19:16             ` Christoph Lameter
  2008-03-21 20:55               ` Ingo Molnar
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2008-03-21 19:16 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: akpm, linux-mm, linux-kernel

On Fri, 21 Mar 2008, Ingo Molnar wrote:

> the best i found for lockdep was to include a fair number of them, and 
> to skip the top 3. struct vm_area that vmalloc uses isnt space-critical, 
> so 4-8 entries with a 3 skip would be quite ok. (but can be more than 
> that as well)

STACKTRACE depends on STACKTRACE_SUPPORT, which is not available on
all arches? alpha, blackfin, ia64 etc. are missing it?

I thought there were also issues on x86 with optimizations leading to 
weird stacktraces?




* Re: [2/2] vmallocinfo: Add caller information
  2008-03-21 19:16             ` Christoph Lameter
@ 2008-03-21 20:55               ` Ingo Molnar
  2008-03-22  2:40                 ` Mike Frysinger
  0 siblings, 1 reply; 28+ messages in thread
From: Ingo Molnar @ 2008-03-21 20:55 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: akpm, linux-mm, linux-kernel


* Christoph Lameter <clameter@sgi.com> wrote:

> > the best i found for lockdep was to include a fair number of them, 
> > and to skip the top 3. struct vm_area that vmalloc uses isnt 
> > space-critical, so 4-8 entries with a 3 skip would be quite ok. (but 
> > can be more than that as well)
> 
> STACKTRACE depends on STACKTRACE_SUPPORT which is not available on all 
> arches? alpha blackfin ia64 etc are missing it?

one more reason for them to implement it.

> I thought there were also issues on x86 with optimizations leading to 
> weird stacktraces?

at most there can be extra stack trace entries. This is for debugging, 
so if someone wants exact stacktraces, FRAME_POINTERS will certainly 
improve them.

	Ingo


* Re: [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo
  2008-03-21 22:19       ` Andrew Morton
@ 2008-03-21 22:09         ` Alan Cox
  0 siblings, 0 replies; 28+ messages in thread
From: Alan Cox @ 2008-03-21 22:09 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Christoph Lameter, arjan, linux-mm, linux-kernel

> That makes the feature somewhat less useful.  Let's think this through more
> carefully - it is, after all, an unrevokable, unalterable addition to the
> kernel ABI.

Which means it does not belong in /proc.

Alan


* Re: [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo
  2008-03-20 19:22     ` Christoph Lameter
@ 2008-03-21 22:19       ` Andrew Morton
  2008-03-21 22:09         ` Alan Cox
  0 siblings, 1 reply; 28+ messages in thread
From: Andrew Morton @ 2008-03-21 22:19 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: arjan, linux-mm, linux-kernel

On Thu, 20 Mar 2008 12:22:07 -0700 (PDT)
Christoph Lameter <clameter@sgi.com> wrote:

> On Wed, 19 Mar 2008, Arjan van de Ven wrote:
> 
> > > +	proc_create("vmallocinfo",S_IWUSR|S_IRUGO, NULL,
> > why should non-root be able to read this? sounds like a security issue (info leak) to me...

What is the security concern here?  This objection is rather vague.

> Well I copied from the slabinfo logic (leaking info for slabs is okay?).
> 
> Lets restrict it to root then:
> 
> 
> 
> Subject: vmallocinfo: Only allow root to read /proc/vmallocinfo
> 
> Change permissions for /proc/vmallocinfo to only allow read
> for root.

That makes the feature somewhat less useful.  Let's think this through more
carefully - it is, after all, an unrevokable, unalterable addition to the
kernel ABI.

Arjan, what scenarios are you thinking about?


* Re: [2/2] vmallocinfo: Add caller information
  2008-03-21 20:55               ` Ingo Molnar
@ 2008-03-22  2:40                 ` Mike Frysinger
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Frysinger @ 2008-03-22  2:40 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Christoph Lameter, akpm, linux-mm, linux-kernel

On Fri, Mar 21, 2008 at 4:55 PM, Ingo Molnar <mingo@elte.hu> wrote:
>  * Christoph Lameter <clameter@sgi.com> wrote:
>  > > the best i found for lockdep was to include a fair number of them,
>  > > and to skip the top 3. struct vm_area that vmalloc uses isnt
>  > > space-critical, so 4-8 entries with a 3 skip would be quite ok. (but
>  > > can be more than that as well)
>  >
>  > STACKTRACE depends on STACKTRACE_SUPPORT which is not available on all
>  > arches? alpha blackfin ia64 etc are missing it?
>
>  one more reason for them to implement it.

as long as the new code in question is properly ifdef-ed, making it
rely on STACKTRACE sounds fine.  i'll open an item in our Blackfin
tracker to add support for it.
-mike
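
For reference, a rough sketch of what an architecture has to provide for
CONFIG_STACKTRACE (illustrative only: it assumes a conventional
frame-pointer layout and omits the stack-bounds checking a real
implementation needs; real versions live in files such as
arch/x86/kernel/stacktrace.c):

#include <linux/kernel.h>
#include <linux/stacktrace.h>

void save_stack_trace(struct stack_trace *trace)
{
	unsigned long *fp = __builtin_frame_address(0);
	unsigned int skip = trace->skip;

	while (fp && trace->nr_entries < trace->max_entries) {
		/* assumed layout: fp[0] = saved fp, fp[1] = return address */
		unsigned long ret = fp[1];

		if (!kernel_text_address(ret))
			break;
		if (skip)
			skip--;
		else
			trace->entries[trace->nr_entries++] = ret;
		/* a real implementation must also verify that fp stays
		   within the current task's stack */
		fp = (unsigned long *)fp[0];
	}
}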


* Re: [2/2] vmallocinfo: Add caller information
  2008-04-29 17:08     ` Christoph Lameter
@ 2008-04-28 19:48       ` Arjan van de Ven
  2008-04-29 18:49         ` Christoph Lameter
  0 siblings, 1 reply; 28+ messages in thread
From: Arjan van de Ven @ 2008-04-28 19:48 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Ingo Molnar, akpm, linux-mm, linux-kernel, Linus Torvalds,
	Peter Zijlstra

On Tue, 29 Apr 2008 10:08:29 -0700 (PDT)
Christoph Lameter <clameter@sgi.com> wrote:

> On Tue, 29 Apr 2008, Ingo Molnar wrote:
> 
> > i pointed out how it should be done _much cleaner_ (and much
> > smaller - only a single patch needed) via stack-trace, without
> > changing a dozen architectures, and even gave a patch to make it
> > all easier for you:
> > 
> >     http://lkml.org/lkml/2008/3/19/568
> >     http://lkml.org/lkml/2008/3/21/88
> > 
> > in fact, a stacktrace printout is much more informative as well to 
> > users, than a punny __builtin_return_address(0)!
> 
> Sorry lost track of this issue. Adding stracktrace support is not a 
> trivial thing and will change the basic handling of vmallocinfo.
> 
> Not sure if stacktrace support can be enabled without a penalty on
> various platforms. Doesnt this require stackframes to be formatted in
> a certain way?

it doesn't.


* Re: [2/2] vmallocinfo: Add caller information
  2008-04-29 18:49         ` Christoph Lameter
@ 2008-04-28 21:00           ` Arjan van de Ven
  2008-04-29 19:09             ` Christoph Lameter
  0 siblings, 1 reply; 28+ messages in thread
From: Arjan van de Ven @ 2008-04-28 21:00 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Ingo Molnar, akpm, linux-mm, linux-kernel, Linus Torvalds,
	Peter Zijlstra

On Tue, 29 Apr 2008 11:49:54 -0700 (PDT)
Christoph Lameter <clameter@sgi.com> wrote:

> On Mon, 28 Apr 2008, Arjan van de Ven wrote:
> 
> > > Sorry lost track of this issue. Adding stracktrace support is not
> > > a trivial thing and will change the basic handling of vmallocinfo.
> > > 
> > > Not sure if stacktrace support can be enabled without a penalty on
> > > various platforms. Doesnt this require stackframes to be
> > > formatted in a certain way?
> > 
> > it doesn't.
> 
> Hmmm... Why do we have CONFIG_FRAMEPOINTER then?

to make the backtraces more accurate.


> The current implementation of vmalloc_caller() follows what we have
> done with kmalloc_track_caller. Its low overhead and always on.

stacktraces aren't entirely free, the cost is O(nr of modules) unfortunately ;(


* Re: [2/2] vmallocinfo: Add caller information
  2008-03-18 22:27 ` [2/2] vmallocinfo: Add caller information Christoph Lameter
  2008-03-19 21:42   ` Ingo Molnar
@ 2008-04-29  8:48   ` Ingo Molnar
  2008-04-29 17:08     ` Christoph Lameter
  1 sibling, 1 reply; 28+ messages in thread
From: Ingo Molnar @ 2008-04-29  8:48 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: akpm, linux-mm, linux-kernel, Linus Torvalds, Peter Zijlstra,
	Arjan van de Ven


* Christoph Lameter <clameter@sgi.com> wrote:

> Add caller information so that /proc/vmallocinfo shows where the 
> allocation request for a slice of vmalloc memory originated.

i _specifically_ objected to the uglification that this patch brings
to the modified arch/x86 files (see the diff excerpt below), in:

    http://lkml.org/lkml/2008/3/19/450

i pointed out how it should be done _much cleaner_ (and much smaller - 
only a single patch needed) via stack-trace, without changing a dozen 
architectures, and even gave a patch to make it all easier for you:

    http://lkml.org/lkml/2008/3/19/568
    http://lkml.org/lkml/2008/3/21/88

in fact, a stacktrace printout is also much more informative to users
than a puny __builtin_return_address(0)!

but you did not reply to my objections in substance, hence i considered 
the issue closed - but you apparently went ahead without addressing my 
concerns (which are rather obvious to anyone doing debug code) and now 
this ugly code is upstream.

If lockdep can get stacktrace samples from all around the kernel without 
adding "caller" info parameters to widely used APIs, then the MM is 
evidently able to do it too. _Saving_ a stacktrace is relatively fast 
[printing it to the console is what is slow], and vmalloc() is an utter 
slowpath anyway [and 1 million file descriptors does not count as a 
fastpath].

If performance is of any concern then make it dependent on 
CONFIG_DEBUG_VM or whatever debug switch in the MM - that will be 
_faster_ in the default case than the current 
pass-parameter-deep-down-the-arch crap you've pushed here. I don't
remember the last time i genuinely needed the allocation site of a
vmalloc().

In any case, do _NOT_ pollute any architectures with stack debugging
hacks (and that holds for future similar patches too); that's why we
wrote stacktrace. This needs to be reverted or fixed properly.

	Ingo

> Index: linux-2.6.25-rc5-mm1/arch/x86/mm/ioremap.c
> ===================================================================
> --- linux-2.6.25-rc5-mm1.orig/arch/x86/mm/ioremap.c	2008-03-18 12:20:10.803827969 -0700
> +++ linux-2.6.25-rc5-mm1/arch/x86/mm/ioremap.c	2008-03-18 12:22:09.744570798 -0700
> @@ -118,8 +118,8 @@ static int ioremap_change_attr(unsigned 
>   * have to convert them into an offset in a page-aligned mapping, but the
>   * caller shouldn't need to know that small detail.
>   */
> -static void __iomem *__ioremap(unsigned long phys_addr, unsigned long size,
> -			       enum ioremap_mode mode)
> +static void __iomem *__ioremap_caller(unsigned long phys_addr,
> +	unsigned long size, enum ioremap_mode mode, void *caller)
>  {
>  	unsigned long pfn, offset, last_addr, vaddr;
>  	struct vm_struct *area;
> @@ -176,7 +176,7 @@ static void __iomem *__ioremap(unsigned 
>  	/*
>  	 * Ok, go for it..
>  	 */
> -	area = get_vm_area(size, VM_IOREMAP);
> +	area = get_vm_area_caller(size, VM_IOREMAP, caller);
>  	if (!area)
>  		return NULL;
>  	area->phys_addr = phys_addr;
> @@ -217,13 +217,15 @@ static void __iomem *__ioremap(unsigned 
>   */
>  void __iomem *ioremap_nocache(unsigned long phys_addr, unsigned long size)
>  {
> -	return __ioremap(phys_addr, size, IOR_MODE_UNCACHED);
> +	return __ioremap_caller(phys_addr, size, IOR_MODE_UNCACHED,
> +						__builtin_return_address(0));
>  }
>  EXPORT_SYMBOL(ioremap_nocache);
>  
>  void __iomem *ioremap_cache(unsigned long phys_addr, unsigned long size)
>  {
> -	return __ioremap(phys_addr, size, IOR_MODE_CACHED);
> +	return __ioremap_caller(phys_addr, size, IOR_MODE_CACHED,
> +						__builtin_return_address(0));
>  }
>  EXPORT_SYMBOL(ioremap_cache);



* Re: [2/2] vmallocinfo: Add caller information
  2008-04-29  8:48   ` Ingo Molnar
@ 2008-04-29 17:08     ` Christoph Lameter
  2008-04-28 19:48       ` Arjan van de Ven
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2008-04-29 17:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: akpm, linux-mm, linux-kernel, Linus Torvalds, Peter Zijlstra,
	Arjan van de Ven

On Tue, 29 Apr 2008, Ingo Molnar wrote:

> i pointed out how it should be done _much cleaner_ (and much smaller - 
> only a single patch needed) via stack-trace, without changing a dozen 
> architectures, and even gave a patch to make it all easier for you:
> 
>     http://lkml.org/lkml/2008/3/19/568
>     http://lkml.org/lkml/2008/3/21/88
> 
> in fact, a stacktrace printout is much more informative as well to 
> users, than a punny __builtin_return_address(0)!

Sorry, I lost track of this issue. Adding stacktrace support is not a
trivial thing and will change the basic handling of vmallocinfo.

I am not sure stacktrace support can be enabled without a penalty on various
platforms. Doesn't this require stack frames to be formatted in a certain
way?



* Re: [2/2] vmallocinfo: Add caller information
  2008-04-28 19:48       ` Arjan van de Ven
@ 2008-04-29 18:49         ` Christoph Lameter
  2008-04-28 21:00           ` Arjan van de Ven
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2008-04-29 18:49 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Ingo Molnar, akpm, linux-mm, linux-kernel, Linus Torvalds,
	Peter Zijlstra

On Mon, 28 Apr 2008, Arjan van de Ven wrote:

> > Sorry lost track of this issue. Adding stracktrace support is not a 
> > trivial thing and will change the basic handling of vmallocinfo.
> > 
> > Not sure if stacktrace support can be enabled without a penalty on
> > various platforms. Doesnt this require stackframes to be formatted in
> > a certain way?
> 
> it doesn't.

Hmmm... Why do we have CONFIG_FRAMEPOINTER then?

The current implementation of vmalloc_caller() follows what we have done
with kmalloc_track_caller. It is low overhead and always on.
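
(For reference, the kmalloc_track_caller() pattern being compared to looks
roughly like the following; a sketch of the technique, not the exact
<linux/slab.h> code. The exported macro records its own return address and
hands it down, so the cost is one extra argument and no stack unwinding:)

extern void *__kmalloc_track_caller(size_t size, gfp_t flags, void *caller);

#define kmalloc_track_caller(size, flags) \
	__kmalloc_track_caller(size, flags, __builtin_return_address(0))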

It would be great if we could have stacktrace support both for kmalloc and 
vmalloc in the same way also with low overhead but I think following a 
backtrace requires much more than simply storing the caller address. A 
mechanism like that would require an explicit kernel CONFIG option. A 
year or so ago we had patches to implement stacktraces in the slab 
allocators but they were not merged due to various arch specific issues 
with backtraces.

We could drop the offending x86_64 pieces. Some of the detail that
/proc/vmallocinfo shows would be lost then.


* Re: [2/2] vmallocinfo: Add caller information
  2008-04-28 21:00           ` Arjan van de Ven
@ 2008-04-29 19:09             ` Christoph Lameter
  2008-04-29 19:23               ` Pekka Enberg
  2008-04-29 19:29               ` Ingo Molnar
  0 siblings, 2 replies; 28+ messages in thread
From: Christoph Lameter @ 2008-04-29 19:09 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Ingo Molnar, akpm, linux-mm, linux-kernel, Linus Torvalds,
	Peter Zijlstra

On Mon, 28 Apr 2008, Arjan van de Ven wrote:

> > Hmmm... Why do we have CONFIG_FRAMEPOINTER then?
> 
> to make the backtraces more accurate.

Well, so we display out-of-whack backtraces? There are also issues on
platforms that do not have a stack in the classic sense (the rotating
register file on IA64 and Sparc64, for example). Determining a backtrace
can be very expensive.

> > The current implementation of vmalloc_caller() follows what we have
> > done with kmalloc_track_caller. Its low overhead and always on.
> 
> stacktraces aren't entirely free, the cost is O(nr of modules) unfortunately ;(

The current implementation of /proc/vmallocinfo avoids these issues, and
with just one caller address it can print one line per vmalloc request.


* Re: [2/2] vmallocinfo: Add caller information
  2008-04-29 19:09             ` Christoph Lameter
@ 2008-04-29 19:23               ` Pekka Enberg
  2008-04-29 19:29                 ` Pekka Enberg
  2008-04-29 19:29               ` Ingo Molnar
  1 sibling, 1 reply; 28+ messages in thread
From: Pekka Enberg @ 2008-04-29 19:23 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Arjan van de Ven, Ingo Molnar, akpm, linux-mm, linux-kernel,
	Linus Torvalds, Peter Zijlstra

On Tue, Apr 29, 2008 at 10:09 PM, Christoph Lameter <clameter@sgi.com> wrote:
>  Well so we display out of whack backtraces? There are also issues on
>  platforms that do not have a stack in the classic sense (rotating register
>  file on IA64 and Sparc64 f.e.). Determining a backtrace can be very
>  expensive.

I think that's the key question here: do we need to enable this on
production systems? If yes, why? If it's just a debugging aid, then I
see Ingo's point about save_stack_trace(); otherwise the low-overhead
__builtin_return_address() makes more sense.

And btw, why is this new file not in /sys/kernel....?


* Re: [2/2] vmallocinfo: Add caller information
  2008-04-29 19:23               ` Pekka Enberg
@ 2008-04-29 19:29                 ` Pekka Enberg
  0 siblings, 0 replies; 28+ messages in thread
From: Pekka Enberg @ 2008-04-29 19:29 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Arjan van de Ven, Ingo Molnar, akpm, linux-mm, linux-kernel,
	Linus Torvalds, Peter Zijlstra

On Tue, Apr 29, 2008 at 10:09 PM, Christoph Lameter <clameter@sgi.com> wrote:
>  >  Well so we display out of whack backtraces? There are also issues on
>  >  platforms that do not have a stack in the classic sense (rotating register
>  >  file on IA64 and Sparc64 f.e.). Determining a backtrace can be very
>  >  expensive.

On Tue, Apr 29, 2008 at 10:23 PM, Pekka Enberg <penberg@cs.helsinki.fi> wrote:
>  I think that's the key question here whether we need to enable this on
>  production systems? If yes, why? If it's just a debugging aid, then I
>  see Ingo's point of save_stack_trace(); otherwise the low-overhead
>  __builtin_return_address() makes more sense.

Actually, this is vmalloc() so why do we even care? If there are
callers in the tree that use vmalloc() for performance sensitive
stuff, they ought to be converted to kmalloc() anyway, no?


* Re: [2/2] vmallocinfo: Add caller information
  2008-04-29 19:09             ` Christoph Lameter
  2008-04-29 19:23               ` Pekka Enberg
@ 2008-04-29 19:29               ` Ingo Molnar
  1 sibling, 0 replies; 28+ messages in thread
From: Ingo Molnar @ 2008-04-29 19:29 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Arjan van de Ven, akpm, linux-mm, linux-kernel, Linus Torvalds,
	Peter Zijlstra


* Christoph Lameter <clameter@sgi.com> wrote:

> On Mon, 28 Apr 2008, Arjan van de Ven wrote:
> 
> > > Hmmm... Why do we have CONFIG_FRAMEPOINTER then?
> > 
> > to make the backtraces more accurate.
> 
> Well so we display out of whack backtraces? There are also issues on 
> platforms that do not have a stack in the classic sense (rotating 
> register file on IA64 and Sparc64 f.e.). Determining a backtrace can 
> be very expensive.

they have to solve that for kernel oopses and for lockdep somehow 
anyway. Other users of stacktrace are: fault injection, kmemcheck, 
latencytop, ftrace. All new debugging and instrumentation code uses it, 
and for a good reason.

	Ingo


end of thread, other threads:[~2008-04-29 19:31 UTC | newest]

Thread overview: 28+ messages
2008-03-18 22:27 [0/2] vmalloc: Add /proc/vmallocinfo to display mappings Christoph Lameter
2008-03-18 22:27 ` [1/2] vmalloc: Show vmalloced areas via /proc/vmallocinfo Christoph Lameter
2008-03-20  4:04   ` Arjan van de Ven
2008-03-20 19:22     ` Christoph Lameter
2008-03-21 22:19       ` Andrew Morton
2008-03-21 22:09         ` Alan Cox
2008-03-18 22:27 ` [2/2] vmallocinfo: Add caller information Christoph Lameter
2008-03-19 21:42   ` Ingo Molnar
2008-03-20  0:03     ` Christoph Lameter
2008-03-21 11:00       ` Ingo Molnar
2008-03-21 17:35         ` Christoph Lameter
2008-03-21 18:45           ` Ingo Molnar
2008-03-21 19:16             ` Christoph Lameter
2008-03-21 20:55               ` Ingo Molnar
2008-03-22  2:40                 ` Mike Frysinger
2008-04-29  8:48   ` Ingo Molnar
2008-04-29 17:08     ` Christoph Lameter
2008-04-28 19:48       ` Arjan van de Ven
2008-04-29 18:49         ` Christoph Lameter
2008-04-28 21:00           ` Arjan van de Ven
2008-04-29 19:09             ` Christoph Lameter
2008-04-29 19:23               ` Pekka Enberg
2008-04-29 19:29                 ` Pekka Enberg
2008-04-29 19:29               ` Ingo Molnar
2008-03-19  2:23 ` [0/2] vmalloc: Add /proc/vmallocinfo to display mappings KOSAKI Motohiro
2008-03-19 22:07   ` Andrew Morton
2008-03-19 23:33     ` Christoph Lameter
2008-03-20  7:43     ` KOSAKI Motohiro
