LKML Archive on lore.kernel.org
* high system cpu load during intense disk i/o
@ 2007-08-03 16:03 Dimitrios Apostolou
2007-08-05 16:03 ` Dimitrios Apostolou
2007-08-07 14:50 ` Dimitrios Apostolou
0 siblings, 2 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-03 16:03 UTC (permalink / raw)
To: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 2641 bytes --]
Hello list,
I have a P3, 256MB RAM system with 3 IDE disks attached: two identical
ones as hda and hdc (primary and secondary master), and the disk with
the OS partitions as primary slave hdb. For more info please refer to
the attached dmesg.txt. I attach several oprofile outputs that describe
various circumstances referenced later. The script I used to get them is
the attached script.sh.
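(The attachment itself is not reproduced inline, but its shape can be inferred from the `set -x` traces in the profiling outputs below. The following is only a reconstruction from those traces; the `run_profile` wrapper and the availability guard are illustrative additions, not part of the original attachment.)

```shell
#!/bin/sh
# Reconstruction of script.sh, inferred from the "+ command" traces
# in the attached oprofile outputs; paths are the ones shown there.
run_profile() {
    date
    set -x
    opcontrol --vmlinux=/usr/src/linux-2.6.22-ARCH/vmlinux
    opcontrol --start
    sleep 5
    opcontrol --shutdown
    echo; echo; echo
    opreport                                          # per-binary summary
    echo; echo; echo
    opreport -l /usr/src/linux-2.6.22-ARCH/vmlinux    # per-symbol breakdown
}

# Guard (an addition for safety): only run where oprofile is installed.
if command -v opcontrol >/dev/null 2>&1; then
    run_profile
else
    echo "opcontrol not installed; skipping"
fi
```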
The problem appeared when I started two processes doing heavy I/O
on hda and hdc: "badblocks -v -w /dev/hda" and "badblocks -v -w
/dev/hdc". At the beginning (two_discs.txt) everything was fine and
vmstat reported more than 90% iowait CPU load. However, after a while
(two_discs_bad.txt), when some cron jobs kicked in, the picture changed
completely: the CPU load was now about 60% system, with the rest being
user CPU load, presumably from the simple cron jobs.
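(For reference, the system/iowait split that vmstat reports can also be read straight from /proc/stat; a minimal sketch, which computes since-boot averages rather than the per-interval figures vmstat prints:)

```shell
#!/bin/sh
# Read the aggregate CPU line from /proc/stat. Fields after the "cpu"
# label are: user nice system idle iowait irq softirq (in jiffies).
cpu_line=$(head -1 /proc/stat)
set -- $cpu_line
shift   # drop the "cpu" label

# Total over the first seven fields is a close-enough denominator here.
total=$(( $1 + $2 + $3 + $4 + $5 + $6 + $7 ))
sys_pct=$(( 100 * $3 / total ))
wait_pct=$(( 100 * $5 / total ))
echo "system=${sys_pct}% iowait=${wait_pct}%"
```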
Even though under normal circumstances (for example when running
badblocks on only one disk (one_disc.txt)) the cron jobs finish almost
instantaneously, this time they simply never ended, and every 10
minutes or so more jobs were added to the process table.
One day later vmstat still reported about 60/40 system/user CPU load,
all those processes were still running (hundreds of them), and the load
average was huge! Another day later the OOM killer kicked in and killed
various processes, yet it never touched any badblocks process. Indeed,
manually suspending one badblocks process remedies the situation: within
a few seconds the process table is cleared of cron jobs, CPU usage is
back to 2-3% user and ~90% iowait, and the system is responsive again.
This happens no matter which badblocks process I suspend, whether the
one on hda or the one on hdc.
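(The suspend trick above is plain job control, i.e. SIGSTOP/SIGCONT. A harmless stand-in demonstration, using a sleep process instead of badblocks so nothing touches the disks:)

```shell
#!/bin/sh
# Demonstrate suspending a process the way one badblocks was suspended:
# SIGSTOP halts it, SIGCONT resumes it. "sleep" stands in for badblocks.
sleep 60 &
pid=$!
kill -STOP "$pid"       # suspend, as done with one badblocks process
sleep 1                 # give the state change a moment to land
state=$(awk '{print $3}' "/proc/$pid/stat")
echo "process $pid state: $state"   # "T" means stopped
kill -CONT "$pid"       # resume; with badblocks the stall came right back
kill "$pid"             # clean up the stand-in process
```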
Any ideas about what could be wrong? I should note that the kernel is my
distro's default. As the problem seems scheduler-specific I didn't
bother compiling a vanilla kernel, since the applied patches seem
irrelevant:
http://archlinux.org/packages/4197/
http://cvs.archlinux.org/cgi-bin/viewcvs.cgi/kernels/kernel26/?cvsroot=Current&only_with_tag=CURRENT
Thanks in advance,
Dimitris
P.S.1: Please CC me directly as I'm not subscribed
P.S.2: Keep in mind that the problematic oprofile outputs probably cover
a much longer period than 5 seconds, since due to the high load the
script took a long time to complete.
P.S.3: I couldn't find anywhere in the kernel documentation that setting
nmi_watchdog=0 is necessary for oprofile to work correctly. However,
Documentation/nmi_watchdog.txt mentions that oprofile should disable the
nmi_watchdog automatically, which doesn't happen with the latest kernel.
[-- Attachment #2: dmesg.txt --]
[-- Type: text/plain, Size: 10767 bytes --]
Linux version 2.6.22-ARCH (root@Wohnung) (gcc version 4.2.1 20070704 (prerelease)) #1 SMP PREEMPT Thu Aug 2 18:27:37 CEST 2007
BIOS-provided physical RAM map:
BIOS-e820: 0000000000000000 - 00000000000a0000 (usable)
BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
BIOS-e820: 0000000000100000 - 000000000fff0000 (usable)
BIOS-e820: 000000000fff0000 - 000000000fff3000 (ACPI NVS)
BIOS-e820: 000000000fff3000 - 0000000010000000 (ACPI data)
BIOS-e820: 00000000ffff0000 - 0000000100000000 (reserved)
0MB HIGHMEM available.
255MB LOWMEM available.
Entering add_active_range(0, 0, 65520) 0 entries of 256 used
Zone PFN ranges:
DMA 0 -> 4096
Normal 4096 -> 65520
HighMem 65520 -> 65520
early_node_map[1] active PFN ranges
0: 0 -> 65520
On node 0 totalpages: 65520
DMA zone: 32 pages used for memmap
DMA zone: 0 pages reserved
DMA zone: 4064 pages, LIFO batch:0
Normal zone: 479 pages used for memmap
Normal zone: 60945 pages, LIFO batch:15
HighMem zone: 0 pages used for memmap
DMI 2.2 present.
ACPI: RSDP 000F6B30, 0014 (r0 GBT )
ACPI: RSDT 0FFF3000, 0028 (r1 GBT AWRDACPI 42302E31 AWRD 0)
ACPI: FACP 0FFF3040, 0074 (r1 GBT AWRDACPI 42302E31 AWRD 0)
ACPI: DSDT 0FFF30C0, 224C (r1 GBT AWRDACPI 1000 MSFT 100000C)
ACPI: FACS 0FFF0000, 0040
ACPI: PM-Timer IO Port: 0x4008
Allocating PCI resources starting at 20000000 (gap: 10000000:efff0000)
Built 1 zonelists. Total pages: 65009
Kernel command line: auto BOOT_IMAGE=arch ro root=341 lapic nmi_watchdog=0
Local APIC disabled by BIOS -- reenabling.
Found and enabled local APIC!
mapped APIC to ffffd000 (fee00000)
Enabling fast FPU save and restore... done.
Enabling unmasked SIMD FPU exception support... done.
Initializing CPU#0
PID hash table entries: 1024 (order: 10, 4096 bytes)
Detected 798.025 MHz processor.
Console: colour VGA+ 80x25
Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
Memory: 254924k/262080k available (2392k kernel code, 6704k reserved, 787k data, 304k init, 0k highmem)
virtual kernel memory layout:
fixmap : 0xfff82000 - 0xfffff000 ( 500 kB)
pkmap : 0xff800000 - 0xffc00000 (4096 kB)
vmalloc : 0xd0800000 - 0xff7fe000 ( 751 MB)
lowmem : 0xc0000000 - 0xcfff0000 ( 255 MB)
.init : 0xc0421000 - 0xc046d000 ( 304 kB)
.data : 0xc03561df - 0xc041b1bc ( 787 kB)
.text : 0xc0100000 - 0xc03561df (2392 kB)
Checking if this processor honours the WP bit even in supervisor mode... Ok.
Calibrating delay using timer specific routine.. 1597.75 BogoMIPS (lpj=2662004)
Security Framework v1.0.0 initialized
Mount-cache hash table entries: 512
CPU: After generic identify, caps: 0387fbff 00000000 00000000 00000000 00000000 00000000 00000000
CPU: L1 I cache: 16K, L1 D cache: 16K
CPU: L2 cache: 256K
CPU serial number disabled.
CPU: After all inits, caps: 0383fbff 00000000 00000000 00000040 00000000 00000000 00000000
Intel machine check architecture supported.
Intel machine check reporting enabled on CPU#0.
Compat vDSO mapped to ffffe000.
Checking 'hlt' instruction... OK.
SMP alternatives: switching to UP code
Freeing SMP alternatives: 11k freed
Early unpacking initramfs... done
ACPI: Core revision 20070126
ACPI: Looking for DSDT in initramfs... error, file /DSDT.aml not found.
ACPI: setting ELCR to 0200 (from 1e00)
CPU0: Intel Pentium III (Coppermine) stepping 06
SMP motherboard not detected.
Brought up 1 CPUs
Booting paravirtualized kernel on bare hardware
NET: Registered protocol family 16
ACPI: bus type pci registered
PCI: PCI BIOS revision 2.10 entry at 0xfb370, last bus=1
PCI: Using configuration type 1
Setting up standard PCI resources
ACPI: Interpreter enabled
ACPI: (supports S0 S1 S4 S5)
ACPI: Using PIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (0000:00)
PCI: Probing PCI hardware (bus 00)
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Link [LNKA] (IRQs 1 3 4 5 6 7 *10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKB] (IRQs 1 3 4 5 6 7 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKC] (IRQs 1 3 4 5 6 7 10 11 *12 14 15)
ACPI: PCI Interrupt Link [LNKD] (IRQs 1 3 4 5 6 7 10 *11 12 14 15)
Linux Plug and Play Support v0.97 (c) Adam Belay
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp: PnP ACPI: found 11 devices
ACPI: ACPI bus type pnp unregistered
SCSI subsystem initialized
PCI: Using ACPI for IRQ routing
PCI: If a device doesn't work, try "pci=routeirq". If it helps, post a report
NetLabel: Initializing
NetLabel: domain hash size = 128
NetLabel: protocols = UNLABELED CIPSOv4
NetLabel: unlabeled traffic allowed by default
ACPI: RTC can wake from S4
pnp: 00:00: iomem range 0xf0000-0xf3fff could not be reserved
pnp: 00:00: iomem range 0xf4000-0xf7fff could not be reserved
pnp: 00:00: iomem range 0xf8000-0xfbfff could not be reserved
pnp: 00:00: iomem range 0xfc000-0xfffff could not be reserved
Time: tsc clocksource has been installed.
PCI: Bridge: 0000:00:01.0
IO window: disabled.
MEM window: d8000000-dfffffff
PREFETCH window: 20000000-200fffff
PCI: Setting latency timer of device 0000:00:01.0 to 64
NET: Registered protocol family 2
IP route cache hash table entries: 2048 (order: 1, 8192 bytes)
TCP established hash table entries: 8192 (order: 5, 131072 bytes)
TCP bind hash table entries: 8192 (order: 4, 98304 bytes)
TCP: Hash tables configured (established 8192 bind 8192)
TCP reno registered
checking if image is initramfs... it is
Freeing initrd memory: 600k freed
apm: BIOS version 1.2 Flags 0x07 (Driver version 1.16ac)
apm: overridden by ACPI.
VFS: Disk quotas dquot_6.5.1
Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
PCI: VIA PCI bridge detected. Disabling DAC.
Activating ISA DMA hang workarounds.
Boot video device is 0000:01:00.0
isapnp: Scanning for PnP cards...
Switched to high resolution mode on CPU 0
isapnp: No Plug & Play device found
Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing disabled
00:08: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
00:09: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
RAMDISK driver initialized: 16 RAM disks of 16384K size 1024 blocksize
loop: module loaded
input: Macintosh mouse button emulation as /class/input/input0
PNP: No PS/2 controller found. Probing ports directly.
serio: i8042 KBD port at 0x60,0x64 irq 1
mice: PS/2 mouse device common for all mice
TCP cubic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
Using IPI No-Shortcut mode
Freeing unused kernel memory: 304k freed
Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
VP_IDE: IDE controller at PCI slot 0000:00:07.1
VP_IDE: chipset revision 6
VP_IDE: not 100% native mode: will probe irqs later
VP_IDE: VIA vt82c596b (rev 12) IDE UDMA66 controller on pci0000:00:07.1
ide0: BM-DMA at 0xe000-0xe007, BIOS settings: hda:DMA, hdb:DMA
ide1: BM-DMA at 0xe008-0xe00f, BIOS settings: hdc:DMA, hdd:pio
Probing IDE interface ide0...
hda: WDC WD2500JB-55REA0, ATA DISK drive
hdb: MAXTOR 6L020J1, ATA DISK drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Probing IDE interface ide1...
hdc: WDC WD2500JB-55REA0, ATA DISK drive
ide1 at 0x170-0x177,0x376 on irq 15
hda: max request size: 512KiB
hda: 488397168 sectors (250059 MB) w/8192KiB Cache, CHS=30401/255/63, UDMA(66)
hda: cache flushes supported
hda: unknown partition table
hdb: max request size: 128KiB
hdb: 40132503 sectors (20547 MB) w/1819KiB Cache, CHS=39813/16/63, UDMA(66)
hdb: cache flushes supported
hdb: hdb1 hdb2
hdc: max request size: 512KiB
hdc: 488397168 sectors (250059 MB) w/8192KiB Cache, CHS=30401/255/63, UDMA(66)
hdc: cache flushes supported
hdc: unknown partition table
kjournald starting. Commit interval 5 seconds
EXT3-fs: mounted filesystem with ordered data mode.
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
input: Power Button (FF) as /class/input/input1
ACPI: Power Button (FF) [PWRF]
input: Power Button (CM) as /class/input/input2
ACPI: Power Button (CM) [PWRB]
input: Sleep Button (CM) as /class/input/input3
ACPI: Sleep Button (CM) [SLPB]
ACPI: CPU0 (power states: C1[C1] C2[C2])
ACPI: Processor [CPU0] (supports 2 throttling states)
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Linux agpgart interface v0.102 (c) Dave Jones
USB Universal Host Controller Interface driver v3.0
ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
PCI: setting IRQ 11 as level-triggered
ACPI: PCI Interrupt 0000:00:07.2[D] -> Link [LNKD] -> GSI 11 (level, low) -> IRQ 11
uhci_hcd 0000:00:07.2: UHCI Host Controller
uhci_hcd 0000:00:07.2: new USB bus registered, assigned bus number 1
uhci_hcd 0000:00:07.2: irq 11, io base 0x0000e400
usb usb1: configuration #1 chosen from 1 choice
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 2 ports detected
agpgart: Detected VIA Apollo Pro 133 chipset
agpgart: AGP aperture is 64M @ 0xe0000000
rtc_cmos 00:04: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one year, y3k
ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 12
PCI: setting IRQ 12 as level-triggered
ACPI: PCI Interrupt 0000:00:0a.0[A] -> Link [LNKC] -> GSI 12 (level, low) -> IRQ 12
skge 1.11 addr 0xe4000000 irq 12 chip Yukon rev 1
skge eth0: addr 00:0f:38:6a:9c:fe
ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
PCI: setting IRQ 10 as level-triggered
ACPI: PCI Interrupt 0000:00:08.0[A] -> Link [LNKA] -> GSI 10 (level, low) -> IRQ 10
sk98lin: driver has been replaced by the skge driver and is scheduled for removal
AC'97 0 analog subsections not ready
parport_pc 00:0a: reported by Plug and Play ACPI
parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
PPP generic driver version 2.4.2
input: PC Speaker as /class/input/input4
lp0: using parport0 (interrupt-driven).
ppdev: user-space parallel port driver
Marking TSC unstable due to: possible TSC halt in C2.
Time: acpi_pm clocksource has been installed.
md: md0 stopped.
EXT3 FS on hdb1, internal journal
ReiserFS: hdb2: found reiserfs format "3.6" with standard journal
ReiserFS: hdb2: using ordered data mode
ReiserFS: hdb2: journal params: device hdb2, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
ReiserFS: hdb2: checking transaction log (hdb2)
ReiserFS: hdb2: Using r5 hash to sort names
skge eth0: enabling interface
skge eth0: Link is up at 1000 Mbps, full duplex, flow control both
[-- Attachment #3: script.sh --]
[-- Type: application/x-shellscript, Size: 243 bytes --]
[-- Attachment #4: two_discs.txt --]
[-- Type: text/plain, Size: 15250 bytes --]
Thu Aug 2 18:36:39 EEST 2007
+ opcontrol --vmlinux=/usr/src/linux-2.6.22-ARCH/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.017 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
2970 61.0734 vmlinux
577 11.8651 libc-2.6.so
450 9.2535 ide_core
399 8.2048 ld-2.6.so
244 5.0175 bash
46 0.9459 ISO8859-1.so
28 0.5758 ext3
21 0.4318 jbd
20 0.4113 grep
11 0.2262 processor
11 0.2262 imap-login
CPU_CLK_UNHALT...|
samples| %|
------------------
10 90.9091 imap-login
1 9.0909 anon (tgid:3941 range:0xb7fd9000-0xb7fda000)
10 0.2056 oprofile
9 0.1851 ide_disk
8 0.1645 badblocks
CPU_CLK_UNHALT...|
samples| %|
------------------
5 62.5000 badblocks
3 37.5000 anon (tgid:5590 range:0xb7ef6000-0xb7ef7000)
6 0.1234 gawk
5 0.1028 skge
5 0.1028 ophelp
5 0.1028 libcrypto.so.0.9.8
5 0.1028 libpopt.so.0.0.0
5 0.1028 dovecot
4 0.0823 libext2fs.so.2.4
4 0.0823 reiserfs
3 0.0617 libpcre.so.0.0.1
3 0.0617 dovecot-auth
2 0.0411 libncurses.so.5.6
2 0.0411 screen-4.0.3
2 0.0411 libnetsnmp.so.15.0.0
2 0.0411 locale-archive
1 0.0206 tr
1 0.0206 libreadline.so.5.2
1 0.0206 librt-2.6.so
1 0.0206 libssl.so.0.9.8
1 0.0206 imap
CPU_CLK_UNHALT...|
samples| %|
------------------
1 100.000 anon (tgid:4125 range:0xb7f99000-0xb7f9a000)
1 0.0206 sshd
+ echo
+ echo
+ echo
+ opreport -l /usr/src/linux-2.6.22-ARCH/vmlinux
CPU: PIII, speed 798.017 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
471 15.8586 acpi_pm_read
416 14.0067 schedule
192 6.4646 __switch_to
111 3.7374 do_wp_page
77 2.5926 find_next_bit
71 2.3906 __handle_mm_fault
65 2.1886 __blockdev_direct_IO
64 2.1549 dequeue_task
49 1.6498 delay_tsc
43 1.4478 unmap_vmas
40 1.3468 do_page_fault
39 1.3131 get_page_from_freelist
30 1.0101 follow_page
30 1.0101 page_fault
29 0.9764 mask_and_ack_8259A
28 0.9428 filemap_nopage
27 0.9091 native_load_tls
24 0.8081 blk_rq_map_sg
24 0.8081 find_get_page
23 0.7744 __link_path_walk
23 0.7744 page_address
20 0.6734 find_vma
19 0.6397 kmem_cache_free
19 0.6397 put_page
18 0.6061 enable_8259A_irq
18 0.6061 radix_tree_lookup
17 0.5724 dio_bio_submit
17 0.5724 strnlen_user
16 0.5387 kmem_cache_alloc
15 0.5051 sysenter_past_esp
14 0.4714 copy_process
13 0.4377 __generic_file_aio_write_nolock
13 0.4377 permission
12 0.4040 __alloc_pages
12 0.4040 __mutex_lock_slowpath
12 0.4040 current_fs_time
12 0.4040 do_mmap_pgoff
12 0.4040 get_user_pages
11 0.3704 generic_make_request
10 0.3367 __d_lookup
10 0.3367 copy_page_range
10 0.3367 find_busiest_group
10 0.3367 kmem_cache_zalloc
9 0.3030 do_lookup
9 0.3030 generic_file_direct_IO
9 0.3030 load_elf_binary
9 0.3030 memcpy
9 0.3030 restore_nocheck
8 0.2694 file_update_time
8 0.2694 inotify_inode_queue_event
7 0.2357 __copy_to_user_ll
7 0.2357 _spin_lock_irqsave
7 0.2357 block_llseek
7 0.2357 deactivate_task
7 0.2357 do_exit
7 0.2357 flush_tlb_page
7 0.2357 generic_unplug_device
7 0.2357 irq_entries_start
7 0.2357 up_read
6 0.2020 __fput
6 0.2020 __make_request
6 0.2020 acpi_os_read_port
6 0.2020 bio_alloc_bioset
6 0.2020 cache_reap
6 0.2020 do_generic_mapping_read
6 0.2020 native_flush_tlb_single
6 0.2020 sys_mprotect
5 0.1684 __add_entropy_words
5 0.1684 __bio_add_page
5 0.1684 copy_strings
5 0.1684 do_path_lookup
5 0.1684 generic_permission
5 0.1684 lru_cache_add_active
5 0.1684 number
5 0.1684 proc_sys_lookup_table_one
5 0.1684 vfs_write
5 0.1684 vm_normal_page
5 0.1684 vm_stat_account
4 0.1347 __kmalloc
4 0.1347 __mutex_unlock_slowpath
4 0.1347 bio_add_page
4 0.1347 blk_backing_dev_unplug
4 0.1347 cpu_idle
4 0.1347 dio_bio_add_page
4 0.1347 dio_bio_complete
4 0.1347 dio_get_page
4 0.1347 do_munmap
4 0.1347 do_sys_poll
4 0.1347 fget_light
4 0.1347 file_read_actor
4 0.1347 filemap_write_and_wait
4 0.1347 find_vma_prepare
4 0.1347 getname
4 0.1347 kernel_read
4 0.1347 notifier_call_chain
4 0.1347 percpu_counter_mod
4 0.1347 preempt_schedule
4 0.1347 rb_first
4 0.1347 secure_ip_id
4 0.1347 strncpy_from_user
4 0.1347 sys_mmap2
3 0.1010 __blocking_notifier_call_chain
3 0.1010 __dentry_open
3 0.1010 __find_get_block
3 0.1010 __find_get_block_slow
3 0.1010 __get_user_4
3 0.1010 __mark_inode_dirty
3 0.1010 __mod_timer
3 0.1010 __pte_alloc
3 0.1010 __vm_enough_memory
3 0.1010 __wake_up_bit
3 0.1010 _atomic_dec_and_lock
3 0.1010 anon_vma_prepare
3 0.1010 blk_remove_plug
3 0.1010 cfq_dispatch_requests
3 0.1010 clear_user
3 0.1010 cond_resched
3 0.1010 copy_from_user
3 0.1010 copy_to_user
3 0.1010 debug_mutex_add_waiter
3 0.1010 dio_cleanup
3 0.1010 dnotify_parent
3 0.1010 do_fork
3 0.1010 do_wait
3 0.1010 dummy_vm_enough_memory
3 0.1010 elv_dispatch_sort
3 0.1010 flush_tlb_mm
3 0.1010 generic_file_direct_write
3 0.1010 getnstimeofday
3 0.1010 lock_timer_base
3 0.1010 may_expand_vm
3 0.1010 mod_timer
3 0.1010 need_resched
3 0.1010 page_cache_readahead
3 0.1010 page_remove_rmap
3 0.1010 path_walk
3 0.1010 pit_next_event
3 0.1010 preempt_schedule_irq
3 0.1010 rb_erase
3 0.1010 restore_all
3 0.1010 should_remove_suid
3 0.1010 submit_page_section
3 0.1010 sys_close
3 0.1010 unmap_region
3 0.1010 vma_link
3 0.1010 vma_prio_tree_add
3 0.1010 vsnprintf
3 0.1010 wake_up_new_task
2 0.0673 __atomic_notifier_call_chain
2 0.0673 __copy_user_intel
2 0.0673 __dec_zone_state
2 0.0673 __do_softirq
2 0.0673 __first_cpu
2 0.0673 __generic_unplug_device
2 0.0673 __inc_zone_page_state
2 0.0673 __inc_zone_state
2 0.0673 add_timer_randomness
2 0.0673 alloc_inode
2 0.0673 blk_do_ordered
2 0.0673 blk_queue_bounce
2 0.0673 blk_recount_segments
2 0.0673 blkdev_direct_IO
2 0.0673 cache_alloc_refill
2 0.0673 cfq_add_rq_rb
2 0.0673 cfq_insert_request
2 0.0673 cfq_remove_request
2 0.0673 debug_mutex_lock_common
2 0.0673 del_timer
2 0.0673 detach_pid
2 0.0673 do_IRQ
2 0.0673 do_select
2 0.0673 do_sync_read
2 0.0673 do_sync_write
2 0.0673 drain_array
2 0.0673 dummy_file_mmap
2 0.0673 elf_map
2 0.0673 elv_completed_request
2 0.0673 elv_insert
2 0.0673 error_code
2 0.0673 file_ra_state_init
2 0.0673 find_extend_vma
2 0.0673 find_mergeable_anon_vma
2 0.0673 free_hot_cold_page
2 0.0673 free_pgtables
2 0.0673 generic_segment_checks
2 0.0673 get_next_timer_interrupt
2 0.0673 get_nr_files
2 0.0673 get_unmapped_area
2 0.0673 hrtimer_interrupt
2 0.0673 idle_cpu
2 0.0673 internal_add_timer
2 0.0673 io_schedule
2 0.0673 ip_append_data
2 0.0673 kmap_atomic
2 0.0673 kmap_atomic_prot
2 0.0673 ktime_get_ts
2 0.0673 link_path_walk
2 0.0673 locks_remove_flock
2 0.0673 max_block
2 0.0673 mempool_alloc
2 0.0673 mempool_free
2 0.0673 mutex_remove_waiter
2 0.0673 nameidata_to_filp
2 0.0673 new_inode
2 0.0673 open_namei
2 0.0673 pipe_read
2 0.0673 prio_tree_insert
2 0.0673 rb_insert_color
2 0.0673 rcu_pending
2 0.0673 remove_suid
2 0.0673 resume_userspace
2 0.0673 rw_verify_area
2 0.0673 sched_clock
2 0.0673 scheduler_tick
2 0.0673 special_mapping_nopage
2 0.0673 split_vma
2 0.0673 submit_bio
2 0.0673 sys_llseek
2 0.0673 sys_rt_sigaction
2 0.0673 sys_write
2 0.0673 syscall_exit
2 0.0673 sysctl_head_next
2 0.0673 system_call
2 0.0673 timespec_trunc
2 0.0673 touch_atime
2 0.0673 vfs_fstat
2 0.0673 vfs_read
2 0.0673 vma_merge
2 0.0673 vma_prio_tree_insert
2 0.0673 xrlim_allow
1 0.0337 I_BDEV
1 0.0337 __brelse
1 0.0337 __do_page_cache_readahead
1 0.0337 __end_that_request_first
1 0.0337 __free_pages
1 0.0337 __getblk
1 0.0337 __init_rwsem
1 0.0337 __ip_route_output_key
1 0.0337 __kfree_skb
1 0.0337 __lru_add_drain
1 0.0337 __page_set_anon_rmap
1 0.0337 __pagevec_lru_add_active
1 0.0337 __pollwait
1 0.0337 __rcu_process_callbacks
1 0.0337 __udp4_lib_rcv
1 0.0337 __vma_link
1 0.0337 __vma_link_rb
1 0.0337 __writeback_single_inode
1 0.0337 acpi_hw_register_read
1 0.0337 acpi_os_write_port
1 0.0337 add_wait_queue
1 0.0337 anon_pipe_buf_release
1 0.0337 anon_vma_unlink
1 0.0337 arch_get_unmapped_area_topdown
1 0.0337 arch_pick_mmap_layout
1 0.0337 arch_setup_additional_pages
1 0.0337 bio_get_nr_vecs
1 0.0337 blkdev_get_blocks
1 0.0337 blockable_page_cache_readahead
1 0.0337 can_share_swap_page
1 0.0337 cfq_choose_req
1 0.0337 cfq_cic_rb_lookup
1 0.0337 cfq_completed_request
1 0.0337 cfq_init_prio_data
1 0.0337 cfq_queue_empty
1 0.0337 cfq_service_tree_add
1 0.0337 cfq_set_request
1 0.0337 check_userspace
1 0.0337 clear_inode
1 0.0337 clockevents_program_event
1 0.0337 copy_thread_group_keys
1 0.0337 count
1 0.0337 cp_new_stat64
1 0.0337 d_alloc
1 0.0337 d_callback
1 0.0337 d_hash_and_lookup
1 0.0337 debug_mutex_free_waiter
1 0.0337 debug_mutex_unlock
1 0.0337 dev_queue_xmit
1 0.0337 disk_round_stats
1 0.0337 do_notify_parent
1 0.0337 do_notify_resume
1 0.0337 do_softirq
1 0.0337 do_sys_open
1 0.0337 down_read_trylock
1 0.0337 dummy_bprm_set_security
1 0.0337 dummy_capable
1 0.0337 dummy_inode_getattr
1 0.0337 dummy_task_wait
1 0.0337 dup_fd
1 0.0337 end_that_request_last
1 0.0337 enqueue_hrtimer
1 0.0337 exit_aio
1 0.0337 exit_sem
1 0.0337 fd_install
1 0.0337 flock64_to_posix_lock
1 0.0337 flush_old_exec
1 0.0337 fn_hash_lookup
1 0.0337 fput
1 0.0337 free_block
1 0.0337 free_pgd_range
1 0.0337 generic_drop_inode
1 0.0337 generic_file_aio_read
1 0.0337 generic_file_llseek
1 0.0337 generic_fillattr
1 0.0337 get_dcookie
1 0.0337 get_empty_filp
1 0.0337 get_request
1 0.0337 get_unused_fd
1 0.0337 handle_level_irq
1 0.0337 hrtimer_try_to_cancel
1 0.0337 icmp_send
1 0.0337 inode_has_buffers
1 0.0337 install_special_mapping
1 0.0337 ip_push_pending_frames
1 0.0337 iput
1 0.0337 key_put
1 0.0337 kfree
1 0.0337 kfree_skb
1 0.0337 kfree_skbmem
1 0.0337 ksoftirqd
1 0.0337 kthread_should_stop
1 0.0337 local_bh_enable_ip
1 0.0337 lock_hrtimer_base
1 0.0337 lock_sock_nested
1 0.0337 lookup_mnt
1 0.0337 mark_page_accessed
1 0.0337 may_open
1 0.0337 mempool_alloc_slab
1 0.0337 mm_release
1 0.0337 msecs_to_jiffies
1 0.0337 native_flush_tlb
1 0.0337 native_io_delay
1 0.0337 native_load_esp0
1 0.0337 native_set_pte_at
1 0.0337 netif_receive_skb
1 0.0337 padzero
1 0.0337 page_add_file_rmap
1 0.0337 pfifo_fast_enqueue
1 0.0337 pipe_poll
1 0.0337 pipe_release
1 0.0337 pipe_write
1 0.0337 prepare_binprm
1 0.0337 proc_flush_task
1 0.0337 profile_pc
1 0.0337 profile_tick
1 0.0337 pty_close
1 0.0337 put_files_struct
1 0.0337 put_pid
1 0.0337 quicklist_trim
1 0.0337 raise_softirq
1 0.0337 rb_next
1 0.0337 rb_prev
1 0.0337 rcu_needs_cpu
1 0.0337 rcu_start_batch
1 0.0337 recalc_sigpending_tsk
1 0.0337 recalc_task_prio
1 0.0337 release_pages
1 0.0337 release_vm86_irqs
1 0.0337 remove_vma
1 0.0337 restore_sigcontext
1 0.0337 ret_from_exception
1 0.0337 run_local_timers
1 0.0337 schedule_delayed_work
1 0.0337 set_binfmt
1 0.0337 setup_arg_pages
1 0.0337 show_stat
1 0.0337 sig_ignored
1 0.0337 sigprocmask
1 0.0337 skb_clone
1 0.0337 sys_fstat64
1 0.0337 sys_lseek
1 0.0337 sys_read
1 0.0337 sys_rt_sigprocmask
1 0.0337 sys_select
1 0.0337 task_rq_lock
1 0.0337 tasklet_action
1 0.0337 tcp_ack
1 0.0337 tick_do_update_jiffies64
1 0.0337 tick_nohz_stop_sched_tick
1 0.0337 try_to_del_timer_sync
1 0.0337 try_to_wake_up
1 0.0337 unix_create1
1 0.0337 unix_poll
1 0.0337 vfs_getattr
1 0.0337 vfs_llseek
1 0.0337 vm_acct_memory
1 0.0337 vma_adjust
1 0.0337 vma_prio_tree_remove
1 0.0337 work_resched
[-- Attachment #5: two_discs_bad.txt --]
[-- Type: text/plain, Size: 25561 bytes --]
Thu Aug 2 18:43:32 EEST 2007
+ opcontrol --vmlinux=/usr/src/linux-2.6.22-ARCH/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.017 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
16319 45.7794 vmlinux
6421 18.0127 perl
CPU_CLK_UNHALT...|
samples| %|
------------------
6420 99.9844 perl
1 0.0156 anon (tgid:5897 range:0xb7fdb000-0xb7fdc000)
4770 13.3812 libpython2.5.so.1.0
3497 9.8101 libc-2.6.so
1830 5.1337 ide_core
1018 2.8558 ld-2.6.so
290 0.8135 bash
249 0.6985 oprofiled
CPU_CLK_UNHALT...|
samples| %|
------------------
246 98.7952 oprofiled
3 1.2048 anon (tgid:5878 range:0xb7f71000-0xb7f72000)
202 0.5667 ext3
178 0.4993 jbd
159 0.4460 badblocks
CPU_CLK_UNHALT...|
samples| %|
------------------
115 72.3270 badblocks
27 16.9811 anon (tgid:5590 range:0xb7ef6000-0xb7ef7000)
17 10.6918 anon (tgid:5430 range:0xb7f1a000-0xb7f1b000)
154 0.4320 libpthread-2.6.so
72 0.2020 imap-login
CPU_CLK_UNHALT...|
samples| %|
------------------
66 91.6667 imap-login
6 8.3333 anon (tgid:3940 range:0xb7f94000-0xb7f95000)
64 0.1795 oprofile
56 0.1571 ISO8859-1.so
51 0.1431 libcrypto.so.0.9.8
50 0.1403 libext2fs.so.2.4
48 0.1347 skge
33 0.0926 ide_disk
32 0.0898 gawk
27 0.0757 grep
17 0.0477 libresolv-2.6.so
14 0.0393 dovecot-auth
13 0.0365 libnetsnmp.so.15.0.0
11 0.0309 libnetsnmpmibs.so.15.0.0
11 0.0309 libtasn1.so.3.0.10
10 0.0281 reiserfs
8 0.0224 dovecot
CPU_CLK_UNHALT...|
samples| %|
------------------
7 87.5000 dovecot
1 12.5000 anon (tgid:3919 range:0xb7f01000-0xb7f02000)
6 0.0168 libncurses.so.5.6
6 0.0168 locale-archive
4 0.0112 libdl-2.6.so
3 0.0084 libm-2.6.so
2 0.0056 mpop
CPU_CLK_UNHALT...|
samples| %|
------------------
1 50.0000 mpop
1 50.0000 anon (tgid:5900 range:0xb7f9d000-0xb7f9e000)
2 0.0056 libreadline.so.5.2
2 0.0056 sshd
1 0.0028 cat
1 0.0028 ls
1 0.0028 tr
1 0.0028 ipv6
1 0.0028 libnss_dns-2.6.so
1 0.0028 libnss_files-2.6.so
1 0.0028 libpcre.so.0.0.1
1 0.0028 mktemp
1 0.0028 ophelp
1 0.0028 screen-4.0.3
1 0.0028 which
1 0.0028 libgcrypt.so.11.2.3
1 0.0028 libidn.so.11.5.28
1 0.0028 libpopt.so.0.0.0
1 0.0028 _random.so
1 0.0028 imap
1 0.0028 crond
1 0.0028 snmpd
+ echo
+ echo
+ echo
+ opreport -l /usr/src/linux-2.6.22-ARCH/vmlinux
CPU: PIII, speed 798.017 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
3832 23.4818 __switch_to
3380 20.7121 schedule
653 4.0015 mask_and_ack_8259A
452 2.7698 __blockdev_direct_IO
327 2.0038 dequeue_task
319 1.9548 follow_page
243 1.4891 get_page_from_freelist
219 1.3420 delay_tsc
187 1.1459 enable_8259A_irq
184 1.1275 put_page
173 1.0601 __handle_mm_fault
159 0.9743 native_load_tls
155 0.9498 do_page_fault
131 0.8027 __copy_to_user_ll
128 0.7844 do_wp_page
104 0.6373 irq_entries_start
103 0.6312 kmem_cache_free
101 0.6189 unmap_vmas
91 0.5576 sysenter_past_esp
89 0.5454 get_user_pages
87 0.5331 __link_path_walk
86 0.5270 math_state_restore
85 0.5209 blk_rq_map_sg
83 0.5086 __generic_file_aio_write_nolock
82 0.5025 __bio_add_page
77 0.4718 find_vma
75 0.4596 page_address
67 0.4106 page_fault
66 0.4044 generic_file_direct_write
65 0.3983 dio_bio_complete
62 0.3799 kmem_cache_alloc
62 0.3799 vfs_write
59 0.3615 filemap_nopage
53 0.3248 __d_lookup
51 0.3125 __mutex_lock_slowpath
51 0.3125 submit_page_section
48 0.2941 _spin_lock_irqsave
48 0.2941 dio_bio_submit
46 0.2819 bio_alloc_bioset
46 0.2819 block_llseek
45 0.2758 __make_request
44 0.2696 do_generic_mapping_read
42 0.2574 __add_entropy_words
42 0.2574 mempool_free
42 0.2574 restore_all
41 0.2512 current_fs_time
41 0.2512 handle_level_irq
41 0.2512 vm_normal_page
39 0.2390 find_get_page
38 0.2329 radix_tree_lookup
37 0.2267 dio_get_page
36 0.2206 blk_backing_dev_unplug
35 0.2145 dio_bio_add_page
34 0.2083 device_not_available
34 0.2083 dio_send_cur_page
34 0.2083 mark_page_accessed
33 0.2022 __mutex_unlock_slowpath
33 0.2022 acpi_pm_read
33 0.2022 do_sync_write
33 0.2022 fget_light
33 0.2022 generic_unplug_device
32 0.1961 cfq_completed_request
32 0.1961 permission
31 0.1900 generic_file_aio_write_nolock
31 0.1900 generic_make_request
31 0.1900 restore_nocheck
30 0.1838 __alloc_pages
30 0.1838 cache_reap
30 0.1838 generic_file_direct_IO
29 0.1777 __find_get_block
29 0.1777 do_mmap_pgoff
28 0.1716 try_to_wake_up
27 0.1655 blk_recount_segments
27 0.1655 cond_resched
26 0.1593 add_timer_randomness
26 0.1593 bio_add_page
25 0.1532 __mod_timer
25 0.1532 io_schedule
25 0.1532 load_elf_binary
25 0.1532 sched_clock
24 0.1471 elv_insert
24 0.1471 file_update_time
24 0.1471 generic_permission
24 0.1471 strnlen_user
23 0.1409 __end_that_request_first
22 0.1348 kfree
21 0.1287 dio_cleanup
21 0.1287 do_path_lookup
21 0.1287 strncpy_from_user
21 0.1287 sys_llseek
20 0.1226 do_sys_poll
20 0.1226 max_block
19 0.1164 copy_process
19 0.1164 proc_sys_lookup_table_one
19 0.1164 rw_verify_area
18 0.1103 kmap_atomic_prot
18 0.1103 preempt_schedule_irq
17 0.1042 blkdev_direct_IO
17 0.1042 dio_complete
17 0.1042 dnotify_parent
17 0.1042 do_lookup
17 0.1042 number
17 0.1042 vsnprintf
16 0.0980 bio_fs_destructor
16 0.0980 cfq_remove_request
16 0.0980 memcpy
16 0.0980 note_interrupt
15 0.0919 dio_bio_end_io
15 0.0919 generic_segment_checks
15 0.0919 hrtimer_interrupt
15 0.0919 inotify_inode_queue_event
15 0.0919 preempt_schedule
14 0.0858 _atomic_dec_and_lock
14 0.0858 drain_array
14 0.0858 internal_add_timer
14 0.0858 mempool_alloc
14 0.0858 vm_stat_account
13 0.0797 __copy_user_intel
13 0.0797 __inc_zone_state
13 0.0797 __mark_inode_dirty
13 0.0797 blk_remove_plug
13 0.0797 copy_page_range
13 0.0797 deactivate_task
13 0.0797 del_timer
13 0.0797 kmem_cache_zalloc
13 0.0797 open_namei
12 0.0735 __freed_request
12 0.0735 cache_alloc_refill
12 0.0735 do_IRQ
12 0.0735 find_extend_vma
12 0.0735 mutex_remove_waiter
12 0.0735 rb_erase
12 0.0735 task_rq_lock
11 0.0674 __copy_from_user_ll
11 0.0674 blkdev_get_blocks
11 0.0674 cfq_dispatch_requests
11 0.0674 cfq_insert_request
11 0.0674 clear_bdi_congested
11 0.0674 common_interrupt
11 0.0674 debug_mutex_add_waiter
11 0.0674 dio_new_bio
11 0.0674 disk_round_stats
11 0.0674 native_flush_tlb_single
11 0.0674 timespec_trunc
11 0.0674 unix_poll
11 0.0674 up_read
11 0.0674 vfs_read
10 0.0613 __follow_mount
10 0.0613 __fput
10 0.0613 __generic_unplug_device
10 0.0613 anon_vma_prepare
10 0.0613 arch_get_unmapped_area_topdown
10 0.0613 bio_get_nr_vecs
10 0.0613 bio_init
10 0.0613 cfq_set_request
10 0.0613 debug_mutex_lock_common
10 0.0613 generic_file_aio_read
10 0.0613 generic_fillattr
10 0.0613 irq_exit
10 0.0613 rb_insert_color
10 0.0613 sys_write
9 0.0552 __do_page_cache_readahead
9 0.0552 cfq_may_queue
9 0.0552 cfq_service_tree_add
9 0.0552 debug_mutex_unlock
9 0.0552 error_code
9 0.0552 find_mergeable_anon_vma
9 0.0552 get_empty_filp
9 0.0552 getname
9 0.0552 idle_cpu
9 0.0552 path_walk
9 0.0552 work_resched
8 0.0490 __dentry_open
8 0.0490 __wake_up_bit
8 0.0490 bdev_read_only
8 0.0490 bio_put
8 0.0490 cfq_queue_empty
8 0.0490 cp_new_stat64
8 0.0490 debug_mutex_free_waiter
8 0.0490 elv_completed_request
8 0.0490 elv_next_request
8 0.0490 file_read_actor
8 0.0490 find_vma_prepare
8 0.0490 inotify_dentry_parent_queue_event
8 0.0490 ip_append_data
8 0.0490 lock_timer_base
8 0.0490 lru_cache_add_active
8 0.0490 mempool_free_slab
8 0.0490 percpu_counter_mod
8 0.0490 rq_init
8 0.0490 submit_bio
8 0.0490 touch_atime
8 0.0490 update_wall_time
8 0.0490 vfs_llseek
8 0.0490 vma_merge
7 0.0429 __do_softirq
7 0.0429 __find_get_block_slow
7 0.0429 __rmqueue
7 0.0429 bio_free
7 0.0429 block_read_full_page
7 0.0429 call_rcu
7 0.0429 copy_strings
7 0.0429 do_sync_read
7 0.0429 do_sys_open
7 0.0429 dput
7 0.0429 dummy_inode_permission
7 0.0429 end_that_request_last
7 0.0429 fget
7 0.0429 filp_close
7 0.0429 find_vma_prev
7 0.0429 flush_signal_handlers
7 0.0429 kmap_atomic
7 0.0429 native_load_esp0
7 0.0429 recalc_task_prio
7 0.0429 remove_vma
7 0.0429 sys_mmap2
7 0.0429 system_call
7 0.0429 vma_adjust
7 0.0429 zone_watermark_ok
6 0.0368 __atomic_notifier_call_chain
6 0.0368 __blk_put_request
6 0.0368 __getblk
6 0.0368 __pagevec_lru_add_active
6 0.0368 __path_lookup_intent_open
6 0.0368 __pte_alloc
6 0.0368 alloc_buffer_head
6 0.0368 copy_to_user
6 0.0368 down_read_trylock
6 0.0368 flush_tlb_mm
6 0.0368 get_request
6 0.0368 get_unused_fd
6 0.0368 handle_IRQ_event
6 0.0368 link_path_walk
6 0.0368 mempool_alloc_slab
6 0.0368 ret_from_exception
6 0.0368 sys_mprotect
6 0.0368 sys_rt_sigaction
6 0.0368 vm_acct_memory
5 0.0306 __const_udelay
5 0.0306 __ip_route_output_key
5 0.0306 __page_set_anon_rmap
5 0.0306 account_user_time
5 0.0306 bit_waitqueue
5 0.0306 block_write_full_page
5 0.0306 cfq_add_rq_rb
5 0.0306 cfq_cic_rb_lookup
5 0.0306 copy_from_user
5 0.0306 do_softirq
5 0.0306 dummy_file_permission
5 0.0306 elv_dequeue_request
5 0.0306 file_ra_state_init
5 0.0306 free_block
5 0.0306 free_poll_entry
5 0.0306 freed_request
5 0.0306 generic_file_open
5 0.0306 get_unmapped_area
5 0.0306 hrtimer_forward
5 0.0306 inode_init_once
5 0.0306 ktime_get_ts
5 0.0306 lru_add_drain
5 0.0306 may_open
5 0.0306 native_set_pte_at
5 0.0306 net_rx_action
5 0.0306 notifier_call_chain
5 0.0306 page_cache_readahead
5 0.0306 put_io_context
5 0.0306 radix_tree_gang_lookup_tag
5 0.0306 rcu_pending
5 0.0306 sync_sb_inodes
5 0.0306 sys_faccessat
5 0.0306 unmap_region
4 0.0245 __anon_vma_link
4 0.0245 __block_write_full_page
4 0.0245 __get_user_4
4 0.0245 __kmalloc
4 0.0245 __rcu_process_callbacks
4 0.0245 __wake_up
4 0.0245 __wake_up_common
4 0.0245 _spin_lock
4 0.0245 add_disk_randomness
4 0.0245 blk_plug_device
4 0.0245 blk_queue_bounce
4 0.0245 cfq_dispatch_insert
4 0.0245 cfq_put_request
4 0.0245 clockevents_program_event
4 0.0245 d_alloc
4 0.0245 do_fcntl
4 0.0245 do_munmap
4 0.0245 down_read
4 0.0245 dummy_vm_enough_memory
4 0.0245 dup_fd
4 0.0245 effective_prio
4 0.0245 elv_queue_empty
4 0.0245 enqueue_hrtimer
4 0.0245 enqueue_task
4 0.0245 flush_old_exec
4 0.0245 generic_file_mmap
4 0.0245 getnstimeofday
4 0.0245 groups_search
4 0.0245 ip_options_build
4 0.0245 ip_push_pending_frames
4 0.0245 irq_enter
4 0.0245 kunmap_atomic
4 0.0245 locks_remove_flock
4 0.0245 need_resched
4 0.0245 page_waitqueue
4 0.0245 path_lookup_open
4 0.0245 raise_softirq
4 0.0245 release_pages
4 0.0245 ret_from_intr
4 0.0245 run_posix_cpu_timers
4 0.0245 sha_transform
4 0.0245 skb_copy_and_csum_bits
4 0.0245 skb_release_data
4 0.0245 softlockup_tick
4 0.0245 sys_close
4 0.0245 sys_rt_sigprocmask
4 0.0245 sys_socketcall
4 0.0245 sysctl_head_next
4 0.0245 task_running_tick
4 0.0245 wake_up_process
3 0.0184 __activate_task
3 0.0184 __bread
3 0.0184 __dec_zone_state
3 0.0184 __iget
3 0.0184 __lru_add_drain
3 0.0184 __pollwait
3 0.0184 __rb_rotate_right
3 0.0184 __tasklet_schedule
3 0.0184 __vm_enough_memory
3 0.0184 __vma_link
3 0.0184 _local_bh_enable
3 0.0184 alloc_inode
3 0.0184 bio_endio
3 0.0184 blk_start_queueing
3 0.0184 blockable_page_cache_readahead
3 0.0184 cfq_put_queue
3 0.0184 clear_user
3 0.0184 clocksource_get_next
3 0.0184 credit_entropy_store
3 0.0184 debug_mutex_set_owner
3 0.0184 do_mpage_readpage
3 0.0184 do_notify_parent
3 0.0184 dummy_inode_getattr
3 0.0184 elv_dispatch_sort
3 0.0184 elv_put_request
3 0.0184 file_move
3 0.0184 find_next_bit
3 0.0184 find_next_zero_bit
3 0.0184 flush_tlb_page
3 0.0184 fput
3 0.0184 free_hot_cold_page
3 0.0184 free_pgtables
3 0.0184 get_nr_files
3 0.0184 install_special_mapping
3 0.0184 ip_rcv
3 0.0184 local_bh_enable_ip
3 0.0184 locks_remove_posix
3 0.0184 may_expand_vm
3 0.0184 neigh_lookup
3 0.0184 new_inode
3 0.0184 page_add_new_anon_rmap
3 0.0184 poll_initwait
3 0.0184 proc_lookup
3 0.0184 put_files_struct
3 0.0184 radix_tree_insert
3 0.0184 rb_first
3 0.0184 rb_next
3 0.0184 set_normalized_timespec
3 0.0184 setup_arg_pages
3 0.0184 should_remove_suid
3 0.0184 sock_alloc_send_skb
3 0.0184 submit_bh
3 0.0184 sys_poll
3 0.0184 sys_read
3 0.0184 tick_do_update_jiffies64
3 0.0184 tick_sched_timer
3 0.0184 unlink_file_vma
3 0.0184 vfs_getattr
3 0.0184 vma_link
2 0.0123 I_BDEV
2 0.0123 __block_prepare_write
2 0.0123 __blocking_notifier_call_chain
2 0.0123 __brelse
2 0.0123 __dev_get_by_name
2 0.0123 __elv_add_request
2 0.0123 __inc_zone_page_state
2 0.0123 __lookup_mnt
2 0.0123 __mutex_init
2 0.0123 __rcu_pending
2 0.0123 __set_page_dirty_buffers
2 0.0123 __set_page_dirty_nobuffers
2 0.0123 __vma_link_rb
2 0.0123 __xfrm_lookup
2 0.0123 account_system_time
2 0.0123 add_to_page_cache
2 0.0123 add_wait_queue
2 0.0123 apic_timer_interrupt
2 0.0123 bio_alloc
2 0.0123 bio_hw_segments
2 0.0123 blk_do_ordered
2 0.0123 cfq_choose_req
2 0.0123 cfq_init_prio_data
2 0.0123 cfq_rb_erase
2 0.0123 cfq_resort_rr_list
2 0.0123 check_pgt_cache
2 0.0123 check_userspace
2 0.0123 copy_strings_kernel
2 0.0123 copy_thread
2 0.0123 csum_partial_copy_generic
2 0.0123 d_rehash
2 0.0123 datagram_poll
2 0.0123 dentry_iput
2 0.0123 dnotify_flush
2 0.0123 do_filp_open
2 0.0123 do_gettimeofday
2 0.0123 do_ioctl
2 0.0123 do_notify_resume
2 0.0123 do_wait
2 0.0123 dummy_capable
2 0.0123 dummy_file_alloc_security
2 0.0123 dummy_inode_alloc_security
2 0.0123 elv_rb_add
2 0.0123 elv_rqhash_add
2 0.0123 elv_set_request
2 0.0123 exit_aio
2 0.0123 exit_sem
2 0.0123 fd_install
2 0.0123 fib_semantic_match
2 0.0123 filemap_write_and_wait
2 0.0123 get_dcookie
2 0.0123 get_index
2 0.0123 half_md4_transform
2 0.0123 hweight32
2 0.0123 icmp_push_reply
2 0.0123 icmp_send
2 0.0123 init_new_context
2 0.0123 init_page_buffers
2 0.0123 init_request_from_bio
2 0.0123 inode_has_buffers
2 0.0123 insert_vm_struct
2 0.0123 iput
2 0.0123 kfree_skbmem
2 0.0123 ll_rw_block
2 0.0123 mmput
2 0.0123 mod_zone_page_state
2 0.0123 nameidata_to_filp
2 0.0123 native_apic_write
2 0.0123 page_add_file_rmap
2 0.0123 page_remove_rmap
2 0.0123 pipe_poll
2 0.0123 pipe_read
2 0.0123 pipe_write
2 0.0123 prepare_binprm
2 0.0123 proc_flush_task
2 0.0123 proc_sys_lookup_table
2 0.0123 profile_tick
2 0.0123 quicklist_trim
2 0.0123 radix_tree_tag_clear
2 0.0123 rcu_check_callbacks
2 0.0123 recalc_sigpending_tsk
2 0.0123 resched_task
2 0.0123 resume_userspace
2 0.0123 rt_intern_hash
2 0.0123 sched_balance_self
2 0.0123 schedule_tail
2 0.0123 seq_printf
2 0.0123 set_page_dirty
2 0.0123 show_stat
2 0.0123 sigprocmask
2 0.0123 sk_free
2 0.0123 smp_apic_timer_interrupt
2 0.0123 sock_def_write_space
2 0.0123 sock_recvmsg
2 0.0123 special_mapping_nopage
2 0.0123 sys_dup2
2 0.0123 sys_ioctl
2 0.0123 sys_mkdirat
2 0.0123 syscall_exit
2 0.0123 tcp_poll
2 0.0123 tcp_sendmsg
2 0.0123 tcp_v4_rcv
2 0.0123 unshare_files
2 0.0123 vma_prio_tree_remove
2 0.0123 wake_up_bit
2 0.0123 wake_up_new_task
1 0.0061 __alloc_skb
1 0.0061 __cfq_slice_expired
1 0.0061 __dec_zone_page_state
1 0.0061 __delay
1 0.0061 __dequeue_signal
1 0.0061 __free_pages_ok
1 0.0061 __free_pipe_info
1 0.0061 __get_free_pages
1 0.0061 __group_complete_signal
1 0.0061 __init_rwsem
1 0.0061 __kfree_skb
1 0.0061 __next_cpu
1 0.0061 __pagevec_lru_add
1 0.0061 __put_unused_fd
1 0.0061 __put_user_4
1 0.0061 __rb_rotate_left
1 0.0061 __remove_hrtimer
1 0.0061 __udp4_lib_lookup
1 0.0061 __udp4_lib_rcv
1 0.0061 __user_walk_fd
1 0.0061 _d_rehash
1 0.0061 _read_lock
1 0.0061 add_to_page_cache_lru
1 0.0061 all_vm_events
1 0.0061 anon_pipe_buf_release
1 0.0061 anon_vma_link
1 0.0061 anon_vma_unlink
1 0.0061 arch_align_stack
1 0.0061 arch_pick_mmap_layout
1 0.0061 arch_setup_additional_pages
1 0.0061 attach_pid
1 0.0061 bio_phys_segments
1 0.0061 can_share_swap_page
1 0.0061 can_vma_merge_after
1 0.0061 can_vma_merge_before
1 0.0061 cfq_merged_request
1 0.0061 cleanup_timers
1 0.0061 compute_creds
1 0.0061 copy_vma
1 0.0061 d_instantiate
1 0.0061 d_path
1 0.0061 dec_zone_page_state
1 0.0061 default_llseek
1 0.0061 del_timer_sync
1 0.0061 delayed_put_pid
1 0.0061 dev_seq_open
1 0.0061 dio_zero_block
1 0.0061 diskstats_show
1 0.0061 do_brk
1 0.0061 do_fork
1 0.0061 do_futex
1 0.0061 do_mremap
1 0.0061 do_proc_sys_lookup
1 0.0061 do_select
1 0.0061 do_sigaction
1 0.0061 do_timer
1 0.0061 do_writepages
1 0.0061 dst_destroy
1 0.0061 dummy_bprm_check_security
1 0.0061 dummy_d_instantiate
1 0.0061 dummy_file_free_security
1 0.0061 dummy_file_mmap
1 0.0061 elf_map
1 0.0061 eligible_child
1 0.0061 elv_may_queue
1 0.0061 elv_merge
1 0.0061 elv_rqhash_del
1 0.0061 end_buffer_async_write
1 0.0061 exit_itimers
1 0.0061 extract_buf
1 0.0061 fasync_helper
1 0.0061 file_permission
1 0.0061 find_get_pages_tag
1 0.0061 find_or_create_page
1 0.0061 finish_wait
1 0.0061 free_pages_bulk
1 0.0061 get_locked_pte
1 0.0061 get_random_int
1 0.0061 get_request_wait
1 0.0061 get_task_mm
1 0.0061 get_vmalloc_info
1 0.0061 inc_zone_page_state
1 0.0061 inet_create
1 0.0061 inet_getpeer
1 0.0061 inet_select_addr
1 0.0061 init_fpu
1 0.0061 inotify_d_instantiate
1 0.0061 iov_fault_in_pages_read
1 0.0061 ip4_datagram_connect
1 0.0061 ip_dev_find
1 0.0061 ip_local_deliver
1 0.0061 ip_output
1 0.0061 kill_fasync
1 0.0061 lookup_create
1 0.0061 lookup_hash
1 0.0061 lookup_mnt
1 0.0061 lru_cache_add
1 0.0061 mark_buffer_dirty
1 0.0061 memset
1 0.0061 mm_release
1 0.0061 mntput_no_expire
1 0.0061 mod_timer
1 0.0061 mutex_trylock
1 0.0061 native_read_cr0
1 0.0061 native_write_cr0
1 0.0061 neigh_update
1 0.0061 netif_receive_skb
1 0.0061 notify_change
1 0.0061 open_exec
1 0.0061 ordered_bio_endio
1 0.0061 page_mkclean
1 0.0061 pdflush
1 0.0061 pgd_alloc
1 0.0061 pipe_iov_copy_from_user
1 0.0061 pipe_write_fasync
1 0.0061 pipe_write_release
1 0.0061 poll_freewait
1 0.0061 posix_cpu_timers_exit_group
1 0.0061 prepare_to_wait_exclusive
1 0.0061 prio_tree_insert
1 0.0061 prio_tree_remove
1 0.0061 prio_tree_replace
1 0.0061 proc_alloc_inode
1 0.0061 proc_sys_permission
1 0.0061 put_pid
1 0.0061 radix_tree_tag_set
1 0.0061 rb_prev
1 0.0061 rcu_process_callbacks
1 0.0061 recalc_bh_state
1 0.0061 release_open_intent
1 0.0061 release_task
1 0.0061 release_thread
1 0.0061 remove_suid
1 0.0061 resume_kernel
1 0.0061 run_timer_softirq
1 0.0061 rwsem_down_failed_common
1 0.0061 rwsem_wake
1 0.0061 schedule_timeout
1 0.0061 scheduler_tick
1 0.0061 search_binary_handler
1 0.0061 secure_ip_id
1 0.0061 set_bh_page
1 0.0061 set_brk
1 0.0061 sk_alloc
1 0.0061 sk_stream_mem_schedule
1 0.0061 skip_atoi
1 0.0061 sock_aio_read
1 0.0061 sock_fasync
1 0.0061 sock_from_file
1 0.0061 sock_ioctl
1 0.0061 split_vma
1 0.0061 sprintf
1 0.0061 sync_buffer
1 0.0061 sys_brk
1 0.0061 sys_fstat64
1 0.0061 sys_getcwd
1 0.0061 sys_gettimeofday
1 0.0061 sys_munmap
1 0.0061 sys_recvfrom
1 0.0061 sys_send
1 0.0061 sys_sendto
1 0.0061 tcp_ack
1 0.0061 tcp_urg
1 0.0061 try_to_del_timer_sync
1 0.0061 tty_poll
1 0.0061 uart_read_proc
1 0.0061 udp_flush_pending_frames
1 0.0061 udp_v4_get_port
1 0.0061 unlock_page
1 0.0061 vfs_fstat
1 0.0061 vfs_permission
1 0.0061 vfs_stat_fd
1 0.0061 vma_stop
1 0.0061 wake_up_inode
1 0.0061 wb_kupdate
1 0.0061 work_pending
1 0.0061 worker_thread
1 0.0061 writeback_inodes
[-- Attachment #6: one_disc.txt --]
[-- Type: text/plain, Size: 12013 bytes --]
Thu Aug 2 18:33:52 EEST 2007
+ opcontrol --vmlinux=/usr/src/linux-2.6.22-ARCH/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.017 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
1829 48.3734 processor
1029 27.2150 vmlinux
356 9.4155 libc-2.6.so
200 5.2896 bash
157 4.1523 ld-2.6.so
52 1.3753 ISO8859-1.so
31 0.8199 ext3
29 0.7670 oprofiled
CPU_CLK_UNHALT...|
samples| %|
------------------
28 96.5517 oprofiled
1 3.4483 anon (tgid:5517 range:0xb7f71000-0xb7f72000)
26 0.6876 jbd
19 0.5025 ide_core
8 0.2116 grep
7 0.1851 oprofile
5 0.1322 gawk
5 0.1322 locale-archive
4 0.1058 badblocks
3 0.0793 screen-4.0.3
3 0.0793 sshd
2 0.0529 libext2fs.so.2.4
2 0.0529 expr
2 0.0529 libcrypto.so.0.9.8
1 0.0264 ls
1 0.0264 rm
1 0.0264 tr
1 0.0264 libhistory.so.5.2
1 0.0264 libm-2.6.so
1 0.0264 libncurses.so.5.6
1 0.0264 libpthread-2.6.so
1 0.0264 libreadline.so.5.2
1 0.0264 skge
1 0.0264 id
1 0.0264 libnetsnmpmibs.so.15.0.0
1 0.0264 snmpd
+ echo
+ echo
+ echo
+ opreport -l /usr/src/linux-2.6.22-ARCH/vmlinux
CPU: PIII, speed 798.017 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
160 15.5491 do_wp_page
123 11.9534 native_safe_halt
49 4.7619 __handle_mm_fault
34 3.3042 unmap_vmas
27 2.6239 get_page_from_freelist
23 2.2352 page_fault
15 1.4577 __d_lookup
14 1.3605 put_page
13 1.2634 copy_process
12 1.1662 __link_path_walk
12 1.1662 do_page_fault
11 1.0690 page_address
9 0.8746 __copy_to_user_ll
9 0.8746 memcpy
8 0.7775 acpi_pm_read
8 0.7775 find_vma
8 0.7775 radix_tree_lookup
7 0.6803 __wake_up_bit
7 0.6803 copy_page_range
7 0.6803 find_get_page
7 0.6803 kmem_cache_free
6 0.5831 __blockdev_direct_IO
6 0.5831 copy_strings
6 0.5831 do_mmap_pgoff
6 0.5831 filemap_nopage
6 0.5831 mask_and_ack_8259A
6 0.5831 sysenter_past_esp
5 0.4859 __find_get_block
5 0.4859 error_code
5 0.4859 flush_tlb_page
5 0.4859 free_pgtables
5 0.4859 permission
5 0.4859 strnlen_user
4 0.3887 __pagevec_lru_add_active
4 0.3887 blk_backing_dev_unplug
4 0.3887 do_generic_mapping_read
4 0.3887 do_path_lookup
4 0.3887 follow_page
4 0.3887 rb_insert_color
3 0.2915 __atomic_notifier_call_chain
3 0.2915 __do_page_cache_readahead
3 0.2915 __find_get_block_slow
3 0.2915 __fput
3 0.2915 _atomic_dec_and_lock
3 0.2915 acpi_os_read_port
3 0.2915 add_timer_randomness
3 0.2915 bit_waitqueue
3 0.2915 copy_to_user
3 0.2915 delay_tsc
3 0.2915 do_exit
3 0.2915 find_next_zero_bit
3 0.2915 flush_tlb_mm
3 0.2915 generic_fillattr
3 0.2915 generic_permission
3 0.2915 generic_segment_checks
3 0.2915 getnstimeofday
3 0.2915 ktime_get_ts
3 0.2915 kunmap_atomic
3 0.2915 load_elf_binary
3 0.2915 page_remove_rmap
3 0.2915 path_walk
3 0.2915 put_files_struct
3 0.2915 rb_erase
3 0.2915 release_pages
3 0.2915 ret_from_exception
3 0.2915 schedule
2 0.1944 __blocking_notifier_call_chain
2 0.1944 __dec_zone_state
2 0.1944 __pte_alloc
2 0.1944 __vm_enough_memory
2 0.1944 _spin_lock_irqsave
2 0.1944 account_system_time
2 0.1944 anon_vma_unlink
2 0.1944 atomic_notifier_call_chain
2 0.1944 cache_reap
2 0.1944 cfq_queue_empty
2 0.1944 clockevents_program_event
2 0.1944 current_fs_time
2 0.1944 d_alloc
2 0.1944 debug_mutex_add_waiter
2 0.1944 debug_mutex_unlock
2 0.1944 disk_round_stats
2 0.1944 dnotify_flush
2 0.1944 do_notify_resume
2 0.1944 do_softirq
2 0.1944 down_read_trylock
2 0.1944 dup_fd
2 0.1944 end_that_request_last
2 0.1944 fd_install
2 0.1944 file_read_actor
2 0.1944 flush_old_exec
2 0.1944 free_block
2 0.1944 free_hot_cold_page
2 0.1944 generic_make_request
2 0.1944 get_empty_filp
2 0.1944 get_index
2 0.1944 get_next_timer_interrupt
2 0.1944 get_signal_to_deliver
2 0.1944 irq_entries_start
2 0.1944 kmap_atomic_prot
2 0.1944 kmem_cache_alloc
2 0.1944 mutex_remove_waiter
2 0.1944 page_add_file_rmap
2 0.1944 page_waitqueue
2 0.1944 percpu_counter_mod
2 0.1944 pit_next_event
2 0.1944 prepare_to_copy
2 0.1944 prio_tree_insert
2 0.1944 rcu_start_batch
2 0.1944 recalc_sigpending_tsk
2 0.1944 remove_vma
2 0.1944 resume_userspace
2 0.1944 rm_from_queue_full
2 0.1944 sched_balance_self
2 0.1944 scheduler_tick
2 0.1944 secure_ip_id
2 0.1944 sys_close
2 0.1944 sys_rt_sigprocmask
2 0.1944 task_running_tick
2 0.1944 try_to_wake_up
2 0.1944 up_read
2 0.1944 update_wall_time
2 0.1944 vfs_permission
2 0.1944 vm_normal_page
2 0.1944 vma_link
2 0.1944 vma_merge
1 0.0972 I_BDEV
1 0.0972 __add_entropy_words
1 0.0972 __brelse
1 0.0972 __copy_from_user_ll
1 0.0972 __d_path
1 0.0972 __do_softirq
1 0.0972 __elv_add_request
1 0.0972 __end_that_request_first
1 0.0972 __free_pipe_info
1 0.0972 __freed_request
1 0.0972 __inc_zone_page_state
1 0.0972 __inc_zone_state
1 0.0972 __init_rwsem
1 0.0972 __insert_inode_hash
1 0.0972 __lru_add_drain
1 0.0972 __make_request
1 0.0972 __mutex_lock_interruptible_slowpath
1 0.0972 __mutex_unlock_slowpath
1 0.0972 __pollwait
1 0.0972 __rcu_pending
1 0.0972 __sigqueue_alloc
1 0.0972 __sock_create
1 0.0972 __switch_to
1 0.0972 __tasklet_schedule
1 0.0972 __tcp_push_pending_frames
1 0.0972 __wake_up_common
1 0.0972 acpi_get_register
1 0.0972 acpi_hw_register_read
1 0.0972 acpi_os_write_port
1 0.0972 add_disk_randomness
1 0.0972 alloc_inode
1 0.0972 alloc_pid
1 0.0972 anon_vma_link
1 0.0972 anon_vma_prepare
1 0.0972 arch_setup_additional_pages
1 0.0972 bio_alloc_bioset
1 0.0972 bio_fs_destructor
1 0.0972 blk_plug_device
1 0.0972 blk_queue_bounce
1 0.0972 blk_recount_segments
1 0.0972 cache_alloc_refill
1 0.0972 call_rcu
1 0.0972 can_share_swap_page
1 0.0972 cfq_choose_req
1 0.0972 cfq_completed_request
1 0.0972 cfq_dispatch_requests
1 0.0972 cfq_init_prio_data
1 0.0972 cfq_may_queue
1 0.0972 cfq_remove_request
1 0.0972 cfq_service_tree_add
1 0.0972 cfq_set_request
1 0.0972 check_pgt_cache
1 0.0972 check_userspace
1 0.0972 cleanup_timers
1 0.0972 clear_user
1 0.0972 common_interrupt
1 0.0972 copy_from_user
1 0.0972 copy_thread_group_keys
1 0.0972 d_instantiate
1 0.0972 deactivate_task
1 0.0972 debug_mutex_free_waiter
1 0.0972 debug_mutex_lock_common
1 0.0972 del_timer
1 0.0972 dequeue_task
1 0.0972 dio_bio_add_page
1 0.0972 do_fork
1 0.0972 do_lookup
1 0.0972 do_munmap
1 0.0972 do_pipe
1 0.0972 do_sigaction
1 0.0972 do_sync_read
1 0.0972 do_timer
1 0.0972 do_wait
1 0.0972 dummy_capable
1 0.0972 dummy_file_alloc_security
1 0.0972 elf_map
1 0.0972 elv_dequeue_request
1 0.0972 elv_insert
1 0.0972 elv_merge
1 0.0972 elv_next_request
1 0.0972 exit_itimers
1 0.0972 expand_files
1 0.0972 fasync_helper
1 0.0972 fget_light
1 0.0972 filp_close
1 0.0972 find_mergeable_anon_vma
1 0.0972 find_vma_prev
1 0.0972 finish_wait
1 0.0972 fput
1 0.0972 free_page_and_swap_cache
1 0.0972 generic_file_aio_read
1 0.0972 generic_file_open
1 0.0972 generic_pipe_buf_pin
1 0.0972 generic_unplug_device
1 0.0972 get_nr_files
1 0.0972 get_user_pages
1 0.0972 getname
1 0.0972 half_md4_transform
1 0.0972 hrtimer_get_next_event
1 0.0972 hrtimer_interrupt
1 0.0972 hrtimer_start
1 0.0972 hrtimer_try_to_cancel
1 0.0972 idle_cpu
1 0.0972 init_new_context
1 0.0972 inotify_d_instantiate
1 0.0972 inotify_dentry_parent_queue_event
1 0.0972 inotify_inode_queue_event
1 0.0972 internal_add_timer
1 0.0972 ip_push_pending_frames
1 0.0972 ip_route_output_flow
1 0.0972 kernel_read
1 0.0972 kmem_cache_zalloc
1 0.0972 kref_put
1 0.0972 load_script
1 0.0972 lru_add_drain
1 0.0972 mark_page_accessed
1 0.0972 may_expand_vm
1 0.0972 may_open
1 0.0972 mempool_alloc
1 0.0972 mm_release
1 0.0972 mmput
1 0.0972 native_flush_tlb_single
1 0.0972 native_set_pte_at
1 0.0972 next_signal
1 0.0972 normal_poll
1 0.0972 nr_iowait
1 0.0972 open_exec
1 0.0972 page_add_new_anon_rmap
1 0.0972 pid_revalidate
1 0.0972 pipe_read
1 0.0972 posix_cpu_timers_exit
1 0.0972 prio_tree_replace
1 0.0972 proc_flush_task
1 0.0972 proc_sys_lookup_table_one
1 0.0972 quicklist_trim
1 0.0972 raise_softirq
1 0.0972 rb_first
1 0.0972 rb_next
1 0.0972 recalc_task_prio
1 0.0972 release_open_intent
1 0.0972 release_vm86_irqs
1 0.0972 resched_task
1 0.0972 restore_all
1 0.0972 rq_init
1 0.0972 run_posix_cpu_timers
1 0.0972 run_timer_softirq
1 0.0972 save_i387
1 0.0972 send_signal
1 0.0972 simple_read_from_buffer
1 0.0972 split_vma
1 0.0972 sys_brk
1 0.0972 sys_clone
1 0.0972 sys_dup2
1 0.0972 sys_lseek
1 0.0972 sys_mkdir
1 0.0972 sys_rt_sigaction
1 0.0972 sys_select
1 0.0972 syscall_exit_work
1 0.0972 system_call
1 0.0972 tick_do_broadcast
1 0.0972 tick_nohz_stop_sched_tick
1 0.0972 tick_sched_timer
1 0.0972 tty_ldisc_try
1 0.0972 tty_poll
1 0.0972 unlock_buffer
1 0.0972 update_process_times
1 0.0972 vfs_getattr
1 0.0972 vfs_mkdir
1 0.0972 vfs_read
1 0.0972 vfs_write
1 0.0972 vma_adjust
1 0.0972 vma_prio_tree_add
1 0.0972 vma_prio_tree_insert
1 0.0972 vsnprintf
1 0.0972 wake_up_bit
[-- Attachment #7: idle.txt --]
[-- Type: text/plain, Size: 9966 bytes --]
Thu Aug 2 18:31:43 EEST 2007
+ opcontrol --vmlinux=/usr/src/linux-2.6.22-ARCH/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.017 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
692 46.9153 vmlinux
315 21.3559 libc-2.6.so
163 11.0508 bash
CPU_CLK_UNHALT...|
samples| %|
------------------
162 99.3865 bash
1 0.6135 anon (tgid:5385 range:0xb7fb9000-0xb7fba000)
126 8.5424 ld-2.6.so
51 3.4576 ISO8859-1.so
41 2.7797 ext3
21 1.4237 jbd
16 1.0847 oprofiled
8 0.5424 skge
7 0.4746 processor
5 0.3390 grep
4 0.2712 oprofile
3 0.2034 gawk
3 0.2034 libcrypto.so.0.9.8
3 0.2034 libnetsnmpmibs.so.15.0.0
3 0.2034 imap-login
2 0.1356 ide_core
2 0.1356 libncurses.so.5.6
2 0.1356 libnetsnmp.so.15.0.0
2 0.1356 locale-archive
1 0.0678 tr
1 0.0678 ide_disk
1 0.0678 libpthread-2.6.so
1 0.0678 screen-4.0.3
1 0.0678 dovecot-auth
1 0.0678 dovecot
CPU_CLK_UNHALT...|
samples| %|
------------------
1 100.000 anon (tgid:3919 range:0xb7f01000-0xb7f02000)
+ echo
+ echo
+ echo
+ opreport -l /usr/src/linux-2.6.22-ARCH/vmlinux
CPU: PIII, speed 798.017 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
114 16.4740 do_wp_page
36 5.2023 __handle_mm_fault
26 3.7572 get_page_from_freelist
20 2.8902 page_fault
15 2.1676 unmap_vmas
13 1.8786 __link_path_walk
11 1.5896 copy_process
11 1.5896 filemap_nopage
10 1.4451 delay_tsc
9 1.3006 __d_lookup
9 1.3006 find_get_page
8 1.1561 __mutex_lock_slowpath
8 1.1561 do_page_fault
7 1.0116 mask_and_ack_8259A
7 1.0116 page_address
6 0.8671 acpi_pm_read
6 0.8671 error_code
6 0.8671 find_vma
6 0.8671 kmem_cache_alloc
6 0.8671 kmem_cache_free
6 0.8671 pit_next_event
6 0.8671 strnlen_user
5 0.7225 __copy_to_user_ll
5 0.7225 __wake_up_bit
5 0.7225 native_flush_tlb_single
5 0.7225 put_page
4 0.5780 __find_get_block
4 0.5780 copy_page_range
4 0.5780 inode_init_once
4 0.5780 kunmap_atomic
4 0.5780 memcpy
4 0.5780 permission
4 0.5780 radix_tree_lookup
4 0.5780 restore_nocheck
4 0.5780 resume_userspace
4 0.5780 sysenter_past_esp
4 0.5780 vma_adjust
3 0.4335 __atomic_notifier_call_chain
3 0.4335 __do_page_cache_readahead
3 0.4335 add_to_page_cache
3 0.4335 clockevents_program_event
3 0.4335 current_fs_time
3 0.4335 do_mmap_pgoff
3 0.4335 do_sigaction
3 0.4335 enable_8259A_irq
3 0.4335 free_hot_cold_page
3 0.4335 get_unused_fd
3 0.4335 getnstimeofday
3 0.4335 ktime_get_ts
3 0.4335 notifier_call_chain
3 0.4335 prio_tree_remove
3 0.4335 proc_lookup
3 0.4335 ret_from_exception
3 0.4335 schedule
3 0.4335 scheduler_tick
2 0.2890 __alloc_pages
2 0.2890 __copy_user_intel
2 0.2890 __inc_zone_state
2 0.2890 __mutex_unlock_slowpath
2 0.2890 __pagevec_lru_add_active
2 0.2890 acpi_os_read_port
2 0.2890 arch_setup_additional_pages
2 0.2890 atomic_notifier_call_chain
2 0.2890 cache_reap
2 0.2890 clear_user
2 0.2890 copy_thread
2 0.2890 cp_new_stat64
2 0.2890 debug_mutex_unlock
2 0.2890 destroy_context
2 0.2890 do_generic_mapping_read
2 0.2890 do_notify_parent
2 0.2890 do_notify_resume
2 0.2890 dup_fd
2 0.2890 find_next_zero_bit
2 0.2890 fput
2 0.2890 free_pgd_range
2 0.2890 generic_permission
2 0.2890 get_index
2 0.2890 get_next_timer_interrupt
2 0.2890 getname
2 0.2890 hrtimer_get_next_event
2 0.2890 kmap_atomic_prot
2 0.2890 mark_page_accessed
2 0.2890 mm_release
2 0.2890 proc_sys_lookup_table_one
2 0.2890 quicklist_trim
2 0.2890 rb_insert_color
2 0.2890 remove_vma
2 0.2890 rw_verify_area
2 0.2890 sys_mmap2
2 0.2890 sys_rt_sigprocmask
2 0.2890 update_wall_time
2 0.2890 vm_acct_memory
2 0.2890 vm_normal_page
2 0.2890 vm_stat_account
1 0.1445 __blocking_notifier_call_chain
1 0.1445 __const_udelay
1 0.1445 __dec_zone_page_state
1 0.1445 __fput
1 0.1445 __free_pages_ok
1 0.1445 __get_free_pages
1 0.1445 __get_user_4
1 0.1445 __kmalloc
1 0.1445 __mod_timer
1 0.1445 __put_user_4
1 0.1445 __qdisc_run
1 0.1445 __rcu_process_callbacks
1 0.1445 __remove_shared_vm_struct
1 0.1445 __sigqueue_alloc
1 0.1445 __switch_to
1 0.1445 _atomic_dec_and_lock
1 0.1445 _spin_lock_irqsave
1 0.1445 account_user_time
1 0.1445 acpi_hw_register_read
1 0.1445 anon_vma_prepare
1 0.1445 balance_dirty_pages_ratelimited_nr
1 0.1445 block_read_full_page
1 0.1445 blockable_page_cache_readahead
1 0.1445 can_share_swap_page
1 0.1445 cfq_remove_request
1 0.1445 cfq_service_tree_add
1 0.1445 check_tty_count
1 0.1445 clear_inode
1 0.1445 clocksource_watchdog
1 0.1445 compute_creds
1 0.1445 copy_from_user
1 0.1445 copy_thread_group_keys
1 0.1445 copy_to_user
1 0.1445 cpu_idle
1 0.1445 create_read_pipe
1 0.1445 d_alloc
1 0.1445 d_lookup
1 0.1445 debug_mutex_lock_common
1 0.1445 debug_mutex_set_owner
1 0.1445 default_llseek
1 0.1445 dequeue_task
1 0.1445 dev_hard_start_xmit
1 0.1445 dnotify_flush
1 0.1445 do_brk
1 0.1445 do_lookup
1 0.1445 do_path_lookup
1 0.1445 do_wait
1 0.1445 down_read
1 0.1445 dput
1 0.1445 dummy_file_permission
1 0.1445 dummy_inode_permission
1 0.1445 dummy_task_create
1 0.1445 enqueue_hrtimer
1 0.1445 enqueue_task
1 0.1445 exit_aio
1 0.1445 exit_itimers
1 0.1445 exit_mm
1 0.1445 expand_files
1 0.1445 filp_close
1 0.1445 find_busiest_group
1 0.1445 find_mergeable_anon_vma
1 0.1445 find_next_bit
1 0.1445 find_vma_prev
1 0.1445 flush_old_exec
1 0.1445 flush_tlb_mm
1 0.1445 free_page_and_swap_cache
1 0.1445 free_pgtables
1 0.1445 generic_file_mmap
1 0.1445 generic_file_open
1 0.1445 generic_make_request
1 0.1445 get_nr_files
1 0.1445 get_pid_task
1 0.1445 get_unmapped_area
1 0.1445 get_write_access
1 0.1445 half_md4_transform
1 0.1445 hrtimer_reprogram
1 0.1445 hweight32
1 0.1445 in_lock_functions
1 0.1445 init_new_context
1 0.1445 ip_push_pending_frames
1 0.1445 irq_entries_start
1 0.1445 kmem_cache_zalloc
1 0.1445 ktime_divns
1 0.1445 link_path_walk
1 0.1445 load_elf_binary
1 0.1445 lock_hrtimer_base
1 0.1445 locks_remove_flock
1 0.1445 lru_cache_add_active
1 0.1445 math_state_restore
1 0.1445 may_open
1 0.1445 memory_open
1 0.1445 mm_alloc
1 0.1445 mutex_remove_waiter
1 0.1445 name_to_int
1 0.1445 native_io_delay
1 0.1445 neigh_periodic_timer
1 0.1445 net_rx_action
1 0.1445 normal_poll
1 0.1445 open_namei
1 0.1445 page_add_file_rmap
1 0.1445 page_remove_rmap
1 0.1445 page_waitqueue
1 0.1445 path_release
1 0.1445 pgd_alloc
1 0.1445 pipe_write
1 0.1445 poll_freewait
1 0.1445 proc_flush_task
1 0.1445 process_timeout
1 0.1445 profile_tick
1 0.1445 radix_tree_insert
1 0.1445 raise_softirq
1 0.1445 rb_erase
1 0.1445 release_task
1 0.1445 restore_all
1 0.1445 run_timer_softirq
1 0.1445 send_signal
1 0.1445 seq_printf
1 0.1445 set_task_comm
1 0.1445 sha_transform
1 0.1445 show_map_internal
1 0.1445 sig_ignored
1 0.1445 sk_common_release
1 0.1445 sock_wfree
1 0.1445 sys_brk
1 0.1445 sys_close
1 0.1445 sys_mkdirat
1 0.1445 sys_read
1 0.1445 sys_set_thread_area
1 0.1445 sys_sigreturn
1 0.1445 tcp_poll
1 0.1445 tcp_sendmsg
1 0.1445 tcp_v4_rcv
1 0.1445 tick_do_update_jiffies64
1 0.1445 tick_nohz_update_jiffies
1 0.1445 tick_sched_timer
1 0.1445 try_to_wake_up
1 0.1445 tty_paranoia_check
1 0.1445 tty_write
1 0.1445 unlink_file_vma
1 0.1445 unmap_region
1 0.1445 vfs_fstat
1 0.1445 vfs_llseek
1 0.1445 vfs_mkdir
1 0.1445 vfs_write
1 0.1445 vma_link
1 0.1445 vsnprintf
1 0.1445 wake_up_bit
1 0.1445 write_chan
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-03 16:03 high system cpu load during intense disk i/o Dimitrios Apostolou
@ 2007-08-05 16:03 ` Dimitrios Apostolou
2007-08-05 17:58 ` Rafał Bilski
2007-08-06 1:28 ` Andrew Morton
2007-08-07 14:50 ` Dimitrios Apostolou
1 sibling, 2 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-05 16:03 UTC (permalink / raw)
To: linux-kernel
Hello again,
Was my report so complicated? Perhaps I shouldn't have included so many
oprofile outputs. Anyway, if anyone wants to have a look, the most important
one is the two_discs_bad.txt oprofile output, attached to my original message.
The problem is 100% reproducible for me, so I would appreciate hearing from
anyone who has had similar experiences.
Thanks,
Dimitris
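For anyone trying to reproduce this, the cron-job pile-up described in the report can be watched directly in the process table. The following one-liner is a hypothetical helper (not one of the attached scripts); during a bad run the accumulating cron jobs should dominate its output:

```shell
#!/bin/sh
# Show the ten most numerous command names in the process table.
# During the problematic badblocks runs, repeated cron jobs should
# climb to the top of this list as more copies accumulate.
ps -eo comm= | sort | uniq -c | sort -rn | head -n 10
```

Running it every minute or so (e.g. under `watch`) makes the growth easy to see alongside vmstat's system/user split.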
On Friday 03 August 2007 19:03:09 Dimitrios Apostolou wrote:
> Hello list,
>
> I have a P3, 256MB RAM system with 3 IDE disks attached, 2 identical
> ones as hda and hdc (primary and secondary master), and the disc with
> the OS partitions as primary slave hdb. For more info please refer to
> the attached dmesg.txt. I attach several oprofile outputs that describe
> various circumstances referenced later. The script I used to get them is
> the attached script.sh.
>
> The problem was encountered when I started two processes doing heavy I/O
> on hda and hdc, "badblocks -v -w /dev/hda" and "badblocks -v -w
> /dev/hdc". At the beginning (two_discs.txt) everything was fine and
> vmstat reported more than 90% iowait CPU load. However, after a while
> (two_discs_bad.txt) that some cron jobs kicked in, the image changed
> completely: the cpu load was now about 60% system, and the rest was user
> cpu load possibly going to the simple cron jobs.
>
> Even though under normal circumstances (for example when running
> badblocks on only one disc (one_disc.txt)) the cron jobs finish almost
> instantaneously, this time they were simply never ending and every 10
> minutes or so more and more jobs were being added to the process table.
> One day later, the vmstat still reports about 60/40 system/user cpu load,
> all processes still run (hundreds of them), and the load average is huge!
>
> Another day later the OOM killer kicks in and kills various processes,
> however never touches any badblocks process. Indeed, manually suspending
> one badblocks process remedies the situation: within some seconds the
> process table is cleared from cron jobs, cpu usage is back to 2-3% user
> and ~90% iowait and the system is normally responsive again. This
> happens no matter which badblocks process I suspend, being hda or hdc.
>
> Any ideas about what could be wrong? I should note that the kernel is my
> distro's default. As the problem seems to be scheduler specific I didn't
> bother to compile a vanilla kernel, since the applied patches seem
> irrelevant:
>
> http://archlinux.org/packages/4197/
> http://cvs.archlinux.org/cgi-bin/viewcvs.cgi/kernels/kernel26/?cvsroot=Curr
>ent&only_with_tag=CURRENT
>
>
> Thanks in advance,
> Dimitris
>
>
> P.S.1: Please CC me directly as I'm not subscribed
>
> P.S.2: Keep in mind that the problematic oprofile outputs probably refer
> to a much longer time than 5 sec, since due to the high load the script
> was taking long to complete.
>
> P.S.3: I couldn't find anywhere in kernel documentation that setting
> nmi_watchdog=0 is necessary for oprofile to work correctly. However,
> Documentation/nmi_watchdog.txt mentions that oprofile should disable the
> nmi_watchdog automatically, which doesn't happen with the latest kernel.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-05 16:03 ` Dimitrios Apostolou
@ 2007-08-05 17:58 ` Rafał Bilski
2007-08-05 18:42 ` Dimitrios Apostolou
2007-08-06 1:28 ` Andrew Morton
1 sibling, 1 reply; 26+ messages in thread
From: Rafał Bilski @ 2007-08-05 17:58 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: linux-kernel
> Hello again,
Hi!
> was my report so complicated? Perhaps I shouldn't have included so many
> oprofile outputs. Anyway, if anyone wants to have a look, the most important
> is the two_discs_bad.txt oprofile output, attached to my original message.
> The problem is 100% reproducible for me, so I would appreciate hearing
> whether anyone has had similar experiences.
Probably nobody replied to your message because people on this list think
that your problem isn't kernel related. At the moment I'm using "Arch Linux"
too, so I checked the /etc/cron directory. The simple jobs you are talking
about are not so simple:
- update the "locate" database,
- update the "whatis" database.
Both jobs scan the "/" partition. I don't know how dcron works, but I can
imagine a situation in which it polls cron.daily, says "hey, it wasn't
done today yet", and starts the same jobs over and over again. More
and more tasks scan the "/" partition, and as a result access gets slower and
slower.
>
> Thanks,
> Dimitris
Let me know if I'm wrong
Rafał
* Re: high system cpu load during intense disk i/o
2007-08-05 17:58 ` Rafał Bilski
@ 2007-08-05 18:42 ` Dimitrios Apostolou
2007-08-05 20:08 ` Rafał Bilski
2007-08-06 16:14 ` Rafał Bilski
0 siblings, 2 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-05 18:42 UTC (permalink / raw)
To: Rafał Bilski; +Cc: linux-kernel
On Sunday 05 August 2007 20:58:15 Rafał Bilski wrote:
> > Hello again,
>
> Hi!
>
> > was my report so complicated? Perhaps I shouldn't have included so many
> > oprofile outputs. Anyway, if anyone wants to have a look, the most
> > important is the two_discs_bad.txt oprofile output, attached to my
> > original message. The problem is 100% reproducible for me, so I would
> > appreciate hearing whether anyone has had similar experiences.
>
> Probably nobody replied to your message because people on this list think
> that your problem isn't kernel related. At the moment I'm using "Arch
> Linux" too, so I checked the /etc/cron directory. The simple jobs you are
> talking about are not so simple:
> - update the "locate" database,
> - update the "whatis" database.
> Both jobs scan the "/" partition. I don't know how dcron works, but I
> can imagine a situation in which it polls cron.daily, says "hey, it
> wasn't done today yet", and starts the same jobs over and over again.
> More and more tasks scan the "/" partition, and as a result access gets
> slower and slower.
Hello and thanks for your reply.
The cron job that is running every 10 min on my system is mpop (a
fetchmail-like program) and another running every 5 min is mrtg. Both
normally finish within 1-2 seconds.
The fact that these simple cron jobs never finish is certainly because of
the high system CPU load. If you look at the two_discs_bad.txt which I attached
to my original message, you'll see that *vmlinux*, and specifically the
*scheduler*, take up most of the time.
And the fact that this happens only when running two I/O processes, while
with only one everything is absolutely snappy (not at all slow, see
one_disc.txt), makes me sure that this is a kernel bug. I'd be happy to help
but I need some guidance to pinpoint the problem.
Thanks,
Dimitris
* Re: high system cpu load during intense disk i/o
2007-08-05 18:42 ` Dimitrios Apostolou
@ 2007-08-05 20:08 ` Rafał Bilski
2007-08-06 16:14 ` Rafał Bilski
1 sibling, 0 replies; 26+ messages in thread
From: Rafał Bilski @ 2007-08-05 20:08 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: linux-kernel
> Hello and thanks for your reply.
Hi,
> The cron job that is running every 10 min on my system is mpop (a
> fetchmail-like program) and another running every 5 min is mrtg. Both
> normally finish within 1-2 seconds.
>
> The fact that these simple cron jobs never finish is certainly because of
> the high system CPU load. If you look at the two_discs_bad.txt which I
> attached to my original message, you'll see that *vmlinux*, and
> specifically the *scheduler*, take up most of the time.
>
> And the fact that this happens only when running two I/O processes, while
> with only one everything is absolutely snappy (not at all slow, see
> one_disc.txt), makes me sure that this is a kernel bug. I'd be happy to
> help but I need some guidance to pinpoint the problem.
OK, but first can you try to fix your cron daemon? Just make sure that if mpop
is already running it won't be started again. Maybe something like "pgrep mpop"
followed by a check of "$?".
I don't remember exactly, but some time ago somebody had a problem with too
large disk buffers and sync(). Check the LKML archives. mpop does fsync().
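A guard along the lines Rafał suggests could also be done with a non-blocking file lock; a minimal sketch (the lock path and the messages are illustrative, not from the thread):

```python
# Hypothetical cron wrapper: take a non-blocking lock so overlapping
# runs of the same job cannot pile up in the process table.
import fcntl
import sys

LOCKFILE = "/tmp/mpop.cron.lock"  # assumed path, adjust to taste

lock = open(LOCKFILE, "w")
try:
    # LOCK_NB makes flock() fail immediately if another run holds the lock
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    print("previous run still active, skipping", file=sys.stderr)
    sys.exit(0)

print("starting mpop")  # stand-in for launching the real job
```

The lock is released automatically when the process exits, so a crashed job never wedges future runs the way a stale pidfile can.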
You have a VIA chipset. Me too. It isn't very reliable. Don't you have
something like "error { d0 BUSY }" in dmesg? That would explain the high CPU
load: DMA simply isn't used after such an error and the disk falls back to
PIO mode. On a two-disk system the load is about 4.0 in this case, and a
simple program takes hours to complete if there is heavy I/O in progress.
By the way, SLUB seems to behave better in this situation (at least up to 8.0).
> Thanks,
> Dimitris
Regards
Rafał
* Re: high system cpu load during intense disk i/o
2007-08-05 16:03 ` Dimitrios Apostolou
2007-08-05 17:58 ` Rafał Bilski
@ 2007-08-06 1:28 ` Andrew Morton
2007-08-06 14:20 ` Dimitrios Apostolou
2007-08-06 16:09 ` Dimitrios Apostolou
1 sibling, 2 replies; 26+ messages in thread
From: Andrew Morton @ 2007-08-06 1:28 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: linux-kernel
On Sun, 5 Aug 2007 19:03:12 +0300 Dimitrios Apostolou <jimis@gmx.net> wrote:
> was my report so complicated?
We're bad.
Seems that your context switch rate when running two instances of
badblocks against two different disks went batshit insane. It doesn't
happen here.
Please capture the `vmstat 1' output while running the problematic
workload.
The oom-killing could have been unrelated to the CPU load problem. iirc
badblocks uses a lot of memory, so it might have been genuine. Keep an eye
on the /proc/meminfo output and send the kernel dmesg output from the
oom-killing event.
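The context-switch rate Andrew asks about can also be sampled without vmstat, straight from the counter vmstat's "cs" column is derived from; a minimal sketch (reads /proc/stat, so Linux only):

```python
# Sketch: measure the system-wide context-switch rate from /proc/stat.
import time

def context_switches():
    """Return the cumulative context-switch count since boot."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

before = context_switches()
time.sleep(1)
after = context_switches()
print("context switches/sec:", after - before)
```

Running this alongside the dual-badblocks workload gives the same number as the "cs" column in the vmstat attachments below.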
* Re: high system cpu load during intense disk i/o
2007-08-06 1:28 ` Andrew Morton
@ 2007-08-06 14:20 ` Dimitrios Apostolou
2007-08-06 17:33 ` Andrew Morton
2007-08-06 16:09 ` Dimitrios Apostolou
1 sibling, 1 reply; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-06 14:20 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 1469 bytes --]
Hello Andrew, thanks for your reply!
Andrew Morton wrote:
> On Sun, 5 Aug 2007 19:03:12 +0300 Dimitrios Apostolou <jimis@gmx.net> wrote:
>
>> was my report so complicated?
>
> We're bad.
>
> Seems that your context switch rate when running two instances of
> badblocks against two different disks went batshit insane. It doesn't
> happen here.
>
> Please capture the `vmstat 1' output while running the problematic
> workload.
>
> The oom-killing could have been unrelated to the CPU load problem. iirc
> badblocks uses a lot of memory, so it might have been genuine. Keep an eye
> on the /proc/meminfo output and send the kernel dmesg output from the
> oom-killing event.
Please see the attached files. Unfortunately I don't see any useful info
in them:
*_before: before running any badblocks process
*_while: while running the badblocks processes, but without any cron job
having kicked in
*_bad: 5 minutes later, after some cron jobs had kicked in
About the OOM killer, indeed I believe that it is unrelated. It started
killing after about 2 days, when hundreds of processes were stuck as
running and taking up memory, so I suppose the 256 MB RAM were truly
filled. I just mentioned it because its behaviour is completely
unhelpful. It doesn't touch the badblocks processes, it rarely touches
the stuck cron jobs, but it kills other irrelevant processes.
If you still want the killing logs, tell me and I'll search for them.
Thanks,
Dimitris
[-- Attachment #2: meminfo_bad.txt --]
[-- Type: text/plain, Size: 728 bytes --]
MemTotal: 255912 kB
MemFree: 22928 kB
Buffers: 123420 kB
Cached: 69168 kB
SwapCached: 0 kB
Active: 118440 kB
Inactive: 86228 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 255912 kB
LowFree: 22928 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 76 kB
Writeback: 0 kB
AnonPages: 12088 kB
Mapped: 7608 kB
Slab: 23792 kB
SReclaimable: 18832 kB
SUnreclaim: 4960 kB
PageTables: 508 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 127956 kB
Committed_AS: 24928 kB
VmallocTotal: 770040 kB
VmallocUsed: 2852 kB
VmallocChunk: 766864 kB
[-- Attachment #3: meminfo_before.txt --]
[-- Type: text/plain, Size: 728 bytes --]
MemTotal: 255912 kB
MemFree: 26348 kB
Buffers: 123156 kB
Cached: 68412 kB
SwapCached: 0 kB
Active: 115788 kB
Inactive: 85484 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 255912 kB
LowFree: 26348 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 436 kB
Writeback: 0 kB
AnonPages: 9704 kB
Mapped: 5748 kB
Slab: 23680 kB
SReclaimable: 18712 kB
SUnreclaim: 4968 kB
PageTables: 468 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 127956 kB
Committed_AS: 21260 kB
VmallocTotal: 770040 kB
VmallocUsed: 2852 kB
VmallocChunk: 766864 kB
[-- Attachment #4: meminfo_while.txt --]
[-- Type: text/plain, Size: 728 bytes --]
MemTotal: 255912 kB
MemFree: 25428 kB
Buffers: 123280 kB
Cached: 69088 kB
SwapCached: 0 kB
Active: 116216 kB
Inactive: 86068 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 255912 kB
LowFree: 25428 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 40 kB
Writeback: 0 kB
AnonPages: 9952 kB
Mapped: 5796 kB
Slab: 23708 kB
SReclaimable: 18764 kB
SUnreclaim: 4944 kB
PageTables: 480 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 127956 kB
Committed_AS: 22060 kB
VmallocTotal: 770040 kB
VmallocUsed: 2852 kB
VmallocChunk: 766864 kB
[-- Attachment #5: vmstat_bad.txt --]
[-- Type: text/plain, Size: 936 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
4 1 0 22688 123432 69172 0 0 7 78 45 21 3 0 96 1
4 2 0 22680 123432 69180 0 0 0 15872 249 461 34 66 0 0
4 2 0 22680 123432 69180 0 0 0 15872 247 468 37 63 0 0
4 2 0 22680 123432 69180 0 0 0 15872 251 472 38 62 0 0
4 2 0 22680 123432 69180 0 0 0 16000 252 495 43 57 0 0
4 2 0 22680 123432 69180 0 0 0 15872 252 471 32 68 0 0
3 2 0 22680 123440 69180 0 0 0 15984 251 516 73 27 0 0
3 1 0 22680 123440 69180 0 0 0 15872 250 482 33 67 0 0
4 2 0 22620 123440 69180 0 0 0 15872 251 467 30 70 0 0
4 2 0 22620 123440 69180 0 0 0 15872 250 460 45 55 0 0
[-- Attachment #6: vmstat_before.txt --]
[-- Type: text/plain, Size: 944 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 26332 123196 68480 0 0 7 17 44 19 3 0 96 0
0 0 0 26324 123196 68484 0 0 0 0 45 16 0 0 100 0
0 0 0 26324 123196 68484 0 0 0 0 32 17 0 0 100 0
0 0 0 26324 123196 68484 0 0 0 0 13 14 0 0 100 0
0 0 0 26324 123196 68484 0 0 0 0 29 13 0 1 99 0
0 0 0 26324 123196 68484 0 0 0 0 25 16 0 0 100 0
0 0 0 26324 123204 68484 0 0 0 56 42 26 0 0 100 0
0 0 0 26324 123204 68484 0 0 0 0 29 16 0 0 100 0
0 0 0 26324 123204 68484 0 0 0 0 27 13 0 0 100 0
0 0 0 26324 123204 68484 0 0 0 0 13 14 0 0 100 0
[-- Attachment #7: vmstat_while.txt --]
[-- Type: text/plain, Size: 936 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 2 0 25428 123288 69092 0 0 7 27 44 19 3 0 96 0
2 2 0 25420 123288 69096 0 0 0 15744 273 421 0 7 0 93
2 2 0 25420 123288 69096 0 0 0 15872 276 429 0 4 0 96
2 2 0 25420 123288 69096 0 0 0 15872 273 394 0 2 0 98
2 2 0 25420 123288 69096 0 0 0 15872 277 430 0 2 0 98
1 2 0 25420 123288 69096 0 0 0 16000 273 496 2 10 0 88
2 2 0 25360 123292 69096 0 0 0 15996 288 508 0 4 0 96
2 2 0 25360 123292 69096 0 0 0 16000 283 487 0 3 0 97
2 2 0 25360 123292 69096 0 0 0 15872 279 452 0 2 0 98
2 2 0 25360 123292 69096 0 0 0 15872 283 442 0 2 0 98
* Re: high system cpu load during intense disk i/o
2007-08-06 1:28 ` Andrew Morton
2007-08-06 14:20 ` Dimitrios Apostolou
@ 2007-08-06 16:09 ` Dimitrios Apostolou
1 sibling, 0 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-06 16:09 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 1752 bytes --]
Andrew Morton wrote:
> On Sun, 5 Aug 2007 19:03:12 +0300 Dimitrios Apostolou <jimis@gmx.net> wrote:
>
>> was my report so complicated?
>
> We're bad.
>
> Seems that your context switch rate when running two instances of
> badblocks against two different disks went batshit insane. It doesn't
> happen here.
Hello again,
I ran some more tests and figured out that the problem occurs only when
the I/O is writing to disk. Indeed, when I run two badblocks without the
-w switch, read-only that is, the oprofile output seems normal
(two_discs_read.txt). So does the vmstat output:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 4  2      0  24136  87124  95660    0    0 28288     0  449  724 92  8  0  0
 4  2      0  24076  87136  95648    0    0 28160    12  446  749 91  9  0  0
 4  2      0  24016  87144  95664    0    0 28096    88  444  790 89 11  0  0
 4  2      0  24016  87144  95664    0    0 28288     0  444  705 88 12  0  0
 4  2      0  24016  87144  95660    0    0 28288     0  448  737 95  5  0  0
As you can see the context-switching rate is greater now, but the system
CPU load is much less than that of two_discs_bad.txt.
However the cron jobs still seem to have a hard time finishing, even
though they now seem to consume about 90% CPU time. Could someone please
explain a few things that seem vital to understanding the situation?
Firstly, what is that "processor" line in the oprofile output without
symbols? And why does *it* take all the CPU and not other important
processes? Finally, what do the kernel symbols "__switch_to" and
"schedule" represent?
Thanks in advance,
Dimitris
[-- Attachment #2: two_discs_read.txt --]
[-- Type: text/plain, Size: 12260 bytes --]
Mon Aug 6 17:32:15 EEST 2007
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
Stopping profiling.
Killing daemon.
CPU: PIII, speed 798.031 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
1893 43.9415 processor
1409 32.7066 vmlinux
346 8.0316 libc-2.6.so
217 5.0371 bash
CPU_CLK_UNHALT...|
samples| %|
------------------
216 99.5392 bash
1 0.4608 anon (tgid:29407 range:0xb7f90000-0xb7f91000)
141 3.2730 ld-2.6.so
105 2.4373 ide_core
43 0.9981 ext3
42 0.9749 ISO8859-1.so
24 0.5571 jbd
23 0.5339 oprofiled
CPU_CLK_UNHALT...|
samples| %|
------------------
22 95.6522 oprofiled
1 4.3478 anon (tgid:29402 range:0xb7f1d000-0xb7f1e000)
18 0.4178 oprofile
9 0.2089 reiserfs
6 0.1393 libcrypto.so.0.9.8
5 0.1161 grep
4 0.0929 locale-archive
3 0.0696 gawk
3 0.0696 badblocks
3 0.0696 screen-4.0.3
3 0.0696 imap-login
CPU_CLK_UNHALT...|
samples| %|
------------------
1 33.3333 imap-login
1 33.3333 anon (tgid:28108 range:0xb7eea000-0xb7eeb000)
1 33.3333 anon (tgid:3944 range:0xb7fc8000-0xb7fc9000)
2 0.0464 libpthread-2.6.so
1 0.0232 less
1 0.0232 ls
1 0.0232 rm
1 0.0232 sleep
1 0.0232 libext2fs.so.2.4
1 0.0232 libncurses.so.5.6
1 0.0232 init
1 0.0232 sshd
1 0.0232 syslog-ng
CPU: PIII, speed 798.031 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
162 11.4975 do_wp_page
147 10.4329 native_safe_halt
49 3.4776 __blockdev_direct_IO
45 3.1938 __handle_mm_fault
34 2.4131 put_page
33 2.3421 get_page_from_freelist
27 1.9163 page_fault
24 1.7033 unmap_vmas
22 1.5614 __bio_add_page
20 1.4194 acpi_pm_read
20 1.4194 follow_page
19 1.3485 copy_process
19 1.3485 find_get_page
18 1.2775 __link_path_walk
17 1.2065 irq_entries_start
16 1.1356 delay_tsc
15 1.0646 __d_lookup
15 1.0646 mask_and_ack_8259A
14 0.9936 copy_page_range
14 0.9936 kmem_cache_alloc
14 0.9936 page_address
13 0.9226 dio_send_cur_page
13 0.9226 find_vma
13 0.9226 strnlen_user
12 0.8517 do_page_fault
12 0.8517 radix_tree_lookup
12 0.8517 submit_page_section
11 0.7807 acpi_os_read_port
11 0.7807 schedule
10 0.7097 __copy_to_user_ll
10 0.7097 __find_get_block
10 0.7097 dio_get_page
10 0.7097 do_mmap_pgoff
10 0.7097 native_io_delay
9 0.6388 blk_recount_segments
9 0.6388 filemap_nopage
8 0.5678 bio_add_page
8 0.5678 dio_bio_add_page
8 0.5678 load_elf_binary
8 0.5678 memcpy
7 0.4968 pit_next_event
6 0.4258 acpi_os_write_port
6 0.4258 bio_alloc_bioset
6 0.4258 free_pgtables
5 0.3549 kmem_cache_free
5 0.3549 mark_page_accessed
5 0.3549 restore_nocheck
5 0.3549 strncpy_from_user
5 0.3549 sysenter_past_esp
4 0.2839 __copy_from_user_ll
4 0.2839 __inc_zone_state
4 0.2839 __mutex_lock_slowpath
4 0.2839 _spin_lock_irqsave
4 0.2839 blk_rq_map_sg
4 0.2839 current_fs_time
4 0.2839 end_that_request_last
4 0.2839 error_code
4 0.2839 generic_make_request
4 0.2839 generic_permission
4 0.2839 get_user_pages
4 0.2839 getnstimeofday
4 0.2839 hweight32
4 0.2839 max_block
4 0.2839 sched_clock
4 0.2839 try_to_wake_up
4 0.2839 vfs_read
4 0.2839 vma_merge
3 0.2129 __alloc_pages
3 0.2129 __find_get_block_slow
3 0.2129 __fput
3 0.2129 bio_init
3 0.2129 cache_alloc_refill
3 0.2129 cfq_insert_request
3 0.2129 cond_resched
3 0.2129 do_sys_open
3 0.2129 dput
3 0.2129 flush_tlb_mm
3 0.2129 free_page_and_swap_cache
3 0.2129 generic_file_aio_read
3 0.2129 get_next_timer_interrupt
3 0.2129 hrtimer_get_next_event
3 0.2129 init_request_from_bio
3 0.2129 internal_add_timer
3 0.2129 kmem_cache_zalloc
3 0.2129 ktime_get_ts
3 0.2129 page_remove_rmap
3 0.2129 permission
3 0.2129 set_page_dirty_lock
3 0.2129 sys_close
3 0.2129 sys_mprotect
3 0.2129 touch_atime
3 0.2129 unlock_page
3 0.2129 zone_watermark_ok
2 0.1419 __add_entropy_words
2 0.1419 __atomic_notifier_call_chain
2 0.1419 __dec_zone_state
2 0.1419 __do_page_cache_readahead
2 0.1419 __end_that_request_first
2 0.1419 __make_request
2 0.1419 __switch_to
2 0.1419 __wake_up_bit
2 0.1419 _atomic_dec_and_lock
2 0.1419 anon_vma_prepare
2 0.1419 bio_get_nr_vecs
2 0.1419 copy_strings
2 0.1419 copy_to_user
2 0.1419 d_alloc
2 0.1419 dio_bio_complete
2 0.1419 dnotify_parent
2 0.1419 do_notify_resume
2 0.1419 do_sigaction
2 0.1419 do_sys_poll
2 0.1419 down_read_trylock
2 0.1419 elv_dispatch_sort
2 0.1419 enqueue_hrtimer
2 0.1419 find_busiest_group
2 0.1419 find_vma_prepare
2 0.1419 flush_tlb_page
2 0.1419 free_uid
2 0.1419 generic_fillattr
2 0.1419 kmap_atomic_prot
2 0.1419 native_flush_tlb_single
2 0.1419 new_inode
2 0.1419 path_release
2 0.1419 pipe_write
2 0.1419 prio_tree_insert
2 0.1419 proc_lookup
2 0.1419 quicklist_trim
2 0.1419 rb_erase
2 0.1419 rb_insert_color
2 0.1419 recalc_task_prio
2 0.1419 restore_sigcontext
2 0.1419 ret_from_intr
2 0.1419 rw_verify_area
2 0.1419 sys_brk
2 0.1419 task_rq_lock
2 0.1419 tick_nohz_update_jiffies
2 0.1419 update_wall_time
2 0.1419 vm_normal_page
2 0.1419 vma_link
2 0.1419 vma_prio_tree_add
2 0.1419 wake_up_new_task
1 0.0710 __anon_vma_link
1 0.0710 __copy_user_intel
1 0.0710 __d_path
1 0.0710 __dentry_open
1 0.0710 __free_pages_ok
1 0.0710 __freed_request
1 0.0710 __get_user_4
1 0.0710 __init_rwsem
1 0.0710 __kmalloc
1 0.0710 __page_set_anon_rmap
1 0.0710 __pagevec_lru_add
1 0.0710 __pagevec_lru_add_active
1 0.0710 __rb_rotate_left
1 0.0710 __sk_dst_check
1 0.0710 __tasklet_schedule
1 0.0710 __vm_enough_memory
1 0.0710 __vma_link_rb
1 0.0710 __wait_on_bit
1 0.0710 __wake_up
1 0.0710 __wake_up_common
1 0.0710 _d_rehash
1 0.0710 acpi_hw_low_level_read
1 0.0710 acpi_hw_register_read
1 0.0710 add_timer_randomness
1 0.0710 alloc_pid
1 0.0710 anon_vma_unlink
1 0.0710 atomic_notifier_call_chain
1 0.0710 attach_pid
1 0.0710 bio_alloc
1 0.0710 bit_waitqueue
1 0.0710 blk_backing_dev_unplug
1 0.0710 blk_queue_bounce
1 0.0710 block_llseek
1 0.0710 block_read_full_page
1 0.0710 cache_reap
1 0.0710 cached_lookup
1 0.0710 call_rcu
1 0.0710 cfq_may_queue
1 0.0710 cfq_merge
1 0.0710 cfq_queue_empty
1 0.0710 clockevents_program_event
1 0.0710 cp_new_stat64
1 0.0710 cpu_idle
1 0.0710 create_read_pipe
1 0.0710 current_io_context
1 0.0710 d_path
1 0.0710 debug_mutex_free_waiter
1 0.0710 del_timer
1 0.0710 dequeue_signal
1 0.0710 dequeue_task
1 0.0710 detach_pid
1 0.0710 dio_complete
1 0.0710 dio_new_bio
1 0.0710 dio_zero_block
1 0.0710 disk_round_stats
1 0.0710 dnotify_flush
1 0.0710 do_IRQ
1 0.0710 do_brk
1 0.0710 do_exit
1 0.0710 do_fork
1 0.0710 do_generic_mapping_read
1 0.0710 do_syslog
1 0.0710 do_wait
1 0.0710 dummy_inode_alloc_security
1 0.0710 dummy_vm_enough_memory
1 0.0710 dup_fd
1 0.0710 effective_prio
1 0.0710 elv_insert
1 0.0710 elv_rqhash_add
1 0.0710 enable_8259A_irq
1 0.0710 exit_aio
1 0.0710 exit_mmap
1 0.0710 exit_thread
1 0.0710 fget
1 0.0710 fget_light
1 0.0710 filp_close
1 0.0710 find_extend_vma
1 0.0710 find_next_bit
1 0.0710 find_next_zero_bit
1 0.0710 find_vma_prev
1 0.0710 flush_signal_handlers
1 0.0710 fput
1 0.0710 free_hot_page
1 0.0710 free_pages_bulk
1 0.0710 free_pgd_range
1 0.0710 free_pipe_info
1 0.0710 generic_drop_inode
1 0.0710 generic_file_buffered_write
1 0.0710 generic_file_open
1 0.0710 generic_unplug_device
1 0.0710 get_empty_filp
1 0.0710 get_index
1 0.0710 get_nr_files
1 0.0710 get_request
1 0.0710 get_signal_to_deliver
1 0.0710 get_unused_fd
1 0.0710 getname
1 0.0710 half_md4_transform
1 0.0710 hrtimer_init
1 0.0710 hrtimer_start
1 0.0710 init_new_context
1 0.0710 init_waitqueue_head
1 0.0710 inode_setattr
1 0.0710 inotify_d_instantiate
1 0.0710 install_special_mapping
1 0.0710 ktime_get
1 0.0710 kunmap_atomic
1 0.0710 lapic_next_event
1 0.0710 lock_hrtimer_base
1 0.0710 lock_timer_base
1 0.0710 locks_remove_posix
1 0.0710 may_open
1 0.0710 memory_open
1 0.0710 mempool_alloc
1 0.0710 mempool_alloc_slab
1 0.0710 mm_init
1 0.0710 mm_release
1 0.0710 mntput_no_expire
1 0.0710 native_set_pte_at
1 0.0710 notifier_call_chain
1 0.0710 ns_to_timespec
1 0.0710 open_exec
1 0.0710 open_namei
1 0.0710 page_waitqueue
1 0.0710 percpu_counter_mod
1 0.0710 pipe_release
1 0.0710 proc_flush_task
1 0.0710 proc_sys_lookup_table_one
1 0.0710 profile_munmap
1 0.0710 radix_tree_insert
1 0.0710 radix_tree_preload
1 0.0710 rb_first
1 0.0710 read_chan
1 0.0710 recalc_bh_state
1 0.0710 recalc_sigpending_tsk
1 0.0710 release_task
1 0.0710 resched_task
1 0.0710 resume_kernel
1 0.0710 resume_userspace
1 0.0710 rt_set_nexthop
1 0.0710 secure_ip_id
1 0.0710 select_nohz_load_balancer
1 0.0710 send_signal
1 0.0710 seq_path
1 0.0710 seq_printf
1 0.0710 set_page_dirty
1 0.0710 should_remove_suid
1 0.0710 sigprocmask
1 0.0710 sock_aio_write
1 0.0710 submit_bio
1 0.0710 sys_fstat64
1 0.0710 sys_llseek
1 0.0710 sys_poll
1 0.0710 sys_rt_sigprocmask
1 0.0710 tcp_poll
1 0.0710 tick_do_update_jiffies64
1 0.0710 tick_nohz_restart_sched_tick
1 0.0710 tick_nohz_stop_sched_tick
1 0.0710 tick_sched_timer
1 0.0710 tty_ldisc_try
1 0.0710 tty_wakeup
1 0.0710 vfs_getattr
1 0.0710 vfs_llseek
1 0.0710 vfs_permission
1 0.0710 vfs_write
1 0.0710 vmtruncate
1 0.0710 vsnprintf
1 0.0710 wake_up_bit
1 0.0710 wake_up_process
1 0.0710 write_cache_pages
* Re: high system cpu load during intense disk i/o
2007-08-05 18:42 ` Dimitrios Apostolou
2007-08-05 20:08 ` Rafał Bilski
@ 2007-08-06 16:14 ` Rafał Bilski
2007-08-06 19:18 ` Dimitrios Apostolou
1 sibling, 1 reply; 26+ messages in thread
From: Rafał Bilski @ 2007-08-06 16:14 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: linux-kernel
> Hello and thanks for your reply.
Hello again,
> The cron job that is running every 10 min on my system is mpop (a
> fetchmail-like program) and another running every 5 min is mrtg. Both
> normally finish within 1-2 seconds.
>
> The fact that these simple cron jobs never finish is certainly because of
> the high system CPU load. If you look at the two_discs_bad.txt which I
> attached to my original message, you'll see that *vmlinux*, and
> specifically the *scheduler*, take up most of the time.
>
> And the fact that this happens only when running two I/O processes, while
> with only one everything is absolutely snappy (not at all slow, see
> one_disc.txt), makes me sure that this is a kernel bug. I'd be happy to
> help but I need some guidance to pinpoint the problem.
In your oprofile output I find "acpi_pm_read" particularly interesting. Unlike
other VIA chipsets that I know, yours doesn't use VLink to connect the
northbridge to the southbridge. Instead the PCI bus connects these two. As you
probably know, the maximal PCI throughput is 133MiB/s. In theory. In practice
probably less.
The ACPI registers are located on the southbridge. This probably means that
the processor needs access to the PCI bus in order to read the ACPI timer
register.
Now some math. A 20GiB disk can probably send data at a 20MiB/s rate, a 200GiB
disk probably at about 40MiB/s. So 20+2*40=100MiB/s. I think that this could
explain why a simple inl() call takes so much time and why your system isn't
very responsive.
> Thanks,
> Dimitris
Let me know if you find my theory amazing or amusing.
Rafał
* Re: high system cpu load during intense disk i/o
2007-08-06 14:20 ` Dimitrios Apostolou
@ 2007-08-06 17:33 ` Andrew Morton
2007-08-06 19:27 ` Dimitrios Apostolou
2007-08-06 20:04 ` Dimitrios Apostolou
0 siblings, 2 replies; 26+ messages in thread
From: Andrew Morton @ 2007-08-06 17:33 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: linux-kernel
On Mon, 06 Aug 2007 16:20:30 +0200 Dimitrios Apostolou <jimis@gmx.net> wrote:
> Andrew Morton wrote:
> > On Sun, 5 Aug 2007 19:03:12 +0300 Dimitrios Apostolou <jimis@gmx.net> wrote:
> >
> >> was my report so complicated?
> >
> > We're bad.
> >
> > Seems that your context switch rate when running two instances of
> > badblocks against two different disks went batshit insane. It doesn't
> > happen here.
> >
> > Please capture the `vmstat 1' output while running the problematic
> > workload.
> >
> > The oom-killing could have been unrelated to the CPU load problem. iirc
> > badblocks uses a lot of memory, so it might have been genuine. Keep an eye
> > on the /proc/meminfo output and send the kernel dmesg output from the
> > oom-killing event.
>
> Please see the attached files. Unfortunately I don't see any useful info
> in them:
> *_before: before running any badblocks process
> *_while: while running badblocks process, but without any cron job
> having kicked in
> *_bad: 5 minutes later, after some cron jobs had kicked in
>
> About the OOM killer, indeed I believe that it is unrelated. It started
> killing after about 2 days, when hundreds of processes were stuck as
> running and taking up memory, so I suppose the 256 MB RAM were truly
> filled. I just mentioned it because its behaviour is completely
> unhelpful. It doesn't touch the badblocks processes, it rarely touches
> the stuck cron jobs, but it kills other irrelevant processes.
> If you still want the killing logs, tell me and I'll search for them.
ah. Your context-switch rate during the dual-badblocks run is not high at
all.
I suspect I was fooled by the oprofile output, which showed tremendous
amounts of load in schedule() and switch_to(). The percentages which
opreport shows are the percentage of non-halted CPU time. So if you have a
function in the kernel which is using 1% of the total CPU, and the CPU is
halted for 95% of the time, it appears that the function is taking 20% of
CPU!
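Andrew's arithmetic, using the hypothetical figures from his example:

```python
# Illustration of the opreport distortion described above: a function
# using 1% of total (wall-clock) CPU time, on a CPU halted 95% of the time.
total_share = 0.01            # function's share of total CPU time
halted = 0.95                 # fraction of time the CPU is halted
non_halted = 1.0 - halted     # opreport only sees this slice

apparent = total_share / non_halted  # share of non-halted time opreport shows
print(f"apparent share in opreport: {apparent:.0%}")
```

So a 1% consumer is reported at 20%, which is why schedule() and __switch_to() looked so hot on a mostly idle machine.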
The fix for that is to boot with the "idle=poll" boot parameter, to make
the CPU spin when it has nothing else to do.
I'm suspecting that your machine is just stuck in D state waiting for disk.
Did we have a sysrq-T trace?
* Re: high system cpu load during intense disk i/o
2007-08-06 16:14 ` Rafał Bilski
@ 2007-08-06 19:18 ` Dimitrios Apostolou
2007-08-06 19:48 ` Alan Cox
2007-08-06 22:12 ` Rafał Bilski
0 siblings, 2 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-06 19:18 UTC (permalink / raw)
To: Rafał Bilski; +Cc: linux-kernel
Rafał Bilski wrote:
>> Hello and thanks for your reply.
> Hello again,
>> The cron job that is running every 10 min on my system is mpop (a
>> fetchmail-like program) and another running every 5 min is mrtg. Both
>> normally finish within 1-2 seconds.
>> The fact that these simple cron jobs never finish is certainly
>> because of the high system CPU load. If you look at the two_discs_bad.txt
>> which I attached to my original message, you'll see that *vmlinux*,
>> and specifically the *scheduler*, take up most of the time.
>> And the fact that this happens only when running two I/O processes,
>> while with only one everything is absolutely snappy (not at all
>> slow, see one_disc.txt), makes me sure that this is a kernel bug. I'd
>> be happy to help but I need some guidance to pinpoint the problem.
> In your oprofile output I find "acpi_pm_read" particularly interesting.
> Unlike other VIA chipsets that I know, yours doesn't use VLink to
> connect the northbridge to the southbridge. Instead the PCI bus connects
> these two. As you probably know, the maximal PCI throughput is 133MiB/s.
> In theory. In practice probably less.
> The ACPI registers are located on the southbridge. This probably means
> that the processor needs access to the PCI bus in order to read the ACPI
> timer register.
> Now some math. A 20GiB disk can probably send data at a 20MiB/s rate, a
> 200GiB disk probably at about 40MiB/s. So 20+2*40=100MiB/s. I think that
> this could explain why a simple inl() call takes so much time and why
> your system isn't very responsive.
>> Thanks, Dimitris
> Let me know if You find my theory amazing or amusing.
Hello Rafal,
I find your theory very nice, but unfortunately I don't think it applies
here. As you can see from the vmstat outputs the write throughput is
about 15MB/s for both disks. When reading I get about 30MB/s again from
both disks. The other disk, the small one, is mostly idle, except for
writing little bits and bytes now and then. Since the problem occurs
when writing, 15MB/s is just too little I think for the PCI bus.
However I find it quite possible that the throughput limit has been reached
because of software (driver) problems. I have done various testing (mostly
"hdparm -tT") with exactly the same PC and disks since about kernel 2.6.8
(maybe even earlier). I remember with certainty that in the early days read
throughput was about 50MB/s for each of the big disks, and combined with
RAID 0 I got ~75MB/s. Those figures have been dropping gradually with each
new kernel release, and the situation today, with 2.6.22, is that hdparm
gives a maximum throughput of 20MB/s for each disk, and for RAID 0 too!
I have been ignoring these performance regressions because there were no
stability problems until now. So could it be that I'm reaching the
20MB/s driver limit and some requests take too long to be served?
Dimitris
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-06 17:33 ` Andrew Morton
@ 2007-08-06 19:27 ` Dimitrios Apostolou
2007-08-06 20:04 ` Dimitrios Apostolou
1 sibling, 0 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-06 19:27 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel
Hi,
Andrew Morton wrote:
> I suspect I was fooled by the oprofile output, which showed tremendous
> amounts of load in schedule() and switch_to(). The percentages which
> opreport shows are the percentage of non-halted CPU time. So if you have a
> function in the kernel which is using 1% of the total CPU, and the CPU is
> halted for 95% of the time, it appears that the function is taking 20% of
> CPU!
>
> The fix for that is to boot with the "idle=poll" boot parameter, to make
> the CPU spin when it has nothing else to do.
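The distortion Morton describes can be illustrated numerically (the 1%/95% figures are his hypothetical example, not from this machine's profile):

```python
def opreport_percent(func_busy_frac, halted_frac):
    """Apparent opreport share of a function: opreport divides a
    function's CPU_CLK_UNHALTED samples by non-halted time only,
    so heavy idling inflates every percentage."""
    non_halted = 1.0 - halted_frac
    return func_busy_frac / non_halted

# A function using 1% of total CPU time, on a CPU that is halted
# 95% of the time, shows up as 20% in opreport:
print(f"{opreport_percent(0.01, 0.95):.0%}")  # 20%
```

Booting with "idle=poll" removes the halted time, so the reported percentages again approximate shares of total CPU time.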
I'll test again the two_discs_bad situation after booting with that
parameter. Thanks.
>
> I'm suspecting that your machine is just stuck in D state waiting for disk.
> Did we have a sysrq-T trace?
The amazing thing is that this doesn't happen! Every single cron job
that keeps running (I intentionally mentioned this before) and never
ends is in R state. strace'ing the processes shows them just running
*extremely* slowly. I also changed the I/O elevator of hdb (the OS
disk) from cfq to deadline, unfortunately with no results. That is why
I've been considering it a CPU scheduler issue.
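For reference, the elevator change described above is done at runtime through sysfs (the device name hdb is taken from this thread; this sketches the usual procedure, not the exact commands that were typed):

```shell
# Show the available schedulers; the active one is shown in brackets,
# e.g.: noop [cfq] deadline
cat /sys/block/hdb/queue/scheduler

# Switch hdb from cfq to deadline without rebooting
echo deadline > /sys/block/hdb/queue/scheduler
```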
Dimitris
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-06 19:18 ` Dimitrios Apostolou
@ 2007-08-06 19:48 ` Alan Cox
2007-08-07 0:40 ` Dimitrios Apostolou
2007-08-06 22:12 ` Rafał Bilski
1 sibling, 1 reply; 26+ messages in thread
From: Alan Cox @ 2007-08-06 19:48 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: Rafał Bilski, linux-kernel
> > In your oprofile output I find "acpi_pm_read" particularly interesting.
> > Unlike other VIA chipsets which I know, yours doesn't use VLink to
> > connect the northbridge to the southbridge. Instead the PCI bus connects
> > these two. As you probably know, the maximal PCI throughput is 133MiB/s.
> > In theory. In practice probably less.
acpi_pm_read is capable of disappearing into SMM traps which will make
it look very slow.
> about 15MB/s for both disks. When reading I get about 30MB/s, again from
> both disks. The other disk, the small one, is mostly idle, apart from
> writing little bits and pieces now and then. Since the problem occurs
> when writing, 15MB/s seems far too little to saturate the PCI bus.
It's about right for some of the older VIA chipsets, but if you are seeing
a speed loss then we need to know precisely which kernels the speed dropped
at. It could be an I/O scheduling issue that your system exposes, or
some kind of PCI bus contention when both disks are active at once.
> I have been ignoring these performance regressions because there were no
> stability problems until now. So could it be that I'm reaching the
> 20MB/s driver limit and some requests take too long to be served?
Nope.
Alan
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-06 17:33 ` Andrew Morton
2007-08-06 19:27 ` Dimitrios Apostolou
@ 2007-08-06 20:04 ` Dimitrios Apostolou
1 sibling, 0 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-06 20:04 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 2274 bytes --]
Andrew Morton wrote:
> On Mon, 06 Aug 2007 16:20:30 +0200 Dimitrios Apostolou <jimis@gmx.net> wrote:
>
>> Andrew Morton wrote:
>>> On Sun, 5 Aug 2007 19:03:12 +0300 Dimitrios Apostolou <jimis@gmx.net> wrote:
>>>
>>>> was my report so complicated?
>>> We're bad.
>>>
>>> Seems that your context switch rate when running two instances of
>>> badblocks against two different disks went batshit insane. It doesn't
>>> happen here.
>>>
>>> Please capture the `vmstat 1' output while running the problematic
>>> workload.
>>>
>>> The oom-killing could have been unrelated to the CPU load problem. iirc
>>> badblocks uses a lot of memory, so it might have been genuine. Keep an eye
>>> on the /proc/meminfo output and send the kernel dmesg output from the
>>> oom-killing event.
>> Please see the attached files. Unfortunately I don't see any useful info
>> in them:
>> *_before: before running any badblocks process
>> *_while: while running badblocks process, but without any cron job
>> having kicked in
>> *_bad: 5 minutes later that some cron jobs kicked in
>>
>> About the OOM killer, indeed I believe that it is unrelated. It started
>> killing after about 2 days, when hundreds of processes were stuck as
>> running and taking up memory, so I suppose the 256 MB RAM were truly
>> full. I just mentioned it because its behaviour is completely
>> unhelpful. It doesn't touch the badblocks process, it rarely touches
>> the cron jobs stuck as running, but it kills other irrelevant processes.
>> If you still want the killing logs, tell me and I'll search for them.
>
> ah. Your context-switch rate during the dual-badblocks run is not high at
> all.
>
> I suspect I was fooled by the oprofile output, which showed tremendous
> amounts of load in schedule() and switch_to(). The percentages which
> opreport shows are the percentage of non-halted CPU time. So if you have a
> function in the kernel which is using 1% of the total CPU, and the CPU is
> halted for 95% of the time, it appears that the function is taking 20% of
> CPU!
>
> The fix for that is to boot with the "idle=poll" boot parameter, to make
> the CPU spin when it has nothing else to do.
I'm attaching the new oprofile output. I can't see any difference
however. :-s
Dimitris
[-- Attachment #2: bad.txt --]
[-- Type: text/plain, Size: 18206 bytes --]
Mon Aug 6 21:46:59 EEST 2007
+ opcontrol --vmlinux=/usr/src/linux-2.6.22-ARCH/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.025 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
7917 57.2658 vmlinux
2204 15.9421 libc-2.6.so
958 6.9295 libpython2.5.so.1.0
815 5.8951 ide_core
773 5.5913 perl
242 1.7505 ld-2.6.so
216 1.5624 mpop
204 1.4756 bash
121 0.8752 libgnutls.so.13.3.0
64 0.4629 badblocks
CPU_CLK_UNHALT...|
samples| %|
------------------
48 75.0000 badblocks
10 15.6250 anon (tgid:4338 range:0xb7f6c000-0xb7f6d000)
6 9.3750 anon (tgid:4339 range:0xb7f64000-0xb7f65000)
46 0.3327 ISO8859-1.so
37 0.2676 ext3
34 0.2459 imap-login
CPU_CLK_UNHALT...|
samples| %|
------------------
31 91.1765 imap-login
1 2.9412 anon (tgid:3989 range:0xb7f80000-0xb7f81000)
1 2.9412 anon (tgid:3990 range:0xb7f72000-0xb7f73000)
1 2.9412 anon (tgid:4332 range:0xb7fd6000-0xb7fd7000)
33 0.2387 libext2fs.so.2.4
24 0.1736 jbd
24 0.1736 libpthread-2.6.so
21 0.1519 gawk
15 0.1085 oprofile
13 0.0940 libcrypto.so.0.9.8
12 0.0868 ide_disk
11 0.0796 skge
7 0.0506 sshd
6 0.0434 dovecot-auth
CPU_CLK_UNHALT...|
samples| %|
------------------
5 83.3333 dovecot-auth
1 16.6667 anon (tgid:3971 range:0xb7f8c000-0xb7f8d000)
4 0.0289 libnetsnmp.so.15.0.0
4 0.0289 libnetsnmpmibs.so.15.0.0
4 0.0289 imap
4 0.0289 dovecot
3 0.0217 grep
3 0.0217 reiserfs
3 0.0217 locale-archive
1 0.0072 libdl-2.6.so
1 0.0072 init
1 0.0072 screen-4.0.3
+ echo
+ echo
+ echo
+ opreport -l /usr/src/linux-2.6.22-ARCH/vmlinux
CPU: PIII, speed 798.025 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
2477 31.2871 __switch_to
1625 20.5255 schedule
246 3.1072 mask_and_ack_8259A
222 2.8041 __blockdev_direct_IO
147 1.8568 follow_page
107 1.3515 put_page
106 1.3389 do_wp_page
91 1.1494 native_load_tls
88 1.1115 __bio_add_page
80 1.0105 delay_tsc
63 0.7958 __handle_mm_fault
50 0.6316 submit_page_section
49 0.6189 get_page_from_freelist
48 0.6063 dio_bio_add_page
45 0.5684 dequeue_task
41 0.5179 get_user_pages
40 0.5052 bio_alloc_bioset
39 0.4926 enable_8259A_irq
38 0.4800 dio_send_cur_page
38 0.4800 page_address
38 0.4800 sysenter_past_esp
37 0.4673 dio_get_page
37 0.4673 find_get_page
36 0.4547 bio_add_page
36 0.4547 kmem_cache_alloc
36 0.4547 unmap_vmas
35 0.4421 kmem_cache_free
34 0.4295 blk_rq_map_sg
33 0.4168 blk_recount_segments
32 0.4042 do_page_fault
31 0.3916 generic_file_direct_write
30 0.3789 __generic_file_aio_write_nolock
30 0.3789 vm_normal_page
28 0.3537 __link_path_walk
28 0.3537 __mutex_lock_slowpath
26 0.3284 irq_entries_start
25 0.3158 filemap_nopage
25 0.3158 vfs_write
23 0.2905 find_vma
22 0.2779 __d_lookup
22 0.2779 dio_bio_complete
22 0.2779 mark_page_accessed
21 0.2653 add_timer_randomness
21 0.2653 max_block
21 0.2653 page_fault
19 0.2400 restore_all
18 0.2274 __add_entropy_words
18 0.2274 preempt_schedule
17 0.2147 __copy_to_user_ll
17 0.2147 __generic_unplug_device
17 0.2147 do_sys_poll
17 0.2147 fget_light
17 0.2147 math_state_restore
16 0.2021 __mod_timer
16 0.2021 generic_file_direct_IO
16 0.2021 preempt_schedule_irq
15 0.1895 block_llseek
15 0.1895 do_sync_write
14 0.1768 current_fs_time
14 0.1768 do_generic_mapping_read
14 0.1768 mempool_free
14 0.1768 native_flush_tlb_single
14 0.1768 radix_tree_lookup
14 0.1768 rw_verify_area
13 0.1642 __alloc_pages
13 0.1642 __make_request
13 0.1642 _spin_lock_irqsave
13 0.1642 cond_resched
12 0.1516 cfq_completed_request
12 0.1516 strnlen_user
11 0.1389 __mutex_unlock_slowpath
11 0.1389 do_lookup
11 0.1389 generic_make_request
11 0.1389 get_request
11 0.1389 restore_nocheck
10 0.1263 blkdev_direct_IO
10 0.1263 load_elf_binary
10 0.1263 unix_poll
9 0.1137 cfq_insert_request
9 0.1137 cfq_set_request
9 0.1137 copy_page_range
9 0.1137 do_mmap_pgoff
9 0.1137 elv_insert
9 0.1137 generic_file_aio_write_nolock
9 0.1137 generic_permission
9 0.1137 recalc_task_prio
8 0.1010 blkdev_get_blocks
8 0.1010 cache_reap
8 0.1010 dio_new_bio
8 0.1010 file_update_time
8 0.1010 sys_llseek
8 0.1010 vsnprintf
7 0.0884 bio_put
7 0.0884 blk_backing_dev_unplug
7 0.0884 cfq_dispatch_requests
7 0.0884 copy_process
7 0.0884 device_not_available
7 0.0884 disk_round_stats
7 0.0884 elv_queue_empty
7 0.0884 free_block
7 0.0884 generic_unplug_device
7 0.0884 hrtimer_interrupt
7 0.0884 kmap_atomic_prot
7 0.0884 mutex_remove_waiter
7 0.0884 need_resched
7 0.0884 page_remove_rmap
7 0.0884 permission
6 0.0758 __blk_put_request
6 0.0758 __copy_from_user_ll
6 0.0758 __copy_user_intel
6 0.0758 __end_that_request_first
6 0.0758 __find_get_block_slow
6 0.0758 bio_init
6 0.0758 blk_remove_plug
6 0.0758 cfq_may_queue
6 0.0758 dio_cleanup
6 0.0758 dio_complete
6 0.0758 dput
6 0.0758 getname
6 0.0758 inotify_inode_queue_event
6 0.0758 kfree
6 0.0758 memcpy
6 0.0758 mempool_alloc
6 0.0758 path_walk
6 0.0758 rb_erase
6 0.0758 read_tsc
6 0.0758 task_rq_lock
6 0.0758 try_to_wake_up
6 0.0758 vfs_llseek
5 0.0632 __const_udelay
5 0.0632 __fput
5 0.0632 blk_plug_device
5 0.0632 cfq_remove_request
5 0.0632 copy_strings
5 0.0632 current_io_context
5 0.0632 debug_mutex_add_waiter
5 0.0632 dnotify_parent
5 0.0632 do_IRQ
5 0.0632 do_filp_open
5 0.0632 do_path_lookup
5 0.0632 drain_array
5 0.0632 elv_rqhash_add
5 0.0632 free_poll_entry
5 0.0632 generic_segment_checks
5 0.0632 handle_level_irq
5 0.0632 idle_cpu
5 0.0632 init_request_from_bio
5 0.0632 io_schedule
5 0.0632 ip_append_data
5 0.0632 note_interrupt
5 0.0632 proc_sys_lookup_table_one
5 0.0632 sched_clock
5 0.0632 sys_write
5 0.0632 up_read
4 0.0505 __dentry_open
4 0.0505 __freed_request
4 0.0505 __pagevec_lru_add_active
4 0.0505 _atomic_dec_and_lock
4 0.0505 bdev_read_only
4 0.0505 cfq_queue_empty
4 0.0505 copy_to_user
4 0.0505 debug_mutex_free_waiter
4 0.0505 do_munmap
4 0.0505 do_sync_read
4 0.0505 do_wait
4 0.0505 dummy_inode_permission
4 0.0505 elv_completed_request
4 0.0505 end_that_request_last
4 0.0505 filemap_write_and_wait
4 0.0505 find_extend_vma
4 0.0505 find_vma_prev
4 0.0505 generic_file_aio_read
4 0.0505 get_empty_filp
4 0.0505 get_request_wait
4 0.0505 hweight32
4 0.0505 internal_add_timer
4 0.0505 irq_exit
4 0.0505 native_read_tsc
4 0.0505 number
4 0.0505 poll_freewait
4 0.0505 proc_lookup
4 0.0505 setup_arg_pages
4 0.0505 should_remove_suid
4 0.0505 sock_poll
4 0.0505 submit_bio
4 0.0505 tick_sched_timer
4 0.0505 unmap_region
4 0.0505 vma_adjust
3 0.0379 __atomic_notifier_call_chain
3 0.0379 __do_page_cache_readahead
3 0.0379 __elv_add_request
3 0.0379 __find_get_block
3 0.0379 __mark_inode_dirty
3 0.0379 __pte_alloc
3 0.0379 anon_vma_prepare
3 0.0379 anon_vma_unlink
3 0.0379 bio_endio
3 0.0379 bit_waitqueue
3 0.0379 call_rcu
3 0.0379 cfq_init_prio_data
3 0.0379 cfq_put_queue
3 0.0379 cfq_service_tree_add
3 0.0379 clear_bdi_congested
3 0.0379 common_interrupt
3 0.0379 credit_entropy_store
3 0.0379 deactivate_task
3 0.0379 debug_mutex_lock_common
3 0.0379 debug_mutex_unlock
3 0.0379 del_timer
3 0.0379 down_read_trylock
3 0.0379 dummy_inode_getattr
3 0.0379 elv_dispatch_sort
3 0.0379 elv_may_queue
3 0.0379 elv_put_request
3 0.0379 elv_set_request
3 0.0379 enqueue_task
3 0.0379 error_code
3 0.0379 flush_old_exec
3 0.0379 free_hot_cold_page
3 0.0379 generic_fillattr
3 0.0379 kmem_cache_zalloc
3 0.0379 link_path_walk
3 0.0379 lock_timer_base
3 0.0379 locks_remove_flock
3 0.0379 may_expand_vm
3 0.0379 native_load_esp0
3 0.0379 open_namei
3 0.0379 path_lookup_open
3 0.0379 pipe_poll
3 0.0379 proc_flush_task
3 0.0379 release_vm86_irqs
3 0.0379 remove_vma
3 0.0379 rq_init
3 0.0379 sock_alloc_send_skb
3 0.0379 strncpy_from_user
3 0.0379 sys_mmap2
3 0.0379 sys_mprotect
3 0.0379 touch_atime
3 0.0379 vfs_read
3 0.0379 vm_acct_memory
3 0.0379 vm_stat_account
3 0.0379 vma_link
2 0.0253 I_BDEV
2 0.0253 __dec_zone_state
2 0.0253 __group_complete_signal
2 0.0253 __inc_zone_state
2 0.0253 __mutex_lock_interruptible_slowpath
2 0.0253 __path_lookup_intent_open
2 0.0253 __pollwait
2 0.0253 __rb_rotate_right
2 0.0253 __vm_enough_memory
2 0.0253 __vma_link_rb
2 0.0253 __wake_up
2 0.0253 __wake_up_bit
2 0.0253 alloc_inode
2 0.0253 anon_vma_ctor
2 0.0253 bio_alloc
2 0.0253 bio_free
2 0.0253 bio_fs_destructor
2 0.0253 bio_get_nr_vecs
2 0.0253 bio_phys_segments
2 0.0253 blk_queue_bounce
2 0.0253 cache_alloc_refill
2 0.0253 check_pgt_cache
2 0.0253 debug_mutex_set_owner
2 0.0253 dio_bio_end_io
2 0.0253 dio_zero_block
2 0.0253 do_notify_parent
2 0.0253 do_notify_resume
2 0.0253 do_sys_open
2 0.0253 down_read
2 0.0253 dst_alloc
2 0.0253 effective_prio
2 0.0253 elv_dequeue_request
2 0.0253 elv_next_request
2 0.0253 file_free_rcu
2 0.0253 file_read_actor
2 0.0253 filp_close
2 0.0253 find_next_bit
2 0.0253 find_next_zero_bit
2 0.0253 find_vma_prepare
2 0.0253 flush_tlb_mm
2 0.0253 flush_tlb_page
2 0.0253 generic_file_mmap
2 0.0253 get_index
2 0.0253 get_unused_fd
2 0.0253 getnstimeofday
2 0.0253 handle_IRQ_event
2 0.0253 icmp_push_reply
2 0.0253 icmp_send
2 0.0253 kill_fasync
2 0.0253 kunmap_atomic
2 0.0253 lru_cache_add_active
2 0.0253 mntput_no_expire
2 0.0253 native_read_cr0
2 0.0253 notifier_call_chain
2 0.0253 ordered_bio_endio
2 0.0253 page_cache_readahead
2 0.0253 percpu_counter_mod
2 0.0253 poll_initwait
2 0.0253 profile_munmap
2 0.0253 put_filp
2 0.0253 put_io_context
2 0.0253 quicklist_trim
2 0.0253 radix_tree_gang_lookup_tag
2 0.0253 rb_insert_color
2 0.0253 remove_suid
2 0.0253 resume_userspace
2 0.0253 sched_balance_self
2 0.0253 seq_printf
2 0.0253 smp_apic_timer_interrupt
2 0.0253 sys_brk
2 0.0253 sys_fstat64
2 0.0253 sys_read
2 0.0253 sys_rt_sigaction
2 0.0253 sys_stat64
2 0.0253 system_call
2 0.0253 vma_merge
2 0.0253 work_notifysig
2 0.0253 zone_watermark_ok
1 0.0126 __activate_task
1 0.0126 __alloc_skb
1 0.0126 __blocking_notifier_call_chain
1 0.0126 __bread
1 0.0126 __delay
1 0.0126 __do_softirq
1 0.0126 __follow_mount
1 0.0126 __iget
1 0.0126 __ip_select_ident
1 0.0126 __put_user_4
1 0.0126 __rb_rotate_left
1 0.0126 __rcu_process_callbacks
1 0.0126 __rmqueue
1 0.0126 __sigqueue_alloc
1 0.0126 __sigqueue_free
1 0.0126 __sk_dst_check
1 0.0126 __tasklet_schedule
1 0.0126 __tcp_ack_snd_check
1 0.0126 __tcp_push_pending_frames
1 0.0126 __user_walk_fd
1 0.0126 __vma_link
1 0.0126 _read_lock_irqsave
1 0.0126 account_system_time
1 0.0126 account_user_time
1 0.0126 add_disk_randomness
1 0.0126 add_wait_queue
1 0.0126 all_vm_events
1 0.0126 alloc_buffer_head
1 0.0126 alloc_pid
1 0.0126 anon_pipe_buf_release
1 0.0126 anon_vma_link
1 0.0126 arch_setup_additional_pages
1 0.0126 bio_hw_segments
1 0.0126 blk_do_ordered
1 0.0126 blk_start_queueing
1 0.0126 blockable_page_cache_readahead
1 0.0126 blocking_notifier_call_chain
1 0.0126 bmap
1 0.0126 can_vma_merge_after
1 0.0126 cfq_add_rq_rb
1 0.0126 cfq_choose_req
1 0.0126 cfq_cic_rb_lookup
1 0.0126 cfq_rb_erase
1 0.0126 check_userspace
1 0.0126 copy_semundo
1 0.0126 core_sys_select
1 0.0126 cp_new_stat64
1 0.0126 csum_partial_copy_generic
1 0.0126 d_alloc
1 0.0126 dec_zone_page_state
1 0.0126 dequeue_signal
1 0.0126 devinet_ioctl
1 0.0126 do_fork
1 0.0126 do_group_exit
1 0.0126 do_sigaction
1 0.0126 drive_stat_acct
1 0.0126 dummy_capable
1 0.0126 dummy_file_alloc_security
1 0.0126 dummy_task_kill
1 0.0126 dummy_vm_enough_memory
1 0.0126 dup_fd
1 0.0126 elv_rb_add
1 0.0126 end_that_request_first
1 0.0126 exit_itimers
1 0.0126 fasync_helper
1 0.0126 fget
1 0.0126 file_kill
1 0.0126 file_move
1 0.0126 file_ra_state_init
1 0.0126 flush_sigqueue
1 0.0126 fn_hash_lookup
1 0.0126 fput
1 0.0126 free_page_and_swap_cache
1 0.0126 free_pgtables
1 0.0126 get_futex_key
1 0.0126 get_io_context
1 0.0126 get_task_mm
1 0.0126 hrtimer_try_to_cancel
1 0.0126 icmp_glue_bits
1 0.0126 ifind_fast
1 0.0126 inotify_dentry_parent_queue_event
1 0.0126 invalidate_inode_buffers
1 0.0126 ip_route_input
1 0.0126 irq_enter
1 0.0126 local_bh_enable_ip
1 0.0126 may_open
1 0.0126 mempool_alloc_slab
1 0.0126 mempool_free_slab
1 0.0126 n_tty_receive_buf
1 0.0126 native_apic_write
1 0.0126 native_set_pte_at
1 0.0126 open_exec
1 0.0126 opost
1 0.0126 page_add_file_rmap
1 0.0126 pipe_release
1 0.0126 pipe_write
1 0.0126 prio_tree_insert
1 0.0126 prio_tree_remove
1 0.0126 proc_sys_lookup_table
1 0.0126 profile_tick
1 0.0126 pty_chars_in_buffer
1 0.0126 put_tty_queue
1 0.0126 rb_first
1 0.0126 recalc_sigpending_tsk
1 0.0126 release_open_intent
1 0.0126 release_task
1 0.0126 restore_sigcontext
1 0.0126 ret_from_exception
1 0.0126 ret_from_intr
1 0.0126 scheduler_tick
1 0.0126 search_binary_handler
1 0.0126 secure_ip_id
1 0.0126 show_stat
1 0.0126 sigprocmask
1 0.0126 snprintf
1 0.0126 sock_fasync
1 0.0126 softlockup_tick
1 0.0126 special_mapping_nopage
1 0.0126 split_vma
1 0.0126 sys_access
1 0.0126 sys_clone
1 0.0126 sys_execve
1 0.0126 sys_futex
1 0.0126 sys_open
1 0.0126 sys_rt_sigprocmask
1 0.0126 syscall_exit
1 0.0126 sysctl_head_next
1 0.0126 task_running_tick
1 0.0126 tcp_poll
1 0.0126 tcp_rcv_space_adjust
1 0.0126 tcp_recvmsg
1 0.0126 tcp_transmit_skb
1 0.0126 test_set_page_writeback
1 0.0126 tick_do_update_jiffies64
1 0.0126 tick_program_event
1 0.0126 timespec_trunc
1 0.0126 try_to_del_timer_sync
1 0.0126 tty_ioctl
1 0.0126 tty_ldisc_ref_wait
1 0.0126 udp_flush_pending_frames
1 0.0126 unuse_table
1 0.0126 update_process_times
1 0.0126 update_wall_time
1 0.0126 vfs_stat_fd
1 0.0126 vma_prio_tree_remove
1 0.0126 wake_up_inode
1 0.0126 wake_up_new_task
1 0.0126 wake_up_process
1 0.0126 write_cache_pages
[-- Attachment #3: idle.txt --]
[-- Type: text/plain, Size: 9398 bytes --]
Mon Aug 6 21:38:04 EEST 2007
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
Stopping profiling.
Killing daemon.
CPU: PIII, speed 798.025 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
40131 97.6495 vmlinux
318 0.7738 libc-2.6.so
168 0.4088 oprofiled
CPU_CLK_UNHALT...|
samples| %|
------------------
167 99.4048 oprofiled
1 0.5952 anon (tgid:4159 range:0xb7f55000-0xb7f56000)
162 0.3942 bash
133 0.3236 ld-2.6.so
51 0.1241 oprofile
49 0.1192 ISO8859-1.so
33 0.0803 ext3
20 0.0487 jbd
5 0.0122 locale-archive
4 0.0097 gawk
4 0.0097 imap-login
3 0.0073 grep
3 0.0073 reiserfs
2 0.0049 cat
2 0.0049 libcrypto.so.0.9.8
2 0.0049 dovecot-auth
1 0.0024 ide_core
1 0.0024 ide_disk
1 0.0024 libncurses.so.5.6
1 0.0024 which
1 0.0024 libnetsnmp.so.15.0.0
1 0.0024 libnetsnmpmibs.so.15.0.0
1 0.0024 dovecot
CPU_CLK_UNHALT...|
samples| %|
------------------
1 100.000 anon (tgid:3968 range:0xb7fa7000-0xb7fa8000)
CPU: PIII, speed 798.025 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
39019 97.2291 cpu_idle
263 0.6554 poll_idle
131 0.3264 quicklist_trim
92 0.2292 do_wp_page
50 0.1246 check_pgt_cache
44 0.1096 __handle_mm_fault
37 0.0922 get_page_from_freelist
31 0.0772 page_fault
21 0.0523 unmap_vmas
15 0.0374 __copy_to_user_ll
13 0.0324 filemap_nopage
12 0.0299 radix_tree_lookup
11 0.0274 copy_process
10 0.0249 __link_path_walk
10 0.0249 sysenter_past_esp
9 0.0224 do_page_fault
9 0.0224 find_get_page
8 0.0199 kmem_cache_free
6 0.0150 __d_lookup
6 0.0150 __find_get_block
6 0.0150 page_address
5 0.0125 _atomic_dec_and_lock
5 0.0125 do_mmap_pgoff
5 0.0125 free_hot_cold_page
5 0.0125 kmem_cache_alloc
5 0.0125 load_elf_binary
5 0.0125 strnlen_user
4 0.0100 __wake_up_bit
4 0.0100 copy_page_range
4 0.0100 d_alloc
4 0.0100 error_code
4 0.0100 find_vma
4 0.0100 flush_tlb_mm
4 0.0100 inode_init_once
4 0.0100 kmap_atomic_prot
4 0.0100 put_page
4 0.0100 restore_nocheck
4 0.0100 schedule
4 0.0100 vm_normal_page
3 0.0075 __follow_mount
3 0.0075 __mutex_lock_slowpath
3 0.0075 __rmqueue
3 0.0075 anon_vma_unlink
3 0.0075 ktime_get_ts
3 0.0075 mark_page_accessed
3 0.0075 memcpy
3 0.0075 prio_tree_insert
3 0.0075 strncpy_from_user
3 0.0075 sys_close
2 0.0050 __atomic_notifier_call_chain
2 0.0050 __find_get_block_slow
2 0.0050 __lookup_mnt
2 0.0050 __mutex_unlock_slowpath
2 0.0050 __rb_rotate_left
2 0.0050 __rcu_process_callbacks
2 0.0050 __switch_to
2 0.0050 anon_vma_prepare
2 0.0050 cache_alloc_refill
2 0.0050 debug_mutex_add_waiter
2 0.0050 do_path_lookup
2 0.0050 dup_fd
2 0.0050 generic_fillattr
2 0.0050 generic_make_request
2 0.0050 get_task_mm
2 0.0050 get_unused_fd
2 0.0050 getnstimeofday
2 0.0050 hrtimer_interrupt
2 0.0050 hweight32
2 0.0050 inotify_dentry_parent_queue_event
2 0.0050 link_path_walk
2 0.0050 mask_and_ack_8259A
2 0.0050 may_open
2 0.0050 path_walk
2 0.0050 pgd_alloc
2 0.0050 preempt_schedule
2 0.0050 proc_lookup
2 0.0050 tick_nohz_restart_sched_tick
2 0.0050 unlink_file_vma
2 0.0050 vm_stat_account
2 0.0050 vma_merge
2 0.0050 zone_watermark_ok
1 0.0025 __alloc_pages
1 0.0025 __alloc_skb
1 0.0025 __copy_from_user_ll
1 0.0025 __copy_user_intel
1 0.0025 __dentry_open
1 0.0025 __do_softirq
1 0.0025 __end_that_request_first
1 0.0025 __fput
1 0.0025 __free_pages
1 0.0025 __getblk
1 0.0025 __inc_zone_state
1 0.0025 __kmalloc
1 0.0025 __pagevec_lru_add
1 0.0025 __pte_alloc
1 0.0025 __put_task_struct
1 0.0025 __tasklet_schedule
1 0.0025 __vm_enough_memory
1 0.0025 __vma_link_rb
1 0.0025 _read_lock_irqsave
1 0.0025 _spin_lock_irqsave
1 0.0025 account_user_time
1 0.0025 alloc_buffer_head
1 0.0025 alloc_inode
1 0.0025 apic_timer_interrupt
1 0.0025 arch_get_unmapped_area_topdown
1 0.0025 arch_pick_mmap_layout
1 0.0025 attach_pid
1 0.0025 bio_init
1 0.0025 block_read_full_page
1 0.0025 cache_reap
1 0.0025 call_rcu
1 0.0025 can_share_swap_page
1 0.0025 cfq_set_request
1 0.0025 check_userspace
1 0.0025 cleanup_timers
1 0.0025 clockevents_program_event
1 0.0025 clocksource_get_next
1 0.0025 compute_creds
1 0.0025 copy_from_user
1 0.0025 copy_strings
1 0.0025 create_read_pipe
1 0.0025 current_fs_time
1 0.0025 current_io_context
1 0.0025 d_instantiate
1 0.0025 d_splice_alias
1 0.0025 debug_mutex_unlock
1 0.0025 delayed_put_task_struct
1 0.0025 dnotify_flush
1 0.0025 dnotify_parent
1 0.0025 do_exit
1 0.0025 do_fcntl
1 0.0025 do_lookup
1 0.0025 do_munmap
1 0.0025 do_notify_resume
1 0.0025 do_softirq
1 0.0025 do_sys_open
1 0.0025 do_wait
1 0.0025 down_read_trylock
1 0.0025 down_write
1 0.0025 dput
1 0.0025 dummy_inode_getattr
1 0.0025 eligible_child
1 0.0025 elv_dispatch_sort
1 0.0025 end_buffer_write_sync
1 0.0025 enqueue_hrtimer
1 0.0025 exit_mmap
1 0.0025 fget
1 0.0025 fget_light
1 0.0025 file_read_actor
1 0.0025 filp_close
1 0.0025 find_busiest_group
1 0.0025 find_lock_page
1 0.0025 find_next_zero_bit
1 0.0025 find_or_create_page
1 0.0025 find_vma_prev
1 0.0025 flush_old_exec
1 0.0025 flush_sigqueue
1 0.0025 flush_tlb_page
1 0.0025 fput
1 0.0025 free_block
1 0.0025 free_page_and_swap_cache
1 0.0025 free_pgtables
1 0.0025 free_poll_entry
1 0.0025 generic_file_mmap
1 0.0025 generic_segment_checks
1 0.0025 get_empty_filp
1 0.0025 get_next_timer_interrupt
1 0.0025 getname
1 0.0025 inode_change_ok
1 0.0025 internal_add_timer
1 0.0025 invalidate_inode_buffers
1 0.0025 kill_fasync
1 0.0025 kmap
1 0.0025 kmem_cache_zalloc
1 0.0025 ktime_get
1 0.0025 kunmap_atomic
1 0.0025 ll_back_merge_fn
1 0.0025 lookup_hash
1 0.0025 lookup_mnt
1 0.0025 lru_cache_add_active
1 0.0025 mempool_free_slab
1 0.0025 mm_alloc
1 0.0025 mm_release
1 0.0025 mmput
1 0.0025 mntput_no_expire
1 0.0025 native_set_pte_at
1 0.0025 number
1 0.0025 page_add_file_rmap
1 0.0025 page_add_new_anon_rmap
1 0.0025 page_remove_rmap
1 0.0025 page_waitqueue
1 0.0025 path_release
1 0.0025 peer_avl_rebalance
1 0.0025 percpu_counter_mod
1 0.0025 permission
1 0.0025 pipe_write
1 0.0025 prio_tree_replace
1 0.0025 proc_flush_task
1 0.0025 proc_pid_lookup
1 0.0025 radix_tree_tag_set
1 0.0025 rb_insert_color
1 0.0025 remove_vma
1 0.0025 restore_all
1 0.0025 rm_from_queue_full
1 0.0025 rw_verify_area
1 0.0025 sched_balance_self
1 0.0025 set_bh_page
1 0.0025 set_normalized_timespec
1 0.0025 set_task_comm
1 0.0025 setup_sigcontext
1 0.0025 show_stat
1 0.0025 sync_page
1 0.0025 sys_dup2
1 0.0025 sys_faccessat
1 0.0025 sys_ftruncate
1 0.0025 sys_read
1 0.0025 sys_rt_sigprocmask
1 0.0025 sys_select
1 0.0025 task_rq_lock
1 0.0025 task_running_tick
1 0.0025 tcp_poll
1 0.0025 tick_do_update_jiffies64
1 0.0025 tick_nohz_stop_sched_tick
1 0.0025 try_to_wake_up
1 0.0025 unix_poll
1 0.0025 unlock_buffer
1 0.0025 up_read
1 0.0025 update_wall_time
1 0.0025 vfs_getattr
1 0.0025 vfs_permission
1 0.0025 vfs_read
1 0.0025 vm_acct_memory
1 0.0025 vsnprintf
1 0.0025 wake_up_inode
1 0.0025 wake_up_new_task
1 0.0025 work_notifysig
[-- Attachment #4: vmstat_bad.txt --]
[-- Type: text/plain, Size: 936 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
4 2 0 161676 9716 55700 0 0 248 2020 50 131 5 4 84 7
4 2 0 161668 9716 55704 0 0 0 16000 253 504 80 20 0 0
3 2 0 161668 9716 55704 0 0 0 16000 249 522 66 34 0 0
3 2 0 161608 9716 55704 0 0 0 15936 251 508 80 20 0 0
3 2 0 161548 9716 55704 0 0 0 15936 249 523 73 28 0 0
4 2 0 161488 9716 55704 0 0 0 15936 250 517 84 16 0 0
3 2 0 161488 9724 55704 0 0 0 15876 252 469 57 43 0 0
4 2 0 161428 9724 55704 0 0 0 15872 253 504 60 40 0 0
3 2 0 161428 9724 55704 0 0 0 15936 248 500 24 76 0 0
4 2 0 161428 9740 55704 0 0 0 15840 258 475 52 48 0 0
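The `bo` column in the capture above is in blocks (1 KiB each) written out per second, so the steady ~16000 figure is consistent with the roughly 15MB/s write throughput mentioned earlier in the thread:

```python
# Convert vmstat's "bo" (blocks written out per second) to MiB/s.
bo_blocks_per_s = 16000        # steady value from the vmstat capture above
block_size_kib = 1             # vmstat reports 1 KiB blocks

mib_per_s = bo_blocks_per_s * block_size_kib / 1024
print(f"{mib_per_s:.1f} MiB/s")  # 15.6 MiB/s
```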
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-06 19:18 ` Dimitrios Apostolou
2007-08-06 19:48 ` Alan Cox
@ 2007-08-06 22:12 ` Rafał Bilski
2007-08-07 0:49 ` Dimitrios Apostolou
1 sibling, 1 reply; 26+ messages in thread
From: Rafał Bilski @ 2007-08-06 22:12 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: linux-kernel
> Hello Rafal,
Hello,
> However I find it quite possible that the throughput limit has been
> reached because of software (driver) problems. I have done various testing
> (mostly "hdparm -tT") with exactly the same PC and disks since about
> kernel 2.6.8 (maybe even earlier). I remember with certainty that read
> throughput in the early days was about 50MB/s for each of the big disks,
> and combined with RAID 0 I got ~75MB/s. Those figures have been dropping
> gradually with each new kernel release, and the situation today, with
> 2.6.22, is that hdparm reports a maximum throughput of 20MB/s for each
> disk, and for RAID 0 too!
Just tested (plain curiosity).
via82cxxx average result @533MHz:
/dev/hda:
Timing cached reads: 232 MB in 2.00 seconds = 115.93 MB/sec
Timing buffered disk reads: 64 MB in 3.12 seconds = 20.54 MB/sec
pata_via average result @533MHz:
/dev/sda:
Timing cached reads: 234 MB in 2.01 seconds = 116.27 MB/sec
Timing buffered disk reads: 82 MB in 3.05 seconds = 26.92 MB/sec
Same 2.6.23-rc1-git11 kernel.
Yes - constant 6MB/s difference (31%). Cool.
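The quoted 31% figure follows directly from the two buffered-read rates in the hdparm runs above:

```python
# Buffered disk read rates from the hdparm -t runs above, in MB/s.
via82cxxx = 20.54   # old IDE (drivers/ide) driver
pata_via = 26.92    # libata pata_via driver

diff = pata_via - via82cxxx
print(f"{diff:.2f} MB/s faster ({diff / via82cxxx:.0%})")  # 6.38 MB/s faster (31%)
```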
> Dimitris
Regards
Rafał
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-07 0:40 ` Dimitrios Apostolou
@ 2007-08-07 0:37 ` Alan Cox
2007-08-07 13:15 ` Dimitrios Apostolou
0 siblings, 1 reply; 26+ messages in thread
From: Alan Cox @ 2007-08-07 0:37 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: Rafał Bilski, linux-kernel
> > acpi_pm_read is capable of disappearing into SMM traps which will make
> > it look very slow.
>
> what is an SMM trap? I googled a bit but didn't get it...
One of the less documented bits of the PC architecture. It is possible to
arrange for the CPU to jump into a special mode when triggered by some
specific external event. Originally this was used for things like APM and
power management, but some laptops use it to fake the keyboard interface,
and the Geode uses it for a great deal more.
As SMM mode is basically invisible to the OS, what oprofile and friends
see isn't what really occurs. So you see
pci write -> some address
you don't then see
SMM
CPU saves processor state
Lots of code runs (e.g. i2c polling the battery)
code executes RSM
Back to the OS
and the next visible profile point. This can make an I/O operation look
really slow even though it isn't the I/O that is slow.
> the reason I'm talking about a "software driver limit" is because I am
> sure about some facts:
> - The disks can reach very high speeds (60 MB/s on other systems with udma5)
Is UDMA5 actually being selected in the first place?
> So what is left? Probably only the corresponding kernel module.
Unlikely to be the disk driver, as that really hasn't changed tuning for a
very long time. I/O scheduler interactions are, however, very possible.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-06 19:48 ` Alan Cox
@ 2007-08-07 0:40 ` Dimitrios Apostolou
2007-08-07 0:37 ` Alan Cox
0 siblings, 1 reply; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-07 0:40 UTC (permalink / raw)
To: Alan Cox; +Cc: Rafał Bilski, linux-kernel
Hi Alan,
Alan Cox wrote:
>>> In your oprofile output I find "acpi_pm_read" particularly interesting.
>>> Unlike other VIA chipsets which I know, yours doesn't use VLink to
>>> connect the northbridge to the southbridge. Instead the PCI bus connects
>>> these two. As you probably know, the maximal PCI throughput is 133MiB/s.
>>> In theory. In practice probably less.
>
> acpi_pm_read is capable of disappearing into SMM traps which will make
> it look very slow.
what is an SMM trap? I googled a bit but didn't get it...
>
>> about 15MB/s for both disks. When reading I get about 30MB/s again from
>> both disks. The other disk, the small one, is mostly idle, except for
>> writing little bits and bytes now and then. Since the problem occurs
>> when writing, 15MB/s is just too little I think for the PCI bus.
>
> Its about right for some of the older VIA chipsets but if you are seeing
> speed loss then we need to know precisely which kernels the speed dropped
> at. Could be there is an I/O scheduling issue your system shows up or
> some kind of PCI bus contention when both disks are active at once.
I am sure throughput kept diminishing little by little across many 2.6
releases, and that it wasn't a major regression in one specific version.
Unfortunately I cannot back up my words with measurements from older
kernels right now, since the system is hard to boot with them (new udev,
new glibc). However I promise I'll test in the future (probably using
old liveCDs) and come back with proof.
>
>> I have been ignoring these performance regressions because of no
>> stability problems until now. So could it be that I'm reaching the
>> 20MB/s driver limit and some requests take too long to be served?
>
> Nope.
the reason I'm talking about a "software driver limit" is that I am
sure about some facts:
- The disks can reach very high speeds (60 MB/s on other systems with udma5)
- The chipset on this specific motherboard can reach much higher
numbers, as was measured with old kernels.
- No cable problems (have been changed), no strange dmesg output.
So what is left? Probably only the corresponding kernel module.
Thanks,
Dimitris
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-06 22:12 ` Rafał Bilski
@ 2007-08-07 0:49 ` Dimitrios Apostolou
2007-08-07 9:03 ` Rafał Bilski
0 siblings, 1 reply; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-07 0:49 UTC (permalink / raw)
To: Rafał Bilski; +Cc: linux-kernel
Rafał Bilski wrote:
>> Hello Rafal,
> Hello,
>> However I find it quite possible to have reached the throughput limit
>> because of software (driver) problems. I have done various testing
>> (mostly "hdparm -tT" with exactly the same PC and disks since about
>> kernel 2.6.8 (maybe even earlier). I remember with certainty that read
>> throughput the early days was about 50MB/s for each of the big disks,
>> and combined with RAID 0 I got ~75MB/s. Those figures have been
>> dropping gradually with each new kernel release and the situation
>> today, with 2.6.22, is that hdparm gives maximum throughput 20MB/s for
>> each disk, and for RAID 0 too!
> Just tested (plain curiosity).
> via82cxxx average result @533MHz:
> /dev/hda:
> Timing cached reads: 232 MB in 2.00 seconds = 115.93 MB/sec
> Timing buffered disk reads: 64 MB in 3.12 seconds = 20.54 MB/sec
> pata_via average result @533MHz:
> /dev/sda:
> Timing cached reads: 234 MB in 2.01 seconds = 116.27 MB/sec
> Timing buffered disk reads: 82 MB in 3.05 seconds = 26.92 MB/sec
Interesting! I haven't tried libata myself on that system, I only have
remote access to it so I'm a bit afraid...
Rafal, I hope that system you run hdparm on isn't the archlinux one! Is
it easy to load an old kernel (even two years old) and do the same test?
If it is, please let me know of the results.
Thanks in advance,
Dimitris
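For comparing runs like the ones quoted above, the MB/sec figures can be pulled out of saved hdparm -tT output (a sketch; it assumes hdparm's usual "... = N MB/sec" line format, and the sample file path is arbitrary):

```shell
# Extract cached/buffered throughput figures from a saved hdparm -tT run.
cat > /tmp/hdparm_run.txt <<'EOF'
 Timing cached reads:   232 MB in  2.00 seconds = 115.93 MB/sec
 Timing buffered disk reads:   64 MB in  3.12 seconds =  20.54 MB/sec
EOF
# Split each "Timing" line on "=" and print the first token after it (MB/sec).
awk -F'= *' '/Timing/ { split($2, a, " "); print ($0 ~ /cached/ ? "cached" : "buffered"), a[1] }' /tmp/hdparm_run.txt
```

Running several kernels' results through this makes the gradual regression easy to tabulate side by side.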
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-07 0:49 ` Dimitrios Apostolou
@ 2007-08-07 9:03 ` Rafał Bilski
2007-08-07 9:43 ` Dimitrios Apostolou
0 siblings, 1 reply; 26+ messages in thread
From: Rafał Bilski @ 2007-08-07 9:03 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: linux-kernel
>> Just tested (plain curiosity).
>> via82cxxx average result @533MHz:
>> /dev/hda:
>> Timing cached reads: 232 MB in 2.00 seconds = 115.93 MB/sec
>> Timing buffered disk reads: 64 MB in 3.12 seconds = 20.54 MB/sec
>> pata_via average result @533MHz:
>> /dev/sda:
>> Timing cached reads: 234 MB in 2.01 seconds = 116.27 MB/sec
>> Timing buffered disk reads: 82 MB in 3.05 seconds = 26.92 MB/sec
>
> Interesting! I haven't tried libata myself on that system, I only have
> remote access to it so I'm a bit afraid...
Just change root=/dev/hda1 to append="root=/dev/sda1" in lilo.conf,
and update fstab accordingly. If you don't change "/" the system will
drop into single-user mode after reboot.
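Sketched out, the change could look like this (the kernel image name and partition numbers here are assumptions; only the root= device renaming from hdX to sdX matters):

```
# /etc/lilo.conf -- same kernel, root named the libata (sdX) way
image=/boot/vmlinuz26
        label=libata-test
        read-only
        append="root=/dev/sda1"

# /etc/fstab -- the "/" entry must be updated to match
/dev/sda1   /   ext3   defaults   0   1
```

Re-run lilo afterwards, and keep the old hda entry around as a fallback boot choice.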
> Rafal, I hope that system you run hdparm on isn't the archlinux one! Is
> it easy to load an old kernel (even two years old) and do the same test?
> If it is, please let me know of the results.
I don't think it is possible. If I remember right, the kernel can't be
older than the glibc kernel headers.
Btw. my disk is a 20GB 2.5" ATA33. I wonder how 26MB/s is even possible;
I wouldn't expect more.
>
> Thanks in advance,
> Dimitris
Regards
Rafał
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-07 9:03 ` Rafał Bilski
@ 2007-08-07 9:43 ` Dimitrios Apostolou
0 siblings, 0 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-07 9:43 UTC (permalink / raw)
To: Rafał Bilski; +Cc: linux-kernel
On Tuesday 07 August 2007 12:03:28 Rafał Bilski wrote:
> >> Just tested (plain curiosity).
> >> via82cxxx average result @533MHz:
> >> /dev/hda:
> >> Timing cached reads: 232 MB in 2.00 seconds = 115.93 MB/sec
> >> Timing buffered disk reads: 64 MB in 3.12 seconds = 20.54 MB/sec
> >> pata_via average result @533MHz:
> >> /dev/sda:
> >> Timing cached reads: 234 MB in 2.01 seconds = 116.27 MB/sec
> >> Timing buffered disk reads: 82 MB in 3.05 seconds = 26.92 MB/sec
> >
> > Interesting! I haven't tried libata myself on that system, I only have
> > remote access to it so I'm a bit afraid...
>
> Just change root=/dev/hda1 to append="root=/dev/sda1" in lilo.conf,
> and update fstab accordingly. If you don't change "/" the system will
> drop into single-user mode after reboot.
>
> > Rafal, I hope that system you run hdparm on isn't the archlinux one! Is
> > it easy to load an old kernel (even two years old) and do the same test?
> > If it is, please let me know of the results.
>
> I don't think it is possible. If I remember right, the kernel can't be
> older than the glibc kernel headers.
> Btw. my disk is a 20GB 2.5" ATA33. I wonder how 26MB/s is even possible;
> I wouldn't expect more.
Aha! So perhaps libata gives an even greater performance benefit for
better drives. I'll try to get local access to my PC and try it. Thanks.
Dimitris
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-07 0:37 ` Alan Cox
@ 2007-08-07 13:15 ` Dimitrios Apostolou
0 siblings, 0 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-07 13:15 UTC (permalink / raw)
To: Alan Cox; +Cc: Rafał Bilski, linux-kernel
On Tuesday 07 August 2007 03:37:08 Alan Cox wrote:
> > > acpi_pm_read is capable of disappearing into SMM traps which will make
> > > it look very slow.
> >
> > what is an SMM trap? I googled a bit but didn't get it...
>
> One of the less documented bits of the PC architecture. It is possible to
> arrange that the CPU jumps into a special mode when triggered by some
> specific external event. Originally this was used for stuff like APM and
> power management but some laptops use it for stuff like faking the
> keyboard interface and the Geode uses it for tons of stuff.
>
> As SMM mode is basically invisible to the OS what oprofile and friends
> see isn't what really occurs. So you see
>
> pci write -> some address
>
> you don't then see
>
> SMM
> CPU saves processor state
> Lots of code runs (eg i2c polling the battery)
> code executes RSM
>
> Back to the OS
>
> and the next visible profile point. This can make an I/O operation look
> really slow even if it isn't the I/O which is slow.
I always thought x86 was becoming a really dirty architecture. Now I think
it is even uglier. :-p Thank you for the thorough explanation.
>
> > the reason I'm talking about a "software driver limit" is that I am
> > sure about some facts:
> > - The disks can reach very high speeds (60 MB/s on other systems with
> > udma5)
>
> Is UDMA5 being selected in the first place?
What the kernel selects by default is udma4 (66MB/s). I tried forcing udma5
(100MB/s) with hdparm even though I think my chipset doesn't support it, and
indeed there was a difference! After repeated tests udma4 gives 20MB/s and
udma5 gives 22MB/s. I'm mostly surprised, however, that I could even set this
option.
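For reference, hdparm encodes UDMA modes as 64 plus the level, so udma4 is -X68 and udma5 is -X69. A cautious sketch (the device name is assumed; forcing a mode the chipset or cable can't handle risks data corruption, so this only prints the commands instead of running them):

```shell
#!/bin/sh
# Print the hdparm commands for forcing a given UDMA level on a disk.
DISK=${1:-/dev/hda}          # assumed device name
LEVEL=${2:-5}                # udma5
MODE=$((64 + LEVEL))         # hdparm -X encoding: 64 + UDMA level
echo "hdparm -X${MODE} ${DISK}   # force udma${LEVEL}"
echo "hdparm -tT ${DISK}         # re-measure throughput afterwards"
```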
>
> > So what is left? Probably only the corresponding kernel module.
>
> Unlikely to be the disk driver as that really hasn't changed tuning for a
> very long time. I/O scheduler interactions are however very possible.
I'm now trying to use the new libata driver and see what happens...
Thanks,
Dimitris
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-03 16:03 high system cpu load during intense disk i/o Dimitrios Apostolou
2007-08-05 16:03 ` Dimitrios Apostolou
@ 2007-08-07 14:50 ` Dimitrios Apostolou
2007-08-08 19:08 ` Rafał Bilski
1 sibling, 1 reply; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-07 14:50 UTC (permalink / raw)
To: linux-kernel; +Cc: Alan Cox, Rafał Bilski, Andrew Morton
[-- Attachment #1: Type: text/plain, Size: 535 bytes --]
Hello again,
I'm now using libata on the same system described before (see attached
dmesg.txt). When writing to both disks the problem seems even worse
(pata_oprof_bad.txt, pata_vmstat_bad.txt); even the oprofile script needed
half an hour to complete! For completeness I also attach the same tests when
writing to only one disk (pata_vmstat_1disk.txt, pata_oprof_1disk.txt), where
everything is normal.
FWIW, libata did not give me any performance benefit; 20MB/s is again the
peak that hdparm reports.
Thanks,
Dimitris
[-- Attachment #2: pata_vmstat_1disk.txt --]
[-- Type: text/plain, Size: 936 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 1 0 128812 19620 82484 0 0 117 7817 355 708 24 50 22 4
0 1 0 128804 19620 82484 0 0 0 21120 330 673 0 1 0 99
0 1 0 128804 19620 82484 0 0 0 21184 341 683 0 4 0 96
0 1 0 128804 19620 82484 0 0 0 21120 335 675 0 4 0 96
0 1 0 128804 19620 82484 0 0 0 21124 340 682 0 3 0 97
0 1 0 128744 19620 82484 0 0 0 21120 341 678 0 2 0 98
1 1 0 128744 19628 82484 0 0 0 20980 339 687 0 2 0 98
1 1 0 128744 19628 82484 0 0 0 21120 346 675 0 3 0 97
1 1 0 128744 19628 82484 0 0 0 21120 345 679 0 4 0 96
0 1 0 128744 19628 82484 0 0 0 21128 337 682 0 3 0 97
[-- Attachment #3: pata_oprof_1disk.txt --]
[-- Type: text/plain, Size: 18351 bytes --]
Tue Aug 7 17:47:43 EEST 2007
+ opcontrol --vmlinux=/usr/src/linux-2.6.22-ARCH/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.031 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
40037 96.5142 vmlinux
413 0.9956 libc-2.6.so
225 0.5424 oprofiled
CPU_CLK_UNHALT...|
samples| %|
------------------
224 99.5556 oprofiled
1 0.4444 anon (tgid:5252 range:0xb7fb6000-0xb7fb7000)
215 0.5183 bash
208 0.5014 ld-2.6.so
87 0.2097 ext3
87 0.2097 oprofile
68 0.1639 libata
54 0.1302 ISO8859-1.so
21 0.0506 jbd
11 0.0265 badblocks
CPU_CLK_UNHALT...|
samples| %|
------------------
8 72.7273 badblocks
3 27.2727 anon (tgid:5166 range:0xb7f1d000-0xb7f1e000)
7 0.0169 imap-login
6 0.0145 grep
CPU_CLK_UNHALT...|
samples| %|
------------------
5 83.3333 grep
1 16.6667 anon (tgid:5267 range:0x805b000-0x807c000)
6 0.0145 libext2fs.so.2.4
6 0.0145 locale-archive
5 0.0121 sd_mod
3 0.0072 gawk
3 0.0072 libcrypto.so.0.9.8
2 0.0048 tr
2 0.0048 libncurses.so.5.6
2 0.0048 screen-4.0.3
2 0.0048 libnetsnmp.so.15.0.0
2 0.0048 dovecot
2 0.0048 sshd
1 0.0024 ls
1 0.0024 libdl-2.6.so
1 0.0024 libnss_files-2.6.so
1 0.0024 libpcre.so.0.0.1
1 0.0024 libreadline.so.5.2
1 0.0024 dirname
1 0.0024 which
1 0.0024 libpopt.so.0.0.0
1 0.0024 dovecot-auth
+ echo
+ echo
+ echo
+ opreport -l /usr/src/linux-2.6.22-ARCH/vmlinux
CPU: PIII, speed 798.031 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
28617 71.4764 cpu_idle
4163 10.3979 poll_idle
3220 8.0426 quicklist_trim
1465 3.6591 check_pgt_cache
414 1.0340 delay_tsc
188 0.4696 do_wp_page
115 0.2872 iowrite8
82 0.2048 ioread8
67 0.1673 mask_and_ack_8259A
65 0.1623 __handle_mm_fault
52 0.1299 get_page_from_freelist
45 0.1124 __copy_to_user_ll
44 0.1099 __blockdev_direct_IO
43 0.1074 unmap_vmas
34 0.0849 put_page
30 0.0749 page_fault
28 0.0699 schedule
22 0.0549 follow_page
20 0.0500 __bio_add_page
20 0.0500 blk_rq_map_sg
19 0.0475 __d_lookup
19 0.0475 enable_8259A_irq
17 0.0425 kmem_cache_free
16 0.0400 filemap_nopage
16 0.0400 kmem_cache_alloc
15 0.0375 __link_path_walk
15 0.0375 find_get_page
14 0.0350 copy_process
14 0.0350 page_address
12 0.0300 __switch_to
12 0.0300 blk_recount_segments
12 0.0300 getnstimeofday
11 0.0275 generic_make_request
11 0.0275 radix_tree_lookup
11 0.0275 read_tsc
11 0.0275 sysenter_past_esp
10 0.0250 __generic_file_aio_write_nolock
10 0.0250 copy_page_range
9 0.0225 cfq_insert_request
9 0.0225 do_generic_mapping_read
9 0.0225 do_page_fault
9 0.0225 enqueue_hrtimer
9 0.0225 find_vma
9 0.0225 ktime_get_ts
9 0.0225 load_elf_binary
8 0.0200 __mod_timer
8 0.0200 cfq_dispatch_requests
8 0.0200 dio_send_cur_page
8 0.0200 get_next_timer_interrupt
8 0.0200 handle_level_irq
8 0.0200 mempool_free
8 0.0200 rb_insert_color
8 0.0200 sched_clock
8 0.0200 strnlen_user
8 0.0200 submit_page_section
7 0.0175 __mutex_lock_slowpath
7 0.0175 blk_backing_dev_unplug
7 0.0175 dio_bio_add_page
7 0.0175 do_IRQ
7 0.0175 hrtimer_force_reprogram
7 0.0175 iowrite32
7 0.0175 try_to_wake_up
6 0.0150 __do_softirq
6 0.0150 do_mmap_pgoff
6 0.0150 fget_light
6 0.0150 find_busiest_group
6 0.0150 permission
6 0.0150 rb_erase
6 0.0150 scsi_dispatch_cmd
6 0.0150 task_running_tick
5 0.0125 __add_entropy_words
5 0.0125 __const_udelay
5 0.0125 __mutex_unlock_slowpath
5 0.0125 __rcu_pending
5 0.0125 _atomic_dec_and_lock
5 0.0125 blk_remove_plug
5 0.0125 block_llseek
5 0.0125 cfq_set_request
5 0.0125 clockevents_program_event
5 0.0125 debug_mutex_add_waiter
5 0.0125 del_timer
5 0.0125 dio_bio_complete
5 0.0125 elv_completed_request
5 0.0125 get_request
5 0.0125 hweight32
5 0.0125 kfree
5 0.0125 lock_timer_base
5 0.0125 memcpy
5 0.0125 native_load_tls
5 0.0125 preempt_schedule
5 0.0125 scsi_get_command
5 0.0125 submit_bio
5 0.0125 tick_nohz_stop_sched_tick
5 0.0125 update_wall_time
4 0.0100 __dentry_open
4 0.0100 __end_that_request_first
4 0.0100 __make_request
4 0.0100 _spin_lock_irqsave
4 0.0100 add_timer_randomness
4 0.0100 alloc_inode
4 0.0100 bio_add_page
4 0.0100 block_read_full_page
4 0.0100 cfq_remove_request
4 0.0100 cond_resched
4 0.0100 dequeue_task
4 0.0100 dio_get_page
4 0.0100 do_sync_write
4 0.0100 dup_fd
4 0.0100 elv_insert
4 0.0100 error_code
4 0.0100 filp_close
4 0.0100 find_next_zero_bit
4 0.0100 flush_tlb_mm
4 0.0100 free_pgtables
4 0.0100 generic_permission
4 0.0100 generic_unplug_device
4 0.0100 get_user_pages
4 0.0100 hrtimer_try_to_cancel
4 0.0100 lock_hrtimer_base
4 0.0100 max_block
4 0.0100 rw_verify_area
4 0.0100 scsi_request_fn
4 0.0100 select_nohz_load_balancer
4 0.0100 tick_nohz_restart_sched_tick
4 0.0100 unlink_file_vma
3 0.0075 __alloc_pages
3 0.0075 __copy_from_user_ll
3 0.0075 __copy_user_intel
3 0.0075 __find_get_block
3 0.0075 __find_get_block_slow
3 0.0075 __fput
3 0.0075 __kmalloc
3 0.0075 __mutex_init
3 0.0075 __pte_alloc
3 0.0075 __rcu_process_callbacks
3 0.0075 __scsi_put_command
3 0.0075 __wake_up_bit
3 0.0075 anon_vma_link
3 0.0075 arch_get_unmapped_area_topdown
3 0.0075 bio_alloc_bioset
3 0.0075 bio_free
3 0.0075 bio_init
3 0.0075 blk_queue_bounce
3 0.0075 blkdev_get_blocks
3 0.0075 cache_alloc_refill
3 0.0075 call_rcu
3 0.0075 cfq_completed_request
3 0.0075 cfq_service_tree_add
3 0.0075 d_alloc
3 0.0075 debug_mutex_lock_common
3 0.0075 dnotify_parent
3 0.0075 do_path_lookup
3 0.0075 do_softirq
3 0.0075 do_sync_read
3 0.0075 do_sys_poll
3 0.0075 drive_stat_acct
3 0.0075 elv_next_request
3 0.0075 file_update_time
3 0.0075 free_block
3 0.0075 generic_file_aio_write_nolock
3 0.0075 generic_file_direct_write
3 0.0075 generic_segment_checks
3 0.0075 get_empty_filp
3 0.0075 get_request_wait
3 0.0075 get_unused_fd
3 0.0075 getname
3 0.0075 hrtimer_forward
3 0.0075 hrtimer_start
3 0.0075 inode_init_once
3 0.0075 inotify_d_instantiate
3 0.0075 io_schedule
3 0.0075 irq_entries_start
3 0.0075 kmem_cache_zalloc
3 0.0075 kunmap_atomic
3 0.0075 mark_page_accessed
3 0.0075 mutex_remove_waiter
3 0.0075 notifier_call_chain
3 0.0075 pipe_read
3 0.0075 rb_next
3 0.0075 recalc_task_prio
3 0.0075 run_timer_softirq
3 0.0075 scsi_device_unbusy
3 0.0075 scsi_finish_command
3 0.0075 scsi_io_completion
3 0.0075 scsi_run_queue
3 0.0075 set_normalized_timespec
3 0.0075 smp_apic_timer_interrupt
3 0.0075 sys_lseek
3 0.0075 vfs_write
2 0.0050 __atomic_notifier_call_chain
2 0.0050 __blk_put_request
2 0.0050 __d_path
2 0.0050 __dec_zone_state
2 0.0050 __do_page_cache_readahead
2 0.0050 __getblk
2 0.0050 __inc_zone_page_state
2 0.0050 __inc_zone_state
2 0.0050 __remove_hrtimer
2 0.0050 __rmqueue
2 0.0050 __scsi_get_command
2 0.0050 __wake_up
2 0.0050 account_system_time
2 0.0050 alloc_pid
2 0.0050 anon_vma_unlink
2 0.0050 atomic_notifier_call_chain
2 0.0050 bit_waitqueue
2 0.0050 blk_do_ordered
2 0.0050 cfq_init_prio_data
2 0.0050 cfq_queue_empty
2 0.0050 clocksource_get_next
2 0.0050 common_interrupt
2 0.0050 copy_to_user
2 0.0050 current_fs_time
2 0.0050 deactivate_task
2 0.0050 debug_mutex_unlock
2 0.0050 dentry_iput
2 0.0050 dio_bio_end_io
2 0.0050 dio_new_bio
2 0.0050 disk_round_stats
2 0.0050 do_exit
2 0.0050 do_lookup
2 0.0050 do_mremap
2 0.0050 do_sigaction
2 0.0050 do_wait
2 0.0050 dummy_capget
2 0.0050 dummy_inode_permission
2 0.0050 elv_may_queue
2 0.0050 elv_queue_empty
2 0.0050 elv_rqhash_del
2 0.0050 end_that_request_last
2 0.0050 enqueue_task
2 0.0050 exit_itimers
2 0.0050 fget
2 0.0050 file_move
2 0.0050 file_read_actor
2 0.0050 find_or_create_page
2 0.0050 flush_signal_handlers
2 0.0050 flush_tlb_page
2 0.0050 fput
2 0.0050 free_hot_cold_page
2 0.0050 free_page_and_swap_cache
2 0.0050 generic_fillattr
2 0.0050 get_signal_to_deliver
2 0.0050 init_request_from_bio
2 0.0050 inotify_inode_queue_event
2 0.0050 internal_add_timer
2 0.0050 irq_enter
2 0.0050 irq_exit
2 0.0050 kill_fasync
2 0.0050 kref_put
2 0.0050 ktime_get
2 0.0050 link_path_walk
2 0.0050 mempool_alloc
2 0.0050 mm_release
2 0.0050 native_load_esp0
2 0.0050 native_read_tsc
2 0.0050 page_cache_readahead
2 0.0050 pipe_poll
2 0.0050 radix_tree_insert
2 0.0050 raise_softirq
2 0.0050 rcu_needs_cpu
2 0.0050 recalc_sigpending_tsk
2 0.0050 release_pages
2 0.0050 release_task
2 0.0050 run_posix_cpu_timers
2 0.0050 sched_balance_self
2 0.0050 scheduler_tick
2 0.0050 scsi_done
2 0.0050 scsi_prep_fn
2 0.0050 scsi_put_command
2 0.0050 scsi_softirq_done
2 0.0050 sys_fstat64
2 0.0050 sys_mprotect
2 0.0050 sys_open
2 0.0050 tasklet_action
2 0.0050 tick_do_update_jiffies64
2 0.0050 tick_nohz_update_jiffies
2 0.0050 unlock_buffer
2 0.0050 vm_normal_page
2 0.0050 vma_adjust
2 0.0050 vma_prio_tree_add
2 0.0050 vsnprintf
1 0.0025 __activate_task
1 0.0025 __block_prepare_write
1 0.0025 __dequeue_signal
1 0.0025 __elv_add_request
1 0.0025 __free_pages_ok
1 0.0025 __generic_unplug_device
1 0.0025 __get_free_pages
1 0.0025 __get_user_4
1 0.0025 __mark_inode_dirty
1 0.0025 __mod_zone_page_state
1 0.0025 __mutex_lock_interruptible_slowpath
1 0.0025 __page_set_anon_rmap
1 0.0025 __path_lookup_intent_open
1 0.0025 __remove_shared_vm_struct
1 0.0025 __scsi_done
1 0.0025 __sigqueue_alloc
1 0.0025 __sock_create
1 0.0025 __tasklet_schedule
1 0.0025 __tcp_push_pending_frames
1 0.0025 __user_walk_fd
1 0.0025 __vm_enough_memory
1 0.0025 alloc_page_buffers
1 0.0025 anon_vma_prepare
1 0.0025 arch_align_stack
1 0.0025 arch_setup_additional_pages
1 0.0025 autoremove_wake_function
1 0.0025 bio_endio
1 0.0025 bio_fs_destructor
1 0.0025 bio_put
1 0.0025 blk_plug_device
1 0.0025 blk_run_queue
1 0.0025 blk_start_queueing
1 0.0025 blk_unplug_timeout
1 0.0025 blkdev_direct_IO
1 0.0025 cache_reap
1 0.0025 can_vma_merge_after
1 0.0025 cfq_add_rq_rb
1 0.0025 cfq_cic_rb_lookup
1 0.0025 cfq_may_queue
1 0.0025 cleanup_timers
1 0.0025 clear_bdi_congested
1 0.0025 clear_inode
1 0.0025 clocksource_watchdog
1 0.0025 copy_strings_kernel
1 0.0025 cp_new_stat64
1 0.0025 create_empty_buffers
1 0.0025 credit_entropy_store
1 0.0025 d_rehash
1 0.0025 datagram_poll
1 0.0025 dec_zone_page_state
1 0.0025 dequeue_signal
1 0.0025 dev_watchdog
1 0.0025 dio_bio_submit
1 0.0025 dio_cleanup
1 0.0025 dio_complete
1 0.0025 dio_zero_block
1 0.0025 do_select
1 0.0025 do_sys_open
1 0.0025 do_timer
1 0.0025 down_read_trylock
1 0.0025 dput
1 0.0025 drain_array
1 0.0025 dummy_bprm_alloc_security
1 0.0025 dummy_file_alloc_security
1 0.0025 dummy_task_alloc_security
1 0.0025 elf_map
1 0.0025 elv_dequeue_request
1 0.0025 elv_dispatch_sort
1 0.0025 elv_rb_add
1 0.0025 elv_rb_del
1 0.0025 elv_rqhash_add
1 0.0025 elv_set_request
1 0.0025 exit_mmap
1 0.0025 expand_files
1 0.0025 fasync_helper
1 0.0025 fd_install
1 0.0025 file_ra_state_init
1 0.0025 find_extend_vma
1 0.0025 find_next_bit
1 0.0025 find_vma_prev
1 0.0025 flush_old_exec
1 0.0025 flush_thread
1 0.0025 free_pid
1 0.0025 free_poll_entry
1 0.0025 generic_file_aio_read
1 0.0025 generic_file_direct_IO
1 0.0025 generic_file_open
1 0.0025 get_device
1 0.0025 get_nr_files
1 0.0025 get_task_mm
1 0.0025 hrtimer_get_next_event
1 0.0025 hrtimer_interrupt
1 0.0025 hrtimer_reprogram
1 0.0025 idle_cpu
1 0.0025 init_new_context
1 0.0025 init_page_buffers
1 0.0025 init_timer
1 0.0025 inode_change_ok
1 0.0025 inode_sub_bytes
1 0.0025 inotify_dentry_parent_queue_event
1 0.0025 iov_fault_in_pages_read
1 0.0025 ip_local_deliver
1 0.0025 ip_output
1 0.0025 kmap_atomic_prot
1 0.0025 kref_get
1 0.0025 kthread_should_stop
1 0.0025 lapic_next_event
1 0.0025 locks_remove_flock
1 0.0025 lru_cache_add_active
1 0.0025 may_open
1 0.0025 mempool_free_slab
1 0.0025 mod_timer
1 0.0025 mutex_unlock
1 0.0025 native_apic_write
1 0.0025 native_flush_tlb
1 0.0025 native_flush_tlb_single
1 0.0025 native_set_pte_at
1 0.0025 neigh_lookup
1 0.0025 new_inode
1 0.0025 note_interrupt
1 0.0025 notify_change
1 0.0025 number
1 0.0025 ordered_bio_endio
1 0.0025 page_add_file_rmap
1 0.0025 page_remove_rmap
1 0.0025 page_waitqueue
1 0.0025 path_lookup_open
1 0.0025 path_walk
1 0.0025 percpu_counter_mod
1 0.0025 pipe_write
1 0.0025 prepare_to_copy
1 0.0025 prio_tree_insert
1 0.0025 proc_flush_task
1 0.0025 proc_lookup
1 0.0025 put_io_context
1 0.0025 raise_softirq_irqoff
1 0.0025 rb_first
1 0.0025 rb_prev
1 0.0025 rcu_pending
1 0.0025 rcu_process_callbacks
1 0.0025 read_chan
1 0.0025 release_vm86_irqs
1 0.0025 remove_suid
1 0.0025 restore_nocheck
1 0.0025 resume_userspace
1 0.0025 ret_from_intr
1 0.0025 rq_init
1 0.0025 sched_exit
1 0.0025 schedule_tail
1 0.0025 schedule_timeout
1 0.0025 scsi_add_timer
1 0.0025 scsi_alloc_sgtable
1 0.0025 scsi_end_request
1 0.0025 scsi_free_sgtable
1 0.0025 scsi_get_cmd_from_req
1 0.0025 scsi_next_command
1 0.0025 search_binary_handler
1 0.0025 secure_ip_id
1 0.0025 seq_printf
1 0.0025 sha_transform
1 0.0025 show_stat
1 0.0025 special_mapping_nopage
1 0.0025 split_vma
1 0.0025 strncpy_from_user
1 0.0025 sys_brk
1 0.0025 sys_close
1 0.0025 sys_dup2
1 0.0025 sys_faccessat
1 0.0025 sys_gettimeofday
1 0.0025 sys_lookup_dcookie
1 0.0025 sys_mkdirat
1 0.0025 sys_read
1 0.0025 sys_rt_sigprocmask
1 0.0025 sys_set_thread_area
1 0.0025 sys_socketcall
1 0.0025 sys_wait4
1 0.0025 sys_write
1 0.0025 task_rq_lock
1 0.0025 tcp_ack
1 0.0025 tcp_poll
1 0.0025 tcp_v4_rcv
1 0.0025 tick_sched_timer
1 0.0025 tty_ioctl
1 0.0025 unix_poll
1 0.0025 up_write
1 0.0025 vfs_getattr
1 0.0025 vfs_llseek
1 0.0025 vfs_mkdir
1 0.0025 vfs_read
1 0.0025 vm_stat_account
1 0.0025 vma_link
1 0.0025 vma_merge
1 0.0025 vma_prio_tree_insert
1 0.0025 vma_prio_tree_remove
1 0.0025 wake_up_bit
1 0.0025 wake_up_new_task
+ date
Tue Aug 7 17:47:50 EEST 2007
[-- Attachment #4: pata_oprof_bad.txt --]
[-- Type: text/plain, Size: 25267 bytes --]
Tue Aug 7 17:06:32 EEST 2007
+ opcontrol --vmlinux=/usr/src/linux-2.6.22-ARCH/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.031 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
36680 82.6759 vmlinux
2666 6.0091 libc-2.6.so
1069 2.4095 perl
700 1.5778 libpython2.5.so.1.0
638 1.4380 mpop
595 1.3411 libata
326 0.7348 oprofiled
322 0.7258 ld-2.6.so
211 0.4756 libgnutls.so.13.3.0
211 0.4756 libtasn1.so.3.0.10
207 0.4666 bash
CPU_CLK_UNHALT...|
samples| %|
------------------
205 99.0338 bash
1 0.4831 anon (tgid:4308 range:0xb7f72000-0xb7f73000)
1 0.4831 anon (tgid:4321 range:0xb7ea8000-0xb7fd2000)
117 0.2637 ext3
95 0.2141 jbd
85 0.1916 oprofile
77 0.1736 imap-login
CPU_CLK_UNHALT...|
samples| %|
------------------
72 93.5065 imap-login
2 2.5974 anon (tgid:3959 range:0xb7f25000-0xb7f26000)
2 2.5974 anon (tgid:3960 range:0xb7ee9000-0xb7eea000)
1 1.2987 anon (tgid:4102 range:0xb7f92000-0xb7f93000)
51 0.1150 badblocks
CPU_CLK_UNHALT...|
samples| %|
------------------
31 60.7843 badblocks
16 31.3725 anon (tgid:4024 range:0xb7ef2000-0xb7ef3000)
4 7.8431 anon (tgid:4023 range:0xb7f20000-0xb7f21000)
49 0.1104 ISO8859-1.so
39 0.0879 libpthread-2.6.so
39 0.0879 skge
32 0.0721 sd_mod
31 0.0699 dovecot
CPU_CLK_UNHALT...|
samples| %|
------------------
30 96.7742 dovecot
1 3.2258 anon (tgid:3937 range:0xb7f8b000-0xb7f8c000)
17 0.0383 libext2fs.so.2.4
16 0.0361 libgcrypt.so.11.2.3
16 0.0361 dovecot-auth
CPU_CLK_UNHALT...|
samples| %|
------------------
14 87.5000 dovecot-auth
2 12.5000 anon (tgid:3940 range:0xb7fa8000-0xb7fa9000)
11 0.0248 locale-archive
10 0.0225 libnetsnmpmibs.so.15.0.0
9 0.0203 libcrypto.so.0.9.8
8 0.0180 libnetsnmp.so.15.0.0
6 0.0135 ls
5 0.0113 reiserfs
4 0.0090 gawk
3 0.0068 grep
3 0.0068 snmpd
CPU_CLK_UNHALT...|
samples| %|
------------------
2 66.6667 anon (tgid:3920 range:0xb7fd9000-0xb7fda000)
1 33.3333 snmpd
2 0.0045 libdl-2.6.so
2 0.0045 libnss_files-2.6.so
2 0.0045 mktemp
2 0.0045 libnetsnmpagent.so.15.0.0
2 0.0045 libpopt.so.0.0.0
2 0.0045 imap
1 0.0023 libattr.so.1.1.0
1 0.0023 libncurses.so.5.6
1 0.0023 python2.5
CPU_CLK_UNHALT...|
samples| %|
------------------
1 100.000 anon (tgid:4269 range:0xb7eed000-0xb7eee000)
1 0.0023 screen-4.0.3
1 0.0023 crond
1 0.0023 sshd
+ echo
+ echo
+ echo
+ opreport -l /usr/src/linux-2.6.22-ARCH/vmlinux
CPU: PIII, speed 798.031 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
9556 26.0523 delay_tsc
6424 17.5136 iowrite8
6178 16.8430 __switch_to
4015 10.9460 schedule
3335 9.0921 ioread8
1429 3.8959 iowrite32
363 0.9896 native_load_tls
205 0.5589 __blockdev_direct_IO
154 0.4198 dequeue_task
149 0.4062 mask_and_ack_8259A
129 0.3517 __copy_to_user_ll
125 0.3408 follow_page
118 0.3217 do_wp_page
97 0.2644 put_page
86 0.2345 blk_rq_map_sg
82 0.2236 blk_recount_segments
73 0.1990 __bio_add_page
67 0.1827 scsi_request_fn
64 0.1745 __handle_mm_fault
64 0.1745 unmap_vmas
63 0.1718 get_page_from_freelist
61 0.1663 hweight32
60 0.1636 task_running_tick
58 0.1581 __link_path_walk
54 0.1472 __make_request
54 0.1472 rb_erase
53 0.1445 do_page_fault
53 0.1445 hrtimer_interrupt
50 0.1363 kmem_cache_alloc
46 0.1254 irq_entries_start
46 0.1254 page_address
44 0.1200 static_prio_timeslice
42 0.1145 dio_bio_add_page
41 0.1118 sched_clock
40 0.1091 enable_8259A_irq
39 0.1063 dio_send_cur_page
38 0.1036 elv_insert
38 0.1036 filemap_nopage
38 0.1036 get_user_pages
37 0.1009 submit_page_section
36 0.0981 __d_lookup
36 0.0981 kmem_cache_free
36 0.0981 sysenter_past_esp
35 0.0954 page_fault
34 0.0927 dio_get_page
34 0.0927 restore_all
33 0.0900 __generic_file_aio_write_nolock
33 0.0900 cfq_insert_request
31 0.0845 __mutex_lock_slowpath
31 0.0845 blk_do_ordered
31 0.0845 cfq_dispatch_requests
30 0.0818 scsi_get_command
28 0.0763 _spin_lock_irqsave
28 0.0763 find_get_page
28 0.0763 generic_make_request
28 0.0763 preempt_schedule_irq
27 0.0736 cache_reap
26 0.0709 bio_add_page
25 0.0682 __mod_timer
25 0.0682 elv_next_request
25 0.0682 scsi_prep_fn
24 0.0654 dio_bio_complete
23 0.0627 cfq_set_request
23 0.0627 find_vma
23 0.0627 mempool_free
22 0.0600 __const_udelay
22 0.0600 fget_light
21 0.0573 bio_alloc_bioset
21 0.0573 del_timer
21 0.0573 do_sys_poll
20 0.0545 scsi_dispatch_cmd
19 0.0518 __add_entropy_words
19 0.0518 __generic_unplug_device
19 0.0518 mark_page_accessed
19 0.0518 native_read_tsc
18 0.0491 cfq_may_queue
18 0.0491 vm_normal_page
17 0.0463 __alloc_pages
17 0.0463 copy_process
17 0.0463 drain_array
17 0.0463 try_to_wake_up
17 0.0463 vsnprintf
16 0.0436 generic_file_direct_IO
16 0.0436 vfs_write
15 0.0409 __copy_from_user_ll
15 0.0409 blk_backing_dev_unplug
15 0.0409 block_llseek
15 0.0409 cfq_service_tree_add
15 0.0409 generic_file_direct_write
15 0.0409 ip_append_data
15 0.0409 proc_sys_lookup_table_one
15 0.0409 update_wall_time
14 0.0382 __find_get_block
14 0.0382 cfq_remove_request
14 0.0382 dio_cleanup
13 0.0354 cache_alloc_refill
13 0.0354 generic_file_aio_write_nolock
13 0.0354 math_state_restore
13 0.0354 mempool_alloc
13 0.0354 preempt_schedule
12 0.0327 __pollwait
12 0.0327 __wake_up_bit
12 0.0327 do_lookup
12 0.0327 elv_dequeue_request
12 0.0327 file_update_time
12 0.0327 get_request
12 0.0327 load_elf_binary
12 0.0327 lock_timer_base
12 0.0327 permission
12 0.0327 restore_nocheck
11 0.0300 __do_softirq
11 0.0300 __scsi_get_command
11 0.0300 add_timer_randomness
11 0.0300 cfq_choose_req
11 0.0300 current_fs_time
11 0.0300 do_generic_mapping_read
11 0.0300 free_block
11 0.0300 generic_permission
11 0.0300 init_request_from_bio
10 0.0273 blk_plug_device
10 0.0273 cond_resched
10 0.0273 dnotify_parent
10 0.0273 do_sync_write
10 0.0273 scsi_get_cmd_from_req
10 0.0273 scsi_init_cmd_errh
10 0.0273 smp_apic_timer_interrupt
9 0.0245 __find_get_block_slow
9 0.0245 __mutex_unlock_slowpath
9 0.0245 _atomic_dec_and_lock
9 0.0245 dio_bio_submit
9 0.0245 elv_dispatch_sort
9 0.0245 internal_add_timer
9 0.0245 kref_put
9 0.0245 radix_tree_lookup
9 0.0245 rw_verify_area
9 0.0245 strnlen_user
9 0.0245 tick_sched_timer
8 0.0218 __end_that_request_first
8 0.0218 cfq_completed_request
8 0.0218 debug_mutex_add_waiter
8 0.0218 do_mmap_pgoff
8 0.0218 drive_stat_acct
8 0.0218 io_schedule
8 0.0218 kunmap_atomic
8 0.0218 scsi_alloc_sgtable
8 0.0218 scsi_end_request
8 0.0218 unix_poll
7 0.0191 __fput
7 0.0191 bio_init
7 0.0191 blkdev_get_blocks
7 0.0191 cfq_activate_request
7 0.0191 cfq_cic_rb_lookup
7 0.0191 cfq_dispatch_insert
7 0.0191 elv_rqhash_add
7 0.0191 get_task_mm
7 0.0191 get_unmapped_area
7 0.0191 handle_level_irq
7 0.0191 inode_init_once
7 0.0191 memcpy
7 0.0191 number
7 0.0191 page_remove_rmap
7 0.0191 pipe_poll
7 0.0191 rb_insert_color
7 0.0191 read_tsc
7 0.0191 rq_init
7 0.0191 scsi_add_timer
7 0.0191 scsi_init_io
7 0.0191 strncpy_from_user
7 0.0191 sys_write
6 0.0164 __blk_put_request
6 0.0164 blkdev_direct_IO
6 0.0164 block_write_full_page
6 0.0164 debug_mutex_unlock
6 0.0164 dio_zero_block
6 0.0164 do_path_lookup
6 0.0164 dput
6 0.0164 elv_completed_request
6 0.0164 getnstimeofday
6 0.0164 kfree
6 0.0164 kref_get
6 0.0164 link_path_walk
6 0.0164 open_namei
6 0.0164 run_posix_cpu_timers
6 0.0164 scsi_io_completion
6 0.0164 scsi_next_command
6 0.0164 sock_poll
6 0.0164 submit_bio
6 0.0164 sync_sb_inodes
6 0.0164 up_read
5 0.0136 __copy_user_intel
5 0.0136 __getblk
5 0.0136 __ip_route_output_key
5 0.0136 arch_get_unmapped_area_topdown
5 0.0136 bit_waitqueue
5 0.0136 blk_done_softirq
5 0.0136 blk_queue_bounce
5 0.0136 blk_remove_plug
5 0.0136 cfq_add_rq_rb
5 0.0136 copy_page_range
5 0.0136 copy_to_user
5 0.0136 do_sync_read
5 0.0136 error_code
5 0.0136 file_ra_state_init
5 0.0136 find_next_zero_bit
5 0.0136 generic_segment_checks
5 0.0136 generic_unplug_device
5 0.0136 inotify_inode_queue_event
5 0.0136 irq_exit
5 0.0136 kmap_atomic_prot
5 0.0136 max_block
5 0.0136 may_expand_vm
5 0.0136 neigh_lookup
5 0.0136 proc_lookup
5 0.0136 rb_next
5 0.0136 scheduler_tick
5 0.0136 scsi_decide_disposition
5 0.0136 scsi_device_unbusy
5 0.0136 scsi_finish_command
5 0.0136 sys_open
5 0.0136 sys_rt_sigprocmask
5 0.0136 tcp_transmit_skb
5 0.0136 vma_adjust
5 0.0136 zone_watermark_ok
4 0.0109 __rmqueue
4 0.0109 __xfrm_lookup
4 0.0109 account_system_time
4 0.0109 add_wait_queue
4 0.0109 cfq_queue_empty
4 0.0109 clocksource_get_next
4 0.0109 device_not_available
4 0.0109 dio_bio_end_io
4 0.0109 dio_new_bio
4 0.0109 do_filp_open
4 0.0109 do_gettimeofday
4 0.0109 down_read
4 0.0109 down_read_trylock
4 0.0109 dummy_inode_permission
4 0.0109 elv_queue_empty
4 0.0109 elv_rb_add
4 0.0109 find_extend_vma
4 0.0109 flush_old_exec
4 0.0109 flush_tlb_page
4 0.0109 fput
4 0.0109 generic_fillattr
4 0.0109 getname
4 0.0109 handle_IRQ_event
4 0.0109 kmap_atomic
4 0.0109 kmem_cache_zalloc
4 0.0109 ktime_get
4 0.0109 lru_cache_add_active
4 0.0109 mempool_alloc_slab
4 0.0109 native_flush_tlb_single
4 0.0109 need_resched
4 0.0109 remove_vma
4 0.0109 resume_userspace
4 0.0109 skb_copy_and_csum_bits
4 0.0109 sys_mprotect
4 0.0109 tcp_poll
4 0.0109 vm_stat_account
3 0.0082 I_BDEV
3 0.0082 __delay
3 0.0082 __dentry_open
3 0.0082 __elv_add_request
3 0.0082 __inc_zone_state
3 0.0082 __kmalloc
3 0.0082 __pagevec_lru_add_active
3 0.0082 __rcu_pending
3 0.0082 __scsi_put_command
3 0.0082 __wake_up_common
3 0.0082 account_user_time
3 0.0082 alloc_inode
3 0.0082 anon_vma_unlink
3 0.0082 apic_timer_interrupt
3 0.0082 bio_get_nr_vecs
3 0.0082 blk_complete_request
3 0.0082 cfq_rb_erase
3 0.0082 common_interrupt
3 0.0082 copy_from_user
3 0.0082 copy_strings
3 0.0082 cp_new_stat64
3 0.0082 deactivate_task
3 0.0082 dio_complete
3 0.0082 do_exit
3 0.0082 do_munmap
3 0.0082 do_timer
3 0.0082 do_writepages
3 0.0082 elv_rb_del
3 0.0082 end_that_request_last
3 0.0082 file_read_actor
3 0.0082 filemap_write_and_wait
3 0.0082 filp_close
3 0.0082 free_hot_cold_page
3 0.0082 generic_file_aio_read
3 0.0082 get_device
3 0.0082 get_empty_filp
3 0.0082 get_index
3 0.0082 get_nr_files
3 0.0082 icmp_push_reply
3 0.0082 inode_sub_bytes
3 0.0082 ip_push_pending_frames
3 0.0082 kobject_put
3 0.0082 ktime_get_ts
3 0.0082 locks_remove_flock
3 0.0082 lru_cache_add
3 0.0082 may_open
3 0.0082 n_tty_receive_buf
3 0.0082 neigh_update
3 0.0082 nf_ct_attach
3 0.0082 percpu_counter_mod
3 0.0082 pgd_alloc
3 0.0082 put_device
3 0.0082 radix_tree_gang_lookup_tag
3 0.0082 radix_tree_insert
3 0.0082 rcu_check_callbacks
3 0.0082 recalc_task_prio
3 0.0082 scsi_release_buffers
3 0.0082 scsi_run_queue
3 0.0082 setup_sigcontext
3 0.0082 show_stat
3 0.0082 sigprocmask
3 0.0082 sock_alloc_send_skb
3 0.0082 sys_close
3 0.0082 sys_gettimeofday
3 0.0082 task_rq_lock
3 0.0082 timespec_trunc
3 0.0082 touch_atime
3 0.0082 vfs_llseek
3 0.0082 vma_merge
3 0.0082 wake_up_inode
3 0.0082 xrlim_allow
2 0.0055 __alloc_skb
2 0.0055 __atomic_notifier_call_chain
2 0.0055 __bread
2 0.0055 __dec_zone_page_state
2 0.0055 __free_pages_ok
2 0.0055 __freed_request
2 0.0055 __ip_select_ident
2 0.0055 __qdisc_run
2 0.0055 __rcu_process_callbacks
2 0.0055 __tcp_select_window
2 0.0055 __vm_enough_memory
2 0.0055 __wake_up
2 0.0055 arch_setup_additional_pages
2 0.0055 bdev_read_only
2 0.0055 bio_free
2 0.0055 bio_fs_destructor
2 0.0055 blk_run_queue
2 0.0055 blk_start_queueing
2 0.0055 block_read_full_page
2 0.0055 call_rcu
2 0.0055 cfq_init_prio_data
2 0.0055 cfq_put_queue
2 0.0055 cfq_put_request
2 0.0055 cfq_resort_rr_list
2 0.0055 clear_bdi_congested
2 0.0055 clockevents_program_event
2 0.0055 copy_semundo
2 0.0055 credit_entropy_store
2 0.0055 current_io_context
2 0.0055 d_instantiate
2 0.0055 dcache_readdir
2 0.0055 dec_zone_page_state
2 0.0055 dev_ioctl
2 0.0055 disk_round_stats
2 0.0055 do_sigaction
2 0.0055 do_softirq
2 0.0055 do_sys_open
2 0.0055 do_wait
2 0.0055 dummy_file_alloc_security
2 0.0055 dummy_file_fcntl
2 0.0055 dummy_file_permission
2 0.0055 dummy_vm_enough_memory
2 0.0055 elv_may_queue
2 0.0055 elv_rqhash_del
2 0.0055 enqueue_hrtimer
2 0.0055 enqueue_task
2 0.0055 file_kill
2 0.0055 find_vma_prev
2 0.0055 free_pages_bulk
2 0.0055 free_pgtables
2 0.0055 freed_request
2 0.0055 generic_file_mmap
2 0.0055 get_request_wait
2 0.0055 half_md4_transform
2 0.0055 hrtimer_init
2 0.0055 icmp_send
2 0.0055 idle_cpu
2 0.0055 init_page_buffers
2 0.0055 init_timer
2 0.0055 inotify_dentry_parent_queue_event
2 0.0055 invalidate_inode_buffers
2 0.0055 ip_output
2 0.0055 ip_route_input
2 0.0055 kill_fasync
2 0.0055 ll_back_merge_fn
2 0.0055 locks_remove_posix
2 0.0055 mod_timer
2 0.0055 mutex_remove_waiter
2 0.0055 nameidata_to_filp
2 0.0055 note_interrupt
2 0.0055 notifier_call_chain
2 0.0055 ordered_bio_endio
2 0.0055 page_waitqueue
2 0.0055 path_lookup_open
2 0.0055 path_walk
2 0.0055 poll_initwait
2 0.0055 prio_tree_next
2 0.0055 proc_sys_lookup_table
2 0.0055 put_io_context
2 0.0055 put_pid
2 0.0055 queue_delayed_work
2 0.0055 raise_softirq
2 0.0055 rb_prev
2 0.0055 rcu_pending
2 0.0055 recalc_sigpending_tsk
2 0.0055 release_pages
2 0.0055 ret_from_exception
2 0.0055 ret_from_intr
2 0.0055 run_timer_softirq
2 0.0055 schedule_delayed_work
2 0.0055 schedule_timeout
2 0.0055 scsi_put_command
2 0.0055 seq_printf
2 0.0055 seq_read
2 0.0055 should_remove_suid
2 0.0055 sk_alloc
2 0.0055 sock_init_data
2 0.0055 split_vma
2 0.0055 submit_bh
2 0.0055 sys_brk
2 0.0055 sys_faccessat
2 0.0055 sys_fstat64
2 0.0055 sys_mmap2
2 0.0055 sys_rt_sigaction
2 0.0055 tcp_current_mss
2 0.0055 unlink_file_vma
2 0.0055 vma_link
2 0.0055 worker_thread
2 0.0055 writeback_inodes
1 0.0027 __activate_task
1 0.0027 __block_write_full_page
1 0.0027 __blocking_notifier_call_chain
1 0.0027 __brelse
1 0.0027 __capable
1 0.0027 __d_path
1 0.0027 __dec_zone_state
1 0.0027 __dequeue_signal
1 0.0027 __do_page_cache_readahead
1 0.0027 __first_cpu
1 0.0027 __follow_mount
1 0.0027 __get_user_4
1 0.0027 __iget
1 0.0027 __kfree_skb
1 0.0027 __lru_add_drain
1 0.0027 __mark_inode_dirty
1 0.0027 __mutex_init
1 0.0027 __page_set_anon_rmap
1 0.0027 __path_lookup_intent_open
1 0.0027 __put_task_struct
1 0.0027 __put_unused_fd
1 0.0027 __random32
1 0.0027 __rb_rotate_right
1 0.0027 __remove_hrtimer
1 0.0027 __scsi_done
1 0.0027 __set_page_dirty_nobuffers
1 0.0027 __udp4_lib_rcv
1 0.0027 __vma_link
1 0.0027 __wait_on_buffer
1 0.0027 __writepage
1 0.0027 activate_page
1 0.0027 add_disk_randomness
1 0.0027 add_to_page_cache
1 0.0027 alarm_setitimer
1 0.0027 alloc_pid
1 0.0027 anon_pipe_buf_release
1 0.0027 anon_vma_link
1 0.0027 anon_vma_prepare
1 0.0027 arch_pick_mmap_layout
1 0.0027 arch_unmap_area_topdown
1 0.0027 arp_hash
1 0.0027 arp_process
1 0.0027 atomic_notifier_call_chain
1 0.0027 bictcp_cong_avoid
1 0.0027 bio_alloc
1 0.0027 bio_hw_segments
1 0.0027 bio_phys_segments
1 0.0027 bio_put
1 0.0027 blockable_page_cache_readahead
1 0.0027 cached_lookup
1 0.0027 cfq_allow_merge
1 0.0027 check_userspace
1 0.0027 clear_inode
1 0.0027 clear_page_dirty_for_io
1 0.0027 clear_user
1 0.0027 compute_creds
1 0.0027 copy_files
1 0.0027 csum_partial_copy_generic
1 0.0027 d_alloc
1 0.0027 d_path
1 0.0027 debug_mutex_free_waiter
1 0.0027 debug_mutex_lock_common
1 0.0027 debug_mutex_set_owner
1 0.0027 dentry_iput
1 0.0027 dev_get_flags
1 0.0027 dev_queue_xmit
1 0.0027 do_IRQ
1 0.0027 do_fcntl
1 0.0027 do_mpage_readpage
1 0.0027 do_nanosleep
1 0.0027 do_select
1 0.0027 down_write
1 0.0027 dst_alloc
1 0.0027 dummy_bprm_check_security
1 0.0027 dummy_capable
1 0.0027 dummy_inode_getattr
1 0.0027 dummy_task_free_security
1 0.0027 dummy_task_wait
1 0.0027 effective_prio
1 0.0027 eligible_child
1 0.0027 elv_merge
1 0.0027 elv_set_request
1 0.0027 exit_aio
1 0.0027 exit_itimers
1 0.0027 exit_robust_list
1 0.0027 expand_files
1 0.0027 extract_buf
1 0.0027 fib_get_table
1 0.0027 fib_semantic_match
1 0.0027 fib_validate_source
1 0.0027 file_move
1 0.0027 file_permission
1 0.0027 find_busiest_group
1 0.0027 find_get_pages_tag
1 0.0027 find_or_create_page
1 0.0027 find_vma_prepare
1 0.0027 finish_wait
1 0.0027 flush_signal_handlers
1 0.0027 flush_tlb_mm
1 0.0027 free_page_and_swap_cache
1 0.0027 free_pgd_range
1 0.0027 free_pipe_info
1 0.0027 free_poll_entry
1 0.0027 generic_block_bmap
1 0.0027 generic_file_buffered_write
1 0.0027 get_futex_key
1 0.0027 get_io_context
1 0.0027 get_vmalloc_info
1 0.0027 get_write_access
1 0.0027 groups_search
1 0.0027 hrtimer_run_queues
1 0.0027 icmp_out_count
1 0.0027 iget_locked
1 0.0027 inc_zone_page_state
1 0.0027 inode_add_bytes
1 0.0027 ip_queue_xmit
1 0.0027 ip_rcv
1 0.0027 ip_route_output_flow
1 0.0027 ip_route_output_key
1 0.0027 irq_enter
1 0.0027 jiffies_64_to_clock_t
1 0.0027 kblockd_schedule_work
1 0.0027 kthread_should_stop
1 0.0027 local_bh_enable
1 0.0027 local_bh_enable_ip
1 0.0027 lock_hrtimer_base
1 0.0027 lookup_mnt
1 0.0027 mark_buffer_dirty
1 0.0027 mm_alloc
1 0.0027 mntput_no_expire
1 0.0027 mod_zone_page_state
1 0.0027 module_put
1 0.0027 mutex_unlock
1 0.0027 native_load_esp0
1 0.0027 native_read_cr0
1 0.0027 native_write_cr0
1 0.0027 netif_receive_skb
1 0.0027 notify_change
1 0.0027 page_add_file_rmap
1 0.0027 page_add_new_anon_rmap
1 0.0027 page_cache_readahead
1 0.0027 page_check_address
1 0.0027 pfifo_fast_enqueue
1 0.0027 pipe_wait
1 0.0027 pipe_write_fasync
1 0.0027 poll_freewait
1 0.0027 prepare_binprm
1 0.0027 prepare_to_copy
1 0.0027 prio_tree_insert
1 0.0027 proc_flush_task
1 0.0027 proc_sys_permission
1 0.0027 profile_tick
1 0.0027 pty_write
1 0.0027 put_files_struct
1 0.0027 put_pages_list
1 0.0027 queue_delayed_work_on
1 0.0027 quicklist_trim
1 0.0027 radix_tree_tag_clear
1 0.0027 radix_tree_tag_set
1 0.0027 release_open_intent
1 0.0027 release_vm86_irqs
1 0.0027 remove_suid
1 0.0027 remove_wait_queue
1 0.0027 resched_task
1 0.0027 rt_hash_code
1 0.0027 run_local_timers
1 0.0027 run_rebalance_domains
1 0.0027 run_workqueue
1 0.0027 rwsem_wake
1 0.0027 sched_exit
1 0.0027 scsi_free_sgtable
1 0.0027 scsi_softirq_done
1 0.0027 set_bh_page
1 0.0027 sha_transform
1 0.0027 show_map_internal
1 0.0027 single_release
1 0.0027 sk_common_release
1 0.0027 skb_clone
1 0.0027 skb_release_data
1 0.0027 sock_create
1 0.0027 sock_def_write_space
1 0.0027 softlockup_tick
1 0.0027 sync_buffer
1 0.0027 sync_dirty_buffer
1 0.0027 sys_clone
1 0.0027 sys_connect
1 0.0027 sys_fcntl64
1 0.0027 sys_kill
1 0.0027 sys_llseek
1 0.0027 sys_mkdirat
1 0.0027 sys_poll
1 0.0027 sys_select
1 0.0027 sys_socket
1 0.0027 sys_stat64
1 0.0027 sys_wait4
1 0.0027 syscall_exit
1 0.0027 sysctl_head_next
1 0.0027 tcp_ack
1 0.0027 tcp_rcv_established
1 0.0027 tcp_v4_send_check
1 0.0027 test_set_page_writeback
1 0.0027 tick_program_event
1 0.0027 tty_hung_up_p
1 0.0027 tty_write
1 0.0027 unlock_buffer
1 0.0027 update_process_times
1 0.0027 vfs_read
1 0.0027 vmstat_next
1 0.0027 wake_up_bit
+ date
Tue Aug 7 17:35:14 EEST 2007
[-- Attachment #5: pata_vmstat_bad.txt --]
[-- Type: text/plain, Size: 936 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
4 2 0 168488 10972 47144 0 0 218 23466 381 763 26 52 16 7
4 2 0 168480 10972 47152 0 0 0 16000 252 516 19 81 0 0
5 2 0 168480 10972 47152 0 0 0 16000 254 482 26 74 0 0
5 2 0 168480 10972 47152 0 0 0 16000 253 485 16 84 0 0
5 2 0 168480 10972 47152 0 0 0 16004 253 474 13 87 0 0
5 2 0 168480 10972 47152 0 0 0 16000 252 470 22 78 0 0
5 2 0 168480 10980 47152 0 0 0 15860 252 476 22 78 0 0
5 2 0 168480 10980 47152 0 0 0 16000 250 486 13 87 0 0
5 2 0 168480 10980 47152 0 0 0 16000 252 475 39 61 0 0
5 2 0 168480 10980 47152 0 0 0 16000 256 464 45 55 0 0
[-- Attachment #6: dmesg.txt --]
[-- Type: text/plain, Size: 12114 bytes --]
Linux version 2.6.22-ARCH (root@Wohnung) (gcc version 4.2.1 20070704 (prerelease)) #1 SMP PREEMPT Thu Aug 2 18:27:37 CEST 2007
BIOS-provided physical RAM map:
BIOS-e820: 0000000000000000 - 00000000000a0000 (usable)
BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
BIOS-e820: 0000000000100000 - 000000000fff0000 (usable)
BIOS-e820: 000000000fff0000 - 000000000fff3000 (ACPI NVS)
BIOS-e820: 000000000fff3000 - 0000000010000000 (ACPI data)
BIOS-e820: 00000000ffff0000 - 0000000100000000 (reserved)
using polling idle threads.
0MB HIGHMEM available.
255MB LOWMEM available.
Entering add_active_range(0, 0, 65520) 0 entries of 256 used
Zone PFN ranges:
DMA 0 -> 4096
Normal 4096 -> 65520
HighMem 65520 -> 65520
early_node_map[1] active PFN ranges
0: 0 -> 65520
On node 0 totalpages: 65520
DMA zone: 32 pages used for memmap
DMA zone: 0 pages reserved
DMA zone: 4064 pages, LIFO batch:0
Normal zone: 479 pages used for memmap
Normal zone: 60945 pages, LIFO batch:15
HighMem zone: 0 pages used for memmap
DMI 2.2 present.
ACPI: RSDP 000F6B30, 0014 (r0 GBT )
ACPI: RSDT 0FFF3000, 0028 (r1 GBT AWRDACPI 42302E31 AWRD 0)
ACPI: FACP 0FFF3040, 0074 (r1 GBT AWRDACPI 42302E31 AWRD 0)
ACPI: DSDT 0FFF30C0, 224C (r1 GBT AWRDACPI 1000 MSFT 100000C)
ACPI: FACS 0FFF0000, 0040
ACPI: PM-Timer IO Port: 0x4008
Allocating PCI resources starting at 20000000 (gap: 10000000:efff0000)
Built 1 zonelists. Total pages: 65009
Kernel command line: auto BOOT_IMAGE=arch ro root=/dev/sdb1 lapic nmi_watchdog=0 idle=poll
Local APIC disabled by BIOS -- reenabling.
Found and enabled local APIC!
mapped APIC to ffffd000 (fee00000)
Enabling fast FPU save and restore... done.
Enabling unmasked SIMD FPU exception support... done.
Initializing CPU#0
PID hash table entries: 1024 (order: 10, 4096 bytes)
Detected 798.031 MHz processor.
Console: colour VGA+ 80x25
Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
Memory: 254928k/262080k available (2392k kernel code, 6700k reserved, 787k data, 304k init, 0k highmem)
virtual kernel memory layout:
fixmap : 0xfff82000 - 0xfffff000 ( 500 kB)
pkmap : 0xff800000 - 0xffc00000 (4096 kB)
vmalloc : 0xd0800000 - 0xff7fe000 ( 751 MB)
lowmem : 0xc0000000 - 0xcfff0000 ( 255 MB)
.init : 0xc0421000 - 0xc046d000 ( 304 kB)
.data : 0xc03561df - 0xc041b1bc ( 787 kB)
.text : 0xc0100000 - 0xc03561df (2392 kB)
Checking if this processor honours the WP bit even in supervisor mode... Ok.
Calibrating delay using timer specific routine.. 1597.74 BogoMIPS (lpj=2661984)
Security Framework v1.0.0 initialized
Mount-cache hash table entries: 512
CPU: After generic identify, caps: 0387fbff 00000000 00000000 00000000 00000000 00000000 00000000
CPU: L1 I cache: 16K, L1 D cache: 16K
CPU: L2 cache: 256K
CPU serial number disabled.
CPU: After all inits, caps: 0383fbff 00000000 00000000 00000040 00000000 00000000 00000000
Intel machine check architecture supported.
Intel machine check reporting enabled on CPU#0.
Compat vDSO mapped to ffffe000.
Checking 'hlt' instruction... OK.
SMP alternatives: switching to UP code
Freeing SMP alternatives: 11k freed
Early unpacking initramfs... done
ACPI: Core revision 20070126
ACPI: Looking for DSDT in initramfs... error, file /DSDT.aml not found.
ACPI: setting ELCR to 0200 (from 1e00)
CPU0: Intel Pentium III (Coppermine) stepping 06
SMP motherboard not detected.
Brought up 1 CPUs
Booting paravirtualized kernel on bare hardware
NET: Registered protocol family 16
ACPI: bus type pci registered
PCI: PCI BIOS revision 2.10 entry at 0xfb370, last bus=1
PCI: Using configuration type 1
Setting up standard PCI resources
ACPI: Interpreter enabled
ACPI: (supports S0 S1 S4 S5)
ACPI: Using PIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (0000:00)
PCI: Probing PCI hardware (bus 00)
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Link [LNKA] (IRQs 1 3 4 5 6 7 *10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKB] (IRQs 1 3 4 5 6 7 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKC] (IRQs 1 3 4 5 6 7 10 11 *12 14 15)
ACPI: PCI Interrupt Link [LNKD] (IRQs 1 3 4 5 6 7 10 *11 12 14 15)
Linux Plug and Play Support v0.97 (c) Adam Belay
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp: PnP ACPI: found 12 devices
ACPI: ACPI bus type pnp unregistered
SCSI subsystem initialized
PCI: Using ACPI for IRQ routing
PCI: If a device doesn't work, try "pci=routeirq". If it helps, post a report
NetLabel: Initializing
NetLabel: domain hash size = 128
NetLabel: protocols = UNLABELED CIPSOv4
NetLabel: unlabeled traffic allowed by default
ACPI: RTC can wake from S4
pnp: 00:00: iomem range 0xf0000-0xf3fff could not be reserved
pnp: 00:00: iomem range 0xf4000-0xf7fff could not be reserved
pnp: 00:00: iomem range 0xf8000-0xfbfff could not be reserved
pnp: 00:00: iomem range 0xfc000-0xfffff could not be reserved
Time: tsc clocksource has been installed.
PCI: Bridge: 0000:00:01.0
IO window: disabled.
MEM window: d8000000-dfffffff
PREFETCH window: 20000000-200fffff
PCI: Setting latency timer of device 0000:00:01.0 to 64
NET: Registered protocol family 2
IP route cache hash table entries: 2048 (order: 1, 8192 bytes)
TCP established hash table entries: 8192 (order: 5, 131072 bytes)
TCP bind hash table entries: 8192 (order: 4, 98304 bytes)
TCP: Hash tables configured (established 8192 bind 8192)
TCP reno registered
checking if image is initramfs... it is
Freeing initrd memory: 597k freed
apm: BIOS version 1.2 Flags 0x07 (Driver version 1.16ac)
apm: overridden by ACPI.
VFS: Disk quotas dquot_6.5.1
Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
PCI: VIA PCI bridge detected. Disabling DAC.
Activating ISA DMA hang workarounds.
Boot video device is 0000:01:00.0
isapnp: Scanning for PnP cards...
Switched to high resolution mode on CPU 0
isapnp: No Plug & Play device found
Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing disabled
00:08: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
00:09: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
RAMDISK driver initialized: 16 RAM disks of 16384K size 1024 blocksize
loop: module loaded
input: Macintosh mouse button emulation as /class/input/input0
PNP: PS/2 Controller [PNP0303:PS2K] at 0x60,0x64 irq 1
PNP: PS/2 controller doesn't have AUX irq; using default 12
serio: i8042 KBD port at 0x60,0x64 irq 1
mice: PS/2 mouse device common for all mice
input: AT Translated Set 2 keyboard as /class/input/input1
TCP cubic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
Using IPI No-Shortcut mode
Freeing unused kernel memory: 304k freed
libata version 2.21 loaded.
pata_via 0000:00:07.1: version 0.3.1
scsi0 : pata_via
scsi1 : pata_via
ata1: PATA max UDMA/66 cmd 0x000101f0 ctl 0x000103f6 bmdma 0x0001e000 irq 14
ata2: PATA max UDMA/66 cmd 0x00010170 ctl 0x00010376 bmdma 0x0001e008 irq 15
ata1.00: ATA-7: WDC WD2500JB-55REA0, 20.00K20, max UDMA/100
ata1.00: 488397168 sectors, multi 16: LBA48
ata1.01: ata_hpa_resize 1: hpa sectors (33554433) is smaller than sectors (40132503)
ata1.01: ATA-5: MAXTOR 6L020J1, A93.0500, max UDMA/133
ata1.01: 40132503 sectors, multi 16: LBA
ata1.00: configured for UDMA/66
ata1.01: configured for UDMA/66
ata2.00: ATA-7: WDC WD2500JB-55REA0, 20.00K20, max UDMA/100
ata2.00: 488397168 sectors, multi 16: LBA48
ata2.00: configured for UDMA/66
scsi 0:0:0:0: Direct-Access ATA WDC WD2500JB-55R 20.0 PQ: 0 ANSI: 5
scsi 0:0:1:0: Direct-Access ATA MAXTOR 6L020J1 A93. PQ: 0 ANSI: 5
scsi 1:0:0:0: Direct-Access ATA WDC WD2500JB-55R 20.0 PQ: 0 ANSI: 5
sd 0:0:0:0: [sda] 488397168 512-byte hardware sectors (250059 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:0:0: [sda] 488397168 512-byte hardware sectors (250059 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sda: unknown partition table
sd 0:0:0:0: [sda] Attached SCSI disk
sd 0:0:1:0: [sdb] 40132503 512-byte hardware sectors (20548 MB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: 00 3a 00 00
sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:1:0: [sdb] 40132503 512-byte hardware sectors (20548 MB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: 00 3a 00 00
sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdb: sdb1 sdb2
sd 0:0:1:0: [sdb] Attached SCSI disk
sd 1:0:0:0: [sdc] 488397168 512-byte hardware sectors (250059 MB)
sd 1:0:0:0: [sdc] Write Protect is off
sd 1:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 1:0:0:0: [sdc] 488397168 512-byte hardware sectors (250059 MB)
sd 1:0:0:0: [sdc] Write Protect is off
sd 1:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdc: unknown partition table
sd 1:0:0:0: [sdc] Attached SCSI disk
kjournald starting. Commit interval 5 seconds
EXT3-fs: mounted filesystem with ordered data mode.
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
input: Power Button (FF) as /class/input/input2
ACPI: Power Button (FF) [PWRF]
input: Power Button (CM) as /class/input/input3
ACPI: Power Button (CM) [PWRB]
input: Sleep Button (CM) as /class/input/input4
ACPI: Sleep Button (CM) [SLPB]
ACPI: Processor [CPU0] (supports 2 throttling states)
USB Universal Host Controller Interface driver v3.0
ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
PCI: setting IRQ 11 as level-triggered
ACPI: PCI Interrupt 0000:00:07.2[D] -> Link [LNKD] -> GSI 11 (level, low) -> IRQ 11
uhci_hcd 0000:00:07.2: UHCI Host Controller
uhci_hcd 0000:00:07.2: new USB bus registered, assigned bus number 1
uhci_hcd 0000:00:07.2: irq 11, io base 0x0000e400
usb usb1: configuration #1 chosen from 1 choice
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 2 ports detected
Linux agpgart interface v0.102 (c) Dave Jones
ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 12
PCI: setting IRQ 12 as level-triggered
ACPI: PCI Interrupt 0000:00:0a.0[A] -> Link [LNKC] -> GSI 12 (level, low) -> IRQ 12
skge 1.11 addr 0xe4000000 irq 12 chip Yukon rev 1
skge eth0: addr 00:0f:38:6a:9c:fe
sk98lin: driver has been replaced by the skge driver and is scheduled for removal
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
agpgart: Detected VIA Apollo Pro 133 chipset
agpgart: AGP aperture is 64M @ 0xe0000000
parport_pc 00:0a: reported by Plug and Play ACPI
parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
PPP generic driver version 2.4.2
rtc_cmos 00:04: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one year, y3k
lp0: using parport0 (interrupt-driven).
ppdev: user-space parallel port driver
input: PC Speaker as /class/input/input5
ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
PCI: setting IRQ 10 as level-triggered
ACPI: PCI Interrupt 0000:00:08.0[A] -> Link [LNKA] -> GSI 10 (level, low) -> IRQ 10
md: md0 stopped.
EXT3 FS on sdb1, internal journal
ReiserFS: sdb2: found reiserfs format "3.6" with standard journal
ReiserFS: sdb2: using ordered data mode
ReiserFS: sdb2: journal params: device sdb2, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
ReiserFS: sdb2: checking transaction log (sdb2)
ReiserFS: sdb2: Using r5 hash to sort names
skge eth0: enabling interface
skge eth0: Link is up at 1000 Mbps, full duplex, flow control both
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-07 14:50 ` Dimitrios Apostolou
@ 2007-08-08 19:08 ` Rafał Bilski
2007-08-09 8:17 ` Dimitrios Apostolou
0 siblings, 1 reply; 26+ messages in thread
From: Rafał Bilski @ 2007-08-08 19:08 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: linux-kernel, Alan Cox, Andrew Morton
> Hello again,
Hi,
> I 'm now using libata on the same system described before (see attached
> dmesg.txt). When writing to both disks I think the problem is now worse
> (pata_oprof_bad.txt, pata_vmstat_bad.txt), even the oprofile script needed
> half an hour to complete! For completion I also attach the same tests when I
> write to only one disk (pata_vmstat_1disk.txt, pata_oprof_1disk.txt), whence
> everything is normal.
>
> FWIW, libata did not give me any performance benefit, 20MB/s is again the peak
> hdparm reports.
This OProfile output isn't very useful in this case. It says that your
system is spending 25% of its CPU time in no-op loops, but it doesn't say why. Your
system really isn't very busy. Look here:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 2 0 225352 5604 91700 0 0 18112 1664 28145 6315 29 71 0 0
2 2 0 225360 5604 91700 0 0 18496 1664 27992 6358 30 70 0 0
1 2 0 225360 5604 91700 0 0 18432 1472 28511 6315 28 72 0 0
1 2 0 225360 5604 91700 0 0 18240 1536 28031 6153 31 69 0 0
+ a 720x576 25fps YUV video stream over PCI. And the system is fully responsive. Of
course programs which need disk access must wait a bit longer, but afterwards
they run fine.
I don't have disks as fast as yours, and I can't do a destructive write test.
First disk:
1 1 0 241848 7312 100768 0 0 27712 0 927 1270 29 13 0 58
1 1 0 241052 7580 100896 0 0 4612 4676 519 702 34 12 0 54
Second disk:
0 1 0 237752 7268 100980 0 0 6464 0 468 583 37 10 0 53
0 1 0 241060 7532 100884 0 0 1728 1728 465 578 31 9 0 60
Both:
0 2 0 241592 7384 100776 0 0 33024 0 905 1415 33 16 0 51
1 2 0 240804 7528 100884 0 0 6848 6848 642 780 38 10 0 52
So sda + sdc = both.
Your single disk:
0 1 0 128804 19620 82484 0 0 0 21120 335 675 0 4 0 96
Both:
5 2 0 168480 10972 47152 0 0 0 16000 252 470 22 78 0 0
I would expect 2*21k, but we get only 2*8k, which is lower than the single disk
case. Of course this math isn't moving us forward. The only thing which would
help would be a function call trace, as Andrew wrote. Which function is calling
delay_tsc()? Is it calling it often, or once but with a long delay?
So far it looks like some kind of hardware limit to me. Do you have any
options in the BIOS which could degrade PCI or disk performance?
>
> Thanks,
> Dimitris
Rafał
* Re: high system cpu load during intense disk i/o
2007-08-08 19:08 ` Rafał Bilski
@ 2007-08-09 8:17 ` Dimitrios Apostolou
2007-08-10 7:06 ` Rafał Bilski
0 siblings, 1 reply; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-09 8:17 UTC (permalink / raw)
To: Rafał Bilski; +Cc: linux-kernel, Alan Cox, Andrew Morton
Hi Rafal, thank you for your help!
On Wednesday 08 August 2007 22:08:18 Rafał Bilski wrote:
> > Hello again,
>
> Hi,
>
> > I 'm now using libata on the same system described before (see attached
> > dmesg.txt). When writing to both disks I think the problem is now worse
> > (pata_oprof_bad.txt, pata_vmstat_bad.txt), even the oprofile script
> > needed half an hour to complete! For completion I also attach the same
> > tests when I write to only one disk (pata_vmstat_1disk.txt,
> > pata_oprof_1disk.txt), whence everything is normal.
> >
> > FWIW, libata did not give me any performance benefit, 20MB/s is again the
> > peak hdparm reports.
>
> This OProfile output isn't very useful in this case. It says that your
> system is spending 25% of its CPU time in no-op loops, but it doesn't say why. Your
> system really isn't very busy. Look here:
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo    in    cs us sy id wa
>  2  2      0 225352   5604  91700    0    0 18112  1664 28145  6315 29 71  0  0
>  2  2      0 225360   5604  91700    0    0 18496  1664 27992  6358 30 70  0  0
>  1  2      0 225360   5604  91700    0    0 18432  1472 28511  6315 28 72  0  0
>  1  2      0 225360   5604  91700    0    0 18240  1536 28031  6153 31 69  0  0
> + a 720x576 25fps YUV video stream over PCI. And the system is fully
> responsive. Of course programs which need disk access must wait a bit
> longer, but afterwards they run fine.
> I don't have disks as fast as yours, and I can't do a destructive write test.
> First disk:
>  1  1      0 241848   7312 100768    0    0 27712     0   927  1270 29 13  0 58
>  1  1      0 241052   7580 100896    0    0  4612  4676   519   702 34 12  0 54
> Second disk:
>  0  1      0 237752   7268 100980    0    0  6464     0   468   583 37 10  0 53
>  0  1      0 241060   7532 100884    0    0  1728  1728   465   578 31  9  0 60
> Both:
>  0  2      0 241592   7384 100776    0    0 33024     0   905  1415 33 16  0 51
>  1  2      0 240804   7528 100884    0    0  6848  6848   642   780 38 10  0 52
> So sda + sdc = both.
>
> Your single disk:
>  0  1      0 128804  19620  82484    0    0     0 21120   335   675  0  4  0 96
> Both:
>  5  2      0 168480  10972  47152    0    0     0 16000   252   470 22 78  0  0
> I would expect 2*21k, but we get only 2*8k, which is lower than the single
> disk case. Of course this math isn't moving us forward. The only thing which
> would help would be a function call trace, as Andrew wrote. Which function is
> calling delay_tsc()? Is it calling it often, or once but with a long delay?
Please guide me on how to obtain these call traces. Unfortunately I don't have
any experience at all with kernel-space debugging. What tools should I use?
Keep in mind also that I mostly have remote access to this PC.
However, I am not really confident that the cause is delay_tsc(); it only
appeared in the libata tests. To summarize, here are the first lines of all
the problematic oprofile runs I've done so far:
first test:
3832 23.4818 __switch_to
3380 20.7121 schedule
653 4.0015 mask_and_ack_8259A
452 2.7698 __blockdev_direct_IO
327 2.0038 dequeue_task
second test, with idle=poll (which didn't really help; it seems the CPU is not
idling at all):
2477 31.2871 __switch_to
1625 20.5255 schedule
246 3.1072 mask_and_ack_8259A
222 2.8041 __blockdev_direct_IO
147 1.8568 follow_page
third test, libata and idle=poll (the problem felt worse here; oprofiling took
a really long time):
9556 26.0523 delay_tsc
6424 17.5136 iowrite8
6178 16.8430 __switch_to
4015 10.9460 schedule
3335 9.0921 ioread8
1429 3.8959 iowrite32
So I would assume that delay_tsc() probably only makes the situation worse in
the libata tests, but the real problem is in __switch_to() and schedule(). Do
you agree with these assumptions? What are these functions used for? My guess
would be that __switch_to() is for context switching and schedule() for
process scheduling. However, the context switch rate, as reported by vmstat,
is not high enough to account for the time lost in __switch_to(). As the
one-disk measurements show, the context switch rate there is higher, yet the
time spent is much less.
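Just to put numbers on that, here is the sum of the top entries of the third test (simple arithmetic over the percentages already listed above; nothing new was measured):

```python
# Percentages taken from the third oprofile run quoted above
# (libata + idle=poll); plain arithmetic over those figures.
samples = {
    "delay_tsc":   26.0523,
    "iowrite8":    17.5136,
    "__switch_to": 16.8430,
    "schedule":    10.9460,
    "ioread8":      9.0921,
    "iowrite32":    3.8959,
}

# Busy-waiting plus port I/O (polled PIO-style access):
port_io = sum(samples[f] for f in ("delay_tsc", "iowrite8", "ioread8", "iowrite32"))
# Context switching plus scheduling:
sched = samples["__switch_to"] + samples["schedule"]

print(round(port_io, 2))  # 56.55
print(round(sched, 2))    # 27.79
```

So by these figures the busy-wait and port-I/O routines account for over half the samples in the libata case, with __switch_to() and schedule() taking roughly another quarter.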
Is there a way for oprofile to report the time spent in functions broken down
by call trace?
> So far it looks to me like some kind of hardware limit. Do you have any
> options in the BIOS which could degrade PCI or disk performance?
None that I can think of. Do you have any specific options in mind?
Thanks again for the help. I guess if this doesn't lead anywhere I'll just
start compiling older vanilla kernels and see when the problem disappears.
But that needs a lot of time, and I'm not sure how long I can afford to
offer no service with those disks (I was supposed to use them for RAID 0
storage, but with the current problems that wouldn't be so wise).
Dimitris
>
> > Thanks,
> > Dimitris
>
> Rafał
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-09 8:17 ` Dimitrios Apostolou
@ 2007-08-10 7:06 ` Rafał Bilski
2007-08-17 23:19 ` Dimitrios Apostolou
0 siblings, 1 reply; 26+ messages in thread
From: Rafał Bilski @ 2007-08-10 7:06 UTC (permalink / raw)
To: Dimitrios Apostolou; +Cc: linux-kernel, Alan Cox, Andrew Morton
> So I would assume that delay_tsc() probably only makes the situation worse for
> the libata tests, but the real problem is at __switch_to() and schedule(). Do
> you agree with these assumptions?
Yes, I agree that the percentage of CPU time is unreasonably high for these
functions. But not only for them.
> Is there a way for oprofile to report the time spent in functions broken down
> by call trace?
I don't know any.
> Thanks again for the help. I guess if this doesn't lead anywhere I'll just
> start compiling older vanilla kernels and see when the problem disappears.
> But that needs a lot of time, and I'm not sure how long I can afford to
> offer no service with those disks (I was supposed to use them for RAID 0
> storage, but with the current problems that wouldn't be so wise).
Probably this is the best and fastest way to diagnose the problem.
> Dimitris
Regards
Rafał
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: high system cpu load during intense disk i/o
2007-08-10 7:06 ` Rafał Bilski
@ 2007-08-17 23:19 ` Dimitrios Apostolou
0 siblings, 0 replies; 26+ messages in thread
From: Dimitrios Apostolou @ 2007-08-17 23:19 UTC (permalink / raw)
To: Rafał Bilski; +Cc: linux-kernel, Alan Cox, Andrew Morton
[-- Attachment #1: Type: text/plain, Size: 2841 bytes --]
Hello list,
before trying to reproduce the problem with older kernels I took the
necessary step of compiling and running a *vanilla*, simple monolithic
kernel for my measurements. The kernel config (attached config.gz) has
many standard things disabled (ACPI, for example), so the oprofile
output now looks very different. Please keep in mind that I switched
back from libata to the old IDE driver, so that I can use the same
config on old kernels.
The situations I attach are:
idle: The PC doing nothing. Note that now idle time is spent in
irq_handler and not in poll_idle. Strange...
one_disk: Destructive badblocks (badblocks -v -w) on one disk.
Everything is responsive and the CPU is 99% iowait as it should.
two_disks: Destructive badblocks on two disks, before the problem
appears. Things are starting to get sluggish.
two_disks_bad2: *PROBLEM* The previous situation after several
minutes, and after several cron jobs kicked in (and never finished).
System in a bad state, highly unresponsive.
The situation now looks completely different (though in practice the
problem is exactly the same), probably because of the kernel options. Here
are the first lines from opreport with debugging info, for the
two_disks_bad2 scenario:
CPU: PIII, speed 798.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a
unit mask of 0x00 (No unit mask) count 100000
samples % linenr info symbol name
282 9.3377 ide-iops.c:1081 pre_reset
231 7.6490 stats.c:187 rpc_print_iostats
222 7.3510 ptrace.c:654 do_syscall_trace
146 4.8344 ide-io.c:1185 ide_do_request
144 4.7682 process.c:529 dump_task_regs
131 4.3377 stats.c:64 rpc_proc_open
122 4.0397 backing-dev.c:46 congestion_wait
98 3.2450 vsprintf.c:622 vsscanf
52 1.7219 process.c:643 __switch_to
33 1.0927 sched.c:4065 interruptible_sleep_on
32 1.0596 slub.c:597 check_object
32 1.0596 signal.c:244 setup_sigcontext
32 1.0596 signal.c:56 sys_sigaction
31 1.0265 buffer.c:2452 block_truncate_page
31 1.0265 fadvise.c:28 sys_fadvise64_64
31 1.0265 page-writeback.c:987 test_set_page_writeback
If you think I should enable/disable other options in the kernel please
tell me. Moreover, it would be nice to know how to use the various
debugging options that I enabled, to help figure out the problem. So
what do you think? Does this help, or should I start trying older kernels
(which is *hard* to do with the latest libc and udev that I have)?
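One way to exploit the debug-info build further is opannotate, which maps samples back to source lines. A minimal sketch, assuming the vmlinux path used in the attached script.sh:

```shell
# Sketch: annotate kernel source with sample counts (assumes oprofile 0.9.x
# and a vmlinux built with debug info, as in the attached config).
opcontrol --vmlinux=/home/jimis/dist/src/linux-2.6.22.3/vmlinux
opcontrol --start
sleep 5
opcontrol --shutdown
# Per-source-line sample counts for the hottest code paths:
opannotate --source --vmlinux=/home/jimis/dist/src/linux-2.6.22.3/vmlinux | head -n 40
```

Run while the machine is in the bad state, this would show which lines of pre_reset() and ide_do_request() the samples actually land on.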
Thanks again,
Dimitris
[-- Attachment #2: config.gz --]
[-- Type: application/x-gzip, Size: 5793 bytes --]
[-- Attachment #3: dmesg.txt --]
[-- Type: text/plain, Size: 9092 bytes --]
[ 0.000000] Linux version 2.6.22.3mango-monolithic-dbg (jimis@mango) (gcc version 4.2.1) #2 Fri Aug 17 19:18:43 EEST 2007
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: 0000000000000000 - 00000000000a0000 (usable)
[ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
[ 0.000000] BIOS-e820: 0000000000100000 - 000000000fff0000 (usable)
[ 0.000000] BIOS-e820: 000000000fff0000 - 000000000fff3000 (ACPI NVS)
[ 0.000000] BIOS-e820: 000000000fff3000 - 0000000010000000 (ACPI data)
[ 0.000000] BIOS-e820: 00000000ffff0000 - 0000000100000000 (reserved)
[ 0.000000] using polling idle threads.
[ 0.000000] 255MB LOWMEM available.
[ 0.000000] Entering add_active_range(0, 0, 65520) 0 entries of 256 used
[ 0.000000] Zone PFN ranges:
[ 0.000000] DMA 0 -> 4096
[ 0.000000] Normal 4096 -> 65520
[ 0.000000] early_node_map[1] active PFN ranges
[ 0.000000] 0: 0 -> 65520
[ 0.000000] On node 0 totalpages: 65520
[ 0.000000] DMA zone: 32 pages used for memmap
[ 0.000000] DMA zone: 0 pages reserved
[ 0.000000] DMA zone: 4064 pages, LIFO batch:0
[ 0.000000] Normal zone: 479 pages used for memmap
[ 0.000000] Normal zone: 60945 pages, LIFO batch:15
[ 0.000000] DMI 2.2 present.
[ 0.000000] Allocating PCI resources starting at 20000000 (gap: 10000000:efff0000)
[ 0.000000] Built 1 zonelists. Total pages: 65009
[ 0.000000] Kernel command line: auto BOOT_IMAGE=2.6.22.3-dbg ro root=341 lapic nmi_watchdog=0 idle=poll
[ 0.000000] Local APIC disabled by BIOS -- reenabling.
[ 0.000000] Found and enabled local APIC!
[ 0.000000] mapped APIC to ffffd000 (fee00000)
[ 0.000000] Enabling fast FPU save and restore... done.
[ 0.000000] Enabling unmasked SIMD FPU exception support... done.
[ 0.000000] Initializing CPU#0
[ 0.000000] PID hash table entries: 1024 (order: 10, 4096 bytes)
[ 0.000000] Detected 798.020 MHz processor.
[ 36.983538] Console: colour VGA+ 80x25
[ 36.986206] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
[ 36.986676] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
[ 37.012595] Memory: 256036k/262080k available (1963k kernel code, 5544k reserved, 611k data, 140k init, 0k highmem)
[ 37.012781] virtual kernel memory layout:
[ 37.012785] fixmap : 0xffffc000 - 0xfffff000 ( 12 kB)
[ 37.012789] vmalloc : 0xd0800000 - 0xffffa000 ( 759 MB)
[ 37.012792] lowmem : 0xc0000000 - 0xcfff0000 ( 255 MB)
[ 37.012796] .init : 0xc0386000 - 0xc03a9000 ( 140 kB)
[ 37.012800] .data : 0xc02eacd7 - 0xc0383b64 ( 611 kB)
[ 37.012804] .text : 0xc0100000 - 0xc02eacd7 (1963 kB)
[ 37.013432] Checking if this processor honours the WP bit even in supervisor mode... Ok.
[ 37.013699] SLUB: Genslabs=22, HWalign=32, Order=0-1, MinObjects=4, CPUs=1, Nodes=1
[ 37.163848] Calibrating delay using timer specific routine.. 1597.05 BogoMIPS (lpj=7985266)
[ 37.164075] Mount-cache hash table entries: 512
[ 37.164358] CPU: After generic identify, caps: 0387fbff 00000000 00000000 00000000 00000000 00000000 00000000
[ 37.164382] CPU: L1 I cache: 16K, L1 D cache: 16K
[ 37.164501] CPU: L2 cache: 256K
[ 37.164587] CPU serial number disabled.
[ 37.164675] CPU: After all inits, caps: 0383fbff 00000000 00000000 00000040 00000000 00000000 00000000
[ 37.164688] Intel machine check architecture supported.
[ 37.164782] Intel machine check reporting enabled on CPU#0.
[ 37.164880] Compat vDSO mapped to ffffe000.
[ 37.164986] CPU: Intel Pentium III (Coppermine) stepping 06
[ 37.165168] Checking 'hlt' instruction... OK.
[ 37.444042] NET: Registered protocol family 16
[ 37.475344] PCI: PCI BIOS revision 2.10 entry at 0xfb370, last bus=1
[ 37.475446] PCI: Using configuration type 1
[ 37.475541] Setting up standard PCI resources
[ 37.478177] PCI: Probing PCI hardware
[ 37.478315] PCI: Probing PCI hardware (bus 00)
[ 37.480044] PCI: Using IRQ router VIA [1106/0596] at 0000:00:07.0
[ 37.489977] PCI: Bridge: 0000:00:01.0
[ 37.490080] IO window: disabled.
[ 37.490180] MEM window: d8000000-dfffffff
[ 37.490276] PREFETCH window: 20000000-200fffff
[ 37.490390] PCI: Setting latency timer of device 0000:00:01.0 to 64
[ 37.490422] NET: Registered protocol family 2
[ 37.493810] Time: tsc clocksource has been installed.
[ 37.573854] IP route cache hash table entries: 2048 (order: 1, 8192 bytes)
[ 37.574034] TCP established hash table entries: 8192 (order: 4, 65536 bytes)
[ 37.574363] TCP bind hash table entries: 8192 (order: 3, 32768 bytes)
[ 37.574578] TCP: Hash tables configured (established 8192 bind 8192)
[ 37.574679] TCP reno registered
[ 37.604371] IA-32 Microcode Update Driver: v1.14a <tigran@aivazian.fsnet.co.uk>
[ 37.611242] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[ 37.611678] fuse init (API version 7.8)
[ 37.612390] io scheduler noop registered
[ 37.612492] io scheduler anticipatory registered (default)
[ 37.612592] io scheduler deadline registered
[ 37.612862] io scheduler cfq registered
[ 37.612980] PCI: VIA PCI bridge detected. Disabling DAC.
[ 37.613082] Activating ISA DMA hang workarounds.
[ 37.613213] Boot video device is 0000:01:00.0
[ 37.678453] Real Time Clock Driver v1.12ac
[ 37.678571] Hangcheck: starting hangcheck timer 0.9.0 (tick is 180 seconds, margin is 60 seconds).
[ 37.678740] Hangcheck: Using get_cycles().
[ 37.678847] Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing disabled
[ 37.680289] serial8250.0: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 37.680872] serial8250.0: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 37.682616] loop: module loaded
[ 37.682825] PCI: setting IRQ 12 as level-triggered
[ 37.682834] PCI: Found IRQ 12 for device 0000:00:0a.0
[ 37.683001] skge 1.11 addr 0xe4000000 irq 12 chip Yukon rev 1
[ 37.683579] skge eth0: addr 00:0f:38:6a:9c:fe
[ 37.684108] Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
[ 37.684219] ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
[ 37.684498] VP_IDE: IDE controller at PCI slot 0000:00:07.1
[ 37.684617] VP_IDE: chipset revision 6
[ 37.684714] VP_IDE: not 100% native mode: will probe irqs later
[ 37.684828] VP_IDE: VIA vt82c596b (rev 12) IDE UDMA66 controller on pci0000:00:07.1
[ 37.685004] ide0: BM-DMA at 0xe000-0xe007, BIOS settings: hda:DMA, hdb:DMA
[ 37.685233] ide1: BM-DMA at 0xe008-0xe00f, BIOS settings: hdc:DMA, hdd:pio
[ 37.685452] Probing IDE interface ide0...
[ 38.133759] hda: WDC WD2500JB-55REA0, ATA DISK drive
[ 38.433703] hdb: MAXTOR 6L020J1, ATA DISK drive
[ 38.494132] ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
[ 38.494492] Probing IDE interface ide1...
[ 38.933603] hdc: WDC WD2500JB-55REA0, ATA DISK drive
[ 39.653537] ide1 at 0x170-0x177,0x376 on irq 15
[ 39.653980] hda: max request size: 512KiB
[ 39.662415] hda: 488397168 sectors (250059 MB) w/8192KiB Cache, CHS=30401/255/63, UDMA(66)
[ 39.662778] hda: cache flushes supported
[ 39.662941] hda: unknown partition table
[ 39.671391] hdb: max request size: 128KiB
[ 39.672508] hdb: 40132503 sectors (20547 MB) w/1819KiB Cache, CHS=39813/16/63, UDMA(66)
[ 39.672897] hdb: cache flushes supported
[ 39.673023] hdb: hdb1 hdb2
[ 39.679890] hdc: max request size: 512KiB
[ 39.688535] hdc: 488397168 sectors (250059 MB) w/8192KiB Cache, CHS=30401/255/63, UDMA(66)
[ 39.688882] hdc: cache flushes supported
[ 39.689028] hdc: unknown partition table
[ 39.943833] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 39.944151] mice: PS/2 mouse device common for all mice
[ 39.964472] input: AT Translated Set 2 keyboard as /class/input/input0
[ 39.964662] md: raid0 personality registered for level 0
[ 39.964776] oprofile: using NMI interrupt.
[ 39.964882] TCP cubic registered
[ 39.965194] NET: Registered protocol family 1
[ 39.965294] NET: Registered protocol family 17
[ 39.965605] Using IPI Shortcut mode
[ 39.966053] md: Autodetecting RAID arrays.
[ 39.966151] md: autorun ...
[ 39.966246] md: ... autorun DONE.
[ 39.981714] kjournald starting. Commit interval 5 seconds
[ 39.981839] EXT3-fs: mounted filesystem with ordered data mode.
[ 39.981955] VFS: Mounted root (ext3 filesystem) readonly.
[ 39.982331] Freeing unused kernel memory: 140k freed
[ 43.937797] md: md0 stopped.
[ 47.134790] EXT3 FS on hdb1, internal journal
[ 47.250784] ReiserFS: hdb2: found reiserfs format "3.6" with standard journal
[ 47.250820] ReiserFS: hdb2: using ordered data mode
[ 47.251544] ReiserFS: hdb2: journal params: device hdb2, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
[ 47.256771] ReiserFS: hdb2: checking transaction log (hdb2)
[ 47.312336] ReiserFS: hdb2: Using r5 hash to sort names
[ 48.732000] skge eth0: enabling interface
[ 51.219364] skge eth0: Link is up at 1000 Mbps, full duplex, flow control both
[-- Attachment #4: oprof_idle.txt --]
[-- Type: text/plain, Size: 9502 bytes --]
+ date
Fri Aug 17 20:07:32 EEST 2007
+ rm -rf /var/lib/oprofile/
+ opcontrol --vmlinux=/home/jimis/dist/src/linux-2.6.22.3/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
27648 96.8746 vmlinux
360 1.2614 libc-2.6.1.so
178 0.6237 bash
139 0.4870 oprofiled
CPU_CLK_UNHALT...|
samples| %|
------------------
138 99.2806 oprofiled
1 0.7194 [vdso] (tgid:11997 range:0xb7f96000-0xb7f97000)
131 0.4590 ld-2.6.1.so
59 0.2067 ISO8859-1.so
6 0.0210 locale-archive
5 0.0175 gawk
3 0.0105 libcrypto.so.0.9.8
2 0.0070 libpthread-2.6.1.so
2 0.0070 imap-login
1 0.0035 ls
1 0.0035 libhistory.so.5.2
1 0.0035 screen-4.0.3
1 0.0035 libnetsnmp.so.15.0.0
1 0.0035 dovecot-auth
1 0.0035 dovecot
1 0.0035 sshd
+ echo
+ echo
+ echo
+ opreport -l /home/jimis/dist/src/linux-2.6.22.3/vmlinux
CPU: PIII, speed 798.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
26423 95.5693 irq_handler
288 1.0417 kmem_cache_shrink
190 0.6872 create_kmalloc_cache
105 0.3798 congestion_wait
38 0.1374 vsscanf
30 0.1085 assign_all_busses
30 0.1085 sys_fadvise64_64
28 0.1013 interruptible_sleep_on
25 0.0904 sys_madvise
24 0.0868 do_wp_page
14 0.0506 print_hex_dump
12 0.0434 check_object
12 0.0434 ext3_reserve_inode_write
11 0.0398 generic_file_buffered_write
11 0.0398 handle_vm86_fault
11 0.0398 vsnprintf
10 0.0362 kobject_rename
9 0.0326 __relay_reset
9 0.0326 do_syscall_trace
9 0.0326 rtc_cmos_read
8 0.0289 __handle_mm_fault
8 0.0289 setup_sigcontext
7 0.0253 access_process_vm
7 0.0253 locks_mandatory_area
7 0.0253 pcibios_setup
7 0.0253 print_bad_pte
6 0.0217 load_elf_binary
6 0.0217 pirq_piix_set
6 0.0217 vfs_mknod
5 0.0181 ext3_orphan_get
5 0.0181 fcntl_setlk
5 0.0181 loop_alloc
5 0.0181 process_slab
5 0.0181 rt_mutex_setprio
4 0.0145 __switch_to_xtra
4 0.0145 shrink_zone
3 0.0109 __dequeue_signal
3 0.0109 __switch_to
3 0.0109 __vmalloc_area_node
3 0.0109 block_invalidatepage
3 0.0109 do_mremap
3 0.0109 do_sync
3 0.0109 ext3_free_inode
3 0.0109 ext3_new_inode
3 0.0109 follow_page
3 0.0109 out_of_memory
3 0.0109 pcibios_enable_device
3 0.0109 pipe_write
3 0.0109 pirq_enable_irq
3 0.0109 posix_cpu_nsleep
3 0.0109 sys_remap_file_pages
3 0.0109 try_to_wake_up
3 0.0109 vfs_mkdir
3 0.0109 vfs_rename
3 0.0109 zoneinfo_show
2 0.0072 __free_slab
2 0.0072 __remove_shared_vm_struct
2 0.0072 background_writeout
2 0.0072 bitmap_find_free_region
2 0.0072 calibrate_delay
2 0.0072 dentry_open
2 0.0072 do_getitimer
2 0.0072 do_mmap_pgoff
2 0.0072 do_munmap
2 0.0072 do_page_fault
2 0.0072 elf_core_dump
2 0.0072 ext3_xattr_set
2 0.0072 install_page
2 0.0072 log_do_checkpoint
2 0.0072 lookup_one_len
2 0.0072 madvise_need_mmap_write
2 0.0072 pdflush
2 0.0072 prepare_timeout
2 0.0072 register_chrdev
2 0.0072 remap_pfn_range
2 0.0072 send_group_sigqueue
2 0.0072 swap_readpage
2 0.0072 sys_mincore
2 0.0072 sys_mprotect
2 0.0072 sys_timerfd
2 0.0072 t_start
2 0.0072 user_shm_lock
2 0.0072 vm_normal_page
2 0.0072 wake_up_new_task
1 0.0036 __cond_resched
1 0.0036 __filemap_copy_from_user_iovec_inatomic
1 0.0036 __follow_mount
1 0.0036 __free_pages_ok
1 0.0036 __get_user_2
1 0.0036 __inode_dir_notify
1 0.0036 __ip_route_output_key
1 0.0036 __journal_refile_buffer
1 0.0036 __journal_temp_unlink_buffer
1 0.0036 __kill_fasync
1 0.0036 __oom_kill_task
1 0.0036 __posix_lock_file
1 0.0036 __pte_alloc
1 0.0036 __set_special_pids
1 0.0036 __setscheduler
1 0.0036 __sigqueue_alloc
1 0.0036 __vma_link
1 0.0036 __wait_on_freeing_inode
1 0.0036 aio_complete
1 0.0036 badness
1 0.0036 clear_page_dirty_for_io
1 0.0036 clockevents_set_mode
1 0.0036 congestion_wait_interruptible
1 0.0036 copy_page_range
1 0.0036 copy_process
1 0.0036 cpu_idle
1 0.0036 d_lookup
1 0.0036 daemonize
1 0.0036 dcache_dir_lseek
1 0.0036 do_cpu_nanosleep
1 0.0036 do_lookup
1 0.0036 do_msgrcv
1 0.0036 do_notify_parent
1 0.0036 do_sendfile
1 0.0036 do_sync_read
1 0.0036 do_sysctl
1 0.0036 do_utimes
1 0.0036 dup_fd
1 0.0036 early_serial_putc
1 0.0036 early_serial_write
1 0.0036 ext3_count_dirs
1 0.0036 ext3_count_free_inodes
1 0.0036 ext3_read_inode
1 0.0036 ext3_readdir
1 0.0036 ext3_rename
1 0.0036 ext3_xattr_block_set
1 0.0036 ext3_xattr_get
1 0.0036 ext3_xattr_set_handle
1 0.0036 ext3_xattr_trusted_list
1 0.0036 file_send_actor
1 0.0036 find_inode
1 0.0036 find_mergeable_anon_vma
1 0.0036 force_sigsegv
1 0.0036 generic_permission
1 0.0036 generic_segment_checks
1 0.0036 generic_shutdown_super
1 0.0036 generic_write_checks
1 0.0036 get_next_timer_interrupt
1 0.0036 hrtimer_nanosleep
1 0.0036 i8259A_shutdown
1 0.0036 ide_do_request
1 0.0036 inode_sub_bytes
1 0.0036 insert_vm_struct
1 0.0036 internal_add_timer
1 0.0036 interruptible_sleep_on_timeout
1 0.0036 ioremap_nocache
1 0.0036 itimer_get_remtime
1 0.0036 journal_commit_transaction
1 0.0036 journal_get_create_access
1 0.0036 journal_recover
1 0.0036 journal_refile_buffer
1 0.0036 kill_block_super
1 0.0036 kill_litter_super
1 0.0036 kmem_cache_create
1 0.0036 kmem_cache_destroy
1 0.0036 kmem_ptr_validate
1 0.0036 kobject_get_path
1 0.0036 kobject_shadow_add
1 0.0036 kobject_uevent_env
1 0.0036 lock_timer
1 0.0036 locks_insert_block
1 0.0036 may_open
1 0.0036 memcmp
1 0.0036 memcpy
1 0.0036 mempool_resize
1 0.0036 microcode_write
1 0.0036 modify_acceptable_latency
1 0.0036 number
1 0.0036 pci_write
1 0.0036 percpu_pagelist_fraction_sysctl_handler
1 0.0036 pirq_serverworks_set
1 0.0036 posix_cpu_timer_set
1 0.0036 posix_timer_event
1 0.0036 prepare_to_wait
1 0.0036 prio_tree_remove
1 0.0036 proc_dodebug
1 0.0036 put_files_struct
1 0.0036 put_io_context
1 0.0036 rb_erase
1 0.0036 reiserfs_delete_xattrs
1 0.0036 reiserfs_listxattr
1 0.0036 reiserfs_xattr_init
1 0.0036 relay_create_buf
1 0.0036 relay_switch_subbuf
1 0.0036 rpc_print_iostats
1 0.0036 rpc_proc_exit
1 0.0036 rpc_proc_init
1 0.0036 sched_setscheduler
1 0.0036 send_signal
1 0.0036 set_bdi_congested
1 0.0036 set_page_dirty
1 0.0036 show_regs
1 0.0036 show_state_filter
1 0.0036 skb_checksum
1 0.0036 skb_seq_read
1 0.0036 skge_poll
1 0.0036 sprint_symbol
1 0.0036 submit_bh
1 0.0036 sync_cmos_clock
1 0.0036 sys_chroot
1 0.0036 sys_close
1 0.0036 sys_fchdir
1 0.0036 sys_get_thread_area
1 0.0036 sys_mq_timedreceive
1 0.0036 sys_sched_getparam
1 0.0036 sys_setreuid
1 0.0036 sys_setuid
1 0.0036 sys_sync_file_range
1 0.0036 sys_vm86old
1 0.0036 task_running_tick
1 0.0036 tcp_rcv_established
1 0.0036 test_clear_page_writeback
1 0.0036 test_set_page_writeback
1 0.0036 throttle_vm_writeout
1 0.0036 tick_notify
1 0.0036 timer_list_show
1 0.0036 trace
1 0.0036 unregister_timer_hook
1 0.0036 update_wall_time
1 0.0036 vfs_create
1 0.0036 vfs_ioctl
1 0.0036 vfs_unlink
1 0.0036 vm_insert_pfn
1 0.0036 vma_adjust
1 0.0036 vma_link
1 0.0036 vma_merge
1 0.0036 vprintk
1 0.0036 wakeup_pdflush
1 0.0036 walk_page_buffers
+ date
Fri Aug 17 20:07:38 EEST 2007
[-- Attachment #5: oprof_one_disk.txt --]
[-- Type: text/plain, Size: 17721 bytes --]
+ date
Fri Aug 17 20:17:46 EEST 2007
+ rm -rf /var/lib/oprofile/
+ opcontrol --vmlinux=/home/jimis/dist/src/linux-2.6.22.3/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
10455 90.6922 vmlinux
410 3.5566 libc-2.6.1.so
239 2.0732 bash
CPU_CLK_UNHALT...|
samples| %|
------------------
237 99.1632 bash
2 0.8368 [vdso] (tgid:12553 range:0xb7f81000-0xb7f82000)
190 1.6482 ld-2.6.1.so
121 1.0496 oprofiled
57 0.4944 ISO8859-1.so
10 0.0867 gawk
10 0.0867 badblocks
CPU_CLK_UNHALT...|
samples| %|
------------------
6 60.0000 badblocks
4 40.0000 [vdso] (tgid:10904 range:0xb7eed000-0xb7eee000)
5 0.0434 grep
5 0.0434 locale-archive
4 0.0347 libext2fs.so.2.4
3 0.0260 libcrypto.so.0.9.8
3 0.0260 imap-login
CPU_CLK_UNHALT...|
samples| %|
------------------
2 66.6667 imap-login
1 33.3333 [vdso] (tgid:1946 range:0xb7f17000-0xb7f18000)
2 0.0173 libncurses.so.5.6
2 0.0173 screen-4.0.3
2 0.0173 dovecot
2 0.0173 sshd
1 0.0087 cat
1 0.0087 mkdir
1 0.0087 rm
1 0.0087 libreadline.so.5.2
1 0.0087 libnetsnmp.so.15.0.0
1 0.0087 libnetsnmphelpers.so.15.0.0
1 0.0087 libnetsnmpmibs.so.15.0.0
1 0.0087 syslog-ng
+ echo
+ echo
+ echo
+ opreport -l /home/jimis/dist/src/linux-2.6.22.3/vmlinux
CPU: PIII, speed 798.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
6879 65.7963 irq_handler
881 8.4266 kmem_cache_shrink
672 6.4275 create_kmalloc_cache
188 1.7982 congestion_wait
140 1.3391 pre_reset
104 0.9947 vsscanf
84 0.8034 do_syscall_trace
78 0.7461 ide_do_request
51 0.4878 assign_all_busses
46 0.4400 interruptible_sleep_on
35 0.3348 sys_fadvise64_64
33 0.3156 sys_madvise
30 0.2869 rtc_cmos_read
29 0.2774 do_wp_page
28 0.2678 kobject_rename
25 0.2391 check_object
25 0.2391 print_hex_dump
19 0.1817 __relay_reset
17 0.1626 setup_sigcontext
16 0.1530 vsnprintf
15 0.1435 ide_setup_pci_devices
15 0.1435 zoneinfo_show
14 0.1339 block_truncate_page
14 0.1339 test_set_page_writeback
13 0.1243 __blkdev_put
13 0.1243 cont_prepare_write
13 0.1243 load_elf_binary
12 0.1148 access_process_vm
12 0.1148 pcibios_setup
11 0.1052 __handle_mm_fault
11 0.1052 __switch_to
11 0.1052 do_ide_setup_pci_device
11 0.1052 dump_task_extended_fpu
11 0.1052 rpc_print_iostats
11 0.1052 zap_pte
10 0.0956 bdget
10 0.0956 blk_release_queue
10 0.0956 do_page_fault
10 0.0956 elv_next_request
10 0.0956 sys_mq_open
9 0.0861 bdev_clear_inode
9 0.0861 generic_file_buffered_write
9 0.0861 locks_mandatory_area
9 0.0861 scsi_cmd_ioctl
8 0.0765 bd_claim_by_disk
8 0.0765 bd_release_from_disk
8 0.0765 dump_task_regs
8 0.0765 fcntl_setlk
7 0.0670 blk_cleanup_queue
7 0.0670 do_mpage_readpage
7 0.0670 loop_alloc
7 0.0670 unmap_vmas
7 0.0670 vfs_rename
6 0.0574 __end_that_request_first
6 0.0574 bio_split
6 0.0574 blkdev_close
6 0.0574 do_generic_mapping_read
6 0.0574 do_mremap
6 0.0574 generic_ide_ioctl
6 0.0574 pirq_piix_set
6 0.0574 prio_tree_insert
6 0.0574 test_clear_page_writeback
5 0.0478 __break_lease
5 0.0478 __switch_to_xtra
5 0.0478 daemonize
5 0.0478 dio_complete
5 0.0478 flush_old_exec
5 0.0478 handle_vm86_fault
5 0.0478 kmem_cache_create
5 0.0478 pipe_write
5 0.0478 pirq_enable_irq
5 0.0478 process_slab
5 0.0478 read_cache_page_async
5 0.0478 vfs_mkdir
4 0.0383 __blk_put_request
4 0.0383 add_to_page_cache
4 0.0383 cap_inode_removexattr
4 0.0383 cap_task_post_setuid
4 0.0383 dio_cleanup
4 0.0383 dio_get_page
4 0.0383 do_kern_mount
4 0.0383 do_mmap_pgoff
4 0.0383 do_notify_parent
4 0.0383 do_sendfile
4 0.0383 do_sysctl
4 0.0383 do_sysctl_strategy
4 0.0383 ext3_xattr_set_handle
4 0.0383 generic_write_checks
4 0.0383 idedisk_check_hpa
4 0.0383 interruptible_sleep_on_timeout
4 0.0383 log_do_checkpoint
4 0.0383 pcibios_scan_root
4 0.0383 print_bad_pte
4 0.0383 ptrace_request
4 0.0383 send_group_sigqueue
4 0.0383 set_using_dma
4 0.0383 sys_remap_file_pages
4 0.0383 vmtruncate
3 0.0287 __pte_alloc
3 0.0287 alloc_node_mem_map
3 0.0287 as_read_expire_store
3 0.0287 bitmap_find_free_region
3 0.0287 check_disk_change
3 0.0287 do_open
3 0.0287 do_sync
3 0.0287 do_sync_readv_writev
3 0.0287 early_serial_putc
3 0.0287 generic_ide_resume
3 0.0287 generic_shutdown_super
3 0.0287 ide_intr
3 0.0287 ide_register_hw_with_fixup
3 0.0287 install_file_pte
3 0.0287 install_page
3 0.0287 kobject_get_path
3 0.0287 ll_back_merge_fn
3 0.0287 lock_timer
3 0.0287 mpage_readpages
3 0.0287 on_freelist
3 0.0287 posix_cpu_nsleep
3 0.0287 rb_erase
3 0.0287 release_task
3 0.0287 rt_mutex_setprio
3 0.0287 sched_exit
3 0.0287 sg_scsi_ioctl
3 0.0287 show_partition
3 0.0287 show_schedstat
3 0.0287 sprint_symbol
3 0.0287 svc_seq_show
3 0.0287 sync_page_range
3 0.0287 sys_chown
3 0.0287 sys_mincore
3 0.0287 sys_mprotect
3 0.0287 sys_statfs
3 0.0287 sys_tee
3 0.0287 wake_up_new_task
2 0.0191 __elv_add_request
2 0.0191 __is_prefetch
2 0.0191 __register_chrdev_region
2 0.0191 account_steal_time
2 0.0191 apply_microcode
2 0.0191 as_antic_expire_store
2 0.0191 bd_forget
2 0.0191 bdput
2 0.0191 blk_init_queue_node
2 0.0191 blk_recount_segments
2 0.0191 blkdev_open
2 0.0191 calibrate_delay
2 0.0191 clear_page_dirty_for_io
2 0.0191 copy_page_range
2 0.0191 copy_process
2 0.0191 d_invalidate
2 0.0191 dentry_open
2 0.0191 dio_new_bio
2 0.0191 dio_send_cur_page
2 0.0191 do_brk
2 0.0191 do_msgrcv
2 0.0191 do_sync_read
2 0.0191 do_utimes
2 0.0191 dx_probe
2 0.0191 elv_insert
2 0.0191 elv_iosched_allow_merge
2 0.0191 elv_rb_del
2 0.0191 est_time_show
2 0.0191 exit_itimers
2 0.0191 ext3_free_inode
2 0.0191 ext3_mknod
2 0.0191 ext3_new_inode
2 0.0191 ext3_orphan_get
2 0.0191 filemap_nopage
2 0.0191 find_mergeable_anon_vma
2 0.0191 free_as_io_context
2 0.0191 freed_request
2 0.0191 get_signal_to_deliver
2 0.0191 get_timestamp
2 0.0191 grab_cache_page_nowait
2 0.0191 handle_stop_signal
2 0.0191 hrtimer_run_queues
2 0.0191 init_once
2 0.0191 inode_wait
2 0.0191 kobject_move
2 0.0191 kobject_shadow_rename
2 0.0191 load_elf_library
2 0.0191 lookup_address
2 0.0191 number
2 0.0191 open_namei
2 0.0191 pcibios_enable_device
2 0.0191 pcibios_fixup_bus
2 0.0191 poison_store
2 0.0191 prio_tree_next
2 0.0191 prio_tree_remove
2 0.0191 put_io_context
2 0.0191 read_port
2 0.0191 register_blkdev
2 0.0191 register_chrdev
2 0.0191 reiserfs_delete_xattrs
2 0.0191 reiserfs_xattr_init
2 0.0191 relay_destroy_buf
2 0.0191 request_irq
2 0.0191 rpc_proc_exit
2 0.0191 rpc_proc_init
2 0.0191 rpc_proc_open
2 0.0191 rw_copy_check_uvector
2 0.0191 sb_set_blocksize
2 0.0191 sched_setscheduler
2 0.0191 simple_strtoul
2 0.0191 skge_probe
2 0.0191 slab_pad_check
2 0.0191 strcspn
2 0.0191 sys_chdir
2 0.0191 sys_faccessat
2 0.0191 sys_fchdir
2 0.0191 sys_mq_timedreceive
2 0.0191 sys_mq_timedsend
2 0.0191 sys_openat
2 0.0191 sys_sync_file_range
2 0.0191 t_start
2 0.0191 tick_notify
2 0.0191 timer_list_show
2 0.0191 try_to_wake_up
2 0.0191 vfs_mknod
2 0.0191 vgacon_startup
2 0.0191 vm_normal_page
2 0.0191 vmtruncate_range
2 0.0191 zone_watermark_ok
1 0.0096 __activate_task
1 0.0096 __blockdev_direct_IO
1 0.0096 __call_usermodehelper
1 0.0096 __cleanup_sighand
1 0.0096 __d_lookup
1 0.0096 __ext3_get_inode_loc
1 0.0096 __free_pages_ok
1 0.0096 __free_slab
1 0.0096 __generic_file_aio_write_nolock
1 0.0096 __journal_drop_transaction
1 0.0096 __link_path_walk
1 0.0096 __posix_lock_file
1 0.0096 __ptrace_link
1 0.0096 __remove_shared_vm_struct
1 0.0096 __round_jiffies_relative
1 0.0096 __sigqueue_alloc
1 0.0096 __vma_link_rb
1 0.0096 __vmalloc_area_node
1 0.0096 __wait_on_freeing_inode
1 0.0096 __wake_up_common
1 0.0096 arch_ptrace
1 0.0096 arp_ioctl
1 0.0096 as_can_break_anticipation
1 0.0096 as_choose_req
1 0.0096 athlon_setup_ctrs
1 0.0096 background_writeout
1 0.0096 badness
1 0.0096 bd_get_sb
1 0.0096 bio_alloc_bioset
1 0.0096 blk_execute_rq_nowait
1 0.0096 blk_hw_contig_segment
1 0.0096 blk_ordered_complete_seq
1 0.0096 blk_ordered_cur_seq
1 0.0096 blk_ordered_req_seq
1 0.0096 blk_queue_find_tag
1 0.0096 blk_queue_make_request
1 0.0096 blk_queue_resize_tags
1 0.0096 blk_sync_queue
1 0.0096 blkdev_get_block
1 0.0096 blkdev_get_blocks
1 0.0096 block_fsync
1 0.0096 block_invalidatepage
1 0.0096 block_uevent_filter
1 0.0096 call_filldir
1 0.0096 call_usermodehelper_pipe
1 0.0096 cap_bprm_apply_creds
1 0.0096 cap_bprm_secureexec
1 0.0096 cap_capset_check
1 0.0096 cap_inode_setxattr
1 0.0096 cap_settime
1 0.0096 cap_vm_enough_memory
1 0.0096 cdev_get
1 0.0096 chrdev_show
1 0.0096 clockevents_set_mode
1 0.0096 common_timer_get
1 0.0096 complete_all
1 0.0096 congestion_wait_interruptible
1 0.0096 cp_old_stat
1 0.0096 cpu_idle
1 0.0096 current_io_context
1 0.0096 d_rehash
1 0.0096 deactivate_slab
1 0.0096 dev_kfree_skb_any
1 0.0096 discard_slab
1 0.0096 dma_declare_coherent_memory
1 0.0096 do_fcntl
1 0.0096 do_getitimer
1 0.0096 do_lookup
1 0.0096 do_munmap
1 0.0096 do_sched_setscheduler
1 0.0096 do_tkill
1 0.0096 do_wait
1 0.0096 drive_stat_acct
1 0.0096 dupfd
1 0.0096 elevator_alloc
1 0.0096 elevator_init
1 0.0096 elf_core_dump
1 0.0096 elv_attr_store
1 0.0096 elv_completed_request
1 0.0096 elv_unregister
1 0.0096 end_buffer_async_write
1 0.0096 expand_files
1 0.0096 expand_stack
1 0.0096 ext3_count_dirs
1 0.0096 ext3_find_entry
1 0.0096 ext3_free_blocks_sb
1 0.0096 ext3_getblk
1 0.0096 ext3_group_add
1 0.0096 ext3_new_blocks
1 0.0096 ext3_read_inode
1 0.0096 ext3_reserve_inode_write
1 0.0096 ext3_setattr
1 0.0096 ext3_symlink
1 0.0096 ext3_try_to_allocate
1 0.0096 ext3_xattr_block_set
1 0.0096 ext3_xattr_get
1 0.0096 ext3_xattr_trusted_list
1 0.0096 ext3_xattr_user_set
1 0.0096 file_send_actor
1 0.0096 filemap_fdatawait
1 0.0096 find_extend_vma
1 0.0096 follow_up
1 0.0096 force_sig_info_fault
1 0.0096 frag_show
1 0.0096 free_compound_page
1 0.0096 free_pgd_range
1 0.0096 generic_drop_inode
1 0.0096 generic_permission
1 0.0096 generic_unplug_device
1 0.0096 get_request_wait
1 0.0096 get_symbol_pos
1 0.0096 getname
1 0.0096 ide_hwif_request_regions
1 0.0096 ide_mm_inb
1 0.0096 ide_pci_setup_ports
1 0.0096 ide_setup_ports
1 0.0096 init_idedisk_capacity
1 0.0096 init_object
1 0.0096 init_request_from_bio
1 0.0096 init_tag_map
1 0.0096 inode_sub_bytes
1 0.0096 invalidate_bh_lrus
1 0.0096 ioctl_by_bdev
1 0.0096 ip_fragment
1 0.0096 iput
1 0.0096 it_real_fn
1 0.0096 itimer_get_remtime
1 0.0096 journal_commit_transaction
1 0.0096 journal_recover
1 0.0096 journal_refile_buffer
1 0.0096 kill_anon_super
1 0.0096 kill_litter_super
1 0.0096 kmem_cache_destroy
1 0.0096 kobject_shadow_add
1 0.0096 ll_rw_block
1 0.0096 lo_ioctl
1 0.0096 lookup_bdev
1 0.0096 may_attach
1 0.0096 may_open
1 0.0096 mb_cache_create
1 0.0096 mb_cache_entry_insert
1 0.0096 microcode_write
1 0.0096 modify_acceptable_latency
1 0.0096 mounts_release
1 0.0096 mpage_end_io_write
1 0.0096 neigh_resolve_output
1 0.0096 netif_rx
1 0.0096 nobh_prepare_write
1 0.0096 notify_arch_cmos_timer
1 0.0096 open_by_devnum
1 0.0096 p4_fill_in_addresses
1 0.0096 page_address_in_vma
1 0.0096 pci_write
1 0.0096 pcibios_assign_all_busses
1 0.0096 pcibios_disable_device
1 0.0096 pdflush
1 0.0096 pipe_read_open
1 0.0096 pirq_serverworks_set
1 0.0096 poll_idle
1 0.0096 posix_cpu_timer_del
1 0.0096 posix_timer_event
1 0.0096 prepare_to_wait_exclusive
1 0.0096 proc_dodebug
1 0.0096 proc_pid_auxv
1 0.0096 proc_task_lookup
1 0.0096 ptrace_attach
1 0.0096 ptrace_writedata
1 0.0096 raw_bind
1 0.0096 read_chan
1 0.0096 register_chrdev_region
1 0.0096 reiserfs_listxattr
1 0.0096 relay_file_mmap
1 0.0096 relay_file_open
1 0.0096 rpc_mkdir
1 0.0096 rpc_proc_show
1 0.0096 rt_check_expire
1 0.0096 run_local_timers
1 0.0096 run_posix_cpu_timers
1 0.0096 sb_min_blocksize
1 0.0096 schedule_tail
1 0.0096 set_blocksize
1 0.0096 set_ksettings
1 0.0096 set_load_weight
1 0.0096 setup_arg_pages
1 0.0096 setup_irq
1 0.0096 sg_io
1 0.0096 show_regs
1 0.0096 show_stat
1 0.0096 show_state_filter
1 0.0096 shrink_zone
1 0.0096 skb_checksum
1 0.0096 skge_get_coalesce
1 0.0096 skge_poll
1 0.0096 skge_set_coalesce
1 0.0096 sock_aio_write
1 0.0096 strnicmp
1 0.0096 svc_proc_register
1 0.0096 sys_chroot
1 0.0096 sys_fchmodat
1 0.0096 sys_get_thread_area
1 0.0096 sys_getcwd
1 0.0096 sys_mknodat
1 0.0096 sys_mlockall
1 0.0096 sys_nice
1 0.0096 sys_renameat
1 0.0096 sys_sched_get_priority_min
1 0.0096 sys_sendfile
1 0.0096 sys_sendfile64
1 0.0096 sys_timer_settime
1 0.0096 sys_uselib
1 0.0096 sysctl_head_next
1 0.0096 sysfs_follow_link
1 0.0096 sysfs_slab_add
1 0.0096 tcp_enter_quickack_mode
1 0.0096 tcp_v4_connect
1 0.0096 tcp_v4_hash
1 0.0096 throttle_vm_writeout
1 0.0096 trace
1 0.0096 udp_lib_unhash
1 0.0096 udp_proc_unregister
1 0.0096 uevent_helper_store
1 0.0096 unmap_mapping_range_vma
1 0.0096 unuse_table
1 0.0096 user_shm_unlock
1 0.0096 validate_store
1 0.0096 vfs_ioctl
1 0.0096 vgacon_scrolldelta
1 0.0096 vm_stat_account
1 0.0096 vma_adjust
1 0.0096 vprintk
1 0.0096 wait_for_helper
1 0.0096 wait_for_partner
1 0.0096 wait_on_retry_sync_kiocb
1 0.0096 wq_sleep
1 0.0096 write_boundary_block
1 0.0096 write_cache_pages
+ date
Fri Aug 17 20:17:53 EEST 2007
[-- Attachment #6: oprof_two_disks.txt --]
[-- Type: text/plain, Size: 14877 bytes --]
+ date
Fri Aug 17 20:22:29 EEST 2007
+ rm -rf /var/lib/oprofile/
+ opcontrol --vmlinux=/home/jimis/dist/src/linux-2.6.22.3/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
2528 52.9535 vmlinux
874 18.3075 libc-2.6.1.so
759 15.8986 ld-2.6.1.so
358 7.4990 bash
CPU_CLK_UNHALT...|
samples| %|
------------------
357 99.7207 bash
1 0.2793 [vdso] (tgid:13041 range:0xb7f12000-0xb7f13000)
109 2.2832 gawk
68 1.4244 ISO8859-1.so
13 0.2723 grep
10 0.2095 locale-archive
7 0.1466 badblocks
CPU_CLK_UNHALT...|
samples| %|
------------------
5 71.4286 badblocks
2 28.5714 [vdso] (tgid:10910 range:0xb7fc6000-0xb7fc7000)
7 0.1466 libnetsnmp.so.15.0.0
7 0.1466 imap-login
5 0.1047 libdl-2.6.1.so
4 0.0838 libncurses.so.5.6
3 0.0628 ophelp
3 0.0628 libcrypto.so.0.9.8
3 0.0628 libpopt.so.0.0.0
3 0.0628 dovecot-auth
2 0.0419 libhistory.so.5.2
2 0.0419 libnetsnmpmibs.so.15.0.0
1 0.0209 ls
1 0.0209 libext2fs.so.2.4
1 0.0209 libreadline.so.5.2
1 0.0209 init
1 0.0209 id
1 0.0209 oprofiled
1 0.0209 screen-4.0.3
1 0.0209 libnetsnmpagent.so.15.0.0
1 0.0209 sshd
CPU_CLK_UNHALT...|
samples| %|
------------------
1 100.000 [vdso] (tgid:6746 range:0xb7f31000-0xb7f32000)
+ echo
+ echo
+ echo
+ opreport -l /home/jimis/dist/src/linux-2.6.22.3/vmlinux
CPU: PIII, speed 798.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
227 8.9794 rpc_print_iostats
206 8.1487 pre_reset
175 6.9225 kmem_cache_shrink
160 6.3291 irq_handler
112 4.4304 congestion_wait
100 3.9557 rpc_proc_open
79 3.1250 create_kmalloc_cache
76 3.0063 ide_do_request
72 2.8481 dump_task_extended_fpu
64 2.5316 vsscanf
56 2.2152 dump_task_regs
52 2.0570 do_syscall_trace
41 1.6218 interruptible_sleep_on
39 1.5427 __relay_reset
32 1.2658 do_wp_page
31 1.2263 kobject_rename
29 1.1472 sys_madvise
27 1.0680 __switch_to
24 0.9494 print_hex_dump
22 0.8703 test_set_page_writeback
20 0.7911 check_object
19 0.7516 do_ide_setup_pci_device
18 0.7120 sys_fadvise64_64
17 0.6725 rpc_proc_show
15 0.5934 ide_setup_pci_devices
14 0.5538 __handle_mm_fault
14 0.5538 handle_vm86_fault
14 0.5538 iput
13 0.5142 svc_seq_show
12 0.4747 do_page_fault
12 0.4747 vsnprintf
11 0.4351 calibrate_delay
11 0.4351 generic_file_buffered_write
11 0.4351 zap_pte
11 0.4351 zoneinfo_show
10 0.3956 access_process_vm
10 0.3956 vm_normal_page
9 0.3560 blk_release_queue
9 0.3560 block_truncate_page
9 0.3560 kmem_cache_create
9 0.3560 setup_sigcontext
8 0.3165 dio_get_page
7 0.2769 bdev_clear_inode
7 0.2769 elv_next_request
7 0.2769 loop_alloc
7 0.2769 print_bad_pte
7 0.2769 rpc_proc_exit
7 0.2769 unmap_vmas
7 0.2769 write_cache_pages
6 0.2373 __switch_to_xtra
6 0.2373 bio_split
6 0.2373 load_elf_binary
6 0.2373 on_freelist
6 0.2373 sys_mprotect
5 0.1978 __blk_put_request
5 0.1978 __blkdev_put
5 0.1978 bdget
5 0.1978 blk_cleanup_queue
5 0.1978 cont_prepare_write
5 0.1978 do_open
5 0.1978 do_sysctl_strategy
5 0.1978 idedisk_check_hpa
5 0.1978 lock_timer
5 0.1978 posix_cpu_nsleep
5 0.1978 sys_mq_timedsend
5 0.1978 test_clear_page_writeback
4 0.1582 __pte_alloc
4 0.1582 __setlease
4 0.1582 assign_all_busses
4 0.1582 bd_release_from_disk
4 0.1582 blk_get_request
4 0.1582 cdev_get
4 0.1582 dio_cleanup
4 0.1582 generic_ide_ioctl
4 0.1582 ioctl_by_bdev
4 0.1582 lookup_bdev
4 0.1582 prio_tree_insert
4 0.1582 relay_switch_subbuf
4 0.1582 rpc_count_iostats
4 0.1582 rpc_proc_init
4 0.1582 sched_setscheduler
4 0.1582 scsi_cmd_ioctl
4 0.1582 svc_proc_unregister
4 0.1582 sys_chroot
4 0.1582 sys_mincore
4 0.1582 sys_mq_open
4 0.1582 wake_up_new_task
3 0.1187 as_can_break_anticipation
3 0.1187 blk_init_queue_node
3 0.1187 blk_phys_contig_segment
3 0.1187 blkdev_close
3 0.1187 cap_bprm_apply_creds
3 0.1187 dio_new_bio
3 0.1187 do_sendfile
3 0.1187 do_sync_readv_writev
3 0.1187 do_sysctl
3 0.1187 elf_core_dump
3 0.1187 fcntl_setlk
3 0.1187 filemap_nopage
3 0.1187 follow_page
3 0.1187 generic_unplug_device
3 0.1187 locks_mandatory_area
3 0.1187 pcibios_setup
3 0.1187 pipe_write
3 0.1187 prio_tree_next
3 0.1187 proc_dodebug
3 0.1187 register_chrdev
3 0.1187 remap_pfn_range
3 0.1187 rw_copy_check_uvector
3 0.1187 send_group_sigqueue
3 0.1187 sys_fchmodat
3 0.1187 sys_remap_file_pages
3 0.1187 vfs_mkdir
3 0.1187 vfs_mknod
3 0.1187 vma_adjust
3 0.1187 vma_merge
2 0.0791 __end_that_request_first
2 0.0791 __free_slab
2 0.0791 add_to_page_cache_lru
2 0.0791 as_fifo_expired
2 0.0791 bd_claim_by_disk
2 0.0791 bio_pair_end_2
2 0.0791 blk_alloc_queue_node
2 0.0791 blk_done_softirq
2 0.0791 blk_ordered_req_seq
2 0.0791 blk_sync_queue
2 0.0791 calculate_totalreserve_pages
2 0.0791 copy_page_range
2 0.0791 dentry_open
2 0.0791 do_exit
2 0.0791 do_mmap_pgoff
2 0.0791 do_mremap
2 0.0791 do_msgrcv
2 0.0791 do_munmap
2 0.0791 do_notify_parent
2 0.0791 dump_thread
2 0.0791 elv_rb_del
2 0.0791 ext3_count_dirs
2 0.0791 ext3_free_inode
2 0.0791 ext3_new_inode
2 0.0791 flush_old_exec
2 0.0791 force_sig_info_fault
2 0.0791 generic_shutdown_super
2 0.0791 grab_cache_page_nowait
2 0.0791 handle_stop_signal
2 0.0791 ide_hwif_request_regions
2 0.0791 init_cpu_workqueue
2 0.0791 init_idedisk_capacity
2 0.0791 insert_vm_struct
2 0.0791 install_file_pte
2 0.0791 interruptible_sleep_on_timeout
2 0.0791 ll_back_merge_fn
2 0.0791 page_address_in_vma
2 0.0791 pirq_enable_irq
2 0.0791 proc_task_lookup
2 0.0791 read_port
2 0.0791 relay_file_read_consume
2 0.0791 rt_mutex_setprio
2 0.0791 rtc_cmos_read
2 0.0791 sg_scsi_ioctl
2 0.0791 subbuf_send_actor
2 0.0791 sync_page_range
2 0.0791 sys_chdir
2 0.0791 sys_chown
2 0.0791 sys_mlockall
2 0.0791 sys_sendfile
2 0.0791 sys_setfsuid
2 0.0791 sys_sysinfo
2 0.0791 t_start
2 0.0791 tick_notify
2 0.0791 vfs_rename
2 0.0791 vfs_unlink
2 0.0791 vgacon_startup
2 0.0791 vmtruncate
1 0.0396 __blk_free_tags
1 0.0396 __d_find_alias
1 0.0396 __elv_add_request
1 0.0396 __f_setown
1 0.0396 __filemap_copy_from_user_iovec_inatomic
1 0.0396 __group_complete_signal
1 0.0396 __is_prefetch
1 0.0396 __link_path_walk
1 0.0396 __netif_rx_schedule
1 0.0396 __oom_kill_task
1 0.0396 __pte_alloc_kernel
1 0.0396 __put_user_1
1 0.0396 __remove_suid
1 0.0396 __set_page_dirty_buffers
1 0.0396 __set_special_pids
1 0.0396 __strncpy_from_user
1 0.0396 __vmalloc_area_node
1 0.0396 __wait_on_freeing_inode
1 0.0396 __wake_up_common
1 0.0396 alloc_chrdev_region
1 0.0396 apply_microcode
1 0.0396 arch_align_stack
1 0.0396 as_antic_stop
1 0.0396 as_choose_req
1 0.0396 as_read_batch_expire_show
1 0.0396 as_read_expire_store
1 0.0396 as_update_iohist
1 0.0396 badness
1 0.0396 blk_execute_rq
1 0.0396 blk_execute_rq_nowait
1 0.0396 blk_free_tags
1 0.0396 blk_hw_contig_segment
1 0.0396 blk_plug_device
1 0.0396 blk_queue_find_tag
1 0.0396 blk_queue_hardsect_size
1 0.0396 blk_register_queue
1 0.0396 blk_remove_plug
1 0.0396 blkdev_direct_IO
1 0.0396 blkdev_get
1 0.0396 blkdev_open
1 0.0396 block_fsync
1 0.0396 call_usermodehelper_pipe
1 0.0396 can_do_mlock
1 0.0396 cap_ptrace
1 0.0396 cap_settime
1 0.0396 cap_task_post_setuid
1 0.0396 cascade
1 0.0396 cdev_del
1 0.0396 check_disk_change
1 0.0396 check_slab
1 0.0396 congestion_wait_interruptible
1 0.0396 cpu_idle_wait
1 0.0396 create_new_namespaces
1 0.0396 current_io_context
1 0.0396 d_alloc
1 0.0396 d_hash_and_lookup
1 0.0396 del_timer
1 0.0396 dev_hard_start_xmit
1 0.0396 dio_complete
1 0.0396 do_brk
1 0.0396 do_fcntl
1 0.0396 do_generic_mapping_read
1 0.0396 do_iret_error
1 0.0396 do_mpage_readpage
1 0.0396 do_sched_setscheduler
1 0.0396 do_sync
1 0.0396 do_writepages
1 0.0396 dup_fd
1 0.0396 elevator_alloc
1 0.0396 elv_completed_request
1 0.0396 elv_insert
1 0.0396 elv_merged_request
1 0.0396 elv_requeue_request
1 0.0396 elv_unregister
1 0.0396 end_buffer_read_sync
1 0.0396 exit_itimers
1 0.0396 ext3_bread
1 0.0396 ext3_rename
1 0.0396 ext3_xattr_trusted_list
1 0.0396 free_pgtables
1 0.0396 freed_request
1 0.0396 generic_drop_inode
1 0.0396 generic_file_sendfile
1 0.0396 generic_fillattr
1 0.0396 generic_ide_resume
1 0.0396 generic_ide_suspend
1 0.0396 generic_permission
1 0.0396 get_next_timer_interrupt
1 0.0396 get_symbol_offset
1 0.0396 get_timestamp
1 0.0396 getname
1 0.0396 ide_intr
1 0.0396 ide_pci_setup_ports
1 0.0396 ide_setup_pci_device
1 0.0396 ide_taskfile_ioctl
1 0.0396 init_object
1 0.0396 init_once
1 0.0396 internal_add_timer
1 0.0396 ip_options_get_from_user
1 0.0396 journal_add_journal_head
1 0.0396 kill_anon_super
1 0.0396 kmem_ptr_validate
1 0.0396 kobject_get_path
1 0.0396 kobject_move
1 0.0396 laptop_io_completion
1 0.0396 link_path_walk
1 0.0396 lock_task_sighand
1 0.0396 log_do_checkpoint
1 0.0396 lookup_one_len
1 0.0396 madvise_need_mmap_write
1 0.0396 may_open
1 0.0396 mb_cache_create
1 0.0396 microcode_write
1 0.0396 mm_release
1 0.0396 mounts_release
1 0.0396 mpage_readpages
1 0.0396 mqueue_read_file
1 0.0396 netstat_show
1 0.0396 nobh_prepare_write
1 0.0396 normalize_rt_tasks
1 0.0396 notify_arch_cmos_timer
1 0.0396 nr_processes
1 0.0396 number
1 0.0396 open_bdev_excl
1 0.0396 open_namei
1 0.0396 out_of_memory
1 0.0396 p4_start
1 0.0396 page_cache_read
1 0.0396 page_mkclean
1 0.0396 pcibios_disable_device
1 0.0396 pdflush
1 0.0396 pipe_poll
1 0.0396 poison_store
1 0.0396 posix_cpu_nsleep_restart
1 0.0396 prepare_timeout
1 0.0396 prio_tree_remove
1 0.0396 proc_pid_auxv
1 0.0396 profile_event_unregister
1 0.0396 profile_hits
1 0.0396 ptmx_open
1 0.0396 ptrace_attach
1 0.0396 read_cache_page_async
1 0.0396 read_profile
1 0.0396 recalc_task_prio
1 0.0396 register_console
1 0.0396 relay_file_mmap
1 0.0396 relay_file_open
1 0.0396 relay_file_poll
1 0.0396 release_task
1 0.0396 reparent_thread
1 0.0396 request_irq
1 0.0396 rpc_alloc_iostats
1 0.0396 rt_fill_info
1 0.0396 run_posix_cpu_timers
1 0.0396 rw_verify_area
1 0.0396 sb_set_blocksize
1 0.0396 sched_exit
1 0.0396 sched_fork
1 0.0396 send_signal
1 0.0396 set_bdi_congested
1 0.0396 set_blocksize
1 0.0396 set_load_weight
1 0.0396 set_page_dirty_lock
1 0.0396 setup_irq
1 0.0396 sg_io
1 0.0396 show_partition
1 0.0396 show_regs
1 0.0396 show_schedstat
1 0.0396 shrink_zone
1 0.0396 strpbrk
1 0.0396 subbuf_read_actor
1 0.0396 svc_proc_register
1 0.0396 sys_faccessat
1 0.0396 sys_io_getevents
1 0.0396 sys_mlock
1 0.0396 sys_munlock
1 0.0396 sys_sched_get_priority_min
1 0.0396 sys_setitimer
1 0.0396 sys_setregid
1 0.0396 sys_sigaction
1 0.0396 sys_splice
1 0.0396 sys_stat
1 0.0396 sys_statfs
1 0.0396 sys_sync_file_range
1 0.0396 sys_sysctl
1 0.0396 sys_vm86old
1 0.0396 sysctl_head_finish
1 0.0396 sysctl_head_next
1 0.0396 try_to_free_buffers
1 0.0396 unuse_table
1 0.0396 user_shm_lock
1 0.0396 validate_store
1 0.0396 vfs_ioctl
1 0.0396 vm_stat_account
1 0.0396 vma_link
1 0.0396 vprintk
1 0.0396 wait_on_retry_sync_kiocb
1 0.0396 wait_on_work
1 0.0396 wake_up_inode
1 0.0396 will_become_orphaned_pgrp
1 0.0396 write_boundary_block
1 0.0396 write_full
+ date
Fri Aug 17 20:23:34 EEST 2007
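The `+`-prefixed lines above are an `sh -x` trace of the attached script.sh. As a sketch only (the oprofile 0.9-era `opcontrol` interface as shown in the trace; the `DRY_RUN` wrapper is an illustrative addition, not part of the original script), the procedure amounts to:

```shell
#!/bin/sh
# Sketch of the profiling procedure traced above.
# DRY_RUN=1 (the default here) prints the commands instead of running them,
# since opcontrol requires root and an oprofile-enabled kernel.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

VMLINUX=/home/jimis/dist/src/linux-2.6.22.3/vmlinux

run rm -rf /var/lib/oprofile/       # start from a clean sample store
run opcontrol --vmlinux="$VMLINUX"  # point the daemon at the running kernel image
run opcontrol --start               # sample CPU_CLK_UNHALTED (the default event)
run sleep 5                         # collect samples for 5 seconds
run opcontrol --shutdown            # stop profiling, kill the daemon
run opreport                        # per-binary summary
run opreport -l "$VMLINUX"          # per-symbol breakdown of kernel time
```

Run as root with `DRY_RUN=0` to actually profile; the two `opreport` invocations produce the per-binary and per-symbol listings seen in the attachments.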
[-- Attachment #7: oprof_two_disks_bad2.txt --]
[-- Type: text/plain, Size: 17366 bytes --]
+ date
Sat Aug 18 00:13:48 EEST 2007
+ rm -rf /var/lib/oprofile/
+ opcontrol --vmlinux=/home/jimis/dist/src/linux-2.6.22.3/vmlinux
+ opcontrol --start
Using default event: CPU_CLK_UNHALTED:100000:0:1:1
Daemon started.
Profiler running.
+ sleep 5
+ opcontrol --shutdown
Stopping profiling.
Killing daemon.
+ echo
+ echo
+ echo
+ opreport
CPU: PIII, speed 798.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
CPU_CLK_UNHALT...|
samples| %|
------------------
3020 34.9902 vmlinux
1920 22.2454 libc-2.6.1.so
1274 14.7607 libpython2.5.so.1.0
1140 13.2082 perl
432 5.0052 mpop
CPU_CLK_UNHALT...|
samples| %|
------------------
430 99.5370 mpop
2 0.4630 [vdso] (tgid:16432 range:0xb7f24000-0xb7f25000)
253 2.9313 bash
CPU_CLK_UNHALT...|
samples| %|
------------------
252 99.6047 bash
1 0.3953 [vdso] (tgid:16796 range:0xb7fbe000-0xb7fbf000)
229 2.6532 ld-2.6.1.so
105 1.2165 libgnutls.so.13.3.0
49 0.5677 ISO8859-1.so
46 0.5330 libgcrypt.so.11.2.3
38 0.4403 libpthread-2.6.1.so
27 0.3128 badblocks
CPU_CLK_UNHALT...|
samples| %|
------------------
17 62.9630 badblocks
8 29.6296 [vdso] (tgid:16297 range:0xb7fb6000-0xb7fb7000)
2 7.4074 [vdso] (tgid:16298 range:0xb7f62000-0xb7f63000)
20 0.2317 screen-4.0.3
CPU_CLK_UNHALT...|
samples| %|
------------------
19 95.0000 screen-4.0.3
1 5.0000 [vdso] (tgid:16282 range:0xb7f25000-0xb7f26000)
11 0.1274 slocate
CPU_CLK_UNHALT...|
samples| %|
------------------
7 63.6364 slocate
4 36.3636 [vdso] (tgid:16652 range:0xb7efe000-0xb7eff000)
10 0.1159 imap-login
CPU_CLK_UNHALT...|
samples| %|
------------------
8 80.0000 imap-login
2 20.0000 [vdso] (tgid:15896 range:0xb7ef9000-0xb7efa000)
9 0.1043 libncurses.so.5.6
8 0.0927 gawk
7 0.0811 grep
6 0.0695 python2.5
CPU_CLK_UNHALT...|
samples| %|
------------------
5 83.3333 [vdso] (tgid:16627 range:0xb7efd000-0xb7efe000)
1 16.6667 [vdso] (tgid:16686 range:0xb7fa7000-0xb7fa8000)
6 0.0695 libnetsnmp.so.15.0.0
5 0.0579 libext2fs.so.2.4
3 0.0348 dovecot
CPU_CLK_UNHALT...|
samples| %|
------------------
2 66.6667 dovecot
1 33.3333 [vdso] (tgid:1923 range:0xb7ef7000-0xb7ef8000)
3 0.0348 sshd
CPU_CLK_UNHALT...|
samples| %|
------------------
2 66.6667 sshd
1 33.3333 [vdso] (tgid:15858 range:0xb7f64000-0xb7f65000)
2 0.0232 libcrypto.so.0.9.8
1 0.0116 ls
1 0.0116 tr
1 0.0116 libpcre.so.0.0.1
1 0.0116 which
1 0.0116 libnetsnmpagent.so.15.0.0
1 0.0116 libnetsnmpmibs.so.15.0.0
1 0.0116 locale-archive
1 0.0116 dovecot-auth
+ echo
+ echo
+ echo
+ opreport -l /home/jimis/dist/src/linux-2.6.22.3/vmlinux
CPU: PIII, speed 798.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % symbol name
282 9.3377 pre_reset
231 7.6490 rpc_print_iostats
222 7.3510 do_syscall_trace
146 4.8344 ide_do_request
144 4.7682 dump_task_regs
131 4.3377 rpc_proc_open
122 4.0397 congestion_wait
98 3.2450 vsscanf
52 1.7219 __switch_to
33 1.0927 interruptible_sleep_on
32 1.0596 check_object
32 1.0596 setup_sigcontext
32 1.0596 sys_sigaction
31 1.0265 block_truncate_page
31 1.0265 sys_fadvise64_64
31 1.0265 test_set_page_writeback
28 0.9272 __blkdev_put
28 0.9272 sys_madvise
26 0.8609 svc_seq_show
22 0.7285 do_wp_page
22 0.7285 zap_pte
21 0.6954 dump_task_extended_fpu
21 0.6954 elv_next_request
21 0.6954 generic_file_buffered_write
21 0.6954 unmap_vmas
20 0.6623 zoneinfo_show
19 0.6291 __relay_reset
19 0.6291 cont_prepare_write
18 0.5960 bdget
18 0.5960 calibrate_delay
18 0.5960 do_page_fault
17 0.5629 ide_setup_pci_devices
17 0.5629 scsi_cmd_ioctl
17 0.5629 vsnprintf
16 0.5298 bd_release_from_disk
16 0.5298 bdev_clear_inode
16 0.5298 do_ide_setup_pci_device
15 0.4967 kobject_rename
14 0.4636 blkdev_close
14 0.4636 print_hex_dump
13 0.4305 access_process_vm
13 0.4305 do_open
12 0.3974 bio_split
12 0.3974 blk_release_queue
12 0.3974 do_sendfile
12 0.3974 handle_vm86_fault
11 0.3642 __handle_mm_fault
11 0.3642 bd_claim_by_disk
11 0.3642 rpc_proc_show
10 0.3311 __switch_to_xtra
10 0.3311 posix_cpu_nsleep
9 0.2980 do_generic_mapping_read
9 0.2980 install_file_pte
9 0.2980 sg_scsi_ioctl
9 0.2980 vgacon_startup
8 0.2649 do_mpage_readpage
8 0.2649 prio_tree_insert
8 0.2649 test_clear_page_writeback
7 0.2318 init_once
7 0.2318 locks_mandatory_area
7 0.2318 loop_alloc
7 0.2318 register_blkdev
7 0.2318 rt_mutex_setprio
6 0.1987 daemonize
6 0.1987 idedisk_check_hpa
6 0.1987 pcibios_setup
6 0.1987 ptrace_request
6 0.1987 shrink_zone
6 0.1987 sys_chdir
6 0.1987 sys_mq_open
5 0.1656 as_update_iohist
5 0.1656 blk_cleanup_queue
5 0.1656 blkdev_direct_IO
5 0.1656 current_io_context
5 0.1656 do_sysctl_strategy
5 0.1656 ext3_xattr_list
5 0.1656 ide_setup_ports
5 0.1656 igrab
5 0.1656 on_freelist
5 0.1656 register_chrdev
5 0.1656 show_partition
5 0.1656 vfs_rename
5 0.1656 vgacon_scrolldelta
5 0.1656 vm_normal_page
5 0.1656 vmtruncate
5 0.1656 write_cache_pages
4 0.1325 __free_slab
4 0.1325 __generic_file_aio_write_nolock
4 0.1325 __make_request
4 0.1325 bd_forget
4 0.1325 blk_queue_make_request
4 0.1325 dio_get_page
4 0.1325 dio_new_bio
4 0.1325 do_exit
4 0.1325 do_fcntl
4 0.1325 elv_insert
4 0.1325 generic_file_sendfile
4 0.1325 generic_ide_suspend
4 0.1325 generic_segment_checks
4 0.1325 init_idedisk_capacity
4 0.1325 kill_litter_super
4 0.1325 load_elf_binary
4 0.1325 lock_timer
4 0.1325 number
4 0.1325 page_cache_read
4 0.1325 prio_tree_next
4 0.1325 process_slab
4 0.1325 read_cache_page_async
4 0.1325 setup_irq
4 0.1325 sg_io
4 0.1325 sync_page_range
3 0.0993 __break_lease
3 0.0993 __end_that_request_first
3 0.0993 add_to_page_cache
3 0.0993 athlon_setup_ctrs
3 0.0993 blk_execute_rq_nowait
3 0.0993 blk_hw_contig_segment
3 0.0993 blk_ordered_req_seq
3 0.0993 blk_queue_find_tag
3 0.0993 blk_rq_map_kern
3 0.0993 cap_bprm_apply_creds
3 0.0993 cap_settime
3 0.0993 cap_task_post_setuid
3 0.0993 check_disk_change
3 0.0993 check_slab
3 0.0993 d_invalidate
3 0.0993 dio_cleanup
3 0.0993 dio_complete
3 0.0993 do_mremap
3 0.0993 do_sync_write
3 0.0993 do_wait
3 0.0993 elv_completed_request
3 0.0993 est_time_show
3 0.0993 ext3_xattr_set_handle
3 0.0993 fcntl_setlk
3 0.0993 generic_ide_ioctl
3 0.0993 ioctl_by_bdev
3 0.0993 iput
3 0.0993 ll_back_merge_fn
3 0.0993 lookup_bdev
3 0.0993 open_by_devnum
3 0.0993 page_address_in_vma
3 0.0993 pipe_write
3 0.0993 posix_cpu_nsleep_restart
3 0.0993 relay_file_read
3 0.0993 rpc_proc_init
3 0.0993 sb_min_blocksize
3 0.0993 sb_set_blocksize
3 0.0993 svc_proc_unregister
3 0.0993 sys_mprotect
3 0.0993 sys_mq_timedsend
3 0.0993 tcp_rcv_state_process
3 0.0993 vma_adjust
2 0.0662 __blk_free_tags
2 0.0662 __journal_abort_hard
2 0.0662 __journal_drop_transaction
2 0.0662 __pte_alloc
2 0.0662 __vmalloc_area_node
2 0.0662 add_to_page_cache_lru
2 0.0662 aio_complete
2 0.0662 alloc_node_mem_map
2 0.0662 as_can_break_anticipation
2 0.0662 as_choose_req
2 0.0662 as_read_batch_expire_store
2 0.0662 as_read_expire_store
2 0.0662 assign_all_busses
2 0.0662 background_writeout
2 0.0662 bdput
2 0.0662 bio_alloc_bioset
2 0.0662 bio_endio
2 0.0662 blk_alloc_queue_node
2 0.0662 blk_end_sync_rq
2 0.0662 blkdev_open
2 0.0662 block_uevent_filter
2 0.0662 cdev_del
2 0.0662 cdev_get
2 0.0662 copy_page_range
2 0.0662 dio_send_cur_page
2 0.0662 do_mmap_pgoff
2 0.0662 do_notify_parent
2 0.0662 do_sync_readv_writev
2 0.0662 early_serial_putc
2 0.0662 early_serial_write
2 0.0662 elevator_init
2 0.0662 elv_iosched_allow_merge
2 0.0662 elv_rq_merge_ok
2 0.0662 free_as_io_context
2 0.0662 generic_shutdown_super
2 0.0662 grab_cache_page_nowait
2 0.0662 hwif_request_region
2 0.0662 ide_pci_setup_ports
2 0.0662 ide_setup_pci_device
2 0.0662 ide_taskfile_ioctl
2 0.0662 idle_cpu
2 0.0662 journal_flush
2 0.0662 kmem_cache_create
2 0.0662 lookup_one_len
2 0.0662 mpage_readpages
2 0.0662 notify_arch_cmos_timer
2 0.0662 open_bdev_excl
2 0.0662 pirq_piix_set
2 0.0662 print_bad_pte
2 0.0662 prio_tree_remove
2 0.0662 proc_dodebug
2 0.0662 release_task
2 0.0662 rpc_proc_exit
2 0.0662 rtc_cmos_read
2 0.0662 send_sigio
2 0.0662 set_blocksize
2 0.0662 set_ksettings
2 0.0662 set_using_dma
2 0.0662 sha_transform
2 0.0662 show_schedstat
2 0.0662 sprint_symbol
2 0.0662 sys_faccessat
2 0.0662 sys_fchmodat
2 0.0662 sys_remap_file_pages
2 0.0662 sys_vm86old
2 0.0662 sysctl_head_next
2 0.0662 t_start
2 0.0662 throttle_vm_writeout
2 0.0662 vfs_ioctl
2 0.0662 vfs_mknod
2 0.0662 vfs_unlink
2 0.0662 vgacon_deinit
2 0.0662 vmalloc_sync_all
2 0.0662 wake_up_new_task
2 0.0662 write_boundary_block
2 0.0662 zone_watermark_ok
1 0.0331 __blk_put_request
1 0.0331 __blkdev_get
1 0.0331 __filemap_copy_from_user_iovec_inatomic
1 0.0331 __find_get_block
1 0.0331 __follow_mount
1 0.0331 __free_pages_ok
1 0.0331 __is_prefetch
1 0.0331 __netif_schedule
1 0.0331 __register_chrdev_region
1 0.0331 __remove_hrtimer
1 0.0331 __set_page_dirty_buffers
1 0.0331 __set_page_dirty_nobuffers
1 0.0331 add_timer_randomness
1 0.0331 arch_ptrace
1 0.0331 as_fifo_expired
1 0.0331 as_put_io_context
1 0.0331 as_trim
1 0.0331 badness
1 0.0331 bio_pair_end_2
1 0.0331 bitmap_find_free_region
1 0.0331 blk_free_tags
1 0.0331 blk_init_queue_node
1 0.0331 blk_ordered_cur_seq
1 0.0331 blk_queue_resize_tags
1 0.0331 blk_remove_plug
1 0.0331 blk_sync_queue
1 0.0331 blkdev_get_block
1 0.0331 calculate_totalreserve_pages
1 0.0331 cap_vm_enough_memory
1 0.0331 clocksource_watchdog
1 0.0331 complete
1 0.0331 complete_all
1 0.0331 congestion_wait_interruptible
1 0.0331 copy_process
1 0.0331 cpu_idle
1 0.0331 create_new_namespaces
1 0.0331 current_is_keventd
1 0.0331 dentry_open
1 0.0331 dma_declare_coherent_memory
1 0.0331 do_alignment_check
1 0.0331 do_coredump
1 0.0331 do_getitimer
1 0.0331 do_kern_mount
1 0.0331 do_munmap
1 0.0331 do_sched_setscheduler
1 0.0331 do_sync
1 0.0331 do_sync_read
1 0.0331 do_sys_poll
1 0.0331 do_sysctl
1 0.0331 do_syslog
1 0.0331 do_timer
1 0.0331 do_utimes
1 0.0331 drive_stat_acct
1 0.0331 dump_thread
1 0.0331 dup_fd
1 0.0331 elevator_alloc
1 0.0331 eligible_child
1 0.0331 elv_attr_store
1 0.0331 elv_rb_add
1 0.0331 elv_rb_del
1 0.0331 elv_unregister
1 0.0331 end_buffer_async_write
1 0.0331 expand_stack
1 0.0331 ext3_count_dirs
1 0.0331 ext3_new_blocks
1 0.0331 ext3_orphan_get
1 0.0331 ext3_rename
1 0.0331 ext3_xattr_block_set
1 0.0331 ext3_xattr_get
1 0.0331 ext3_xattr_set
1 0.0331 f_delown
1 0.0331 filemap_fdatawait
1 0.0331 filemap_nopage
1 0.0331 flush_old_exec
1 0.0331 flush_thread
1 0.0331 fn_hash_insert
1 0.0331 follow_mount
1 0.0331 force_sig_info_fault
1 0.0331 force_sigsegv
1 0.0331 frag_start
1 0.0331 free_fdtable_work
1 0.0331 freed_request
1 0.0331 generic_fillattr
1 0.0331 generic_ide_resume
1 0.0331 generic_permission
1 0.0331 generic_unplug_device
1 0.0331 get_request_wait
1 0.0331 get_signal_to_deliver
1 0.0331 get_symbol_offset
1 0.0331 ide_abort
1 0.0331 init_object
1 0.0331 init_tag_map
1 0.0331 inode_add_bytes
1 0.0331 insert_wq_barrier
1 0.0331 install_page
1 0.0331 interruptible_sleep_on_timeout
1 0.0331 ip_fragment
1 0.0331 itimer_get_remtime
1 0.0331 journal_start
1 0.0331 kill_anon_super
1 0.0331 kill_fasync
1 0.0331 kmem_ptr_validate
1 0.0331 kobject_register
1 0.0331 kobject_shadow_add
1 0.0331 kobject_uevent_env
1 0.0331 ktime_get_real
1 0.0331 link_path_walk
1 0.0331 lo_ioctl
1 0.0331 locks_insert_block
1 0.0331 log_do_checkpoint
1 0.0331 madvise_need_mmap_write
1 0.0331 memory_open
1 0.0331 mincore_page
1 0.0331 nobh_prepare_write
1 0.0331 normalize_rt_tasks
1 0.0331 out_of_memory
1 0.0331 page_mkclean
1 0.0331 pcibios_fixup_bus
1 0.0331 posix_cpu_timer_set
1 0.0331 posix_timer_event
1 0.0331 prepare_to_wait_exclusive
1 0.0331 prio_tree_left
1 0.0331 proc_pid_auxv
1 0.0331 proc_task_lookup
1 0.0331 profile_hits
1 0.0331 profile_task_exit
1 0.0331 ptrace_attach
1 0.0331 ptrace_detach
1 0.0331 ptrace_writedata
1 0.0331 put_io_context
1 0.0331 rb_first
1 0.0331 read_port
1 0.0331 read_profile
1 0.0331 red_zone_store
1 0.0331 register_posix_clock
1 0.0331 relay_file_mmap
1 0.0331 relay_file_open
1 0.0331 relay_file_read_consume
1 0.0331 relay_file_release
1 0.0331 release_console_sem
1 0.0331 reparent_thread
1 0.0331 request_irq
1 0.0331 rq_init
1 0.0331 run_local_timers
1 0.0331 run_posix_cpu_timers
1 0.0331 rw_copy_check_uvector
1 0.0331 rw_verify_area
1 0.0331 sched_exit
1 0.0331 sched_setscheduler
1 0.0331 send_group_sigqueue
1 0.0331 set_close_on_exec
1 0.0331 set_load_weight
1 0.0331 skge_set_coalesce
1 0.0331 sock_aio_write
1 0.0331 sock_sendmsg
1 0.0331 strcasecmp
1 0.0331 strncasecmp
1 0.0331 strstr
1 0.0331 subbuf_send_actor
1 0.0331 svc_proc_register
1 0.0331 sys_chroot
1 0.0331 sys_mincore
1 0.0331 sys_munlockall
1 0.0331 sys_openat
1 0.0331 sys_sendfile64
1 0.0331 sys_setfsuid
1 0.0331 sys_sysctl
1 0.0331 sys_tee
1 0.0331 sysfs_follow_link
1 0.0331 task_prio
1 0.0331 tcp_add_reno_sack
1 0.0331 timekeeping_resume
1 0.0331 try_acquire_console_sem
1 0.0331 uevent_helper_store
1 0.0331 unix_release_sock
1 0.0331 unregister_timer_hook
1 0.0331 unuse_table
1 0.0331 update_iter
1 0.0331 user_shm_lock
1 0.0331 vfs_mkdir
1 0.0331 vma_merge
1 0.0331 wait_on_page_writeback_range
1 0.0331 wait_on_retry_sync_kiocb
1 0.0331 wb_kupdate
+ date
Sat Aug 18 00:16:13 EEST 2007
[-- Attachment #8: vmstat_idle.txt --]
[-- Type: text/plain, Size: 944 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 138400 28520 68632 0 0 35 1854 143 96 20 3 72 5
0 0 0 138400 28520 68632 0 0 0 0 101 11 0 0 100 0
0 0 0 138400 28520 68632 0 0 0 0 103 13 0 0 100 0
0 0 0 138400 28520 68632 0 0 0 756 154 15 0 0 100 0
0 0 0 138400 28520 68632 0 0 0 0 106 13 0 0 100 0
0 0 0 138400 28520 68632 0 0 0 0 106 13 0 0 100 0
0 0 0 138400 28528 68632 0 0 0 52 111 21 0 0 100 0
0 0 0 138400 28528 68632 0 0 0 0 104 18 0 1 99 0
0 0 0 138400 28528 68632 0 0 0 0 102 18 0 0 100 0
0 0 0 138400 28528 68632 0 0 0 0 103 9 0 0 100 0
[-- Attachment #9: vmstat_one_disk.txt --]
[-- Type: text/plain, Size: 936 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 1 0 137068 29776 68644 0 0 28 2948 160 126 23 4 67 6
0 1 0 137068 29776 68644 0 0 0 21696 442 690 0 2 0 98
0 1 0 137068 29776 68644 0 0 0 21632 440 690 0 1 0 99
0 1 0 137068 29776 68644 0 0 0 21696 440 688 0 3 0 97
1 1 0 137068 29776 68644 0 0 0 21632 440 690 0 3 0 97
1 1 0 137068 29784 68644 0 0 0 21688 441 697 0 2 0 98
1 1 0 137068 29784 68644 0 0 0 21696 444 695 0 1 0 99
0 1 0 137068 29784 68644 0 0 0 21632 439 689 0 1 0 99
0 1 0 137068 29784 68644 0 0 0 21632 439 689 0 3 0 97
0 1 0 137068 29784 68644 0 0 0 21696 440 687 0 2 0 98
[-- Attachment #10: vmstat_two_disks.txt --]
[-- Type: text/plain, Size: 936 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 2 0 133908 31200 70236 0 0 26 4585 185 177 22 4 61 14
2 2 0 133908 31200 70236 0 0 0 15936 354 488 0 2 0 98
1 2 0 133908 31200 70236 0 0 0 16064 356 514 0 1 0 99
1 2 0 133908 31200 70236 0 0 0 16000 354 496 0 1 0 99
2 2 0 133908 31200 70236 0 0 0 15872 350 481 0 3 0 97
2 2 0 133908 31200 70236 0 0 0 15872 350 477 0 3 0 97
2 2 0 133908 31208 70236 0 0 0 15984 357 507 0 2 0 98
2 2 0 133908 31208 70236 0 0 0 15872 352 481 0 4 0 96
3 2 0 133908 31208 70236 0 0 0 15872 351 469 0 2 0 98
3 2 0 133908 31208 70236 0 0 0 16000 355 500 0 1 0 99
[-- Attachment #11: vmstat_two_disks_bad2.txt --]
[-- Type: text/plain, Size: 936 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
8 1 0 8260 64260 117276 0 0 18 2481 239 279 44 7 44 5
8 2 0 8200 64260 117276 0 0 0 15920 361 510 89 11 0 0
9 2 0 8200 64268 117276 0 0 0 16100 362 511 89 11 0 0
9 2 0 8200 64268 117276 0 0 0 16000 353 495 85 15 0 0
8 2 0 8200 64268 117276 0 0 0 16000 352 502 90 10 0 0
9 2 0 8200 64268 117276 0 0 0 15872 350 494 88 12 0 0
8 2 0 8200 64268 117276 0 0 0 15872 350 497 91 9 0 0
9 2 0 8200 64268 117276 0 0 0 15872 353 471 75 25 0 0
8 2 0 8200 64276 117276 0 0 0 15956 356 521 88 12 0 0
9 2 0 8200 64276 117276 0 0 0 16000 353 516 92 8 0 0
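For reading the vmstat attachments above: the final four columns are user, system, idle, and I/O-wait CPU percentages, which sum to roughly 100. A small illustrative parser (not part of the original scripts) makes the saturated case from vmstat_two_disks_bad2.txt explicit:

```python
# Column names from the vmstat header line in the attachments.
FIELDS = "r b swpd free buff cache si so bi bo in cs us sy id wa".split()

def parse_vmstat_row(line):
    """Map one vmstat data line to a {column: int} dict."""
    return dict(zip(FIELDS, map(int, line.split())))

# A row from vmstat_two_disks_bad2.txt: id=0 and wa=0, i.e. no idle or
# iowait time left -- the CPU is fully busy, split ~90/10 user/system.
row = parse_vmstat_row("8 2 0 8200 64260 117276 0 0 0 15920 361 510 89 11 0 0")
assert row["us"] + row["sy"] + row["id"] + row["wa"] == 100
print(row["us"], row["sy"])  # 89 11
```

Contrast with vmstat_one_disk.txt, where the same columns show ~0% user/system and ~98% iowait: the pathological case has converted what should be I/O wait into user and system CPU time.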
Thread overview: 26+ messages
2007-08-03 16:03 high system cpu load during intense disk i/o Dimitrios Apostolou
2007-08-05 16:03 ` Dimitrios Apostolou
2007-08-05 17:58 ` Rafał Bilski
2007-08-05 18:42 ` Dimitrios Apostolou
2007-08-05 20:08 ` Rafał Bilski
2007-08-06 16:14 ` Rafał Bilski
2007-08-06 19:18 ` Dimitrios Apostolou
2007-08-06 19:48 ` Alan Cox
2007-08-07 0:40 ` Dimitrios Apostolou
2007-08-07 0:37 ` Alan Cox
2007-08-07 13:15 ` Dimitrios Apostolou
2007-08-06 22:12 ` Rafał Bilski
2007-08-07 0:49 ` Dimitrios Apostolou
2007-08-07 9:03 ` Rafał Bilski
2007-08-07 9:43 ` Dimitrios Apostolou
2007-08-06 1:28 ` Andrew Morton
2007-08-06 14:20 ` Dimitrios Apostolou
2007-08-06 17:33 ` Andrew Morton
2007-08-06 19:27 ` Dimitrios Apostolou
2007-08-06 20:04 ` Dimitrios Apostolou
2007-08-06 16:09 ` Dimitrios Apostolou
2007-08-07 14:50 ` Dimitrios Apostolou
2007-08-08 19:08 ` Rafał Bilski
2007-08-09 8:17 ` Dimitrios Apostolou
2007-08-10 7:06 ` Rafał Bilski
2007-08-17 23:19 ` Dimitrios Apostolou