LKML Archive on lore.kernel.org
From: "Ray Lee" <ray-lk@madrabbit.org>
To: "Nick Piggin" <nickpiggin@yahoo.com.au>
Cc: "Eric Dumazet" <dada1@cosmosbay.com>,
	"David Miller" <davem@davemloft.net>,
	dmantipov@yandex.ru, linux-kernel@vger.kernel.org
Subject: Re: Are Linux pipes slower than the FreeBSD ones ?
Date: Wed, 5 Mar 2008 07:55:53 -0800	[thread overview]
Message-ID: <2c0942db0803050755u7e17118h923328fb79ee206b@mail.gmail.com> (raw)
In-Reply-To: <200803060238.39484.nickpiggin@yahoo.com.au>


On Wed, Mar 5, 2008 at 7:38 AM, Nick Piggin <nickpiggin@yahoo.com.au> wrote:
>
> On Thursday 06 March 2008 01:55, Eric Dumazet wrote:
>  > Nick Piggin a écrit :
>  > > On Wednesday 05 March 2008 20:47, Eric Dumazet wrote:
>  > >> David Miller a écrit :
>  > >>> From: Antipov Dmitry <dmantipov@yandex.ru>
>  > >>> Date: Wed, 05 Mar 2008 10:46:57 +0300
>  > >>>
>  > >>>> Despite of this obvious fact, recently I've tried to compare pipe
>  > >>>> performance on Linux and FreeBSD systems. Unfortunately, Linux
>  > >>>> results are poor - ~2x slower than FreeBSD. The detailed description
>  > >>>> of the test case, preparation, environment and results are located
>  > >>>> at http://213.148.29.37/PipeBench, and everyone are pleased to look
>  > >>>> at, reproduce, criticize, etc.
>  > >>>
>  > >>> FreeBSD does page flipping into the pipe receiver, so rerun your test
>  > >>> case but have either the sender or the receiver make changes to
>  > >>> their memory buffer in between the read/write calls.
>  > >>>
>  > >>> FreeBSD's scheme is only good for benchmarks, rather then real life.
>  > >>
>  > >> page flipping might explain differences for big transferts, but note the
>  > >> difference with small buffers (64, 128, 256, 512 bytes)
>  > >>
>  > >> I tried the 'pipe' prog on a fresh linux-2.6.24.2, on a dual Xeon 5120
>  > >> machine, and we can notice that four cpus are used (but only two threads
>  > >> are running on this benchmark)
>  > >
>  > > One thing to try is pinning both processes on the same CPU. This
>  > > may be what the FreeBSD scheduler is preferring to do, and it ends
>  > > up being really a tradeoff that helps some workloads and hurts
>  > > others. With a very unscientific test with an old kernel, the
>  > > pipe.c test gets anywhere from about 1.5 to 3 times faster when
>  > > running it as taskset 1 ./pipe
>  > >
>  > >> # opreport -l /boot/vmlinux-2.6.24.2 |head -n 30
>  > >> CPU: Core 2, speed 1866.8 MHz (estimated)
>  > >> Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a
>  > >> unit mask of 0x00 (Unhalted core cycles) count 100000
>  > >> samples  %        symbol name
>  > >> 52137     9.3521  kunmap_atomic
>  > >
>  > > I wonder if FreeBSD doesn't allocate their pipe buffers from kernel
>  > > addressable memory. We could do this to eliminate the cost completely
>  > > on highmem systems (whether it is a good idea I don't know, normally
>  > > you'd actually do a bit of work between reading or writing from a
>  > > pipe...)
>  > >
>  > >> 50983     9.1451  mwait_idle_with_hints
>  > >> 50448     9.0492  system_call
>  > >> 49727     8.9198  task_rq_lock
>  > >> 24531     4.4003  pipe_read
>  > >> 19820     3.5552  pipe_write
>  > >> 16176     2.9016  dnotify_parent
>  > >
>  > > Just say no to dnotify.
>  > >
>  > >> 15455     2.7723  file_update_time
>  > >
>  > > Dumb question: anyone know why pipe.c calls this?
>  >
>  > Because pipe writer calls write() syscall -> file_update_time() in kernel
>  > while pipe reader calls read() syscall -> touch_atime() in kernel
>
>  Yeah, but why does the pipe inode need to have its times updated?
>  I guess there is some reason... hopefully not C&P related.
In principle so that the reader or writer can find out the last time
the other end did any processing of the pipe. And yeah, for POSIX
compliance: "Upon successful completion, pipe() will mark for update
the st_atime, st_ctime and st_mtime fields of the pipe." But it'd be
nice if there were a way to avoid touching it more than once a second
(note the 'will mark for update' language). Or if the pipe is a
physical FIFO on a noatime filesystem?
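
(For concreteness, here is a rough sketch of what Nick's "taskset 1
./pipe" suggestion amounts to in code: pin both ends of the pipe to
CPU 0 with sched_setaffinity(2) and ping-pong a small buffer. This is
not the PipeBench source; the 256-byte buffer and the iteration count
are made up for illustration only.)

    /* Hypothetical sketch: pin parent and child to CPU 0, then
     * ping-pong a small buffer over a pipe, roughly what running the
     * benchmark under "taskset 1" achieves. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void pin_to_cpu0(void)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(0, &set);
            if (sched_setaffinity(0, sizeof(set), &set) < 0) {
                    perror("sched_setaffinity");
                    exit(1);
            }
    }

    int main(void)
    {
            char buf[256];          /* one of the "small buffer" sizes */
            int ping[2], pong[2];
            long i, iters = 100000; /* illustrative iteration count */

            if (pipe(ping) < 0 || pipe(pong) < 0)
                    exit(1);

            if (fork() == 0) {      /* child: echo everything back */
                    pin_to_cpu0();
                    for (i = 0; i < iters; i++) {
                            if (read(ping[0], buf, sizeof(buf)) != sizeof(buf))
                                    exit(1);
                            if (write(pong[1], buf, sizeof(buf)) != sizeof(buf))
                                    exit(1);
                    }
                    exit(0);
            }

            pin_to_cpu0();          /* parent shares CPU 0 with the child */
            memset(buf, 'x', sizeof(buf));
            for (i = 0; i < iters; i++) {
                    if (write(ping[1], buf, sizeof(buf)) != sizeof(buf))
                            exit(1);
                    if (read(pong[0], buf, sizeof(buf)) != sizeof(buf))
                            exit(1);
            }
            return 0;
    }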
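
(And to make the timestamp point concrete: the POSIX-mandated updates
are observable from userspace with fstat(2) on the pipe descriptors,
which is exactly the work pipe_write()/pipe_read() do through
file_update_time() and touch_atime(). A throwaway sketch, nothing
more:)

    /* Illustrative only: write() marks st_mtime/st_ctime of the pipe
     * inode for update, read() marks st_atime. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
            struct stat st;
            char c = 'x';
            int fd[2];

            if (pipe(fd) < 0)
                    return 1;

            fstat(fd[1], &st);
            printf("mtime before write: %ld\n", (long)st.st_mtime);

            sleep(2);               /* let at least a second pass */
            write(fd[1], &c, 1);

            fstat(fd[1], &st);
            printf("mtime after write:  %ld\n", (long)st.st_mtime);

            read(fd[0], &c, 1);
            fstat(fd[0], &st);
            printf("atime after read:   %ld\n", (long)st.st_atime);
            return 0;
    }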


Thread overview: 12+ messages
2008-03-05  7:46 Antipov Dmitry
2008-03-05  8:00 ` David Miller
2008-03-05  9:47   ` Eric Dumazet
2008-03-05 10:19     ` Andi Kleen
2008-03-05 12:12     ` David Newall
2008-03-05 14:20     ` Nick Piggin
2008-03-05 14:55       ` Eric Dumazet
2008-03-05 15:38         ` Nick Piggin
2008-03-05 15:55           ` Ray Lee [this message]
2008-03-05 16:02             ` Nick Piggin
2008-03-06 12:11       ` Dmitry Antipov
2008-03-17 12:53         ` Nick Piggin
