LKML Archive on lore.kernel.org
From: Jeremy Higdon <jeremy@sgi.com>
To: David Chinner <dgc@sgi.com>
Cc: Jens Axboe <jens.axboe@oracle.com>, Nick Piggin <npiggin@suse.de>,
	linux-kernel@vger.kernel.org, Alan.Brunelle@hp.com,
	arjan@linux.intel.com
Subject: Re: IO queuing and complete affinity with threads (was Re: [PATCH 0/8] IO queuing and complete affinity)
Date: Tue, 12 Feb 2008 00:28:39 -0800	[thread overview]
Message-ID: <20080212082839.GA216917@sgi.com> (raw)
In-Reply-To: <20080211052211.GS155407@sgi.com>

On Mon, Feb 11, 2008 at 04:22:11PM +1100, David Chinner wrote:
> 
> What I think Nick is referring to is the comments I made that at a
> higher layer (e.g. filesystems) migrating completions to the
> submitter CPU may be exactly the wrong thing to do. I don't recall
> making any comments on migrating submitters - I think others have
> already commented on that so I'll ignore that for the moment and
> try to explain why completion on submitter CPU /may/ be bad.
> 
> For example, in the case of XFS it is fine for data I/O but it is
> wrong for transaction I/O completion. We want to direct all
> transaction completions to as few CPUs as possible (one, ideally) so
> that all the completion processing happens on the same CPU, rather
> than bouncing global cachelines and locks between all the CPUs
> taking completion interrupts.

So what you want is all XFS processing (for a given filesystem,
presumably) on a limited set of cores (ideally 1) and all block
and SCSI processing (for a given device) on a similarly limited
set.

On Altix, that was far more important than having the interrupt
and issue CPU be close to the hardware -- at least with typical
LSI or QLogic controllers where there are only one or two MMIO
reads per command issued, and completions can be stacked up.

There is still an advantage to being close to the hardware, but
a much bigger advantage to not bouncing cachelines.

Maybe what you want is a multistage completion mechanism where
each stage can run on a different CPU, if thread context switches
are cheaper than bouncing data structures around....

jeremy


Thread overview: 43+ messages
2008-02-07  9:18 [PATCH 0/8] IO queuing and complete affinity Jens Axboe
2008-02-07  9:18 ` [PATCH 1/8] block: split softirq handling into blk-softirq.c Jens Axboe
2008-02-07  9:18 ` [PATCH 2/8] Add interface for queuing work on a specific CPU Jens Axboe
2008-02-07  9:45   ` Andrew Morton
2008-02-07  9:49     ` Jens Axboe
2008-02-07 17:44       ` Harvey Harrison
2008-02-11 10:51     ` Oleg Nesterov
2008-02-07  9:19 ` [PATCH 3/8] block: make kblockd_schedule_work() take the queue as parameter Jens Axboe
2008-02-07  9:19 ` [PATCH 4/8] x86: add support for remotely triggering the block softirq Jens Axboe
2008-02-07 10:07   ` Ingo Molnar
2008-02-07 10:17     ` Jens Axboe
2008-02-07 10:25       ` Ingo Molnar
2008-02-07 10:31         ` Jens Axboe
2008-02-07 10:38           ` Ingo Molnar
2008-02-07 14:18             ` Jens Axboe
2008-02-07 10:49           ` [patch] block layer: kmemcheck fixes Ingo Molnar
2008-02-07 17:42             ` Linus Torvalds
2008-02-07 17:55               ` Jens Axboe
2008-02-07 19:31               ` Ingo Molnar
2008-02-07 20:06                 ` Jens Axboe
2008-02-08  1:22               ` David Miller
2008-02-08  1:28                 ` Linus Torvalds
2008-02-08 15:09                 ` Arjan van de Ven
2008-02-08 22:44                   ` Nick Piggin
2008-02-08 22:56                     ` Arjan van de Ven
2008-02-08 23:58                       ` Nick Piggin
2008-02-08 11:38               ` Jens Axboe
2008-02-07  9:19 ` [PATCH 5/8] x86-64: add support for remotely triggering the block softirq Jens Axboe
2008-02-07  9:19 ` [PATCH 6/8] ia64: " Jens Axboe
2008-02-07  9:19 ` [PATCH 7/8] kernel: add generic softirq interface for triggering a remote softirq Jens Axboe
2008-02-07  9:19 ` [PATCH 8/8] block: add test code for testing CPU affinity Jens Axboe
2008-02-07 15:16 ` [PATCH 0/8] IO queuing and complete affinity Alan D. Brunelle
2008-02-07 18:25 ` IO queuing and complete affinity with threads (was Re: [PATCH 0/8] IO queuing and complete affinity) Jens Axboe
2008-02-07 20:40   ` Alan D. Brunelle
2008-02-08  7:38   ` Nick Piggin
2008-02-08  7:47     ` Jens Axboe
2008-02-08  7:53       ` Nick Piggin
2008-02-08  7:59         ` Jens Axboe
2008-02-08  8:12           ` Nick Piggin
2008-02-08  8:24             ` Jens Axboe
2008-02-08  8:33               ` Nick Piggin
2008-02-11  5:22           ` David Chinner
2008-02-12  8:28             ` Jeremy Higdon [this message]
