LKML Archive on lore.kernel.org
From: Arjan van de Ven <arjan@linux.intel.com>
To: Jens Axboe <jens.axboe@oracle.com>
Cc: Zach Brown <zach.brown@oracle.com>, David Chinner <dgc@sgi.com>,
	Nick Piggin <npiggin@suse.de>,
	"Siddha, Suresh B" <suresh.b.siddha@intel.com>,
	linux-kernel@vger.kernel.org, mingo@elte.hu, ak@suse.de,
	James.Bottomley@SteelEye.com, andrea@suse.de, clameter@sgi.com,
	akpm@linux-foundation.org, andrew.vasquez@qlogic.com,
	willy@linux.intel.com
Subject: Re: [rfc] direct IO submission and completion scalability issues
Date: Mon, 04 Feb 2008 13:45:59 -0800	[thread overview]
Message-ID: <47A78797.8060601@linux.intel.com> (raw)
In-Reply-To: <20080204201027.GJ15220@kernel.dk>

Jens Axboe wrote:
>> I was imagining the patch a little bit differently (per-cpu tasks, do a
>> wake_up from the driver instead of cpu nr testing up in blk, work
>> queues, whatever), but we know how to iron out these kinds of details ;).
> 
> per-cpu tasks/wq's might be better, it's a little awkward to jump
> through hoops
> 
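
(For concreteness, here is roughly what the per-cpu punting under discussion
could look like. This is purely an illustrative sketch, not from any posted
patch: completion_wq, punt_completion and bio_complete_fn are invented names,
and it assumes a queue_work_on()-style "run this work on that cpu" primitive.)

  /*
   * Hypothetical sketch: the driver hands each completion back to the
   * cpu that submitted the IO, via that cpu's workqueue, instead of
   * testing cpu numbers up in blk.
   */
  struct bio_completion {
          struct work_struct work;
          struct bio *bio;
          int error;
  };

  static struct workqueue_struct *completion_wq;  /* invented name */

  static void bio_complete_fn(struct work_struct *work)
  {
          struct bio_completion *bc =
                  container_of(work, struct bio_completion, work);

          bio_endio(bc->bio, bc->error);  /* runs on the submitting cpu */
          kfree(bc);
  }

  /* called from the driver's completion interrupt handler */
  static void punt_completion(struct bio *bio, int error, int submit_cpu)
  {
          struct bio_completion *bc = kmalloc(sizeof(*bc), GFP_ATOMIC);

          if (!bc) {
                  bio_endio(bio, error);  /* no memory: complete locally */
                  return;
          }
          INIT_WORK(&bc->work, bio_complete_fn);
          bc->bio = bio;
          bc->error = error;
          queue_work_on(submit_cpu, completion_wq, &bc->work);
  }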

One caveat btw: when multiqueue storage hardware becomes available for Linux,
we need to figure out how to deal with this preference thing. On such hardware,
honoring a "non-logical" preference (one that doesn't map onto the hardware's
own queues) would be quite expensive; it means you can't make the local submit
queues lockless, etc. So before we go down the road of having widespread APIs
for this stuff, we need to make sure we're not doing something that's going to
look really stupid 6 to 18 months down the road.
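
(To make the expense concrete: a submit queue that is strictly cpu-local needs
no lock at all, because exactly one cpu ever touches it; the moment another cpu
is allowed to insert into it to honor an arbitrary preference, that fast path
grows a spinlock. Again a made-up sketch, not code from this thread:)

  /*
   * Hypothetical: a submit queue owned outright by one cpu, lockless
   * only because no other cpu may ever insert into it.  Each list is
   * set up with INIT_LIST_HEAD() at init time.
   */
  struct submit_queue {
          struct list_head reqs;
  };

  static DEFINE_PER_CPU(struct submit_queue, submit_queues);

  static void submit_local(struct request *rq)
  {
          struct submit_queue *sq = &get_cpu_var(submit_queues);

          list_add_tail(&rq->queuelist, &sq->reqs);       /* no locking */
          put_cpu_var(submit_queues);
  }

  /*
   * Honoring a "non-logical" preference means some other cpu inserts
   * into the queue, which forces a spinlock into the struct and this
   * onto the fast path:
   *
   *      spin_lock(&per_cpu(submit_queues, pref_cpu).lock);
   *      list_add_tail(&rq->queuelist,
   *                    &per_cpu(submit_queues, pref_cpu).reqs);
   *      spin_unlock(&per_cpu(submit_queues, pref_cpu).lock);
   */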


Thread overview: 27+ messages
2007-07-28  1:21 Siddha, Suresh B
2007-07-30 18:20 ` Christoph Lameter
2007-07-30 20:35   ` Siddha, Suresh B
2007-07-31  4:19     ` Nick Piggin
2007-07-31 17:14       ` Siddha, Suresh B
2007-08-01  0:41         ` Nick Piggin
2007-08-01  0:55           ` Siddha, Suresh B
2007-08-01  1:24             ` Nick Piggin
2008-02-03  9:52 ` Nick Piggin
2008-02-03 10:53   ` Pekka Enberg
2008-02-03 11:58     ` Nick Piggin
2008-02-04  2:10   ` David Chinner
2008-02-04  4:14     ` Arjan van de Ven
2008-02-04  4:40       ` David Chinner
2008-02-04 10:09         ` Nick Piggin
2008-02-05  0:14           ` David Chinner
2008-02-08  7:50             ` Nick Piggin
2008-02-04 18:21     ` Zach Brown
2008-02-04 20:10       ` Jens Axboe
2008-02-04 21:45         ` Arjan van de Ven [this message]
2008-02-05  8:24           ` Jens Axboe
2008-02-04 10:12   ` Jens Axboe
2008-02-04 10:31     ` Nick Piggin
2008-02-04 10:33       ` Jens Axboe
2008-02-04 22:28         ` James Bottomley
2008-02-04 10:30   ` Andi Kleen
2008-02-04 21:47   ` Siddha, Suresh B

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=47A78797.8060601@linux.intel.com \
    --to=arjan@linux.intel.com \
    --cc=James.Bottomley@SteelEye.com \
    --cc=ak@suse.de \
    --cc=akpm@linux-foundation.org \
    --cc=andrea@suse.de \
    --cc=andrew.vasquez@qlogic.com \
    --cc=clameter@sgi.com \
    --cc=dgc@sgi.com \
    --cc=jens.axboe@oracle.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mingo@elte.hu \
    --cc=npiggin@suse.de \
    --cc=suresh.b.siddha@intel.com \
    --cc=willy@linux.intel.com \
    --cc=zach.brown@oracle.com \
    --subject='Re: [rfc] direct IO submission and completion scalability issues' \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
