LKML Archive on lore.kernel.org
* [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Paul Clements @ 2008-02-08 16:47 UTC
To: Andrew Morton; +Cc: linux-kernel, nbd-general

[-- Attachment #1: Type: text/plain, Size: 152 bytes --]

There have been numerous reports of problems with nbd and cfq. Deadline
gives better performance for nbd, anyway, so let's use it by default.

--
Paul

[-- Attachment #2: nbd_default_to_deadline.diff --]
[-- Type: text/x-patch, Size: 525 bytes --]

There have been numerous reports of problems with nbd and cfq. Deadline
gives better performance for nbd, anyway, so let's use it by default.

Signed-Off-By: Paul Clements <paul.clements@steeleye.com>

--- ./drivers/block/nbd.c.max_nbd_killed	2008-02-07 16:46:24.000000000 -0500
+++ ./drivers/block/nbd.c	2008-02-08 11:38:47.000000000 -0500
@@ -667,6 +667,7 @@ static int __init nbd_init(void)
 			put_disk(disk);
 			goto out;
 		}
+		elevator_init(disk->queue, "deadline");
 	}
 
 	if (register_blkdev(NBD_MAJOR, "nbd")) {
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Randy Dunlap @ 2008-02-08 17:33 UTC
To: Paul Clements; +Cc: Andrew Morton, linux-kernel, nbd-general

On Fri, 08 Feb 2008 11:47:42 -0500 Paul Clements wrote:

> There have been numerous reports of problems with nbd and cfq. Deadline
> gives better performance for nbd, anyway, so let's use it by default.

so what happens with this patch on cfq-only or as-only kernels?

---
~Randy
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Andrew Morton @ 2008-02-08 18:11 UTC
To: Randy Dunlap; +Cc: Paul Clements, linux-kernel, nbd-general, Jens Axboe

On Fri, 8 Feb 2008 09:33:41 -0800 Randy Dunlap <randy.dunlap@oracle.com> wrote:

> On Fri, 08 Feb 2008 11:47:42 -0500 Paul Clements wrote:
>
> > There have been numerous reports of problems with nbd and cfq. Deadline
> > gives better performance for nbd, anyway, so let's use it by default.

Please define "problems".  If it's just "slowness" then we can live with
that, but I'd hope that Jens is aware and that it's understood.

If it's "hangs" or "oopses" then we panic.

> so what happens with this patch on cfq-only or as-only kernels?

I assume the elevator_init() call fails and the default elevator continues
to be used.  Perhaps an informative printk is needed.
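A minimal sketch of the kind of informative fallback message being suggested
here; the message text and the KERN_INFO level are illustrative only, not from
any posted patch, and the elevator bookkeeping (which later versions of the
patch rework) is omitted:

	/* Sketch only: keep the compiled-in default and say so if the
	 * "deadline" elevator is not available, instead of staying silent. */
	if (elevator_init(disk->queue, "deadline") != 0)
		printk(KERN_INFO "nbd: deadline I/O scheduler not available, "
		       "keeping the default elevator\n");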
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Paul Clements @ 2008-02-08 18:41 UTC
To: Andrew Morton; +Cc: Randy Dunlap, linux-kernel, nbd-general, Jens Axboe

Andrew Morton wrote:
> On Fri, 8 Feb 2008 09:33:41 -0800 Randy Dunlap <randy.dunlap@oracle.com> wrote:
>
>> On Fri, 08 Feb 2008 11:47:42 -0500 Paul Clements wrote:
>>
>>> There have been numerous reports of problems with nbd and cfq. Deadline
>>> gives better performance for nbd, anyway, so let's use it by default.
>
> Please define "problems".  If it's just "slowness" then we can live with
> that, but I'd hope that Jens is aware and that it's understood.
>
> If it's "hangs" or "oopses" then we panic.

The two problems I have experienced (which may already be fixed):

1) nbd hangs with cfq on RHEL 5 (2.6.18) -- this may well have been fixed

There's a similar debian bug that has been filed as well:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=447638

2) nbd performs about 10% better (the last time I tested) with deadline
vs. cfq (the overhead of cfq doesn't provide much advantage to nbd [not
being a real disk], and you end up going through the I/O scheduler on
the nbd server anyway, so it makes sense that deadline is better with nbd)

There have been posts to the nbd-general mailing list about problems with
cfq and nbd also.

--
Paul
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Jens Axboe @ 2008-02-08 20:45 UTC
To: Paul Clements; +Cc: Andrew Morton, Randy Dunlap, linux-kernel, nbd-general

On Fri, Feb 08 2008, Paul Clements wrote:
> Andrew Morton wrote:
> > On Fri, 8 Feb 2008 09:33:41 -0800 Randy Dunlap <randy.dunlap@oracle.com>
> > wrote:
> >
> > > On Fri, 08 Feb 2008 11:47:42 -0500 Paul Clements wrote:
> > >
> > > > There have been numerous reports of problems with nbd and cfq. Deadline
> > > > gives better performance for nbd, anyway, so let's use it by default.
> >
> > Please define "problems".  If it's just "slowness" then we can live with
> > that, but I'd hope that Jens is aware and that it's understood.
> >
> > If it's "hangs" or "oopses" then we panic.
>
> The two problems I have experienced (which may already be fixed):
>
> 1) nbd hangs with cfq on RHEL 5 (2.6.18) -- this may well have been fixed
>
> There's a similar debian bug that has been filed as well:
>
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=447638
>
> 2) nbd performs about 10% better (the last time I tested) with deadline
> vs. cfq (the overhead of cfq doesn't provide much advantage to nbd [not
> being a real disk], and you end up going through the I/O scheduler on
> the nbd server anyway, so it makes sense that deadline is better with nbd)
>
> There have been posts to the nbd-general mailing list about problems with
> cfq and nbd also.

I'm fine with that; it's one of those things we'll do automatically once
we have some sort of disk profile system set up. Devices without seek
penalties should not use AS or CFQ.

Asking for a non-existent elevator is not an issue, but it may trigger
both printks and a switch to another elevator. So if you ask for
"deadline" and it's modular, you'll get cfq again if that's the default.

Your patch looks bad, though: you forgot to exit the old elevator, and
you don't check the return value of elevator_init().

All in all, your patch definitely needs more work before it can be
included.

--
Jens Axboe
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Jens Axboe @ 2008-02-08 20:47 UTC
To: Paul Clements; +Cc: Andrew Morton, Randy Dunlap, linux-kernel

On Fri, Feb 08 2008, Jens Axboe wrote:
> On Fri, Feb 08 2008, Paul Clements wrote:
> > Andrew Morton wrote:
> > > On Fri, 8 Feb 2008 09:33:41 -0800 Randy Dunlap <randy.dunlap@oracle.com>
> > > wrote:
> > >
> > > > On Fri, 08 Feb 2008 11:47:42 -0500 Paul Clements wrote:
> > > >
> > > > > There have been numerous reports of problems with nbd and cfq. Deadline
> > > > > gives better performance for nbd, anyway, so let's use it by default.
> > >
> > > Please define "problems".  If it's just "slowness" then we can live with
> > > that, but I'd hope that Jens is aware and that it's understood.
> > >
> > > If it's "hangs" or "oopses" then we panic.
> >
> > The two problems I have experienced (which may already be fixed):
> >
> > 1) nbd hangs with cfq on RHEL 5 (2.6.18) -- this may well have been fixed
> >
> > There's a similar debian bug that has been filed as well:
> >
> > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=447638
> >
> > 2) nbd performs about 10% better (the last time I tested) with deadline
> > vs. cfq (the overhead of cfq doesn't provide much advantage to nbd [not
> > being a real disk], and you end up going through the I/O scheduler on
> > the nbd server anyway, so it makes sense that deadline is better with nbd)
> >
> > There have been posts to the nbd-general mailing list about problems with
> > cfq and nbd also.
>
> I'm fine with that; it's one of those things we'll do automatically once
> we have some sort of disk profile system set up. Devices without seek
> penalties should not use AS or CFQ.
>
> Asking for a non-existent elevator is not an issue, but it may trigger
> both printks and a switch to another elevator. So if you ask for
> "deadline" and it's modular, you'll get cfq again if that's the default.
>
> Your patch looks bad, though: you forgot to exit the old elevator, and
> you don't check the return value of elevator_init().
>
> All in all, your patch definitely needs more work before it can be
> included.

Argh, and please don't CC closed lists on lkml, thanks! It's hugely
annoying to everyone who replies in the thread.

--
Jens Axboe
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Paul Clements @ 2008-02-08 21:23 UTC
To: Jens Axboe; +Cc: Andrew Morton, Randy Dunlap, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2019 bytes --]

Jens Axboe wrote:
> On Fri, Feb 08 2008, Jens Axboe wrote:
>> On Fri, Feb 08 2008, Paul Clements wrote:
>>> Andrew Morton wrote:
>>>> On Fri, 8 Feb 2008 09:33:41 -0800 Randy Dunlap <randy.dunlap@oracle.com>
>>>> wrote:
>>>>
>>>>> On Fri, 08 Feb 2008 11:47:42 -0500 Paul Clements wrote:
>>>>>
>>>>>> There have been numerous reports of problems with nbd and cfq. Deadline
>>>>>> gives better performance for nbd, anyway, so let's use it by default.
>>>> Please define "problems".  If it's just "slowness" then we can live with
>>>> that, but I'd hope that Jens is aware and that it's understood.
>>>>
>>>> If it's "hangs" or "oopses" then we panic.
>>> The two problems I have experienced (which may already be fixed):
>>>
>>> 1) nbd hangs with cfq on RHEL 5 (2.6.18) -- this may well have been fixed
>>>
>>> There's a similar debian bug that has been filed as well:
>>>
>>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=447638
>>>
>>> 2) nbd performs about 10% better (the last time I tested) with deadline
>>> vs. cfq (the overhead of cfq doesn't provide much advantage to nbd [not
>>> being a real disk], and you end up going through the I/O scheduler on
>>> the nbd server anyway, so it makes sense that deadline is better with nbd)
>>>
>>> There have been posts to the nbd-general mailing list about problems with
>>> cfq and nbd also.
>> I'm fine with that; it's one of those things we'll do automatically once
>> we have some sort of disk profile system set up. Devices without seek
>> penalties should not use AS or CFQ.
>>
>> Asking for a non-existent elevator is not an issue, but it may trigger
>> both printks and a switch to another elevator. So if you ask for
>> "deadline" and it's modular, you'll get cfq again if that's the default.
>>
>> Your patch looks bad, though: you forgot to exit the old elevator, and
>> you don't check the return value of elevator_init().
>>
>> All in all, your patch definitely needs more work before it can be
>> included.

Thanks Jens. This one should be better.

--
Paul

[-- Attachment #2: nbd_default_to_deadline_then_noop.diff --]
[-- Type: text/x-patch, Size: 1218 bytes --]

NBD doesn't work well with the CFQ (or AS) schedulers, so let's default to
something else.

The two problems I have experienced with nbd and cfq are:

1) nbd hangs with cfq on RHEL 5 (2.6.18) -- this may well have been fixed

There's a similar debian bug that has been filed as well:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=447638

2) nbd performs about 10% better (the last time I tested) with deadline
vs. cfq (the overhead of cfq doesn't provide much advantage to nbd [not
being a real disk], and you end up going through the I/O scheduler on
the nbd server anyway, so it makes sense that deadline is better with nbd)

There have been posts to the nbd-general mailing list about problems with
cfq and nbd also.

Signed-Off-By: Paul Clements <paul.clements@steeleye.com>

--- ./drivers/block/nbd.c.max_nbd_killed	2008-02-07 16:46:24.000000000 -0500
+++ ./drivers/block/nbd.c	2008-02-08 16:13:01.000000000 -0500
@@ -667,6 +667,12 @@ static int __init nbd_init(void)
 			put_disk(disk);
 			goto out;
 		}
+		if (elevator_init(disk->queue, "deadline") != 0) {
+			if (elevator_init(disk->queue, "noop") != 0) {
+				put_disk(disk);
+				goto out;
+			}
+		}
 	}
 
 	if (register_blkdev(NBD_MAJOR, "nbd")) {
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Andrew Morton @ 2008-02-08 22:02 UTC
To: Paul Clements; +Cc: jens.axboe, randy.dunlap, linux-kernel

On Fri, 08 Feb 2008 16:23:01 -0500
Paul Clements <paul.clements@steeleye.com> wrote:

> --- ./drivers/block/nbd.c.max_nbd_killed	2008-02-07 16:46:24.000000000 -0500
> +++ ./drivers/block/nbd.c	2008-02-08 16:13:01.000000000 -0500
> @@ -667,6 +667,12 @@ static int __init nbd_init(void)
>  			put_disk(disk);
>  			goto out;
>  		}
> +		if (elevator_init(disk->queue, "deadline") != 0) {
> +			if (elevator_init(disk->queue, "noop") != 0) {
> +				put_disk(disk);
> +				goto out;
> +			}
> +		}
>  	}

- if the user doesn't have deadline or noop configured, NBD will now fail.
  That's a non-backward-compatible change.

- when it fails, it will fail silently.  Puzzled and angry users.

- when it fails, it will inappropriately return -ENOMEM.
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Paul Clements @ 2008-02-09 13:30 UTC
To: Andrew Morton; +Cc: jens.axboe, randy.dunlap, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 50 bytes --]

Take 3... this should address all the issues.

[-- Attachment #2: nbd_default_to_deadline_then_noop.diff --]
[-- Type: text/x-patch, Size: 1442 bytes --]

NBD doesn't work well with the CFQ (or AS) schedulers, so let's default to
something else.

The two problems I have experienced with nbd and cfq are:

1) nbd hangs with cfq on RHEL 5 (2.6.18) -- this may well have been fixed

There's a similar debian bug that has been filed as well:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=447638

There have been posts to the nbd-general mailing list about problems with
cfq and nbd also.

2) nbd performs about 10% better (the last time I tested) with deadline
vs. cfq (the overhead of cfq doesn't provide much advantage to nbd [not
being a real disk], and you end up going through the I/O scheduler on
the nbd server anyway, so it makes sense that deadline is better with nbd)

Signed-Off-By: Paul Clements <paul.clements@steeleye.com>

--- ./drivers/block/nbd.c.max_nbd_killed	2008-02-07 16:46:24.000000000 -0500
+++ ./drivers/block/nbd.c	2008-02-09 08:14:18.000000000 -0500
@@ -654,6 +654,7 @@ static int __init nbd_init(void)
 
 	for (i = 0; i < nbds_max; i++) {
 		struct gendisk *disk = alloc_disk(1);
+		elevator_t *old_e;
 		if (!disk)
 			goto out;
 		nbd_dev[i].disk = disk;
@@ -667,6 +668,11 @@ static int __init nbd_init(void)
 			put_disk(disk);
 			goto out;
 		}
+		old_e = disk->queue->elevator;
+		if (elevator_init(disk->queue, "deadline") == 0 ||
+		    elevator_init(disk->queue, "noop") == 0) {
+			elevator_exit(old_e);
+		}
 	}
 
 	if (register_blkdev(NBD_MAJOR, "nbd")) {
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Andrew Morton @ 2008-02-12 23:16 UTC
To: Paul Clements; +Cc: jens.axboe, randy.dunlap, linux-kernel

On Sat, 09 Feb 2008 08:30:40 -0500
Paul Clements <paul.clements@steeleye.com> wrote:

> +		old_e = disk->queue->elevator;
> +		if (elevator_init(disk->queue, "deadline") == 0 ||
> +		    elevator_init(disk->queue, "noop") == 0) {
> +			elevator_exit(old_e);
> +		}
> 	}

afaict elevator_init() will not trigger a request_module().  And you really
do want to trigger the request_module() here.  Perhaps the block layer
should provide a means of doing so?
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Jens Axboe @ 2008-02-18 18:16 UTC
To: Andrew Morton; +Cc: Paul Clements, randy.dunlap, linux-kernel

On Tue, Feb 12 2008, Andrew Morton wrote:
> On Sat, 09 Feb 2008 08:30:40 -0500
> Paul Clements <paul.clements@steeleye.com> wrote:
>
> > +		old_e = disk->queue->elevator;
> > +		if (elevator_init(disk->queue, "deadline") == 0 ||
> > +		    elevator_init(disk->queue, "noop") == 0) {
> > +			elevator_exit(old_e);
> > +		}
> > 	}
>
> afaict elevator_init() will not trigger a request_module().  And you really
> do want to trigger the request_module() here.  Perhaps the block layer
> should provide a means of doing so?

Good point, I think elevator_get() should do that automatically. Does
this look sane?

diff --git a/block/elevator.c b/block/elevator.c
index bafbae0..88318c3 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -134,6 +134,21 @@ static struct elevator_type *elevator_get(const char *name)
 	spin_lock(&elv_list_lock);
 
 	e = elevator_find(name);
+	if (!e) {
+		char elv[ELV_NAME_MAX + strlen("-iosched")];
+
+		spin_unlock(&elv_list_lock);
+
+		if (!strcmp(name, "anticipatory"))
+			sprintf(elv, "as-iosched");
+		else
+			sprintf(elv, "%s-iosched", name);
+
+		request_module(elv);
+		spin_lock(&elv_list_lock);
+		e = elevator_find(name);
+	}
+
 	if (e && !try_module_get(e->elevator_owner))
 		e = NULL;

--
Jens Axboe
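For comparison, the same lookup-then-load idea can be written without the
local name buffer, assuming request_module() accepts a printf-style format
string (it is declared that way in the kernel); this is only a sketch of an
alternative, not the patch as posted:

	e = elevator_find(name);
	if (!e) {
		spin_unlock(&elv_list_lock);

		/* "anticipatory" registers as as-iosched; everything else
		 * follows the "<name>-iosched" module naming convention. */
		if (!strcmp(name, "anticipatory"))
			request_module("as-iosched");
		else
			request_module("%s-iosched", name);

		spin_lock(&elv_list_lock);
		e = elevator_find(name);
	}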
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Andrew Morton @ 2008-02-18 23:50 UTC
To: Jens Axboe; +Cc: Paul Clements, randy.dunlap, linux-kernel

On Mon, 18 Feb 2008 19:16:30 +0100 Jens Axboe <jens.axboe@oracle.com> wrote:

> On Tue, Feb 12 2008, Andrew Morton wrote:
> > On Sat, 09 Feb 2008 08:30:40 -0500
> > Paul Clements <paul.clements@steeleye.com> wrote:
> >
> > > +		old_e = disk->queue->elevator;
> > > +		if (elevator_init(disk->queue, "deadline") == 0 ||
> > > +		    elevator_init(disk->queue, "noop") == 0) {
> > > +			elevator_exit(old_e);
> > > +		}
> > > 	}
> >
> > afaict elevator_init() will not trigger a request_module().  And you really
> > do want to trigger the request_module() here.  Perhaps the block layer
> > should provide a means of doing so?
>
> Good point, I think elevator_get() should do that automatically. Does
> this look sane?
>
> diff --git a/block/elevator.c b/block/elevator.c
> index bafbae0..88318c3 100644
> --- a/block/elevator.c
> +++ b/block/elevator.c
> @@ -134,6 +134,21 @@ static struct elevator_type *elevator_get(const char *name)
>  	spin_lock(&elv_list_lock);
>
>  	e = elevator_find(name);
> +	if (!e) {
> +		char elv[ELV_NAME_MAX + strlen("-iosched")];
> +
> +		spin_unlock(&elv_list_lock);
> +
> +		if (!strcmp(name, "anticipatory"))
> +			sprintf(elv, "as-iosched");
> +		else
> +			sprintf(elv, "%s-iosched", name);
> +
> +		request_module(elv);
> +		spin_lock(&elv_list_lock);
> +		e = elevator_find(name);
> +	}
> +
>  	if (e && !try_module_get(e->elevator_owner))
>  		e = NULL;

Looks nice and simple.  There might be some of the usual ordering problems
when this is called during boot, maybe is-initramfs-available-yet problems,
etc.  But it's unlikely to make things regress from where they are now.

Should we emit a warning if the desired elevator wasn't available?
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Jens Axboe @ 2008-02-19 9:19 UTC
To: Andrew Morton; +Cc: Paul Clements, randy.dunlap, linux-kernel

On Mon, Feb 18 2008, Andrew Morton wrote:
> On Mon, 18 Feb 2008 19:16:30 +0100 Jens Axboe <jens.axboe@oracle.com> wrote:
>
> > On Tue, Feb 12 2008, Andrew Morton wrote:
> > > On Sat, 09 Feb 2008 08:30:40 -0500
> > > Paul Clements <paul.clements@steeleye.com> wrote:
> > >
> > > > +		old_e = disk->queue->elevator;
> > > > +		if (elevator_init(disk->queue, "deadline") == 0 ||
> > > > +		    elevator_init(disk->queue, "noop") == 0) {
> > > > +			elevator_exit(old_e);
> > > > +		}
> > > > 	}
> > >
> > > afaict elevator_init() will not trigger a request_module().  And you really
> > > do want to trigger the request_module() here.  Perhaps the block layer
> > > should provide a means of doing so?
> >
> > Good point, I think elevator_get() should do that automatically. Does
> > this look sane?
> >
> > diff --git a/block/elevator.c b/block/elevator.c
> > index bafbae0..88318c3 100644
> > --- a/block/elevator.c
> > +++ b/block/elevator.c
> > @@ -134,6 +134,21 @@ static struct elevator_type *elevator_get(const char *name)
> >  	spin_lock(&elv_list_lock);
> >
> >  	e = elevator_find(name);
> > +	if (!e) {
> > +		char elv[ELV_NAME_MAX + strlen("-iosched")];
> > +
> > +		spin_unlock(&elv_list_lock);
> > +
> > +		if (!strcmp(name, "anticipatory"))
> > +			sprintf(elv, "as-iosched");
> > +		else
> > +			sprintf(elv, "%s-iosched", name);
> > +
> > +		request_module(elv);
> > +		spin_lock(&elv_list_lock);
> > +		e = elevator_find(name);
> > +	}
> > +
> >  	if (e && !try_module_get(e->elevator_owner))
> >  		e = NULL;
>
> Looks nice and simple.  There might be some of the usual ordering problems
> when this is called during boot, maybe is-initramfs-available-yet problems,
> etc.  But it's unlikely to make things regress from where they are now.

Aren't request_module() and below robust enough to handle that?

> Should we emit a warning if the desired elevator wasn't available?

Hmm, not sure. Either the request came from a driver, in which case it'll
be notified that we could not load that elevator. Or it'll come through
sysfs online switching, in which case we already print a warning.

--
Jens Axboe
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Jens Axboe @ 2008-02-19 9:24 UTC
To: Andrew Morton; +Cc: Paul Clements, randy.dunlap, linux-kernel

On Tue, Feb 19 2008, Jens Axboe wrote:
> On Mon, Feb 18 2008, Andrew Morton wrote:
> > On Mon, 18 Feb 2008 19:16:30 +0100 Jens Axboe <jens.axboe@oracle.com> wrote:
> >
> > > On Tue, Feb 12 2008, Andrew Morton wrote:
> > > > On Sat, 09 Feb 2008 08:30:40 -0500
> > > > Paul Clements <paul.clements@steeleye.com> wrote:
> > > >
> > > > > +		old_e = disk->queue->elevator;
> > > > > +		if (elevator_init(disk->queue, "deadline") == 0 ||
> > > > > +		    elevator_init(disk->queue, "noop") == 0) {
> > > > > +			elevator_exit(old_e);
> > > > > +		}
> > > > > 	}
> > > >
> > > > afaict elevator_init() will not trigger a request_module().  And you really
> > > > do want to trigger the request_module() here.  Perhaps the block layer
> > > > should provide a means of doing so?
> > >
> > > Good point, I think elevator_get() should do that automatically. Does
> > > this look sane?
> > >
> > > diff --git a/block/elevator.c b/block/elevator.c
> > > index bafbae0..88318c3 100644
> > > --- a/block/elevator.c
> > > +++ b/block/elevator.c
> > > @@ -134,6 +134,21 @@ static struct elevator_type *elevator_get(const char *name)
> > >  	spin_lock(&elv_list_lock);
> > >
> > >  	e = elevator_find(name);
> > > +	if (!e) {
> > > +		char elv[ELV_NAME_MAX + strlen("-iosched")];
> > > +
> > > +		spin_unlock(&elv_list_lock);
> > > +
> > > +		if (!strcmp(name, "anticipatory"))
> > > +			sprintf(elv, "as-iosched");
> > > +		else
> > > +			sprintf(elv, "%s-iosched", name);
> > > +
> > > +		request_module(elv);
> > > +		spin_lock(&elv_list_lock);
> > > +		e = elevator_find(name);
> > > +	}
> > > +
> > >  	if (e && !try_module_get(e->elevator_owner))
> > >  		e = NULL;
> >
> > Looks nice and simple.  There might be some of the usual ordering problems
> > when this is called during boot, maybe is-initramfs-available-yet problems,
> > etc.  But it's unlikely to make things regress from where they are now.
>
> Aren't request_module() and below robust enough to handle that?

BTW, I've verified that it works as expected (at least after boot):

carl:/sys/block/sda/queue # cat scheduler
noop [cfq]
carl:/sys/block/sda/queue # echo anticipatory > scheduler
carl:/sys/block/sda/queue # dmesg
[...]
io scheduler anticipatory registered
carl:/sys/block/sda/queue # cat scheduler
noop cfq [anticipatory]

So it properly loads as-iosched instead of failing, like it would have
done before, which required the user to do a modprobe as-iosched first.

carl:/sys/block/sda/queue # echo foobar > scheduler
-bash: echo: write error: Invalid argument
carl:/sys/block/sda/queue # dmesg
[...]
elevator: type foobar not found

--
Jens Axboe
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Andrew Morton @ 2008-02-19 10:02 UTC
To: Jens Axboe; +Cc: Paul Clements, randy.dunlap, linux-kernel

On Tue, 19 Feb 2008 10:24:28 +0100 Jens Axboe <jens.axboe@oracle.com> wrote:

> On Tue, Feb 19 2008, Jens Axboe wrote:
> > On Mon, Feb 18 2008, Andrew Morton wrote:
> > > > +
> > > > 	if (e && !try_module_get(e->elevator_owner))
> > > > 		e = NULL;
> > >
> > > Looks nice and simple.  There might be some of the usual ordering problems
> > > when this is called during boot, maybe is-initramfs-available-yet problems,
> > > etc.  But it's unlikely to make things regress from where they are now.
> >
> > Aren't request_module() and below robust enough to handle that?
>
> BTW, I've verified that it works as expected (at least after boot):
>
> carl:/sys/block/sda/queue # cat scheduler
> noop [cfq]
> carl:/sys/block/sda/queue # echo anticipatory > scheduler
> carl:/sys/block/sda/queue # dmesg
> [...]
> io scheduler anticipatory registered
> carl:/sys/block/sda/queue # cat scheduler
> noop cfq [anticipatory]
>
> So it properly loads as-iosched instead of failing, like it would have
> done before, which required the user to do a modprobe as-iosched first.
>
> carl:/sys/block/sda/queue # echo foobar > scheduler
> -bash: echo: write error: Invalid argument
> carl:/sys/block/sda/queue # dmesg
> [...]
> elevator: type foobar not found

Looks promising - let's run with it.  If there _are_ startup ordering
problems then we won't be any worse off than we are now.

OTOH, the system must have _some_ io scheduler installed when talking to
disks, so perhaps there won't be any such problems at all.
* Re: [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Jens Axboe @ 2008-02-19 10:05 UTC
To: Andrew Morton; +Cc: Paul Clements, randy.dunlap, linux-kernel

On Tue, Feb 19 2008, Andrew Morton wrote:
> On Tue, 19 Feb 2008 10:24:28 +0100 Jens Axboe <jens.axboe@oracle.com> wrote:
>
> > On Tue, Feb 19 2008, Jens Axboe wrote:
> > > On Mon, Feb 18 2008, Andrew Morton wrote:
> > > > > +
> > > > > 	if (e && !try_module_get(e->elevator_owner))
> > > > > 		e = NULL;
> > > >
> > > > Looks nice and simple.  There might be some of the usual ordering problems
> > > > when this is called during boot, maybe is-initramfs-available-yet problems,
> > > > etc.  But it's unlikely to make things regress from where they are now.
> > >
> > > Aren't request_module() and below robust enough to handle that?
> >
> > BTW, I've verified that it works as expected (at least after boot):
> >
> > carl:/sys/block/sda/queue # cat scheduler
> > noop [cfq]
> > carl:/sys/block/sda/queue # echo anticipatory > scheduler
> > carl:/sys/block/sda/queue # dmesg
> > [...]
> > io scheduler anticipatory registered
> > carl:/sys/block/sda/queue # cat scheduler
> > noop cfq [anticipatory]
> >
> > So it properly loads as-iosched instead of failing, like it would have
> > done before, which required the user to do a modprobe as-iosched first.
> >
> > carl:/sys/block/sda/queue # echo foobar > scheduler
> > -bash: echo: write error: Invalid argument
> > carl:/sys/block/sda/queue # dmesg
> > [...]
> > elevator: type foobar not found
>
> Looks promising - let's run with it.  If there _are_ startup ordering
> problems then we won't be any worse off than we are now.

I've merged it for inclusion, since it tests fine here.

> OTOH, the system must have _some_ io scheduler installed when talking to
> disks, so perhaps there won't be any such problems at all.

We are guaranteed to always have noop available, since you cannot
de-select that.

--
Jens Axboe
* Re: [Nbd] [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler
From: Mike Snitzer @ 2008-02-08 22:45 UTC
To: Paul Clements
Cc: Andrew Morton, Randy Dunlap, nbd-general, linux-kernel, Jens Axboe

On Feb 8, 2008 1:41 PM, Paul Clements <paul.clements@steeleye.com> wrote:
> Andrew Morton wrote:
> > On Fri, 8 Feb 2008 09:33:41 -0800 Randy Dunlap <randy.dunlap@oracle.com> wrote:
> >
> >> On Fri, 08 Feb 2008 11:47:42 -0500 Paul Clements wrote:
> >>
> >>> There have been numerous reports of problems with nbd and cfq. Deadline
> >>> gives better performance for nbd, anyway, so let's use it by default.
> >
> > Please define "problems".  If it's just "slowness" then we can live with
> > that, but I'd hope that Jens is aware and that it's understood.
> >
> > If it's "hangs" or "oopses" then we panic.
>
> The two problems I have experienced (which may already be fixed):
>
> 1) nbd hangs with cfq on RHEL 5 (2.6.18) -- this may well have been fixed

It has been fixed in RHEL5 (but not yet released AFAIK):
https://bugzilla.redhat.com/show_bug.cgi?id=241540#c43

On a slightly related performance note: I'm seeing that for NBD devices
both max_hw_sectors_kb and max_sectors_kb are 127.  If I set it to 256
with the following patch, I see a 25% improvement in overall throughput:

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 5def9c5..ed63e2f 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -764,6 +764,7 @@ static int __init nbd_init(void)
 			put_disk(disk);
 			goto out;
 		}
+		blk_queue_max_sectors(disk->queue, 512);
 	}
 
 	if (register_blkdev(NBD_MAJOR, "nbd")) {

Any chance we can take steps to make NBD not be artificially slow?  Any
other recommendations for improving NBD throughput?

regards,
Mike
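A hedged sketch of how such a request-size limit could be made tunable rather
than hard-coded; the max_sectors module parameter shown here is hypothetical
(the nbd driver does not actually expose it) and it only reuses the
blk_queue_max_sectors() call from the patch above:

	/* Hypothetical tunable, not part of the posted patch.
	 * 512 sectors of 512 bytes = 256 KB per request. */
	static unsigned int max_sectors = 512;
	module_param(max_sectors, uint, 0444);
	MODULE_PARM_DESC(max_sectors, "maximum sectors per request");

	/* ... then, in the nbd_init() loop, after blk_init_queue() succeeds: */
	blk_queue_max_sectors(disk->queue, max_sectors);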