LKML Archive on lore.kernel.org
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Christoph Lameter <clameter@sgi.com>
Cc: linux-kernel@vger.kernel.org, linux-mm <linux-mm@kvack.org>,
akpm@linux-foundation.org
Subject: Re: SLUB: The unqueued Slab allocator
Date: Thu, 22 Feb 2007 09:34:34 +0100 [thread overview]
Message-ID: <1172133274.6374.12.camel@twins>
In-Reply-To: <Pine.LNX.4.64.0702212250271.30485@schroedinger.engr.sgi.com>
On Wed, 2007-02-21 at 23:00 -0800, Christoph Lameter wrote:
> +/*
> + * Lock order:
> + * 1. slab_lock(page)
> + * 2. slab->list_lock
> + *
That seems to contradict this:
> +/*
> + * Lock page and remove it from the partial list
> + *
> + * Must hold list_lock
> + */
> +static __always_inline int lock_and_del_slab(struct kmem_cache *s,
> + struct page *page)
> +{
> + if (slab_trylock(page)) {
> + list_del(&page->lru);
> + s->nr_partial--;
> + return 1;
> + }
> + return 0;
> +}
> +
> +/*
> + * Get a partial page, lock it and return it.
> + */
> +#ifdef CONFIG_NUMA
> +static struct page *get_partial(struct kmem_cache *s, gfp_t flags, int node)
> +{
> + struct page *page;
> + int searchnode = (node == -1) ? numa_node_id() : node;
> +
> + if (!s->nr_partial)
> + return NULL;
> +
> + spin_lock(&s->list_lock);
> + /*
> + * Search for slab on the right node
> + */
> + list_for_each_entry(page, &s->partial, lru)
> + if (likely(page_to_nid(page) == searchnode) &&
> + lock_and_del_slab(s, page))
> + goto out;
> +
> + if (likely(!(flags & __GFP_THISNODE))) {
> + /*
> + * We can fall back to any other node in order to
> + * reduce the size of the partial list.
> + */
> + list_for_each_entry(page, &s->partial, lru)
> + if (likely(lock_and_del_slab(s, page)))
> + goto out;
> + }
> +
> + /* Nothing found */
> + page = NULL;
> +out:
> + spin_unlock(&s->list_lock);
> + return page;
> +}
> +#else
> +static struct page *get_partial(struct kmem_cache *s, gfp_t flags, int node)
> +{
> + struct page *page;
> +
> + /*
> + * Racy check. If we mistakenly see no partial slabs then we
> + * just allocate an empty slab.
> + */
> + if (!s->nr_partial)
> + return NULL;
> +
> + spin_lock(&s->list_lock);
> + list_for_each_entry(page, &s->partial, lru)
> + if (likely(lock_and_del_slab(s, page)))
> + goto out;
> +
> + /* No slab or all slabs busy */
> + page = NULL;
> +out:
> + spin_unlock(&s->list_lock);
> + return page;
> +}
> +#endif
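
To spell out the inversion I mean: unless I'm misreading it, get_partial()
above nests the locks the other way around from the comment. A minimal
sketch of the successful path, with lock_and_del_slab() inlined and the
search loops and the __GFP_THISNODE fallback left out:

	spin_lock(&s->list_lock);	/* list_lock is taken first here...         */
	if (slab_trylock(page)) {	/* ...and slab_lock(page) is taken under it */
		list_del(&page->lru);
		s->nr_partial--;
	}
	spin_unlock(&s->list_lock);
	/* on success the page is returned with the slab lock still held */

Since it is only a trylock this probably cannot deadlock in practice, but
then the lock order comment should document list_lock as nesting outside
slab_lock(page), not the other way around.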
Thread overview: 19+ messages
2007-02-22 7:00 Christoph Lameter
2007-02-22 8:34 ` Peter Zijlstra [this message]
2007-02-22 15:25 ` Christoph Lameter
2007-02-22 8:58 ` David Miller
2007-02-22 15:26 ` Christoph Lameter
2007-02-22 10:49 ` Pekka Enberg
2007-02-22 15:15 ` Christoph Lameter
2007-02-22 17:54 ` Andi Kleen
2007-02-22 18:42 ` Christoph Lameter
2007-02-23 0:16 ` Andi Kleen
2007-02-23 4:55 ` Christoph Lameter
2007-02-24 5:28 ` KAMEZAWA Hiroyuki
2007-02-24 5:47 ` Christoph Lameter
2007-02-24 5:54 ` David Miller
2007-02-24 17:32 ` Christoph Lameter
2007-02-24 19:33 ` Jörn Engel
2007-02-25 0:14 ` Christoph Lameter
2007-02-25 12:23 ` Jörn Engel
2007-02-25 0:53 ` David Miller