LKML Archive on lore.kernel.org
From: Thierry Delisle <tdelisle@uwaterloo.ca>
To: <posk@posk.io>
Cc: <avagin@google.com>, <bsegall@google.com>, <jannh@google.com>,
	<linux-api@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<mingo@redhat.com>, <peterz@infradead.org>, <pjt@google.com>,
	<posk@google.com>, <tdelisle@uwaterloo.ca>, <tglx@linutronix.de>,
	Peter Buhr <pabuhr@uwaterloo.ca>
Subject: Re: [PATCH 4/4 v0.4] sched/umcg: RFC: implement UMCG syscalls
Date: Wed, 4 Aug 2021 18:04:57 -0400	[thread overview]
Message-ID: <3530714d-125b-e0f5-45b2-72695e2fc4ee@uwaterloo.ca> (raw)
In-Reply-To: <20210801200617.623745-5-posk@google.com>

[-- Attachment #1: Type: text/plain, Size: 1093 bytes --]

I have attached an atomic stack implementation I wrote. I believe it would
be applicable here. It is very similar, except that the kernel side no longer
needs a retry loop; the looping moves to user space, after the pop. Using it
instead of the code you have in enqueue_idle_worker means the timeout is no
longer needed.
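
To make the intended split concrete, here is a rough sketch using the
functions from the attached atomic_stack.c. The enqueue/consume function
names and handle_idle_worker below are made up for illustration; only the
atomic_stack_* calls are from the attachment.

// Kernel side (sketch): a single push, no retry loop and no timeout.
void enqueue_idle_worker(struct node * volatile * idle_workers, struct node * w) {
	atomic_stack_push(idle_workers, w);
}

void handle_idle_worker(struct node * w); // hypothetical per-worker handling

// Server/user-space side (sketch): steal the whole list at once, then
// iterate; any waiting for still-pending next pointers happens here.
void consume_idle_workers(struct node * volatile * idle_workers) {
	struct node * n = atomic_stack_pop_all(idle_workers);
	while(n != STACK_NO_VAL) {
		struct node * next = atomic_stack_next(n, true);
		handle_idle_worker(n);
		n = next;
	}
}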

 > - ``uint64_t idle_server_tid_ptr``: points to a pointer variable in the
 >   userspace that points to an idle server, i.e. a server in IDLE state
 >   waiting in sys_umcg_wait(); read-only; workers must have this field set;
 >   not used in servers.
 >
 >   When a worker's blocking operation in the kernel completes, the kernel
 >   changes the worker's state from ``BLOCKED`` to ``IDLE``, adds the worker
 >   to the list of idle workers, and checks whether
 >   ``*idle_server_tid_ptr`` is not zero. If not, the kernel tries to cmpxchg()
 >   it with zero; if cmpxchg() succeeds, the kernel will then wake the server.
 >   See `State transitions`_ below for more details.

In this case, I believe cmpxchg is not necessary and xchg suffices.
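
Roughly, the kernel can unconditionally exchange zero in and act on whatever
value comes back, instead of reading and then cmpxchg()-ing. A sketch, using
the user-space builtin purely for illustration (the kernel would use xchg();
wake_server is a made-up placeholder):

#include <stdint.h>

extern void wake_server(uint64_t tid); // hypothetical

static void maybe_wake_idle_server(uint64_t * idle_server_tid_ptr) {
	// Atomically claim the idle server tid, if any, leaving zero behind.
	uint64_t tid = __atomic_exchange_n(idle_server_tid_ptr, 0, __ATOMIC_SEQ_CST);
	if(tid != 0)
		wake_server(tid);
}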


[-- Attachment #2: atomic_stack.c --]
[-- Type: text/x-csrc, Size: 3150 bytes --]

// This is free and unencumbered software released into the public domain.
//
// Anyone is free to copy, modify, publish, use, compile, sell, or
// distribute this software, either in source code form or as a compiled
// binary, for any purpose, commercial or non-commercial, and by any
// means.
//
// In jurisdictions that recognize copyright laws, the author or authors
// of this software dedicate any and all copyright interest in the
// software to the public domain. We make this dedication for the benefit
// of the public at large and to the detriment of our heirs and
// successors. We intend this dedication to be an overt act of
// relinquishment in perpetuity of all present and future rights to this
// software under copyright law.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
// IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
// OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
// ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
// OTHER DEALINGS IN THE SOFTWARE.
//
// For more information, please refer to <https://unlicense.org>

#include <assert.h>
#include <stdbool.h>

struct node {
	struct node * volatile next;
};

// Two sentinels, the values do not matter but must be different
// and unused by real addresses.
static struct node * const STACK_NO_VAL  = 0;
static struct node * const STACK_PENDING = (struct node *)1;

// push a node to the stack
static inline void atomic_stack_push(struct node * volatile * head, struct node * n) {
	/* paranoid */ assert( n->next == STACK_NO_VAL );
	// Mark the node as pending so that if it is popped before the next
	// field is assigned below, the reader knows this is not necessarily
	// the end of the list
	n->next = STACK_PENDING;

	// actually add the node to the list
	struct node * e = __atomic_exchange_n(head, n, __ATOMIC_SEQ_CST);

	// update the next field
	__atomic_store_n(&n->next, e, __ATOMIC_RELAXED);
}

// Pop all nodes from the stack
// Once popped, nodes should be iterated over using atomic_stack_next
static inline struct node * atomic_stack_pop_all(struct node * volatile * head) {
	// Steal the entire list for ourselves atomically
	// Nodes can still have pending next fields but everyone should agree
	// the nodes are ours.
	return __atomic_exchange_n(head, STACK_NO_VAL, __ATOMIC_SEQ_CST);
}

// from a given node, advance to the next node, waiting for a pending next
// field to be resolved
// if clear is set, the node that is advanced from is unlinked before the
// next node is returned
static inline struct node * atomic_stack_next(struct node * n, bool clear) {
	// Wait while the next field is still pending
	while(STACK_PENDING == __atomic_load_n(&n->next, __ATOMIC_RELAXED)) asm volatile("pause" : : :);

	// The field is no longer pending: the concurrent push that set it has
	// completed its store, so the value read here will not change under us.
	struct node * r = n->next;

	// For convenience, unlink the node if desired and return.
	if(clear) n->next = STACK_NO_VAL;
	return r;
}

Thread overview: 13+ messages
2021-08-01 20:06 [PATCH 0/4 v0.4] sched/umcg: RFC UMCG patchset Peter Oskolkov
2021-08-01 20:06 ` [PATCH 1/4 v0.4] sched/umcg: add WF_CURRENT_CPU and externise ttwu Peter Oskolkov
2021-08-01 20:06 ` [PATCH 2/4 v0.4] sched/umcg: RFC: add userspace atomic helpers Peter Oskolkov
2021-08-01 20:06 ` [PATCH 3/4 v0.4] sched/umcg: add Documentation/userspace-api/umcg.rst Peter Oskolkov
2021-08-01 20:08   ` Peter Oskolkov
2021-08-04 19:12   ` Thierry Delisle
2021-08-04 21:48     ` Peter Oskolkov
2021-08-06 16:51       ` Thierry Delisle
2021-08-06 17:25         ` Peter Oskolkov
2021-08-09 14:15           ` Thierry Delisle
2021-08-01 20:06 ` [PATCH 4/4 v0.4] sched/umcg: RFC: implement UMCG syscalls Peter Oskolkov
2021-08-04 22:04   ` Thierry Delisle [this message]
2021-08-04 23:30     ` Peter Oskolkov
