I’d like to add a critical section operation. The intended use case is synchronizing access to shared mutable state when the async dialect is lowered to LLVM coroutines with concurrent execution.
Example: compute “something” in parallel and aggregate the results into a shared memref.
Should it be resource + critical_section, or something else? I like resource/critical_section because it allows modeling things like synchronized access to a “device”.
For the Async->LLVM lowering, this can also be lowered to coroutines, so there will be no “blocking” per se (no mutex lock held to guard the critical section), but rather a “wait list” of suspended coroutines (critical sections) that will be processed sequentially, in some arbitrary order, at runtime.
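A rough sketch of what the IR could look like. Note that `async.resource` and `async.critical_section` are hypothetical op names for the proposal, not existing ops, and the surrounding values (`%acc`, `%v0`, `%c0`) are just illustrative:

```mlir
// Hypothetical: a resource guarding shared state, and a critical section
// that serializes updates to it from concurrently executing regions.
%res = async.resource : !async.resource

%token = async.execute {
  // ... compute a partial result %v0 in parallel ...

  // Only one critical section on %res runs at a time; other entrants are
  // suspended coroutines placed on the resource's wait list.
  async.critical_section %res {
    %old = memref.load %acc[%c0] : memref<1xf32>
    %new = addf %old, %v0 : f32
    memref.store %new, %acc[%c0] : memref<1xf32>
  }
  async.yield
}
```

The point of tying the region to a first-class resource value is that the same mechanism could serialize access to anything, e.g. a device queue, not just a memref.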
Supporting critical sections is not specifically tied to the async dialect. Beyond the fact that you plan to lower both to a shared runtime, the two concepts are unrelated. You could even model this as a library call (assuming you had async function calls).
This could also work, but then you would need support for a value (the %r) that changes its readiness state back and forth, so that only one waiting coroutine (or async dependency) fires at a time.
For some reason I thought that the atomic operations in std were just counterparts of C++ fetch_add, fetch_sub, etc. It looks like generic_atomic_rmw is exactly what I need.
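For reference, the std-era `generic_atomic_rmw` looks roughly like this (adapted from the MLIR op documentation; in more recent MLIR the op lives in the memref dialect as `memref.generic_atomic_rmw`):

```mlir
// Atomically update %I[%i]: the region computes the new value from the
// current one, and the lowering may retry it (e.g. as a compare-and-swap
// loop), so the body must be side-effect free.
%x = generic_atomic_rmw %I[%i] : memref<10xf32> {
^bb0(%current_value : f32):
  %c1 = constant 1.0 : f32
  %inc = addf %c1, %current_value : f32
  atomic_yield %inc : f32
}
```

Unlike the fixed `atomic_rmw` kinds (addf, addi, ...), the region lets you express arbitrary read-modify-write combinators.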
Yes, you are right: the first example only works with std.call, since we do not model nested asynchronicity. I wanted to encode the fact that the lock operations do not block the thread but instead deschedule until the lock token is available. My idea was to have acquire_lock return a token that becomes ready once the lock has been acquired. That way, computations that depend on the lock would not be scheduled until then. Likewise for release_lock.
In the second example, I made this explicit by actually returning these tokens without using nesting. I just noticed I missed some dependencies, though.
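The token-based scheme described above could look something like this (`async.acquire_lock` and `async.release_lock` are hypothetical ops, and the dependency-list syntax mirrors `async.execute`):

```mlir
// %acquired becomes ready only once the lock is held, so the dependent
// async.execute region is not scheduled before that point.
%acquired = async.acquire_lock %lock : !async.token

%done = async.execute [%acquired] {
  // ... update shared state while holding the lock ...
  async.yield
}

// Releasing also yields a token, so a later acquire_lock (or any other
// computation) can be made to depend on the lock having been released.
%released = async.release_lock %lock [%done] : !async.token
```

Threading the tokens explicitly is what makes the ordering visible to the scheduler without ever blocking an OS thread.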