Should it be resource + critical_section or something else? I like resource/critical section because it lets us model things like synchronized access to a “device”.
For the Async->LLVM lowering this can also be lowered to coroutines, so there will be no “blocking” per se (no holding of a mutex lock to guard a critical section), but rather a “wait list” of suspended coroutines (critical sections) that will be processed sequentially, in some nondeterministic order, at runtime.
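A minimal sketch of that wait-list idea, using Python asyncio as a stand-in for the coroutine runtime (all names here, such as `WaitListLock`, are hypothetical illustrations, not proposed ops):

```python
import asyncio

class WaitListLock:
    """A lock that never blocks a thread: contending coroutines are
    suspended on a wait list and resumed one at a time."""
    def __init__(self):
        self._held = False
        self._waiters = []  # wait list of suspended acquirers

    async def acquire(self):
        if not self._held:
            self._held = True
            return
        fut = asyncio.get_running_loop().create_future()
        self._waiters.append(fut)
        await fut  # suspend this coroutine until release() resumes it

    def release(self):
        if self._waiters:
            # hand the lock directly to the next suspended coroutine
            self._waiters.pop(0).set_result(None)
        else:
            self._held = False

async def worker(lock, log, i):
    await lock.acquire()
    await asyncio.sleep(0)  # yield, so the other workers queue up meanwhile
    log.append(i)           # critical section: entered strictly sequentially
    lock.release()

async def main():
    lock, log = WaitListLock(), []
    await asyncio.gather(*(worker(lock, log, i) for i in range(4)))
    return log

print(asyncio.run(main()))
```

Note that `release` hands ownership directly to the next waiter instead of clearing `_held`, so no two critical sections can interleave even though no OS thread ever blocks.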
Supporting critical sections is not specifically tied to the async dialect. Apart from the fact that you plan to lower both to a shared runtime, the two concepts are unrelated. You could even model this as a library call (assuming you had async function calls).
Yes, you are right, the first example only works with std.call because we do not model nested asynchronicity. I wanted to encode the fact that the lock operations do not block the thread but instead deschedule the computation until the lock token is available. My idea was to have acquire_lock return a token that becomes ready once the lock has been acquired. That way, computations that depend on the lock would not be scheduled until then. Likewise with release_lock.
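The token idea can be sketched with asyncio futures standing in for async tokens (`TokenLock`, `acquire_lock`, and `release_lock` are hypothetical names for illustration, not proposed ops):

```python
import asyncio

class TokenLock:
    """acquire_lock returns an 'acquired' token (future) that becomes ready
    once every earlier critical section has released; release_lock(done)
    returns a 'released' token that becomes ready when `done` does."""
    def __init__(self):
        self._last_released = None

    def acquire_lock(self):
        acquired = asyncio.get_running_loop().create_future()
        prev = self._last_released
        if prev is None:
            acquired.set_result(None)  # lock is free: token ready now
        else:
            # token becomes ready only after the previous release token
            prev.add_done_callback(lambda _: acquired.set_result(None))
        return acquired

    def release_lock(self, done):
        released = asyncio.get_running_loop().create_future()
        done.add_done_callback(lambda _: released.set_result(None))
        self._last_released = released
        return released

async def section(acquired, log, i):
    await acquired  # descheduled, not blocked, until the lock is held
    log.append(i)   # the computation depending on the lock

async def main():
    lock, log = TokenLock(), []
    last = None
    for i in range(3):
        acquired = lock.acquire_lock()
        done = asyncio.ensure_future(section(acquired, log, i))
        last = lock.release_lock(done)
    await last
    return log

print(asyncio.run(main()))
```

The scheduler never spins or blocks: each section is simply not runnable until its acquired token resolves, which mirrors the dependency-driven scheduling described above.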
In the second example, I have made this explicit by actually returning these tokens instead of using nesting. I just noticed that I missed some dependencies, though.