lewissbaker / cppcoro

A library of C++ coroutine abstractions for the coroutines TS
MIT License

async_mutex::unlock() behavior should be better documented #204

Open · aschultz opened this issue 3 years ago

aschultz commented 3 years ago

async_mutex::unlock() makes a direct call to handle.resume() on the next awaiter in the queue. This prevents the code releasing the mutex from continuing its execution until the next awaiter -- potentially all of the awaiters -- have completed. (Here, "completed" means either run to the end or suspended by co_awaiting an operation that schedules the remaining work for later execution.)

It also means the thread an awaiter resumes on is always the thread of the first lock holder in the chain.

#include <cppcoro/async_mutex.hpp>
#include <winrt/Windows.Foundation.h>

using winrt::Windows::Foundation::IAsyncAction;

static cppcoro::async_mutex s_mutex{};

IAsyncAction DoWork() {
    {
        auto scopedLock = co_await s_mutex.scoped_lock_async();
        // (A) Do work that needs the mutex
    }
    // Lock is destroyed, calling s_mutex.unlock();
    // the next queued call to DoWork starts running. We are now blocked here until it completes.

    // (B) This line doesn't run until every queued DoWork has been processed.
}
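
For reference, here is a self-contained sketch that reproduces the ordering using only cppcoro primitives (task, sync_wait, when_all, single_consumer_event); the coroutine names first_holder/waiter/trigger are just illustrative. Run on a single thread, the waiter's line prints before the first holder's final line, because unlock() resumes the waiter inline:

#include <cppcoro/async_mutex.hpp>
#include <cppcoro/single_consumer_event.hpp>
#include <cppcoro/sync_wait.hpp>
#include <cppcoro/task.hpp>
#include <cppcoro/when_all.hpp>

#include <cstdio>

cppcoro::async_mutex g_mutex;
cppcoro::single_consumer_event g_event;

cppcoro::task<> first_holder()
{
    {
        auto lock = co_await g_mutex.scoped_lock_async(); // mutex is free, acquired immediately
        co_await g_event;                                 // suspend while still holding the lock
        std::printf("first_holder: releasing the lock\n");
    } // ~async_mutex_lock() calls unlock(), which resumes waiter() inline here

    std::printf("first_holder: resumes only after waiter() has finished\n");
}

cppcoro::task<> waiter()
{
    // The mutex is held by first_holder(), so this suspends and is queued.
    auto lock = co_await g_mutex.scoped_lock_async();
    std::printf("waiter: acquired the lock on the unlocking coroutine's call stack\n");
}

cppcoro::task<> trigger()
{
    g_event.set(); // resumes first_holder(), still on this thread
    co_return;
}

int main()
{
    // Expected output, given the inline resume described above:
    //   first_holder: releasing the lock
    //   waiter: acquired the lock on the unlocking coroutine's call stack
    //   first_holder: resumes only after waiter() has finished
    cppcoro::sync_wait(cppcoro::when_all(first_holder(), waiter(), trigger()));
}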

This behavior isn't intuitive. It's unusual for a future operation to have an impact on the current one. I would have expected instead that the code calling unlock() queues the next awaiter through some scheduler (thread pool, UI queue, etc.), allowing the current operation to run through (B) immediately.

I realize the current design is more agnostic to the environment, and devs can still customize where execution continues. So I'd ask that the documentation at least be improved to make clear 1) the impact of this design, and 2) that you may want a call to something like winrt::resume_* after the lock_async() call to avoid blocking previous operations.
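
As a minimal sketch of that workaround, assuming a C++/WinRT environment where winrt::resume_background() is available (and reusing s_mutex and the includes from the example above):

IAsyncAction DoWork() {
    auto scopedLock = co_await s_mutex.scoped_lock_async();
    // If the mutex was contended, we have just been resumed inline inside the
    // previous holder's unlock(). Hop to the thread pool so that holder can
    // continue immediately instead of running our work on its call stack.
    co_await winrt::resume_background();

    // (A) Do work that needs the mutex, now on a thread-pool thread.

    // scopedLock is destroyed when the coroutine finishes, releasing the mutex;
    // the next waiter is then resumed inline on this thread-pool thread.
}

The trade-off is that every caller pays a thread-pool hop even when the mutex was uncontended, but no caller is ever blocked on the awaiters queued behind it.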

Maximsiv1410 commented 2 years ago

What's more: what if the resumed coroutine throws an exception? async_mutex_lock releases the async_mutex in its destructor by calling mutex->unlock(), so there is potential for undefined behavior.

jeanga commented 2 years ago

Please be aware of #208.