Mutex implementation question

I’m hoping to write an efficient read-write lock for coroutines.

There is an inefficient implementation given here: Read-write Mutex · Issue #94 · Kotlin/kotlinx.coroutines · GitHub

Initially, I was using as a starting point the official Mutex source: https://github.com/Kotlin/kotlinx.coroutines/blob/master/core/kotlinx-coroutines-core/src/main/kotlin/kotlinx/coroutines/experimental/sync/Mutex.kt

But I was quickly overwhelmed. So I decided to go for a semi-efficient solution first, starting with the simplified Mutex example code here: coroutines-examples/mutex.kt at master · Kotlin/coroutines-examples · GitHub

My question is about this simplified implementation. In particular, I don’t see how line 37 is reachable. There is only one place where the resumed field can be mutated: on line 38. Is there a missing line to set a waiter’s resumed field to true in the unlock() method? Or am I missing something?

Also, it would be useful if someone could elaborate on the purpose of the “StackOverflow avoidance code” from the official Mutex implementation. I’m having trouble wrapping my head around that part.

I’ll start from the easy part. The “StackOverflow avoidance code” stems from this bug report against the original implementation: Mutex.unlock with Unconfined dispatcher causes StackOverflowError · Issue #80 · Kotlin/kotlinx.coroutines · GitHub. You can take a look at the original bug report and at how it was fixed.

Indeed, there was unreachable code in the simplified implementation. It was a left-over from refactoring a more complex implementation (you can check the git history). Thanks a lot for the report. I’ve fixed the code itself and the comments surrounding it.

Thanks for taking a look at this!

I managed to implement a read-write mutex and a semaphore, but I didn’t include any stack overflow avoidance. Is it possible to somehow pull this stack overflow avoidance into the language and out of library design?

For now, I can just try to avoid using Unconfined entirely, but I noticed that some library functions that I want to use have Unconfined as the default context (e.g. ReceiveChannel.takeWhile from Channels.kt).

Can the problem be reproduced with application code that doesn’t explicitly mention “Unconfined”? If so, I’m basically forced to figure out the stack overflow avoidance.

This stack overflow problem is unique to the Unconfined dispatcher and to other dispatchers that do not actually schedule execution of a coroutine for a later time when asked to do so (they return false from their isDispatchNeeded). The Unconfined dispatcher is strictly reserved for cases where additional performance is desired and the consequences of its use are well understood. The rule of thumb is that you should not use it.
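To make the isDispatchNeeded distinction concrete, here is a toy sketch (the names are mine, not the real kotlinx.coroutines API) contrasting a dispatcher that queues work for later with one that, Unconfined-style, runs it immediately on the caller's stack:

```kotlin
// Toy model only; ToyDispatcher and friends are made-up names, not the
// kotlinx.coroutines API. The point: isDispatchNeeded() == false means
// "run the task right here", so every nested dispatch adds a stack frame.
interface ToyDispatcher {
    fun isDispatchNeeded(): Boolean
    fun dispatch(block: () -> Unit)
}

class QueueingDispatcher : ToyDispatcher {
    private val queue = ArrayDeque<() -> Unit>()
    override fun isDispatchNeeded() = true
    override fun dispatch(block: () -> Unit) { queue.addLast(block) }
    fun drain() { while (queue.isNotEmpty()) queue.removeFirst().invoke() }
}

class UnconfinedLikeDispatcher : ToyDispatcher {
    override fun isDispatchNeeded() = false
    override fun dispatch(block: () -> Unit) = block() // caller's stack frame
}

// Chains n tasks, each dispatching the next, and reports the deepest
// nesting reached while they ran.
fun measureMaxDepth(d: ToyDispatcher, n: Int, drain: () -> Unit = {}): Int {
    var depth = 0
    var maxDepth = 0
    lateinit var step: (Int) -> Unit
    step = { k ->
        depth++
        maxDepth = maxOf(maxDepth, depth)
        if (k > 0) d.dispatch { step(k - 1) }
        depth--
    }
    step(n)
    drain()
    return maxDepth
}

fun main() {
    val q = QueueingDispatcher()
    println(measureMaxDepth(q, 1000, q::drain))                // stays flat: 1
    println(measureMaxDepth(UnconfinedLikeDispatcher(), 1000)) // grows: 1001
}
```

With the queueing dispatcher the chain unwinds one task at a time from a flat loop; with the Unconfined-like one the whole chain nests on a single stack.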

The Unconfined dispatcher problem is also specific to the type of inter-coroutine communication primitive and the communication pattern that is used. In particular, when you build pipelines out of channels and Unconfined coroutines, the stack space consumed is proportional to the number of coroutines in your pipeline, which is usually not very big.

Mutex is an outlier here. It tends to be used to synchronize many coroutines, and it implicitly chains them into very long communication chains (one coroutine unlocks the mutex and resumes the next one, which locks/unlocks and then resumes the next one, and so on), so it is particularly prone to stack overflow in combination with the Unconfined dispatcher.
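That chaining effect can be sketched with a callback-based toy mutex (my own simplification, not the real suspending Mutex): each waiter releases the lock as soon as it runs, and unlock() directly resumes the next waiter on the same stack, so recursion depth grows with the number of queued waiters:

```kotlin
// Toy, callback-based stand-in for the real suspending Mutex. The direct
// call `next()` in unlock() plays the role of an Unconfined-style resume.
class ToyMutex {
    private var locked = false
    private val waiters = ArrayDeque<() -> Unit>()
    private var depth = 0
    var maxObservedDepth = 0
        private set

    fun lock(onAcquired: () -> Unit) {
        if (!locked) {
            locked = true
            onAcquired()
        } else {
            waiters.addLast(onAcquired)
        }
    }

    fun unlock() {
        val next = waiters.removeFirstOrNull()
        if (next == null) {
            locked = false
            return
        }
        depth++
        maxObservedDepth = maxOf(maxObservedDepth, depth)
        next() // directly resumes the next waiter: recursion on one stack
        depth--
    }
}

fun chainDepth(waiterCount: Int): Int {
    val m = ToyMutex()
    m.lock { }                                    // current owner
    repeat(waiterCount) { m.lock { m.unlock() } } // each waiter unlocks at once
    m.unlock()                                    // kicks off the whole chain
    return m.maxObservedDepth
}

fun main() {
    println(chainDepth(500)) // depth grows linearly with the number of waiters
}
```

With enough waiters this pattern overflows the stack, which is exactly the failure mode from issue #80.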

Now, the stack overflow avoidance could be built directly into the Unconfined dispatcher, but I don’t know of a cheap way to do it there. On the other hand, implementing an “expensive” solution in the Unconfined dispatcher runs contrary to the whole idea that the Unconfined dispatcher is a performance optimization. So the compromise is to have it only in the primitives that are prone to this problem in the first place.
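A minimal sketch of that compromise, in the same toy-mutex style as the chaining example (this is my simplification of the trick, not the actual kotlinx.coroutines code): a reentrancy flag turns the recursive hand-off into a flat loop in the outermost unlock():

```kotlin
// Toy sketch of stack overflow avoidance: a nested unlock() just returns,
// and the outermost unlock() drains all waiters in a loop, so the stack
// stays flat no matter how many waiters are queued. The real implementation
// is more careful about lock ownership; this only illustrates the shape.
class TrampolinedToyMutex {
    private var locked = false
    private val waiters = ArrayDeque<() -> Unit>()
    private var resuming = false
    private var depth = 0
    var maxObservedDepth = 0
        private set

    fun lock(onAcquired: () -> Unit) {
        if (!locked) {
            locked = true
            onAcquired()
        } else {
            waiters.addLast(onAcquired)
        }
    }

    fun unlock() {
        if (resuming) return // the outer loop hands off to the next waiter
        resuming = true
        while (true) {
            val next = waiters.removeFirstOrNull()
            if (next == null) {
                locked = false
                break
            }
            depth++
            maxObservedDepth = maxOf(maxObservedDepth, depth)
            next()
            depth--
        }
        resuming = false
    }
}

fun main() {
    val m = TrampolinedToyMutex()
    m.lock { }
    repeat(500) { m.lock { m.unlock() } }
    m.unlock()
    println(m.maxObservedDepth) // stays at 1 regardless of waiter count
}
```

The same trampoline idea is what the “StackOverflow avoidance code” in the real Mutex accomplishes, just expressed over suspended continuations instead of plain callbacks.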