What is Mutex thread confinement in coroutines?


I have been going through the excellent Coroutine guide https://github.com/Kotlin/kotlinx.coroutines/blob/master/coroutines-guide.md#mutual-exclusion.

One part of the Mutex section has me a little puzzled. Near its end, the guide states that this solution will run slower because it is fine-grained, referring back to the thread-confinement discussion. What I cannot figure out is which context the Mutex itself runs on. Is it perhaps a single-threaded context? The example code runs the actual coroutine on the CommonPool, so the statement that it is fine-grained and will take a performance hit would imply that it switches out of the CommonPool to some other context for each lock/unlock, then back to the CommonPool.

Anyone have any insight? Thanks in advance.


A mutex is not a thread; it is a bit of "state" shared across threads, which they use (with valid synchronization primitives) to ensure that only one thread at a time can "hold" the mutex (never forget to release it :wink: ). Remember that your coroutines can access shared state and can even have multiple instances running in parallel (started on different threads). A thread pool is just a way to limit parallelism (generally to the available computing resources), not normally to restrict the thread count to 1.
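That "shared state" nature can be sketched without coroutines at all. Here is a minimal illustration in plain Kotlin threads using the JDK's `ReentrantLock` (the `raceWithLock` function is hypothetical, just for this example, not from the guide):

```kotlin
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.thread
import kotlin.concurrent.withLock

// The mutex is one shared ReentrantLock instance, visible to every thread,
// guaranteeing that at most one thread at a time holds it.
fun raceWithLock(nThreads: Int, reps: Int): Int {
    val lock = ReentrantLock()
    var counter = 0
    val threads = List(nThreads) {
        thread {
            repeat(reps) {
                lock.withLock { counter++ } // withLock always releases, even on exception
            }
        }
    }
    threads.forEach { it.join() }
    return counter
}

fun main() {
    println(raceWithLock(4, 1000)) // 4000: no lost updates
}
```

Note that `kotlin.concurrent.withLock` handles the release for you, which is the same safety idea as `Mutex.withLock` in the coroutines library.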


Agreed that a mutex is not a thread. However, one way coroutines approach shared mutable state is to confine changes to that state to a specific thread.
This does not mean the state is not shared across threads; rather, it means that the lock/unlock, read, etc. are all performed on a single thread (the context).
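The confinement idea can be sketched with plain JDK tools (the `ConfinedCounter` class below is illustrative, not the guide's code, which uses a single-threaded coroutine context instead): all mutation is submitted to one single-threaded executor, so the writes never race and need no lock at all.

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Thread confinement: `counter` is only ever touched by the executor's
// single worker thread, so concurrent increment() calls cannot race.
class ConfinedCounter {
    private val confined = Executors.newSingleThreadExecutor()
    private var counter = 0 // confined to the executor's thread

    fun increment() {
        confined.execute { counter++ } // mutation hops onto the confined thread
    }

    fun shutdownAndGet(): Int {
        confined.shutdown()
        confined.awaitTermination(10, TimeUnit.SECONDS)
        return counter // safe to read: all tasks have completed
    }
}

fun main() {
    val c = ConfinedCounter()
    val workers = List(4) { Thread { repeat(1000) { c.increment() } } }
    workers.forEach { it.start() }
    workers.forEach { it.join() }
    println(c.shutdownAndGet()) // 4000
}
```

This is also where the cost comes from: every `increment()` pays for a hop onto the confined thread, which is the "fine-grained" overhead the guide alludes to.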

As stated in the initial post, the guide describes Mutex as "fine-grained". This leads me to wonder whether Mutex uses the same thread-confinement implementation. If so, my initial question about the context the Mutex locks/unlocks in would stand. Or is it a different kind of implementation?


Any kind of synchronization is costly. When you do "fine-grained" (per-item) synchronization you pay the price for each item processed, so code that needs very high throughput (number of items processed per second) typically groups items into larger batches to amortize synchronization costs.
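The per-item vs. batched trade-off can be sketched like this (the function names are made up for the example; `AtomicLong` stands in for whatever synchronization primitive you use):

```kotlin
import java.util.concurrent.atomic.AtomicLong

// "Fine-grained": one synchronized operation per item processed.
fun sumFineGrained(total: AtomicLong, items: IntArray) {
    for (x in items) total.addAndGet(x.toLong()) // sync cost paid per item
}

// "Coarse-grained": accumulate locally, then synchronize once per batch.
fun sumBatched(total: AtomicLong, items: IntArray) {
    var local = 0L
    for (x in items) local += x // no synchronization on the hot path
    total.addAndGet(local)      // one synchronized op for the whole batch
}

fun main() {
    val items = IntArray(1000) { 1 }
    val a = AtomicLong()
    val b = AtomicLong()
    sumFineGrained(a, items)
    sumBatched(b, items)
    println(a.get() == b.get()) // same result, ~1000x fewer synchronized ops in the batched version
}
```

Both produce the same total; the batched version just touches the shared `AtomicLong` once instead of once per item, which is exactly the amortization described above.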