I wonder if it is possible to set up a dispatcher that ensures that coroutines inside it cannot ever be run simultaneously on different threads. That is, the dispatcher is not necessarily pinned to a specific thread, but if for example the dispatcher runs coroutine 1 on thread A and coroutine 2 on thread B, then 1 and 2 cannot ever run at the same time. In other words, such a dispatcher would ensure that all coroutines that are dispatched by it always run in sequence.
This is useful in cases where you use coroutines not for distributing workload but rather for disentangling state machines. For example, if you write code for controlling some peripheral device, the handshake, command sequences, etc. are typically implemented either as a state machine tied to a reactor pattern (very often found in POSIX IO based C/C++ code) or as a separate dedicated thread that runs all of that stuff. In such code, you usually do not want to run parts of those sequences in parallel. Such code isn’t computationally expensive, but writing it as coroutines greatly simplifies it.
So, any ideas? I know that you can get a single thread executor if you use the JVM, but I wonder if there is a variant that works across platforms.
Do you need to limit concurrency or parallelism? Assuming you launch some kind of tasks, what should happen if the currently executing task needs to wait, e.g. for IO? Should we allow another task to be processed while the first one is waiting (so we limit parallelism), or do you want the first task to finish fully before another task can start executing (so we limit concurrency)?
If you need to limit concurrency, then this is not really the responsibility of dispatchers. Use a Mutex or a Semaphore instead.
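For instance, a minimal sketch of the Mutex approach with kotlinx.coroutines (the counter and the function name are just for illustration):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// A Mutex limits concurrency: only one coroutine at a time can enter the
// critical section, even though the coroutines are dispatched onto
// multiple threads of Dispatchers.Default.
suspend fun incrementWithMutex(times: Int): Int = coroutineScope {
    val mutex = Mutex()
    var counter = 0 // shared mutable state, protected by the mutex
    List(times) {
        launch(Dispatchers.Default) {
            mutex.withLock { counter++ } // no two increments overlap
        }
    }.joinAll()
    counter
}

fun main() = runBlocking {
    println(incrementWithMutex(1000)) // prints 1000: no lost updates
}
```

Note that, unlike a JVM lock, a coroutine Mutex suspends rather than blocks while waiting, so other coroutines can still make progress on the same threads.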
Parallelism. One source of inspiration was the Sequences in the Chrome codebase. A sequence runs tasks, well, sequentially, so no two tasks run at the same time. The sequenced task runner distributes tasks across threads from the thread pool, so even if sequence A is currently blocked by one of its tasks, sequence B can still be executed on the same thread pool. From that page:
Thread-safe but not thread-affine; how so? Tasks posted to the same sequence will run in sequential order. After a sequenced task completes, the next task may be picked up by a different worker thread, but that task is guaranteed to see any side-effects caused by the previous one(s) on its sequence.
In Kotlin coroutines, the sequenced task runner would be one of these proposed “limited dispatcher views” I suppose, and the sequence would exist as the list of tasks for the dispatcher to execute.
Well, alright then - I’ll use the single-threaded executor for now. The JVM is currently the most important target anyway. I suggest studying the document I linked to, though. The task runners in Chrome are quite interesting.
Hmm, I never used these sequences, but the description sounds to me like they actually limit concurrency. Whenever we speak about running some tasks one at a time, sequentially, we mean limited concurrency. Note that even when using a single-threaded dispatcher, you can still execute multiple tasks concurrently - they aren’t guaranteed to run sequentially.
True, their guarantee to run tasks sequentially also limits concurrency - but only within that sequence. I just mentioned that runner because I wanted to emphasize how it spreads tasks across a thread pool while still limiting parallelism.
In my case, I’m OK with allowing concurrency. (In fact, I plan on it - but concurrency in the cooperative sense, like in the reactor pattern.) Most states in my code are coroutine-local or exclusively accessed by a single coroutine anyway, and the states that are shared across coroutines don’t need to be modified in a specific sequence. I control the coroutines that touch those internal states anyway - they are not accessible from the outside. So, as long as no coroutine can modify a state while another also accesses that same state, I’m fine. And, since I don’t benefit from multithreading here, I prefer limiting the number of coroutines that can run simultaneously (as in, truly simultaneously, on multiple threads) to 1 to avoid those data races. The alternative would be mutexes with all of their inherent problems.
Ahh, ok, I understand. I think this is exactly the point of this new planned feature: to limit parallelism, but without the need to restrict execution to one magic thread (or thread pool). It will allow using standard thread pools like Dispatchers.Default, but will ensure that we don’t execute multiple coroutines in parallel. I don’t know if there is any way to achieve this right now.
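For reference, in newer kotlinx.coroutines versions this idea shipped as limitedParallelism. A minimal sketch of what such a “limited dispatcher view” looks like (the val name is just for illustration):

```kotlin
import kotlinx.coroutines.*

// A view of Dispatchers.Default that never runs more than one coroutine
// at a time, while still borrowing worker threads from the shared pool.
@OptIn(ExperimentalCoroutinesApi::class)
val sequential = Dispatchers.Default.limitedParallelism(1)

fun main() = runBlocking {
    var state = 0 // no mutex needed: parallelism is capped at 1
    List(1000) {
        launch(sequential) { state++ }
    }.joinAll()
    println(state) // prints 1000
}
```

Like the Chrome sequences described above, consecutive tasks may land on different pool threads, but each task is guaranteed to see the side effects of the previous ones.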
It’s possible to write a custom dispatcher that uses a single thread instead of a thread pool.
Use a thread pool with a fixed size of 1.
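On the JVM that can look like the following sketch, wrapping a single-thread executor in a coroutine dispatcher (the function name is just for illustration; remember to close the dispatcher when done):

```kotlin
import java.util.concurrent.Executors
import kotlinx.coroutines.*

// Runs n increments of a shared counter on a dispatcher backed by a
// single-thread executor, so no two coroutines ever run in parallel.
fun countOnSingleThread(n: Int): Int {
    val dispatcher = Executors.newSingleThreadExecutor().asCoroutineDispatcher()
    return try {
        runBlocking {
            var state = 0
            List(n) { launch(dispatcher) { state++ } }.joinAll()
            state
        }
    } finally {
        dispatcher.close() // shut down the underlying executor
    }
}

fun main() {
    println(countOnSingleThread(100)) // prints 100
}
```

kotlinx.coroutines also provides newSingleThreadContext("name") as a shorthand for essentially the same thing.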
By the way, in that case it’s not guaranteed that you always run on the same thread, because the pool can kill and recreate its thread. This can lead to some problems with ThreadLocal, for example.