Dispatchers.IO looks like a better option for the default dispatcher

The reason Dispatchers.Default is a good default dispatcher is that it is limited to the number of CPU cores on your machine (unless it only has one core, but that’s another story). Keeping the number of worker threads aligned with the number of cores means you reduce the chance of context switches during your application’s lifetime, and context switches are (relatively speaking) extremely expensive.
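
Something like this sketch shows what I mean: no matter how many coroutines you launch on Dispatchers.Default, the thread names that get printed come from a small pool roughly the size of the core count (the exact sizing is the library’s business, this is just an illustration):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    println("Cores: ${Runtime.getRuntime().availableProcessors()}")

    // Launch more coroutines than cores; the printed thread names repeat
    // instead of growing with the number of tasks.
    val jobs = List(20) { i ->
        launch(Dispatchers.Default) {
            println("Coroutine $i on ${Thread.currentThread().name}")
        }
    }
    jobs.joinAll()
}
```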

This comes with a caveat though, as you mentioned. If you have 8 cores on your machine and you launch 8 long-running tasks, then the 9th task has to wait until one of those tasks reaches a suspension point.
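
Here’s a rough sketch of that scenario (the loop bound is arbitrary, it just needs to keep a thread busy for a while):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val cores = Runtime.getRuntime().availableProcessors()

    repeat(cores) { i ->
        launch(Dispatchers.Default) {
            // Long-running CPU work with no suspension points.
            var x = 0L
            while (x < 3_000_000_000L) x++
            println("Heavy task $i done")
        }
    }

    val latecomer = launch(Dispatchers.Default) {
        // This only gets a thread once one of the heavy tasks above finishes.
        println("Finally scheduled!")
    }
    latecomer.join()
}
```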

The ‘solution’ to this is to not run long-running tasks on the default dispatcher. If you do have a computationally intensive piece of code, you can introduce suspension points by placing some yield() calls in it; each call may suspend the current coroutine and hand the thread over to other tasks.
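
Roughly like this; crunchNumbers is just a made-up stand-in for your own CPU-bound code, and how often you yield() is up to you:

```kotlin
import kotlinx.coroutines.*

suspend fun crunchNumbers(iterations: Long): Long {
    var acc = 0L
    for (i in 0 until iterations) {
        acc += i
        // Every so often, offer the thread back to the dispatcher.
        if (i % 1_000_000L == 0L) yield()
    }
    return acc
}

fun main() = runBlocking {
    val result = withContext(Dispatchers.Default) { crunchNumbers(50_000_000L) }
    println("Result: $result")
}
```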

Then what about IO tasks? IO has mostly been implemented as an ‘atomic’ operation: you make a request to a URL, wait for the response and block the thread the whole time. You can’t really put your own yield() calls anywhere, and you’ll be waiting a (relatively) long time for the answer.
That’s what Dispatchers.IO is for: it lets you run a bunch of blocking IO without sabotaging the default dispatcher. Its pool is generally limited to 64 threads, which does mean an increased potential for context switches, but given the cost of blocking IO that extra overhead can be rationalized as acceptable.
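
The usual pattern looks something like this (the URL is just an example, and fetch is a name I made up):

```kotlin
import kotlinx.coroutines.*
import java.net.URL

suspend fun fetch(url: String): String = withContext(Dispatchers.IO) {
    // readText() blocks the thread until the full response has arrived,
    // but that thread comes from the larger IO pool, not the default one.
    URL(url).readText()
}

fun main() = runBlocking {
    val body = fetch("https://example.com")
    println("Fetched ${body.length} characters")
}
```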

If you run into a CPU-intensive task that you cannot yield in (because you don’t control the source code of the function, for example), then you are in a bit of a pickle. I don’t really understand enough about coroutines to know if there even is a solution there, but it doesn’t seem like you have this problem.
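
For what it’s worth, one workaround I’ve seen suggested (no idea if it counts as a proper solution) is to give that kind of code its own small thread pool, so it at least can’t occupy the default dispatcher’s threads; legacyHeavyComputation here is just a made-up placeholder for code you can’t add yield() calls to:

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

// Stand-in for third-party CPU-bound code you cannot modify.
fun legacyHeavyComputation(): Long {
    var acc = 0L
    for (i in 0L until 100_000_000L) acc += i
    return acc
}

fun main() = runBlocking {
    // Dedicated pool, separate from Dispatchers.Default.
    val cpuHogDispatcher = Executors.newFixedThreadPool(2).asCoroutineDispatcher()

    val result = withContext(cpuHogDispatcher) { legacyHeavyComputation() }
    println("Result: $result")

    cpuHogDispatcher.close()   // shut the pool down when you're done with it
}
```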
