Threading model for coroutines in backend applications

In the latest post about Explicit concurrency, there is the following statement: “Many successful server-side applications run totally in a single thread without any parallelism, yet scale quite well (think about the whole node.js platform).”

I haven’t seen any coherent suggestion for backend Java applications that use coroutines regarding threads and async execution. I would like to point out a few things:

  • The default dispatcher for coroutines is defined like this:
internal val useCoroutinesScheduler = systemProp(COROUTINES_SCHEDULER_PROPERTY_NAME).let { value ->
    when (value) {
        null, "", "on" -> true
        "off" -> false
        else -> error("System property '$COROUTINES_SCHEDULER_PROPERTY_NAME' has unrecognized value '$value'")
    }
}

internal actual fun createDefaultDispatcher(): CoroutineDispatcher =
    if (useCoroutinesScheduler) DefaultScheduler else CommonPool

This means that by default it is not using the ForkJoinPool-backed CommonPool.

  • There are also other frameworks like Netty, Ktor, and Java parallel streams, which either use their own threads or whose threading behavior I am not sure about.

  • The suggestion in the post is that it might be efficient to use one thread for the backend.
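The dispatcher point above is easy to check empirically on a plain JVM, without any kotlinx.coroutines dependency. As a sketch (the printed numbers depend on your machine): `ForkJoinPool.commonPool()` defaults to `cores - 1` workers, while the coroutine scheduler behind `Dispatchers.Default` uses at least 2 and roughly `cores` threads.

```kotlin
import java.util.concurrent.ForkJoinPool

fun main() {
    val cores = Runtime.getRuntime().availableProcessors()
    // This is the pool that CommonPool (and parallel streams) would share.
    val fjParallelism = ForkJoinPool.commonPool().parallelism
    println("cores=$cores, commonPool parallelism=$fjParallelism")
    // Switching the coroutine default back to CommonPool is done with the JVM
    // flag -Dkotlinx.coroutines.scheduler=off, matching the property check
    // in the snippet quoted above.
}
```

So out of the box a coroutine-heavy backend already runs at least two separate CPU-sized pools.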

To sum up:
Of course, in reality application servers should be tuned based on performance tests, but is it true that, in theory, we should strive to have one pool roughly the size of the number of cores to execute everything? Can we try to tune all our frameworks to use the ForkJoinPool? Is that the best approach to start with?

On a single-core computer you can execute only one thread at a time, so having multiple threads forces the operating system to swap them on the CPU: one thread is running while the others are waiting, sleeping, or idle.
The number of cores in a CPU limits how many threads can run at the same time, so having more threads brings no great benefit unless some threads have to wait; you can use an extra thread for each blocking task.
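That sizing rule can be sketched with plain `java.util.concurrent` (pool names and sizes here are my own illustration, not a framework recommendation): one fixed pool of roughly `cores` threads for CPU-bound work, plus a separate elastic pool for tasks that block.

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    val cores = Runtime.getRuntime().availableProcessors()
    val cpuPool = Executors.newFixedThreadPool(cores)   // compute: never block here
    val blockingPool = Executors.newCachedThreadPool()  // grows with blocking tasks

    val sum = cpuPool.submit<Int> { (1..1000).sum() }.get()   // CPU-bound work
    val io = blockingPool.submit<String> {
        Thread.sleep(10)  // stands in for a blocking file/network/JDBC call
        "done"
    }.get()
    println("$sum $io")  // prints "500500 done"

    cpuPool.shutdown(); blockingPool.shutdown()
    cpuPool.awaitTermination(1, TimeUnit.SECONDS)
    blockingPool.awaitTermination(1, TimeUnit.SECONDS)
}
```

This mirrors what kotlinx.coroutines itself does with `Dispatchers.Default` (CPU-sized) versus `Dispatchers.IO` (elastic, for blocking calls).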

Yes, you can.
Premature optimization is always an option.

No, that is the worst.

Just to clarify some things: I am talking about servers with ~30 cores usually, so obviously not one core.
In addition, I don’t consider this an optimization; I am looking for a coherent architectural decision - like choosing which database to use, choosing which thread pool to use for common operations.

This is the default in Go (I think since Go 1.6, but certainly up to now). In Go, no one can create their own scheduler, as all scheduling is done by the Go runtime and there is no way to access it from outside other than changing Go and rebuilding it. On the JVM it’s different. For instance, when you start JBoss WildFly, about 200 threads are used just for starting it up.


To give some more context: I am in a backend infrastructure team. We have our own legacy framework that uses its own thread pool for async operations, and also Netty. In addition, we implemented an async MySQL DB driver that uses the ForkJoinPool (via CompletableFuture), the coroutines default dispatcher for an actor, and Netty’s thread pools. So I guess this non-optimized setup is not the best we can do.
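One way to move toward a single shared pool in a setup like this is to create one sized executor and hand it to every component that accepts one. A minimal sketch with `CompletableFuture` (kotlinx.coroutines can wrap the same executor via `asCoroutineDispatcher()`; that part is omitted to keep the sketch dependency-free):

```kotlin
import java.util.concurrent.CompletableFuture
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    // One application-wide pool, sized to the core count.
    val appPool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors())

    // Without the executor argument, supplyAsync silently falls back to
    // ForkJoinPool.commonPool() - exactly the kind of hidden extra pool
    // described above.
    val value = CompletableFuture.supplyAsync({ 21 * 2 }, appPool).get()
    println(value)  // prints 42

    appPool.shutdown()
    appPool.awaitTermination(1, TimeUnit.SECONDS)
}
```

The hard part in practice is that frameworks like Netty insist on their own event-loop threads, so "one pool" usually ends up meaning "one pool for everything that is configurable".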

I found the following citation about the fork-join pool: “in most cases, the best decision is to use one thread pool per application or system”.