Can Coroutines leverage the new Loom compatible JDBC drivers?

With the introduction of Loom-compatible database drivers that enable non-blocking database operations, can coroutines also take advantage of these drivers? If not, would a hybrid approach, such as creating a custom dispatcher that uses Loom, be viable? I'm not sure how bad the context switching would be in this case.

I doubt coroutines can take advantage of it until they are entirely reimplemented to run on virtual threads.

I must admit, I don't fully understand what the change in the newer JDBC drivers is. Unfortunately, I didn't find any explanation of what it means exactly for a driver to be Loom-compatible. Maybe in earlier versions DB operations pinned the carrier thread, so if we executed 50 DB operations using virtual threads, we still required 50 platform threads. But I'm mostly guessing.

Anyway, if this is the case, then while using coroutines we don't necessarily need these JDBC improvements. If we guard every DB operation with a semaphore, we have already solved this problem. But again, I'm mostly speculating here.
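To illustrate the semaphore idea, here is a minimal sketch. The names (`dbPermits`, `dbOperation`) are hypothetical, and the "query" is simulated; the point is only that at most 8 `Dispatchers.IO` threads are ever blocked, regardless of how many concurrent operations we launch:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Cap concurrent DB calls at 8, matching a hypothetical connection pool size.
val dbPermits = Semaphore(permits = 8)

suspend fun <T> dbOperation(block: () -> T): T =
    dbPermits.withPermit {
        // The blocking JDBC call still needs a thread; Dispatchers.IO provides
        // one, but the semaphore guarantees at most 8 are blocked at any time.
        withContext(Dispatchers.IO) { block() }
    }

fun main() = runBlocking {
    val results = (1..50).map { i ->
        async { dbOperation { "row $i" } }  // simulated query
    }.awaitAll()
    println(results.size)  // 50 tasks, but never more than 8 blocked threads
}
```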

It primarily involves replacing the use of synchronized with ReentrantLock to prevent virtual threads from being pinned to their carrier. An example of this can be found in the PostgreSQL JDBC driver.

JDBC requests are still synchronous, and coroutines are made for asynchronous operations. So, I don't think so.

You can wrap a synchronous call in a coroutine, but I am not sure how that helps without changing the rest of the logic.
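For reference, wrapping a synchronous call usually looks like the sketch below. The wrapper name `blockingQuery` is hypothetical, and the blocking work is simulated with `Thread.sleep`; the calling coroutine suspends while the blocking call occupies a `Dispatchers.IO` thread:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.runInterruptible

// The blocking call is moved to Dispatchers.IO, so the calling coroutine
// suspends instead of blocking its thread. runInterruptible additionally
// translates coroutine cancellation into a thread interrupt.
suspend fun <T> blockingQuery(block: () -> T): T =
    runInterruptible(Dispatchers.IO) { block() }

fun main() = runBlocking {
    val n = blockingQuery {
        Thread.sleep(10)  // stands in for a synchronous JDBC call
        42
    }
    println(n)  // 42
}
```

As the post says, this only moves the blocking elsewhere; it doesn't make the underlying I/O asynchronous.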

Maybe I misunderstood you, but it is all about efficient use of resources, and even if the DB server is synchronous, I believe we can still benefit from asynchronous processing on the client side. First, we can queue tasks asynchronously; second, we can perform I/O in a non-blocking way.

Again, I don't know JDBC internals, but I would assume that if we do 50 concurrent DB operations, limit the number of DB connections to 8, and all of them are awaiting a response, then:

  • Classic Java - we need at least 50 platform threads.
  • Virtual threads with the older JDBC driver - still 50+ platform threads.
  • Coroutines with Dispatchers.IO - still 50+ threads.
  • Coroutines with a semaphore - 8+ threads.
  • Virtual threads with the newer JDBC driver - potentially even 0 dedicated threads (a single shared thread using epoll).

So there is still a potential benefit in switching to virtual threads. However, this benefit is not that big compared to what we can already do with coroutines, so as I said above, we don't necessarily need it.

Hmm, I think even something as simple as Executors.newVirtualThreadPerTaskExecutor().asCoroutineDispatcher() should work as you would expect (?). We won't block any thread on the coroutines side, and on the Java side it will use virtual threads as usual. I would probably be careful about running suspend code inside a virtual thread, so instead of creating a dispatcher, it might be better to create a function like runInVirtualThread(), which would suspend but accept only a non-suspendable lambda. Then again, suspendable code could/should be compatible with running inside a virtual thread, so maybe a dispatcher like the above is fine.
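A sketch of what that runInVirtualThread() idea could look like (the name is from the post; the implementation below is one possible approach, assuming JDK 21+): suspend the caller, run the plain blocking lambda on a fresh virtual thread, and resume with its result.

```kotlin
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.suspendCancellableCoroutine
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException

// Runs a non-suspendable blocking lambda on a virtual thread while the
// calling coroutine is suspended (not blocking any dispatcher thread).
suspend fun <T> runInVirtualThread(block: () -> T): T =
    suspendCancellableCoroutine { cont ->
        val thread = Thread.ofVirtual().start {
            try {
                cont.resume(block())
            } catch (t: Throwable) {
                cont.resumeWithException(t)
            }
        }
        // Best-effort cancellation: interrupt the virtual thread.
        cont.invokeOnCancellation { thread.interrupt() }
    }

fun main() = runBlocking {
    val onVirtual = runInVirtualThread {
        Thread.sleep(10)  // blocking here is cheap: it's a virtual thread
        Thread.currentThread().isVirtual
    }
    println(onVirtual)  // true
}
```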

Consider that you have a synchronous call, meaning you need to wait for its result to proceed. In the case of classic Java threads, you need to allocate a separate thread to wait for it, which is inefficient. With Loom you can do that without allocating a thread stack, so it becomes more efficient, but it is still synchronous. Here, a Kotlin Deferred will be a full equivalent of a Java Future on top of a Loom thread. If you just get the data in a loop, it won't give you anything new.

In order to get all the benefits of Kotlin coroutines, you need not only to get data without blocking a thread, but also to process it in an asynchronous way. For example, push data into a Channel/Flow as soon as you get it and process it in a map/reduce fashion. Or you can start an actor-like coroutine that receives the data and sends it out over the network.
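A minimal sketch of that push-and-process pattern (the producer here simulates reading rows; in real code it would iterate a JDBC ResultSet): one coroutine sends each row into a Channel as it arrives, and another aggregates them concurrently instead of waiting for the full result set.

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val rows = Channel<Int>()

    launch { // producer: stands in for reading a result set row by row
        for (i in 1..5) rows.send(i)
        rows.close()
    }

    var sum = 0
    for (row in rows) { // consumer: processes each row as it is received
        sum += row
    }
    println(sum)  // 15
}
```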

Using Loom inside JDBC could help use fewer resources, but I can't see how integrating coroutines will help.

Ideally, in java 21+, Dispatchers.IO should use an unlimited number of virtual threads. Until that happens, you can always make your own dispatcher that does.

I’m not sure what benefits you’re looking for from “leveraging” or “taking advantage of” these drivers, so I can’t say whether or not this is satisfactory.

Roman Elizarov said in a talk at KotlinConf that dispatchers are optimized to minimize context switching, and the Loom API doesn’t allow them to write a dispatcher that both uses virtual threads and minimizes context switching. So they’re not planning on using Loom in coroutines.


Fortunately, it is fairly easy to create your own dispatcher based on virtual threads:

import kotlinx.coroutines.asCoroutineDispatcher
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext
import java.util.concurrent.Executors
import kotlinx.coroutines.runBlocking

fun main(): Unit = runBlocking {
	Executors.newVirtualThreadPerTaskExecutor().asCoroutineDispatcher()
		.use { loom ->
			withContext(loom) {
				launch { println("Thread ${Thread.currentThread()}") }
				launch { println("Thread ${Thread.currentThread()}") }
			}
		}
}
“using Loom in coroutines” there refers to a completely different idea, and the reason given concerns the Java 21 libraries blocking access to the APIs that create virtual threads with non-default schedulers.