Let’s say I have a blocking function that performs a heavy calculation. For example:
fun cachedFib(x: Long): Long {
    // check if value is cached
    // calculate if not in cache and populate cache
    // return value
}
In a blocking world, I can run this function on a single-thread executor and be sure that two calls never run in parallel.
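For instance, a minimal sketch of what I mean (fibExecutor and cachedFibAsync are just illustrative names, and it assumes cachedFib from above is implemented):

import java.util.concurrent.Callable
import java.util.concurrent.Executors

// A single-thread executor runs submitted tasks one after another,
// so two cachedFib calls can never overlap.
val fibExecutor = Executors.newSingleThreadExecutor()

fun cachedFibAsync(x: Long) = fibExecutor.submit(Callable { cachedFib(x) })

// Usage: both futures are computed sequentially on the same worker thread.
// val f1 = cachedFibAsync(40)
// val f2 = cachedFibAsync(40)
// println(f1.get() + f2.get())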
Now let’s move to the non-blocking coroutines world:
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.newSingleThreadContext
import kotlinx.coroutines.runBlocking

fun main() {
    val scope = CoroutineScope(newSingleThreadContext("aaa"))
    scope.launch { foo() }
    scope.launch { foo() }
    runBlocking {
        delay(2000)
    }
}

suspend fun foo() {
    println("looking in cache")
    println("making http call")
    delay(1000)
    println("after call")
}
The output is
looking in cache
making http call
looking in cache
making http call
after call
after call
This is expected, since the whole point of coroutines is not blocking, but it means that if I get two parallel requests I make two HTTP calls, which is bad.
I found a convoluted workaround that chains deferreds recursively, but I am sure there is a better solution:
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Deferred
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

fun main() {
    val scope = CoroutineScope(Dispatchers.IO)
    val deferred1 = getNextAsync(scope)
    val deferred2 = getNextAsync(scope)
    runBlocking {
        deferred1.await()
        deferred2.await()
    }
}

var sharedDeferred: Deferred<Unit>? = null

// Each new call first awaits the previously stored deferred before running foo(),
// so the calls are chained and never overlap.
@Synchronized
fun getNextAsync(scope: CoroutineScope): Deferred<Unit> {
    val deferredToWait = sharedDeferred
    val next = scope.async {
        deferredToWait?.await()
        foo()
    }
    sharedDeferred = next
    return next
}

suspend fun foo() {
    println("looking in cache")
    println("making http call")
    delay(1000)
    println("after call")
}
What is the idiomatic way to solve this issue?