Limits of delay()?

Hello, simple question, though maybe the answer isn’t that simple. Are there known limitations to delay()?
Like, would there be any overflow if I delay() a coroutine for several weeks (a great number of coroutines, to be more correct, like 100,000)?
Actually the target is the JVM.

Hi there

I am not aware of any limitation in the kotlinx.coroutines library about that. You might fill the dispatcher queues a lot, which could be a performance hazard, but besides that I can’t see anything bad. You might want to have a coroutine take care of dispatching the delayed tasks in order to free the dispatcher queue, though.
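For the overflow part specifically: delay() takes its timeout as a Long of milliseconds, so a multi-week delay is nowhere near overflow. A quick back-of-the-envelope check:

```kotlin
fun main() {
    // four weeks expressed in milliseconds, the unit delay() uses
    val fourWeeksMillis = 4L * 7 * 24 * 60 * 60 * 1000
    println(fourWeeksMillis)                   // 2419200000, comfortably inside a Long
    println(Long.MAX_VALUE / fourWeeksMillis)  // billions of four-week spans of headroom
}
```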

Also keep in mind that suspended coroutines hold references to everything they captured, preventing any kind of GC on that data, so you should make sure to reduce the quantity of data the coroutine is bound to or you will have memory leaks.

Hope this helps

“you might want to have a coroutine taking care of the dispatching of the delayed tasks in order to free the dispatcher queue tho”
Could you rephrase this, I don’t really understand it. Like, I will have one coroutine creating all the other coroutines with launch(Dispatchers.Default), and then immediately the coroutines will delay() themselves, is this what you mean?

“Quantity of data”
The coroutines will only use one data object with some strings, and the good thing is that after the delay(daysInMillis) they do their job and are finished, so I hope GC won’t be a problem. But there will always be around 100,000 coroutines, because whenever one finishes new ones will come.
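So roughly this pattern, as a minimal sketch (the delay is shortened here so the demo finishes; in the real program it would be delay(daysInMillis)):

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.atomic.AtomicInteger

fun main() = runBlocking {
    val done = AtomicInteger(0)
    // one parent coroutine launching all the workers, which delay() immediately
    val jobs = List(100_000) {
        launch(Dispatchers.Default) {
            delay(50)              // stands in for delay(daysInMillis)
            done.incrementAndGet() // "do their job", then the coroutine finishes
        }
    }
    jobs.joinAll()
    println("completed: ${done.get()}")
}
```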

If you want to make the dispatcher’s job easier you could use a dedicated thread and do a runBlocking(Dispatchers.Unconfined) {} (but I would recommend testing first; Unconfined was experimental until quite recently and might behave unexpectedly), or even better, have a single coroutine doing the actual suspension and launch new coroutines from it when it is time.

But again, apart from a potential performance hazard, using delay() with a lot of coroutines should work fine. You should try launching a great number of coroutines that call delay() and see whether new coroutines suffer from dispatching latency; if not, you are good to go.
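A rough version of that experiment could look like this (timings are machine-dependent, and Long.MAX_VALUE just stands for “effectively forever”):

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

fun main() = runBlocking {
    // fill the dispatcher with coroutines that will stay suspended "forever"
    val background = List(100_000) {
        launch(Dispatchers.Default) { delay(Long.MAX_VALUE) }
    }
    // then see how long a fresh coroutine takes to get dispatched and complete
    val latency = measureTimeMillis {
        launch(Dispatchers.Default) { }.join()
    }
    println("dispatch latency under load: ${latency}ms")
    background.forEach { it.cancel() }
}
```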

If you are sure the lambda of your coroutine only uses this data object, it’s fine; if it uses some other external variables, those will be captured by the coroutine and kept alive while it is suspended. If you are not sure, use a profiler to check for leaks.

“might behave unexpectedly”
For this program reliability and stability is much more important than performance.

“have a single coroutine doing the actual suspention and launch new coroutines from it when it is time”
Now I got what you mean. I will think about this idea, but my intuition tells me that it won’t be possible, because the coroutine would need to suspend multiple (let me call them) “tasks” in parallel during runtime (even the actual time to delay/suspend each coroutine will only be known at runtime), and the way I solve parallel problems is… creating new coroutines.

“delay with a lot of coroutines”
I was more concerned about the length of the delay than the number of coroutines using it.

But thank you, I think now things will work the way I had planned.

“the lambda of your coroutine”
Means everything inside of the launch(){} block, right?

“if it uses some other external variables”
Now, thinking more about how I will do it, even the data object will be created inside the launch block, if I’m correct.

Exactly. If you know the held values are not an issue, you are good to go.
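And regarding the single-coroutine scheduler from before: it is possible, because one coroutine can own a queue of tasks ordered by due time, pull in newly submitted tasks, and only launch a worker when a task is actually due. A rough sketch (Task and the coarse 10 ms tick are my own simplifications, not anything from the library):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel
import java.util.PriorityQueue

class Task(val dueAt: Long, val run: suspend () -> Unit)

// One coroutine owns all the waiting: it sleeps in short ticks, drains newly
// submitted tasks, and launches a short-lived worker for each task that is due.
suspend fun scheduler(tasks: Channel<Task>) = coroutineScope {
    val pending = PriorityQueue<Task>(compareBy { it.dueAt })
    while (isActive) {
        // pull in any newly submitted tasks without blocking
        while (true) {
            val task = tasks.tryReceive().getOrNull() ?: break
            pending.add(task)
        }
        // launch a worker for every task whose time has come
        val now = System.currentTimeMillis()
        while (pending.isNotEmpty() && pending.peek().dueAt <= now) {
            val task = pending.poll()
            launch { task.run() }
        }
        delay(10) // coarse tick; a precise version would delay until the next due time
    }
}

fun main() = runBlocking {
    val tasks = Channel<Task>(Channel.UNLIMITED)
    var ran = false
    tasks.trySend(Task(System.currentTimeMillis()) { ran = true })
    withTimeoutOrNull(300) { scheduler(tasks) } // stop the demo after 300 ms
    println("task ran: $ran")
}
```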

If reliability is much more important than performance, delay()'s scalability is not the big question, I’d say. Even if delay() is flawlessly reliable, what happens to the job that’s scheduled for week 4 if your server has a hardware glitch and reboots spontaneously in week 3?

Instead, you could write all your scheduled events to a persistent store. Then there’s no need to worry about whether delay() can handle thousands of events with month-long delays; instead you wake up every so often, poll for the events that are scheduled between now and the next wake-up time, and (depending on how important precise scheduling is to your application) run them all or schedule them all using an in-memory scheduler. Recovering from a server restart is then a much less complex problem, and it will scale to any duration and any number of scheduled events with high reliability.
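A sketch of that polling loop, assuming a hypothetical EventStore interface (a real implementation would be backed by a database so events survive a restart):

```kotlin
import kotlinx.coroutines.*

// Hypothetical persistence interface: a real implementation would query a
// database, so a reboot in week 3 cannot lose the event scheduled for week 4.
interface EventStore {
    fun dueBefore(deadline: Long): List<String> // payloads due before the deadline
    fun markDone(event: String)
}

// Wake up every so often, run everything scheduled before the next wake-up.
suspend fun runScheduler(
    store: EventStore,
    wakeEveryMillis: Long,
    handle: suspend (String) -> Unit,
) {
    while (currentCoroutineContext().isActive) {
        val nextWakeUp = System.currentTimeMillis() + wakeEveryMillis
        for (event in store.dueBefore(nextWakeUp)) {
            handle(event)
            store.markDone(event)
        }
        delay(wakeEveryMillis)
    }
}

fun main() = runBlocking {
    // in-memory stand-in for the persistent store, just for the demo
    val events = mutableMapOf("send-report" to 0L) // due immediately
    val handled = mutableListOf<String>()
    val store = object : EventStore {
        override fun dueBefore(deadline: Long) =
            events.filterValues { it < deadline }.keys.toList()
        override fun markDone(event: String) { events.remove(event) }
    }
    withTimeoutOrNull(300) { runScheduler(store, 50) { handled.add(it) } }
    println("handled: $handled")
}
```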