Fearless concurrency in Rust


This is a good intro to how Rust eliminates data races through its ownership and borrowing system:


Of course, I do not advocate big new features in Kotlin now - it's much more important to stabilise the language and ship 1.0. But it’s interesting to see Rust’s approach. Kotlin does not do much in the area of concurrency; perhaps in some future version it may wish to explore similar language features. It seems like most of Rust’s machinery could be implemented with just a notion of move assignment.


The only issue with this (and any other fancy type-system stuff) is that whatever Java library you are using, there's no knowledge whatsoever about what it does with the data passed to it (just remember the null battle).


Yes, indeed. I guess a similar approach could work though. You define a platform type as never triggering move assignment or whatever approach is needed, and if you need to interop with Java code then you get less of the benefit.


Rust has isolates, i.e. every thread has its own memory space. You can't do this on the JVM, I guess. What you could do instead is have the compiler ensure an object is deep cloned whenever it is passed from one thread to another. Or the compiler could make sure that an object exchanged between threads is immutable (including its subobjects, the subobjects of those subobjects, and so on); that is, the method used to send data from one thread to another would require the object being passed to be declared immutable. This is how D handles it. Either approach would probably be difficult to implement.
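A minimal sketch of the “always deep-clone at the thread boundary” idea, in plain Java: a `deepClone` helper (a hypothetical name, using Java serialization as the copying mechanism) that gives the receiving thread a fresh copy of the whole object graph, so no mutable state is ever shared. Slow, but it illustrates the semantics a compiler would enforce.

```java
import java.io.*;
import java.util.ArrayList;

public class Main {
    // Serialize and immediately deserialize to produce a structurally equal
    // but fully independent copy of the object graph.
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T deepClone(T obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj); // walks the whole graph, including subobjects
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (T) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ArrayList<String> original = new ArrayList<>();
        original.add("a");
        ArrayList<String> copy = deepClone(original);
        copy.add("b"); // mutating the copy cannot affect the sender's object
        System.out.println(original); // [a]
        System.out.println(copy);     // [a, b]
    }
}
```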

The last thing I recall about M:N threading in Rust (aka “green threads”) is that it was moved out of the runtime into a library. I’m not sure what the current state is here with Rust. In any case, you can’t have green threads on the JVM (i.e. threads with a reduced context). You can do some thread pooling instead, which already exists. So it looks like there is not much potential on the JVM for new approaches to concurrency as in Rust, Go, et al.


As you said, it's difficult for the compiler to make sure of such things, in particular:

  • in the presence of legacy APIs it is highly non-trivial to detect whether something is being passed to another thread or not
  • it's also hard to say whether something is immutable or not for realistic cases
  • same for deep-cloning


I would not call something that has its own address space a thread: almost by definition, that's a process, not a thread. Rust does have shared memory threading, as the article lays out, it just has different tools for managing it.

One simple feature Rust has which should be implementable in Kotlin, and perhaps already can be as a library, is a mutex that more strongly guards its associated data: a mutex where you can only access the reference it’s holding inside a closure the mutex invokes, so you can’t forget to take the lock. That seems better than @GuardedBy-style annotations. To actually enforce this would, I guess, require escape analysis in the compiler itself, to ensure nobody copies the protected object outside of the closure. But the idiom might still be handy anyway, especially if it’s marked as “might one day be enforced by the compiler”.
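As a library, that idiom might look something like this sketch in plain Java (`Guarded` is a hypothetical class name; Rust's `std::sync::Mutex` enforces the equivalent in its type system, whereas here it's only a convention, since nothing stops the closure leaking the reference):

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;

// A mutex that owns its data: the only way to reach the guarded object
// is through a closure invoked while the lock is held.
class Guarded<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final T value;

    Guarded(T value) { this.value = value; }

    <R> R withLock(Function<T, R> body) {
        lock.lock();
        try {
            return body.apply(value); // the caller only sees T inside the closure
        } finally {
            lock.unlock();
        }
    }
}

public class Main {
    public static void main(String[] args) {
        Guarded<StringBuilder> log = new Guarded<>(new StringBuilder());
        log.withLock(sb -> sb.append("hello"));
        String text = log.withLock(StringBuilder::toString);
        System.out.println(text); // hello
    }
}
```

Because the lock and the data travel together, it is impossible to access the data while forgetting to lock — the failure mode moves to the (compiler-detectable, in principle) case of the closure leaking the reference.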

I’d like to see support for immutability in the type system, as the Java-style builder pattern always seemed like a gross hack to me, but yes I do understand the difficulty of enforcing that at the compiler level. I guess detecting that every field in an object tree is marked final would be a start.
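The “every field is final” start could be sketched reflectively (hypothetical helper, not a real checker): walking a class's declared fields shows why this is only a necessary condition for immutability — a final field can still reference a mutable object, which hints at why full enforcement needs compiler support over the whole object tree.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class Main {
    // Necessary-but-not-sufficient immutability check: are all declared
    // fields of this class final? (Says nothing about what they point at.)
    static boolean allFieldsFinal(Class<?> cls) {
        for (Field f : cls.getDeclaredFields()) {
            if (!Modifier.isFinal(f.getModifiers())) return false;
        }
        return true;
    }

    static final class Point {
        final int x;
        final int y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static final class Mutable {
        int counter; // non-final: fails the check
    }

    public static void main(String[] args) {
        System.out.println(allFieldsFinal(Point.class));   // true
        System.out.println(allFieldsFinal(Mutable.class)); // false
    }
}
```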


Kotlin benefits a lot from seamless Java interop; that's the main thing that draws many of us to the language (me included). That said, as Kotlin grows and matures there will be more and more Kotlin code whose dependencies are themselves also Kotlin, so as time goes on the cost/benefit ratio for features that only apply to Kotlin code will slowly tilt in their favour, even where legacy interop is hard. Nullability is sort of right on that line already: even though Java can use annotations, realistically many libraries don't, so nullability checking is mostly of benefit when Kotlin code calls Kotlin code.

The right way to go here is probably the same way as Java 8 has gone, with pluggable type systems. The Checker framework is IMO not entirely easy to use, but the underlying idea seems good. Allow other people to write plugin type checkers and then perhaps over time, fold the best ones into the core. Checker has two immutability type systems.

I did take a brief look at what it’d take to make Checker work with Kotlin, months ago, and concluded it’d be a big job. Checker integrates very deeply with javac and even duplicates parts of its code. You’d probably need a quite heavily modified fork of it to work with Kotlin and all the extra features. But perhaps by doing that you could find a way to integrate with both Kotlin and IntelliJ simultaneously, or some other benefits.

The advantage of that is, in the event that Checker catches on in the Java world, Java libraries would come pre-annotated and you’d get good interop with the extra features.


By the way Fantom, which is also a JVM language, has some nice things baked into the language for concurrency, see http://fantom.org/doc/docLang/Concurrency. Looks like it contains some good ideas that would be beneficial for other JVM languages as well.


Well, the Fantom approach is the same one many new languages are taking these days: just ban shared-memory concurrency entirely and rely on message passing. You can of course implement this style in any language with a small amount of discipline, if you're willing to accept the inefficiency that can come with everything being immutable.

I’ve got two very concurrent codebases on my hands right now and I’ve been experimenting with different approaches. One is a large library that uses classical thread safety techniques like locking of mutable state. It’s a pain to implement, indeed, but the big advantage is that the library is thread safe, so from the user’s (i.e. developer’s) perspective it’s quite easy to use. There are no thread affinity rules like there would be for a GUI toolkit. You can pretty much use the API in any style or context you like and it will work.

The other is an app that uses the library. Internally it uses an actor-ish sort of model. The “messages” between the frontend actor (GUI thread) and backend actor are actually closures, but otherwise it’s a pretty close approximation to the model. The pure actor model has some pretty tricky problems though.

  1. If you use real messages you end up writing tons of boilerplate to serialize state into messages and deserialize it again. Using closures and lambdas can help avoid a lot of this boilerplate.
  2. Inevitably in real apps, some messages are latency sensitive and others are not. Every time I've tried to use the actor model I've encountered this problem. When using old fashioned concurrency programming you can make your locking finer grained quite easily, as long as you watch out for inversions. Then a request that needs a fast response (because it's required for UI rendering) can be served as quickly as possible. With the actor model it's a lot harder to fix this sort of thing because if a big slow request is being processed the small must-be-fast requests jam up behind it.
  3. With locking you tend to get natural back pressure. With a pure actor design there's no back pressure so if one actor/thread produces work faster than another can consume it you can get OOMs.
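The closures-as-messages trick from point 1 can be sketched in a few lines of plain Java (`CounterActor` is a hypothetical name): an actor is just a single-threaded executor, and instead of defining a message class per request, callers submit lambdas that run on the actor's thread against its private state.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// An "actor" whose mailbox is a single-threaded executor; the messages
// are closures, so no serialize/deserialize boilerplate is needed.
class CounterActor {
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();
    private int count = 0; // confined to the actor thread

    void increment() {
        mailbox.execute(() -> count++); // the lambda IS the message
    }

    Future<Integer> get() {
        return mailbox.submit(() -> count); // reply via a future
    }

    void shutdown() { mailbox.shutdown(); }
}

public class Main {
    public static void main(String[] args) throws Exception {
        CounterActor actor = new CounterActor();
        actor.increment();
        actor.increment();
        System.out.println(actor.get().get()); // 2
        actor.shutdown();
    }
}
```

The state stays confined to one thread exactly as in the actor model, but the downsides listed above (head-of-line blocking, no back pressure) apply just the same: a slow lambda in the mailbox delays every message queued behind it.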

So my app has ended up with a mishmash of styles. The core is still actor based. State the UI needs fast access to is replicated across threads using what I call mirrored collections. A mirrored collection is one that observes another (using the JavaFX observable collections) and replicates the deltas across threads. So the backend actor can mutate a bunch of lists and maps (containing immutable objects) and the frontend actor (UI thread) will see the same changes occur on its own copies of those collections at a later time, so those collections can be bound directly onto UI widgets. It's a model that's worked out quite well for me and I think I'll use it again in future.
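A minimal sketch of the mirrored-collection idea, assuming a hypothetical `MirroredList` class (the real thing described above builds on the JavaFX observable collections; here a plain listener-free wrapper shows just the delta-replication mechanism): mutations on the backend copy are replayed, in order, on a frontend copy via that thread's executor.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Backend thread mutates its copy; each delta is posted to the frontend
// executor, which applies it to the mirror at a later time.
class MirroredList<T> {
    private final List<T> backend = new ArrayList<>();
    private final List<T> mirror = Collections.synchronizedList(new ArrayList<>());
    private final Executor frontendExecutor;

    MirroredList(Executor frontendExecutor) {
        this.frontendExecutor = frontendExecutor;
    }

    // Called on the backend thread.
    void add(T element) {
        backend.add(element);
        frontendExecutor.execute(() -> mirror.add(element)); // replay the delta
    }

    // Safe to read from the frontend thread; can be bound onto UI widgets.
    List<T> frontendView() { return mirror; }
}

public class Main {
    public static void main(String[] args) throws Exception {
        ExecutorService frontend = Executors.newSingleThreadExecutor();
        MirroredList<String> names = new MirroredList<>(frontend);
        names.add("alice");
        names.add("bob");
        frontend.submit(() -> {}).get(); // wait for the deltas to drain
        System.out.println(names.frontendView()); // [alice, bob]
        frontend.shutdown();
    }
}
```

The real implementation would replicate removals and updates too, and would store immutable elements so that sharing them across the two copies is safe.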

Given the importance of concurrency I think observable/mirrorable collections would be a neat addition to the Kotlin standard library.

However some things cannot be easily handled this way, like responding to a button press. Technically you could do it web style and have the button disable itself, then send a message to the backend, and wait for the backend to send a response message back to the frontend, but this is hardly worth it when 99% of the time the response can be immediate and fast by simply reaching directly across actors and grabbing the data it needs using a lock.


Hi Mike,

I see your point. For many kinds of concurrency problems, actors are a big hammer. Channels in Go combined with so-called goroutines are a nice and easy way to do concurrency. You can use channels for low-level stuff, and if you need a bigger hammer, you can build actors on top of channels. In case you didn’t play with Go yet, here is a good primer to see what it is about: http://golang.org/doc/effective_go.html#concurrency. I believe this does not compete with Kotlin ;-). Go and the JVM are really very different things … It’s just about having a hammer of the right size.

In Go, threads do blocking takes on channels. In case the channel is empty, the runtime withdraws the thread from that channel and assigns it to a non-empty one. This cannot easily be done in Java, so instead of doing blocking takes you have to go with asynchronous callbacks. I’m playing with a little framework to accomplish this for Java 8: https://github.com/oplohmann/Wilco It is still sketchy and not yet free of timing problems. If you are interested, you can have a look at the classes PingPongTest and PipelineTest to see how it works. These two test classes try to do the same thing as some well-known Go samples (http://talks.golang.org/2013/advconc.slide#6 and https://blog.golang.org/pipelines). Unhappily, the Channel class contains some synchronized blocks. I have no clue how I could replace them with CAS-style operations, and this worries me, as it may lead to bad lock contention. Maybe the whole thing will die because of this. But it is fun to play with.
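For comparison, here is a minimal blocking ping-pong in plain Java (not how Wilco works — Wilco uses asynchronous callbacks), using a pair of `BlockingQueue`s as stand-ins for Go channels. It shows exactly the cost the callback approach avoids: each blocking `take()` parks a full OS thread.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class Main {
    public static void main(String[] args) throws Exception {
        // One queue per direction, so the sender can't steal its own message.
        BlockingQueue<String> requests = new ArrayBlockingQueue<>(1);
        BlockingQueue<String> replies = new ArrayBlockingQueue<>(1);

        Thread ponger = new Thread(() -> {
            try {
                String msg = requests.take();      // blocking receive, parks the thread
                replies.put(msg + " pong");        // blocking send
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        ponger.start();

        requests.put("ping");                      // send
        System.out.println(replies.take());        // ping pong
        ponger.join();
    }
}
```

In Go the same pattern costs a goroutine (a few KB); here it costs a whole thread, which is why a JVM equivalent has to invert control into callbacks once the number of "goroutines" grows.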

I don’t know whether you have seen JDK 8’s CompletableFuture. It also provides a smaller hammer than actors and may be a possible choice for certain kinds of situations in your framework.

Regards, Oliver


Yes, I'm familiar with Go and CompletableFuture.

If you want something like the Go channels and lightweight fibers/threads, you can get that on the JVM using Quasar:


It provides something similar. Under the hood of course the Go channels are just implemented using select(), which is also available via NIO. So you could also implement them that way.

I must admit though that I’ve never felt a need for select() like semantics outside of networking.

CompletableFuture is unfortunate. I used it in my most recent project and I must say that it does not meet the high standards I’ve come to expect from the Java team. Just a few problems CompletableFuture has:

  • Unbelievably large and confusing API. The API is full of javadocs like this one:

    “Returns a new CompletableFuture that is completed when this CompletableFuture completes, with the result of the given function of the exception triggering this CompletableFuture’s completion when it completes exceptionally; otherwise, if this CompletableFuture completes normally, then the returned CompletableFuture also completes normally with the same value. Note: More flexible versions of this functionality are available using methods whenComplete and handle”

  • The source code of the class itself is almost entirely uncommented, doesn't follow the normal Java style guidelines and is basically unreadable as far as I'm concerned.
  • The same type is used for both receivers of the future and creators.
  • Nothing in the JDK uses it.
  • It has unexpected quirks (bugs?). One particularly irritating one that got me a bunch of times is that one of the methods (I forget which) that composes stages together will block the provided executor in order to wait for the earlier stage to complete!
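For what it's worth, the javadoc quoted above describes `exceptionally`, whose actual behaviour is much simpler than the prose suggests: map a failure to a fallback value, and pass a normal result through unchanged.

```java
import java.util.concurrent.CompletableFuture;

public class Main {
    public static void main(String[] args) {
        // Failed future: exceptionally maps the exception to a fallback.
        CompletableFuture<Integer> failed = new CompletableFuture<>();
        failed.completeExceptionally(new RuntimeException("boom"));
        int fallback = failed.exceptionally(t -> -1).join();
        System.out.println(fallback); // -1

        // Successful future: the fallback function is never called.
        int normal = CompletableFuture.completedFuture(42)
                .exceptionally(t -> -1)
                .join();
        System.out.println(normal); // 42
    }
}
```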

Although the Guava ListenableFuture API is pre-lambda and so parts of the syntax are a bit verbose, I find it to be a better and easier to understand design overall.

It’d be nice if the Kotlin standard library had futures which didn’t suck. Possibly some extension functions/properties on ListenableFuture are good enough.


If you want something like the Go channels and lightweight fibers/threads, you can get that on the JVM using Quasar:


Yes, I know Quasar, thanks. Probably the best actor framework is Akka (akka.io), which also has clustering and resilience. But, as you said, there is a lot of code to write to implement even a simple actor. My Wilco framework is really modeled after concurrency in Go. Certain things it can’t do the way Go does, as the JVM does not have green threads and continuations, but several things that work in Go can be solved the same way in Wilco.

CompletableFuture is unfortunate. I used it in my most recent project. I must say, that it does not meet the high standards I've come to expect from the Java team.

I absolutely agree with this. The API is not easy to grasp, at least not at first sight. I had developed something similar on my own: CoLanes. But then I discontinued it when I saw CompletableFuture. Maybe I shouldn't have.

The source code of the class itself is almost entirely uncommented, doesn't follow the normal Java style guidelines and is basically unreadable as far as I'm concerned.

This is a strange thing for several of the classes new since JDK7. Really strange. Sun was so diligent about keeping the javadocs in shape.

It'd be nice if the Kotlin standard library had futures which didn't suck. Possibly some extension functions/properties on ListenableFuture are good enough.

Yep, especially for people doing Android development that would be beneficial (no JDK7+ on Android). By the way, I found a solution to get rid of the synchronized blocks in Wilco. So the thing will continue :-).

Regards, Oliver