Kotlin and the Expression Problem


After toying a bit with Kotlin, it seems it doesn’t have a proper solution to the famous Expression Problem.

The problem

The original statement:

The goal is to define a datatype by cases, where one can add new cases
to the datatype and new functions over the datatype, without recompiling
existing code, and while retaining static type safety (e.g., no casts).

Also see this very didactic explanation.

In OO languages like Java and Kotlin, you can easily add “new cases” via subclassing, but you can’t add new operations (that would require adding methods to existing classes). If the code uses the visitor pattern, you can add new operations (new visitors), but you can’t add new classes, because they are not covered by the visitor interface.
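To make the tension concrete, here is a minimal sketch (the Shape names are mine, purely illustrative): adding a case is one new subclass, while adding an operation forces either editing every existing class or falling back to a dispatch the compiler cannot check.

```kotlin
interface Shape { fun area(): Double }

class Circle(val r: Double) : Shape { override fun area() = Math.PI * r * r }
class Square(val s: Double) : Shape { override fun area() = s * s }

// Adding a new case is easy: no existing code is touched.
class Triangle(val b: Double, val h: Double) : Shape { override fun area() = b * h / 2 }

// Adding a new operation is not: without editing Shape, we are stuck
// with a non-exhaustive dispatch that the compiler cannot verify.
fun perimeter(s: Shape): Double = when (s) {
    is Circle -> 2 * Math.PI * s.r
    is Square -> 4 * s.s
    else -> throw IllegalArgumentException("unknown shape: $s")  // no static safety
}
```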

There exist “solutions” to that problem in Java, but they are deeply unsatisfactory. In particular, new classes or operations must be “linearized”, meaning that new operations/classes added by different people (e.g. in different libraries) do not compose.

It’s not an exotic problem either, and I encounter it shockingly often. For instance, when writing a parser combinator library, I want to give the users the ability to specify custom parsers (subclass the parser interface) but also define new operations over parsers, in order to implement grammar transformations by walking a tree of parsers.


Perhaps the most famous (and, in my opinion, most elegant) solution to the problem is typeclasses. Let’s ignore them for a second, though.

I believe that the solution that would make the most sense in Kotlin would be to add retroactive interface implementation: making existing classes implement new interfaces. The advantage is that this solves the problem (we can now easily add new operations by retroactively implementing interfaces) while adding no new concepts: users already know what it means to implement an interface.

Note that this is close to what Cedric Beust proposed as Extension Types on this very forum.

Alas, I believe this to be impossible to implement on the JVM. I’m not an expert, so feel free to correct me, but I believe that making a compiled class implement a new interface would at least require modifying its class file.

And so, we’re back to typeclasses. Except we don’t need to call them that. Here’s what they could look like:

plugin interface Addable<T, R> {
    fun add(x: T): R
}

implement String: Addable<String, String> {
    fun add(x: String): String = this + x
}

fun test(x: Addable<String, String>) = x.add("bar")

We could implement all of this by desugaring to:

interface Addable<Self, T, R> {
    fun add(self: Self, x: T): R
}

val addableStringStringString = object: Addable<String, String, String> {
    override fun add(self: String, x: String): String = self + x
}
fun <T> test(typeclass: Addable<T, String, String>, x: T) = typeclass.add(x, "bar")

test(addableStringStringString, "foo")

Most of the transformation is straightforward. In the last line, we need to select the correct typeclass for “foo”. The simplest solution is to implement this as a lookup in a Class -> Typeclass map. In cases where the static type of the argument is “final” (as is often the case in Kotlin) we can optimize this lookup away. Ditto when a method using a typeclass calls a method that requires the same typeclass. This is a form of caching, which minimizes the number of lookups required.
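As a rough sketch of that lookup (the registry and its names are purely illustrative; a real compiler would generate and cache this machinery):

```kotlin
interface Addable<Self, T, R> {
    fun add(self: Self, x: T): R
}

// Global registry mapping a receiver class to its typeclass instance.
val addableRegistry = mutableMapOf<Class<*>, Addable<*, *, *>>()

val addableStringStringString = object : Addable<String, String, String> {
    override fun add(self: String, x: String): String = self + x
}

fun <T : Any> test(x: T): Any {
    // Dynamic lookup by runtime class; when the static type is final,
    // the compiler could resolve this at compile time instead.
    @Suppress("UNCHECKED_CAST")
    val tc = addableRegistry[x.javaClass] as? Addable<T, String, Any>
        ?: error("no Addable instance for ${x.javaClass}")
    return tc.add(x, "bar")
}

fun main() {
    addableRegistry[String::class.java] = addableStringStringString
    println(test("foo"))  // prints foobar
}
```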

The trick from Haskell is that typeclasses are really an additional implicit argument to a function. By using this trick, we can preserve the illusion that a type implements a new interface.

There are a few downsides: first, the underlying plumbing would have to be exposed to Java clients; there is no way around it. Second, a few additional precautions must be taken to maintain the illusion that String implements Addable. For instance, we need to transform expressions like "foo" is Addable<String, String> to perform a lookup in our Class -> Typeclass map. Ditto for some typecasts.


I hope to have shown that Kotlin really needs a solution to the expression problem, and that such a solution isn’t too difficult to implement, in addition to being user-friendly. The downside is having to expose implementation details to Java clients, even though these details aren’t terribly complex.

"Extension types" for Kotlin

You may be interested in this paper: https://www.cs.utexas.edu/~wcook/Drafts/2012/ecoop2012.pdf


I did read it a few months back.

In my humble opinion, object algebras solve the expression problem only under a very generous interpretation of it.

The problem is that object algebras do not introduce new operations over data types, but rather over a tree of nested factory method calls, i.e. code!

I ported the example from the paper (in which it is scattered all over) to Kotlin: ObjectAlgebra.kt

First, it’s pleasantly more terse than in Java. But still, what a mouthful! To contrast, here is the version using my proposal: PluginInterface.kt

So about the object algebra version, the problem is that the data structure is embedded in

fun <A> exp(v: IntAlg<A>) = v.add(v.lit(42), v.lit(0x52))
val exp0: Exp = exp(IntFactory())

There is no distinction between the construction code and the data structure. In fact, the data structure is not first class at all! You cannot pass it around. Passing exp0 around doesn’t count: you can’t invoke the print() methods using exp0.
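To illustrate, here is a self-contained sketch along the lines of the paper’s example (the IntPrinter factory and the bodies below are my own reconstruction, not the paper’s exact code): printing requires rebuilding the whole expression with a different factory, because exp0 itself only knows how to eval.

```kotlin
// Minimal object-algebra sketch in the style of the ECOOP'12 paper.
interface IntAlg<A> {
    fun lit(n: Int): A
    fun add(l: A, r: A): A
}

interface Exp { fun eval(): Int }

class IntFactory : IntAlg<Exp> {
    override fun lit(n: Int): Exp = object : Exp { override fun eval() = n }
    override fun add(l: Exp, r: Exp): Exp =
        object : Exp { override fun eval() = l.eval() + r.eval() }
}

// A new operation needs a *separate* factory...
class IntPrinter : IntAlg<String> {
    override fun lit(n: Int): String = n.toString()
    override fun add(l: String, r: String): String = "($l + $r)"
}

fun <A> exp(v: IntAlg<A>) = v.add(v.lit(42), v.lit(0x52))

fun main() {
    val exp0: Exp = exp(IntFactory())
    println(exp0.eval())       // prints 124
    // ...and the expression must be rebuilt to print it; exp0 is of no use here.
    println(exp(IntPrinter())) // prints (42 + 82)
}
```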

This made object algebras incredibly awkward to use when I tried them out.

To create your data structure, you have two choices: either hardcode it (like in the example or in the paper), or “seed” your code with yet another “descriptor” data structure, as if there were not enough classes lying around already.

More than class proliferation though, you run into issues with the type system fairly quickly. I won’t enter into the details, but just to illustrate using the example: if you have a descriptor for an IntBoolAlg constructor, and you want to reuse a descriptor for IntAlg, things get tricky.

I’m sure object algebras have nice use cases (e.g. as a DSL, when things are always hardcoded), but this is not the solution we’re looking for.


Fair enough. We are not ready to discuss such massive features ATM, but something like type classes is on the table.


You can make a proxy object that implements the same interface as the original object, plus the new interface. I have a bit of code locally that does exactly this, you use it like so:

interface Bar { fun duckTyped() }
val x: Bar = foo.dynamicCast<Bar>()

“foo” does not have to implement Bar, but the code works. Behind the scenes it uses JDK proxies and either reflection or method handles (for speed) … can’t recall if I got the MethodHandles version working though.
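For reference, a minimal reflection-based version of such a dynamicCast could look like this (my own sketch under the assumptions above, not the poster’s actual code, which is not shown):

```kotlin
import java.lang.reflect.Proxy

// Sketch only: forwards every interface call, via reflection, to a
// structurally matching public method on the wrapped object.
inline fun <reified T : Any> Any.dynamicCast(): T {
    val target = this
    return Proxy.newProxyInstance(
        T::class.java.classLoader,
        arrayOf(T::class.java)
    ) { _, method, args ->
        target.javaClass
            .getMethod(method.name, *method.parameterTypes)
            .invoke(target, *(args ?: emptyArray()))
    } as T
}

interface Bar { fun duckTyped(): String }
class Foo { fun duckTyped() = "quack" }  // does not implement Bar

fun main() {
    val x: Bar = Foo().dynamicCast<Bar>()
    println(x.duckTyped())  // prints quack
}
```

A MethodHandles-based variant would cache the resolved handles instead of calling getMethod on every invocation.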

Making it work isn’t the hard part. Using the JVM effectively so you don’t lose any performance is the hard part. However, I believe it is perfectly possible. I just don’t care about type classes/extension interfaces enough to bother working on it more: I don’t actually encounter the need to retrofit interfaces onto existing types very often.


Cool! Definitely something to look into. I might even have a few use cases for the trick in my current code.

Do you have some code for this I could look at?


This approach has a very important limitation of breaking expectations about identity and equality. For example, such proxy objects cannot be safely used as keys in a HashMap.


If you forward equals and hashCode as well, I’d think the only identity leak would be the monitor, right?

Re: code. I have a bit but looking again, it’s in a half edited state, I was trying to make it work with method handles instead of core reflection as that should be optimised a lot better (in theory, HotSpot would be able to remove almost all the overhead). But like I said, after a while I decided that I didn’t really care about this feature and stopped working on it.


It depends on how the original equals/hashCode are implemented; a very common way of doing it is comparing getClass(), which will fail (and other common implementations will fail too, each under some circumstances).
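A small demonstration of that failure mode (the Point class and forwarding proxy are illustrative): even when equals is forwarded, the proxy’s runtime class differs, so equals symmetry breaks, which is exactly what HashMap relies on.

```kotlin
import java.lang.reflect.Proxy

interface P { fun x(): Int }

class Point(val x: Int) : P {
    override fun x() = x
    // A typical equals(): compares runtime classes.
    override fun equals(other: Any?) =
        other is Point && other.javaClass == javaClass && other.x == x
    override fun hashCode() = x
}

fun main() {
    val p = Point(1)
    // Forwarding proxy: delegates every call, including equals/hashCode, to p.
    val proxy = Proxy.newProxyInstance(
        P::class.java.classLoader, arrayOf(P::class.java)
    ) { _, m, a ->
        p.javaClass.getMethod(m.name, *m.parameterTypes).invoke(p, *(a ?: emptyArray()))
    } as P

    println(proxy == p)  // true: forwarded to p.equals(p)
    println(p == proxy)  // false: proxy is not a Point, so symmetry is broken
}
```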


(I’m a bit late to this discussion, but the other discussion on almost the same topic is still active, and my comment fits better in this thread)

An important use case for a type class / extension type-like feature would be passing an object of a type, e.g. String, to a method that expects an interface type that String does not implement, e.g. Addable. I think it would be quite useless to require the interface to know about some Kotlin magic such as an extra hidden self parameter, since that would break Java interop. Then this feature would not help at all if the expected interface was a pre-existing plain old Java interface used in a Java library. While I would like to have some nice syntax in Kotlin to use type classes, I don’t think there is a way around having an implementation based on wrappers. AFAIK wrapping the String in a wrapper that implements Addable is the only Java-compatible way to implement this. And if wrappers are used for interop with Java it is probably best to use the same implementation in pure Kotlin code as well.

Regarding the problems with wrapping and equality, such a wrapper would only be passed to code that expects an Addable (in this example). If that code is written in Java it would be unlikely to try to compare the wrapper to a string and expect both objects to test equal because as far as this code knows String does not implement Addable. If the code is Kotlin the compiler could insert some extra magic to remove the wrapper. And besides, using wrappers is also the usual Java way to pass an object Foo to a method that expects a Bar. While it isn’t a leak-proof abstraction it often works well enough.
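Sketched with the Addable example from earlier in the thread (the wrapper class and its name are mine, purely illustrative of the approach the post describes):

```kotlin
interface Addable<T, R> {
    fun add(x: T): R
}

// The wrapper the compiler (or the user) would introduce whenever a
// String is passed where an Addable is expected.
class StringAddable(private val self: String) : Addable<String, String> {
    override fun add(x: String): String = self + x
}

fun test(x: Addable<String, String>) = x.add("bar")

fun main() {
    println(test(StringAddable("foo")))  // prints foobar
}
```

Because test only sees the Addable interface, it works unchanged whether it is called from Java or Kotlin; only the call sites need wrapping.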


@JanKanis the description you gave makes it sound like Java-Interop is going to become the same anchor around Kotlin’s neck that backwards compatibility has become around Java’s neck.

Further, I wonder what the oft-neglected java union type in generics can do here?

<T extends String & Addable<String>> void doThings(T aString) {
    // ...
}


This breaks an assumption static code analysis has about sealed classes (but that’s IDEA’s problem :p), but it does compile.

I presume erasure makes runtime interop relatively easy (maybe?)


Every production programming language has ‘backward compatibility’ as an anchor around its neck. Making backward-incompatible changes in production languages is currently an unsolved problem. The difference with Java is just that its anchor is a bit older and more convoluted than that of others. IMO one of the more difficult aspects of such bridges is the difference in type systems. See the amount of work and iterations the Kotlin team has done to get nullability (the main difference between the Java and Kotlin type systems) working smoothly across the language boundary. Kotlin specifically chooses to be Java compatible so that (at least) existing Java libraries feel like idiomatic Kotlin, and this compatibility requirement has shaped the entire design of the language. See Scala or Ceylon for languages that don’t care about that level of compatibility.

I don’t know where exactly Andrey Breslav wants to place the balance between Java interop and nice features, but I think interop will be important.

I don’t think unions in Java generics will help much: such methods still expect a single object that implements/extends the specified interfaces/classes, while this discussion is about adding an interface to a type that doesn’t yet have it.


I believe this will solve the expression problem in Kotlin https://github.com/Kotlin/KEEP/pull/87