Inline classes and stronger optimization

While trying out Kotlin I have been quite pleased with the overall design and power of the language. In the process I came across an interesting problem where the language “works” but does not produce the results I want.

Case: encapsulate setInt, setLong, etc. of PreparedStatement so that the following code does the right thing:

    statement.params { +"param1" + "param2" + "param3" }

But also: generate code equivalent to statement.setString(1, "param1"); statement.setString(2, "param2"); etc. In addition, as the types involved are primitives, the scope of the overloads must be restricted to the inside of the lambda.
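
That is, for the all-string example above the generated code should simply be:

    statement.setString(1, "param1")
    statement.setString(2, "param2")
    statement.setString(3, "param3")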

I have managed to make things work, but not the way I want. There are two constraints: classes are needed to introduce a name scope, and they then need to hold a reference to the prepared statement, as it is not accessible otherwise (the index could be passed around as the operator “result”). The problem is that for the library to provide this notational elegance for “free”, it is necessary both for the “scope classes” to be elided (optimised away) and for inline functions to optimise away variables and branches wherever it is statically possible to do so.

While the JVM will likely be able to do a lot of this, the feature (being aimed mainly at libraries) would save a significant amount of bytecode and avoid reliance on JVM optimisation where it can be avoided. For classes, an inline keyword (with a single-field requirement) would probably make sense; stronger optimisation guarantees for inline functions would not change much of the language and none of the grammar.

What do people think about this?

Not clear what you are asking for here. Kotlin gives you a great deal of control over optimization with inline functions and extension functions. It is probably best to discuss a specific code sample rather than such general statements.

@dalewking The idea is to have the ability to write library code like:

    inline class Index_(val pos:Int) {
      // advance to the next parameter index; deliberately not an operator plus(Int),
      // since a member plus would always win over the Index_.plus(Int) overload below
      inline fun next():Index_ = Index_(pos + 1)
    }

    inline class ParamHelper_(val statement: PreparedStatement) {
        inline operator fun Int.unaryPlus():Index_ { statement.setInt(1, this); return Index_(2) }
        inline operator fun Long.unaryPlus():Index_ { statement.setLong(1, this); return Index_(2) }
        inline operator fun String.unaryPlus():Index_ { statement.setString(1, this); return Index_(2) }
        inline operator fun Boolean.unaryPlus():Index_ { statement.setBoolean(1, this); return Index_(2) }
        inline operator fun Byte.unaryPlus():Index_ { statement.setByte(1, this); return Index_(2) }
        inline operator fun Short.unaryPlus():Index_ { statement.setShort(1, this); return Index_(2) }

        inline operator fun Index_.plus(value:Int):Index_ { statement.setInt(this.pos, value); return this.next() }
        inline operator fun Index_.plus(value:Long):Index_ { statement.setLong(this.pos, value); return this.next() }
        inline operator fun Index_.plus(value:String):Index_ { statement.setString(this.pos, value); return this.next() }
        inline operator fun Index_.plus(value:Boolean):Index_ { statement.setBoolean(this.pos, value); return this.next() }
        inline operator fun Index_.plus(value:Byte):Index_ { statement.setByte(this.pos, value); return this.next() }
        inline operator fun Index_.plus(value:Short):Index_ { statement.setShort(this.pos, value); return this.next() }
    }

    inline fun <R> PreparedStatement.params(block:ParamHelper_.() -> R) = ParamHelper_(this).block()

This library code would then be used the following way:

  val s : PreparedStatement = /* some java.sql.PreparedStatement out of the context */
  s.params { +"Param 1" + IntParam2 + booleanParam3 }

That would be compiled to the same bytecode as:

  val s : PreparedStatement = /* some java.sql.PreparedStatement out of the context */
  s.setString(1, "Param 1")
  s.setInt(2, IntParam2)
  s.setBoolean(3, booleanParam3)

The way it would work is that the inline class would basically introduce a symbol/name scope, but would otherwise be backed by its fields. These fields would actually live as local variables on the caller’s stack (preferably not duplicated if they already were locals, but introduced as new variables if they were not: field references, expressions, etc.).

So, for example, the Index_ class would not exist at the bytecode level and would just be a regular int (a primitive, preferably), but the compiler would only allow the specified operations (in this case the plus overloads) to be invoked and no others.

The optimizer should also be smart enough to recognise values that are known at compile time and propagate them instead of emitting fields/operations.
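
To make that concrete, here is a hypothetical intermediate form of the earlier usage example after inlining but before constant propagation (the function and variable names are made up for illustration):

    // hypothetical result of inlining s.params { +"Param 1" + IntParam2 + booleanParam3 };
    // the ParamHelper_/Index_ fields have become plain locals
    fun expandedParams(s: java.sql.PreparedStatement, intParam2: Int, booleanParam3: Boolean) {
        val stmt = s            // ParamHelper_.statement, hoisted to a local
        var idx: Int            // Index_.pos, reduced to a plain int local
        stmt.setString(1, "Param 1"); idx = 2
        stmt.setInt(idx, intParam2); idx += 1
        stmt.setBoolean(idx, booleanParam3)
        // constant propagation would then substitute 2 and 3 for idx and drop the variable
    }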

Note that the code would work without the inline class, but it would be quite inefficient, with too many temporaries. Those temporaries would also very likely not be elided by the JVM. While it may be possible to achieve the desired effect through lambda tricks with higher-order functions, that would still not be as elegant (the operators would have to be introduced as parameters on the lambda) and would make name scoping quite problematic.

I am, however, more concerned with the elegance of using this functionality than with the elegance of the library code that provides it (clear, concise, elegant and simple are of course valuable there too).

Btw. this (small-ish) change would also give “free” multiple return values from inline functions.
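
For example (a sketch only, since inline classes do not exist today): with a data class this allocates a result object, but under the proposal the object would decompose into two int locals at the call site of the inline function:

    // MinMax would be declared as an inline class under the proposal and never exist at runtime
    data class MinMax(val min: Int, val max: Int)

    inline fun minMaxOf(a: Int, b: Int): MinMax =
        if (a <= b) MinMax(a, b) else MinMax(b, a)

    fun demo() {
        val (lo, hi) = minMaxOf(3, 1)   // effectively two return values from one inline call
        println("$lo..$hi")
    }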

I think I understand what you want here, but I don’t understand why you are so keen to avoid reliance on the JVM’s optimising compilers, which are far more powerful than anything that the Kotlin frontend will have any time soon. Indeed it’s not even clear that the Kotlin compiler should be doing optimisations at all, except in cases where it affects the language design and where it’s known that JVMs struggle to do things well, like for the support for inlining through lambdas.

In the case where you are instantiating temporary classes that live only briefly, this is a case the JVM puts a lot of effort into optimising, both through fast generational GCs and the scalar replacement optimisation. You say the JVM most likely wouldn’t optimise this, but have you checked the generated assembly to see? My bet is that it would, if the executed code was hot enough to reach the threshold for C2.


It appears that simple cases may very well be inlined by (some) JVMs (and/or ART). There is however a caveat: in this style you end up with lots of little objects and functions chaining into each other (every parameter creates a new boxed integer with a custom type). This leads to two issues: lots of short-lived temporary objects, and deep method chains interspersed with those objects that go beyond the JVM’s default inlining limits. Having this done at the compiler level may save some bytecode, but above all it gives the designer the comfort of knowing that the abstraction has no run-time cost, and it allows the compiler to enforce the restrictions needed to avoid those costs.

Most of the compiler code would likely be shared with that for inline functions. It would be fairly simple to replace the object itself with an inline function that takes the fields as parameters, and to replace the calls to methods on the object with the equivalent static method calls on the class. The resulting equivalent, but not pretty, code would look like this (not done for Index_):

  inline fun <R> ParamHelper_AsLambda(statement: PreparedStatement, block: (PreparedStatement)->R):R {
    return block(statement)
  }
  
  object ParamHelper_ {
    @JvmStatic inline fun unaryPlus(statement:PreparedStatement, v1:Int):Index_ { statement.setInt(1, v1); return Index_(2) }

    @JvmStatic inline fun plus(statement:PreparedStatement, v1:Index_, value:Int):Index_ { statement.setInt(v1.pos, value); return v1.next() }
  /* repetitive other methods left out. */
  }

  inline fun <R> PreparedStatement.params(block:(PreparedStatement) -> R) = ParamHelper_AsLambda(this, block)

  /* usage: s.params { +"Param 1" + IntParam2 + booleanParam3 } expands to */
  ParamHelper_AsLambda(s) { statement ->
    ParamHelper_.plus(statement,
      ParamHelper_.plus(statement,
        ParamHelper_.unaryPlus(statement, "Param 1"),
        IntParam2),
      booleanParam3)
    // This nested tree thing is not new, this is just what operator chaining translates to.
  }

This transformation can be done quite early in the compilation process, while still working on a source-mapped AST. Every separate use of the variable would, however, have to be fed through the ParamHelper_AsLambda operation with the actual functions as an inline block (something that should already optimize down to what you expect).

Perhaps what I want instead is a way to define a custom function scope for use inside a lambda: names that are only available inside that lambda. Classes do this with extension lambdas, but sometimes I don’t actually need the class for anything other than as a thing to hang the methods on.

As a short point, optimization in the runtime doesn’t address code size or method count (inline classes would allow inline functions across class boundaries), something quite important on Android. Late optimization as done in the JVM does allow late binding/linking, but inline classes would have to be final anyway and their methods would end up as statics (or inlines), so the late-binding argument is moot.

What you are asking for (inlining a class with one field) looks like type aliasing, which comes very soon (v1.1 probably)

I’m not sure what type aliasing will do precisely, but if functions could be specified only on an alias (but not on the original) that would solve most of the issue. The case with multiple members would not be solved, but I’m not sure that that would occur often.

How do you propose to pass to a function an instance of an inline class with multiple members? Also I am not sure how a collection of such instances would work. Value types might be a solution, but they are only coming to the JVM.

The way I see them is that they don’t exist at runtime. They would certainly be final, and their members would have the same visibility as the “class”. Functions that take the inline class as a parameter would instead have that parameter substituted by its members at compile time (or this could be forbidden for non-inline functions). Use as a return value (in case the source is not inline) would work like multiple return values, if the non-inline case were allowed at all.
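
As a sketch of the intended lowering (the Span class and function names below are made up): a function taking an inline class with two members would compile as if the members were passed separately, so no instance ever exists:

    // Source, under the proposal:
    //   inline class Span(val start: Int, val length: Int)
    //   fun printSpan(s: Span) = println("${s.start}..${s.start + s.length}")
    //
    // Intended compiled-down equivalent:
    fun printSpan(start: Int, length: Int) = println("$start..${start + length}")

    fun demo() = printSpan(3, 4)   // was printSpan(Span(3, 4)) in the source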

Obviously value classes would serve the function of inline classes as well. The purpose of this idea is to help build more powerful abstractions in libraries without incurring unnecessary runtime or code-size costs. Decoupling language visibility from JVM visibility could help with this as well.

That sounds very interesting, but I see some limitations:

  1. such classes don’t really work with usual functions (returning “boxed” values, Java interop), so let’s assume we can only use them with inline functions
  2. subclassing/implementing interfaces does not work, because a (useful) interface assumes a specific method signature, which can’t be inlined
  3. usual collections don’t work, since they hold references to values, nothing more
  4. all this creates a subsystem of the language which is unable to interact with anything around it (oops…)

Given all the limitations above, I am not sure what the usage of such a feature would be.
A common wish would be to have ComplexNumber, for example, and not to pay for object allocations. But if I can’t store them, or have to have all my (very non-trivial) algorithms inlined, there is not much use for me in them.
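
To illustrate with plain Kotlin (the ComplexNumber here is just a sketch): even if the compiler could elide such a value inside inlined code, putting it into a collection forces a real object per element anyway:

    class ComplexNumber(val re: Double, val im: Double)

    operator fun ComplexNumber.plus(o: ComplexNumber) = ComplexNumber(re + o.re, im + o.im)

    fun demo() {
        val zs = listOf(ComplexNumber(1.0, 2.0), ComplexNumber(3.0, 4.0))
        val sum = zs.reduce { a, b -> a + b }   // every element is an allocation regardless
        println("${sum.re} + ${sum.im}i")
    }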

What usage do you have in mind?

BTW, you can look at Apple’s Swift, because they have such a semantic (struct). All the questions above are solved there, and people are actually happy with that. I personally find structs and everything around them (arrays!!) very hard to use properly, but that’s just me, I guess.

The main usage I have in mind is as a scope to define functions in. I’m not sure whether inline classes should be usable in a non-inline way. The semantics would be those of regular classes, but inlining can only work with concrete types (not allowing inline open classes seems reasonable). The fallback to non-inline may be something you want to prevent from a language perspective, not because it is hard, but because inline may be something you want to be certain of.

Btw, functions using the type don’t have to be inlined themselves, but would behave like an inline wrapper that calls the unpacked function. Obviously the value-type problem cannot be completely solved this way, and any help in that direction is more a side benefit than the primary goal. Of course if one feature serves multiple purposes, that is great. In some ways the idea is actually close to C++ templates.

There is no reason why the compiler could not do this without an inline keyword, but inline would make sure that it can (or will).

You mean function scope like C++ std::cout? For that Kotlin has objects like Delegates: Delegates.notNull(). One can define extensions on an object, just like you described.
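
For example (a minimal sketch, with made-up names), extensions can be scoped to an object used as a receiver:

    object SqlScope

    fun SqlScope.greet(name: String) = println("hello, $name")

    inline fun <R> sqlScope(block: SqlScope.() -> R): R = SqlScope.block()

    fun demo() = sqlScope {
        greet("world")   // resolves only where SqlScope is an implicit receiver
    }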

Why objects don’t work in your case? Could you describe your usage a little bit more?

@voddan Basically the issue is the scope of the extension functions and/or overloads, as well as a lack of encapsulation. The main case motivating it is the attempt to encapsulate the JDBC API. Part of that encapsulation is making poor practice hard, which means you don’t just add “good practice” functions onto the type (say a PreparedStatement), but actually hide away the operations that are unsafe or “poor practice” (perhaps some invariant or variable needs to be maintained). The encapsulation at the source level is a core feature.

Beyond the necessity of being able to encapsulate the “delegate” field, some functions should not be allowed to escape their scopes, as their usage can be confusing in the general case (this is especially true for operators and primitive wrappers). In the example given earlier, using a raw Int instead of a wrapper would not even work, as the operator overloading requires the type not to be an Int (instead of calling setInt(this, intVal) it would just do regular addition). Those overloads should certainly not be allowed to escape into the wild.
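
A minimal sketch of that pitfall (names made up): because members always win over extensions, an Int-receiver overload defined in a scope class is simply never chosen for Int + Int:

    class RawIntScope {
        // intended to mean "bind the next parameter", but shadowed by Int's own plus member
        operator fun Int.plus(value: Int): Int {
            println("setInt($this, $value)")   // never reached for Int + Int
            return this + 1
        }
    }

    fun demo() = with(RawIntScope()) {
        val index = 1
        println(index + 42)   // prints 43: plain arithmetic, not a parameter bind
    }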

As a final point, regarding constant propagation: inline classes (translated into lambdas, with some wriggling about with invocation) would allow it to happen through the class as well as through inline functions. While inline functions technically make the feature unneeded, using lambdas this way (with all functions passed as function parameters to the lambda) would be very verbose (the caller would need to name them), cumbersome and counter-intuitive. It would also push this heavily into lambda-calculus territory (nice for compiler writers and language designers, not great for library writers, and not nice for users).

I have another use case for this. Look at the following library code (valid in the current language):


interface __Override__<T> {
  infix fun by(alternative:T):T
}

class __Override__T<T>():__Override__<T> {
  inline final override infix fun by(alternative: T):T {
    return alternative
  }
}

class __Override__F<T>(val value:T):__Override__<T> {
  inline final override infix fun by(alternative: T):T {
    return value
  }
}

inline fun <T> T?.overrideIf(crossinline condition: T.() -> Boolean): __Override__<T> {
  return if (this==null || condition()) __Override__T() else __Override__F(this)
}

This can be used with:

  fun f(someString:String?) {
    val someOtherValue = someString.overrideIf({length<5}) by "Some longer alternative"
    val someOtherValue2 = someString.overrideIf{ length % 2 == 0 } by "Some odd string"
    /* do something else */
  }

Being able to omit the parentheses instead of the curly braces would be nice, but making the code compile without any intermediate classes would be great. An alternative implementation that stores the condition would also be acceptable instead of the polymorphic solution used.
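
For completeness, a sketch of that alternative (one class that stores the value and the condition, evaluating it lazily in by; names are illustrative):

    class OverrideIf<T>(private val value: T?, private val condition: (T) -> Boolean) {
        infix fun by(alternative: T): T {
            val v = value
            return if (v == null || condition(v)) alternative else v
        }
    }

    fun <T> T?.overrideIf(condition: T.() -> Boolean): OverrideIf<T> =
        OverrideIf(this) { it.condition() }

    fun demo(someString: String?) {
        val result = someString.overrideIf { length < 5 } by "Some longer alternative"
        println(result)
    }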