The current version of the compiler supports UMD/CommonJS modules, which is fine for integration and makes things work, but is not enough for producing size-optimized JS.
My current tests with ~20 kLOC of Kotlin source produce an ~800 KB JS file plus a 1200 KB kotlin.js file. After uglification it is still a ~1 MB JS file, which is about 4 times worse than the analogous GWT version and 3 times worse than TypeScript.
Although this is not a fair test due to my environment setup (the code is almost the same but not equivalent, GWT may have cut too much “dead code”, the Kotlin output does not actually run anything, etc.), I think the ratios are pretty close. My guess is that a 2x size reduction is possible with minor tweaks to code generation plus support for a native ES6 module kind to make treeshaking possible.
What you actually need is dead code elimination (or treeshaking, which is another clumsy term for DCE), not ES6 module support. ES6 modules + rollup is just one of many ways to get DCE. Other options I can see are: implementing DCE in the Kotlin compiler, compiling to multiple micromodules, or tweaking the code for the Google Closure Compiler.
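For reference, the rollup route would look roughly like this — a minimal sketch that assumes the compiler emitted ES6 modules; the file paths are hypothetical:

```js
// rollup.config.js — minimal sketch; paths are hypothetical.
export default {
  input: 'build/js/app.js',      // ES6-module output of the Kotlin compiler
  output: {
    file: 'dist/app.bundle.js',
    format: 'iife'               // single self-contained browser bundle
  }
  // Rollup only pulls in declarations reachable from the entry point's imports;
  // that reachability pruning is the "treeshaking" being discussed here.
};
```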
DCE in the Kotlin compiler is a big task which will duplicate existing tools, and it will not cut external dependencies or the kotlin.js stdlib itself unless the Kotlin team wants to implement DCE for JS itself.
My observations of the GWT mailing lists make me think that supporting a GCC target is a really painful task. Also, it would be rather strange to tweak compiler output for a specific tool instead of an official standard like ES6.
Emitting

```js
export function f() {}
```

instead of

```js
function f() {}
module.exports.f = f
```

looks like a much simpler thing to do than implementing full DCE.
The current version of the Kotlin JS compiler already does all imports by reassigning all uses of Kotlin.kotlin.* to local vars.
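Roughly this shape (simplified and illustrative; the mangled member names below are made up), so turning each of these local vars into an ES6 import looks fairly mechanical:

```js
// Simplified, illustrative shape of today's output: every "import" is already
// a local var pointing into the Kotlin.kotlin.* namespace object.
// (The mangled member names are made up for illustration.)
(function (Kotlin) {
  'use strict';
  var println = Kotlin.kotlin.io.println_s8jyv4$;
  var listOf = Kotlin.kotlin.collections.listOf_i5x0yv$;

  function main() {
    println(listOf(['hello']));
  }

  main();
}(kotlin));
```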
The only real problem I see is that the emitted JS will not be executable in older browsers and has to be processed down to ES3/5.
I think Chrome, FF >= 54 and Node support ES6 modules natively, so it should not be a problem for the debugger during development or the like.
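For example, a down-level pass (Babel, TypeScript, Closure, whatever) would turn the `export function f() {}` form into something roughly like the CommonJS output the compiler already emits today; the exact output depends on the tool and its options:

```js
// Rough ES5/CommonJS result of down-leveling `export function f() {}`.
// Exact output varies by tool and settings; this is just the general shape.
'use strict';
Object.defineProperty(exports, '__esModule', { value: true });
exports.f = f;
function f() {}
```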
> DCE in the Kotlin compiler is a big task
Yes, that’s right. But my observations tell me that DCE which uses type information works much better than DCE for dynamically-typed languages like JS. Also, we can start with a pretty simple implementation and then improve it.
> will not cut external dependencies or the kotlin.js stdlib itself
It won’t, but you could use a built-in Kotlin DCE to reduce the size of Kotlin-generated code and webpack 2/rollup for external dependencies. As for kotlin.js, we can’t compile against any Kotlin library if you provide only raw JS; the compiler requires metadata files. We can simply include all the metadata necessary for DCE.
> unless the Kotlin team wants to implement DCE for JS itself
It’s possible. The Kotlin compiler has a lot of code for optimizing/analysing JS, so we don’t need to write everything from scratch. This DCE could rely on our knowledge of the structure of the code produced by the Kotlin compiler, so it may be more efficient for Kotlin than general-purpose DCE tools.
As for me, targeting ES6 is somewhat strange if we implement only a small part of it. It’s definitely good to have this target, but supporting it completely is a long-term task. So I would implement DCE via ES6 modules only if there were no other viable short-term ways.
Which in turn will require using ES6 imports/exports to work with good efficiency.
This sounds like feature creep. Does TeaVM do anything like that? Specifically uglification of compilation results?
> Which in turn will require using ES6 imports/exports to work with good efficiency.
It would only require ES6 imports/exports for external declarations, wouldn’t it? Remember, the hypothetical Kotlin DCE would produce already-minified JS.
> This sounds like feature creep.
What do you mean?
> Does TeaVM do anything like that? Specifically uglification of compilation results?
TeaVM is not related to Kotlin. It does all the minification itself; no external tools are necessary. It relies heavily on type information extracted from bytecode, and its DCE is based on sophisticated dataflow analysis over bytecode, which gives far better results than the naive graph traversal in rollup. “Uglification” in TeaVM terms just means using an “uglified” implementation of the classes responsible for rendering the JS AST to text; no additional pass is performed.
You look at the problem from the standpoint of the Kotlin compiler/toolset; I’m talking about a full bundling/integration solution. There is no point in reimplementing all the existing JS infrastructure and frameworks in Kotlin. Since Kotlin is a tiny fraction of the JS world, any restriction or even bumps like limited DCE in integration will hurt Kotlin adoption hard in this kind of use case. GWT got away with its environment lock and multiple cases of NIH because of its release timeframe: JS was not a hot topic, no popular tools existed, etc. Correct me if I’m wrong, but the initial GWT release was even before jQuery. From this point of view I see no reason to spend effort implementing DCE inside Kotlin itself, since JS tools already exist for such tasks, and on the JVM/Android side ProGuard looks like the standard.
IMHO this is not about specific features of the compiler or stdlib. It is more of a design choice between providing a full platform versus platform-specific tools that make Kotlin code cross-platform. It looks like you favor the full platform and see solutions in optimizing for that case.
I mean that a generic optimizer for JS, unrelated to Kotlin, does not sound like a Kotlin feature.
> GWT got away with its environment lock and multiple cases of NIH because of its release timeframe: JS was not a hot topic, no popular tools existed, etc.
Additionally, GWT uses static type information to apply aggressive code optimizations. Usually, GWT produces code that is faster than the code you could ever write manually.
> From this point of view I see no reason to spend effort implementing DCE inside Kotlin itself, since JS tools already exist for such tasks
Yes, they exist, but they were initially designed for the JS ecosystem with its dynamic nature. Kotlin is a statically-typed OO language, and specialized tools that understand the Kotlin type system have an advantage over general-purpose JS tools.
> There is no point in reimplementing all the existing JS infrastructure and frameworks in Kotlin
Nobody is going to. The DCE I am talking about can be implemented as a small preprocessor which can be plugged, say, into webpack, and the only work it would do is strip out unused Kotlin functions and classes.
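Something along these lines — a minimal webpack 2 sketch, where kotlin-dce-loader is a purely hypothetical name for such a preprocessor, not an existing package:

```js
// webpack.config.js — sketch only; 'kotlin-dce-loader' is hypothetical.
module.exports = {
  entry: './build/js/app.js',
  output: { filename: 'bundle.js', path: __dirname + '/dist' },
  module: {
    rules: [
      {
        test: /\.js$/,
        include: /build\/js/,        // apply only to Kotlin-generated output
        use: ['kotlin-dce-loader']   // hypothetical: strips unused Kotlin declarations
      }
    ]
  }
};
```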
Also, you are talking as if we were really going to implement such a DCE tool. I only mentioned that we could decide to create our own DCE. To repeat, there are several options I personally can see:
- our own DCE tool;
- generation of multiple small ES5 modules;
- generation of ES6 modules;
- relying on Google Closure Compiler.
It does not mean that we are going to choose just one; most likely we will end up implementing all of them, or at least most of them. The only problem is priorities. Currently we are focused on other tasks, and we don’t even have time to discuss and prioritize these options. And when we do have time, remember, we will have to analyze many more use cases than yours. From your standpoint generating ES6 modules is the easiest option; however, I could name hundreds of reasons (but won’t do it now) why it is a long-term task.
> Since Kotlin is a tiny fraction of the JS world, any restriction
Unfortunately, Kotlin has some restrictions, like:
- it mangles names, since its type system supports ad-hoc polymorphism (i.e. function overloading), which is unsupported in JS (see the sketch after this list);
- it requires you to write typed headers for JS APIs (or fall back to dynamic, losing all the benefits of Kotlin);
- etc., etc.
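A small Kotlin sketch of the first two points (the mangled suffixes in the comments are illustrative, not the exact scheme):

```kotlin
// Ad-hoc polymorphism: JS has no overloading, so the compiler has to mangle
// at least one of the generated names (the suffixes below are illustrative).
fun render(value: String) { println(value) }           // -> e.g. render_61zpoe$
fun render(value: Int) { println(value.toString()) }   // -> e.g. render_za3lpa$

// Typed header for an existing JS API: without such a declaration you have to
// fall back to `dynamic` and lose static checking.
external fun alert(message: String)
```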
What we actually can do is reduce the pain from these problems, not eliminate them completely. If we decide to write our own DCE, of course we will integrate it with the existing ecosystem as tightly as possible.