Kotlin native vs Rust

Even better, it seems like they include JVM startup time :smiley: :smiley: :smiley: https://github.com/hanabi1224/Programming-Language-Benchmarks/blob/main/bench/algorithm/mandelbrot/4-m.java I wonder why they don’t start the measurements when pushing the power button on the computer.

3 Likes

Recently I had several minutes of free time and checked the CLBG (Computer Language Benchmarks Game) code for different languages. I took the sample that is slowest in Java, regex-redux, and checked the implementation in Java and other languages. Guess what I found?

They are not measuring the performance of the compiler; they are measuring the performance of the regex library. In the case of Java they use the Java stdlib, whereas most other languages use PCRE2. The Java version doesn’t even have the same feature set.

3 Likes

It’s common. Very few in the industry actually do realistic or useful benchmarks, and those who do never take averages… it’s never a single, meaningless number.


here is an example of good benchmark:

All times in nanoseconds:
Limit order placement
50%: 1160
90%: 1180
97%: 1220
99%: 1420
99.7%: 1900
99.9%: 2220
99.97%: 3440
99.99%: 4580

This is called a distribution of whatever you’re measuring. When you see 99%: 1420, it means the worst number the benchmark observed in 100 attempts was 1420 ns; at 99.99%, the worst number after 10,000 attempts was 4580 ns. That’s about 4.6 microseconds.
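A table like the one above can be produced from raw timing samples with a simple nearest-rank percentile. Here is a minimal sketch; the `percentile` helper and the synthetic samples are illustrative (real harnesses such as HdrHistogram do much more):

```rust
// Sketch: turning raw latency samples into a percentile table like
// the one quoted above, using the simple nearest-rank method.

fn percentile(sorted: &[u64], p: f64) -> u64 {
    // Nearest-rank: the p-th percentile of an already-sorted slice.
    let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
    sorted[rank.saturating_sub(1).min(sorted.len() - 1)]
}

fn main() {
    // Pretend these are nanosecond timings of an operation, with an
    // occasional outlier to make the tail visible.
    let mut samples: Vec<u64> = (0..10_000)
        .map(|i| 1_100 + (i * 37) % 900 + if i % 1000 == 0 { 3_000 } else { 0 })
        .collect();
    samples.sort_unstable();

    for p in [50.0, 90.0, 99.0, 99.9, 99.99] {
        println!("{:>6}%: {} ns", p, percentile(&samples, p));
    }
}
```

The point of the table format stands out immediately: the tail percentiles are far from the median, which a single average would hide.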

Averages are the worst, most misleading numbers; we never use them in our benchmarks. They are only good for marketing people.


2 Likes

The CLBG actually had all those problems, but they fixed most of them.

The main problem remains, though. Benchmarks are a lie. Micro-benchmarks are a lie squared. Different systems are optimized for different workloads.

A difference of 10x can show you something, especially when you compare things within the same ecosystem. But a factor-of-2 difference between different environments is very hard to interpret. My regex example shows exactly that.

Well, if you are working in HFT industry, then I believe you are much more interested in latency than throughput. And you need latency to be predictable and consistent as much as possible. So it’s no surprise average is useless for you and you need distribution.

In many (most?) cases people are much more interested in throughput and then (correct if I’m wrong) measuring the average and the standard deviation is just fine.

We measure both; I just happened to stumble upon the latency example first. But even when you’re interested in throughput, you want a distribution, not an average. We intuitively want to see one number, but if your system’s average is 100 ms and its 99.99th percentile is 10 seconds, you may have a huge problem.

Averages hide information; they lose a lot of detail about your benchmark.


2 Likes

Rust is really way faster than Kotlin, but I felt Kotlin is easier to learn, and Kotlin can be slower than Rust due to its Java compatibility.
Rust is the most loved language, but its use cases differ from Kotlin’s. I see Kotlin a lot in web development, backend APIs, and games, whereas Rust is mainly used in the systems and embedded world.

When you say that something is “fast” or “faster”, please always provide the benchmarks you used.

1 Like

Their benchmark is startup time… I already demonstrated to a few Rust devs that Rust is not fast.
If you use the JVM right (let it warm up, then measure performance), it’s normally better than Rust, even with the overhead of the OS.


It’s all subjective; I personally did not like Rust at all.
The memory model is an interesting trick, but it sacrifices code readability and performance for something that is not a big issue (how many memory leaks have you had in your life?).

Most importantly, it’s a very opinionated language: it gives you only one way of doing things, and if you don’t like it, there is no choice. Want to throw an exception instead of returning a Result from every goddamn function? Sorry, we are too lazy to implement exception support.

Time will show where Rust will be after a few years. It’s basically a nice idea for a memory model slapped on top of LLVM, and done… It throws away 20 years of good work done in the JVM to support lots of cool features.

JIT is better than anything else; GC is better than other memory models.


3 Likes

Spoken like someone who’s never done anything serious in C, C++, or similar.


Manual memory management is subtle and tricky to get right, requires constant vigilance, and causes errors that can be extremely hard to find but very serious. Anything* that removes the burden from developers is a huge benefit.

(* I’m not speaking about Rust, coz I know nothing about it.)

At Chronicle we manage memory ourselves all the time; almost everything we use relies on shared memory.

I’ve done my share of low-level programming for years now; it’s not as hard as everyone makes it out to be. Developers don’t need to reinvent these things: all the tools are already published as open source, and almost every bank or exchange is using our software. Yes, if you start doing this from scratch, it is subtle and tricky.

I know my stuff, and no, you don’t need to use C or C++ in this day and age to get amazing performance.

However, C will be faster than Rust simply because it does not sacrifice performance for safe memory management.

2 Likes

I never needed C/C++; the topic is Kotlin vs Rust, not C/C++.

2 Likes

Here is Linus himself educating some Rust fanboys on how real-life programs work: LKML: Linus Torvalds: Re: [PATCH v9 12/27] rust: add `kernel` crate

The notion of panic is borderline stupid unless your goal is to create a programming language specifically for applications that run in a terminal, and even then it’s questionable.


Now imagine someone messing up, doing something foolish, and Rust panicking inside the kernel.

2 Likes

@vach You seem to be too obsessed with this topic, which alone raises the question of whether you’re criticizing the language on technical and conceptual grounds, or just doing it because of a personal distaste (which is totally understandable).

The main language I code in is Kotlin, and I truly appreciate all of the advancements on the multiplatform side (among other things). I love Kotlin deeply, and its ecosystem of idiomatic multiplatform libraries that take advantage of the language’s full potential is one of the biggest reasons for that, with Compose UI being a great example in the UI space.

Like everyone else, I read the news. I saw Rust being this beloved language for many years, and I was like “Cool, the hello world syntax looks cool, anything that’s not C/C++ is great!” and just moved on with my life… Then Linux adopted Rust in the kernel, and a random YouTuber showed up with some nice hype videos for the language, so I began learning it with an open mind, directly from the Rust Book.

The syntax looked nice and didn’t look too unfamiliar from Kotlin (C-style with postfix types), until I got into ownership and borrowing, Rust’s unique features that make it stand out.

Since I’m a student, as well as coding Kotlin most of the time, learning these concepts well and applying them in small CLI tools and programs took quite some time: around 6 months before I was familiar and comfortable with them and knew some anti-patterns (like using .clone() everywhere).

I think you didn’t give the language enough time, or simply started with a closed mind to begin with. While not everyone likes Rust, and it certainly isn’t the best tool for everything, and it certainly isn’t the fastest in all workloads, it’s still a nice culmination of many existing patterns, in combination with new ideas, and it has many real-world uses. The focus on correctness in the Rust community means that what you learn can be transferred as best practices to other languages for better, more reliable code. I’m certainly using sealed classes a lot more after Rust.

I still will use Kotlin (multiplatform) for everything app-related, but I think Rust taught me a lot of good patterns that make a lot of sense but somehow are usually missed when using other languages.

1 Like

You’re right, I should give it another run; I heard Rust has made a lot of progress since then.

Either way, my opinion about the borrow checker does not really change: it’s solving a non-issue (memory leaks are not that common), especially if your team has a reasonable level of competence.

But the cost of the borrow checker is readability and maintainability… Another thing I think is still there is the stupid panic mechanic. It’s never a good idea to terminate your app just like that; you want it to flush some logs or do some work… Maybe it was one request by one user that did something “bad”, so why shut down the entire service/server? Log the error…


It’s a bad idea, and it’s made at the language level instead of being something that teams decide for themselves… I used to do it (except I called it WTF, like the Android folks did), and it’s a stupid idea.

As for speed, I like that Rust is fast, but I do not believe its speed advantage is meaningful. If you understand mechanical sympathy and how hardware works, you can write performant applications in frickin’ Pascal. It’s mostly bad software that causes the slowdown, not the language.


The JVM still kills it, and Kotlin gave it a second breath by making coding actually enjoyable and making the language get out of your way.

I’ll give Rust another run soon; I hope I’ll change my mind, but I doubt it.

2 Likes

Just a small correction; the borrow checker is not meant to prevent memory leaks - in fact memory leaks are not unsafe in Rust, you can introduce them in safe code (although Rust’s design makes it hard to introduce memory leaks). The borrow checker’s purpose is to prevent a large host of common memory-safety bugs that are inevitable in a C/C++ codebase. Rust does provide many primitives for easy interior mutability (again, everything is explicit) as well as “fearless concurrency”.

While some of Rust’s design choices are only truly useful in Rust, multithreading is one of those things where Rust is the easiest and safest, compared even to some high-level languages (Kotlin coroutines included!).

As for panics, I don’t really understand why you consider them that problematic.
Proper Rust code normally never panics without it being extremely explicit in the code, and the library ecosystem agrees strongly. You’ll rarely (if ever) find a mainstream Rust crate that has functions panicking like it’s Java; all functions that may fail return Option (which is like Kotlin’s nullable types), or Result (which is like Kotlin’s rarely-used Result type) which forces you to explicitly handle both the happy and sad paths.
If you choose to ignore the None or Err path (maybe because you’re prototyping, or because you’re sure it should never return either because you checked the necessary invariants before), then the only way to get the value is by calling unwrap(), which will panic if the result is not Some or Ok.
That means you can do a search for “unwrap” or “panic” in any project to know where this could happen.
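The Option/Result workflow described above looks roughly like this. A minimal sketch; `parse_port` is an illustrative helper, not a std API:

```rust
// Sketch of explicit error handling with Option and Result.

fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>() // Result<u16, ParseIntError>
}

fn main() {
    // Handle both paths explicitly with `match`...
    match parse_port("8080") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // ...or opt out explicitly: unwrap() panics on Err.
    // A project-wide search for "unwrap" finds every such spot.
    let p = parse_port("8080").unwrap();
    assert_eq!(p, 8080);

    // Option works the same way for "may be absent" values.
    let first_even = [1, 3, 4, 7].iter().find(|&&n| n % 2 == 0);
    assert_eq!(first_even, Some(&4));
}
```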

This means that if your app panics anywhere, it’s probably code that you wrote yourself assuming it’ll never panic (which is a great indication of a bug). It is an anti-pattern in Rust to use panics in API functions to signal errors, since it means that if the API consumer doesn’t read the docs (if they mention it at all), they might be hit by surprise with a crash at 4 a.m. All functions that may fail return Option or Result.

If we argue against panics, we need another mechanism to crash on unexpected behavior, so what’s the alternative? I used to think Exceptions were fine (to be honest, I thought they were THE way to throw and handle errors), until I used Rust.
While Kotlin helped us get rid of most NPEs (except from Java libraries without null-annotations), there’s always still the hazard of random exceptions thrown on failures, and the try/catch mechanism to handle them is far from pretty or easy to reason about. The main reason lies in the name: “Exceptions”, which nurtures an attitude of treating errors as an “exception to the rule”, when practically, the unhappy path is as important and critical as the happy path, and we shouldn’t treat it as an afterthought.

I used Ktor recently, and I just love every part of it… until an unsuspecting function threw an exception about some SSL error :grimacing: Every part of me wanted that function to return something similar to Rust’s Result<T> type at that moment.

Now, there’s an important detail about panics: they can behave in one of two ways depending on a compiler flag, unwind or abort.

  • unwind is the default on most targets, and it works a lot like exceptions: it starts unwinding the entire call-stack (and preparing the stack trace to print), until it reaches the top of the stack (which will terminate the program), or it gets “caught” using catch_unwind (which returns a Result!).
  • abort is the default on some targets where unwinding is not supported (e.g. some microcontrollers), and can be enabled manually. In this case, the program exits immediately on panic without doing any cleanup or stack popping.
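Under the default unwind runtime, a panic can be contained at a boundary. A minimal sketch; the `might_panic` helper is illustrative:

```rust
// Sketch: under the default `unwind` runtime, a panic can be
// caught at a boundary with catch_unwind, which yields a Result.
use std::panic;

fn might_panic(n: i32) -> i32 {
    if n < 0 {
        panic!("negative input");
    }
    n * 2
}

fn main() {
    // Silence the default "thread panicked" message for the demo.
    panic::set_hook(Box::new(|_| {}));

    let ok = panic::catch_unwind(|| might_panic(21));
    assert_eq!(ok.ok(), Some(42));

    let bad = panic::catch_unwind(|| might_panic(-1));
    assert!(bad.is_err()); // the panic was contained; we keep running

    println!("still alive");
}
```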

Now, let’s address your concerns about logging: it’s not much different from Java/Kotlin in this regard. If, for example, you want to send the stack trace to some web service right before the program terminates, you can do that by setting a global panic hook. It runs under both the unwind and abort runtimes, even before the panic runtime starts executing (to unwind the stack or abort the program).

In real-world applications, the global panic hook is very useful, while catch_unwind is used as a last resort for some very specific use-cases and it’s not encouraged.
Logging in all other cases that don’t panic is as simple as handling the failure path and logging on that.
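A global panic hook looks roughly like this. A sketch: `install_crash_hook` and the `HOOK_RAN` flag are illustrative names, and the real log-flushing/reporting logic would go in the hook body:

```rust
// Sketch of a global panic hook (std::panic::set_hook): it runs on
// every panic, before unwinding or aborting begins, which is where
// "flush logs / send a crash report" logic belongs.
use std::panic;
use std::sync::atomic::{AtomicBool, Ordering};

static HOOK_RAN: AtomicBool = AtomicBool::new(false);

fn install_crash_hook() {
    panic::set_hook(Box::new(|info| {
        // In a real service: flush logs, write a crash report,
        // notify monitoring, etc.
        HOOK_RAN.store(true, Ordering::SeqCst);
        eprintln!("fatal: {info}");
    }));
}

fn main() {
    install_crash_hook();
    // Contain the panic with catch_unwind so the demo keeps running;
    // the hook fires regardless, before unwinding begins.
    let _ = panic::catch_unwind(|| panic!("simulated crash"));
    assert!(HOOK_RAN.load(Ordering::SeqCst));
    println!("hook observed the panic");
}
```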

When it comes to speed, saying “if we understand hardware well enough…” is really like a C developer claiming that we can write large C codebases safely by “being careful enough”.
The cool thing about Rust is that almost all of its high-level abstractions are “Zero Cost Abstractions”. In other words, while you may use fancy high-level constructs to build your code, these constructs add zero overhead at runtime. In fact, a lot of them are optimized in some crazy ways when the compiler can make some assumptions.
One prime example of that is iterators. They’re one of the highest-level parts of Rust that make iterating on ranges, arrays, lists and so on as easy as it is in Kotlin or Python, but without any additional cost as it’s all compiled to a simple, optimized loop, without even needing bounds checks because the compiler can make some assumptions. Other examples include enums (including Option and Result), match statements (with all of their powerful pattern-matching), traits, generics, and a lot more.
Contrast that with Java, where using the Optional<T> type allocates a whole object with its overhead, while the Rust equivalent compiles down to a regular if statement comparing a couple of bytes.
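To illustrate the iterator point, here is a hedged sketch: the two functions below do the same work, and in release builds the compiler typically lowers the iterator chain to the same optimized loop as the manual version (without the bounds checks that indexing implies):

```rust
// Sketch: a high-level iterator chain vs. a hand-written loop.

fn sum_of_squares_iter(xs: &[i64]) -> i64 {
    xs.iter().map(|x| x * x).sum()
}

fn sum_of_squares_loop(xs: &[i64]) -> i64 {
    let mut total = 0;
    for i in 0..xs.len() {
        total += xs[i] * xs[i]; // indexing implies a bounds check
    }
    total
}

fn main() {
    let data: Vec<i64> = (1..=100).collect();
    assert_eq!(sum_of_squares_iter(&data), sum_of_squares_loop(&data));
    println!("{}", sum_of_squares_iter(&data)); // 338350
}
```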

Of course, that doesn’t mean that Rust is unanimously faster and better for workloads where performance matters. In C, C++, Rust, Go, Kotlin/Native, and any other AOT compiled language (to machine code), while the compiler can optimize a lot, it can’t optimize what it doesn’t know yet, like the state of a program while it’s executing. In highly dynamic applications, the JVM has a pretty big advantage: the JIT compiler can optimize your code based on its current dynamic state. Java developers have put a lot of effort into making this a highly smart and performant operation, which is why Rust seems to fail against JVM languages in cases of dynamic dispatch, since Rust will always use a vtable with all its overhead, while the JVM might notice that we’re reusing the same stuff in a loop, and can optimize it to fast, static dispatch.
Of course, for the JVM to do all of this (plus provide a full Java runtime), it uses a LOT of memory, and JVM initialization slows down launch times, none of which are issues with native languages, with Rust / C / C++ being at the top as they don’t have a GC (and its performance and memory impact) at all. Swift is also close, exclusively using reference counting without a tracing GC.

Again, all of this depends heavily on the use case, but generally speaking, Rust will output the fastest instructions based on the knowledge it has at compile time, while the JVM will lag behind unless it’s lucky enough to spot some optimizable patterns (and it’s very good at spotting many of them!).

I hope this clears some of the confusion! It’s a lot to grasp, but yeah, everything is there for a reason. Of course some of this comes at the expense of a little more verbosity and a steep learning curve, but everything in life is a tradeoff!

1 Like

Exactly. The reason I do not like panic is that many times our Rust devs would report a service down, but there were no logs, no snapshots, no trace of what happened… Why? Because some new crappy library they used panicked over something completely insignificant, and it took down our entire service.

We were using rust as an experiment. They would run it without an OS.

Fundamentally, anyone who writes reusable code should not panic, so why is that even a language feature? Because Rust is lazy: instead of implementing proper error handling, it forces you to write ugly optionals on everything. That’s why I do not like it. I should have options: if I want optionals, I’ll use optionals; if I want to put a giant try/catch around things and have some generalized error handling, I should be able to do so.

On top of the borrow checker, this lazy refusal to implement an error-handling mechanism (as if that were a good thing) makes your code ugly and less readable.


Never heard of global panic hooks; I’ll take a look.
Anyhow, my opinions on this topic are a few years out of date.

I simply do not like the lack of options and the language telling me how I’m supposed to write “good” code. I do not think it’s good in any way; good code, to me, is readable, short, and to the point (and does not do anything foolish in terms of performance).

1 Like

You said it: you used a new crappy library. A panic in a library (without exposing an Option/Result type) is a BUG, or otherwise a design mistake (i.e. it’s a bad library).

I don’t know what makes panics so different from any other error-handling mechanism. The coolest thing about optionals is that you’re forced to handle them (this is no different from Kotlin), but now you also have Results that enumerate the possible errors (a lot like exceptions, but all compile-checked and enforced).
If the language waters these things down, then what’s the point of Rust to begin with? I think that’s one of its main powers.

And yet again: the language really tries to nudge you towards explicit error handling using Option/Result, but you don’t have to. If you don’t trust a certain library, you can still do “try/catch”-style handling by wrapping the call in catch_unwind, which returns the result on success and an error on panic, preventing the panic from crashing your program.

Otherwise, exceptions also crash the program if you don’t handle them (which is worse: because you’re not forced to deal with them, it’s extremely easy to forget one, or for a library to introduce a new possible exception in an unsuspecting function).

Also, as you said, you’re running the code without an OS; that’s a special case where you always need to do more work and do things differently no matter the language, and Rust doesn’t prevent you from doing so. The Linux example you posted is a very useful one. Do these use cases mean that panics are bad? Absolutely not. For the majority of use cases, they’re a way to let you know something bad happened and that, if the program resumed, its integrity would probably be compromised (which means unexpected or even undefined behavior; good luck debugging those nasal demons or dealing with a corrupted database).

I’ve written a considerable amount of Rust, and honestly error handling (and even the borrow-checker) do not cause a lot of verbosity. Again, I wouldn’t use Rust for an application, since I don’t care about all the manual memory management stuff, but many of its patterns are very useful and could prevent a host of bugs in any language really.

1 Like

Note: the only advantage of it in GC languages is avoiding UB by restricting non-determinism, since you can’t get segfaults in GC languages.
It would be nice to optionally have some keywords restricting non-determinism when passing structures to multiple threads.
However, I heard multithreading can also be unsafe when implementing Send and Sync at the same time, but I can’t really argue about its impact.

1.) It is better, as it separates error logic from success logic, which is conceptually easier to track.
2.) It is more efficient than having a sum type of a result type and an error type and comparing bits for the exact type.
3.) The example given here (Recoverable Errors with Result - The Rust Programming Language) shows a case where you can handle the error directly inside the function itself, but often you can’t, so a function has to check only certain error kinds and, if none match, propagate the result up, which again turns the function’s return type into an error.

So, in return, any function you call that accesses any external resource is required to return a Result type, and the error must be propagated up manually all the time, which is very tedious and error-prone in case you forget to propagate it.
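For reference, the manual propagation described here is what Rust’s `?` operator automates. A minimal sketch; the `parse_number` helper is illustrative:

```rust
// Sketch: the `?` operator propagates the Err variant up the call
// chain automatically, so "check and re-return" is one character
// per call site.
use std::num::ParseIntError;

fn parse_number(text: &str) -> Result<i32, ParseIntError> {
    // On Err, `?` returns it from this function immediately;
    // on Ok, it unwraps the value.
    let n = text.trim().parse::<i32>()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_number(" 21 "), Ok(42));
    assert!(parse_number("oops").is_err());
    println!("{:?}", parse_number("21")); // Ok(42)
}
```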

So Rust has reinvented the wheel of exceptions, which in all its ugliness must be controlled via compiler flags, so I always have to ask myself how a library was compiled.

That’s a marketing gag and isn’t true. Rust’s memory management doesn’t manage memory at compile time; it just determines at compile time when to deallocate at runtime, which is a better version of RAII. And it isn’t necessarily more performant than using a GC, though it is surely more deterministic.

These are optimizations rather than guarantees. Maybe the JVM does something similar to LLVM; I can’t argue.

Keep in mind that you often need a heap in more dynamic contexts, and as far as I’m aware Rust’s solution is to use Rc as well, leaving out cycle detection.
I’ve heard Rust doesn’t forbid all reference cycles, leading to memory leaks; Swift likewise.
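Such a leak is indeed easy to reproduce in safe Rust. A minimal sketch; the `Node` type is illustrative:

```rust
// Sketch: a reference cycle with Rc leaks in safe Rust — the two
// nodes keep each other alive, so Drop never runs for either.
// Weak references are the usual fix.
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b)); // cycle: a -> b -> a

    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
    // When a and b go out of scope, the counts drop to 1, never 0:
    // the nodes are leaked. No unsafe code was involved.
    println!("cycle built; strong count of a: {}", Rc::strong_count(&a));
}
```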

2 Likes

It feels like you haven’t used Rust (or at least, not given it enough time, since it does have a steep learning curve).

However, I heard multithreading can also be unsafe when implementing Send and Sync at the same time, but I can’t really argue about its impact.

Multithreading (and anything else) is only really unsafe if you use unsafe in your code. Sync and Send are unsafe traits, so implementing them manually means you have to make sure you break nothing (which is not easy at all, and some Rust specifics make dealing with unsafe code a bit harder compared to C or Zig). Otherwise, the built-in implementations of Sync and Send on built-in types, as well as the ones automatically inferred by the compiler, are all safe to use, even if both are implemented at the same time.

1.) It is better as it separates error logic from success logic, which is conceptually easier to track.

Well, it really depends… I guess this is a more subjective point. I find explicit handling better (although it doesn’t have to be as verbose: Kotlin itself has optionals, and they’re prettier than Rust’s; Vlang’s too).

So, in return, any function you call that accesses any external resource is required to return a Result type, and the error must be propagated up manually all the time, which is very tedious and error-prone in case you forget to propagate it.

I’m now in the middle of writing a full desktop app (point of sale) with Rust, and I hardly find Results and Options to be an annoyance. Again, not saying that the Rust way is the prettiest, but the idea stands. For a nicer and easier syntax, I think Vlang does pretty well.

I also don’t see how it is “error-prone”. The best thing about this whole approach is that you can’t “forget”: you have to do something about it, explicitly, in code, unlike exceptions, which you can ignore or forget without issue. In fact, it’s easy to find all possible (explicitly ignored) failure points with a global search for unwrap, which helps with local reasoning.

So Rust has reinvented the wheel of exceptions which in its ugliness must be controlled over compiler flags, so I have always to ask myself how the library was compiled.

In 99% of cases, you don’t have to ask yourself anything; it’s automatically unwind.
abort exists for very specific use cases, mainly for obscure processor architectures (usually on microcontrollers) that do not support the unwind panic runtime (either it’s not yet implemented for those targets, or they can’t support it at all).
And yet, the global panic hook runs no matter the panic runtime used.

Also, you don’t have to “ask yourself how a library was compiled”: libraries are compiled the way your project is, so you already know whether you’ve overridden this yourself.

This is literally the least of concerns when it comes to error handling, and even if you code for these obscure platforms, if you’re handling errors properly nothing really changes practically.

That’s a marketing gag and isn’t true. Rust’s memory management doesn’t manage memory at compile time; it just determines at compile time when to deallocate at runtime, which is a better version of RAII. And it isn’t necessarily more performant than using a GC, though it is surely more deterministic.

Umm… I wasn’t even talking about memory management when talking about zero-cost abstractions… But I don’t understand what your alternative is. Having the compiler deallocate nothing (by adding no deallocation code)? Not allocating in the first place?

I don’t see your point here. The Rust version of RAII (deallocating owned values at the end of their owning scope) is quite literally doing the same thing C developers do manually (or, well, forget to do, or do twice). That is exactly what compile-time memory management means in Rust. If you had read the Rust book, I don’t think this would have been unclear.
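What the compiler inserts can be made visible with a Drop implementation. A minimal sketch; the `Buffer` type and `DROPS` counter are illustrative:

```rust
// Sketch: the compiler inserts the cleanup call (Drop) where
// ownership ends — the `free` a C programmer writes by hand,
// but deterministic and automatic, with no GC involved.
use std::sync::atomic::{AtomicUsize, Ordering};

static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Buffer;

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs automatically at the end of the owning scope.
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _a = Buffer;
        let _b = Buffer;
    } // <- both dropped here, deterministically
    assert_eq!(DROPS.load(Ordering::SeqCst), 2);
    println!("drops so far: {}", DROPS.load(Ordering::SeqCst));
}
```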

However, what I was actually talking about are high-level constructs (iterators, Option / Result or any enum type, traits, generics, and more). Rust isn’t particularly unique in this aspect either, but combined with the memory management constructs it enforces, it can usually optimize them a lot better than other languages.

These are rather optimizations than guarantees. Maybe the JVM is doing similar to llvm, can’t argue.

I know it’s hard to say, and I also used to think the same about them. But once you realize how a lot of these high-level constructs work under the hood, you’ll find it’s almost always better to use them than to write the same low-level code by hand. This is why premature optimization is strongly discouraged unless supported by benchmarks showing an actual improvement. People did this before, and the compiler beats them 90% of the time.

Keep in mind that you often need a heap in more dynamic contexts, and as far as I’m aware Rust’s solution is to use Rc as well, leaving out cycle detection.
I’ve heard Rust doesn’t forbid all reference cycles, leading to memory leaks; Swift likewise.

True. Let me quote myself in the first sentence of a previous post on this thread:

Just a small correction; the borrow checker is not meant to prevent memory leaks - in fact memory leaks are not unsafe in Rust, you can introduce them in safe code (although Rust’s design makes it hard to introduce memory leaks). The borrow checker’s purpose is to prevent a large host of common memory-safety bugs that are inevitable in a C/C++ codebase.

Memory leaks are possible in safe Rust (and Swift as well), they are not considered unsafe.

However, Rust still has a performance edge over Swift and others because you use Rc, Arc, Mutex, and other heap “box”-es only when you need to, while in Swift everything is reference-counted. It’s quite literally the same code you would write in C, after all, but safe from nasal demons (UB) and with an easy-to-use high-level API.

In real-world applications, you almost always will use the heap, and Rust helps you do so safely.
As I said, that doesn’t mean Rust is faster everywhere.

Swift and other exclusively reference-counted languages are very close to C, C++, Rust, and Zig because the overhead of reference-counting (generally-speaking) is less than GC, but they also require you to be a bit more explicit (weak vs. strong refs and so on) and you can introduce reference cycles in them leading to memory leaks.

Kotlin/Native was initially like this, but it added boilerplate and extra concepts like freezing which were confusing to Kotlin developers coming from JVM (myself included, I never got it). I’m pretty happy about the decision to use a GC, since it keeps Kotlin true to its spirit.

I hope this answers your questions!

1 Like