Even better, it seems like they include JVM startup time https://github.com/hanabi1224/Programming-Language-Benchmarks/blob/main/bench/algorithm/mandelbrot/4-m.java I wonder why they don't start the measurements when pushing the power button on the computer.
Recently I had several minutes of free time and checked the CLBG benchmark code for different languages. I took the benchmark that is slowest in Java - regex-redux - and checked the implementations in Java and other languages. Guess what I found?
They are not measuring the performance of the compiler, they are measuring the performance of the regex library. And in the case of Java they use the Java stdlib, whereas in most other languages they use pcre2. The feature sets even differ from the Java one.
It's common; very few in the industry actually do realistic/useful benchmarks, and those who do never take averages… it's never a single stupid number…
Here is an example of a good benchmark:
All times in nanoseconds:
Limit order placement
50%: 1160
90%: 1180
97%: 1220
99%: 1420
99.7%: 1900
99.9%: 2220
99.97%: 3440
99.99%: 4580
This is called a distribution of whatever you're measuring… when you see 99% it means the worst number your benchmark observed in 100 attempts was 1420, but at 99.99% the worst number after trying 10000 times was 4580 nanoseconds… that's about 4.6 microseconds.
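As a toy sketch (not the actual benchmark tooling), such a distribution can be computed from raw samples with a nearest-rank percentile:

```rust
// Sketch: turning raw latency samples into a percentile distribution.
// The samples below are made up; the method is the standard nearest-rank one.

fn percentile(sorted: &[u64], p: f64) -> u64 {
    // Nearest-rank: the worst value seen within the best p% of runs.
    let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
    sorted[rank.saturating_sub(1).min(sorted.len() - 1)]
}

fn main() {
    let mut samples: Vec<u64> =
        vec![1160, 1150, 1180, 1175, 1220, 1190, 1420, 1165, 1900, 1170];
    samples.sort_unstable(); // percentiles require sorted data
    for p in [50.0, 90.0, 99.0] {
        println!("{:>5}%: {} ns", p, percentile(&samples, p));
    }
}
```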
Averages are the worst, most misleading numbers; we never use them in our benchmarks. They're only good for marketing people…
CLBG actually had all those problems, but they fixed most of them.
The main problem remains though. Benchmarks are a lie. Micro-benchmarks are a lie squared. Different systems are optimized for different workloads.
A difference of 10x could show you something, especially when you compare things in the same ecosystem. But a factor-of-2 difference between different environments is very hard to interpret. My example with regex shows exactly that.
Well, if you are working in the HFT industry, then I believe you are much more interested in latency than throughput. And you need latency to be as predictable and consistent as possible. So it's no surprise the average is useless for you and you need a distribution.
In many (most?) cases people are much more interested in throughput, and then (correct me if I'm wrong) measuring the average and the standard deviation is just fine.
We measure both; I just happened to stumble upon a latency example first. But even when you're interested in throughput you want to have a distribution, not an average. We just intuitively want to see one number, but if your system's average is 100ms and the 99.99% is 10 seconds then you may have a huge problem.
Averages hide information; they lose a lot of info about your benchmark…
Rust is really way faster than Kotlin. But I felt Kotlin is easier to learn, and Kotlin can be slower than Rust partly due to its Java compatibility.
Rust is the most loved language, but its use cases are different from Kotlin's. I see Kotlin a lot in web development, backend APIs, and games, whereas Rust is mainly used in the systems and embedded world.
When you are saying that something is "fast" or "faster", please always provide the benchmarks you used.
Their benchmark is startup time… I already demonstrated to a few Rust devs that Rust is not fast.
If you use the JVM right (let it warm up and then measure performance) it's normally better than Rust, even with the overhead of the OS…
It's all subjective; I personally did not like Rust at all.
The memory model is an interesting trick, but it sacrifices code readability and performance for something that is not a big issue (how many times did you have memory leaks in your life?)
Most importantly, it's a very opinionated language; it only gives you one way of doing things. Don't like it? There is no choice. Want to throw an exception instead of returning Results from every goddamn function? Sorry, we are too lazy to implement exception support.
Time will show where Rust will be after a few years. It's basically a nice idea for a memory model, slapped on top of LLVM, and done… it throws away 20 years of good work done in the JVM to support lots of cool features.
JIT is better than anything else, GC is better than other memory models…
There writes someone who's never done anything serious in C, C++, or similar…
Manual memory management is subtle and tricky to get right, requires constant vigilance, and causes errors that can be extremely hard to find but very serious. Anything* that removes the burden from developers is a huge benefit.
(* I'm not speaking about Rust, coz I know nothing about it.)
At Chronicle we manage memory ourselves all the time; almost everything we use is using shared memory.
I've done my share of low-level programming for years now; it's not as hard as everyone makes it out to be. Developers don't need to reinvent these things, all the tools are already published as open source, and almost every bank or exchange is using our software. Yes, if you start to do this from scratch it is subtle and tricky.
I know my stuff, and no, you don't need to use C or C++ in this day and age to have amazing performance.
However, C will be faster than Rust just because it does not sacrifice performance for safe memory management.
I never needed C/C++; the topic is Kotlin vs Rust, not C/C++.
Here is Linus himself educating some Rust fanboys about how real-life programs work: LKML: Linus Torvalds: Re: [PATCH v9 12/27] rust: add `kernel` crate
The notion of panic is borderline stupid unless your goal is to create a programming language specifically for developing applications that are used in a terminal, and even then it's questionable…
Now imagine someone messing up, doing something foolish, and Rust panicking within the kernel.
@vach You seem to be too obsessed with the topic; this alone raises red flags about whether you're criticizing the language on technical and conceptual grounds, or just doing it for the sake of it because of a personal distaste (which is totally understandable).
The main language I code in is Kotlin, and I truly appreciate all of the advancements on the multiplatform side (among other things). I love Kotlin deeply, and its ecosystem of idiomatic multiplatform libraries that take advantage of the language's full potential is one of the biggest reasons for that, with Compose UI being a great example in the UI space.
Like everyone else, I read the news. I saw Rust being this beloved language for many years, and I was like "Cool, the hello world syntax looks cool, anything that's not C/C++ is great!" and just moved on with my life… Then Linux adopted Rust in the kernel. And a random YouTuber showed up with some nice hype videos for the language, so I began learning it with an open mind, directly from the Rust Book.
The syntax looked nice and didn't look too unfamiliar from Kotlin (C-style with postfix types), until I got into ownership and borrowing, Rust's unique features that make it stand out.
Since I'm a student, as well as coding Kotlin in my main time, learning these concepts well and applying them in small CLI tools and programs took quite some time. It was around 6 months before I got familiar and comfortable with them and knew some anti-patterns (like using `.clone()` everywhere).
I think you didn't give the language enough time, or simply started with a closed mind to begin with. While not everyone likes Rust, and it certainly isn't the best tool for everything, and it certainly isn't the fastest in all workloads, it's still a nice culmination of many existing patterns, in combination with new ideas, and it has many real-world uses. The focus on correctness in the Rust community means that what you learn can be transferred as best practices to other languages for better, more reliable code. I'm certainly using sealed classes a lot more after Rust.
I still will use Kotlin (multiplatform) for everything app-related, but I think Rust taught me a lot of good patterns that make a lot of sense but somehow are usually missed when using other languages.
You're right, I should give it another run; I heard Rust has made a lot of progress since then…
Either way, my opinion about the borrow checker does not really change: it's solving a non-issue (memory leaks are not such a common thing), especially if your team has some reasonable level of competence.
But the cost of the borrow checker is readability and maintainability… another thing that I think is still there is the stupid panic mechanic. It's never a good idea to terminate your app just like that; you want it to flush some logs or do some work… maybe it was one request by one user that did something "bad", so why shut down the entire service/server… log the error…
It's a bad idea, and it's made at the language level instead of being something that teams decide for themselves… I used to do it (except I called it WTF, like the Android folks did) and it's a stupid idea.
As for speed, I like that Rust is fast, but I do not believe its speed advantage is meaningful. If you understand mechanical sympathy/how hardware works you can write performant applications in frickin' Pascal. It's mostly bad software that is the reason for the slowdown, not the language…
The JVM still kills it, and Kotlin gave it a second breath by making coding actually enjoyable and by getting the programming language out of your way.
I'll give Rust another run soon; I hope I'll change my mind but I doubt it.
Just a small correction: the borrow checker is not meant to prevent memory leaks; in fact memory leaks are not unsafe in Rust, and you can introduce them in safe code (although Rust's design makes it hard to do). The borrow checker's purpose is to prevent a large host of common memory-safety bugs that are inevitable in a C/C++ codebase. Rust does provide many primitives for easy interior mutability (again, everything is explicit) as well as "fearless concurrency".
While some of Rust's design choices are only truly useful in Rust, multithreading is one of those things where Rust is the easiest and the safest compared to even some high-level languages (Kotlin coroutines included!).
As for panics, I don't really know why you consider them that problematic?
Proper Rust code normally never panics without it being extremely explicit in the code, and the library ecosystem agrees strongly. You'll rarely (if ever) find a mainstream Rust crate that has functions panicking like it's Java; all functions that may fail return `Option` (which is like Kotlin's nullable types) or `Result` (which is like Kotlin's rarely-used `Result` type), which forces you to explicitly handle both the happy and sad paths.
If you choose to ignore the `None` or `Err` path (maybe because you're prototyping, or because you're sure it should never return either, since you checked the necessary invariants before), then the only way to get the value is by calling `unwrap()`, which will panic if the result is not `Some` or `Ok`.
That means you can do a search for "unwrap" or "panic" in any project to know where this could happen.
This means that if your app panics anywhere, it's probably code that you wrote yourself assuming it'll never panic (which is a great indication of a bug). It is an anti-pattern in Rust to use panics in API functions to signal errors, since it means that if the API consumer doesn't read the docs (if they mention it at all), they might be hit by surprise with a crash at 4 a.m. All functions that may fail return `Option` or `Result`.
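To make that concrete, here is a minimal sketch (the `parse_port` function is made up for illustration, but `Result` and `unwrap()` behave exactly as described):

```rust
// Sketch of Rust's explicit error handling with a hypothetical function.

fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>() // may fail -> returns Result, the caller must deal with it
}

fn main() {
    // Explicitly handling both the happy and sad paths:
    match parse_port("8080") {
        Ok(p) => println!("port = {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // Opting out explicitly: unwrap() panics if the value is Err,
    // and is trivially greppable in the codebase.
    let p = parse_port("8080").unwrap();
    assert_eq!(p, 8080);

    // This line would panic at runtime if uncommented:
    // let bad = parse_port("not-a-port").unwrap();
}
```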
If we argue against panics, we need another mechanism to crash on unexpected behavior, so what's the alternative? I used to think Exceptions were fine (to be honest, I thought they were THE way to throw and handle errors), until I used Rust.
While Kotlin helped us get rid of most NPEs (except from Java libraries without null annotations), there's always still the hazard of random exceptions thrown on failures, and the `try`/`catch` mechanism to handle them is far from pretty or easy to reason about. The main reason lies in the name: "Exceptions", which nurtures an attitude of treating errors as an "exception to the rule", when practically the unhappy path is as important and critical as the happy path, and we shouldn't treat it as an afterthought.
I used Ktor recently, and I just love every part of it… until an unsuspecting function threw an exception about some SSL error. Every part of me wanted that function to return something similar to Rust's `Result<T>` type at that moment.
Now there's an important detail about panics: they can behave in one of two ways depending on a compiler flag: `unwind` or `abort`.
`unwind` is the default on most targets, and it works a lot like exceptions: it starts unwinding the entire call stack (and preparing the stack trace to print), until it reaches the top of the stack (which will terminate the program), or it gets "caught" using `catch_unwind` (which returns a `Result`!). `abort` is the default on some targets where unwinding is not supported (e.g. some microcontrollers), and can be enabled manually. In this case, the program exits immediately on panic without doing any cleanup or stack popping.
Now let's address your concerns about logging: it's not much different from Java/Kotlin in this regard. If, for example, you want to send the stack trace to some web service right before the program terminates, you can do that by setting a global panic hook. This gets run in both the `unwind` and `abort` runtimes, even before the panic runtime starts executing (to unwind the stack or abort the program).
In real-world applications, the global panic hook is very useful, while `catch_unwind` is used as a last resort for some very specific use-cases and is not encouraged.
Logging in all other cases that don't panic is as simple as handling the failure path and logging there.
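A tiny sketch of the hook (`std::panic::set_hook` and `catch_unwind` are the real APIs; the logging itself is just a placeholder):

```rust
use std::panic;

fn main() {
    // The global panic hook runs before the panic runtime takes over
    // (unwind or abort), so it's the place to flush logs or notify monitoring.
    panic::set_hook(Box::new(|info| {
        eprintln!("panic reported by hook: {info}");
    }));

    // Trigger a panic but contain it, so we can observe that the hook ran
    // and the process survived.
    let outcome = panic::catch_unwind(|| {
        panic!("something went wrong");
    });
    assert!(outcome.is_err());
    println!("process still alive after the panic");
}
```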
When it comes to speed, saying "if we understand hardware well enough…" is really like a C developer claiming that we can write large C codebases safely by "being careful enough".
The cool thing about Rust is that almost all of its high-level abstractions are "zero-cost abstractions". In other words, while you may use fancy high-level constructs to build your code, these constructs add zero overhead at runtime. In fact, a lot of them are optimized in some crazy ways when the compiler can make some assumptions.
One prime example of that is iterators. They're one of the highest-level parts of Rust, making iteration over ranges, arrays, lists and so on as easy as it is in Kotlin or Python, but without any additional cost, as it's all compiled to a simple, optimized loop, often without even needing bounds checks because the compiler can make some assumptions. Other examples include `enum`s (including `Option` and `Result`), match statements (with all of their powerful pattern matching), traits, generics, and a lot more.
Contrast that with Java, where using the `Optional<T>` type creates a whole object with plenty of overhead, while the Rust equivalent is simplified to a regular if statement checking the equality of a couple of bytes.
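As a toy illustration of the iterator point (not a benchmark, just showing that the high-level chain and the manual loop compute the same thing, with the former typically compiling down to the latter):

```rust
// High-level iterator chain: filter, map, sum.
fn sum_of_even_squares_iter(data: &[i64]) -> i64 {
    data.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .sum()
}

// The equivalent manual loop the iterator version is typically lowered to.
fn sum_of_even_squares_loop(data: &[i64]) -> i64 {
    let mut acc = 0;
    for &x in data {
        if x % 2 == 0 {
            acc += x * x;
        }
    }
    acc
}

fn main() {
    let data = [1, 2, 3, 4, 5, 6];
    assert_eq!(sum_of_even_squares_iter(&data), sum_of_even_squares_loop(&data));
    println!("{}", sum_of_even_squares_iter(&data)); // 4 + 16 + 36 = 56
}
```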
Of course, that doesn't mean that Rust is unanimously faster and better for workloads where performance matters. In C, C++, Rust, Go, Kotlin/Native, and any other language AOT-compiled to machine code, the compiler can optimize a lot, but it can't optimize what it doesn't know yet, like the state of a program while it's executing. In highly dynamic applications, the JVM has a pretty big advantage: the JIT compiler can optimize your code based on its current dynamic state. Java developers have put a lot of effort into making this a highly smart and performant operation, which is why Rust can lose to JVM languages in cases of dynamic dispatch, since Rust will always use a vtable with all its overhead, while the JVM might notice that we're reusing the same stuff in a loop and optimize it to fast, static dispatch.
Of course, for the JVM to do all of this (plus provide a full Java runtime), it uses a LOT of memory, and JVM initialization slows down launch times, none of which are issues with native languages, with Rust / C / C++ being at the top as they don't have a GC (and its performance + memory impact) at all. Swift is also close, exclusively using reference counting without a GC.
Again, all of this depends heavily on the use-case, but generally speaking, Rust will output the fastest instructions based on the knowledge it has at compile time, while the JVM will lag behind unless it's lucky enough to spot some optimizable patterns (and it's so good at spotting many of them!).
I hope this clears some of the confusion! It's a lot to grasp, but yeah, everything is there for a reason. Of course some of this comes at the expense of a little more verbosity and a steep learning curve, but everything in life is a tradeoff!
Exactly. The reason I do not like panic is because I had many cases where our Rust devs would report a service down, but there are no logs, no snapshots, no trace of what happened… why? Because some new crappy library they used panicked over something completely insignificant and it took down our entire service.
We were using Rust as an experiment; they would run it without an OS.
Fundamentally, anyone who writes reusable code should not panic, so why is that even a language thing? Because Rust is lazy: instead of implementing some proper error handling it forces you to write ugly optionals on everything. That's why I do not like it. I should have options: if I want optionals I will do optionals; if I want to put a giant try and catch and have some generalized error handling I should be able to do so.
Together with the borrow checker, this little lazy thing of not doing an error handling mechanism (as if it were a good thing) is making your code ugly and less readable…
Never heard of global panic hooks; I'll take a look.
Anyhow, my opinions on this topic are a few years out of date.
I simply do not like the lack of options and the language telling me how I'm supposed to write "good" code. I do not think it's good in any way; good code to me is readable, short, and to the point (and does not do anything foolish in terms of performance).
You said it: you used a new crappy library. A panic in a library (without exposing an Option/Result type) is a BUG; otherwise it's a design mistake (i.e. it's a bad library).
I don't know what makes panics so different from any other error handling mechanism. The coolest thing about optionals is that you're forced to handle them (this is no different than Kotlin), but now you also have Results that enumerate the possible errors (a lot like exceptions, but all compile-checked and enforced).
If the language watered these things down, then what would be the point of Rust to begin with? I think that's one of its main powers.
And yet again: the language really tries to nudge you towards explicit error handling using `Option`/`Result`, but you don't really have to. If you don't trust a certain library, you can still do "try/catch"-style handling by wrapping the code in `catch_unwind`, which returns the result on success and returns an error on panic, preventing the panic from crashing the program.
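A minimal sketch of that containment pattern (`catch_unwind` is the real API; the panicking function is a made-up stand-in for an untrusted dependency):

```rust
use std::panic;

// Hypothetical stand-in for a library function you don't fully trust.
fn untrusted_library_call(x: i32) -> i32 {
    if x < 0 {
        panic!("library bug: negative input");
    }
    x * 2
}

fn main() {
    // catch_unwind turns a panic into an Err instead of crashing the process.
    let ok = panic::catch_unwind(|| untrusted_library_call(21));
    assert_eq!(ok.ok(), Some(42));

    let bad = panic::catch_unwind(|| untrusted_library_call(-1));
    assert!(bad.is_err()); // the service keeps running; log it and move on
    println!("survived the library panic");
}
```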
Otherwise, exceptions also crash the program if you don't handle them (which is worse, because you're not forced to deal with them, so it's extremely easy to forget, or for a library to introduce a new possible exception in an unsuspecting function).
Also, as you said you're running the code without an OS, that's a special case where you always need to do more work and do things differently no matter the language, and Rust doesn't prevent you from doing so. The Linux example you posted is a very useful one. Do these use-cases mean that panics are bad? Absolutely not; for the majority of use-cases they're a way to let you know something bad happened, and that if the program had resumed, its integrity would probably have been compromised (which means unexpected or even undefined behavior; good luck debugging those nasal demons or dealing with a corrupted database).
I've written a considerable amount of Rust, and honestly error handling (and even the borrow checker) do not cause a lot of verbosity. Again, I wouldn't use Rust for an application, since I don't care about all the manual memory management stuff, but many of its patterns are very useful and could prevent a host of bugs in any language, really.
Note: the only advantage of it in GC languages is avoiding UB by restricting non-determinism, since you can't get segfaults in GC languages.
It would be nice to have some keywords optionally restricting non-determinism when passing structures to multiple threads.
However, I heard multithreading can also be unsafe when implementing Send and Sync at the same time, but I can't really argue about its impact.
1.) It is better as it separates error logic from success logic, which is conceptually easier to track.
2.) It is more efficient than having a sum type of a result type and an error type and comparing bits to find the exact type.
3.) The example given here: Recoverable Errors with Result - The Rust Programming Language shows the case where you can directly handle the error inside the function itself, but this is often not the case, so a function has to check only certain error kinds and, if not matched, propagate the result up, which again converts the function's return type into an error.
So in return, any function you call that accesses any external resources is required to return a Result type, requiring the error to be propagated up manually all the time, which is very tedious and error-prone in case you forget to propagate it.
So Rust has reinvented the wheel of exceptions, which in its ugliness must be controlled via compiler flags, so I always have to ask myself how the library was compiled.
That's a marketing gag and isn't true. Rust's memory management doesn't manage memory at compile time; it just knows when to deallocate it at runtime, which is a better version of RAII, and it isn't necessarily more performant than using a GC, though surely more deterministic.
These are rather optimizations than guarantees. Maybe the JVM is doing something similar to LLVM; I can't argue.
Keep in mind that you often need a heap in more dynamic contexts, and as far as I'm aware Rust's solution is to use Rc as well, leaving out cycle detection.
I've heard Rust doesn't prevent all cases of reference cycles, leading to memory leaks; Swift likewise.
It feels like you haven't used Rust (or at least, not given it enough time, since it does have a steep learning curve).
I heard multithreading can also be unsafe when implementing Send and Sync at the same time, but can't really argue about its impact.
Multithreading (and anything else) is only really unsafe if you use `unsafe` in your code. `Sync` and `Send` are unsafe traits, so implementing them manually means that you have to make sure to break nothing (which is not easy at all, and some Rust specifics make dealing with unsafe code a bit harder compared to C or Zig). Otherwise, the built-in implementations of `Sync` and `Send` on built-in types, as well as the ones automatically inferred by the compiler, are all safe to use, even if both are implemented at the same time.
1.) It is better as it separates error logic from success logic, which is conceptually easier to track.
Well, it really depends… I guess this is a more subjective point. I find explicit handling better (although it doesn't have to be as verbose: Kotlin itself has optionals and they're prettier than Rust's, as does Vlang).
So in return, any function you call that accesses any external resources is required to return a Result type, requiring the error to be propagated up manually all the time, which is very tedious and error-prone in case you forget to propagate it.
I'm now in the middle of writing a full desktop app (a point of sale) in Rust, and I hardly find Results and Options to be an annoyance. Again, I'm not saying that the Rust way is the prettiest, but the idea stands. For a nicer and easier syntax, I think Vlang does pretty well.
I also don't see how it is "error-prone". The best thing about this whole approach is that you can't "forget": you have to do something about it, explicitly in code, to take the value. In fact it's easy to find all possible (explicitly ignored) failure points with a global search for `unwrap`, which helps with local reasoning, unlike exceptions, which you can ignore or forget without issue.
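On the tedium of propagation specifically: in practice most of it is a single `?` per call site, and "forgetting" to propagate is a compile error rather than a silent bug. A hedged sketch (the CSV-summing function is made up for illustration):

```rust
use std::num::ParseIntError;

// Propagation is one character: on Err, `?` returns the error to the caller.
fn sum_csv(line: &str) -> Result<i64, ParseIntError> {
    let mut total = 0;
    for field in line.split(',') {
        total += field.trim().parse::<i64>()?; // early-returns the Err
    }
    Ok(total)
}

fn main() {
    assert_eq!(sum_csv("1, 2, 3"), Ok(6));
    assert!(sum_csv("1, oops, 3").is_err());
    println!("{:?}", sum_csv("10,20"));
}
```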
So Rust has reinvented the wheel of exceptions, which in its ugliness must be controlled via compiler flags, so I always have to ask myself how the library was compiled.
In 99% of cases, you have to ask yourself nothing; it's automatically `unwind`. `abort` exists for very specific use-cases, mainly for obscure processor architectures (usually microcontrollers) that do not support the `unwind` panic runtime (either it's not yet implemented for those targets, or these targets can't support it at all).
And yet, the global panic hook runs no matter the panic runtime used.
Also, you don't have to "ask yourself how a library was compiled". Libraries are compiled the way your project is, so you already know if you've overridden this yourself.
This is literally the least of concerns when it comes to error handling, and even if you code for these obscure platforms, if you're handling errors properly nothing really changes in practice.
That's a marketing gag and isn't true. Rust's memory management doesn't manage memory at compile time; it just knows when to deallocate it at runtime, which is a better version of RAII, and it isn't necessarily more performant than using a GC, though surely more deterministic.
Umm… I wasn't even talking about memory management when talking about zero-cost abstractions… But I don't understand what your alternative is. Having the compiler deallocate nothing (by adding no deallocation code)? Not allocating in the first place?
I don't see your point here. The Rust version of RAII (deallocating owned values at the end of their owning scope) is quite literally doing the same thing C developers do manually (or, well, forget to do, or do twice). That's quite literally the meaning of compile-time memory management in Rust. If you had read the Rust book, I don't think this would have been unclear.
However, what I was actually talking about are high-level constructs (iterators, `Option`/`Result` or any `enum` type, `trait`s, generics, and more). Rust isn't particularly unique in this aspect either, but combined with the memory management constructs it enforces, it can usually optimize them a lot better than other languages.
These are rather optimizations than guarantees. Maybe the JVM is doing something similar to LLVM; I can't argue.
I know it's hard to say, and I also used to think the same about them. But once you realize how a lot of these high-level constructs work under the hood, you'll find it's almost always better to use them instead of writing the same low-level code by hand. This is why premature optimization is strongly discouraged unless supported by benchmarks showing an actual improvement. People did this before, and the compiler beats them 90% of the time.
Keep in mind that you often need a heap in more dynamic contexts, and as far as I'm aware Rust's solution is to use Rc as well, leaving out cycle detection.
I've heard Rust doesn't prevent all cases of reference cycles, leading to memory leaks; Swift likewise.
True. Let me quote myself from the first sentence of a previous post in this thread:
Just a small correction: the borrow checker is not meant to prevent memory leaks; in fact memory leaks are not unsafe in Rust, and you can introduce them in safe code (although Rust's design makes it hard to do). The borrow checker's purpose is to prevent a large host of common memory-safety bugs that are inevitable in a C/C++ codebase.
Memory leaks are possible in safe Rust (and Swift as well); they are not considered unsafe.
However, Rust still has a performance edge over Swift and others because you use Rc, Arc, Mutex, and other heap "box"es only when you need to, while in Swift everything is reference-counted. It's quite literally the same code you would write in C, after all, but safe from nasal demons (UB) and with an easy-to-use high-level API.
In real-world applications, you almost always will use the heap, and Rust helps you do so safely.
As I said, that doesn't mean Rust is faster everywhere.
Swift and other exclusively reference-counted languages are very close to C, C++, Rust, and Zig, because the overhead of reference counting (generally speaking) is less than GC, but they also require you to be a bit more explicit (weak vs. strong refs and so on) and you can introduce reference cycles in them, leading to memory leaks.
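A small sketch of such a leak in safe Rust (the `Node` type is made up; `Rc::strong_count` is the real API, and `Weak` references are the standard way to break such cycles):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two Rc values pointing at each other never reach a strong count of 0,
// so neither is ever dropped: a memory leak with no `unsafe` in sight.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(Some(Rc::clone(&a))) });
    *a.other.borrow_mut() = Some(Rc::clone(&b));

    // Each node is kept alive by the other: both strong counts are 2.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
    // When `a` and `b` go out of scope, counts drop to 1, never 0 -> leaked.
    println!("cycle built; neither Node will ever be freed");
}
```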
Kotlin/Native was initially like this, but it added boilerplate and extra concepts like freezing, which were confusing to Kotlin developers coming from the JVM (myself included; I never got it). I'm pretty happy about the decision to use a GC, since it keeps Kotlin true to its spirit.
I hope this answers your questions!