A colleague argues that performing a proper null check of a mutable reference via a local copy of that reference would be “too costly” in terms of memory. The overhead would be 2 * 8 bytes for the references on a 64-bit CPU architecture. I consider this claim ridiculous by itself, but now I’m curious what the overhead of his alternative, nested let blocks with safe-call operators, is.
Version with local copy:
var a: Something? = null

fun doWithMutableThing() {
    val aCopy = a
    if (aCopy != null) {
        aCopy.perform()   // aCopy can be smart-cast; the mutable property a cannot
    }
}
Presumed memory overhead: 8 bytes.
Version with let:
fun doWithMutableThing() {
    a?.let { it.perform() }
}
(The variant with let actually looks better in my simplified example. But let loses its beauty with multiple references and nested blocks.)
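For illustration, a sketch of what the nested variant might look like with two mutable references (the names b, c, and doWithTwoMutableThings are made up for this example):

var b: Something? = null
var c: Something? = null

fun doWithTwoMutableThings() {
    // nested safe-call + let: each block only runs when its receiver is non-null
    b?.let { bValue ->
        c?.let { cValue ->
            bValue.perform()
            cValue.perform()
        }
    }

    // the same logic with local copies: flat, but two extra locals on the stack
    val bCopy = b
    val cCopy = c
    if (bCopy != null && cCopy != null) {
        bCopy.perform()
        cCopy.perform()
    }
}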
According to this article, a local copy is created there as well, so this solution should not be better than the one above.
How many references are checked for null at the deepest point of the call stack, and how many threads do you have? Speculating: 1,000 checks on 1,000 threads = 1,000 × 1,000 × 8 bytes = 8 MB (= 8 kB per thread). But I think I am overestimating here.
If you worry about this amount of memory, then Kotlin is likely the wrong choice.
let(...) is an inline function, so the lambda is not allocated on the heap; it also declares the contract callsInPlace(block, InvocationKind.EXACTLY_ONCE), which tells the compiler the block runs exactly once. So there will be no memory overhead.
And there is a good chance the additional local variable will be optimized away by the JIT (on the JVM).
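As a rough sketch (not the exact bytecode, just the shape the compiler produces after inlining let), a?.let { it.perform() } behaves roughly like this:

fun doWithMutableThingInlined() {
    // the receiver of ?. is evaluated once into a compiler-generated temporary,
    // so the null check and the call are guaranteed to see the same value
    val tmp = a              // name is illustrative; the real temporary is synthetic
    if (tmp != null) {
        tmp.perform()        // no lambda object, no extra heap allocation
    }
}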
My conclusion is that the memory and CPU overhead incurred by the local copy is impossible to measure in a real-world scenario; the overhead will get lost in the noise.
In the concrete scenario we have only two references and usually only one thread (the check is more or less to satisfy the compiler and for the unlikely case that it is actually null).
Do you mean that let plus the safe-call operator actually has zero memory overhead? Does callsInPlace(block, InvocationKind.EXACTLY_ONCE) avoid a NullPointerException without a local copy of the reference on the stack?
I agree with you that this is not a real-world problem, but now I want to know.
I cannot state this as an absolute certainty, but I think it is very likely. If you are on the JVM, you can do some JIT logging (I have never done this myself though) to see what code is generated.
I would say that in most cases a chain of safe-call operators isn’t really an issue (even if only the first part of the chain is nullable). The only case where a difference could exist is with primitives that need boxing. That, however, is also an optimization a compiler can quite easily do itself (determining that the result’s nullability is decided purely by the input’s nullability), so I’d go with readability unless you actually have a concrete speed problem.
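To illustrate the boxing point, a small hedged sketch (Car, Engine, and horsePower are invented for the example): when a safe-call chain yields a primitive, the result type becomes nullable and is boxed on the JVM.

class Engine(val horsePower: Int)
class Car(val engine: Engine?)

fun printPower(car: Car?) {
    // horsePower is a primitive Int, but the chain produces Int?,
    // which is represented as a boxed java.lang.Integer when non-null
    val power: Int? = car?.engine?.horsePower
    if (power != null) {
        println("Power: $power")   // unboxed back to a primitive int here
    }
}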