And yet I can multiply an int and a double without any problem. How is that?
val i = 1
val d = 1.1
val d2 = i * d
I ask because the topic has come up in another discussion forum, about Apple’s Swift language. They don’t have implicit widening conversions like Java (yet), but that 3rd line isn’t possible either. It becomes something like:
let d2 = Double(i) * d // Swift code
Which gets ugly when you have a longer expression. Do you have implicit conversion in some cases? Or have you defined a bunch of overloads for functions like *, or … ?
The absence of implicit conversions is rarely noticeable, because literals can be used almost freely (their type is inferred from the context), and the arithmetic operations are overloaded for the appropriate conversions.
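To illustrate both points, a small sketch (the Int.times(Double): Double overload really is declared in the standard library; the comments show the output I'd expect):

fun main() {
    val l: Long = 1          // integer literal typed as Long from the context
    val d = 1.1
    // Int declares 'operator fun times(other: Double): Double', so the
    // multiplication yields a Double without any implicit conversion.
    val d2 = 1 * d
    println(l)               // 1
    println(d2)              // 1.1
}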
However, it seems to sort of repeat Java's mistake with "widening" long to double:
fun main(args: Array<String>) {
    val n = java.lang.Long.MAX_VALUE - 1
    val d = 0.0 + n // <- this is lossy!
    val n2 = d.toLong()
    println(n == n2)
}
This prints false. It's not nearly as bad as in Java, since you need an operation here to get the rounding error (and you can expect some precision loss when computing, although adding 0.0 should be safe).
Btw., by clicking on toLong you get into the file Numbers, which declares all the overloads.
I’m happy with Java’s primitive conversions, in practice. In exchange for fast primitives, one should know about this imperfection if you use numbers near the range limits. If you need a perfect number you can always use BigInteger, BigDecimal, etc. I don’t know a way to have the best of both worlds.
I'm happy with Java's primitive conversions, in practice. In exchange for fast primitives, one should know about this imperfection if you use numbers near the range limits.
I am not happy with the Java definition of "widening". In these three cases

int -> float
long -> float
long -> double

you lose precision without any warning. All of them are rather rare, but that only makes it worse, as many programmers are unaware of them. At the same time, requiring an explicit conversion would be no problem, precisely because of their rarity.
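In Kotlin the conversions have to be written out, which at least makes the loss easy to demonstrate. A quick check (variable names are mine; the comments show the output I'd expect):

fun main() {
    val i = Int.MAX_VALUE - 1
    // Float has a 24-bit mantissa, so it cannot represent 2147483646 exactly.
    println(i.toFloat().toInt() == i)        // false

    val n = Long.MAX_VALUE - 1
    // Double has a 53-bit mantissa, so it cannot represent 2^63 - 2 exactly.
    println(n.toDouble().toLong() == n)      // false
    println(n.toFloat().toLong() == n)       // false: long -> float loses even more
}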
If you need a perfect number you can always use BigInteger, BigDecimal, etc.
Sure, but this is no solution, as the problem is not the precision loss itself. The problem is that it happens without anyone expecting it, and that it does so in a language which is otherwise pretty safe.
For Kotlin, I'd suggest
int OP float -> double
long OP float FORBIDDEN
long OP double FORBIDDEN
So that there’s no precision loss without an explicit cast.
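A small sketch of what the proposed promotion would buy, expressed with today's operators and explicit widening (variable names are mine; the comments show the output I'd expect):

fun main() {
    val i = Int.MAX_VALUE - 1
    val f = 1.0f
    // Today, Int OP Float yields a Float and can silently lose precision:
    println((i * f).toInt() == i)                // false
    // Promoting to Double instead is lossless for every Int and Float:
    println((i.toDouble() * f).toInt() == i)     // true
}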
I agree that theoretically it is not the ideal thing. But I'm curious, do you (or anyone) have a story of encountering a bug in real code related to this? I have not, ever.
To prevent precision loss without explicit cast, you’d need to go further than banning the operations you listed. There are overflows, and floats and doubles degrade before they overflow. For example, pick two big doubles and start doing basic arithmetic… you’ll think you’re going insane…
double a = Double.MAX_VALUE / 1234.0;
double b = a - 100;
System.out.println(a == b);
System.out.println(a - b);
Prints ‘true’ and ‘0.0’. I didn’t know much of this until recently.
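The reason is that 100 is far smaller than the gap between adjacent doubles at that magnitude, which you can see with Math.ulp (a quick Kotlin check; the printed magnitude is what I'd expect):

fun main() {
    val a = Double.MAX_VALUE / 1234.0
    // The spacing between adjacent doubles near 1.46e305 is about 1.95e289,
    // so subtracting 100 rounds right back to the same value.
    println(Math.ulp(a))       // about 1.95e289
    println(a - 100 == a)      // true
}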
I agree that theoretically it is not the ideal thing. But I'm curious, do you (or anyone) have a story of encountering a bug in real code related to this? I have not, ever.
Neither have I, but someone surely has.
To prevent precision loss without explicit cast, you'd need to go further than banning the operations you listed. There are overflows, and floats and doubles degrade before they overflow. For example, pick two big doubles and start doing basic arithmetic... you'll think you're going insane...
I know about this and everybody should. This is something we have to live with, unless we want to use much more costly datatypes. But compare the three conversions:
float f = …;
double d = f;          // really widening, safe, no cast needed

double d = …;
float f = (float) d;   // narrowing, unsafe, cast needed

long l = ...;
double d = l;          // not really widening, unsafe, but no cast needed
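For contrast, Kotlin makes the conversion itself explicit, but the arithmetic overloads still apply it silently; a small sketch (the comments show the output I'd expect):

fun main() {
    val l = Long.MAX_VALUE - 1
    val viaExplicit = l.toDouble()   // the conversion is spelled out here
    val viaOperator = 0.0 + l        // ...but hidden inside the plus overload
    println(viaExplicit == viaOperator)   // true: same lossy conversion
    println(viaOperator.toLong() == l)    // false: precision was lost
}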