Why was Double#toShort() deprecated?

I’ve just upgraded a project to use Kotlin 1.4 and am seeing these warnings when converting a Double to Short using toShort():

Unclear conversion. To achieve the same result convert to Int explicitly and then to Short

Exactly what is unclear about conversion from Double to Short? If it is unclear, then so is converting to Int. Obviously if the value is outside the range representable by Short the result will be undefined, but that will be true even if converted to Int first.

It seems to me that using toInt().toShort() is less clear and more verbose than simply using toShort() and has exactly the same result.

Is there something I have overlooked?

You can read the motivation here: https://youtrack.jetbrains.com/issue/KT-30360

Current behavior could produce counter-intuitive results, like getting a positive Byte value from a negative Double and vice-versa.

    val double = 129.0
    println(double.toByte()) // prints -127

I don’t see how this deprecation helps - the suggested replacement:

    val double = 129.0
    println(double.toInt().toByte())

Still prints -127. Whether this is “counter-intuitive” or not is a matter of opinion (to me it’s perfectly logical), but it is certainly not unexpected, because that’s how the JVM specifies it. I suspect the issue author does not work in realms where Byte and Short are used very much (byte and bit manipulation is an area where Kotlin is still somewhat deficient in general).
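
For what it’s worth, the -127 is plain two’s-complement truncation: Int -> Byte keeps only the low 8 bits, which a couple of lines make visible:

    val i = 129                 // 0b1000_0001 in the low byte
    println(i.toByte())         // -127: the low 8 bits reinterpreted as a signed byte
    println(i and 0xFF)         // 129: the same 8 bits read back as an unsigned value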

If the motivation expressed in the issue were to be taken to its logical conclusion, then Double#toInt() would also be deprecated since it will produce results even more counter-intuitive than Double#toShort(), e.g.

    val double = Int.MAX_VALUE * 20.0
    println("double = $double, double.toInt() = ${double.toInt()}")

prints

    double = 4.294967294E10, double.toInt() = 2147483647

Here the result isn’t related in any way to the original value; it’s simply the maximum value of Int. Again this is not unexpected, because it is how the JVM specifies it, but it’s quite arbitrary and thus to me much less intuitive than a truncation.
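
To make the arbitrariness concrete: routing the same value through Long (which holds it exactly) and then truncating yields a wrapped result rather than the clamped one (a sketch for comparison, not an endorsement of either behavior):

    val big = Int.MAX_VALUE * 20.0
    println(big.toInt())           // 2147483647: Double -> Int clamps at Int.MAX_VALUE
    println(big.toLong().toInt())  // -20: Long -> Int keeps the low 32 bits (wraps)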

There is a valid argument that overflow should be detected when converting from one data type to another that cannot represent the full range of values of the original, but this would be better done as a run-time exception when the value is truly unrepresentable, rather than as a blanket prohibition of the conversion. Forcing coders to write more verbose (and thus less readable) code that still suffers from exactly the same shortcomings is not, IMHO, an improvement.
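
A rough sketch of that approach, with the hypothetical name toShortExact (not a stdlib function):

    // Hypothetical checked conversion; not part of the Kotlin stdlib.
    fun Double.toShortExact(): Short {
        if (this !in Short.MIN_VALUE.toDouble()..Short.MAX_VALUE.toDouble())
            throw ArithmeticException("$this is not representable as a Short")
        return toInt().toShort()    // safe: the value is already in the Short range
    }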

Personally I would expect Double.toShort() to coerce the value into Short.MIN_VALUE…Short.MAX_VALUE (as with Int), so getting a value with the opposite sign would be unexpected.

However, when truncating integer types, I do expect sign changes to be a possibility.

So, for me at least, the Double.toShort() behavior was unintuitive, and Double.toInt().toShort() is intuitive.
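
Concretely, the two steps treat out-of-range values in those two different ways:

    println(1e10.toInt())       // 2147483647: Double -> Int coerces into the Int range
    println(40000.toShort())    // -25536: Int -> Short truncates, so the sign flips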

The suggested replacement is more explicit, because it has two conversion steps, Double->Int and Int->Short, and each of them deals with values outside the target type’s range in its own way:

  • Double.toInt() clamps the number into the Int range,
  • Int.toShort() discards the highest bits of the number.

By contrast, it is the combination of these two approaches in the single function Double.toShort that is counter-intuitive and hardly what one would expect. The sketch below traces both steps.
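
For example, following a single out-of-range value through both steps:

    val d = 1e10                // far outside both the Int and the Short ranges
    val i = d.toInt()           // step 1: clamped to Int.MAX_VALUE = 2147483647
    val s = i.toShort()         // step 2: low 16 bits kept -> -1
    println("$i, $s")           // prints: 2147483647, -1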

Note that the JVM doesn’t specify how the conversion from Double to Short should be done; it only provides bytecode instructions for some primitive numeric type conversions, and it doesn’t even have a single instruction to convert a double value to a short: see Chapter 2. The Structure of the Java Virtual Machine

I agree that the conversion from Double to Short became more verbose in 1.4, so in the future we may introduce the following variants of the Double->Short conversion:

  • one that clamps a number into the Short range,
  • one that throws an exception if a number is out of the Short range,
  • one that returns null if a number is out of the Short range.

It’s unlikely that we’ll reintroduce the variant of the conversion that behaves as the one formerly known as Double.toShort().
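
A minimal sketch of what such variants might look like, using hypothetical names (toShortClamped, toShortOrNull, toShortExact), none of which exist in the stdlib:

    // Hypothetical variants; none of these names are in the Kotlin stdlib.
    fun Double.toShortClamped(): Short =
        coerceIn(Short.MIN_VALUE.toDouble(), Short.MAX_VALUE.toDouble())
            .toInt().toShort()      // NaN would come out as 0, mirroring d2i

    fun Double.toShortOrNull(): Short? =
        if (this in Short.MIN_VALUE.toDouble()..Short.MAX_VALUE.toDouble())
            toInt().toShort() else null

    fun Double.toShortExact(): Short =
        toShortOrNull() ?: throw ArithmeticException("$this is out of the Short range")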


My basic gripe is that although I agree the current behaviour can produce surprising results, which is undesirable, the change did nothing to rectify that.

It seems to be a case where a problem was identified, but an ideal solution was currently impractical so a change was made for the sake of being seen to do “something”.

If that is done, I would strongly recommend not implementing the first option (clamping). I consider the clamping done by the JVM when converting double to int to be a design flaw, so why replicate it?

The second option (throwing an exception) seems to me to be the right approach. It’s clear, at least to me, that a programmer should never rely on clamping, truncation or other “best-effort” approaches - the value to be converted should fit and if it doesn’t there should preferably be a hard failure. If the compiler is smart enough then it would be able to omit the runtime check if static analysis proves the value is always in range.
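
The JVM world already has a precedent for the throwing approach: java.lang.Math.toIntExact refuses to silently truncate a long to an int:

    println(Math.toIntExact(42L))        // 42
    println(Math.toIntExact(1L shl 40))  // throws ArithmeticException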

The third option (returning null) seems like a step back in time for little obvious benefit.
