Why is Int correct? Shouldn't it be Double?

#1

Why is Int correct? Shouldn't it be Double?

fun main() {
    val x: Int = 1
    val y: Int = 3

    // val answer: Double = x / y  // doesn't compile: x / y is an Int
    val answer2: Int = x / y
    println(answer2)
}

/*
0

Process finished with exit code 0
*/

I can’t understand this result.
I hope someone can help me.

#2

When both operands are Int, the / operator performs integer division, which works differently from mathematical division: it discards the fractional part of the result.

If you want to keep the decimal part, you need to work with floating point variables, like Float or Double.
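A minimal sketch of the difference (the exact expressions here are my own, not from the thread):

```kotlin
fun main() {
    println(1 / 3)            // Int / Int → Int division: prints 0
    println(1.0 / 3.0)        // Double / Double → Double division: prints 0.3333...
    println(1.toDouble() / 3) // converting one operand is enough to get a Double result
}
```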

1 Like
#3

Thank you very much for the clear answer.
I didn’t know that Kotlin’s integer division is different from mathematical division.

To keep the decimal part, I should write:

val x: Double = 1.0
val y: Double = 3.0
println(x / y)

I’m happy to know it.

#4

You normally want to use Int and Long for counting objects, etc. For any calculation involving fractions you should use Double or Float. I’d default to Double unless hardware memory constraints force Float, as Double is more precise.
Also, if you do calculations where precision is really important, take a look at BigDecimal. Float and Double can’t represent all decimal fractions exactly, so for calculations with, say, monetary values you should use BigDecimal instead.
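For example, a classic illustration of the rounding problem with Double and how String-based BigDecimal avoids it (the specific values are just an illustration):

```kotlin
import java.math.BigDecimal

fun main() {
    // Double stores binary fractions, so decimal 0.1 and 0.2 are only approximated
    println(0.1 + 0.2 == 0.3)  // false: binary rounding error

    // BigDecimal built from Strings keeps the decimal digits exactly
    val sum = BigDecimal("0.1") + BigDecimal("0.2")
    println(sum)               // 0.3
}
```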

1 Like
#5

I didn’t know that Kotlin’s division is different from a mathematical division.

Good that you know now.
Kotlin (like Java and C/C++) provides

  • floating-point divide on Float/Double (32/64 bits)
  • integer divide on signed/unsigned types (8/16/32/64 bits)

Both differ from mathematical division: floating-point division rounds the result, and integer division truncates it.

Please note that.
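One concrete consequence of truncation worth knowing: Kotlin’s / on integers truncates toward zero, while floorDiv and mod (available on integer types since Kotlin 1.5) round toward negative infinity. A small sketch:

```kotlin
fun main() {
    println(7 / 2)             // 3  — truncates toward zero
    println(-7 / 2)            // -3 — also toward zero, not -4
    println(-7 % 2)            // -1 — the remainder takes the dividend's sign
    println((-7).floorDiv(2))  // -4 — flooring division
    println((-7).mod(2))       // 1  — always a non-negative modulus
}
```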

1 Like
#6

I’ve been working in Dart/Flutter a lot and I kind of like their solution: rather than basing the result on the operand types, Dart has two different operators:

assert(5 / 2 == 2.5); // Result is a double
assert(5 ~/ 2 == 2); // Result is an int
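For comparison, in Kotlin the same two results come from the operand types rather than from separate operators (a sketch of my own, mirroring the Dart example above):

```kotlin
fun main() {
    println(5 / 2)    // 2   — both operands Int, so Int division
    println(5.0 / 2)  // 2.5 — one Double operand, so Double division
    println(5 / 2.0)  // 2.5 — order doesn't matter, the wider type wins
}
```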
1 Like
#7

By definition, any finite representation of numbers will differ from mathematical division. Mathematics has infinitely many numbers, and a finite representation can only represent a finite subset of them. Different representations choose different subsets: integral types represent all mathematical integers within a given range, standard floating point represents the numbers that can be written exactly as integer * 2 ^ integer, and BigDecimal favors numbers of the form integer * 10 ^ integer.
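This can be made visible in Kotlin by asking BigDecimal for the exact value a Double actually stores (an illustration I added, not from the post — note the BigDecimal(Double) constructor, which preserves the binary value exactly):

```kotlin
import java.math.BigDecimal

fun main() {
    // 0.5 = 1 * 2^-1, so it is in the Double subset and stored exactly
    println(BigDecimal(0.5))  // 0.5

    // 0.1 is not of the form integer * 2^integer; the nearest Double is stored instead
    println(BigDecimal(0.1))  // a long decimal slightly above 0.1
}
```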

2 Likes
#8

That explanation is very clear.
Thanks.