You normally want to use Int and Long for counting objects, indexing, and other whole-number work. For calculations that involve fractional values, use Double or Float. I'd default to Double as long as hardware or memory constraints don't force you to use Float, since Double carries roughly twice the precision.
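As a minimal sketch of that precision difference (assuming Kotlin, since the type names match), here's a value that fits a Double but already gets rounded in a Float:

```kotlin
fun main() {
    // Float has ~7 significant decimal digits, Double ~15-16.
    val f: Float = 123456789f   // more digits than a Float mantissa can hold
    val d: Double = 123456789.0

    println(f)  // 1.23456792E8 — the last digits are already wrong
    println(d)  // 1.23456789E8 — still exact
}
```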
Also, if you do calculations where exact decimal precision really matters, take a look at BigDecimal. Float and Double can't represent most decimal fractions exactly, so for things like monetary values you should use BigDecimal instead.
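For example (Kotlin again, using java.math.BigDecimal from the JDK), the classic 0.1 + 0.2 case shows the difference:

```kotlin
import java.math.BigDecimal

fun main() {
    // 0.1 and 0.2 have no exact binary representation,
    // so the rounding error shows up in the Double result.
    println(0.1 + 0.2)              // 0.30000000000000004

    // BigDecimal built from String literals keeps the decimal values exactly.
    val a = BigDecimal("0.10")
    val b = BigDecimal("0.20")
    println(a + b)                  // 0.30
}
```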
By definition, any finite representation of numbers is going to be different from mathematical division. There are infinitely many numbers in mathematics, and a finite representation can only cover a finite subset of them; different representations simply choose different subsets. Integral types represent all mathematical integers within a given range. Standard floating point represents the subset of numbers that can be written exactly as integer * 2 ^ integer. BigDecimal favors the subset of the form integer * 10 ^ integer.
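You can see those two subsets directly by passing a Double into BigDecimal's constructor, which exposes the exact binary value the Double holds (a small illustration, not a recommendation to construct BigDecimal from Double in real code):

```kotlin
import java.math.BigDecimal

fun main() {
    // 0.5 is 1 * 2^-1, so a Double stores it exactly.
    println(BigDecimal(0.5))    // 0.5

    // 0.1 is not integer * 2^integer, so the Double holds the nearest binary value.
    println(BigDecimal(0.1))    // 0.1000000000000000055511151231257827021181583404541015625

    // Built from a String, BigDecimal stores exactly 1 * 10^-1.
    println(BigDecimal("0.1"))  // 0.1
}
```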