Why is the default type a double?
That's a question that would be best asked of the designers of the Java language. They are the only people who know the real reasons why that language design decision was made. But I expect that the reasoning was something along the following lines:
They needed to distinguish between the two types of literals because they do actually mean different values ... from a mathematical perspective.
Supposing they made "float" the default for literals, consider this example:
// (Hypothetical "java" code ... )
double d = 0.1;
double d2 = 0.1d;
In the above, d and d2 would actually have different values. In the first case, a low-precision float value is converted to a higher-precision double value at the point of assignment. But you cannot recover precision that isn't there.
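To make the precision loss concrete, here is a small runnable sketch in current Java, where the float literal 0.1f stands in for the hypothetical default-float 0.1 (the class name is mine):

public class LiteralPrecisionDemo {
    public static void main(String[] args) {
        double d = 0.1f;   // float literal, widened to double at assignment
        double d2 = 0.1;   // double literal

        System.out.println(d);        // 0.10000000149011612
        System.out.println(d2);       // 0.1
        System.out.println(d == d2);  // false
    }
}

The widened value is the nearest float to 0.1, not the nearest double, so the error is locked in before the assignment happens.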
I posit that a language design where those two statements are both legal and mean different things is a BAD idea ... considering that the actual meaning of the first statement is different to the "natural" meaning.
By doing it the way they've done it:
double d = 0.1f;
double d2 = 0.1;
are both legal, and mean different things again. But in the first statement, the programmer's intention is clear, and in the second statement the "natural" meaning is what the programmer gets. And in this case:
float f = 0.1f;
float f2 = 0.1; // compilation error!
... the compiler picks up the mismatch.
I am guessing using floats is the exception and not the rule (using doubles instead) with modern hardware so at some point it would make sense to assume that the user intends 0.1f when he writes float f = 0.1;
They could do that already. But the problem is coming up with a set of type conversion rules that work ... and are simple enough that you don't need a degree in Java-ology to actually understand. Having 0.1 mean different things in different contexts would be confusing. And consider this:
void method(float f) { ... }
void method(double d) { ... }
// Which overload is called in the following?
this.method(1.0);
Programming language design is tricky. A change in one area can have consequences in others.
UPDATE to address some points raised by @supercat.
@supercat: Given the above overloads, which method will be invoked for method(16777217)? Is that the best choice?
I incorrectly commented ... compilation error. In fact, the answer is method(float).
The JLS says this:
15.12.2.5. Choosing the Most Specific Method
If more than one member method is both accessible and applicable to a
method invocation, it is necessary to choose one to provide the
descriptor for the run-time method dispatch. The Java programming
language uses the rule that the most specific method is chosen.
...
[The symbols m1 and m2 denote methods that are applicable.]
[If] m2 is not generic, and m1 and m2 are applicable by strict or
loose invocation, and where m1 has formal parameter types S1, ..., Sn
and m2 has formal parameter types T1, ..., Tn, the type Si is more
specific than Ti for argument ei for all i (1 ≤ i ≤ n, n = k).
...
The above conditions are the only circumstances under which one method
may be more specific than another.
A type S is more specific than a type T for any expression if S <: T
(§4.10).
In this case, we are comparing method(float) and method(double), which are both applicable to the call. Since float <: double, float is more specific, and therefore method(float) will be selected.
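This is easy to verify with a small test (the class is mine; 16777217 is 2^24 + 1, the first integer that a float cannot represent exactly):

public class OverloadDemo {
    static void method(float f)  { System.out.println("float:  " + f); }
    static void method(double d) { System.out.println("double: " + d); }

    public static void main(String[] args) {
        method(16777217);   // "float:  1.6777216E7" -- the float overload wins,
                            // and the value is silently rounded to 2^24
        method(16777217.0); // "double: 1.6777217E7" -- a double argument can
                            // only match the double overload
    }
}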
@supercat: Such behavior may cause problems if e.g. an expression like int2 = (int) Math.Round(int1 * 3.5) or long2 = Math.Round(long1 * 3.5) gets replaced with int1 = (int) Math.Round(int2 * 3) or long2 = Math.Round(long1 * 3). The change would look harmless, but the first two expressions are correct up to 613566756 or 2573485501354568, and the latter two fail above 5592405 [the last being completely bogus above 715827882].
If you are talking about a person making that change ... well yes.
However, the compiler won't make that change behind your back. For example, int1 * 3.5 has type double (the int is converted to a double), so you end up calling Math.round(double).
As a general rule, Java arithmetic will implicitly convert from "smaller" to "larger" numeric types, but not from "larger" to "smaller".
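For instance (a minimal sketch of the rule):

long l = 42;       // int -> long: implicit widening conversion
double d = l;      // long -> double: also implicit
// int i = d;      // double -> int: does not compile without a cast
int i = (int) d;   // narrowing requires an explicit cast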
However, you do still need to be careful since (in your rounding example):

- the product of an integer and a floating-point value may not be representable with sufficient precision, because (say) a float has fewer bits of precision than an int;

- Math.round(double) clamps values outside the range of long to Long.MIN_VALUE / Long.MAX_VALUE, and casting the resulting long to a narrower integer type silently discards the high-order bits.
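Both gotchas can be demonstrated directly (the numbers below are mine, chosen to trigger the edge cases):

public class RoundingGotchas {
    public static void main(String[] args) {
        int big = 16777217;             // 2^24 + 1: exact as an int, not as a float
        float product = big * 3.0f;     // the int is widened to float first
        System.out.println(product);    // 5.0331648E7, not the exact 50331651

        long rounded = Math.round(1e300);  // far outside the range of long
        System.out.println(rounded);       // 9223372036854775807, i.e. Long.MAX_VALUE
        System.out.println((int) rounded); // -1: the cast keeps only the low 32 bits
    }
}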
But all of this illustrates that arithmetic support in a programming language is tricky, and there are inevitable gotchas for a new or unwary programmer.