Your code isn't C, but this is:
#include <stdio.h>

int main(void)
{
    float add = 0;
    int count[] = { 3, 2, 1, 3, 1, 2, 3, 3, 1, 2, 1 };
    for (int i = 0; i < 11; i++) {
        add += 1 / ((float) count[i] + 1);
    }
    printf("%f\n", add);
    return 0;
}
I've executed this code with add += 1 / ((float) count[i] + 1); and with add += 1.0 / ((float) count[i] + 1);. In both cases, printf("%f\n", add); prints 4.000000.
However, when I print the individual bits of the variable add, the two versions give me 01000000011111111111111111111111 (3.9999998) and 01000000100000000000000000000000 (4.0).
As pointed out by phuclv, this is because 1.0 is a double, so each division is carried out in double precision, whereas with 1 both operands of the division are float (the int 1 is converted because of the cast on count[i]), so the division is carried out in single precision. The two variants therefore accumulate different rounding errors. If you cast to (double) instead of (float) in the first version, or change 1.0 to 1.0f in the second, both versions will produce the same result.