That looks a lot like integer division to me, since you are dividing two integers. In most, if not all, programming languages, dividing two integers discards the remainder.
Think about how that line is evaluated: first the integer division is performed, and only then is the integer result converted to a float to be stored in the float variable.
Note that integers don't store any decimals at all, so even 0.999 stored as an integer would be 0. It's not a rounding problem.
It's also not about the denominator being larger than the numerator. Try dividing 100/30: the result is 3, not 3.33333… as it would be with float division.
You can solve this by casting the operands to float, or by writing them as float literals in the first place.
Casting
option2 = ((float)50/(float)100);
Dividing floats
option2 = 50.0f/100.0f;