It is very implementation-dependent.
For one example, on the x86 platform the FPU instruction set includes instructions for loading and storing data in the IEEE754 `float` and `double` formats (as well as several other formats). The data is loaded into internal FPU registers that are 80 bits wide, so in reality all x87 floating-point calculations are performed with 80-bit precision, i.e. all floating-point data is effectively promoted to 80-bit precision. How the data is represented inside those registers is completely irrelevant, since you cannot observe them directly anyway.
This means that on the x86 platform there is no such thing as a single-step float-to-double conversion. Whenever such a conversion is needed, it is actually performed in two steps: float-to-internal-FPU-format and internal-FPU-format-to-double.
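As a small illustration of that two-step process, here's a sketch of what a plain C conversion typically turns into under 32-bit x87 code generation (e.g. gcc -m32 -mfpmath=387); the instruction sequence in the comment is what such compilers commonly emit, not a guarantee:

```c
/* Typical x87 code for the assignment below:
 *     fld  dword ptr [f]   ; IEEE754 float -> 80-bit st(0)
 *     fstp qword ptr [d]   ; 80-bit st(0)  -> IEEE754 double
 * There is no dedicated float->double instruction involved. */
void widen(const float *f, double *d)
{
    *d = (double)*f;
}
```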
This, incidentally, creates a significant semantic difference between the x86 FPU computation model and the C/C++ computation model. To fully match the language semantics, the compiler has to forcefully reduce the precision of intermediate floating-point results (by storing them back to memory as `float` or `double`), which hurts performance. That is why many compilers provide options that control the FPU computation model, allowing the user to opt for strict C/C++ conformance, for better performance, or for something in between.
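Here is a sketch of where that difference becomes observable. Whether you actually see "not equal" depends on the compiler, optimization level and FPU-related options; the flags named in the comment are the usual GCC/MSVC knobs for controlling this:

```c
#include <stdio.h>

/* With strict IEEE double evaluation (e.g. SSE code generation, GCC's
 * -mfpmath=sse or -fexcess-precision=standard, -ffloat-store, or MSVC's
 * /fp:strict) the comparison below is true. With classic x87 code
 * generation, the quotient may be rounded to 64 bits on one side of the
 * comparison and kept at 80-bit precision on the other, so "not equal"
 * can be observed. */
volatile double a = 1.0, b = 3.0;   /* volatile just blocks constant folding */

int main(void)
{
    double y = a / b;               /* may stay in an 80-bit register */

    if (y == a / b)                 /* quotient recomputed for the comparison */
        printf("equal\n");
    else
        printf("not equal\n");      /* possible with x87 excess precision */
    return 0;
}
```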
Not so many years ago the FPU was an optional component of the x86 platform. Floating-point computations on FPU-less systems were performed in software, either by emulating the FPU or by generating code with no FPU instructions at all. In such implementations things could work differently; for example, the conversion from IEEE754 `float` to IEEE754 `double` could be carried out directly in software, as sketched below.
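For illustration, here's roughly what such a direct software conversion might look like, operating on raw IEEE754 bit patterns. The function name and structure are purely illustrative, not taken from any real soft-float library:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Direct software widening of an IEEE754 single-precision bit pattern
 * into a double-precision bit pattern, with no FPU involved.
 * float:  1 sign bit,  8 exponent bits (bias 127),  23 fraction bits
 * double: 1 sign bit, 11 exponent bits (bias 1023), 52 fraction bits */
static uint64_t f32_bits_to_f64_bits(uint32_t f)
{
    uint64_t sign = (uint64_t)(f >> 31) << 63;
    int      exp  = (int)((f >> 23) & 0xFF);
    uint64_t frac = f & 0x7FFFFFu;

    if (exp == 0xFF)                      /* infinity or NaN: keep the payload */
        return sign | (0x7FFULL << 52) | (frac << 29);

    if (exp == 0) {
        if (frac == 0)                    /* signed zero */
            return sign;
        /* Subnormal float: every subnormal float is a normal double,
         * so shift left until the implicit leading 1 appears. */
        while (!(frac & 0x800000u)) {
            frac <<= 1;
            exp--;
        }
        exp++;                            /* subnormals use an effective exponent of 1 */
        frac &= 0x7FFFFFu;                /* drop the leading 1 (implicit in a normal) */
    }

    /* Rebias the exponent (127 -> 1023) and widen the fraction (23 -> 52 bits). */
    return sign | ((uint64_t)(exp + 896) << 52) | (frac << 29);
}

int main(void)
{
    float    in = 0.1f;
    uint32_t fbits;
    uint64_t dbits;
    double   out;

    memcpy(&fbits, &in, sizeof fbits);
    dbits = f32_bits_to_f64_bits(fbits);
    memcpy(&out, &dbits, sizeof out);

    printf("%.17g\n", out);              /* same value as (double)0.1f */
    return 0;
}
```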