I am reading the book CS-APPe2. C has unsigned and signed int types, and most architectures use two's-complement arithmetic to implement signed values; but after learning some assembly code, I found that very few instructions distinguish between unsigned and signed. So my questions are:
1. Is it the compiler's responsibility to differentiate signed and unsigned? If yes, how does it do that?
2. Who implements the two's-complement arithmetic - the CPU or the compiler?
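To state the question concretely, here is a tiny C snippet (my own illustration, assuming a 32-bit int as on x86) showing that a single bit pattern is read as two different values depending only on the declared type:

    #include <stdio.h>

    int main(void)
    {
        unsigned int u = 0xFFFFFFFFu;  /* all 32 bits set */
        int s = -1;                    /* two's complement: also all 32 bits set */

        printf("as unsigned: %u\n", u);  /* prints 4294967295 */
        printf("as signed:   %d\n", s);  /* prints -1 */
        return 0;
    }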
Adding some more info:
After learning some more instructions, I found that a few of them actually do differentiate between signed and unsigned, such as setg, seta, etc. (a small test case illustrating this is at the end of the question). Further, CF and OF apply to unsigned and signed, respectively. But most integer arithmetic instructions treat unsigned and signed the same, e.g.
    int s = a + b;

and

    unsigned s = a + b;

generate the same instruction.
So when executing ADD s, d, should the CPU treat s and d as unsigned or signed? Or is it irrelevant, because the bit patterns of both results are the same, and it is the compiler's task to interpret the underlying bit pattern as unsigned or signed?
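Here is a sketch of what I think is happening (my own assumption, again assuming a 32-bit int and two's complement): the single add produces one bit pattern, and only the way the program later interprets that pattern differs:

    #include <stdio.h>

    int main(void)
    {
        int          sa = -7, sb = 2;
        unsigned int ua = (unsigned int)sa, ub = (unsigned int)sb;

        int          ssum = sa + sb;   /* signed add: -5 */
        unsigned int usum = ua + ub;   /* unsigned add: wraps mod 2^32 */

        /* Same bit pattern either way; only the printf interpretation differs. */
        printf("signed:   %d  bits=0x%08X\n", ssum, (unsigned int)ssum);
        printf("unsigned: %u  bits=0x%08X\n", usum, usum);
        return 0;
    }

Both lines print bits=0xFFFFFFFB; one shows -5 and the other 4294967291.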
P.S. I am using x86 and gcc.
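For the comparison case mentioned above, this is the kind of test I mean (a minimal sketch; the function names are just placeholders). If I understand correctly, gcc typically emits cmp followed by setg for the signed version and cmp followed by seta for the unsigned one, though the exact output depends on the gcc version and optimization flags:

    /* cmp.c - compile with e.g. "gcc -O1 -S cmp.c" and inspect the generated cmp.s */
    int      cmp_signed  (int a, int b)           { return a > b; }
    unsigned cmp_unsigned(unsigned a, unsigned b) { return a > b; }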