I'm using FPC (with -Mdelphi) and am trying to get a formal idea of how signed and unsigned types interact, and how operand sizes are determined.
Suppose u1 and u2 are unsigned (DWord/Cardinal).
How does

    u1 - u2

get evaluated?
Should the compiler see unsigned operands and treat the whole expression as unsigned, or should it widen to a signed type, since the - can make the result negative?
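For concreteness, here is a minimal program (my own sketch, compiled with fpc -Mdelphi) that makes the behaviour observable; I deliberately avoid predicting the output, since that is exactly what's in question:

    program SubDemo;
    {$mode delphi}
    var
      u1, u2: Cardinal;
    begin
      u1 := 1;
      u2 := 2;
      { If the subtraction is performed as a 32-bit unsigned operation,
        this wraps around to 4294967295; if the operands are first
        widened to a signed 64-bit type, it prints -1. }
      writeln(u1 - u2);
    end.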
Given

    const
      d = 18446744073709551615;
      e = $FFFFFFFFFFFFFFFF;
    ...
    writeln(d);
    writeln(e);
I get the unsigned value for d, but the signed value (-1) for e.
What determines that?
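(For what it's worth, one way I found to sidestep the ambiguity is to pin the constant's type explicitly, assuming typed constants and the QWord cast behave as documented; this is a workaround, not an answer to why the defaults differ:)

    const
      d2: QWord = 18446744073709551615;     { typed constant: forced unsigned }
      e2: QWord = QWord($FFFFFFFFFFFFFFFF); { cast pins the hex literal to unsigned }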
I guess the overall question is: how does the compiler determine whether each step of an expression is a signed or an unsigned operation?
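One empirical probe I've considered (names made up for illustration): since Delphi-mode overload resolution prefers an exact match on the expression's computed type, a set of overloaded routines can reveal which type the compiler assigned to a subexpression. The result should be read with some care, because overload resolution has its own preference rules when no exact match exists:

    program TypeProbe;
    {$mode delphi}

    procedure Report(x: Integer); overload;
    begin writeln('Integer'); end;
    procedure Report(x: Cardinal); overload;
    begin writeln('Cardinal'); end;
    procedure Report(x: Int64); overload;
    begin writeln('Int64'); end;
    procedure Report(x: QWord); overload;
    begin writeln('QWord'); end;

    var
      u1, u2: Cardinal;
    begin
      u1 := 1; u2 := 2;
      Report(u1 - u2);  { prints the type the compiler chose for the subtraction }
    end.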