Wherever the target type int64 is explicit, Delphi applies an implicit typecast:
const MyVal: int64 = $FFFFFFFFFFFFFFFF; // accepted as -1 (with a warning)
const MyVal = $FFFFFFFFFFFFFFFF; // target type absent, needs manual correction:
const MyVal = int64($FFFFFFFFFFFFFFFF);
In Delphi, the implicit typecast to int64 overrules the strict range checking,
and the int64 value is loaded correctly. The same happens if there are no overloads:
procedure writeSomeInt( i: int64);
writeSomeInt( $FFFFFFFFFFFFFFFF); // implicit typecast (i becomes -1)
Problems with old negative 64-bit literals occur only if the target type is indefinite (overloads) or absent (so the literal is taken as uint64).
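A minimal sketch of both problem cases, in the style of the example above (the procedure name show and the uint64 overload are illustrative, not taken from any real unit):

procedure show( i: int64); overload;
procedure show( u: uint64); overload;

show( $FFFFFFFFFFFFFFFF);         // target type indefinite: two overloads match, literal taken as uint64
const C = $FFFFFFFFFFFFFFFF;      // target type absent: C becomes a uint64 constant, not -1
show( int64($FFFFFFFFFFFFFFFF));  // explicit cast restores the old meaning (-1)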
The porting situation is not as dramatic as it might appear at first glance.
Typical application developers (the majority of Lazarus users) don't have such 64-bit hex literals in their source code. Occurrences increase roughly in this order: scientists, engineers, system developers, and finally compiler developers, because they need to cover the whole language with all its edge cases. In short, compiler developers have the most work, but also the most experience with low-level code.
And it is less a decision between incompatible and compatible
than between being compatible with one side (1) or with the other side (2):
1) with other languages (Delphi, ...), the FPC docs (int64 literals are undocumented),
logical assumptions, a smooth transition to 128-bit literals, and first impressions
in the professional/academic world (how hexadecimal literals are interpreted)
2) with old Free Pascal sources containing untyped negative 64-bit hex literals