> And once again. It should be adapting to the type on the left side.

The problem is, the scanner cannot adapt to the type on the left side because the scanner doesn't have a clue what type $FF is.
> There would be a few better solutions (at least in my opinion) to that problem:
>
> 1. Literals like in C and C++, where $FFU would be unsigned, $FFUL would be unsigned long, and so on. Constants could then be defined as always being signed 32-bit unless a postfix identifies them otherwise.
>
> 2. Use another intermediate representation in the lexer and let the parser decide. I think this is how LLVM handles it: internally it simply uses arbitrary-precision integers and does the type conversion at the very end, once all the necessary information is available (which presumably also helps with optimization).
>
> 3. Simply treat everything without a - as positive. $FF would always be 255, because if you wanted -1 you would have written -1. In my opinion this is the most intuitive solution: if you write a positive number you get a positive number, and if you want a negative number you write a negative number.

There is always more than one way to skin a cat, but the solutions you propose all share the same deficiency: they all assume that the hex literal will be used as a number (signed or unsigned) rather than, possibly, as a bitmask.

> But bitmasks are numbers.

They _can_ be interpreted as numbers, but doing so is often inconvenient (and/or problematic). On top of that, whenever they are interpreted as numbers they must be assigned either a signed or an unsigned type, and there are always cases where the assigned sign is not the desirable (or convenient) one. Even though a bitmask can be, and very often is, expressed as a number, a bitmask is a structural map, not really a number.

> All in all, the fact that you needed to write this post to explain the behaviour of FPC seems to me an indicator that the system, as it currently stands, is pretty bad and unintuitive.

It's not just FPC that behaves in somewhat less than desirable ways when it comes to compiler constants. Pascal's facilities for defining and manipulating compiler constants are not particularly stellar.
> They _can_ be interpreted as numbers, but doing so is often inconvenient (and/or problematic). On top of that, whenever they are interpreted as numbers they must be assigned either a signed or an unsigned type, and there are always cases where the assigned sign is not the desirable (or convenient) one. Even though a bitmask can be, and very often is, expressed as a number, a bitmask is a structural map, not really a number.

OK, now I get what you are saying, but in that case using hex numbers poses the exact same problem, because hex is just another way to write numbers. If one wants to decouple the number aspect from the set aspect of bit sets, it should be done at a higher language level, for example using sets (which Pascal does support). That way it's up to the compiler to decide how to implement them, and I as a user don't need to think about things like data types, signedness, etc.
But as soon as you use hexadecimal, you just brought numbers and all problems they entail into the mix.
I think numbers should be treated as numbers and sets should be treated as sets. That is one of the reasons I like Python: it got rid of all the bit-ness of numbers and simply has arbitrary-precision integers everywhere, so a number is nothing more than a number.
Hands up those who've used -ve numbers in any base other than 10.
Where does -ve suddenly come from?
See left column at:
http://docwiki.embarcadero.com/RADStudio/Rio/en/Declared_Constants
The Delphi docs refer to "$FFFFFFFFFFFFFFFF" and "-$8000000000000000" as a constantExpression; a hexadecimal literal $... is one of the forms a constantExpression can take.
From 64-bit etc. literals being signed by default. Why not simply say that a literal expressed in any base other than 10 is assumed to be unsigned, while decimal literals are signed?