How to force the not operator to return a byte


y.ivanov:

--- Quote from: tetrastes on May 16, 2021, 03:58:11 pm ---*snip*
What types are you talking about for ordinary constants? Show how "to solve it with types"; I'm not afraid to say that I do not know that.
--- End quote ---
My point was that SizeOf(), for example, when used on an expression, must evaluate the type of the expression, not the expression itself. Then, substituting the terminals (e.g. 128 for a, 64 for b) into the expression byte(not (a or b)), and given the rules for expansion, we should be able to determine at which step the sign-extension takes place and put in a proper typecast to prevent it. But it turns out it was just speculation.
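For illustration, a minimal probe along these lines (a sketch, assuming FPC on a 64-bit target; the widened size may differ on other platforms) shows where the promotion happens, since SizeOf reports the type of the expression rather than its value:

--- Code: ---
program Probe;
var
  a, b: byte;
begin
  a := 128;
  b := 64;
  // "not" promotes its operand to a wider signed type first,
  // so the result is no longer byte-sized:
  WriteLn(SizeOf(not (a or b)));        // > 1 (presumably 8 on a 64-bit target)
  // the typecast truncates the result back to a byte:
  WriteLn(SizeOf(byte(not (a or b))));  // 1
  WriteLn(byte(not (a or b)));          // 63
end.

--- End code ---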

@molly
+1 for your detailed explanation

tetrastes:

--- Quote from: molly on May 16, 2021, 06:02:46 pm ---
--- Quote from: tetrastes on May 16, 2021, 02:24:19 pm ---
--- Code: ---Remark: The compiler decides on the type of an integer constant
based on the value: An integer constant gets the smallest possible
signed type.
...
constants in range 128..255 are mapped to byte, etc.
...

--- End code ---

These two statements contradict each other, am I wrong?

--- End quote ---
I fail to see how these two statements could contradict each other. If you still think the same after reading this post, then please feel free to elaborate to help me understand better.

--- End quote ---
An integer constant gets the smallest possible SIGNED type.
...
constants in range 128..255 are mapped to BYTE

Byte is an unsigned type, so according to the first statement, constants in the range 128..255 should be mapped to SmallInt.
But English is not my native language, and maybe I don't understand something... 

molly:
Ah, thank you for the elaboration, tetrastes.

Now I am able to understand why you might think that.

If you look at the original text: https://freepascal.org/docs-html/ref/refsu4.html#x26-250003.1.1

The statement in the remark part is a generic statement on how the compiler works. Immediately after that statement, the original documentation elaborates on how the compiler does this by stating:


--- Quote ---The first match in table (3.3) is used.

--- End quote ---
By using the table you can determine the order in which the candidate types are tried.

fwiw: I left out the actual table in my code because I have better things to do than copying an HTML table into source code  :D

First the compiler tries to match shortint, then byte, then smallint, etc., as shown in the table.

For the number/value 64, the first match is shortint.
The number/value 128 does not fit in a shortint, so the next candidate in the table, byte, is used.
The same applies to the number/value 192.

The resulting answer/value turning into a 64-bit integer is the exception, which is also explained by the documentation. (Note that 64-bit is what the documentation mentions for this platform, but the documentation can differ for other platforms, so the result is not always a 64-bit integer: on a 32-bit CPU it could be a 32-bit integer, or perhaps an even smaller one, depending on the platform.) See the sketch below.
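To make that concrete, here is a small sketch (the printed -193 assumes FPC's default promotion to the wide signed type described in the docs quoted above; the exact width depends on the target):

--- Code: ---
program ConstMatch;
begin
  // 64 fits shortint (the first row of the table);
  // 128 and 192 do not fit shortint, so their first match is byte.
  // "not" widens its operand before complementing, which sets the
  // high bits of the wider signed result:
  WriteLn(not (128 or 64));        // -193 instead of the byte value 63
  WriteLn(byte(not (128 or 64)));  // 63: truncated back to 8 bits
end.

--- End code ---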

Does the above explanation compute better for you?


--- Quote ---y.ivanov wrote:
@molly
+1 for your detailed explanation

--- End quote ---
Thank you for the +1, but I can't really take the credit for that, as others have (more or less) stated the same.

The only thing I did was make it a bit easier to understand, by creating an example that is as simple as possible and that matches the documentation.

If the example and explanation helped the reader better understand what is actually happening, then I'm glad I was able to help out (a little).

Practice shows that it is far more difficult to disprove what is written in the documentation, and then nag about that :P

tetrastes:
@molly
Thank you for the explanations, I understand it all. But I still think that the first statement should be "An integer constant gets the smallest possible INTEGER type".

lucamar:

--- Quote from: tetrastes on May 16, 2021, 07:23:34 pm ---Thank you for the explanations, I understand it all. But I still think that the first statement should be "An integer constant gets the smallest possible INTEGER type".
--- End quote ---

Context is everything here ;)

The original wording is the way it is because it's a quotation from the documentation, and it refers to the table of integer types; so "smallest possible signed type" in this case has to be taken to mean "smallest possible signed type from those in the table above [of predefined integer types]".
