@Martin,
The _strict_, _mathematical_ and _correct_ definition of a boolean is an object (not in the sense of OOP) that can only have two (2) values: 0 and 1. Nothing else is allowed. By convention, 1 is usually taken to be TRUE and 0 to be FALSE.
Problems sometimes arise because booleans are usually represented with 8, 16, 32 or 64 bits, far more than the single bit required to hold 0 and 1.
Strictly speaking, in the byte, word, dword or qword used to represent a boolean, only 1 bit is significant; the remaining bits are uninvited guests that must never be allowed to influence the truth value, and the code the compiler generates _must_ enforce that. Code that in any way allows those extra bits to alter the truth value is simply incorrect.
The documentation is correct: for those types only the value 1 is TRUE, and, by stating "with its two predefined values True and False", it implies that only the value 0 is FALSE.
In some situations the code generated by the compiler, FPC in this case, fails to enforce that condition. That is a bug, and it becomes evident when the inconsistent treatment yields different truth values for the significant _bit_ when converting from a larger type to a single-bit type.
Lastly, it is important to note that ByteBool, WordBool, LongBool and QWordBool are _not_ booleans; they are a C-ism where 0 is FALSE and any other value is TRUE. That is NOT what a boolean is, and consequently the rules that apply to them are different.