A further (and perhaps final) note, with some additions; the scope should now be wide enough.
We have a causal chain (1+2):
1) The existence of undefined enum values:
Undefined enum values are a consequence of the underlying type (an integer),
which is a basic type, directly accessible via pointer and writable without special operators.
Casting is supported: enum := Tenum(int), stream.read( enum, sizeof(enum)), move() and more.
The incoming int value might also be intentionally undefined. Example:
We are inside a dll which knows only the old Tenum = (e0, e1, e2),
while dll clients use a newer (backward compatible) Tenum = (e0, e1, e2, e3).
The dll provider creates no newer dll version, but declares a
minimum requirement: Tenum = (e0, e1, e2) for a parameter or an element of a file format.
This is essentially the same as with named integer constants, which can be extended over time.
The compiler should not be allowed to auto-reduce the enum storage size (it is user-defined).
Only user-written operations are allowed to change the enum value,
for example reducing the enum value to the size of a bitpacked field.
A bitpacked field is still a (raw) memory field which may contain invalid values:
field size 1 bit: (e0); value 1 is undefined
field size 2 bits: (e0, e1, e2); value 3 is undefined
Bitpacked fields have more limits: fewer undefined values, no pointer to a field, and more.
The user knows where packed fields are declared, and can optionally check wide values
before size-reducing assignments: bitpackedRecord.enm := enm;
Operations with bitpacked fields are fully transparent.
(Note: A truly confined enum would need to be implemented as a datatype
with overloaded operators, which prevent direct writes to the internal storage.
In theory, one could add such an enum type with an extra keyword "guarded".)
2) The consequence of undefined enum values:
The representation of invalid values is defined (thus defined behavior can follow),
and only the specific meaning of each invalid value is undefined.
Delphi, C# and C++ do it this way: they reliably go to the else branch.
The most common usage: the case statement is interested only in the labeled values.
Here it is not relevant whether the unlabeled value is in range or not (it may crash in fpc);
the else branch is often empty or absent. But users can also insert checks in the else branch.
Another usage: the aim is to reduce undefined behavior and to increase software fault tolerance:
minimize unnecessary risks, save lives, ensure operation in harsh environments (maybe extraterrestrial : ) ).
Think the "impossible": a data bit goes wrong (in memory or in a stream (it was ok at the last check)),
at a normal stage of execution, where only normal operations (case, if, in) exist.
The goal: we should not crash (mission failure). Fault-tolerant operations should
not depend on a single bit, but on several conditions (redundancy).
If one condition is wrongly signaled, it is still not decisive, because other paths can detect its wrongness.
Very important: the programmer should not be required to place an explicit enum check
before every normal operation (case, if, in), and this is already possible in Delphi and C#.
Another analogy, not mentioned previously:
TList.IndexOf( invalidPtr) returns -1 (without a crash).