> A null-byte cannot exist in a valid UTF8 encoded string (period!).
That is absolutely wrong. Standard UTF-8 does not restrict the use of null in any way; U+0000 is a perfectly valid character to encode. RFC 3629, the official RFC that defines UTF-8, even says so, pointing out several times that U+0000 is acceptable and is encoded as the single byte 0x00.
Now, it may be that UTF-8 *when used in the context of something else* might not allow nulls (a C API that expects null-terminated strings, for example), but that would be a restriction of that "something else"; UTF-8 itself does not restrict it.
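A quick check (a Java sketch here, but any language with explicit charset handling behaves the same) shows U+0000 round-tripping through standard UTF-8 as a single 0x00 byte:

```java
import java.nio.charset.StandardCharsets;

public class NullInUtf8 {
    public static void main(String[] args) {
        String s = "abc\u0000äöüß";            // embedded U+0000
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);

        // "abc" -> 3 bytes, U+0000 -> 1 byte, ä/ö/ü/ß -> 2 bytes each
        System.out.println(utf8.length);        // 12
        System.out.println(utf8[3]);            // 0: the null byte, as-is

        // Decoding restores the original string, null included
        String back = new String(utf8, StandardCharsets.UTF_8);
        System.out.println(back.equals(s));     // true
    }
}
```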
> The standard format does not allow it at all
Yes, it does.
> and the modified format encodes the null-byte to two non-null bytes.
That is why it is "modified". But even then, the fact that it encodes null characters means that input strings are allowed to contain null characters to begin with; they are simply not encoded using null bytes.
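Java's own I/O classes demonstrate this: `DataOutputStream.writeUTF` emits modified UTF-8, where U+0000 becomes the overlong byte pair `C0 80` so that no 0x00 byte ever appears in the encoded output:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ModifiedUtf8Demo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeUTF("a\u0000b");

        // writeUTF prepends a 2-byte length, then the modified UTF-8
        // bytes: U+0000 is encoded as C0 80 instead of a raw 00.
        StringBuilder hex = new StringBuilder();
        for (byte b : buf.toByteArray()) {
            hex.append(String.format("%02X ", b & 0xFF));
        }
        System.out.println(hex.toString().trim()); // 00 04 61 C0 80 62
    }
}
```

The input string contains a null character, yet the only 0x00 bytes in the output belong to the length prefix, never to the character data.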
> I'm not 100% sure, but the modified flavour is a Java thingy
Yes, it is. It appears in Java serialization (`DataInput`/`DataOutput`) and in JNI; nobody else uses it.
> Please don't mistake a textual representation with the binary representation. Setting a string to 'abc'#0'äöüß' will result in a UTF8 string of value 'abc' since the #0 is invalid.
#0 is a perfectly valid string character. It is only C-style (null-terminated) strings that treat #0 specially. Other languages, including Pascal, whose strings are length-counted, don't have that restriction.
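The same holds in Java, for example (the Pascal literal 'abc'#0'äöüß' corresponds to the Java literal "abc\u0000äöüß"): because the string is length-counted, the embedded null and everything after it are preserved, and nothing is truncated to 'abc':

```java
public class EmbeddedNull {
    public static void main(String[] args) {
        // Java strings are length-counted, not null-terminated,
        // so U+0000 is an ordinary character inside them.
        String s = "abc\u0000äöüß";
        System.out.println(s.length());          // 8, not 3
        System.out.println(s.indexOf('\u0000')); // 3
        System.out.println(s.endsWith("äöüß"));  // true
    }
}
```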