Food for thought:
In http://wiki.freepascal.org/UTF8_strings_and_characters#Examples all of the code snippets would work with both UTF-8 and UTF-16, at least after some wrapper function changes.
The first three use Pos(), Copy() and Length(), which are used in typical Delphi code, too.
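For illustration, here is a minimal sketch of my own (not taken from the wiki page) showing why those three functions carry over: Pos(), Copy() and Length() all operate on bytes, so they work on UTF-8 data unchanged. The 'ä' character is spelled out as its two UTF-8 bytes (#$C3#$A4) to keep the source encoding-independent.

```pascal
program Utf8PosDemo;
{$mode objfpc}{$H+}
var
  S: AnsiString;
  P: Integer;
begin
  // "päivää" as raw UTF-8 bytes; 'ä' is the two bytes C3 A4
  S := 'p' + #$C3#$A4 + 'iv' + #$C3#$A4 + #$C3#$A4;
  P := Pos(#$C3#$A4, S);      // byte position of the first 'ä'
  WriteLn(P);                 // 2
  WriteLn(Length(S));         // 9 bytes for 6 codepoints
  WriteLn(Copy(S, P, 2));     // extracts the two bytes of 'ä'
end.
```

The same calls would work on a UTF-16 string as well; only the unit changes from bytes to 2-byte words.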
In the fourth one, iterating Unicode characters, the code should jump over the parts it has already handled. Then it surely works with any UTF-16 character, too, and it would have an interesting side effect: it would improve the robustness of UTF-16 code. Typical Delphi code does not handle 2-word (surrogate pair) codepoints, but this one does.
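As a sketch of that iteration pattern (my own minimal version, assuming valid UTF-8 input; the LazUtils LazUTF8 unit offers a comparable helper), the loop reads the lead byte, derives the codepoint's byte length, and jumps over the whole codepoint instead of advancing one byte at a time. The UTF-16 analogue would advance two words when it sees a high surrogate.

```pascal
program IterateCodepoints;
{$mode objfpc}{$H+}

// Byte length of the UTF-8 codepoint whose lead byte is B.
// Assumes B really is a lead byte of valid UTF-8.
function CpLen(B: Byte): Integer;
begin
  if B < $80 then Result := 1        // plain ASCII
  else if B < $E0 then Result := 2   // $C2..$DF: 2-byte sequence
  else if B < $F0 then Result := 3   // $E0..$EF: 3-byte sequence
  else Result := 4;                  // $F0..$F4: 4-byte sequence
end;

var
  S: AnsiString;
  i, L: Integer;
begin
  S := 'a' + #$C3#$A4 + 'b';         // "aäb"
  i := 1;
  while i <= Length(S) do
  begin
    L := CpLen(Byte(S[i]));
    WriteLn('codepoint of ', L, ' byte(s): ', Copy(S, i, L));
    Inc(i, L);                       // jump over the whole codepoint
  end;
end.
```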
The rest need functions with different names, but the semantics are the same.
That already covers many use cases. Source code can be made to support both encodings quite easily. Code written for UTF-8 typically works with UTF-16 as is; the reverse is not always true.
We will need both versions for the LCL. How FPC will support that remains to be seen. It would not be an "opaque" type but one selectable by IFDEF or something similar.
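Just to make the idea concrete, a hypothetical sketch of such an IFDEF-selectable type (the define name LCL_UTF16 is my invention, not anything agreed upon):

```pascal
program EncodingSelect;
{$mode objfpc}{$H+}

{$IFDEF LCL_UTF16}
type
  TLclString = UnicodeString;  // UTF-16 build: 2-byte code units
{$ELSE}
type
  TLclString = AnsiString;     // UTF-8 build: 1-byte code units
{$ENDIF}

var
  S: TLclString;
begin
  S := 'abc';
  WriteLn(Length(S));          // 3 code units in either build for ASCII text
end.
```

Code written against such an alias, using the shared function names discussed above, could then compile for either encoding.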
Anyway, so much energy is again wasted on arguing. People are furiously defending their favorite encoding. For what? The problem could have been solved many times over with that energy.
It reminds me of the infamous SVN versus Git fight that lasted for years. Nobody bothered to check whether the tools already supported development with Git; they just wanted to argue with somebody.
There already was a Git mirror, and I tested the other development tools. Patch and the other tools accepted Git-format diffs, and the Lazarus developers promised to use them. Git was perfectly usable for Lazarus development the whole time!
I was already doing my own development through git-svn. I documented both ways to use Git and even promised to support a distributed development model with it.
Nobody had an excuse to fight and complain any more; the problem was solved. The whiners did not offer patches as they had more or less promised, but that is OK.
Maybe I must solve the Unicode issue, too, to stop people from wasting their energy on endless arguing.
