The real question is: how much time does an application actually spend in text comparison, and in case-insensitive comparison in particular?
And how much of that time should actually be spent on Unicode-normalized text comparison?
Take the IDE. In all probability the place most affected would be the text search in the editor. But that is more than fast enough. And keep in mind that even the search does not spend all of its time in TextCompare: it has to look up each line (the text is not a continuous blob), store results, and so on.
If you search on disk, it comes down to disk speed rather than comparison speed.
And after all, a case-insensitive search like that should perform Unicode normalization (which is currently missing). So it would need even more complex processing, and would therefore not even gain from the proposal.
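To illustrate why (in Python rather than Pascal, since the point is language-independent): a correct case-insensitive match by the Unicode standard's "canonical caseless matching" involves normalization plus case folding on both sides, which is far more work than any ASCII-only compare. The strings below are the same letter Å written two different ways:

```python
import unicodedata

def canonical_caseless(s: str) -> str:
    # Unicode canonical caseless matching: NFD(casefold(NFD(s))).
    return unicodedata.normalize("NFD",
        unicodedata.normalize("NFD", s).casefold())

a = "\u00c5"    # Å as a single precomposed code point
b = "A\u030a"   # A followed by a combining ring above

# A naive lowercase compare misses the match; the normalized one finds it.
print(a.lower() == b.lower())                           # False
print(canonical_caseless(a) == canonical_caseless(b))   # True
```

So a search that is "correct" for arbitrary Unicode text has to do strictly more than today's TextCompare, not less.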
There are other bits of code that currently call TextCompare. Not all of them should, or at least they should make only a small fraction of the calls they currently make. But no one has bothered to do the real optimization on them.
Here the question is: if we decide they are too slow, should we give them a 1% or 2% uplift (which is what would very optimistically remain of the 10%, considering that CompareText is only a fraction of what they do), or should we change the logic they use and gain an actual two-digit percentage?
And if we really were to provide code that compares "only the English alphabet" case-insensitively, why would we decide to optimize for only such a small part of the world? Then we should also have "only Chinese" and "only Arabic" and "only ..." versions (each of which would be faster for its respective language).
One example was comparing UUIDs. Well, I don't see why that needs an "English only" CompareText. If I have plenty of UUIDs to compare, I make sure I store them all uppercase (i.e. convert them before storing), and then compare them binary, which is even faster.
In fact, I would not even store them as text. I might treat them as base-16 numbers, convert them to a pair of QWords, and have even less data to compare.
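A quick sketch of that idea in Python (the `uuid` module here stands in for a one-time parsing step you would write in Pascal): parse each UUID once, and every later comparison is a plain 16-byte binary compare, with no case-insensitive text compare anywhere.

```python
import uuid

ids = ["6BA7B810-9dad-11D1-80B4-00c04fd430c8",   # mixed case on purpose
       "6ba7b810-9dad-11d1-80b4-00c04fd430c8"]

# Parsing normalizes case and hyphenation once; .bytes is 16 raw bytes,
# i.e. the two QWords mentioned above.
as_bytes = [uuid.UUID(s).bytes for s in ids]

print(len(as_bytes[0]))                 # 16
print(as_bytes[0] == as_bytes[1])       # True: case differences are gone
```

The conversion cost is paid once per UUID at store time, instead of once per comparison.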
The same applies if I have to compare (or maybe sort) a large amount of text: uppercase it once, then sort. This works for all languages and speeds up the work more than an "English only" version would. (Assuming a large enough amount of text; if the text is small, there is no need for a speed-up in the first place.)
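The "convert once, then sort" pattern looks like this in Python, using case folding as the one-time conversion (a sort makes O(n log n) comparisons, but the key function runs only once per element, so each string is folded exactly once):

```python
words = ["Übung", "apple", "Zebra", "ärgern", "Banana"]

# str.casefold is applied once per string; the sort's comparisons then
# operate on the precomputed folded keys, not on repeated
# case-insensitive text compares.
print(sorted(words, key=str.casefold))
```

(The resulting order is plain code-point order of the folded strings; a linguistically correct order for a given language would need a collation key instead, but the precompute-once structure is the same.)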
All that said, yes, there can be very special use cases that would benefit. But they are not common cases, and they don't require the RTL to provide specialized pre-written code. They can easily have their own function doing such a very specialized comparison. And if they have their own code, they may be able to tweak it even further than any RTL-provided code could be.