That wasn't what I meant. Inserting manual conversion code is what I meant.
The manual conversion is all trapped inside the LCL, so users don't need to do it themselves.
The LCL is not the solution to everything. It is a visual library only, with some utf8 utility routines. It is only a very small piece of the pie.
I was never convinced by .NET, but there are real-world examples where WinRT is a must-have: I would like to put my software (True Democracy) in the Windows Store.
I was a whole lot more convinced about certain aspects of .NET than about WinRT. WinRT is like JavaScript: it only sells because it is bundled.
Yes, the Store is WinRT-only, but only a small part of the Windows software market goes via the Store. Last year Windows mobile versions (Lumia) actually decreased in market share year-on-year.
Microsoft has been trying to create new application forms since the Windows Vista sidebar, with very limited success. IOW, times have changed. What Microsoft wants is not necessarily what Microsoft gets.
I'm fully migrated to Windows 8, and what apps did I install via the Store? The old games (Mahjong and Solitaire for my Mom) and the Windows 8.1 update, because I was forced to.
But I can't. Because it doesn't accept Win32 apps, only WinRT. Microsoft is clearly serious about Win32 needing to die for this.
Nonsense. The marketplace doesn't hurt existing customers immediately, except the few that need to compete with established names that already have apps in the marketplace.
Windows 10 contains Win32, is targeted at businesses as the Windows 7 follow-up, and will have extended support until the middle of the next decade.
Of course it will take ages for it to die, but still we need to prepare for the future in advance or be obsolete.
I already said that declaring something is dead is something totally different from the successor being (already) alive.
.NET WinForms, .NET WPF, Silverlight, and to a lesser degree the Vista sidebar apps have all been hailed as the successor, because MS has been trying to downplay Win32 since 2003, with only limited success.
The true successor might not be WinRT, but the "new" thing of Windows 13.
You can't make money hosting apps in the Windows Store with FPC; that's a big disadvantage.
True. So, what is holding you back? Personally the Store is totally uninteresting for me, I don't distribute via stores.
Stores are the future, and the Android store has been great for me.
Stores are the successor to what was shareware in the nineties. But shareware never dominated, and neither will stores in this form (where users have a choice).
It might start to differ if Microsoft manages to conquer low-end laptops with RT-only solutions, but that is actually much less likely than 3-4 years ago. Surface RT was a miserable failure.
I'd like to explore other stores. I now have access to Android, iPhone and Mac OS X. Why not the Windows Store?
Go ahead. But that some have a business model that needs stores doesn't mean that we all do.
Windows has the largest user base, so it's the best store to be in. And being early is a big advantage.
Potentially, depending on the adoption rate, which is crazy low; till now it has been, frankly, underwhelming.
The Apple store is not great because of the number of Apple users, but because of their usage percentage. Income = user base * store adoption rate.
I think you shouldn't cling too hard to old stuff. People who clung to Carbon were kicked hard by Apple and forced to adapt.
Yes, I dropped Apple, and I'm glad I did, since Lazarus/Cocoa is still in its infancy. Though admittedly, if it had been a majority platform for me, I would probably have gone the Objective Pascal way.
But unfortunately that also came rather late, so I guess if I had really important commercial Apple business, I'd be doing Objective-C nowadays.
I guess it will be the same for WinRT. If the store is that important for you, easy acceptance and quick time to market after changes/new releases are too important to let the language hold you back.
Well, uh, no, since the idea was to pick the system encoding. DUH! You need conversions if you pick one encoding everywhere.
You don't seem to understand me. What I mean is that, using an API that offers UTF-8 everywhere, you can:
line 1> Get string from framework (for example from TMemo)
line 2> Do string operations on the string (anything: Pos, iterating through chars, searching for a substring, lowercasing, whatever! And not only ready-made operations, but also whatever you imagine, including, yes, char-by-char access)
line 3> Put string back into the framework (for example into TMemo)
With opaque type:
line 1> Get string from framework
line 2> Get UTF-8 string from opaque string
line 3> Do my operations
line 4> Convert again from UTF-8 to opaque string
line 5> Put back to framework
A lot worse, IMHO.
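The two workflows above can be sketched in FPC terms. This is a minimal sketch only: plain string variables stand in for the framework (TMemo) step, `UTF8Encode`/`UTF8Decode` are the stock FPC conversion routines, and the "opaque" side is approximated by `UnicodeString`.

```pascal
program EncodingWorkflows;
{$mode objfpc}{$H+}

uses
  SysUtils;

var
  U8: AnsiString;        // string holding UTF-8 data, as the LCL convention assumes
  Opaque: UnicodeString; // stands in for an opaque framework string type

begin
  { UTF-8-everywhere workflow: get, operate, put back -- no conversions }
  U8 := 'Hello world';
  WriteLn(Pos('world', U8));       // byte-based Pos works on the data as-is

  { Opaque-type workflow: the same round trip the five steps describe }
  Opaque := 'Hello world';         // 1: get string from the framework
  U8 := UTF8Encode(Opaque);        // 2: get a UTF-8 copy of the opaque string
  U8 := U8 + '!';                  // 3: do the operations
  Opaque := UTF8Decode(U8);        // 4: convert back to the opaque type
  WriteLn(Opaque);                 // 5: put it back into the framework
end.
```

With ASCII-only data the conversions are cheap copies, but the structural point stands: every operation on the opaque type is bracketed by a convert-in and a convert-out.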
No, since Pos() etc. will accept the native encoding too. So your example is convoluted: you introduce a bias for UTF-8, and then conclude that the UTF-8 way is easier.
As said, I don't buy that. Mostly initial code full of hacks.
You are kidding right? There are many companies deploying Lazarus-based software in production and with significant sales revenue.
... and they have conversion calls and their own maintained copies of Delphi components everywhere.
The fact that it ships doesn't mean it is the situation they want to be in.
I agree that components are an issue, but I disagree that it is worth pissing off our existing user base.
Conversion pain happens once. Dual (and triple, if you support old Delphi) maintenance hurts forever.
So pissing off is relative.
I think of them first. Delphi users are a very distant concern.
Most new influx is from Delphi, and many component codebases are shared. We are currently lucky because the ZEOS maintainer is sympathetic to Lazarus, but the next one might not like the strain and stop support.
When I search Stack Overflow, the instances when a Delphi answer appears are ... well, so rare I can't remember the last time.
Depends. I'm now doing OpenGL stuff, and there it is much less. But for topics like COM ports and Windows-centric stuff, I find quite a lot. More often than not.
IMHO the best solution, but a lot of additional work. That's why it was nearly immediately vetoed, and then I chose the solution I needed most, which is Delphi compat.
I'm amazed that you say that you agree this would be the best solution.
Why? I proposed it in 2010. (Originally there was also a single-byte native-encoding version for Windows: perfect D7 compat, and FPC as in current versions.)
It was considered too much work, and a final decision was postponed. The fix in FPC 3 is actually nice, but works for procedural interfaces only. And it took 4-5 YEARS.
So why not try it? If you don't try how can you be so sure it will be so much work?
Without support from the others it is not doable. So I focus on the important part, which is the UnicodeString introduction.
And it would stop the endless discussions.
Yup. That was exactly why I proposed it, because all people could work on their solution and would only be asked to minimize unnecessary hard coded encoding usage.
One of the reasons why Delphi compatibility worked so well is that something is either compatible or it isn't, which minimizes discussion. It is also mild wrt backwards compat with old code (big changes only with major versions, minor details also in between).
If you don't, then suddenly everybody has their own opinion on the new system and wants to chart their own personal "FPC" course, and backwards-compat issues pop up (because the "old" FPC choice still has its proponents too, etc.).
There is a reason why compatibility-oriented open source projects are relatively more successful: it cuts the crap.
Take a long hard look at e.g. objfpc mode. I stand by most of its changes if I had to start from zero, but is it really that different to warrant doing everything twice? It doesn't really make new things possible; it is mostly notational, with fringe improvements (like case-of-string).
If we had started changing in D2009, we would be through most of the pain by now.