Question 2:
The best property of Smalltalk is live coding (REPL).
Can this also be done with Pascal? If yes, how?
Question 5:
What role can Lazarus play in all this?
Or would VS Code be a better option for certain needs?
Question 1:
Nowadays React Native is used a lot for cross-platform mobile apps.
Can Pascal use React Native or even do better itself? If yes, how?

React Native does what we do: native compilation, differing only in source language (it uses JavaScript). But unlike C, it doesn't expose a usable interface, so we can't use it as a library.
Question 2:
The best property of Smalltalk is live coding (REPL).
Can this also be done with Pascal? If yes, how?

It can be done, but no existing solution is available (it's easy to find some abandoned efforts online). A REPL is more appropriate for scripting languages. I don't think it's necessary, as compilation is fast; or, if you want, instantfpc can execute programs like scripts.
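For illustration, here is what running Pascal as a script looks like; a minimal sketch, assuming fpc and instantfpc are installed and on the PATH:

```pascal
#!/usr/bin/env instantfpc
{ A Pascal "script": instantfpc compiles and runs it on the fly.
  Save as hello.pas, then: chmod +x hello.pas && ./hello.pas }
begin
  WriteLn('Hello from a Pascal script');
end.
```

This is not a REPL, but it gives a similar edit-and-run feel for quick experiments.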
Question 3:
Microsoft is open sourcing the .NET framework.
Can Pascal use the open-source .NET framework or even do better? If yes, how?

I haven't checked this, but I've made a Pascal program call a .NET DLL in the past; the DLL initialization had to be done by a .NET program, as I didn't know how to call the initializer back then. Once called, it stays in memory and you can use it for as long as the OS keeps it there. I have no idea what you expect from "do better"; we already do some things better than the .NET framework, like not requiring the .NET framework to run our programs ;)
Question 4:
The best property of Python is its neural network frameworks/libraries.
Which of those frameworks/libraries can Pascal use, or can it do better itself? If yes, how?

TensorFlow is written in C++ and has a C interface, so Pascal can certainly use that. There are various neural network implementations; just search the forum, someone here is very dedicated to the topic.
Question 5:
What role can Lazarus play in all this?
Or would VS Code be a better option for certain needs?

I don't see any Lazarus-specific requests in your questions. Lazarus should remain as it is: an easy way to write cross-platform native GUI applications for various platforms.
Good adaptivity to screen sizes: JavaScript and web technology.

Scrape information from the web: I've had good experiences with Java and Python in that regard. Python scripts will be shorter, but to be honest I really like Java because of its simplicity and robustness.

Get additional information from users all over the world: a server architecture that I would write in Pascal, probably using HTTP (because it's easy) and communicating with a decentralized MariaDB backend.

Prepare this information for use in a neural network: I would suggest that the backend server choose a sensible intermediate format; the processing for the NN would then be done in the language chosen for the NN (probably Python), because changes in the NN should not require changes in the backend.

Use this information in a neural network (preferably peer-to-peer redundantly distributed): Python with PyTorch.

Use the results for actions on the internet (connecting to APIs running on 3rd-party servers): also doable with Python, but possible with any other language too.

Use the results for actions on user devices: frontend -> JS.

Offline-first where possible (using PouchDB?): if you want to use PouchDB you need JS; otherwise there is CouchDB, the Java version of it. If you want to use certain tools, this often makes the language decision for you.

Fast UI, fast reliable communication, fast response from the neural net, even on mobile phones: JavaScript is great for UI; reliable communication is possible with every language; and fast response from the NN depends on your NN. If it's a large network and you don't have a great GPU (as on phones), it will take long. My suggestion: do the NN work on the server side and not on the phone, because large networks take time, simple as that.

Fiercely secured against attempts to disrupt data collection, communication and processing: use a CDN for your backend; this is completely language-independent.

Automated backup facility if needed (central, distributed, hybrid; whatever would be best; it's open for suggestions): this is also language-independent.
Question 1:
Nowadays React Native is used a lot for cross-platform mobile apps.
Can Pascal use React Native or even do better itself? If yes, how?

React Native is basically just a JavaScript interpreter that grants JavaScript access to system APIs via a framework. It is built for JavaScript and has no interface for other languages (AFAIK).
Question 2:
The best property of Smalltalk is live coding (REPL).
Can this also be done with Pascal? If yes, how?

Live coding is a bit against the spirit of a compiled language, and I don't think it would work that well with Pascal, where you have to write a lot of boilerplate code before anything can happen.
Question 3:
Microsoft is open sourcing the .NET framework.
Can Pascal use the open-source .NET framework or even do better? If yes, how?

.NET uses a bytecode language that runs in a sort of VM. It is not compatible with assembly. While on Windows there are possibilities to use .NET DLLs from native code, I don't know if that is portable at all. So the conservative answer is "maybe, but probably not".
Question 5:
What role can Lazarus play in all this?
Or would VS Code be a better option for certain needs?

Lazarus is great for FPC Pascal, simple as that. If you want to use any other language or any other compiler (like Delphi), it's probably better to use a different environment. For example, for Python and C++ I really like Emacs; for JS or C#, VS Code is really great; for Java I'd go with IntelliJ.
PS: Whenever I write JS I personally use TypeScript, because I'm not an insane person. But its compatibility is basically the same.
As this is the forum for Lazarus/FPC/Pascal, I'm sure you already know the advantages of using Lazarus/FPC/Pascal. But I'm the less biased one, so I will tell you: no, don't use Lazarus, Free Pascal, nor Pascal.

Lazarus and Free Pascal are not commercially supported.
You should consider the other paid alternatives first. Lazarus/FPC is free; you can use it and start making money without spending a dollar. That is not as great as you might think: it also means no one is 'really' responsible for fixing a bug you may hit someday. We have a bugtracker here, but as open source projects, Lazarus and Free Pascal rely on volunteers, and we currently lack volunteers.
I can use Lazarus to build games for days, but I can achieve the same result in less than half an hour if I use specialized game development tools.

Pascal's golden era has passed.
Pascal was so popular in the 80s. Nowadays Pascal is still alive, but not many people are using it. What the industry standards are is not important if you are a hobbyist, but if you're planning to start a career or business in the software development field, you really need to pay attention to them.
[...] as open source projects, Lazarus and Free Pascal rely on volunteers, and we currently lack volunteers.
Which I suppose could be translated as "there are more people who like what we have produced than are able (i.e. have both the time and the capability) to contribute to the project". That might actually be a healthy situation ...
Quote: "Lazarus and Free Pascal are not commercially supported. You should consider the other paid alternatives first. Lazarus/FPC is free; you can use it and start making money without spending a dollar. [...] no one is 'really' responsible for fixing a bug"

But they still get that job done.
Quote: "Lazarus and Free Pascal are general purpose development tools. You can use them to build anything you can imagine, if you know how: games, image processing software, database/inventory/accounting/ERP applications, audio/video players, web servers and even operating systems. But as general purpose tools, they have disadvantages too. I can use Lazarus to build games for days, but I can achieve the same result in less than half an hour if I use specialized game development tools."

Most languages and tools used today are general purpose tools; none of the languages listed above are specialized for anything. Python became the de facto ML language not because Python was built for ML, but because it is used by many in that area over others. Torch or TensorFlow bindings exist for any language.
Which I suppose could be translated as "there are more people who like what we have produced than are able (i.e. have both the time and the capability) to contribute to the project". That might actually be a healthy situation, but as has already been said it does suggest that it's wise to keep away from the "bleeding edge" (currently 3.2.0) and instead to use a version which has accumulated bugfixes (e.g. 3.0.0, with fixes to 3.0.4).
The problem is not that Lazarus and FPC are free and open source, but the lack of volunteers. That said, it is pretty rare that you run into a breaking bug. With rather advanced generics code it occurs rather often (how I loathe the error messages "an internal exception occurred during compilation" and "fpc internal error ...", which appear nearly every time generics projects start to get big), but aside from this, if you don't do anything fancy, FPC is pretty stable.
I remember a fellow here who later switched to another paid development tool. He told me he was looking for a development tool with which to start a business. One reason, if I remember correctly: he cannot 'yell' at the customer/technical support team if something isn't working as it should when he uses Lazarus.
Quote: "I don't mean open source is bad, but I just want to let the OP know the disadvantages of using open source. Of course I could be wrong."

At least with regard to software development languages and tools, free tools are the norm, with the overwhelming majority even being FOSS, while paid tools are usually only niche tools without a big community behind them. In such cases FOSS simply can't survive, because it lives from there being many contributors.
Quote: "For your information, Godot is far too inferior compared to Unreal Engine."

Of course an engine that has only been around for 5-6 years will be inferior to one that's been around for 22 years. But for the time Godot has been around, it has made incredible progress, especially as companies start pouring money into it because they want a good FOSS engine themselves (as this is cheaper in the long run).
You want to work for somebody with Pascal or Delphi experience. It may be harder to find an employer, but not impossible. Pascal isn't dead as long as people talk about it and use it.
If a customer 'yelled' at my workplace's support, he wouldn't be a customer of ours for long. We try to de-escalate such customers, of course (and in 99% of such cases that works).
The problem is, we don't have any 'real' customer service here.
Also, you know, if someone asks for a new feature and is too pushy in this forum, they will probably get the answer: "it is open source, take the source code and do it yourself."
Have you tried to get 'real' customer support from some companies?
Quote: "Also, you know, if someone asks for a new feature and is too pushy in this forum, they will probably get the answer: 'it is open source, take the source code and do it yourself.'"

However, as somebody (Marco?) said earlier, if somebody really had a requirement he could always ask whether any of the core developers were available for paid contract work, if necessary to create a "fork" for him if what he needed was likely to be incompatible with the mainstream.
Quote: "The problem is, we don't have any 'real' customer service here."
:D :D :D
Have you tried to get 'real' customer support from some companies?
Not many companies have support that is good enough to deserve the name...
Take for instance the logical commercial party to compare with: Delphi. When buying Delphi you only got installation support. For actual issues or other help you needed a subscription (usually 25% of the purchase price per year, several hundred up to 500-600 dollars/euros).
This is an old version of "Real Programmers Don't Use PASCAL"
In a newer version there is also
"Real programmers code a pattern matching for the Jupiter moons. In the last 570 free bytes of the Voyager IV. "
* FORTRAN --"the infantile disorder"--, by now nearly 20 years old, is hopelessly inadequate for whatever computer application you have in mind today: it is now too clumsy, too risky, and too expensive to use.
* PL/I --"the fatal disease"-- belongs more to the problem set than to the solution set.
* It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.
* The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence.
* APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums.
Quote: "But both Pascal and Dijkstra need to be judged by their contribution to the profession, and since today's popular languages are overwhelmingly tending towards managed variables and robust type checking I think that Wirth and Dijkstra have won hands-down."
Don't forget C++20's modular programming proposals!
Now I'm not saying that Pascal's perfect, and I'm not saying that criticism of the language and- dare I say it- community is unjustified. But both Pascal and Dijkstra need to be judged by their contribution to the profession, and since today's popular languages are overwhelmingly tending towards managed variables and robust type checking I think that Wirth and Dijkstra have won hands-down.
It's most unfair that Dijkstra died too young to have expressed a robust opinion of C, C++ and Javascript, since it leaves us having to make do with Stroustrup's "C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off". With friends like that...
And the best thing about this is, it is mostly free.
Shortly before beginning the GNU Project, I heard about the Free University Compiler Kit, also known as VUCK. (The Dutch word for “free” is written with a v.) This was a compiler designed to handle multiple languages, including C and Pascal, and to support multiple target machines. I wrote to its author asking if GNU could use it.
He responded derisively, stating that the university was free but the compiler was not. I therefore decided that my first program for the GNU Project would be a multilanguage, multiplatform compiler. -- https://www.gnu.org/gnu/thegnuproject.html
"Real old-school programmers had no computers; they carved their programs into sheets of ice with an old walrus tooth."
The above isn't a problem specific to Pascal. It seems PC-class software is driven by upgrade cycles fueled by featureitis.
Quote: "This is driven by the economics model. Big Iron has its long-term support contracts. There is steady money coming in, a focus on keeping customers happy enough to renew the contract, and so there is little need for featureitis."

All very true. It would be nice if some of these companies using the "upgrade model" also offered a "stability model" for those who are more interested in ever-increasing reliability and stability than in fancy new features. As the saying goes: it is what it is. Oh well...
The PC world was based primarily on competition, and licensing subscriptions went away ages ago (only to return in the SaaS/cloud era), so there are all sorts of attempts at lock-in, and featureitis, because change is relatively easier than on Big Iron (where software changes are often synonymous with hardware changes).
Quote: "Sometimes we discover unpleasant truths. Whenever we do so, we are in difficulties: suppressing them is scientifically dishonest, so we must tell them, but telling them, however, will fire back on us." https://www.cs.virginia.edu/~evans/cs655/readings/ewd498.html

I couldn't resist making a comment on that one. The three things he said that I find ever more brutally applicable to the field are:
1. The tools we use have a profound (and devious!) influence on our thinking habits, and, therefore, on our thinking abilities.

2. Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer.

and

3. Simplicity is prerequisite for reliability.

The first one is particularly insidious and amazingly widespread.
Quote: "1. The tools we use have a profound (and devious!) influence on our thinking habits, and, therefore, on our thinking abilities."
One has to be careful with that philosophy, it can lead in inconvenient directions.
For example, many computer scientists profess to be followers of Whorf and to be searching for a notation which prevents common programming errors, while at the same time being prepared to countenance syntaxes which can't be parsed easily and semantics which require complex run-time support.
If you have the stomach for it, https://www.jsoftware.com/papers/tot.htm is another classic paper.
MarkMLl
Quote: "For example, many computer scientists profess to be followers of Whorf and to be searching for a notation which prevents common programming errors, while at the same time being prepared to countenance syntaxes which can't be parsed easily and semantics which require complex run-time support."

It fully depends on what your needs are. For example, garbage collection is pretty slow, but if you don't need that performance and simply want to get something done without having to think about (a) ownership or (b) memory management, it works great (see Java: the performance penalty is not stopping it from being one of the most prevalent languages).
It depends very much on what you do. I think this is a bit overly black and white. And one of the problems is that the currently popular choices change so often that they might not even be there any more by the time you quit your current job.
...
If you are self-employed, or work in a situation where IT is part of a product or service, it is less important, and your long-term maintainability becomes important. And nowadays Lazarus' continuity is actually a plus there.
If you are a hobbyist, that long-term view is even more important, since your development will go more slowly and you want to be able to maintain it for a while to reap the rewards of your effort.
Any comment on my reasoning and/or whether I've arrived at the proper solution?
Quote: "For example, garbage collection is pretty slow, but if you don't need that performance and you simply want to get something done without having to think a. about ownership or b. about memory management, it works great (see Java, the performance penalty is not stopping it from being one of the most prevalent languages)."

In many cases garbage collection can be faster than heap allocation.
Or, my favorite example for this is null pointers. Null pointer errors are extremely tedious, even in pretty "safe" languages like Java.
But there is no need for them to exist. In, for example, TypeScript, Swift or Kotlin, there are no null pointers, there are only nullable variables. This means every variable that can become null must be marked as such. If a variable is marked as nullable, everywhere it is used you need to check that it is assigned (provable at compile time). If you know a variable will never be null, you do not mark it, and then you (and the compiler) know that, whatever happens, this variable will always hold a valid pointer.
It is very silly that Pascal went from having only records and objects on the stack, which are never null, to classes where every class object can be null.
Quote: "It is very silly that Pascal went from having only records and objects on the stack, which are never null, to classes where every class object can be null."
Quote: "Would I be right in assuming that an object on the stack has its lifetime defined by the scope of the function in which it is declared?"

That's correct. Just like any other variable, an object (not a class) on the stack "vanishes" when the function/procedure goes out of scope.
Quote: "If that is correct then ISTM that it implies excessive reliance on global variables,"

Not really. The program can use heap blocks for data it wants to be persistent. That said, heap blocks do share some of the global-variable problems, plus an additional one: one must remember to free them when no longer needed. Classes give you the _worst_ of both worlds: there is almost always at least one class variable that is a global variable, and its data is allocated on the heap. Gotta love it. :)
Quote: "But at that point if you're using a heap block you've already got a (nullable) pointer to it."

True.
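A minimal sketch of the difference discussed above, assuming FPC's objfpc mode: the old-style object lives on the stack and can never be nil, while the class variable is a nullable reference that must be constructed on the heap and freed explicitly.

```pascal
program Lifetimes;
{$mode objfpc}

type
  // Old-style object: storage is on the stack, never nil,
  // and vanishes automatically when the procedure returns.
  TPointObj = object
    X: Integer;
  end;

  // Class: the variable is only a reference; it is undefined/nil
  // until Create allocates the instance on the heap.
  TPointClass = class
    X: Integer;
  end;

procedure Demo;
var
  P: TPointObj;    // stack storage, usable immediately
  C: TPointClass;  // just a nullable pointer at this point
begin
  P.X := 1;
  C := TPointClass.Create;  // heap allocation
  try
    C.X := 1;
    WriteLn(P.X + C.X);
  finally
    C.Free;  // forget this and the heap block leaks
  end;
end;  // P is reclaimed automatically here

begin
  Demo;
end.
```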
Quote: "In many cases garbage collection can be faster than heap allocation."

Moving garbage collectors could also move data around to improve cache locality.
Quote: "Or, my favorite example for this is null pointers. Null pointer errors are extremely tedious, even in pretty 'safe' languages like Java. [...] If a variable is marked as nullable, everywhere it is used you need to check that it is assigned (provable at compile time). [...] It is very silly that Pascal went from having only records and objects on the stack, which are never null, to classes where every class object can be null."
Quote: "Moving garbage collectors could also move data around to improve cache locality."

Which of the main GC languages do this, other than just after allocation when objects are still in the same generation, of course?
My recollection is that the Smalltalk "Green Book" documents a GC that does that: alternately consolidates data to one end of the heap or the other.
Quote: "My recollection is that the Smalltalk "Green Book" documents a GC that does that: alternately consolidates data to one end of the heap or the other."

I'm just trying to distinguish alleged possible features from features that are actually proven. Quite often these kinds of optimizations are demonstrated with a crafted one-piece source at some conference, but it is often hard to integrate them successfully into a production toolchain, because there is much more competition for a place in the cache then, and it requires overall optimization.
Quote: "In many cases garbage collection can be faster than heap allocation."

Quote: "Moving garbage collectors could also move data around to improve cache locality."
Well, actually that sounds like replacing one tedious chore with another.
I never got the obsession with nullability. Basically it only helps when you change a piece of source from non-nullable types to nullable types, and for that case a limited language extension seems to be overkill IMHO.
Most code is made with nullable types, and remains so.
So if you allocate a lot of temporary objects (as is often done in Pascal with, for example, streams or string lists), GC might even be faster. But the thing about GC is that it is unpredictable when the gen-2 collection runs, which can cause noticeable lags.
Also memory fragmentation simply doesn't happen with such GC systems.
It all depends on what you need.
Which is exactly why I like Pascal old-school objects: they live on the stack, and it always feels like a complete waste of resources to create a TStringList for only one small procedure. But that's historic, and we can't travel back in time and tell Borland to do it any other way.
Btw, while the gen-2 GC lag is not that noticeable on the desktop, even in GUI applications, on Android, where you control your device with your fingertips, every lag is instantly noticeable, which is why Google had to get creative. But that's a whole different story.
Quote: "Which is exactly why I like Pascal old-school objects: they live on the stack, and it always feels like a complete waste of resources to create a TStringList for only one small procedure."
It would be, maybe, if Object Pascal strings weren't already refcounted and copy-on-write (which avoids a lot of copying and recreating when passed through the call chain), and .NET a purely GC solution without the deduplicated immutable string table duct-taped on.
Ref counting is a kind of GC, and the worst GC for multithreading.
The atomic incrementing/decrementing really kills all parallelism between threads using the same string.
And FPC's implicit exception handling with longjmps makes it even worse.
Quote: "It would be, maybe, if Object Pascal strings weren't already refcounted and copy-on-write (which avoids a lot of copying and recreating when passed through the call chain), and .NET a purely GC solution without the deduplicated immutable string table duct-taped on."

In the case of strings (and string lists), sure, but for example when you use a temporary memory stream to dump a few KB into, this is pretty much unnecessary overhead.
Quote: "Of course it does. At a given time not all allocated memory will be referenced by the program. And not all those tiny bits can fit any size program."

Depends on what you call fragmentation, because even though this memory might not be referenced anymore, it is still allocated, so you always have a contiguous block of memory. In .NET (I think this does not work in Java) you can poke the GC to clean up, and at any point in time you have only one contiguous block of allocated memory.
Quote: "A TStringList has a fixed size, since the contained strings don't add to the class size. If it is used a lot, there might be a dedicated freelist for it, making it fairly cheap to recycle."

Don't all classes have a fixed size? That said, I've already written a lot of classes like lists, hash sets and bitsets as advanced records for temporary usage. But sadly, as advanced records do not support inheritance, it is sometimes a real pain in the neck to do so.
But anyway, one can discuss from a theoretical viewpoint till the cows come home; in the end it is all about benchmarking an application that has behaviour equivalent to your application's.
Quote: "Do all threads still halt on GC or has that meanwhile been fixed? I have been out of it for a good decade now, bar attending some FOSDEM lectures."

AFAIK there are a few mechanics in Android Java that classical JVM Java does not have. For one, it might optimize the GC away completely by detecting that an object is function-local and putting it on the stack. Also, the programmer can call the GC manually when the app is idle anyway, so as not to interrupt the user flow. About threading: you can start a thread to be the "GC root", meaning that for objects created by that thread the GC will only consider this thread (I think you can also relate other threads to it), so only that thread needs to be halted.
Quote: "Which is exactly why I like Pascal old-school objects [...] I have started to write my own records/objects for such small tasks instead of using these classes."

I already try to do this, but the problem is, the whole LCL and FCL infrastructure is built around common super classes (like TStream, TStrings, whatever), so very often it is just much more convenient.
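A sketch of the idea, assuming FPC with advanced records enabled; TSmallStrList is a made-up name for illustration, not an LCL/FCL class. The whole structure lives on the stack, so there is nothing to Create or Free:

```pascal
program StackList;
{$mode objfpc}{$modeswitch advancedrecords}

type
  // A tiny fixed-capacity string list as an advanced record,
  // an alternative to a heap-allocated TStringList for small,
  // short-lived jobs inside one procedure.
  TSmallStrList = record
  private
    FItems: array[0..15] of string;
    FCount: Integer;
  public
    procedure Add(const S: string);
    function Count: Integer;
  end;

procedure TSmallStrList.Add(const S: string);
begin
  FItems[FCount] := S;
  Inc(FCount);
end;

function TSmallStrList.Count: Integer;
begin
  Result := FCount;
end;

procedure Demo;
var
  L: TSmallStrList;  // stack storage, no Create/Free needed
begin
  L := Default(TSmallStrList);  // zero-initialize the record
  L.Add('a');
  L.Add('b');
  WriteLn(L.Count);
end;

begin
  Demo;
end.
```

The downside is exactly the one mentioned above: such records don't descend from TStrings, so they can't be handed to LCL/FCL routines that expect the common super classes.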