And a digression regarding assumption (1) - if this approach was strictly followed, Delphi or Lazarus would never have been created
And Linux too
This assumption contradicts the iterative approach to software development. And what is the development of Unix if not the negation of assumption (1)? Perhaps at the dawn of Unix development this seemed like a good solution (trauma after Multics?). But then apparently it was abandoned. And quite rightly so.
There has actually been quite an interesting public debate on different mailing lists between Torvalds and Tanenbaum about exactly this topic: whether a monolithic kernel like Linux is preferable to a micro or hybrid kernel. I am not an operating system expert myself, even though it's a topic that interests me quite a lot, but I must say that I also think that the monolithic approach of Linux is definitely a source of many of its problems, namely that driver support has been one of Linux's biggest problems forever. E.g. since the last kernel update on my openSUSE my network driver has been working extremely unreliably, often breaking the network connection every 5-10 minutes.
This is a problem that is arguably handled much better on microkernel or hybrid systems (such as Windows).
But the reason may be simple. Creating software with a GUI is much more time-consuming and requires more experience and knowledge. That's probably obvious. Second, command-line tools are indeed easier to use. For a person creating a simple project, this may be enough. But for large projects, "tapping on the keyboard" is too time-consuming. For me, tools like an IDE (or even RAD) are not a deterrent. But many times I've seen a lot of people just get completely lost in them. They are unable to handle them.
There is also another reason, and that is automation. If you build your program with only command-line tools, it can easily be built and deployed automatically. This is something where Lazarus is severely lacking. I myself spent dozens of hours building the tooling required to make my Lazarus projects DevOps-ready.
And this is the core of the aforementioned Unix philosophy (or guidelines, however you want to call it). If you build your tools in a way that they can be used by both a human and a machine, you make this possible without having to sink hours upon hours into trying to get it to work yourself.
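To make this concrete, here is a minimal sketch of my own (not taken from any particular tool) of what "usable by both a human and a machine" means in practice: a program that reads lines from stdin and writes results to stdout, so you can run it interactively, feed it from a file, or put it in the middle of a pipeline.

    // filter.cpp - a minimal Unix-style filter (illustrative sketch only).
    // Reads lines from stdin, writes one result per line to stdout, so it works
    // the same whether a human types into it or another program pipes data in:
    //   ./filter < input.txt | sort -n
    #include <iostream>
    #include <string>

    int main() {
        std::string line;
        while (std::getline(std::cin, line)) {
            // Toy transformation: report the length of each line.
            // A real tool would do its one job here and nothing else.
            std::cout << line.size() << '\t' << line << '\n';
        }
        return 0;
    }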
Also, I would not subscribe to the idea that large projects are better served by a GUI. I have worked on large open-source projects both in Java with a fully integrated IDE (IntelliJ) and in C++ with only command-line tools (vim + git + cmake, etc.; later I moved to emacs as the editor), for nearly one year and over two years respectively. I've managed to use both of them quite well, but if it's about pure productivity, I would say that the C++ toolchain was better. Mostly because I was already in the console, which made using the console tools easier, while in the IDE everything needs to be clicked multiple times, which takes a lot of time. But overall I think they both were not that different with respect to efficiency and usability.
One thing that is easier with GUIs is building your toolchain in such a way that the application can grow organically. Working on an existing cmake file is easy enough, but building one from scratch can be quite tedious. There is a reason why for all of my C and C++ projects I basically just copy and paste the build system from my previous project with small adjustments.
But that's just initial overhead; once it is running, maintaining it is no more difficult than with a GUI.
And aren't these concepts (solutions) a bit older than Unix? Or at least some of them? Unix was not created in a computing void. It was just one of many OSes. Didn't its success have something to do with AT&T's power at the time? And why did the US antitrust office go after this company at one point?
That's not the point; all of those things are parts of the MacOS (and iOS) system that are taken directly from modern FreeBSD. Apple has to disclose this due to the BSD license agreement, which is why we know that this code is still in there, because if it weren't, Apple would be the first to remove these mentions from their website.
The point I was making is that the MacOS and iOS system is, even by the strictest definition, a Unix: it's based on BSD and contains many features from modern FreeBSD verbatim.
And are these tools based on the same code? No. If only because LT would have a problem in court (copyright). They have been rewritten. Only their behavior was supposed to be similar to that of the Unix tools. The fact that they support new devices is due to the fact that their source code is updated. Which of course is a plus for Linux.
Probably a lot of the code is quite reusable, so I would be surprised if there is that much new stuff in there. But that's not the point; the point is that these tools still follow the same design. It's a system and a toolchain whose design has proven useful for nearly 50 years now. Sure, some tools like grep got a bunch of new functions, but at its core (and from a user's perspective) it's still the same tool that was written by Ken Thompson in the 70s.
Access to hardware and services is not difficult, but it is incomplete and annoying. Since everything is overseen by the OS (rightly so), this OS should provide convenient access (an API) to hardware and services. These different ways that Linux distributions use are called fragmentation. It was the same with Unix. That's why POSIX was created (and of course it didn't fully solve the problem). Linux distributions are not different systems - they are one and the same system: Linux. They all have the same things: kernel, X Window/Wayland, filesystem, audio subsystem, etc. The differences are in minor matters. But they are annoying. Even Torvalds complains about it.
Actually, when it comes to accessing hardware information, Linux distributions can differ widely, because this information is often provided through pseudofiles (e.g. under /proc and /sys), and these pseudofiles are quite different on different systems (and this is really annoying, tbh).
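As a concrete example (a sketch of my own; the exact fields differ between kernels, architectures and distributions), hardware information like the CPU model is exposed as plain text in the /proc pseudo-filesystem, so reading it is just reading a file:

    // cpuinfo.cpp - read hardware information from a Linux pseudofile (sketch).
    // /proc/cpuinfo is generated by the kernel on the fly; its exact content
    // varies between kernel versions, architectures and distributions.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        std::ifstream cpuinfo("/proc/cpuinfo");
        if (!cpuinfo) {
            std::cerr << "could not open /proc/cpuinfo\n";
            return 1;
        }
        std::string line;
        while (std::getline(cpuinfo, line)) {
            // Print only the CPU model lines; everything else is skipped.
            if (line.rfind("model name", 0) == 0)
                std::cout << line << '\n';
        }
        return 0;
    }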
And you're absolutely right, if it wasn't for Microsoft's dictate, it would also happen with Windows. However, the problem with the annoying elements of Windows (e.g. forced updates that mess up the system and install crap-programs from the MS store, tracking users pompously called telemetry, etc.) is not due to the fact that Microsoft is the sole owner of Windows (source code, etc.) but because Microsoft has no real competition. If they had competition, it wouldn't even cross their mind to "fuck" users.
This is quite interesting, because here I think you get this completely the wrong way around. The reason MS implemented all those features is exactly because of the competition.
If Apple hadn't shown that locking down the system is feasible, Microsoft wouldn't have done it. App stores are also a new phenomenon; previously it was not possible to monetize software usage on the OS, now it is. By locking down the system and taking a 30% cut of all transactions through their (mandatory) store, Apple has demonstrated that this is a feasible market model.
Without the competition from the mobile market, Microsoft would not have added all these mobile-adjacent features; Microsoft sees what the competition is doing, sees that it works, and copies their methods.
But I must admit that Microsoft is especially bad at it (the Microsoft Store still does not work 50% of the time even when I want to use it). But it's not like they got these ideas in a vacuum.
Yes, you can call it the Windows specification prepared by Microsoft. But this does not mean that such a specification will be worse (Windows was not created by high school students, after all). The C language is standardized but still terrible. Somehow standardization didn't make it better. So the transparency of the standards committee was of no use. And where is the responsibility of the creators of C? The rationale for this is "well, that's how Ritchie and his colleagues implemented it."
I have used C and C++ quite a lot in the past, and first of all, looking at the different versions of C, standardization made C much, much better. But an even better example is C++, because C++ has received massive improvements through all the major standard versions. In 2011 the C++ committee published the C++11 standard, which was a pretty bold update, completely revamping a lot of C++ and basically turning it into a different language. Original C++ was a lot like ObjectPascal is today; modern C++ bears nearly no resemblance to it anymore. Back then there was the fear that no compiler would implement this and that it would be infeasible, but guess what: after it was standardized, the compilers adapted, and today you rarely if ever see the old C++ style used. Since then there have been a few more of these updates, C++14, C++17 and C++20 (no prizes for guessing the years they came out), which continue to massively improve the language.
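To illustrate what I mean by "a different language" (a toy example of my own, not from any particular codebase), here is the same task written in pre-C++11 style and in modern C++:

    // style.cpp - the same task in pre-C++11 and in modern C++ (toy example).
    #include <iostream>
    #include <vector>

    // Pre-C++11 style: explicit iterator types, manual loops.
    int sum_old(const std::vector<int>& v) {
        int total = 0;
        for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
            total += *it;
        return total;
    }

    // C++11 and later: auto, range-based for, lambdas, brace initialization.
    int sum_new(const std::vector<int>& v) {
        int total = 0;
        for (auto x : v)
            total += x;
        return total;
    }

    int main() {
        std::vector<int> v{1, 2, 3, 4};              // brace init (C++11)
        auto doubled = [](int x) { return 2 * x; };  // lambda (C++11)
        std::cout << sum_old(v) << ' ' << sum_new(v) << ' ' << doubled(21) << '\n';
    }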
It's a bit similar with C, where the big update was C99, which massively improved the language.
Both C and C++ are success stories that demonstrate the power of standardization. So yes, standardization made C (and C++) much better, and it is probably the main reason why C++ is still so widely used.
And here is the funny thing: you know who also develops their own C++ version, which in the past often added a bunch of non-standard features to the language? Microsoft! And here the development community has made their opinion pretty clear. Most of the new Microsoft features were not that well received (such as the .Net integration), while the standardization efforts were very well accepted. Today Microsoft is not adding as many non-standard features to C++ as they used to, and the reason is simple: it turned out that an expert committee with user input was just better at designing a language than Microsoft's internal team was.
Committees (commissions) are not democratic. Committees can also ignore users (and often do). Experts may have different motives, not necessarily substantive ones, because they are people (and people have different views, needs, goals and flaws - just think of the committee debating Algol and the disputes between those people). POSIX was written by people delegated from corporations. The people on such committees represent the needs of the corporation. Corporations agreed for themselves, not for users or developers. The difference between one corporation and such a committee is that in the latter case there is a possibility that a "camel" will be created (a camel is a horse designed by a committee). Somehow I don't see Torvalds allowing development of the Linux kernel by some committee (his project, so I guess he has the right to do so).
Do I understand your argument correctly: because these committees have members from corporations who only represent the needs of their corporation, they are worse than one corporation ruling alone, as in the case of Microsoft?
Sorry, you lost me there. Sure, committees might not be perfect, but they are a lot better than one company having the sole decision power and not having to make any of its discussions and decisions public.
Committees also decide how to do something. The difference is in the number of entities. Sometimes it's better not to make rotten compromises (because, for example, out of 7 participants, 2 want something contrary to what the others do and they don't necessarily have to be right). Besides, Unix fragmentation was due to the fact that individual corporations: "that's how they implemented it in their versions of Unix." And then: "well, that's how individual Linux distributions have implemented it."
There is still a big difference between making a decision behind closed doors and making a decision where everyone involved puts their name on it, all the discussions behind those decisions are made public, and there are public mailing lists for the general public to discuss these issues.
Those aren't even in the same ballpark of transparency and accountability.
Before there was POSIX, it was: "Unix became an AT&T whim." And then each of the big Unix corporations created their own version of Unix. Sometimes it's better to have one decision-making place. Otherwise it gets messy. In technical projects you need a manager. Otherwise chaos ensues. And I don't think the Windows API is a huge mess (yes, it could be better, but it's still better than Linux). Just look at Lazarus: which version of it has more problems? The one for Windows or the one for Linux?
[...]
Because it is the truth. Unix was born a long time ago, in a different information age. Yes, it was developed further. And those versions that are still in use are being maintained, perhaps even improved. But make no mistake, its best days are over. It has largely been replaced by Linux, because Linux does not require expensive licensing fees and its source code is available (if someone has the patience to poke around in it).
Yes, Unix was born a long time ago, and the original Unix systems mostly (except for FreeBSD and MacOS) died long ago. Unix became a common standard in the 90s, 30 years ago. It has now been a standard for longer than it was an actively used operating system. When someone says Unix today, they mean an operating system that conforms (mostly) to the standards set out by the Single Unix Specification and POSIX. All this discussion about what happened 40 years ago does not matter; time has moved on, and when we talk about Unix today we talk about a set of standards, not the operating system family from 50 years ago.
The fact that the Windows technical specification (WinAPI) is not approved by some sacred committee does not detract from its usefulness. Sometimes it's better that no committee "dips its fingers" into the technical specification and "creates a camel instead of a horse." You keep forgetting that POSIX was created after Unix was fragmented, not before Unix was created. Besides, I see no reason why a project should be inferior just because it doesn't use someone else's guidelines. It can be worse, but it doesn't have to be.
As I detailed above, yes, committees are not perfect, but they have proven (e.g. with C and C++) to allow for massive improvements over corporate handling by the likes of Microsoft.
And the Windows API is an absolute mess (as detailed before). Sure, one of the issues there is not really Microsoft's fault: they need to keep backwards compatibility with ancient times, which is just part of that business model. But still, when looking at the Windows API it is pretty obvious from the style alone when which functions were added. It feels like a massive patchwork rather than a single coherent specification.
I've read a lot of the POSIX standard; in fact it's one of my go-to documents when developing for Linux (much better than the man pages, exactly because it contains the notes and discussions of the committee describing why something is a certain way), and it definitely is a coherent document.
And I see no reason to treat Linux distributions as completely different OSes. It is one and the same OS. They are all based on the same: kernel, file system, X Window, etc. Differences come from little things. But these little things are annoying.
So I have a Linux server and a Linux desktop system, and they are completely different systems. On my desktop I have a graphical desktop environment, which I don't have on my server. My filesystem is ext4 on my server and btrfs on my desktop.
I recently switched my desktop system to a new openSUSE, which now uses systemd like my server; previously I used SysV init (which I still consider to be the better choice).
Those systems are as different as two operating systems with the same kernel and following the POSIX spec can get. Nearly nothing on them is the same. If these aren't two different operating systems, I don't know what is.
Yes, I agree completely. But this also results from the exoticism of languages other than imperative ones, inherited from Algol. And it applies equally to Linux, whose components (e.g. kernel, system services, etc.) are mostly written in C. However, as for JavaScript or Python, maybe it's a good thing that it is 
The amazing thing about Linux is that you do not need C compatibility. The system API for Linux is the C standard (augmented by the POSIX standard), provided by the libc. But if you look at the RTL source code for Linux in FPC, you will see that it does not link against libc. In fact, you can use the Linux syscalls entirely without linking to the C API.
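For illustration only (this is a C++ sketch of the general idea, not how the FPC RTL actually implements it, and it is specific to x86-64 Linux): a write to stdout can be issued directly through the kernel's syscall interface, without going through libc's write():

    // raw_write.cpp - issue the Linux write(2) syscall directly (x86-64 only).
    // Sketch of the idea that the kernel ABI, not libc, is the real system API.
    // Note: compiled normally this still links libc for program startup; the
    // point is that the write itself never touches a libc function.

    // Raw syscall: rax = syscall number, rdi/rsi/rdx = arguments.
    static long raw_write(int fd, const void* buf, unsigned long count) {
        long ret;
        asm volatile("syscall"
                     : "=a"(ret)
                     : "a"(1L),                      // 1 = __NR_write on x86-64
                       "D"((long)fd), "S"(buf), "d"(count)
                     : "rcx", "r11", "memory");
        return ret;
    }

    int main() {
        const char msg[] = "hello from a raw syscall\n";
        raw_write(1, msg, sizeof(msg) - 1);          // fd 1 = stdout
        return 0;
    }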
Even more so, and this is the whole point I was making from the very beginning: it's one of the goals of the POSIX standard to make all features that are available through the code API also available through command-line tools, which can be called by other programs and produce machine-parseable output that can be fed into other programs.
Most things on a POSIX system that you can do with C you can also do with bash. And if you can do it with bash, you can do it with Lisp, Python, JavaScript, Haskell, Prolog, whatever you want. Because after all it is nothing more than reading and writing strings from and to files, which is something that every language can do in some form or another.
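A small sketch of what I mean (my own example; the shell equivalent is noted in the comment): listing a directory through the POSIX C API does the same job as `ls -1` on the command line, and either way the result is just text that the next program can consume:

    // listdir.cpp - list a directory via the POSIX API (sketch).
    // The command-line equivalent is simply:  ls -1 /tmp
    // Either way the result is text on stdout that can be piped onwards.
    #include <dirent.h>   // POSIX: opendir, readdir, closedir
    #include <cstdio>

    int main() {
        DIR* dir = opendir("/tmp");
        if (!dir) {
            std::perror("opendir");
            return 1;
        }
        while (dirent* entry = readdir(dir)) {
            std::printf("%s\n", entry->d_name);   // one entry per line, like ls -1
        }
        closedir(dir);
        return 0;
    }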
I use many different languages on a daily basis, usually by the "best tool for a certain job" principle. And I also use both Windows and Linux on a daily basis. And from my personal experience I can attest that Linux gives you much more freedom for the choice of tools and the way you want to do something.
On Windows everything works for Windows-centric solutions such as .Net or Visual C++ or even Lazarus (but even there I sometimes found that some WinAPI functions were not implemented and I had to add them myself). And to Microsoft's credit, these do work really well. But as soon as you leave this highly specialized area of tooling, everything is much more effort than it needs to be.