
Author Topic: Best way to parse file.  (Read 4815 times)

VisualLab

  • Sr. Member
  • ****
  • Posts: 290
Re: Best way to parse file.
« Reply #30 on: January 29, 2023, 05:01:40 pm »
That's the Unix philosophy: instead of having one app with all the functionality, you have multiple specialized programs that each do one thing very well and whose output can be reused by other programs.

The operating system is a technical product. Is a car, a plane, or even an oven or a stool based on some philosophy? Of course not. The term "xyz philosophy" is vague. It's a ploy that tries to justify the shortcomings of a product. Technical products (even ones as simple as a stool) are created from a set (list) of design guidelines (assumptions) made before or during their creation (e.g. when modifying a prototype). Linux is a Unix clone. Unix originated in the 1970s as a commercial, very expensive, and confusing minicomputer OS (see The UNIX-HATERS Handbook). Linux originated in the 1990s out of nostalgia, a kind of sentiment on Linus Torvalds' part for the declining Unix. Today, when Unix is almost extinct, people around Linux are trying to implement solutions in it that Unix never had, because computers back then had a more archaic construction. These attempts are made on the principle of "let's do it as if it had been done in Unix". But that's a utopia, because no one knows how it would have been implemented. It probably wouldn't have been implemented at all, because the Unix corporations weren't interested in providing Unix computers for ordinary people. Ordinary users wouldn't have gained anything from it anyway, because Unix was of little use on a PC. And that's why today there is Microsoft with its Windows, while those corporations have "gone to hell".

Another example where this is useful: things like reading out hardware information can be quite annoying. Linux systems usually provide pseudofiles for this, but every distro might choose to put the pseudofiles in a different directory.

True. In Linux, programmatic access to information about hardware and services is a pain, because Linux is primitive and messy. There is no uniform way to access resources and services in the system. Device files are an archaic solution. The aforementioned mess in the directory layout is just the tip of the iceberg. Another example is configuration files and where they are stored.

And the main advantage of this is that it is very simple to debug: since all of these programs give the data in both human- and machine-readable form, you can debug your APIs by simply looking at the program output.

What is readable for the user (administrator) is not necessarily useful for the programmer. Linux still makes a habit of using text files after all these years, even though text is only useful for displaying the output of a program. For data processing, text is a pain (character encoding, content parsing, etc.). It is an ancient solution. Today, it should absolutely not be used as a way of exchanging data: OS-to-program, program-to-OS, or program-to-program. But it's used in Linux because of the laziness of Linux programmers (and perhaps a moronic Unix sentiment).

So there are a lot of reasons to do this. It's one of the Windows diseases that Microsoft thought everything must be accessible through code APIs and DLLs, whose calls must be implemented in each program that tries to use them.

On the contrary. The existence of APIs and libraries is a blessing for a programmer writing software for a given OS. If it weren't for the API, the programmer would have to implement every little thing himself. Linux does not have any API. This is one of the main reasons why most companies don't provide drivers or commercial software for Linux.

Besides, since when is there any compulsion to implement library calls in every program? You can choose not to use them and "sculpt" the solution yourself. But the API saves you a lot of work. I guess that's what software development is all about: not reinventing the wheel. That's why there are so many different programs for Windows, and it's much easier to write your own program for it (even using Lazarus).

By having different programs provide the data in both human- and machine-readable form, it is much easier to get access to that data and to learn how to use it.

Again: what is readable by the user (administrator) is not necessarily useful to the programmer. It's like inferring the operation of a device just by looking at it from the outside. No engineer or builder works that way (nor do doctors).

Warfley

  • Hero Member
  • *****
  • Posts: 1499
Re: Best way to parse file.
« Reply #31 on: January 29, 2023, 07:40:33 pm »
The operating system is a technical product. Is a car, a plane, or even an oven or a stool based on some philosophy? Of course not. The term "xyz philosophy" is vague. It's a ploy that tries to justify the shortcomings of a product. Technical products (even ones as simple as a stool) are created from a set (list) of design guidelines (assumptions) made before or during their creation (e.g. when modifying a prototype).
What are you talking about? There is no product philosophy, just a set of guidelines for developing the product? Those are two phrases that mean exactly the same thing. A design philosophy is nothing other than a set of guidelines.

And for Unix there is the Unix philosophy: a set of design guidelines that were developed by Ken Thompson and later codified within Bell Labs for the development of Unix.

Quote
Today, when Unix is almost extinct, people around Linux are trying to implement solutions in it that Unix never had, because computers back then had a more archaic construction. These attempts are made on the principle of "let's do it as if it had been done in Unix". But that's a utopia, because no one knows how it would have been implemented. It probably wouldn't have been implemented at all, because the Unix corporations weren't interested in providing Unix computers for ordinary people. Ordinary users wouldn't have gained anything from it anyway, because Unix was of little use on a PC. And that's why today there is Microsoft with its Windows, while those corporations have "gone to hell".
First, Unix is not almost extinct. iPhones make up around 20% of the consumer OS share, macOS another 5%; those are real Unix systems whose source code is a direct descendant of the original Unix code. Windows, for comparison, is at about 30%, so there are almost as many Unix systems in use today as there are Windows systems.

Second, the claim that Linux implements solutions that Unix did not have is also wrong. Some of the most essential parts of current Linux systems, things like Bash, grep, cat, System V init, etc., which are part of the operating system (as the OS is nothing more than a set of programs, like the Explorer, Task Manager, or system settings in Windows), have been around since Unix.
This thread is about parsing netstat output, and netstat is literally a tool that predates Linux and stems directly from the original Unix world.
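
Since that is the concrete task here, here is what this reuse looks like from Free Pascal: a minimal sketch that runs netstat and splits its text output into fields. The column positions are an assumption based on typical "netstat -an" output (Proto, Recv-Q, Send-Q, Local Address, Foreign Address, State) and may need adjusting per platform:

program NetstatSketch;
{$mode objfpc}{$H+}
uses
  Classes, SysUtils, StrUtils, Process;
var
  Raw, Proto: string;
  Lines: TStringList;
  i: Integer;
begin
  // Run netstat and capture its stdout as one big string.
  if not RunCommand('netstat', ['-an'], Raw) then
    Halt(1);
  Lines := TStringList.Create;
  try
    Lines.Text := Raw;
    for i := 0 to Lines.Count - 1 do
    begin
      // First whitespace-separated field is the protocol on most systems.
      Proto := ExtractWord(1, Lines[i], [' ']);
      if (Proto = 'tcp') or (Proto = 'udp') then
        WriteLn(Proto, '  local=', ExtractWord(4, Lines[i], [' ']),
                '  remote=', ExtractWord(5, Lines[i], [' ']));
    end;
  finally
    Lines.Free;
  end;
end.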

Quote
True. In Linux, programmatic access to information about hardware and services is a pain, because Linux is primitive and messy. There is no uniform way to access resources and services in the system. Device files are an archaic solution. The aforementioned mess in the directory layout is just the tip of the iceberg. Another example is configuration files and where they are stored.

Linux is not an operating system; Linux is a kernel. It's as much an operating system as the NT kernel in Windows is. If you have ever used the Windows API you might know that there are different kinds of functions: the officially documented functions like CreateFile, as well as the Nt or Zw versions (like NtCreateFile). The latter are the kernel functions, and they are explicitly not to be used by application developers; they are undocumented and may change at any point.

So yes, Linux the kernel does not provide a standardized way to access certain information about the external hardware, but the NT kernel doesn't either. Windows provides a stable API only as part of the whole OS package, and the same can be said about Linux distributions. So while there might be differences in the pseudofiles between Debian, SUSE, and Arch, those are also different operating systems. Within each of these distros, access to the information is usually quite stable: openSUSE still uses the same pseudofile structure as it did 20 years ago.
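
As a concrete illustration of that stability, a small Free Pascal sketch that reads such a pseudofile; /proc/cpuinfo and its "model name" key are assumptions that hold on typical x86 Linux systems but differ on other architectures:

program ReadCpuInfo;
{$mode objfpc}{$H+}
uses
  Classes, SysUtils;
var
  Info: TStringList;
  Line: string;
begin
  Info := TStringList.Create;
  try
    // /proc/cpuinfo is generated on the fly by the kernel;
    // each line has the long-standing "key : value" layout.
    Info.LoadFromFile('/proc/cpuinfo');
    for Line in Info do
      if Pos('model name', Line) = 1 then
      begin
        WriteLn(Trim(Copy(Line, Pos(':', Line) + 1, MaxInt)));
        Break; // the first core is enough for a demo
      end;
  finally
    Info.Free;
  end;
end.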

You can't compare Linux as a kernel to Windows as a complete operating system. If you wish, you can compare openSUSE or even Android to Windows.

Quote
What is readable for the user (administrator) is not necessarily useful for the programmer. Linux still makes a habit of using text files after all these years, even though text is only useful for displaying the output of a program. For data processing, text is a pain (character encoding, content parsing, etc.). It is an ancient solution. Today, it should absolutely not be used as a way of exchanging data: OS-to-program, program-to-OS, or program-to-program. But it's used in Linux because of the laziness of Linux programmers (and perhaps a moronic Unix sentiment).

Text parsing is a pain only if the output is not standardized, which on Unix it is. For example, the ls command is standardized by POSIX, meaning you can rely on its output always having the same structure.
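
For instance, a Free Pascal sketch of leaning on that fixed "ls -l" column order (mode, links, owner, group, size, then a three-word date and the name); it assumes simple filenames, since names containing spaces would need extra care:

program LsSketch;
{$mode objfpc}{$H+}
uses
  Classes, SysUtils, StrUtils, Process;
var
  Raw: string;
  Lines: TStringList;
  i: Integer;
begin
  if not RunCommand('ls', ['-l', '/etc'], Raw) then
    Halt(1);
  Lines := TStringList.Create;
  try
    Lines.Text := Raw;
    for i := 0 to Lines.Count - 1 do
      // Skip blank lines and the leading "total N" line.
      if (Lines[i] <> '') and (Pos('total', Lines[i]) <> 1) then
        WriteLn(ExtractWord(9, Lines[i], [' ']), ' = ',
                ExtractWord(5, Lines[i], [' ']), ' bytes');
  finally
    Lines.Free;
  end;
end.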

Quote
On the contrary. The existence of APIs and libraries is a blessing for a programmer writing software for a given OS. If it wasn't for the API, the programmer would have to implement every little thing himself. Linux does not have any API. This is one of the main reasons why most companies don't provide drivers or commercial software for Linux.

You know what POSIX is, right? The Portable Operating System Interface defines the APIs of any POSIX-compliant system. So Linux does not just have an API; as a Unix system it has the exact same API as all other Unix systems. Code that sticks to the POSIX standard will run the same on macOS, iOS, Android, BSD, and, hell, even some older versions of Windows (Windows NT actually shipped a certified POSIX subsystem for some time).
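
In Free Pascal that POSIX API is exposed through the BaseUnix unit, so a sketch like the following compiles unchanged on Linux, the BSDs, and macOS (the path is just an example):

program PosixStat;
{$mode objfpc}{$H+}
uses
  BaseUnix;
var
  Info: Stat;
begin
  // FpStat wraps the POSIX stat() call; the record fields
  // (st_size, st_uid, ...) are defined by the standard.
  if FpStat('/etc/hosts', Info) = 0 then
    WriteLn('size=', Info.st_size, ' bytes, owner uid=', Info.st_uid)
  else
    WriteLn('stat failed, errno=', fpgeterrno);
end.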

And this is the key thing that you seem not to understand about Unix. Unix today means exactly one thing: standardization. A Unix system is an operating system that follows the Single UNIX Specification, part of which is the aforementioned POSIX standard. Generally there is a distinction between Unix systems, which follow the whole Single UNIX Specification, and Unixoids, which follow only the POSIX standard, but in both cases they provide a level of standardization that allows application developers to rely on a common set of functionality.
You can always rely on the output of ls, find, ps, etc. to have a certain structure, because this format is standardized by the Open Group and is provided by all conforming operating systems in the same way.

And this standardization follows the Unix philosophy, the guidelines originally developed at Bell Labs to ensure the systems follow a cohesive design. You can rely on the output of programs for reading out system information, because these programs are standardized to be both machine- and human-readable.

Curt Carpenter

  • Sr. Member
  • ****
  • Posts: 396
Re: Best way to parse file.
« Reply #32 on: January 29, 2023, 09:15:52 pm »
Quote
Is a car, a plane, or even an oven or a stool based on some philosophy? Of course not.

Oh my.

Of course a car, a plane, an oven, and a stool are "based on some philosophy." It has been recognized as such in the Western tradition since Aristotle proposed the four causes: material, formal, efficient, and final. Much depends on the question we bring to the oven, or stool, or operating system.

VisualLab

  • Sr. Member
  • ****
  • Posts: 290
Re: Best way to parse file.
« Reply #33 on: January 30, 2023, 01:20:47 am »
The operating system is a technical product. Is a car, a plane, or even an oven or a stool based on some philosophy? Of course not. The term "xyz philosophy" is vague. It's a ploy that tries to justify the shortcomings of a product. Technical products (even ones as simple as a stool) are created from a set (list) of design guidelines (assumptions) made before or during their creation (e.g. when modifying a prototype).
What are you talking about? There is no product philosophy, just a set of guidelines for developing the product? Those are two phrases that mean exactly the same thing. A design philosophy is nothing other than a set of guidelines.

The term "philosophy of something" is vague, bland, and without content (e.g. "a poet's philosophy of life", meaning what?). It is used colloquially so often that it is difficult to consider it useful. On the other hand, "a set of design assumptions" seems to me a clearer formulation.

I have a bias against this phrase (i.e. "philosophy of something") because it is repeated like a mantra by Linux fanatics over and over (in other words, it is overused). If on some Linux forum an ordinary user asks why something in Linux has not been fixed or implemented in a more convenient way, that person is immediately pacified with the statement "you do not understand the philosophy of Linux". And yet it would be enough to give an honest answer: "because that's how the creator/programmer designed it". That is my reason for criticizing the term.

And for Unix there is the Unix philosophy: a set of design guidelines that were developed by Ken Thompson and later codified within Bell Labs for the development of Unix.

Yes, Unix design guidelines have been known to me for a long time. They used to be OK. Now they are quite archaic (and have been for at least 30 years).

First, Unix is not almost extinct. iPhones make up around 20% of the consumer OS share, macOS another 5%; those are real Unix systems whose source code is a direct descendant of the original Unix code. Windows, for comparison, is at about 30%, so there are almost as many Unix systems in use today as there are Windows systems.

iPhones have iOS installed, and that is not Unix. Mac OS X is not Unix either. Yes, somewhere deep in the bowels of these systems there are some leftovers from Unix (BSD), but iOS and Mac OS X are not Unix. Part of Mac OS X is also Mach, and that kernel has nothing to do with Unix. These are strained arguments. The only Unixes left are probably AIX (IBM) and Solaris (Oracle, as a legacy from Sun). And these are now niche. So, for the sake of discussion, I can agree that the "golden era of Unix" is gone forever :)

Second, the claim that Linux implements solutions that Unix did not have is also wrong. Some of the most essential parts of current Linux systems, things like Bash, grep, cat, System V init, etc., which are part of the operating system (as the OS is nothing more than a set of programs, like the Explorer, Task Manager, or system settings in Windows), have been around since Unix.
This thread is about parsing netstat output, and netstat is literally a tool that predates Linux and stems directly from the original Unix world.

After all, I didn't write that there was no "bash, grep, cat" in Unix and that Linux introduced them. I made it clear that today's computers have components and peripherals that didn't exist in Unix's heyday. Thus, it is not wrong to say that Linux implements solutions that Unix did not. And it's obvious that Linux developers want to equip it (with varying degrees of success) with support for hardware that is also supported by Windows.

Linux is not an operating system; Linux is a kernel. It's as much an operating system as the NT kernel in Windows is. If you have ever used the Windows API you might know that there are different kinds of functions: the officially documented functions like CreateFile, as well as the Nt or Zw versions (like NtCreateFile). The latter are the kernel functions, and they are explicitly not to be used by application developers; they are undocumented and may change at any point.

Don't explain to me what I wanted to write ("what I wrote, I wrote"). If I had meant the Linux kernel, I would have written "Linux kernel". I'm not an IT puritan like Stallman. That Linux is not cohesively designed, but a "jumble of odds and ends" without a coherent concept, is another matter. Well, but that's what the "father" of Linux and his "godparents" came up with.

If someone wants to use functions with "Nt" or "Zw" prefixes in Windows, that's their business. Someday it may surprise them.

So yes, Linux the kernel does not provide a standardized way to access certain information about the external hardware, but the NT kernel doesn't either. Windows provides a stable API only as part of the whole OS package, and the same can be said about Linux distributions. So while there might be differences in the pseudofiles between Debian, SUSE, and Arch, those are also different operating systems. Within each of these distros, access to the information is usually quite stable: openSUSE still uses the same pseudofile structure as it did 20 years ago.

That's the problem. There is one way in Windows, because there is an API (better or worse, but it exists). Linux is a mess. And to be clear, Windows is also not without flaws, but it has fewer of them than Linux ("for a hunchback, it's quite handsome").

You can't compare Linux as a kernel to Windows as a complete operating system. If you wish, you can compare openSUSE or even Android to Windows.

After all, I didn't compare the kernels of these systems, but the entire systems. And yes, despite all this "Linux philosophy", it would be nice to have one consistent, well-designed API for it.

You know what POSIX is, right? The Portable Operating System Interface defines the APIs of any POSIX-compliant system. So Linux does not just have an API; as a Unix system it has the exact same API as all other Unix systems. Code that sticks to the POSIX standard will run the same on macOS, iOS, Android, BSD, and, hell, even some older versions of Windows (Windows NT actually shipped a certified POSIX subsystem for some time).

I'll probably surprise you, but yes, I know what POSIX is, and I have for quite a long time :) And yes, it was created when there were already many different variants of Unix. Each vendor of a particular version was desperate to show that their version had something the others didn't. We all know it well: fragmentation. But Linux fragmentation outshines Unix fragmentation. And the best part is that apparently POSIX is still most fully implemented by QNX, which is not a Unix.

And this is the key thing that you seem not to understand about Unix. Unix today means exactly one thing: standardization. A Unix system is an operating system that follows the Single UNIX Specification, part of which is the aforementioned POSIX standard. Generally there is a distinction between Unix systems, which follow the whole Single UNIX Specification, and Unixoids, which follow only the POSIX standard, but in both cases they provide a level of standardization that allows application developers to rely on a common set of functionality.
You can always rely on the output of ls, find, ps, etc. to have a certain structure, because this format is standardized by the Open Group and is provided by all conforming operating systems in the same way.

On the contrary, I understand everything perfectly, particularly the fragmentation in the Unix and Linux world. It's just that I don't have a utopian sentiment and nostalgia for Unix.

And this standardization follows the Unix philosophy, the guidelines originally developed at Bell Labs to ensure the systems follow a cohesive design. You can rely on the output of programs for reading out system information, because these programs are standardized to be both machine- and human-readable.

The same standardization is provided by Microsoft for its system. It's just that I have not come across the term "Windows philosophy". If I met such a term, I would mock it too.

It is known that access to hardware and services is provided through the intermediation of the OS. And this is undeniably a good solution. If a developer needs information about hardware or system services, who should provide access to it? The OS, of course. And for this you need some unified method (i.e. the aforementioned standard). It should be well designed, consistent, and complete. And here's the problem. Unfortunately, the devil is always in the details. I am of the opinion that what Linux provides cries to heaven for vengeance. And this is where our points of view differ.
« Last Edit: January 30, 2023, 01:25:22 am by VisualLab »

Curt Carpenter

  • Sr. Member
  • ****
  • Posts: 396
Re: Best way to parse file.
« Reply #34 on: January 30, 2023, 06:08:46 am »
Quote
The term "philosophy of something" is vague, bland, and without content

Sadly true, I think, for some people, although classifying them as "somethings" has well known philosophical pitfalls (which admittedly will be taken as vague, bland and without content in some -- best avoided! -- quarters  :).

A "linux philosophy" (I would expect) captures the values and priorities one ascribes to "linuxness" (as an instance of "operating systemness"), just as a "stool philosophy" reflects one's values and priorities with respect to "stoolness" as an instance of "furnitureness." And so on. One doesn't need to design stools to have a stool philosophy. It's useful to have some experience of stool-sitting, of course.

Warfley

  • Hero Member
  • *****
  • Posts: 1499
Re: Best way to parse file.
« Reply #35 on: January 30, 2023, 10:46:21 am »
The term "philosophy of something" is vague, bland, and without content (e.g. "a poet's philosophy of life", meaning what?). It is used colloquially so often that it is difficult to consider it useful. On the other hand, "a set of design assumptions" seems to me a clearer formulation.
It depends on what you are talking about. Of course there are many empty phrases like the "poet's philosophy of life", but in this specific case I was talking about the Unix philosophy, which is a widely used term with a very well-defined meaning: it refers to the fixed and codified design guidelines for Unix systems as written down in the 70s at Bell Labs:
Quote
1.    Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
2.    Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
3.    Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
4.    Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.

So it's neither vague nor without content; it's a specific term for a very specific set of guidelines.
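
Guideline 2 is easy to demonstrate even from Free Pascal: a program that reads stdin and writes matching lines to stdout can be dropped into any pipeline. A minimal sketch (the program name and usage line are made up for the example):

program MyGrep;
{$mode objfpc}{$H+}
uses
  SysUtils;
var
  Line: string;
begin
  // Usage: mygrep <pattern>, e.g.:  netstat -an | mygrep LISTEN
  if ParamCount < 1 then
  begin
    WriteLn(StdErr, 'usage: mygrep <pattern>');
    Halt(1);
  end;
  // Read stdin line by line; echo only lines containing the pattern.
  while not EOF(Input) do
  begin
    ReadLn(Line);
    if Pos(ParamStr(1), Line) > 0 then
      WriteLn(Line);
  end;
end.

Because it neither clutters its output nor insists on interactive input, both a human and the next program in the pipe can consume it.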

Quote
Yes, Unix design guidelines have been known to me for a long time. They used to be OK. Now they are quite archaic (and have been for at least 30 years).
And yet they are kind of in a renaissance right now. For the past 10-15 years there has been a shift, at least among developer and sysadmin tools, away from GUIs and back to command-line tools. In the 90s and 2000s it was common to rely heavily on GUI tools, while command-line tools were often just an afterthought and usually did not have a standardized interface (e.g. Delphi shipped command-line tools only in the Pro version, Visual Studio and .NET provided them only as separate add-ons, etc.). If we look at the widely used languages and tools of today (JavaScript with npm, Python with pip, Rust with cargo, Swift with the Swift Package Manager, etc.), they have all shifted back to Unix-style command-line tools.

So much for these guidelines being archaic: for some reason, all of the most-used developer tooling prefers them over the more modern options introduced in the 90s and 2000s.

iPhones have iOS installed, and that is not Unix. Mac OS X is not Unix either. Yes, somewhere deep in the bowels of these systems there are some leftovers from Unix (BSD), but iOS and Mac OS X are not Unix. Part of Mac OS X is also Mach, and that kernel has nothing to do with Unix. These are strained arguments. The only Unixes left are probably AIX (IBM) and Solaris (Oracle, as a legacy from Sun). And these are now niche. So, for the sake of discussion, I can agree that the "golden era of Unix" is gone forever :)
First, Apple is quite notorious for using a lot of open-source code in their products (and not contributing back), the most common example actually being modern-day FreeBSD. For example, iOS's isolation of application runtimes is mostly just a copy and paste of FreeBSD's jails.

Apple actually has documentation detailing what they took from FreeBSD: **s://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/BSD/BSD.html
Quote
The BSD component provides the following kernel facilities:
*    processes and protection
    *    host and process identifiers
    *   process creation and termination
    *    user and group IDs
    *    process groups

*    memory management
    *    text, data, stack, and dynamic shared libraries
    *    mapping pages
    *    page protection control

*    POSIX synchronization primitives

*    POSIX shared memory

*    signals
    *    signal types
    *    signal handlers
    *    sending signals

*    timing and statistics
    *    real time
    *    interval time

*    descriptors
    *    files
    *    pipes
    *    sockets

*    resource controls
    *    process priorities
    *    resource utilization and resource limits
    *    quotas

*    system operation support
    *    bootstrap operations
    *    shut-down operations
    *    accounting

Those aren't leftovers; this is most of the core operating-system functionality.

Aside from this, since 2007 Apple has been certifying all of their products under the Single UNIX Specification (UNIX 03, to be precise). So even without the direct lineage to the original Unix system, it is by definition a Unix system, because Unix (at least for the past 30 years) does not describe a historical system but a set of standards.

Quote
After all, I didn't write that there was no "bash, grep, cat" in Unix and that Linux introduced them. I made it clear that today's computers have components and peripherals that didn't exist in Unix's heyday. Thus, it is not wrong to say that Linux implements solutions that Unix did not. And it's obvious that Linux developers want to equip it (with varying degrees of success) with support for hardware that is also supported by Windows.
But still, I can access this hardware through mostly the same tools listed there. It's true that there is much more hardware available today, but e.g. I have a USB Bluetooth adapter that I can access with just cat and bash through its serial interface.

It's actually quite amazing how much can be done with just the classical tools, thanks to the open design of Unix and its pseudofilesystem.

Quote
Don't explain to me what I wanted to write ("what I wrote, I wrote"). If I had meant the Linux kernel, I would have written "Linux kernel". I'm not an IT puritan like Stallman. That Linux is not cohesively designed, but a "jumble of odds and ends" without a coherent concept, is another matter. Well, but that's what the "father" of Linux and his "godparents" came up with.

If someone wants to use functions with "Nt" or "Zw" prefixes in Windows, that's their business. Someday it may surprise them.
Yes, but you can't on the one hand say that accessing hardware information on Linux is hard because there is no universal way of doing it, and on the other hand ignore that different distributions have their own ways of doing it. It's just that there is no standardized way across distributions.

But comparing that to Windows is a faulty comparison, because there is just one Windows operating system (although in different versions), made by only one company. If there were multiple NT distributions, we would probably see the exact same problem.

This is the problem when you compare an ecosystem of dozens of different operating systems to an ecosystem with just one.

After all, I didn't compare the kernels of these systems, but the entire systems. And yes, despite all this "Linux philosophy", it would be nice to have one consistent, well-designed API for it.

[...]


The same standardization is provided by Microsoft for its system. It's just that I have not come across the term "Windows philosophy". If I met such a term, I would mock it too.

No, it's not. Windows at most provides a quasi-standard. It's similar with programming languages: compare a standardized language like C with a quasi-standard like Delphi's Object Pascal. The difference between them is transparency and accountability.

A standard is created by a separate committee of domain experts; in the case of Unix and POSIX, these are the IEEE and the Open Group. The implementers must then follow that standard. The standardization is also open, meaning that alongside the standard, notes and discussions are published to explain the reasons for all the changes. At every point you can look at the standard and understand why a certain decision was made.

Quasi-standards have none of this. They happen when the major (or, in the case of Windows, the only) implementer decides a way to do something, and that is then the way it is done.
For example, Windows went for UTF-16 (which isn't even true UTF-16 but a slight deviation, and therefore isn't even conforming to the Unicode standard), which resulted in two versions of all the older API functions, the A and W versions; for newer APIs they did not even provide the A version anymore, only the W version. But later they noticed that UTF-16 was a terrible idea, and now they are upgrading the A (ANSI) versions of the APIs to be usable with UTF-8, and also trying to add the A version for all those APIs where in the past they only added the W version.
This creates the absolute mess we have today, where some APIs are only ANSI, some are UTF-8, some are only available in the W version, some only in the A version, etc.
And all the accountability and reasoning we get for this is: "Well, that's just how Windows implemented it." This does not happen with standardization.
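
To make the A/W split concrete, here is a small Free Pascal sketch against the Win32 API (the path is just an example): CreateFileW takes UTF-16, while the older CreateFileA takes whatever the process ANSI codepage happens to be, which newer Windows 10 builds can opt into being UTF-8 via the application manifest:

program AWDemo;
{$mode objfpc}{$H+}
uses
  Windows;
var
  H: THandle;
begin
  // The W (wide) variant of the call; an A (ANSI) variant with the
  // same parameters exists alongside it.
  H := CreateFileW('C:\temp\demo.txt', GENERIC_READ, 0, nil,
                   OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  if H = INVALID_HANDLE_VALUE then
    WriteLn('open failed, error=', GetLastError)
  else
    CloseHandle(H);
end.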

Another great example, since we are on a Pascal forum: why can you take pointers to read-only properties? Simply because Delphi implements it this way. Is that a good idea? Probably not. Would anyone support it if it were brought to a standards committee for a decision? Probably not. But we have it, because quasi-standards do not provide any transparency or accountability for decisions, as standards do.

This is a very big difference. Unix describes a set of standards, the decision process behind them is public, and there is a defined way of giving suggestions to the committee. Windows is at the whims of Microsoft, who can basically do whatever they want, and this shows in the mess they created with their API.


Quote
On the contrary, I understand everything perfectly, particularly the fragmentation in the Unix and Linux world. It's just that I don't have a utopian sentiment and nostalgia for Unix.
Then why are you always talking about Unix as if it were an ancient system? Unix is a set of standards and specifications, the latest version of which came out in 2018. This has nothing to do with nostalgia; it's just a present-day standardization effort.

Quote
It is known that access to hardware and services is provided through the intermediation of the OS. And this is undeniably a good solution. If a developer needs information about hardware or system services, who should provide access to it? The OS, of course. And for this you need some unified method (i.e. the aforementioned standard). It should be well designed, consistent, and complete. And here's the problem. Unfortunately, the devil is always in the details. I am of the opinion that what Linux provides cries to heaven for vengeance. And this is where our points of view differ.
A standard can never be complete; there will always be room for implementation specifics (otherwise the standard's authors would need the ability to see the future). On Windows you have no standard; everything is implementation-specific. On Linux systems you usually have the POSIX standard, with some additional things being implementation-specific. But again, there are dozens of different Linux-based operating systems available, meaning there is much more variety. So sure, if you compare the whole Linux world to the single Windows implementation, that skews the image.
But if you take a specific Linux operating system, you will find this is not as much of an issue as you make it out to be. Take Arch: most of the ways to interact with the hardware are well documented in the Arch Wiki. Sure, that is not applicable to Debian, but why should it be? These are two different operating systems. I also don't expect Apple's Cocoa API to run on Windows.

And in the end, if we compare some standardization with some implementation specifics on Unix against no standardization and everything being implementation-specific on Windows, I think it should be clear that more standardization is generally a good thing.


Also, one thing I want to add here: using the Windows API is only easy if you are using a language that is either relatively close to C (like Pascal) or something that was specifically developed for Windows (like .NET).
Even with imperative languages that are a bit different it already gets painful; try using the Windows API with Java, Python, or JavaScript. If you go to different paradigms, like non-imperative languages such as Haskell, Lisp, or Prolog, you have to write a complete wrapper to use the Windows API.

But you know what always works easily in pretty much any language? Parsing text files. You can use the Unix pseudofiles and utility outputs even with esoteric languages such as Brainfuck, where it is literally impossible to use the Windows API.
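
To stay on topic: the connection data netstat prints also sits in the pseudofile /proc/net/tcp on Linux, which anything that can read a text file can parse. A Free Pascal sketch; the hex decoding assumes a little-endian host (as on x86), where the kernel prints the IP bytes reversed:

program ProcNetTcp;
{$mode objfpc}{$H+}
uses
  Classes, SysUtils, StrUtils;

function HexToIpPort(const S: string): string;
var
  Addr: Cardinal;
  Port: Word;
begin
  // "0100007F:0050" -> "127.0.0.1:80"
  Addr := Cardinal(StrToInt64('$' + Copy(S, 1, 8)));
  Port := Word(StrToInt('$' + Copy(S, 10, 4)));
  Result := Format('%d.%d.%d.%d:%d',
    [Addr and $FF, (Addr shr 8) and $FF,
     (Addr shr 16) and $FF, (Addr shr 24) and $FF, Port]);
end;

var
  Rows: TStringList;
  i: Integer;
begin
  Rows := TStringList.Create;
  try
    Rows.LoadFromFile('/proc/net/tcp');
    // Skip the header row; field 2 is the local end, field 3 the remote.
    for i := 1 to Rows.Count - 1 do
      WriteLn(HexToIpPort(ExtractWord(2, Rows[i], [' '])), ' -> ',
              HexToIpPort(ExtractWord(3, Rows[i], [' '])));
  finally
    Rows.Free;
  end;
end.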
« Last Edit: January 30, 2023, 12:36:39 pm by Warfley »

VisualLab

  • Sr. Member
  • ****
  • Posts: 290
Re: Best way to parse file.
« Reply #36 on: January 30, 2023, 08:22:51 pm »
...I was talking about the Unix philosophy, which is a widely used term with a very well-defined meaning: it refers to the fixed and codified design guidelines for Unix systems as written down in the 70s at Bell Labs:
Quote
1.    Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
2.    Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
3.    Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
4.    Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.

So it's neither vague nor without content; it's a specific term for a very specific set of guidelines.

You're right, those are some specific guidelines. Although very general, they are still a base. But were these assumptions unique at the time the first version of Unix was created? Because Unix wasn't the only one (despite its name :), I mean there were other OSes back then too. Could those other OSes have had similar (or the same) assumptions?

And a digression regarding guideline (1): if this approach were strictly followed, Delphi or Lazarus would never have been created :) And Linux too :) This assumption contradicts the iterative approach to software development. And what is the development of Unix if not the negation of guideline (1)? Perhaps at the dawn of Unix development this seemed like a good solution (trauma after Multics?). But then apparently it was abandoned. And quite rightly so.

Quote
Yes, Unix design guidelines have been known to me for a long time. They used to be OK. Now they are quite archaic (and have been for at least 30 years).
And yet they are kind of in a renaissance right now. For the past 10-15 years there has been a shift, at least among developer and sysadmin tools, away from GUIs and back to command-line tools. In the 90s and 2000s it was common to rely heavily on GUI tools, while command-line tools were often just an afterthought and usually did not have a standardized interface (e.g. Delphi shipped command-line tools only in the Pro version, Visual Studio and .NET provided them only as separate add-ons, etc.). If we look at the widely used languages and tools of today (JavaScript with npm, Python with pip, Rust with cargo, Swift with the Swift Package Manager, etc.), they have all shifted back to Unix-style command-line tools.

So much for these guidelines being archaic: for some reason, all of the most-used developer tooling prefers them over the more modern options introduced in the 90s and 2000s.

But the reason may be simple. Creating software with a GUI is much more time-consuming and requires more experience and knowledge; that's probably obvious. Second, command-line tools are indeed easier to use. For a person creating a simple project, this may be enough. But for large projects, "tapping at the keyboard" is too time-consuming. For me, tools like an IDE (or even RAD) are not a deterrent. But many times I've seen people get completely lost in them; they simply cannot handle them.

First, Apple is quite notorious for using a lot of open-source code in their products (and not contributing back).

The vast majority of corporations do this. If they sometimes share something, it's only out of self-interest (e.g. Visual Studio CE, Visual Studio Code).

Apple actually has documentation detailing what they took from FreeBSD: **s://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/BSD/BSD.html
Quote
The BSD component provides the following kernel facilities:
*    processes and protection
    *    host and process identifiers
    *   process creation and termination
    *    user and group IDs
    *    process groups

*    memory management
    *    text, data, stack, and dynamic shared libraries
    *    mapping pages
    *    page protection control

*    POSIX synchronization primitives

*    POSIX shared memory

*    signals
    *    signal types
    *    signal handlers
    *    sending signals

*    timing and statistics
    *    real time
    *    interval time

*    descriptors
    *    files
    *    pipes
    *    sockets

*    resource controls
    *    process priorities
    *    resource utilization and resource limits
    *    quotas

*    system operation support
    *    bootstrap operations
    *    shut-down operations
    *    accounting

Those aren't leftovers; this is most of the core operating-system functionality.

And aren't these concepts (solutions) a bit older than Unix? Or at least some of them? Unix was not created in a computing void; it was just one of many OSes. Didn't its success have something to do with AT&T's power at the time? And why did the US antitrust office go after this company at one point?

Quote
After all, I didn't write that there was no "bash, grep, cat" in Unix and that Linux introduced them. I made it clear that today's computers have components and peripherals that didn't exist in Unix's heyday. Thus, it is not wrong to say that Linux implements solutions that Unix did not. And it's obvious that Linux developers want to equip it (with varying degrees of success) with support for hardware that is also supported by Windows.
But still, I can access this hardware through mostly the same tools listed there. It's true that there is much more hardware available today, but e.g. I have a USB Bluetooth adapter that I can access with just cat and bash through its serial interface.

It's actually quite amazing how much can be done with just the classical tools, thanks to the open design of Unix and its pseudofilesystem.

And are these tools based on the same code? No, if only because Linus Torvalds would have had a problem in court (copyright). They have been rewritten; only their behavior was supposed to be similar to the tools in Unix. The fact that they support new devices is because their source code is kept updated. Which of course is a plus for Linux.

Yes, but you can't on the one hand say that accessing hardware information on Linux is hard because there is no universal way of doing it, and on the other hand ignore that different distributions have their own ways of doing it. It's just that there is no standardized way across distributions.

But comparing that to Windows is a faulty comparison, because there is just one Windows operating system (although in different versions), made by only one company. If there were multiple NT distributions, we would probably see the exact same problem.

This is the problem when you compare an ecosystem of dozens of different operating systems to an ecosystem with just one.

Access to hardware and services is not difficult, but it is incomplete and annoying. Since everything is overseen by the OS (rightly so), the OS should provide convenient access (an API) to hardware and services. These different ways that Linux distributions use are called fragmentation. It was the same with Unix; that's why POSIX was created (and of course it didn't fully solve the problem). Linux distributions are not different systems; they are one and the same system: Linux. They all have the same things: kernel, X Window/Wayland, filesystem, audio subsystem, etc. The differences are in minor matters. But they are annoying. Even Torvalds complains about it.

And you're absolutely right: if it weren't for Microsoft's dictate, the same would happen with Windows. However, the problem with the annoying elements of Windows (e.g. forced updates that mess up the system and install junk programs from the MS store, tracking users pompously called telemetry, etc.) is not due to the fact that Microsoft is the sole owner of Windows (source code, etc.), but because Microsoft has no real competition. If they had competition, it wouldn't even cross their mind to "fuck" users.

No, it's not. Windows at most provides a quasi-standard. It's similar with programming languages: compare a standardized language like C with a quasi-standard like Delphi's Object Pascal. The difference between them is transparency and accountability.

Yes, you can call it the Windows specification, prepared by Microsoft. But this does not mean that such a specification will be worse (Windows was not created by high-school students, after all). The C language is standardized but still terrible; somehow standardization didn't make it better. So the transparency of the standards committee was of no use. And where is the accountability of the creators of C? The rationale for everything is "well, that's how Ritchie and his colleagues implemented it".

A standard is created by a separate committee of domain experts; in the case of Unix and POSIX, these are the IEEE and the Open Group. The implementers must then follow that standard. The standardization is also open, meaning that alongside the standard, notes and discussions are published to explain the reasons for all the changes. At every point you can look at the standard and understand why a certain decision was made.

Committees (commissions) are not democratic. Committees can also ignore users (and often do). Experts may have motives that are not necessarily substantive, because they are people (and people have different views, needs, goals, and flaws; just think of the committee debating Algol and the disputes between those people). POSIX was written by people delegated from corporations, and the people on such committees represent the needs of those corporations. The corporations reached an agreement for themselves, not for users or developers. The difference between one corporation and such a committee is that in the latter case there is a possibility that a "camel" will be created (a camel being a horse designed by a committee). Somehow I don't see Torvalds allowing development of the Linux kernel by some committee (it's his project, so I guess he has the right).

Quasi-standards have none of this. They happen when the major (or, in the case of Windows, the only) implementer decides a way to do something, and that is then the way it is done.
[...]
And all the accountability and reasoning we get for this is: "Well, that's just how Windows implemented it." This does not happen with standardization.

Committees also decide how something is done; the difference is in the number of entities. Sometimes it's better not to make rotten compromises (because, for example, 2 out of 7 participants may want something contrary to the rest, and the majority isn't necessarily right). Besides, Unix fragmentation came about because individual corporations said "that's how we implemented it in our version of Unix". And then: "well, that's how individual Linux distributions have implemented it".

Unix describes a set of standards, the decision process behind them is public, and there is a defined way of giving suggestions to the committee. Windows is at the whims of Microsoft, who can basically do whatever they want, and this shows in the mess they created with their API.

Before there was POSIX, it was: "Unix is AT&T's whim." And then each of the big Unix corporations created its own version of Unix. Sometimes it's better to have one place where decisions are made; otherwise it gets messy. Technical projects need a manager, otherwise chaos ensues. And I don't think the Windows API is a huge mess (yes, it could be better, but it's still better than Linux). Just look at Lazarus: which version of it has more problems, the one for Windows or the one for Linux?

Quote
On the contrary, I understand everything perfectly, particularly the fragmentation in the Unix and Linux world. It's just that I don't have a utopian sentiment and nostalgia for Unix.
Then why are you always talking about Unix as if it were an ancient system? Unix is a set of standards and specifications, the latest version of which came out in 2018. This has nothing to do with nostalgia; it's just a present-day standardization effort.

Because it is the truth. Unix was born a long time ago, in a different computing age. Yes, it was developed further, and the versions that are still in use are being maintained, perhaps improved. But make no mistake, its best days are over. It has largely been replaced by Linux, which does not require expensive licensing fees and whose source code is available (if someone has the patience to poke around in it).

Quote
It is known that access to hardware and services is provided through the intermediation of the OS. And this is undeniably a good solution. If a developer needs information about hardware or system services, who should provide access to it? The OS, of course. And for this you need some unified method (i.e. the aforementioned standard). It should be well designed, consistent, and complete. And here's the problem. Unfortunately, the devil is always in the details. I am of the opinion that what Linux provides cries to heaven for vengeance. And this is where our points of view differ.
A standard can never be complete; there will always be room for implementation specifics (otherwise the standard's authors would need the ability to see the future). On Windows you have no standard; everything is implementation-specific. On Linux systems you usually have the POSIX standard, with some additional things being implementation-specific. But again, there are dozens of different Linux-based operating systems available, meaning there is much more variety. So sure, if you compare the whole Linux world to the single Windows implementation, that skews the image.
But if you take a specific Linux operating system, you will find this is not as much of an issue as you make it out to be. Take Arch: most of the ways to interact with the hardware are well documented in the Arch Wiki. Sure, that is not applicable to Debian, but why should it be? These are two different operating systems. I also don't expect Apple's Cocoa API to run on Windows.



And in the end, if we compare some standardization with some implementation specifics on Unix against no standardization and everything being implementation-specific on Windows, I think it should be clear that more standardization is generally a good thing.

I do not deny that standards should be updated. This is obvious.

The fact that the Windows technical specification (WinAPI) is not approved by some sacred committee does not detract from its usefulness. Sometimes it's better that no committee "dips its fingers" into the technical specification and "creates a camel instead of a horse". You keep forgetting that POSIX was created after Unix had fragmented, not before Unix was created. Besides, I see no reason why a project should be inferior just because it doesn't use someone else's guidelines. It can be worse, but it doesn't have to be.

And I see no reason to treat Linux distributions as completely different OSes. They are one and the same OS. They are all based on the same kernel, file system, X Window, etc. The differences come from little things. But those little things are annoying.

Also, one thing I want to add here: using the Windows API is only easy if you are using a language that is either relatively close to C (like Pascal) or something that was specifically developed for Windows (like .NET).
Even with imperative languages that are a bit different it already gets painful; try using the Windows API with Java, Python, or JavaScript. If you go to different paradigms, like non-imperative languages such as Haskell, Lisp, or Prolog, you have to write a complete wrapper to use the Windows API.

Yes, I agree completely. But this also results from the exoticism of languages other than the imperative ones inherited from Algol. And it applies equally to Linux, whose components (e.g. the kernel, system services, etc.) are mostly written in C. Though as for JavaScript or Python, maybe it's a good thing it's that way :)

But you know what always works easily in pretty much any language? Parsing text files. You can use the Unix pseudofiles and utility outputs even with esoteric languages such as Brainfuck, where it is literally impossible to use the Windows API.

Yes, that's true. It's just a pretty old method, and not so painless when you consider character encoding and other nuances. Parsing text files can sometimes be a real headache.



It seems to me that our divergent views on Unix, Linux, and related issues result from different expectations and needs regarding the features and capabilities offered by OSes.

Warfley

  • Hero Member
  • *****
  • Posts: 1499
Re: Best way to parse file.
« Reply #37 on: January 30, 2023, 11:59:52 pm »
And a digression regarding guideline (1): if this approach were strictly followed, Delphi or Lazarus would never have been created :) And Linux too :) This assumption contradicts the iterative approach to software development. And what is the development of Unix if not the negation of guideline (1)? Perhaps at the dawn of Unix development this seemed like a good solution (trauma after Multics?). But then apparently it was abandoned. And quite rightly so.
There has actually been quite an interesting public debate on different mailing lists between Torvalds and Tanenbaum about exactly this topic: whether a monolithic kernel like Linux is preferable to a micro- or hybrid-kernel design. I am no operating-system expert myself, even though the topic interests me quite a lot, but I must say I also think the monolithic approach of Linux is definitely a source of many of its problems, namely that driver support has been one of the biggest problems of Linux since forever. E.g. since the last kernel update on my openSUSE, my network driver has been working extremely unreliably, often breaking the network connection every 5-10 minutes.
This is a problem that is arguably handled much better on microkernel or hybrid systems (such as Windows).

But the reason may be simple. Creating software with a GUI is much more time-consuming and requires more experience and knowledge; that's probably obvious. Second, command-line tools are indeed easier to use. For a person creating a simple project, this may be enough. But for large projects, "tapping at the keyboard" is too time-consuming. For me, tools like an IDE (or even RAD) are not a deterrent. But many times I've seen people get completely lost in them; they simply cannot handle them.
There is also another reason: automation. If you build your program with only command-line tools, it can easily be built and deployed automatically. This is something where Lazarus is severely lacking; I myself have spent dozens of hours building the tooling required to make my Lazarus projects devops-ready.
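
For what it's worth, Lazarus does ship a headless builder, lazbuild, which at least covers the compile step of such a pipeline (the build mode name below is just an example and must exist in the .lpi):

lazbuild --build-mode=Release myproject.lpi

Everything beyond compiling (packaging, deployment, etc.) still has to be scripted around it.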

And this is the core of the aforementioned Unix philosophy (or guidelines, however you want to call them). If you build your tools in a way that they can be used by both a human and a machine, you make this possible without having to sink hours upon hours into getting it to work yourself.

Also, I would not subscribe to the idea that large projects are better served by a GUI. I have worked on large open-source projects both in Java with a fully integrated IDE (IntelliJ) and in C++ with only command-line tools (vim + git + cmake, etc.; later I moved to emacs as the editor), for nearly one and over two years respectively. I managed to use both quite well, but in terms of pure productivity I would say the C++ toolchain was better, mostly because I was already in the console, which made the console tools easier to reach, while in the IDE everything needs to be clicked multiple times, which takes a lot of time. But overall I think the two were not that different with respect to efficiency and usability.

One thing that is easier with GUIs is building your toolchain so the application can grow organically. Working on an existing cmake file is easy enough, but building one from scratch can be quite tedious. There is a reason why for all of my C and C++ projects I basically copy and paste the build system from my previous project with just small adjustments.
But that's just initial overhead; once it is running, maintaining it is no more difficult than with a GUI.

Quote
And aren't these concepts (solutions) a bit older than Unix? Or at least some of them? Unix was not created in a computing void; it was just one of many OSes. Didn't its success have something to do with AT&T's power at the time? And why did the US antitrust office go after this company at one point?
That's not the point. All of those things are parts of the macOS (and iOS) system that are taken directly from modern FreeBSD. Apple has to disclose this due to the BSD license agreement, which is how we know this code is still in there; if it weren't, Apple would be the first to remove these mentions from their website.

The point I was making is that the macOS and iOS system is, even by the strictest definition, a Unix: it's based on BSD and contains many features from modern FreeBSD verbatim.

Quote
And are these tools based on the same code? No, if only because Linus Torvalds would have had a problem in court (copyright). They have been rewritten; only their behavior was supposed to be similar to the tools in Unix. The fact that they support new devices is because their source code is kept updated. Which of course is a plus for Linux.
Probably a lot of the code is quite reusable, so I would be surprised if there were that much new stuff in there. But that's not the point; the point is that these tools still follow the same design. It's a system and a toolchain whose design has proven useful for nearly 50 years now. Sure, some tools like grep got a bunch of new functions, but at its core (and from a user's perspective) it's still the same tool that was written by Ken Thompson in the 70s.

Quote
Access to hardware and services is not difficult, but it is incomplete and annoying. Since everything is overseen by the OS (rightly so), the OS should provide convenient access (an API) to hardware and services. These different ways that Linux distributions use are called fragmentation. It was the same with Unix; that's why POSIX was created (and of course it didn't fully solve the problem). Linux distributions are not different systems; they are one and the same system: Linux. They all have the same things: kernel, X Window/Wayland, filesystem, audio subsystem, etc. The differences are in minor matters. But they are annoying. Even Torvalds complains about it.
Actually, when it comes to accessing hardware information, Linux distributions can differ widely, because this information is often provided through pseudofiles, and those pseudo-filesystems are laid out quite differently on different systems (and this is really annoying, tbh).
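For illustration, a small Free Pascal sketch that pulls the CPU model out of such a pseudofile. /proc/cpuinfo and its "model name" field are reasonably common on x86, but that is exactly the kind of assumption that can break between kernels, architectures and distributions:

Code: [Select]
program cpumodel;
{ Read a Linux pseudofile like an ordinary text file and extract one field. }
uses
  SysUtils;
var
  F: TextFile;
  Line: string;
begin
  AssignFile(F, '/proc/cpuinfo');  // assumed location; not guaranteed everywhere
  Reset(F);
  try
    while not EOF(F) do
    begin
      ReadLn(F, Line);
      // the "model name" line carries the human-readable CPU description
      if Pos('model name', Line) = 1 then
      begin
        WriteLn(Trim(Copy(Line, Pos(':', Line) + 1, MaxInt)));
        Break;
      end;
    end;
  finally
    CloseFile(F);
  end;
end.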

Quote
And you're absolutely right: if it weren't for Microsoft's dictate, the same would happen with Windows. However, the problem with the annoying parts of Windows (e.g. forced updates that mess up the system and install junk programs from the MS store, tracking of users pompously called telemetry, etc.) is not due to Microsoft being the sole owner of Windows (source code, etc.) but to Microsoft having no real competition. If they had competition, it wouldn't even cross their mind to "fuck" users.
This is quite interesting, because here I think you have it completely the wrong way around. The reason MS implemented all those features is exactly the competition.
If Apple hadn't shown that locking down the system is viable, Microsoft wouldn't have done it. App stores are also a new phenomenon: previously it was not possible to monetize software usage at the OS level; now it is. By locking down the system and taking a 30% cut of all transactions through their (mandatory) store, Apple has demonstrated that this is a feasible market model.
Without the competition from the mobile market, Microsoft would not have added all these mobile-adjacent features. Microsoft sees what the competition is doing, sees that it works, and copies their methods.
But I must admit that Microsoft is especially bad at it (the Microsoft Store still fails about half the time, even when I actually want to use it). It's not like they got these ideas in a vacuum, though.

Quote
Yes, you can call it the Windows specification prepared by Microsoft. But that does not mean such a specification will be worse (Windows was not created by high school students, after all). The C language is standardized but still terrible; somehow standardization didn't make it better. So the transparency of the standards committee was of no use. And where is the responsibility of the creators of C? The rationale boils down to: "well, that's how Ritchie and his colleagues implemented it."
I have used C and C++ quite a lot in the past, and first of all, looking at the different versions of C, standardization made C much, much better. An even better example is C++, because C++ gets massive improvements with every major standard version. In 2011 the C++ committee published the C++11 standard, which was a pretty bold update that completely revamped a lot of C++, basically turning it into a different language. Original C++ was a lot like Object Pascal is today; modern C++ bears almost no resemblance to that anymore.
Back then there was the fear that no compiler would implement it and that it would be infeasible, but guess what: after it was standardized, the compilers adopted it, and today you rarely if ever see the old C++ style in use. Since then there have been a few more of these updates, C++14, C++17 and C++20 (no prizes for guessing the years they came out), which continue to massively improve the language.
It's similar with C, where the big update was C99, which massively improved the language.

Both C and C++ are success stories that demonstrate the power of standardization. So yes, standardization made C (and C++) much better, and it is probably the main reason why C++ is still so widely used.

And here is the funny thing: you know who also develops their own C++ dialect, which in the past often added a bunch of non-standard features to the language? Microsoft! And the development community has made its opinion pretty clear. Most of the new Microsoft features were not that well received (such as the .NET integration), while the standardization efforts were very well accepted. Today Microsoft does not add as many non-standard features to C++ as it used to, and the reason is simple: it turned out that an expert committee with user input was just better at designing a language than Microsoft's internal team was.

Quote
Committees (commissions) are not democratic. Committees can also ignore users (and often do). Experts may have different motives, not necessarily substantive ones, because they are people (and people have different views, needs, goals and flaws; just think of the committee debating ALGOL and the disputes between those people). POSIX was written by people delegated from corporations. The people on such committees represent the needs of the corporation. Corporations reached agreement for themselves, not for users or developers. The difference between one corporation and such a committee is that in the latter case there is a possibility that a "camel" will be created (a camel being a horse designed by a committee). Somehow I don't see Torvalds allowing development of the Linux kernel by some committee (his project, so I guess he has the right to do so).
Do I understand your argument correctly: because these committees contain members of corporations who only represent the needs of their corporation, they are worse than one corporation ruling alone, as in the case of Microsoft?

Sorry, you lost me there. Sure, committees might not be perfect, but they are a lot better than having one company hold the sole decision power without having to make any of its discussions and decisions public.

Quote
Committees also decide how to do something. The difference is in the number of entities. Sometimes it's better not to make rotten compromises (because, for example, out of 7 participants, 2 may want something contrary to what the others want, and the majority doesn't necessarily have to be right). Besides, Unix fragmentation came from individual corporations saying "that's how we implemented it in our version of Unix." And then: "well, that's how individual Linux distributions have implemented it."
There is still a big difference between making a decision behind closed doors and making a decision where everyone involved puts their name to it, all the discussions behind those decisions are public, and there are public mailing lists for the general public to weigh in on these issues.
Those aren't even in the same ballpark of transparency and accountability.

Quote
Before there was POSIX, it was: "Unix became an AT&T whim." And then each of the big Unix corporations created its own version of Unix. Sometimes it's better to have one decision-making place; otherwise it gets messy. In technical projects you need a manager, otherwise chaos ensues. And I don't think the Windows API is a huge mess (yes, it could be better, but it's still better than Linux). Just look at Lazarus: which version of it has more problems, the one for Windows or the one for Linux?

[...]

Because it is the truth. Unix was born a long time ago, in a different information age. Yes, it was developed. And those versions that are still in use are maintained, perhaps improved. But make no mistake, its best days are over. It has largely been replaced by Linux, because Linux does not require expensive licensing fees and its source code is available (if someone has the patience to poke around in it).

Yes, Unix was born a long time ago, and the original Unix systems mostly (except for FreeBSD and macOS) died long ago. Unix became a common standard in the 90s, 30 years ago; it has now been a standard for longer than it was an actively used operating system. When someone says Unix today, they mean an operating system that conforms (mostly) to the standards set out by the Single UNIX Specification and POSIX. All this discussion about what happened 40 years ago does not matter; time moved on, and when we talk about Unix today we talk about a set of standards, not the operating system family from 50 years ago.

Quote
The fact that the Windows technical specification (WinAPI) is not approved by some sacred committee does not detract from its usefulness. Sometimes it's better that no committee "dips its fingers" into the technical specification and "creates a camel instead of a horse." You keep forgetting that POSIX was created after Unix fragmented, not before Unix was created. Besides, I see no reason why a project should be inferior just because it doesn't follow someone else's guidelines. It can be worse, but it doesn't have to be.
As I detailed above: yes, committees are not perfect, but they have proven (e.g. with C and C++) to allow for massive improvements over corporate stewardship by the likes of Microsoft.
And the Windows API is an absolute mess (as detailed before). Sure, one of the issues there is not really Microsoft's fault: they need backwards compatibility back to ancient times, which is just part of their business model. But still, when looking at the Windows API it is pretty obvious from the style alone when which functions were added. It feels like a massive patchwork rather than a single coherent specification.
I've read a lot of the POSIX standard; in fact it's one of my go-to documents when developing for Linux (much better than the man pages, exactly because it contains the notes and discussions of the committee describing why something is a certain way), and it definitely is a coherent document.

Quote
And I see no reason to treat Linux distributions as completely different OSes. It is one and the same OS. They are all based on the same things: kernel, file system, X Window, etc. The differences come from little things. But these little things are annoying.
So I have a Linux server and a Linux desktop system, and they are completely different systems. On my desktop I have a graphical desktop environment, which I don't have on my server. My filesystem is ext4 on my server and btrfs on my desktop.
I recently changed my desktop system to a new openSUSE, which now uses systemd like my server, but previously it used SysV init (which I still consider the better choice).
Those systems are as different as two operating systems with the same kernel and following the POSIX spec can get; nearly nothing on them was the same. If these aren't two different operating systems, I don't know what would be.

Quote
Yes, I agree completely. But this also results from the exoticism of languages other than the imperative ones inherited from ALGOL. And it applies equally to Linux, whose components (e.g. kernel, system services, etc.) are mostly written in C. However, as for JavaScript or Python, maybe it's a good thing that it is :)
The amazing thing about Linux is that you do not need C compatibility. The system API for Linux is the C standard (augmented by the POSIX standard), provided by the libc. But if you look at the RTL source code for Linux in FPC, you will see that it does not link against the libc. In fact, you can use the Linux syscalls entirely without linking to the C API.
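A small sketch of what that looks like in practice: on Linux, the Fp* routines in FPC's BaseUnix unit are thin wrappers that go straight to the kernel, so no libc is involved. The /etc/hostname path here is just an assumed example file:

Code: [Select]
program rawread;
{ Read a file through the syscall wrappers in BaseUnix; no libc is linked. }
uses
  BaseUnix;
var
  fd: cint;
  buf: array[0..255] of char;
  n: TSsize;
  s: string;
begin
  // FpOpen/FpRead/FpClose map onto open(2)/read(2)/close(2)
  fd := FpOpen('/etc/hostname', O_RDONLY);  // example path, assumed to exist
  if fd >= 0 then
  begin
    n := FpRead(fd, buf, SizeOf(buf));
    FpClose(fd);
    if n > 0 then
    begin
      SetString(s, PChar(@buf[0]), n);
      Write('hostname: ', s);  // file content already ends in a newline
    end;
  end;
end.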

Even more so, and this is the whole point I was making from the very beginning: it is one of the goals of the POSIX standard to make all features that are available through the code API also available through command-line tools, which can be called by other programs and produce output that is machine-parseable and can be fed into other programs.

Most things you can do on a POSIX system with C, you can also do with bash. And if you can do it with bash, you can do it with Lisp, Python, JavaScript, Haskell, Prolog, whatever you want, because in the end it is nothing more than reading and writing strings from and to files, which is something that every language can do in some form or another.
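And that cuts both ways: your own program becomes a reusable building block simply by reading and writing those strings. A trivial Free Pascal filter as a sketch (the pipeline in the comment is just a hypothetical usage):

Code: [Select]
program upperfilter;
{ Reads lines from stdin and writes them uppercased to stdout,
  so it can sit in an ordinary shell pipeline, e.g.
    cat names.txt | ./upperfilter | sort }
uses
  SysUtils;
var
  Line: string;
begin
  while not EOF(Input) do
  begin
    ReadLn(Line);
    WriteLn(UpperCase(Line));
  end;
end.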

I use many different languages on a daily basis, usually following the "best tool for the job" principle. I also use both Windows and Linux daily, and from my personal experience I can attest that Linux gives you much more freedom in the choice of tools and in the way you want to do something.
On Windows everything works well for Windows-centric solutions such as .NET or Visual C++ or even Lazarus (though even there I sometimes found that some WinAPI functions were not declared and I had to import them myself). And to Microsoft's credit, these do work really well. But as soon as you leave this highly specialized area of tooling, everything is much more effort than it needs to be.
« Last Edit: January 31, 2023, 12:20:55 am by Warfley »

 
