Currently the Lazarus IDE is specified as needing its sources present in order to operate.
Yeah, that's a valid reason; it's done that way currently, which probably means it would require a lot of work to change.
Worse, the exact details are unknown. And it won't be just one-off work; there will be very long-term maintenance attached to it too.
What I argued for here is efficiency for all users as a whole. And that's something scarce these days; cf. the posts above, with reasoning like "HDD space is cheap" and so forth.
But that is not the whole picture: most users DON'T have a problem, which is also why any change must not make the situation worse for them.
Many efficient programs keep their resources, e.g. images and localization files, grouped together inside a single file such as a Zip archive.
But Lazarus is basically a dynamic system, because it can recompile itself with changes. That complicates things immensely: the installation is not read-only but read-write.
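For context, the rebuild that makes the installation read-write is driven by lazbuild; as a minimal illustration (assuming a default setup):

    lazbuild --build-ide=

This recompiles the IDE together with whatever packages are installed, which is why the sources have to be present and writable.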
Still, grouping resources in one archive makes access faster and disk usage more efficient, since a single contiguous file is stored on disk.
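To make the archive idea concrete, here is a minimal Free Pascal sketch using FPC's Zipper unit; the archive name and entry name are made up for illustration:

    program extractres;
    {$mode objfpc}{$H+}
    uses
      Classes, SysUtils, Zipper;
    var
      UnZip: TUnZipper;
      Wanted: TStringList;
    begin
      UnZip := TUnZipper.Create;
      Wanted := TStringList.Create;
      try
        UnZip.FileName := 'resources.zip';  { hypothetical archive: one contiguous file on disk }
        UnZip.OutputPath := GetTempDir;
        Wanted.Add('images/icon.png');      { hypothetical entry name inside the archive }
        UnZip.UnZipFiles(Wanted);           { reads the single archive instead of many scattered files }
      finally
        Wanted.Free;
        UnZip.Free;
      end;
    end.

One archive means one directory entry and one file handle, however many resources it holds.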
I mentioned not only the size of the files but the NUMBER of files. The aforementioned packages don't necessarily need to be compressed to improve efficiency; merely bundling them already reduces the file count.
Yes, that is basically the ISO loopback solution I mentioned. Or toggle the compression bit on Windows (or on a *nix filesystem that supports it). And that is nice and fairly transparent.
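For the record, both can be done with stock tools; the paths below are just placeholders:

    Windows (NTFS compression for an existing tree):
      compact /c /s:"C:\lazarus"

    Linux (read-only loopback mount of an image):
      mount -o loop,ro lazarus.iso /opt/lazarus

Both are transparent to the programs reading the files.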
But yes, both Lazarus and FPC could do with a better separation between read-only and writable directories; the build process currently writes into the source directories instead of into build-specific ones. But somebody has to work on it, and take a very long view, before results become visible.
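The building blocks for that separation partly exist already; as a sketch, the standard FPC switches -FU and -FE redirect the compiler's output away from the source tree (the directory names here are just examples):

    # excerpt for fpc.cfg (hypothetical build directories)
    -FUbuild/units/$fpctarget   # .ppu/.o files go here, not next to the sources
    -FEbuild/bin                # executables go here

What is missing is for the whole Lazarus/FPC build process to use such directories consistently.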
Yes, one can delete the compiler's sources if they are not needed, but it's best not to have them copied in the first place. That makes the hardware happy and the users happy.
But they are needed a lot, so simply cutting them is not a solution.
You guys have probably seen many HDD failures over the years, and inefficient disk usage is a common cause.
That is, thousands upon thousands of operations that could be greatly reduced if only programmers did not fall back on the "space is plentiful nowadays", "HDDs are cheap" et cetera mentality, and instead had the slightest clue of how disks operate and strove to write efficient software.
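To make the cost of scattered files tangible, here is a tiny Free Pascal sketch; the file names are hypothetical, and the loop only measures open/close overhead, which is the part a single archive would eliminate:

    program seekcost;
    {$mode objfpc}{$H+}
    uses
      SysUtils, Classes;
    var
      T0: QWord;
      I: Integer;
      S: TFileStream;
    begin
      T0 := GetTickCount64;
      for I := 1 to 1000 do
      begin
        { hypothetical small files data/file0001.dat .. data/file1000.dat }
        S := TFileStream.Create(Format('data/file%.4d.dat', [I]), fmOpenRead);
        S.Free;
      end;
      WriteLn('1000 opens/closes: ', GetTickCount64 - T0, ' ms');
    end.

On a cold cache each open is a directory lookup plus at least one seek; a single bundled file pays that price once.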
That is minimalism for minimalism's sake; it is never little enough, and that is not a healthy view.
No, SSDs do not solve that problem, as people of that mentality like to think; they wear out over time as well, like everything in this world.
But the wear comes not from simply holding data, but from writing it, and your write-up above doesn't really make that distinction. There is a reason why SSD endurance is measured in terabytes written (TBW), i.e. cumulative whole-disk writes.
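A worked example with assumed figures: a consumer drive rated for 600 TBW that sees 20 GB of writes per day reaches its rating after 600 000 / 20 = 30 000 days, roughly 80 years. Sources that merely sit on the disk add nothing to that count; only copying, updating and rebuilding them do.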
Having the compiler's installer behave as it does when the compiler is installed on its own should be simple enough.
Anyway, these are simple suggestions to improve things.
Random cutting is not a solution, and balanced work is *very* hard and long-term.
So basically you have two options:
- install on a sufficiently large machine, cut the installation down, and reuse that on your more limited machines. If you find other people with the same interest, maybe you can create a specialised distribution for them
- start working on a long-term solution
You can post bug reports with patches, or merge requests, via both GitLab projects.