Author Topic: Compile as Shared Library with FPC?  (Read 5908 times)

trev

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 2020
  • Former Delphi 1-7, 10.2 user
Re: Compile as Shared Library with FPC?
« Reply #15 on: June 20, 2019, 01:28:22 pm »
> And with Apple you pay double for half.

macOS is free :)

And when I tried speccing non-Apple hardware to match the 2018 Mac mini, the difference was minimal (around $50 from memory). Note: I buy my Mac minis from the Apple refurb store.

My preferred OS is FreeBSD, which is also free and runs nicely on pre-2018 Mac minis.

Alas, my customers (ha! they pay nothing, it's free software I make) mainly use Windows (90%), Linux (5%) and macOS (5%).

marcov

  • Administrator
  • Hero Member
  • *
  • Posts: 11382
  • FPC developer.
Re: Compile as Shared Library with FPC?
« Reply #16 on: June 20, 2019, 01:36:30 pm »
(you wanted a rant, you get a rant:)

Quote
Why would I do that? Windows has become really irritating, I only use it for games or when I have to write Windows software. And with Apple you pay double for half.

Well, I've waited over 20 years now for Linux to improve its binary compatibility and become less rigid in its package management (especially versioning), and it hasn't really improved. The few things there are, like symbol versioning, are not widely deployed. Increasingly, the problem is avoided by virtualization techniques, but those are server-side techniques, not for end-user desktops.

And it all depends on hordes of package maintainers preparing everything ex ante, with very little flexibility for average users to manage issues like versioning themselves. The top 500 packages are somewhat OK, and after that it gets sad. It is very clear that the money for Linux vendors is in servers, not desktops.

Try combining new and old (e.g. an LTS release with a few intensively used packages kept up to date), and manage that for a couple of years. Undoable.

Oh, and despite all that, it has been "the year that Linux makes it on the desktop" just about every single year.

Quote
Almost everything is migrating to Linux, even Microsoft itself.

For servers. But in reality, I think this time next year, after Windows 7's deprecation, it might actually be the year that the highest percentage of desktops EVER runs the same OS in the same version (Windows 10). Go figure.
« Last Edit: June 21, 2019, 11:05:12 am by marcov »

SymbolicFrank

  • Hero Member
  • *****
  • Posts: 1313
Re: Compile as Shared Library with FPC?
« Reply #17 on: June 20, 2019, 02:02:01 pm »
Fair enough.

I think the largest competition for Windows is Android laptops and the like. People know Android from their phones, and the devices are cheap.

And while apt was really great a few decades ago, that has also been its downfall, as nothing much has happened in the meantime. Trying to port software to a small custom Linux build can be painful; hence Docker and the like. And even updates often require reboots these days.

devEric69

  • Hero Member
  • *****
  • Posts: 648
Re: Compile as Shared Library with FPC?
« Reply #18 on: June 20, 2019, 03:47:41 pm »
For information: if someone needs to create a software package with specific libraries in addition to shared system libraries (such as a Firebird server, for example), they can look at https://github.com/AppImage/appImageKit.
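
A minimal sketch of how that looks (the directory layout and names below are illustrative, not a verified recipe; see the AppImageKit documentation for the real workflow):

 MyApp.AppDir/
   AppRun                (launcher script, or a symlink to the binary)
   usr/bin/myAppli       (the FPC-compiled executable)
   usr/lib/              (the bundled .so files, e.g. libfbclient.so)

 # pack the AppDir into a single self-contained file
 appimagetool MyApp.AppDir MyApp-x86_64.AppImage

The result mounts itself and runs regardless of what the target distribution has installed.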
use: Linux 64 bits (Ubuntu 20.04 LTS).
Lazarus version: 2.0.4 (svn revision: 62502M) compiled with fpc 3.0.4 - fpDebug \ Dwarf3.

PascalDragon

  • Hero Member
  • *****
  • Posts: 5446
  • Compiler Developer
Re: Compile as Shared Library with FPC?
« Reply #19 on: June 21, 2019, 09:28:06 am »
Quote
So, what happens if you have multiple versions of your app that have different dependencies? That goes for the system libraries as well as your own, and the ones from your IDE. Think about having apt install different versions that are built for different releases of Linux (stable, unstable and experimental)...
For Windows and macOS you'd ship the libraries together with your application. Maybe one could investigate, for macOS, the idea of moving the packages distributed by FPC and/or Lazarus into a Framework...
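
For reference, a macOS framework is just a versioned bundle directory; a sketch of the standard layout (the framework name here is made up):

 FPCRuntime.framework/
   Versions/
     3.4.0/
       FPCRuntime            (the dynamic library itself)
       Resources/Info.plist
     Current -> 3.4.0        (symlink)
   FPCRuntime -> Versions/Current/FPCRuntime    (symlink)

Applications would then link against the framework instead of against individual dylibs.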

For Linux (or other *nix systems) there would be versioned variants of the packages. E.g.
  • For 3.4.0: rtl.3.4.0.ppl, rtl.objpas.3.4.0.ppl, fcl.base.3.4.0.ppl, etc.
  • For 3.4.2: rtl.3.4.2.ppl, rtl.objpas.3.4.2.ppl, fcl.base.3.4.2.ppl, etc.
  • For trunk (let's assume 3.5.1): rtl.3.5.1.ppl, rtl.objpas.3.5.1.ppl, fcl.base.3.5.1.ppl, etc.
Though for trunk one would need to be careful as the compiler/rtl would check for the PPU version as well (so a 3.5.1 from revision 50001 might not be compatible with the libraries from revision 50000 if there was a change in the PPU version).

Of course the version infix would be used for Windows and macOS as well to avoid the "library hell" if the libraries are installed in a public location.

SymbolicFrank

  • Hero Member
  • *****
  • Posts: 1313
Re: Compile as Shared Library with FPC?
« Reply #20 on: June 21, 2019, 10:59:44 am »
So, to recap: yes, it can be done, but it is more complex and takes more time and space than static linking.

On the other hand, if I deliver a custom application, I also include all the tools, libraries and IDEs I needed to develop it, on a DVD. One for the customer and one for me. Because I know that if they ask me in a few years to change something, it will take me a lot of time to set up a working development environment if I don't: the exact versions of all those tools aren't available anymore, and the newest versions probably don't work together and need lots of changes to get working.

devEric69

  • Hero Member
  • *****
  • Posts: 648
Re: Compile as Shared Library with FPC?
« Reply #21 on: June 30, 2019, 04:23:54 pm »
Binaries look for libraries on Windows here (simplified):
 . (the application's own directory)
 system32 [or, wow64]
 %PATH%

... but on POSIX, binaries look here:
 the rpath embedded in the binary, if any
 $LD_LIBRARY_PATH
 the system cache and default directories (/lib, /usr/lib)
 (the directory . is NOT searched, unless you add it yourself with a tweak in .profile)
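
For example, a tiny launcher script shipped next to the binary can add a bundled lib/ directory to the loader's search path (a sketch; the file names are assumptions):

 #!/bin/sh
 # resolve the directory this script lives in (symlinks followed)
 HERE="$(dirname "$(readlink -f "$0")")"
 # prepend the bundled lib/ directory to the loader's search path
 export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
 # hand over to the real executable
 exec "$HERE/myAppli.bin" "$@"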


I am pasting a theoretical "backup" solution here, to indicate from which location an executable must load the shared libraries it wants to use. First, for information, it goes like this when compiling with gcc (yes, I know, we use fpc):
gcc ... myAppli ... -Wl,-rpath=$ORIGIN/../lib64
(the rpath is stored in the ELF executable, in the dynamic section; it can be a relative path)

readelf -d myAppli
.../...
0x0000000f (RPATH) Library rpath: [$ORIGIN/../lib64]
.../...

$ORIGIN is a special token that resolves, at runtime, to the directory containing the actual executable, as readlink would see it, so symlinks are followed.
➔ In other words, wherever the binary is, $ORIGIN points there.
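
Side note, a sketch I have not tested: FPC can pass arbitrary flags through to the linker with its -k option, so the equivalent of the gcc line above should be something like:

 # pass the rpath straight through fpc to GNU ld
 fpc -k'-rpath=$ORIGIN/../lib64' myAppli.pas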



There is a tool named chrpath. But above all, there is a more universal tool than chrpath, called patchelf (see https://nixos.org/patchelf.html, for example). It was originally created for making packages for Nix and NixOS (a package manager and a GNU/Linux distribution, respectively).

● In case there is no rpath in a binary (here called myAppli), chrpath fails:
chrpath -r '$ORIGIN/../lib64' myAppli
myAppli: no rpath or runpath tag found.

● On the other hand...,
patchelf --set-rpath '$ORIGIN/../lib64' myAppli
...succeeds just fine! (that's the theory: I still have to test the practice)

One last detail: the Linux library loader works like the FreeBSD library loader. They are maintained by different people, but they follow the same conventions (the loader's features can be found here: https://www.freebsd.org/cgi/man.cgi?query=ld.so).
Since the loader maintainers already have the '$ORIGIN' switch at their disposal and use it all day, they have no interest in modifying the loader to add an option that searches the directory '.' first when loading libraries, as Windows or macOS do: it would be redundant with '$ORIGIN'.

 
➲ So, as far as I know, this means that in the meantime, until a similar option appears in FPC :-* , readelf (to read the sections of a library or an executable: see https://www.freebsd.org/cgi/man.cgi?query=readelf&sektion=1&manpath=freebsd-release-ports, for example) and patchelf (to modify or add an rpath section inside ELF binaries compiled with FPC) are the two tools that allow Lazarus developers to manage their library dependencies.

« Last Edit: July 01, 2019, 02:03:42 pm by devEric69 »
use: Linux 64 bits (Ubuntu 20.04 LTS).
Lazarus version: 2.0.4 (svn revision: 62502M) compiled with fpc 3.0.4 - fpDebug \ Dwarf3.

rsz

  • New Member
  • *
  • Posts: 45
Re: Compile as Shared Library with FPC?
« Reply #22 on: July 03, 2019, 02:24:54 pm »
Quote
The run-time package approach is flawed in the sense that it is not language agnostic. Do you understand that? Many people here do not.....

Perspective: I used to work for (contract things) Borland in the 90's. I know what I am talking about.

I don't see the issue with that. I personally would just like to reduce the distributed size and memory footprint of my program suite. It certainly would be nice if other languages could easily use our run-time packages, but in that case you should create your library with C interoperability in mind. Or am I completely off base here?

Quote
Well, I've waited over 20 years now for Linux to improve its binary compatibility and become less rigid in its package management (especially versioning), and it hasn't really improved. The few things there are, like symbol versioning, are not widely deployed. Increasingly, the problem is avoided by virtualization techniques, but those are server-side techniques, not for end-user desktops.

And it all depends on hordes of package maintainers preparing everything ex ante, with very little flexibility for average users to manage issues like versioning themselves. The top 500 packages are somewhat OK, and after that it gets sad. It is very clear that the money for Linux vendors is in servers, not desktops.

This has also been my issue with Linux distributions and projects in general, speaking as someone who uses Linux as their daily workstation. I know where you are coming from, but no one will agree on a single solution to this widespread issue; some don't even acknowledge it as an issue. Distributions just can't come to an agreement on what constitutes a "base system". The Linux ecosystem basically boils down to diversity in services, dependencies and system configurations, and the latest and greatest features at the expense of API/ABI stability. That is great for the end user who loves to customize their system and have the latest features, but a real pain for developers to deal with. We also had the LSB (Linux Standard Base), but distribution maintainers just didn't care, so it was dropped.

I don't want to throw anyone under the bus, but projects like Gtk+ love breaking API/ABI compatibility to introduce new features, which is a pain to deal with on Linux due to the way you are "supposed" to distribute applications (dynamically linked, no libraries bundled with the applications, recompiled for every distro version). GNOME has a long-standing history of "out with the old, in with the new" at the expense of anyone developing their software against its technologies. As much as I like giving Microsoft crap for their operating system, backwards compatibility and ABI stability have always been among their greatest strengths in user-space applications. You can still install and use programs from nearly two decades ago on Windows 10; try that with old Linux programs and you will hit issues with an old GCC and Gtk+1, which is no longer available and doesn't even build on modern Linux systems. Linux applications are meant to be compiled for the current distribution version and then recompiled and modified to work with each new version. User space is basically a moving target.

I could go on a very long rant about what exactly is wrong with the Linux ecosystem, but that is essentially pointless. That being said, there are modern solutions to application deployment on Linux that don't rely on containers / virtualization or on modifying your application to work in new distribution versions (more on this later).

Quote
So, to recap: yes, it can be done, but it is more complex and takes more time and space than static linking.

In my mind, if I create an application suite with many applications, past a certain cutoff point the total size will be smaller than statically linking everything. I suppose only the numbers will tell for certain if/when this feature lands.

Quote
On the other hand, if I deliver a custom application, I also include all the tools, libraries and IDEs I needed to develop it, on a DVD. One for the customer and one for me. Because I know that if they ask me in a few years to change something, it will take me a lot of time to set up a working development environment if I don't: the exact versions of all those tools aren't available anymore, and the newest versions probably don't work together and need lots of changes to get working.

This is good practice and I will keep it in mind, thanks.

Back to the point on distributing Linux applications.

As mentioned by devEric69, it certainly is possible to distribute programs and ship all the dependencies like you would on Windows. I have done some tests and was able to compile a Gtk+1 program on Ubuntu 5.04 (released 2005) and run it on Ubuntu 19.04 (released 2019) by shipping all dependencies, using LD_LIBRARY_PATH and modifying the rpath with patchelf. You can also do the reverse and compile a Gtk+3 application on a modern system and run it on Ubuntu 5.04, which I have also done. This is however not the "Linux way" of doing things; it's hard to get right and people generally don't do it, so your favorite applications or games from a decade ago simply don't work.
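
Roughly, the dependency-bundling part looks like this (a sketch from memory; the binary name and paths are illustrative):

 # copy every library the binary links against into a private lib/ dir
 mkdir -p lib
 ldd ./oldapp | awk '/=> \// { print $3 }' | xargs -I{} cp -v {} lib/
 # make the binary look in ./lib first, relative to its own location
 patchelf --set-rpath '$ORIGIN/lib' ./oldapp
 # or run it without touching the binary at all
 LD_LIBRARY_PATH="$PWD/lib" ./oldapp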

The problem with AppImage is that people don't ship *all* dependencies. Just look at the available AppImages and how they are packaged: they do not ship ld-linux.so, for instance, which was required for me to get the Ubuntu 5.04 application working on Ubuntu 19.04. Chances are that AppImages packaged today will not work a decade from now, depending on how they were packaged.

In my opinion, if you really want to ship an application without all the Linux platform problems and have it still work in the next decade, then you should ship it as a Flatpak bundle together with the Flatpak runtime. For those who don't know, Flatpak basically works like an unprivileged chroot plus a bubblewrap sandbox. The runtime is the base environment in which you execute the application, and the bundle you ship has all the dependencies your application requires. The runtimes aren't small, but if you really wanted to, you could create your own stripped-down runtime (I have yet to experiment with this).
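
For reference, producing such a single-file bundle looks roughly like this (a sketch; the manifest file and application ID are made up):

 # build the app from its manifest and export it into a local repo
 flatpak-builder --repo=myrepo build-dir org.example.MyApp.json
 # pack the exported build into one redistributable file
 flatpak build-bundle myrepo MyApp.flatpak org.example.MyApp

The matching runtime is installed separately (or is already present on the user's system).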

So while I am unable to see into the future, applications packaged as Flatpaks today should "just work" on future Linux installations, due to how it inherently works.
« Last Edit: July 03, 2019, 02:31:00 pm by rsz »

 
