That's in line with expectations. FPC parallelises the packages part of the build, and every core added beyond two has diminishing returns (a six-core machine is still faster with -T 6 than with -T 4, just not by much).
Free Pascal compilation is not massively parallel the way C compilation is, but there are reasons for that (and some of those tradeoffs are why FPC is consistently fast).
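For reference, the thread count is passed in roughly like this; a minimal sketch, assuming a recent FPC source tree whose top-level Makefile forwards FPMAKEOPT to fpmake (fpmake's -T option sets its thread count):

make all FPMAKEOPT="-T 6"   # let fpmake build the packages with 6 threads

This matches the observation above: only the packages stage benefits, since the compiler and RTL parts of the build remain sequential.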
But, as a Linux user, using "-j" is a habitual operation when building large software.
In practice, the real bottleneck is the linking stage, and I'm not aware of a linker that benefits heavily from multiple CPUs/cores.
And... PascalDragon,
a makefile can query the -j parameter just by reading the MAKEFLAGS variable, like this:
all:
	@echo $(MAKEFLAGS)
It returns:
-j2 --jobserver-auth=3,4
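The job count can even be pulled back out of that string; a minimal sketch, GNU make only, and assuming the -jN word shown above actually reaches MAKEFLAGS:

# Hypothetical: extract N from the -jN word in MAKEFLAGS
JOBS := $(patsubst -j%,%,$(filter -j%,$(MAKEFLAGS)))
all:
	@echo "jobs requested: $(JOBS)"

Note that in sub-makes the count may be replaced by the --jobserver-auth file descriptors, so a robust tool would talk to the jobserver protocol rather than parse -j.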
In general a linker could parallelize the reading of the object files. Almost everything else is probably too serialized...
Of course, if one looks at one of the most frequently examined cases that benefit from make -j, i.e. a Linux kernel rebuild, one notices that most of the code (i.e. almost all the drivers etc.) is built as .ko files. Multiple .ko files are definitely built in parallel, but I don't know to what extent the fairly small number of intermediate files linked into a single .ko are built in parallel.
If OP wants to make a case for change, then I think he needs to provide convincing evidence that an individual Linux .ko benefits from parallelisation, and then demonstrate that FPC's building an individual package is significantly less efficient... which together would be a fairly hefty analysis job, allowing for the number of .ko files and packages being built simultaneously.
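For what it's worth, the first half of that evidence could be gathered with a rough timing run; a sketch only, assuming a configured in-tree kernel build, with the driver directory an illustrative pick rather than a recommendation:

# Build the objects for one module directory serially, then in parallel,
# and compare wall-clock times. make clean is heavy-handed but keeps the
# two runs comparable.
make clean
time make -j1 drivers/net/ethernet/intel/e1000/
make clean
time make -j8 drivers/net/ethernet/intel/e1000/

If the -j8 run were not much faster, that would suggest the handful of objects behind a single .ko is indeed a serial tail, which is the same shape of problem FPC faces within one package.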