No, FPC already has incremental builds and a precompiled unit (.ppu) system built in, which plays roughly the role of precompiled headers, and thus wouldn't get the bulk of the savings ccache provides, if any at all, even in theory.
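As a minimal sketch of what that unit system buys you (all names here are made up for illustration): each unit is compiled once into a .ppu interface file plus an object file, and later builds reuse those as long as the unit's source is unchanged, which is essentially the caching a C workflow has to bolt on externally.

```
{ mathutils.pas - hypothetical example unit }
unit MathUtils;

{$mode objfpc}{$H+}

interface

function Square(X: Integer): Integer;

implementation

function Square(X: Integer): Integer;
begin
  Result := X * X;
end;

end.

{ demo.pas - hypothetical program using the unit }
program Demo;

{$mode objfpc}{$H+}

uses
  MathUtils; { compiled to mathutils.ppu/.o on the first build and simply
               reused on later builds as long as mathutils.pas is unchanged }

begin
  WriteLn(Square(7));
end.
```

Building demo.pas compiles MathUtils automatically the first time; rebuilding only recompiles units whose sources changed.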
The ccache tool and its philosophy are afaik also specific to a GCC-like compile-and-preprocess model (which FPC doesn't follow, as it compiles multiple files in one compiler invocation and doesn't work with headers in the same way).
FPC does have some scaling problems, especially in the parallel compilation realm, where only a carefully crafted parallel build is possible (as done for e.g. the packages/ tree) and which doesn't scale in an embarrassingly parallel way like per-file GCC compilation does.
But to fix that, threading would have to be built into the compiler itself, which is a gigantic undertaking, especially across multiple platforms (and even then some arranging would be needed for best performance, like working with very large build units), and with uncertain overall speedups.
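To make the parallelism point a bit more concrete, here is a sketch (again with made-up names, reusing the hypothetical MathUtils unit from above): with GCC every .c file can go to its own compiler process because headers are plain text that each process re-reads, whereas an FPC unit can only be compiled once the .ppu of every unit it uses exists, so the dependency graph dictates the build order and a single fpc invocation normally walks that graph itself.

```
{ reporting.pas - hypothetical unit that uses the MathUtils unit from the
  sketch above }
unit Reporting;

{$mode objfpc}{$H+}

interface

uses
  MathUtils; { Reporting cannot be compiled until mathutils.ppu exists,
               so unit dependencies impose an ordering on the build
               instead of letting every file be compiled independently }

function SquareReport(X: Integer): String;

implementation

uses
  SysUtils;

function SquareReport(X: Integer): String;
begin
  Result := Format('%d squared is %d', [X, Square(X)]);
end;

end.
```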
That said, I've worked with both FPC in a terminal and Lazarus on my RPi4/4GB, and for not-too-gigantic projects that worked fine. Even a full FPC bootstrap would only take 4-8 minutes iirc. Worth going for a coffee, but not THAT bad.
I assume an RPi5/8GB would be an even better experience. (Better storage speed is maybe even more important than a faster processor, and even more so on Windows; both storage speed and RAM are very important if you want to run Windows.)