Hello,
I am dealing with legacy software, developed over many decades, that is in a state of high entropy. I would like to investigate whether I can reduce that entropy and improve the internal organization (or at least determine whether that is feasible).
The software makes extensive use of conditional compilation, which basically works as follows:
1. during compilation, I pass a define on the command line, e.g. -dMaster1
2. this Master1 define is expanded into a set of derived defines through a mapping kept in a dedicated include file:
{$IFDEF Master1}
{$DEFINE _a1}
{$DEFINE _a2}
..
{$ENDIF}
{$IFDEF Master2}
{$DEFINE _a1}
{$DEFINE _D2_s}
..
{$ENDIF}
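As a first step toward automating this, the mapping file can be parsed into a master-to-derived-defines table with a small script. This is only a sketch: the parse_mapping name is mine, and it assumes each directive sits alone on its own line, exactly as in the excerpt above.

```python
import re

def parse_mapping(text):
    """Parse {$IFDEF MasterX} ... {$DEFINE _x} ... {$ENDIF} blocks into a
    dictionary mapping each master define to its set of derived defines."""
    mapping = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        m = re.match(r'\{\$IFDEF\s+(\w+)\}', line, re.IGNORECASE)
        if m:
            current = m.group(1)
            mapping[current] = set()
            continue
        if re.match(r'\{\$ENDIF\}', line, re.IGNORECASE):
            current = None
            continue
        d = re.match(r'\{\$DEFINE\s+(\w+)\}', line, re.IGNORECASE)
        if d and current is not None:
            mapping[current].add(d.group(1))
    return mapping
```

Fed the fragment above, this would yield {'Master1': {'_a1', '_a2'}, 'Master2': {'_a1', '_D2_s'}}, which gives the full set of defines active for each master.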
3. in the various source modules, the relevant code portions are enabled almost exclusively via the derived defines, although occasionally the Master1 define is also used directly:
{$IFDEF _a2}
procedure RunMasterMotor();
...
{$ENDIF}
{$IFDEF _a2}
RunMasterMotor();
{$ENDIF}
The problem, of course, is that the code has become very difficult to read because of these countless {$IFDEF ...} blocks.
I would like to obtain the source code that results from applying a given master define at compile time. I could then repeat the process for all existing master defines (there are about 150) and measure how much the resulting sources differ from one another, to see whether I can build groups of masters that share the same (or very similar) code.
I had thought of hooking into the compilation process: essentially stopping before code generation and converting the parsed structures in memory (already stripped of the conditionally excluded parts) back into source files in separate directories.
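Once one expanded source tree per master exists, the grouping step could be as simple as a pairwise line-level similarity plus greedy clustering. A sketch using the standard library (the 0.9 threshold and the greedy strategy are arbitrary choices of mine):

```python
import difflib

def similarity(a, b):
    """Ratio in [0, 1] of how much two expanded sources share, line by line."""
    return difflib.SequenceMatcher(None, a.splitlines(), b.splitlines()).ratio()

def group_masters(expanded, threshold=0.9):
    """Greedy grouping: each master joins the first group whose
    representative source is at least `threshold` similar to its own.
    `expanded` maps master define -> its fully expanded source text."""
    groups = []  # list of (representative_master, [member_masters])
    for master, src in expanded.items():
        for rep, members in groups:
            if similarity(expanded[rep], src) >= threshold:
                members.append(master)
                break
        else:
            groups.append((master, [master]))
    return groups
```

With ~150 masters the roughly 11,000 pairwise comparisons are cheap, and the resulting groups would show directly which masters produce the same (or nearly the same) code.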
Does anyone have any suggestions?
Thank you