I write my own programs for academic projects, mostly involving numerical analysis.
I have one that I'd like to run on several CPUs simultaneously to get through many runs more quickly.
I've set up a script that runs the program (say 30 times). I duplicate the script (say 8 times, one per CPU) and then run it. Each script runs in a different directory.
If I set off the scripts with a little delay between each one, then as long as the program is still running by the time I set off the last one, all works well.
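For reference, the two scripts look roughly like this (the directory layout, program name, and delay length are illustrative, not my exact files):

```bat
rem run30.bat -- one copy sits in each working directory and runs the program 30 times
for /L %%i in (1,1,30) do myprog.exe

rem launch.bat -- start one script per CPU, staggered by a few seconds
for /L %%c in (1,1,8) do (
    start "" /d C:\runs\dir%%c run30.bat
    timeout /t 5 >nul
)
```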
The problem I have is that when multiple instances of the program start simultaneously, Windows slows to a crawl at starting programs. Once a program gets going, though, it runs at normal speed. The computer remains slow to start other things, to the point of requiring a reboot. The shorter my code's run time, the worse this is, because more instances start at the same time.
To rule out a memory leak in my code, I reduced it to a "Hello World" type program, and the slowdown still happens.
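The reduced test is essentially just this (the exact message doesn't matter):

```fortran
program hello
  ! minimal program used to rule out my numerical code as the cause
  print *, 'Hello World'
end program hello
```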
I ruled out the compiler by building the test with both gfortran (64-bit) and Lazarus (64-bit, console application); the effect is the same.
I wondered if using the same executable name for each running instance was the problem, but giving each script a differently named copy of the executable doesn't help.
The problem arises on two separate computers, one running Windows 7 Home, the other Windows 10 Education.
Task Manager shows that CPU and memory usage are both low while the computer is slow.
Can anyone help? Has anyone experienced this?
I've asked this on the Microsoft Community and was pointed to the Lazarus and gfortran forums.
I've also asked on Tom's Hardware. The suggestion there was that the code might be overusing the disk, but Task Manager shows no evidence of that. It was also suggested that loading libraries into the cache could cause a problem, but I don't see any memory issues. Besides, the gfortran code was compiled with "-static", so everything is in the executable; I presume my Lazarus code works the same way.
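For completeness, the gfortran build is along these lines (the source file name and optimisation level are illustrative; "-static" is what I actually use):

```
rem build a fully static 64-bit executable so no separate DLLs are loaded at run time
gfortran -static -O2 hello.f90 -o hello.exe
```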