Despite CGI being immune to memory leaks by design, its one-process-per-request model isn't efficient: all initialization is redone on every request.
I was wondering about that.
If we're talking about the late 1990s, the performance difference between launching a new process and reusing an existing one would have been much more pronounced.
I do remember that the Apache server even "preforked" itself for handling incoming connections, precisely because fork+exec on Linux was a fairly expensive operation (e.g. compared to CreateProcess() on Windows; the difference came from the need to copy the forking process's memory, only to discard it altogether on exec).
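The prefork idea can be sketched in a few lines: the parent opens the listening socket once, then forks workers that all block in accept() on the same inherited socket, so no per-connection fork+exec is needed. A minimal sketch (function name and worker count are illustrative, and real workers would loop instead of exiting after one connection):

```python
import os
import socket

def start_prefork(n_workers=2):
    # Parent opens the listening socket once; children inherit it on fork.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    srv.listen(8)
    pids = []
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:
            # Worker: all workers block in accept() on the shared socket.
            # Here each handles a single connection, then exits.
            conn, _ = srv.accept()
            conn.sendall(b"hello from pid %d\n" % os.getpid())
            conn.close()
            os._exit(0)
        pids.append(pid)
    return srv, pids
```

The kernel hands each incoming connection to one of the already-forked workers, which is exactly the startup cost the prefork model avoids paying per request.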
Since then, OSes have evolved (at least Linux adopted a copy-on-write approach for fork+exec). Hardware has evolved.
Software has evolved too (Chromium, for instance, is multi-process based rather than purely multithreaded).
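The copy-on-write effect mentioned above is easy to observe: forking a process that holds a large buffer completes almost instantly, because pages are shared with the child and only duplicated lazily if one side writes to them. A small sketch (the buffer size and function name are just illustrative):

```python
import os
import time

def timed_fork_with_big_buffer(size=200 * 1024 * 1024):
    buf = bytearray(size)          # ~200 MB resident in the parent
    t0 = time.perf_counter()
    pid = os.fork()                # COW: no physical copy of buf here
    if pid == 0:
        _ = buf[0]                 # reading shared pages costs nothing extra
        os._exit(0)
    os.waitpid(pid, 0)
    return time.perf_counter() - t0
```

On a copy-on-write system this returns in a tiny fraction of the time that physically copying 200 MB would take, which is why a bare fork is so much cheaper today than the "copy everything, then discard it on exec" behavior described above.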
Today, I'd say multi-processing is considered a blessing rather than a curse.
So CGI might be nearly as efficient as FastCGI, thanks to the reduced cost of starting a process, while being safer in every regard.
OS control mechanisms (such as Job Objects on Windows or cgroups on Linux) can limit the resources available to a spawned (CGI) process.
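Job Objects and cgroups are configured at the OS level, but the same idea can be sketched portably with POSIX rlimits applied in the child just before exec. The function name and the specific limits below are illustrative assumptions, not recommendations:

```python
import resource
import subprocess
import sys

def run_limited(argv):
    def apply_limits():
        # Runs in the forked child before exec; caps what the "CGI"
        # program can consume. Values here are purely illustrative.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))          # 2 s of CPU
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30,) * 2)   # 1 GiB address space
    return subprocess.run(argv, preexec_fn=apply_limits,
                          capture_output=True, text=True)

result = run_limited([sys.executable, "-c", "print('handled request')"])
```

A runaway CGI process then gets killed by the kernel when it exceeds its budget, instead of taking the whole server down with it; cgroups add finer-grained controls (memory pressure, I/O, CPU shares) on top of this basic idea.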