You can devise a way to treat local short-lived objects differently from others, but then that would just be another level of complexity, wouldn't it? That's not the lazy thing to do (or is it?)
Not really, I mean we've had this concept since the 50s, it's called a stack frame. All local variables are automatically freed by the compiler, and we don't even think about it. I mean, isn't it weird that we write:
sl := TStringList.Create;
try
  // Use sl
finally
  sl.Free;
end;
But not:
i := 42;
try
  // Use i
finally
  i.Free;
end;
When Delphi introduced classes, it introduced a new type that explicitly lacks a feature every type before it had. Delphi classes intentionally try to hide the fact that they are pointers, so they desperately want to look like normal local variables, but then they lack one of the most essential features of those. And the thing is, there are types that do this far better: strings and arrays are also pointers that hide the fact that they are pointers, yet we don't need to do the following:
s := 'Foo';
try
  // Use s
finally
  s.Free;
end;
So classes, which must be handled manually, are the exception, not the rule. The way it currently works adds extra complexity; if classes simply behaved like literally every other type in the history of procedural programming, that complexity wouldn't exist.
But having been a professional programmer for the last thirty years, I've seen a lot of half-baked programs in an unreliable, unfinished state. Unfortunately I was often responsible for salvaging many of these mostly .NET projects, and I've had to ask: when precisely does the garbage collector kick in? When programs are written to run twenty-four seven, it is better to be on the right side of memory management!
The bugs that Mozilla analyzed were the bugs found in production, i.e. those that slipped through testing. Sure, there will probably be many more logic bugs during development; that's what debugging and testing are for. The thing with memory bugs is that they are the kind of bugs that are extremely hard to find in testing, while also, as previously said, often being extremely security critical (a simple use-after-free on user-controlled data can be exploited to gain root access to the computer the program is running on).
So yes, there will be other bugs, but unless you show me specific examples where the lack of manual memory management produced a bug, this is no reason why manual memory management is better. The fact that there are bugs in .NET programs does not prove that the same programs written with manual memory management would have no bugs. The opposite is the case: you would have the exact same bugs plus additional memory management bugs.
I'm extremely confident in saying that over 30 years of research, since automatic memory management became the norm, has shown conclusively that when it comes to code maintainability, manual memory management is always worse than automatic. Google recently reported that around 70% of their serious security bugs are memory safety issues. Just type "memory management bugs" into Google Scholar and you will find mountains of academic research on the topic.
The discussion on that is settled: all the evidence we have shows that manual memory management is a liability, relevant only for extremely low-level or performance-oriented code. It's not a question of whether automatic memory management is good; it undeniably is. The question is how we can use it.