@Warfley: my question is: how are you going to make sure that all those exceptions aren't raised after release? That is, to me, the main question.
Well, that's the beauty of this approach: there will never be an unhandled exception in a release build (at least in theory), because if there were one, the program wouldn't compile. And as I already said, there is no problem with raising an exception; the fact that an exception was raised does not mean it is an error, just that the called function was prematurely interrupted because something happened that prevented it from finishing normally.
But if you handle that exception, it is not a problem. E.g. if you have a calculator with user input and the user tries to divide by 0, an exception will be raised. If you catch that exception, notify the user that they need to enter a non-zero divisor, and let them retry, it's not an error, just the normal functioning of the program.
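To make that concrete, here is a minimal sketch of the calculator case, written in Python for brevity (the same pattern maps directly onto a Pascal try..except block); the function name and its return-None-on-retry convention are my own invention, not anything standard:

```python
def safe_divide_prompt(dividend: float, divisor_text: str):
    """Parse the user's divisor and divide, treating bad input as a
    normal, recoverable event rather than as a program error."""
    try:
        divisor = float(divisor_text)
        return dividend / divisor
    except ZeroDivisionError:
        # Expected situation: the user entered 0. Notify and let them retry.
        print("Please enter a non-zero divisor.")
        return None
    except ValueError:
        # Also expected: the input wasn't a number at all.
        print("Please enter a number.")
        return None
```

The point is that both except branches are part of the program's normal control flow: the exception is caught, the user is informed, and nothing is broken.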
Similarly, if you have a chat program that reads from a TCP stream and the other party ends the chat by closing the stream, you get an exception when trying to fetch the next message. That isn't an error, just a notification that the stream was closed, so you catch that exception and print the "chat was closed by the other side" line. (E.g. in WebSockets, while there is a close message, sending it is not required; if the TCP stream closes, that also ends the WebSocket stream. So while you expect a close message most of the time, a disconnect while waiting for the next message is a perfectly valid way to close the connection.)
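The same shape in code, again sketched in Python (the `ChatClosed` exception and `read_message` helper are hypothetical names I made up for illustration; a real chat client would of course do framing, decoding, etc.):

```python
import socket

class ChatClosed(Exception):
    """Raised when the peer closed the stream; an expected event, not an error."""

def read_message(conn: socket.socket, size: int = 1024) -> bytes:
    data = conn.recv(size)
    if not data:  # an empty read means the peer closed the TCP stream
        raise ChatClosed
    return data

def chat_loop(conn: socket.socket) -> None:
    try:
        while True:
            print(read_message(conn).decode())
    except ChatClosed:
        # Not a failure: the other side simply ended the chat.
        print("chat was closed by the other side")
```

The loop stays clean because the "stream ended" case is handled exactly once, at the level where the program knows what it means.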
Exceptions are just a way to notify the programmer about an unexpected event. If you have code to handle that event correctly, there is no error that needs to be removed, it's just the way you handle unexpected situations.
And for a follow-up question: what do you do if you encounter bad data inside your sanitized state machine, like with an imported database that doesn't fit exactly, or interesting new Unicode extensions?
If you have mechanisms (like the aforementioned custom types and type checking) to ensure that this can't happen (so when you have an email type, you know it is definitely formatted according to RFC 5322), then, unless you did something very unsafe (like pointer casting, using Move, or anything else that circumvents type checking), this simply can't happen.
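A sketch of such a "sanitized" type, in Python for brevity: validation happens once, at construction, so every function that accepts the type can rely on it. (To be clear, the trivial regex below is far weaker than real RFC 5322 validation; it only illustrates the pattern.)

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    """Can only be constructed from a syntactically plausible address,
    so functions taking an Email never need to re-check it.
    (Illustrative only: this regex is much weaker than RFC 5322.)"""
    address: str

    def __post_init__(self):
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", self.address):
            raise ValueError(f"not a valid email address: {self.address!r}")

def send_newsletter(to: Email) -> str:
    # No validation needed here: the type guarantees sanitized input.
    return f"sending to {to.address}"
```

Unsanitized data can then only enter through the constructor, which is exactly the single point where the exception for bad input is raised and handled.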
But of course sometimes you may not have this, either because compiler-based checks are too restrictive, or because you just don't have the time/capacity to write all this extra typing code. Then you need testing, lots of testing. The problem here is that you have functions which expect certain (sanitized) input, so you need to verify that this input will actually be provided by the other components.
This is what integration tests were developed for. If you are familiar with the V-model, they are the step above the unit tests. Where unit tests check whether a function works correctly, integration tests check whether the function is used correctly. As such, this is not a problem that can be solved easily at the code level; it requires a more holistic approach and should be designed before the first line of code is actually written, during system design.
Tooling in this area is usually part of component design tools. For example, during planning you can create a basic design and architecture in UML and use OCL (the Object Constraint Language) to define interfaces through pre- and postconditions. You can then use a verifier to check that your UML/OCL model is internally consistent, and then all you need to do is ensure that your code behaves according to this interface specification.
Part of this is ensuring that your interface matches the UML definition, which is quite easy to verify. It's harder to show that the code actually follows the constraints set out in OCL (basically, that the postconditions hold under the assumption of the preconditions), but this is what unit tests should ensure. (You can also use model checking or automatic verification and testing to validate this.)
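One lightweight way to carry such pre-/postconditions from the model into the code is a contract wrapper that checks them at call time. A minimal sketch in Python (the `contract` decorator and its parameters are my own ad-hoc construction, not a standard API or an OCL tool):

```python
from functools import wraps

def contract(pre=None, post=None):
    """Attach OCL-style pre/postconditions to a function and
    check them on every call."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), \
                    f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), \
                    f"postcondition of {fn.__name__} violated"
            return result
        return wrapper
    return deco

# Precondition: x >= 0. Postcondition: the result squared gives back x.
@contract(pre=lambda x: x >= 0, post=lambda r, x: abs(r * r - x) < 1e-9)
def square_root(x: float) -> float:
    return x ** 0.5
```

A caller violating the precondition fails immediately at the interface boundary, which is exactly the kind of misuse that integration tests are meant to surface.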
Sadly, such tooling doesn't really exist for Lazarus; it's on my list of things I want to build some day but will probably never find time for.
Also, if you have a fully integrated design process, you can use code generation to generate the bare-bones structure from your UML/OCL design, ensuring that what you build always conforms to the architecture you modeled beforehand.
So it all builds on one another. You write your low-level unit tests (which is where you would check that exceptions are handled or thrown correctly, and so on) and then use integration tests to ensure that your assumptions about the usage and combination of these low-level functions hold.
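The layering can be sketched with the standard library's unittest module; all the function names here (`parse_port`, `server_address`) are invented for the example:

```python
import unittest

def parse_port(text: str) -> int:
    """Low-level unit: parses a TCP port, raising ValueError on bad input."""
    port = int(text)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def server_address(host: str, port_text: str):
    """Higher-level component, expected to pass input through the parser."""
    return (host, parse_port(port_text))

class UnitTests(unittest.TestCase):
    # Does the function itself work correctly, including its exceptions?
    def test_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range_raises(self):
        with self.assertRaises(ValueError):
            parse_port("99999")

class IntegrationTests(unittest.TestCase):
    # Is the function *used* correctly by the component above it?
    def test_address_uses_parser(self):
        self.assertEqual(server_address("localhost", "80"), ("localhost", 80))
```

Running this with `python -m unittest` exercises both layers: the unit tests pin down the function's contract (including which exceptions it throws), and the integration test checks the assumption that the caller respects that contract.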
The problem is that testing is hard. It's really hard to write tests for all circumstances, and there is no (non-trivial) software that is truly bug-free. You can only try to test as thoroughly as possible and to divide and conquer to reduce complexity (which is why there is the distinction between unit tests, integration tests, module tests, and whole-program tests: it reduces exponential complexity to steps of linear complexity).