I might have missed the point, but as I read it, the OP uses different optimization levels for debug and release (which is the default for those build modes). I never let a final program be optimized differently from what was developed and tested. I would say: first debug at a low O level (0 or 1), then go up to the required optimization level and test there as well, with debugging if needed, and only if that is OK switch off the debug mode.
To me it is not surprising that a program behaves differently at different O levels (due to e.g. initialization, which I sometimes intentionally skip to save time), so the point is to do proper testing and debugging at the final optimization level.
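A minimal sketch of the kind of skipped-initialization issue that can behave differently between O levels (names are made up; the exact behavior depends on compiler and platform, so no particular output is guaranteed):

```pascal
program UninitDemo;
{$mode objfpc}

function Sum(n: Integer): Integer;
var
  i, total: Integer;  // 'total' is deliberately not initialized
begin
  // At -O0 'total' may happen to start at 0 in a fresh stack frame;
  // at -O2/-O3 it may live in a register holding leftover garbage,
  // so the result can differ between optimization levels.
  for i := 1 to n do
    total := total + i;
  Result := total;
end;

begin
  WriteLn(Sum(10));  // only correct if 'total' happened to be 0
end.
```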
On a side note. When I have an array declared as
type tMyArray = array of integer;
var MyArray : tMyArray;
and I want to set its size, like
SetLength(MyArray, ...);
then I get a hint at O0, O1 and O2 that it does not seem to be initialized. At O3 (and O4) I do not get that hint. So to make the compiler "happy" I usually write
var MyArray : tMyArray = nil;
and it solves the problem; however, if the array is part of a record/object I cannot do that. To avoid one more instruction
before the SetLength I usually simply suppress the hint. So my questions: why is it even a hint for a dynamic array not to be initialized before first setting its size? Can it cause any problem? And if so, why is it not reported at O3? I would have assumed that optimization changes the generated assembler/machine code, but it should not affect the information provided during compilation.
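To illustrate the record case I mean, a sketch (names are made up): the typed-constant trick is not available for a field, so I suppress the hint locally around the SetLength with the standard {$HINTS OFF}/{$HINTS ON} directives.

```pascal
program RecordArrayDemo;
{$mode objfpc}

type
  tMyArray = array of Integer;
  tMyRec = record
    Data: tMyArray;  // cannot write 'Data: tMyArray = nil;' here
  end;

var
  Rec: tMyRec;
begin
  {$HINTS OFF}              // silence the "does not seem to be initialized" hint
  SetLength(Rec.Data, 10);
  {$HINTS ON}
  Rec.Data[0] := 42;
  WriteLn(Rec.Data[0]);
end.
```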