The finally block always gets executed! In your second code snippet, Screen.Cursor := crDefault; will be skipped if the exception is re-raised from the exception handler with raise, or if there is no suitable handler for the exception type.
With the same logic, I do not see why it is recommended in many places to use two nested try.. blocks. It bypasses the fact that you cannot use try..except..finally, but what is the benefit of it over the second method shown above?
procedure a;
begin
  Screen.Cursor := crHourglass;
  try
    try
      // long algorithm...
    except
      // handle the exception or decide not to, and let it be raised
      // further to the caller of proc a.
    end;
  finally
    Screen.Cursor := crDefault;
  end;
end;
Yes! The finally block always gets executed! In your second code snippet, Screen.Cursor := crDefault; will be skipped if the exception is re-raised from the exception handler with raise, or if there is no suitable handler for the exception type.
Or if you have 'Exit' in the '// long algorithm' block.
The finally block always gets executed! In your second code snippet, Screen.Cursor := crDefault; will be skipped if the exception is re-raised from the exception handler with raise, or if there is no suitable handler for the exception type.

My thought would be that either I can handle the exception in except, and then I do not need finally; the code is OK without it. Or I do not know what went wrong (an exception I cannot handle), but then it might be too risky to continue with any code after that, even in finally. Now I see the benefit of catching some exceptions but re-raising others and still doing some final activity. This would, however, make a lot of sense of the language improvement request seen elsewhere to allow try..except..finally..end in one block, so one would not need two nested try-s.
Benefit is that it catches exceptions and also makes sure that the 'finally' block gets executed.

This was the unclear part. If I catch the exception, then I do not need finally, as the code after the except gets executed anyway. The whole learning point for me is that I can leave some exceptions unhandled or re-raise them, and finally still works.
While you can of course omit the try-finally now, because now this is working completely fine, it is not very future proof.

Good point.
This was the unclear part. If I catch the exception then I do not need finally, as the code after the except gets executed anyway. The whole learning point for me is that I can leave some exceptions unhandled or re-raise them, and finally still works.

If the code does not contain an "Exit" call, then yes. But remember that exceptions like out-of-memory could theoretically be raised by pretty much any function (e.g. if it deals with strings, this is always a danger), even those inside an except block. So better safe than sorry: always use try..finally.
My thought would be that either I can handle the exception in except and then I do not need finally, code is OK without it; OR I do not know what went wrong (an exception I cannot handle), but then it might be too risky to continue with any code after that even in finally. Now I see the benefit of catching some exceptions, but re-raising some others and still do some final activity. This would however make the language improvement request seen elsewhere to allow try..except..finally..end in one block a lot of sense, so one would not need two nested try-s.

I'm going to give you a piece of advice that is quite likely many OOP programmers will disagree with, which is: a bug-free program never needs a "try-finally" and extremely rarely needs a "try-except". "try-finally" should never be used, and the only times "try-except" may be acceptable is when writing a program that deals with resources that are not under its control.
HTH.

Interesting thoughts. In my almost 40 years of Pascal, I hardly used try. Operating system calls (like file operations) have other mechanisms, division by zero should be checked in advance and avoided, etc. I only used it for some external library calls, when I was not sure what could happen.
So I made some reading, and try is strongly recommended by many experts, hence I am thinking of changing my typical set-up for raising exceptions.

It used to be that "experts" (actually formal programming theory) recommended that a function/procedure first ensure the preconditions necessary for its successful execution were present. If the preconditions were satisfied then, and only then, would the code that implements the function be executed. Formal programming theory also "recommended" that post-conditions be checked. Following that rule religiously helps enormously in writing code that is bug free and totally eliminates the need for "try-finally" constructs.
Also as said by others in this thread, try has advantages in many situations.

I would say that more, better and cleaner advantages are derived from religiously checking preconditions and ideally post-conditions, but I realize that's "old school" these days.
I'm going to give you a piece of advice that is quite likely many OOP programmers will disagree with, which is, a bug-free program never needs a "try-finally" and extremely rarely needs a "try-except". "try-finally" should never be used and the only times "try-except" may be acceptable is when writing a program that deals with resources that are not under its control.
IOW, the presence of a "try-finally" is a clear indicator that the code has logical flow deficiencies and, the presence of a "try-except" is almost always an indicator of the same.
Unfortunately, the Pascal implementation of OOP often forces the use of "try-except" because it's the only way to let the caller of some code know that a problem occurred. Lastly, keep in mind that a "try-except" is nothing more than a cross stack frame goto mechanism which is the worst kind of goto there is.
HTH.
In other words: the only reason to use "try" is to cover up your own bugs? I totally disagree.

"try-finally"s often end up hiding bugs. Not exactly a desirable feature.
For example: you use a socket to communicate with some remote computer. Do you want the socket to be closed, no matter what? Because there's probably a thread that is waiting on input. And you cannot just kill threads.

What's really surprising is that "modern" programmers seem to have totally forgotten that operating systems, compilers and lots of very complicated software have been written without "try-finally" and suddenly, according to you, apparently no such complex software can be written without that programming crutch.
*snip*

Me too.
In other words: the only reason to use "try" is to cover up your own bugs? I totally disagree.
On a tiny microcontroller, where only your own code is running in an endless loop, you would be right. In all other cases, it is more complicated.

I was just about to write something like that; using TStreams is another example (or using components that embed TStream). You can't just ignore the possibility of an exception being raised when dealing with periphery, communication parties, etc. It is just not realistic.
For example: you use a socket to communicate with some remote computer. Do you want the socket to be closed, no matter what? Because, there's probably a thread that is waiting on input. And you cannot just kill threads.
And remember: while we have a way to safely close resources with try .. finally, C++ cannot do that. We win.

There are techniques for mimicking the finally in C++ (in general it is less needed there), but yes, I constantly miss that simple FPC feature.
What's really surprising is that "modern" programmers seem to have totally forgotten that operating systems, compilers and lots of very complicated software has been written without "try-finally" and suddenly, according to you, apparently no such complex software can be written without that programming crutch.

No, I'd never forget that. I'm fully comfortable writing in plain C, without any try stuff.
"try-finally"s often end up hiding bugs. Not exactly a desirable feature.

How can try..finally hide bugs?
HTH.

First, this is something I wondered often when reading your posts with respect to exceptions: why do you associate them with OOP? They are not part of OOP; in fact, even Haskell has them, and Haskell isn't even a procedural language, let alone an OOP language.
What's really surprising is that "modern" programmers seem to have totally forgotten that operating systems, compilers and lots of very complicated software has been written without "try-finally" and suddenly, according to you, apparently no such complex software can be written without that programming crutch.

What's interesting about this is that many of the large C applications are transitioning to a modern language like C++ or Rust, with all of these neat features like exceptions. And most new applications use modern languages from the get-go.
But when it comes to writing in FPC you have to take into account the reality: the RTL and LCL abound with exceptions.

You have a point there, but it shows that the use of exceptions is forced on the programmer to deal with code whose design leaves something to be desired.
C library will never throw anything at your face.
How can try..finally hide bugs?

Usually due to either a programming mistake or, what is the same, an invalid assumption made in the code. The finally ends up hiding that.
First, this is something I wondered often when reading your posts with respect to exceptions, why do you associate them with OOP?

I associate them with OOP because there are OOP constructs, properties come to mind, where when something goes wrong there is no mechanism to return an error code to the caller. That forces the use of exceptions.
They are not part of OOP, in fact even Haskell has them and Haskell isn't even a procedural language let alone an OOP language.

They aren't formally part of OOP, but without them there are a good number of OOP programming constructs that would not be feasible. Exceptions, at least in Object Pascal, are necessary to make OOP usable.
Second, with respect to try-finally, I've been working with a lot of C code and the main reason for memory leaks in my experience is that the algorithms return to exit the function early but forget to free the allocated resources.

Memory leaks are the results of programming bugs, and "try-finally" constructs don't guarantee a program/algorithm to be bug free.
It is very easy to forget such a single line, especially if you have like 4 or 5 exit locations.

Particularly when there are multiple exit points, try-finally(s) should be avoided. The generated code is, more often than not, an atrocity.
Try-finally makes two improvements to resource management: first, it eliminates the risk of forgetting the freeing at some point in the code; second, if done consistently, it is very easy to spot memory leaks.

A programmer can "forget" (or "miscalculate") that a "try-finally" is needed, particularly when they are nested (which is quite common.) As far as memory leaks, a little self-checking in the program can catch those without any "try-finally" anywhere in the code.
This is usually done by having a dedicated return value (e.g. -1) to indicate an error, or multiple outputs (e.g. by using pointers/var or out params), and a global variable with more detailed information (e.g. errno on Linux).

A return indicating an error occurred seems to be too simple for many programmers these days. As far as global variables, the need for those is a clear indication of a design deficiency. Acceptable only in throw-away or proof of concept code.
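For illustration, a minimal Pascal sketch of the error-return style described in the quote. SafeDiv and the -1 sentinel are hypothetical names, not from any library; an out parameter stands in for the global errno-like variable:

```pascal
program ErrorCodeDemo;
{$mode objfpc}

{ Hypothetical example of the error-code style: the function result
  carries the value, an out parameter carries the error code. }
function SafeDiv(Num, Den: Integer; out ErrCode: Integer): Integer;
begin
  Result := 0;
  if Den = 0 then
    ErrCode := -1          // caller must remember to check this
  else
  begin
    ErrCode := 0;
    Result := Num div Den;
  end;
end;

var
  Value, Err: Integer;
begin
  Value := SafeDiv(10, 0, Err);
  if Err <> 0 then
    WriteLn('division failed')
  else
    WriteLn(Value);
end.
```

The weakness debated later in the thread is visible here: nothing forces the caller to check Err.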
But there is a key advantage of exceptions over error codes. Exceptions force you to act.

It's a bit "peculiar" to have to force programmers to handle errors. I really don't think that exceptions forcing a programmer to pay attention to errors is a good thing. Programmers should write code that handles errors without being forced to do it.
If an exception is not handled, it will kill your application, which guarantees that your application will never run in an erroneous state.

On the contrary. As you pointed out, it is common in the life of a program to make changes and/or additions. It's not as uncommon as it should be to find exception handlers that were added at some point in time that catch exceptions they were never meant to catch and, as a result, handled them in a way that is not the way they should be handled; but, since the program keeps running, it leaves the impression that everything is hunky-dory.
<snip> programmers should "man up" and "learn to write good code" (an attitude that many priests of the gospel of C share)

Personally, I believe programmers should "programmer up" and accept that their first responsibility is to write decent code if they want to call themselves programmers. That's the difference between a programmer and an amateur who programs as a hobby. The amateur throws code at the computer and as long as it seems to work or works most of the time, the program is done. try-finally and try-except are whipped cream to spread generously on code that is "culinarily" questionable.
It is easy to make mistakes, and the goal of language design is to minimize the chance of such errors and to mitigate the harm they do. And exceptions do this pretty well.

It is easy to make mistakes... I've noticed that too, plenty of times, in my own code. If a program has an error, the ideal result is a program crash. That way the problem is obvious and its solution required immediately. Solving the problem is the "mitigation".
On the other hand, knowing that an error will occur, you could exit the calculation and display a message box saying "Calculation xyz cannot be completed because of division by zero". This is fine because the user knows where the problem is. But is it OK to show a message box? When the unit in which this happens is reusable, I would say no, because: what if the message should be translated to other languages? What if the calculation is used by a console program which cannot display a message box, but wants a WriteLn instruction? You don't know what will be appropriate in all use cases of this unit. You have much more flexibility when you "raise" an exception, and then the user can decide in his exception handler what to do.

It's not more flexible. The routine that detects an error is often not the routine where the error should be handled. It simply returns an error code to the caller, who will float it up as necessary until it reaches the routine that is designed to handle that error or errors in general. When debugging the code, the flow is visible and obvious; when using exceptions, execution jumps all over the place. That jumping all over the place can also happen with try-finally (usually when multiple exit points exist.)
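The flexibility argument in the quote can be sketched like this. ECalcError and Calculate are illustrative names only: the reusable routine merely raises, and each caller renders the message however it likes (WriteLn here; a GUI caller could use a message box):

```pascal
program RaiseDemo;
{$mode objfpc}
uses SysUtils;

type
  ECalcError = class(Exception);  // illustrative exception class

function Calculate(Den: Double): Double;
begin
  if Den = 0 then
    raise ECalcError.Create('Calculation xyz cannot be completed because of division by zero');
  Result := 1.0 / Den;
end;

begin
  try
    WriteLn(Calculate(0));
  except
    on E: ECalcError do
      WriteLn(E.Message);  // a GUI caller could call ShowMessage instead
  end;
end.
```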
Firefox, Gimp, LLVM, KDE, QT, Webkit, Blink engine, GCC, etc. all are written in or are transitioning to C++. In fact, except for the Linux kernel, pretty much all of the largest open source projects are transitioning away from C, with the main reason being features like exceptions and OOP (and the reason why Linux hasn't switched is because Linus is a purist).

In the case of C and C++, I understand why C programmers would want to transition to C++. When I write C code, I run the compiler in C++ mode because I get much better type checking than in C mode, not to mention true types instead of the mostly meaningless C type gimmicks. That said, there is no way I'd infect my code with OOP stuff.
Arguments from popularity aren't good arguments to begin with, but in this case it doesn't even hold, because all these big projects are transitioning away from pure C
This sounds as if you are confusing "try-finally" and "try-except". Show a sample code snippet in which "try-finally" hides a bug. "try-finally" has only one purpose: to protect resources in case of an error, not error handling!

How can try..finally hide bugs?

Usually due to either a programming mistake or, what is the same, an invalid assumption made in the code. The finally ends up hiding that.
Memory leaks are the results of programming bugs and "try-finally" constructs don't guarantee a program/algorithm to be bug free.

But that is exactly what I was criticizing: this form of entitled elitism. A language should be designed in a way that enforces good and bug-free programming. Saying that programmers should just be better at writing code helps no one. If your goal is to increase code quality, you need to create your language such that it steers the programmer in the right direction.
...
It's a bit "peculiar" to have to force programmers to handle errors. I really don't think that exceptions forcing a programmer to pay attention to errors is a good thing. Programmers should write code that handle errors without being forced to do it.
...
Personally, I believe programmers should "programmer up" and accept that their first responsibility is to write decent code if they want to call themselves programmers. That's the difference between a programmer and an amateur who programs as a hobby. The amateur throws code at the computer and as long as it seems to work or works most of the time, the program is done. try-finally and try-except are whipped cream to spread generously on code that is "culinary" questionable.
Particularly when there are multiple exit points, try-finally(s) should be avoided. The generated code is, more often than not, an atrocity.

I don't care about the generated code. Most of the time I don't even look at it. The first and foremost goal (after the code working, of course) is that the high-level code is easily readable and maintainable. If that results in "bad" assembly, I don't care; as long as it works as expected/as the language manual promises and I don't run into performance issues, it can be as bad as ever.
A programmer can "forget" (or "miscalculate") that a "try-finally" is needed, particularly when they are nested (which is quite common.) As far as memory leaks, a little self-checking in the program can catch those without any "try-finally" anywhere in the code.

That's why it's considered good practice to always include it, even if you think it is not necessary. Sure, you can forget a try-finally, but the chances of forgetting try-finally (especially if you trained yourself to always use it in combination with Free) are lower than forgetting a single Free from 5 exit points. No solution is perfect; it's about minimizing the error potential.
A return indicating an error occurred seems to be too simple for many programmers these days. As far as global variables, the need for those is a clear indication of a design deficiency. Acceptable only in throw-away or proof of concept code.

A return value indicating an error is not always a great solution. First, return values can be easily ignored. As I said, errors should always be checked and the language should enforce this. Making it a return value is the weakest form of doing so. And as has been shown time and time again, people just don't check for errors. Just look at the fpsendto example I posted earlier. This stuff is everywhere.
On the contrary. As you pointed out, it is common in the life of a program to make changes and/or additions. It's not as uncommon as it should be to find exception handlers that were added at some point in time that catch exceptions they were never meant to catch and, as a result, handled them in a way that is not the way they should be handled but, since the program keeps running, it leaves the impression that everything is hunky-dory.

This is, I think, actually quite a big problem, but a problem that could be solved by the language/compiler. I like to take a look at how other languages target certain problems, and here Java is a great example.
Show a sample code snippet in which "try-finally" hides a bug? "try-finally" has only one purpose: to protect resources in case of an error, no error handling!

Here is a complete program that uses try-finally and is very misleading and quite likely completely incorrect.
But that is exactly what I was criticizing: this form of entitled elitism. A language should be designed in a way that enforces good and bug-free programming. Saying that programmers should just be better at writing code helps no one.

Honestly, I don't think it's a form of elitism. When someone decides to perform an activity, I believe they should be motivated to become good at it, ever better if possible. Therefore, I do think a programmer should become ever better at writing code. If not, they should choose something they want to get really good at. If that makes me an elitist, so be it.
If your goal is to increase the code quality, you need to create your language such that it steers the programmer in the right direction.

I completely agree with that. Where we differ in opinion is that "try-finally" doesn't accomplish that. It goes in the opposite direction.
<snip> or create memory leaks if there is no automatic mechanism supporting them. So the language fixes the problem by its design, by giving tools to handle this on the language level.

And that's how we end up with languages like Java. For throw-away stuff or prototyping something, it's perfectly fine, even handy, but when the goal is to produce a really good program, things like Java are a joke.
Good programmers can write good code with any language. But most people are not good programmers but average programmers, and the tooling should be optimized for them.

I have to admit, I'm not sure that it is possible to write good code with any language. Some languages make accomplishing that goal exceedingly difficult. As far as tools/features that enable a programmer to write better code, I don't include "try-finally" among them. On the contrary, I see that construct as encouraging not designing, because the option of using a "try-finally" is there.
If that results in "bad" assembly, I don't care; as long as it works as expected/as the language manual promises and I don't run into performance issues, it can be as bad as ever.

I believe it is possible to write code that is easy to understand and maintain and ensure the resulting assembly code is, at least, decent. That's important because if the code is lousy, it will eventually be a problem either in maintenance or in performance. While performance should not be at the top of the totem pole, it shouldn't be neglected either.
That's why it's considered good practice to always include it, even if you think it is not necessary. Sure, you can forget a try-finally, but the chances of forgetting try-finally (especially if you trained yourself to always use it in combination with Free) are lower than forgetting a single Free from 5 exit points. No solution is perfect; it's about minimizing the error potential.

Seems to me that it is much simpler and easier to simply check pre-conditions and post-conditions. That way the code isn't peppered with "try-finally", "try-except" and if statements for on-going piecemeal checks.
A return value indicating an error is not always a great solution. First, return values can be easily ignored. As I said, errors should always be checked and the language should enforce this. Making it a return value is the weakest form of doing so. And as has been shown time and time again, people just don't check for errors. Just look at the fpsendto example I posted earlier. This stuff is everywhere.

Those who make it a habit to ignore errors shouldn't be forced to deal with them; they should be encouraged to do something else they care more about.
Second, it makes the function calls much more complicated. For example, an out-of-memory error can happen everywhere dynamic memory is used.

Running out of memory isn't really an error, it's a program state. The program can simply inform the user that there is insufficient memory to do whatever it was they wanted to do.
Here is a complete program that uses try-finally and is very misleading and quite likely completely incorrect.

Using try-finally does not mean that you can assume that the finally part is error-free. I do not see why this is an argument for not using try-finally.
Using try-finally does not mean that you can assume that the finally part is error-free.

Agreed, but that's an assumption that is not rare to see in "finally" code.
I do not see why this is an argument for not to use try-finally.

I have to concede that the example I gave left something to be desired, but it was the only one that came to mind at the time. However, your example reminded me of code in a "finally" that would hide a bug in the "try" section of the code. I used your example project. I'll post the changes after I comment on the other points you made.
Run the attached demo.

I did, and it works. I never suggested "try-finally" doesn't work. Now, on to the rest of what you mentioned...
You will probably say: I can check the denominator in the calculation, set an error variable and restore the cursor if the error variable is set.

Checking the denominator is the right thing to do, and if it is zero, restore the cursor and either return an error code to the caller or possibly set the result to a predetermined value that is "convenient" whenever the denominator is zero.
But this is only a very simple example. Suppose the calculation is very complicated, requires several units, and all kinds of error can happen. There is a high chance that you'll miss one of the error conditions in this approach. Try-finally, on the other hand, is a universal mechanism regardless of where and which kind of error happened.

Honestly, I don't think that's a good argument. If the calculation is very complicated, then it should be broken into a number of simple steps. In particular, the denominator should be calculated separately so it can be checked. Using "try-finally" to justify the existence of code that should be simplified isn't what I consider a positive aspect of "try-finally".
The argument "don't use try-finally, it makes bugs in the finally block hard to find" is like "stop programming because bugs are hard to find".

Now it's a good time to post the change I made in your code which shows how "try-finally" can hide bugs.
It doesn't eliminate it. In fact, "try..finally" helps in applying the theory in practice.

So I made some reading and try is strongly recommended by many experts and hence I am thinking to change my typical set-up raising exceptions.

It used to be that "experts" (actually formal programming theory) recommended that a function/procedure first ensure the preconditions necessary for its successful execution were present. If the preconditions were satisfied then, and only then, would the code that implements the function be executed. Formal programming theory also "recommended" that post-conditions be checked. Following that rule religiously helps enormously in writing code that is bug free and totally eliminates the need for "try-finally" constructs.
Now it's a good time to post the change I made in your code which shows how "try-finally" can hide bugs.

I don't understand what you are saying here. "Original code" = my posted code? It did not use any "m"... Which information was incorrect?
I just made one small change in TForm1.Button1Click, now it reads:

In the original code, the invalid value of "m" was used, but the corrective actions done in the "finally" didn't solve the problem; on the contrary, it hid it because it wasn't expecting that kind of problem. The net effect was: the program ran, but the information it showed was obviously incorrect.
procedure TForm1.Button1Click(Sender: TObject);
var
  i: Integer;
  x: Double;
  m: HMODULE;  { added }
begin
  Screen.Cursor := crHourglass;
  try
    m := GetModuleHandle(pchar($7FFE0000));  { added }
    for i := 0 to 100 do
      x := 1.0 / sin(x);
  finally
    Screen.Cursor := crDefault;
  end;
end;
I don't understand what you are saying here. "Original code" = my posted code? It did not use any "m"... Which information was incorrect?

Not your code. This is a problem I saw in somebody else's code, which I referred to as "the original code".
In my understanding: if you don't use try-finally, the bug you introduced is still there. You can't blame try-finally for this.

Yes, it's still there, but the actions taken by the "try-finally" in the original code left the impression that everything was fine when it really wasn't. Effectively, it hid that bug because the actions taken in the "finally" part had nothing to do with the problems caused by the invalid parameter in GetModuleHandle.
I think I leave here because I cannot (and do not want to) convince you. All was said from my side.

No problem, that's fine.
And that's how we end up with languages like Java. For throw-away stuff or prototyping something, it's perfectly fine, even handy, but when the goal is to produce a really good program, things like Java are a joke.

Manual memory management is bad. When rewriting their CSS engine, Mozilla analyzed the bug reports within the old code and identified 43 security-compromising bugs in total (link (https://hacks.mozilla.org/2019/02/rewriting-a-browser-component-in-rust/)). Of those, 34 were identified as highly critical, 32 of which were related to manual memory management. Mozilla developers are generally speaking not bad developers; they are one of the few companies that have the luxury of being able to hire some of the best developers in the world. Still, even the best of the best make a lot of mistakes, and most of them are related to manual memory management. (Note that 7 errors were null pointer errors, which can also be completely avoided through language design.)
Due to the overlap between memory safety violations and security-related bugs, we can say that Rust code should result in fewer critical CVEs (Common Vulnerabilities and Exposures). However, even Rust is not foolproof. Developers still need to be aware of correctness bugs and data leakage attacks. Code review, testing, and fuzzing still remain essential for maintaining secure libraries.
Compilers can’t catch every mistake that programmers can make. However, Rust has been designed to remove the burden of memory safety from our shoulders, allowing us to focus on logical correctness and soundness instead.
*snip*

Actually, they're setjmp/longjmps, and they can (and usually do) transfer execution to very distant points in your program.
try..finally and try..except are modern tools to handle errors, although under the hood they are still if-s and jumps just like any other code. While this helps to automate different things, it also adds overhead to the code.
An immediate conclusion: do not use them if no error can occur, especially if something is executed many times (like a loop running a billion times). Later I cover the finally part, which only makes sense if there is something to do finally (typically heap memory or hardware release), while the except part can be placed higher in the calling hierarchy, reducing the overhead significantly.

It is not about reducing overhead; rather, it is for generalizing your error handling. Here comes "try..finally", which helps you do a proper cleanup in the inner calls before handling the exception.
It is about what the definition of exception is. Defining a special case in your algorithm as an exception, even if it can be properly handled with if/then is a bad practice, IMHO. You fall into the trap of using exceptions just because you can.
try..finally and try..except are if-s and jumps, so they theoretically can be used for other purposes. The general rule is that one should use them only for real exceptions, not for normal process flow. It sounds simple, but it is far from easy in real life (especially if one likes this structure). Look at the next example:

It is perfectly working, but it abuses the structure and should not be used like this. On the other hand, a similar if..raise can be OK in the retirement-home age checking. If someone is younger than 18 there, then it can be a real programming error (remember Y2K?) and needs to be handled as an exception.
procedure HireEmployee;
var
  Age: integer;
begin
  writeln('How old are you?');
  readln(Age);
  try
    { ExceptionChild and ExceptionYoung are assumed to be Exception
      descendants; raise needs an instance, hence the Create calls. }
    if Age < 14 then raise ExceptionChild.Create('child');
    if Age < 18 then raise ExceptionYoung.Create('young');
    ProcessApplication;
  except
    on E: ExceptionChild do
      writeln('You are a child, cannot work here');
    on E: ExceptionYoung do
      writeln('You are too young, you need your parents'' approval to work here');
  end;
end;
(Fun fact: after I wrote this example, I found a C++ tutorial, https://www.w3schools.com/cpp/cpp_exceptions.asp, recommending try for just this particular check...)

I hope it is only to illustrate how it can be done with the construct.
*snip*

The prevalence of the finally is because it is a common practice to do this:
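The idiom being referred to here (it appears quoted later in the thread) is the standard create/use/free pattern; a minimal self-contained sketch:

```pascal
uses
  Classes;

var
  L: TStringList;
begin
  L := TStringList.Create;
  try
    // use L
    L.Add('example');
  finally
    L.Free;  // runs whether the code above finished normally or raised
  end;
end.
```

The finally block guarantees the Free even if an exception escapes the "use L" part, which is exactly why this shape dominates the statistics.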
As per statistics finally is 70X more used than except. Still to me try..except is the more natural use (just like in C++ there is only try..catch).
*snip*

There is another consideration for acting ASAP when dealing with shared resources: in a multitasking environment it is important to minimize the use of the resource, e.g. to prevent deadlocks. The common rule is: when the next resource can't be acquired, release all those previously held.
Usecase 2 - Access to resources (hardware, network, file, etc.)
Just like in the above case, the error is beyond our control, and if the call can raise exceptions, I would try to catch it as soon as possible.
*snip*

My humble advice is to use try..finally in every case where something was done at the beginning and must be reverted at the end.
So far easy; so far I had known it. Now, back to my original post. If I handle the exception, then program execution continues after the except block and I can release all the resources I allocated (memory, hardware) or reset the ones I changed (like the Cursor). So why do I need try..finally, and what happens to the exception that caused the problem in the first place? This is what I learnt now.
Usecase 4 - No exception, but multiple exit points
If I have three algorithms to solve a problem, I can do (incorrectly):

The result is that the Cursor remains crHourglass if the problem is solved. I personally like exit, but I do remember some professional companies explicitly forbid its use: one method can have only one exit point.
procedure a;
begin
  Screen.Cursor := crHourglass;
  try
    if Solved1 then exit;
    if Solved2 then exit;
    if Solved3 then exit;
    Writeln('Cannot solve');
  except
    // meaningless code, as no exception is raised (unless in one of the
    // SolvedX, but that is beside the point now)
  end;
  Screen.Cursor := crDefault;
end;
I would allow multiple exits, but then to me exit is exit, so I would not allow exit with finally. That is far too confusing to me, but it might be my old school.
Anyway the try..finally works well in this case.
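For completeness, a sketch of the corrected version (my wording, not from the original post): the exit statements still leave the procedure early, but the finally block intercepts every exit path and resets the cursor:

```pascal
procedure a;
begin
  Screen.Cursor := crHourglass;
  try
    if Solved1 then exit;
    if Solved2 then exit;
    if Solved3 then exit;
    Writeln('Cannot solve');
  finally
    // executed on every path out of the try block:
    // each exit, normal fall-through, and any exception
    Screen.Cursor := crDefault;
  end;
end;
```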
Usecase 5 - We do not want to handle the exception

This is especially true when resource allocation was performed at earlier stages; the resources should be de-allocated before execution is transferred to the caller.
This is the second big learning for me.
Actually, it is wrong to handle exceptions that occurred in our own code. If we notice something wrong (e.g. an underage employee candidate) and can handle it, then it should be handled through normal, old-school mechanisms. (I know it is debatable, but this is my takeaway!) We should only raise an exception if we cannot handle the situation and want to inform the caller about the failure. So, typically we do not handle an exception raised in our own function. Since there is no try..end structure (it would clearly be meaningless, and this is why!), if we return to the caller after an exception, the code that releases the resource is never executed.
It is the same if the exception occurs in a function we call and we are unable or unwilling to handle it. The exception goes back to the first try.. level and stops there, but any code in between is never executed. This is when try..finally comes into the picture.

The exceptions are sent to the caller, signalling that we cannot process the job application, but the Cursor is still reset.
Screen.Cursor := crHourglass;
try
  if Age < 14 then raise ExceptionChild.Create('child');
  if Age < 18 then raise ExceptionYoung.Create('too young');
  ProcessApplication;
finally
  Screen.Cursor := crDefault;
end;
One thing still bothered me: why is it 70X more frequent to use try..finally than try..except?

Here, the key words are "it stops the whole processing". The exception mechanism (when used wisely) can spare you a lot of if/then/else by cutting out the code that is meaningless to execute after the error occurred.
try..finally is logically used a lot, especially in OOP, where objects are created on the heap and need to be released. In old-school OOP even this was not necessarily true, as objects could live on the stack and be released automatically once the function returned. With classes this changed: instances must be released explicitly. As most components we use today are classes, memory leak prevention becomes a priority.
But why are exceptions not handled close to the error, i.e. many, many times in our code? One answer is given above: if it can be solved, it is not an exception. The other answer is that once an exception is handled, from that moment on it does not exist and the program runs happily (i.e. unaware) until it crashes. So, it is better to let the exception rise as high as the point where it can really be handled efficiently. Imagine something like:
procedure a;
begin
  try
    c := a / b;
  except
    writeln('Div/0');
  end;
end;

procedure b;
begin
  try
    c := a / b;
  except
    writeln('Div/0');
  end;
end;

// and so on

procedure z;
begin
  try
    c := a / b;
  except
    writeln('Div/0');
  end;
end;

procedure All;
begin
  a;
  b;
  // and so on
  z;
end;
vs.

Not only is the first much longer, having all the try..except blocks, but All will also call a, b, .. z even if the first one fails, because a hides its error. Maybe later the program creates a much bigger problem (e.g. DB corruption). The second approach stops the whole processing if any of a, b, .. z fails.
procedure a;
begin
  c := a / b;
end;

procedure b;
begin
  c := a / b;
end;

// and so on

procedure z;
begin
  c := a / b;
end;

procedure All;
begin
  try
    a;
    b;
    // and so on
    z;
  except
    writeln('Div/0');
  end;
end;
These two explain why try..finally is frequent and try..except is rare giving a surprising ratio.
And my last wondering: whether try..except..finally would be needed or not.

I'm not sure I fully understand that but maybe, if you accept that try..finally is simply a mechanism for a proper clean-up and try..except/raise is another, unrelated one for dealing with exceptions, you'll get a clearer picture.
Well, as said, most exceptions are not handled, so try..finally is enough. If the exceptions are handled, the final activities can go after the try..except..end block. So, the only times when both except and finally can be necessary are when we want to handle exceptions and also have multiple non-exception exit points, or when we only want to handle some of the exceptions, but not all.
The first is clearly a bad programming practice to me. I already mentioned the danger of using exit, but if we use it and also want to use exception handling, it is very important to know whether the final activity is needed in all cases, or only if there is an early exit or only if there is an exception. A combined try..except..finally would be unclear not talking about the possibility of allowing try..except..finally and try..finally..except. I think it would confuse everyone. So, I agree that even if one wants to use exit and exception handling at the same time, using of nested try.. blocks is the right approach.
The second is also a bit controversial to me. It is difficult to imagine a situation where I handle some of the exceptions raised by the functions I call, but also either let some exceptions pass by me unhandled or raise my own ones, and at the same time still want a finally block. If there is such a rare case, I would agree to use two nested try blocks, making it clear what is executed when and in what sequence.
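For reference, that rare case with two nested try blocks (the shape quoted at the start of the thread; ESomeKnownError and HandleIt are placeholder names of my own):

```pascal
procedure a;
begin
  Screen.Cursor := crHourglass;
  try
    try
      // long algorithm...
    except
      on E: ESomeKnownError do
        HandleIt;  // handle what we know how to handle...
      // ...any other exception type propagates to the caller of a
    end;
  finally
    Screen.Cursor := crDefault;  // runs even while an exception propagates
  end;
end;
```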
What's really surprising is that "modern" programmers seem to have totally forgotten that operating systems, compilers and lots of very complicated software has been written without "try-finally" and suddenly, according to you, apparently no such complex software can be written without that programming crutch.
Manual memory management is bad.

I was going to click the "withdraw from thread" button MarkMLl mentioned but, I cannot let this one go.
That's it. I'm done with this. goto has been renamed "try-except"; after that Kafkaesque transformation, it has gone from ostracized to esteemed citizen.
Please don't fall into the trap of believing that I am terribly dogmatical about [the goto statement]. I have the uncomfortable feeling that others are making a religion out of it, as if the conceptual problems of programming could be solved by a single trick, by a simple form of coding discipline!
Hardly: it's got the social standing of somebody who washes corpses for a living.

OOP programmers are basically forced to use try-except because it seems the favored way in OOP to report an error is to raise an exception, which has to be caught by an exception handler (try/except).
Everybody- without exception- says "do not use this unless you really have to". Virtually all documentation cautions against using raise (or whatever a particular environment calls it) for routine control flow.
Everybody - without exception - says "do not use this unless you really have to".

It is simply a sign of incompetence when used indiscriminately.
I don't see why that should be anything to do with OOP, /except/ for the case where a constructor returns an instance (and even then the caller could quite easily check whether it's being fed a null pointer).

It has to do with OOP because there are constructions in OOP that don't provide a way for the programmer to return an error code (e.g. as a function result), forcing the programmer to raise an exception to report any error.
It has to do with OOP because there are constructions in OOP that don't provide a way for the programmer to return an error code (e.g, as a function result) forcing the programmer to raise an exception to report any error.
That said, I am under the impression that the above applies mostly, if not exclusively, to the Pascal OOP implementation.

What makes you think that? The paradigm is the same across languages and certainly not just Pascal.
Pascal has properties. If something goes wrong in a setter, I am under the impression that the only option to notify the caller of a problem in a setter is to raise an exception.
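A minimal sketch of the situation being described (the class and names are invented for illustration): the assignment syntax gives the setter no function result, so raising is the only error channel left to tell the caller the value was rejected:

```pascal
uses
  SysUtils;

type
  EInvalidAge = class(Exception);

  TEmployee = class
  private
    FAge: Integer;
    procedure SetAge(AValue: Integer);
  public
    property Age: Integer read FAge write SetAge;
  end;

procedure TEmployee.SetAge(AValue: Integer);
begin
  // a setter is a procedure: no result value and no out parameter
  // visible at the call site, so an exception is the only error channel
  if AValue < 18 then
    raise EInvalidAge.CreateFmt('%d is below the minimum age', [AValue]);
  FAge := AValue;
end;

// at the call site the error surfaces out of a plain assignment:
//   Emp.Age := 15;  // raises EInvalidAge
```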
@MarkMLI, @Thaddy
Pascal has properties. If something goes wrong in a setter, I am under the impression that the only option to notify the caller of a problem in a setter is to raise an exception.
I did not write up *anything* about properties. Just to stand corrected, >:D

@MarkMLl, @Thaddy
Pascal has properties. If something goes wrong in a setter, I am under the impression that the only option to notify the caller of a problem in a setter is to raise an exception.
Is all that fuss actually about properties? How can they be confused with OOP?
Manual memory management is not bad; it's actually quite good when it's well designed. What's bad are programmers at manual memory management. That's where the problem is, and slapping memory-management crutches into compilers causes programmers never to become any better at it.

Again this sentiment that programmers should just get better at programming. If even the programmers at Mozilla, a company with highly competitive salaries and a really good reputation, which can basically hand-pick the best of the best developers in their field, run into these problems, simply telling the programmers to get better ain't gonna cut it.
Indeed. A jump is a jump, however you look at it. And that seems to be no exception, pun intended :P
People should get more understanding of the ASM output the compiler generates: A jump is a jump is a GOTO and a jump... :o. A GOTO is a jump..... <sigh, boring>
Nobody did, at least until reply #46. :o

I did not write up *anything* about properties. Just to stand corrected, >:D

@MarkMLl, @Thaddy
Pascal has properties. If something goes wrong in a setter, I am under the impression that the only option to notify the caller of a problem in a setter is to raise an exception.
Is all that fuss actually about properties? How can they be confused with OOP?
Nobody did, at least until reply #46. :o
That said, I am under the impression that the above applies mostly, if not exclusively, to the Pascal OOP implementation.

I may be wrong again but, it seems to me that OOP programmers in this forum are using Pascal's implementation of OOP, including properties, which means a lot more exceptions than are necessary.
Again this sentiment that programmers should just get better at programming.

I believe I share that sentiment with a great number of people because when a company is looking for a programmer, they very commonly ask for experience (I "suspect" that's because they want a programmer who has - hopefully - spent some time getting better at programming).
If even the programmers at Mozilla, a company with highly competitive salaries and a really good reputation, which can basically hand-pick the best of the best developers in their field, run into these problems, simply telling the programmers to get better ain't gonna cut it.

First, my hat off to the programmers at Mozilla. Writing something like Firefox is no picnic. It definitely takes a lot of dedication and talent. That said, it doesn't mean they are always right. They are not; they are human, they make mistakes and they are also influenced by factors that have nothing to do with programming.
Automatic memory management is needed because it has been shown time and time again that even the best programmers aren't able to make manual memory management work bug free.

I'd normally say that I would take such a statement with a grain of salt but, in this case, I think I'm going to have to work out a deal with the state of Utah.
Do you have the same sentiment everywhere? What, you can't run 100km at 130km/h? Well, you shouldn't use a car, you should just try to run faster. It simply doesn't work this way. When we hit a problem, we simply create tools that solve that problem for us. That's what tools are for. That's what makes us humans so successful.

I have the same sentiment about anything someone chooses to do as a profession. IOW, I don't expect Mary Housewife to become a race car driver because she drives her car to the grocery store. That said, I still expect her not to run into people, trees, fire hydrants and, I also expect her to stop at red lights (among other things.) Maybe I'm a demanding kind of guy but, if she cannot consistently accomplish those things, she will probably (and hopefully) not be allowed to drive a car (something to be thankful for).
Maybe I'm wrong but, it seems to me that properties are part of the OOP paradigm in Pascal, therefore, if raising an exception from a property is necessary to indicate an error occurred to the caller, that's a characteristic of OOP and, Pascal's implementation of OOP.
GOTO is just a jump. Get real y'all. This annoys me a bit (a lot).

GoTo is not quite just a jump. It is a conditional/unconditional transfer from one part of the program to another. Every programmer should understand that GoTo is an integral part of programming. A good programmer can control a program with GoTo quite well. But beginners had better not use it (though they should know it).
Where have you seen a program that only worked on procedure calls? :)
I'm sorry, but while I don't disagree with your specific points, the logic that you use to get to them is sufficiently flaky that I really do have to comment on it.

I'll quote what I said again:
It has to do with OOP because there are constructions in OOP that don't provide a way for the programmer to return an error code (e.g. as a function result), forcing the programmer to raise an exception to report any error.

I made it a point to acknowledge that this may be a problem specific to the Pascal OOP implementation.
That said, I am under the impression that the above applies mostly, if not exclusively, to the Pascal OOP implementation.
"properties are part of the OOP paradigm in Pascal": OK, you're saying there that properties are a part of Pascal, but are implicitly admitting that other flavours of OOP do or at least could exist without them.Absolutely.
"raising an exception from a property (is) a characteristic of OOP"I don't see it that way. I think that any language that implements properties the way they are implemented in Pascal would end up forcing the use of exceptions on programmers in order to indicate an error condition and, since properties seem to be a feature that is only available in OOP languages, it seems very reasonable and logical to me to consider them a part of OOP. A part that some languages implement and others don't but, still a part of OOP.
No, that's wrong. You've already said that properties are part of Pascal and that other flavours of OOP might not use them, so your statements are contradictory.
My recollection is that you made the blanket statement somewhat earlier in the thread that exceptions were part of OOP, therefore if exceptions are a bad idea then so is OOP. Just about everybody agrees that overuse of exceptions is a problem, which is an incentive to avoid their use where possible, which is one of the points being made to OP when he asks about try/finally, which is somewhat better behaved.

Exceptions are definitely not a part of OOP. Exceptions are available in plain C, and there is no OOP there, but OOP and, very specifically, the Pascal implementation of OOP makes exceptions _necessary_ in order to make some of its features (e.g. setters) useful. That is not the case in C. It is rare to need an exception handler in C because there is no C feature that requires using them - unlike setters in OOP Pascal.
But OOP in the general case is /not/ tainted by that since exceptions are an Object Pascal thing.

Honestly, at this point, I think it's difficult to define what is and isn't part of OOP but, as previously stated, properties are part of OOP Pascal, therefore any negative side effects they may have on programming reflect on OOP generally and very specifically on its Pascal implementation.
So in summary: exceptions are a Bad Thing in excess, but their use is demanded by heavy use of active getters and setters in the LCL and elsewhere. That is, arguably, a poor design choice in the LCL, and that could possibly be argued as a point against Object Pascal, but since Object Pascal is only one implementation of the OOP paradigm it can hardly be argued as outright condemnation of OOP itself.

Is it really simply a poor design choice?... It's definitely a poor design choice, but it is also one that is encouraged by the language in various ways, the most common one being the apparently limitless indulgence in syntactic sugar that many OOP programmers seem to be addicted to.
properties are part of OOP Pascal, therefore any negative side effects they may have on programming reflects on OOP generally and very specifically on its Pascal implementation.
Anything that uses strict functional programming. In fact proponents go to a great deal of trouble to avoid conditionals, including what we'd consider to be grossly-excessive overuse of transcendental functions, since they have traditionally argued that that makes programs more amenable to parallelisation (i.e. strict SIMD).

My apologies, but this applies to any programming! Whether functional or OOP. Don't forget that any OOP can be replaced by functional programming without much trouble, while the reverse is not fully possible (except by emulation). Why does everyone forget that functional programming is the foundation, and OOP is just one of its rather good offshoots (which is often bolted on both where it is needed and where it is not)?
Now I grant that that answer isn't strictly relevant to Object Pascal, but you made an unqualified statement and asked for a response. However there's a lot of abstractions in OP that you don't have in strict functional programming (and a lot in OP that you don't get in strict procedural programming, and so on) so I would urge you to take on board the points- /valid/ points IMO- made by Thaddy and others.
And the next time you have problems using try/finally etc. I suggest asking for help sorting out what's wrong.
MarkMLl
However it's arguable whether that taints any particular language implementation (FPC etc.) and it definitely doesn't taint OOP in its entirety.IMO, since Pascal without OOP cannot have properties, it taints the Pascal implementation of OOP.
IMO, since Pascal without OOP cannot have properties, it taints the Pascal implementation of OOP.

I don't quite understand. Why can't it? You can do everything yourself; it's a little different, but everything is possible.
Your opinion's fine, your logic isn't :-)

I see. Logic and, just simple observation, sees that there is a worm (properties) in the apple (OOP.) I'll pass on the apple pie. :-)
I don't quite understand. Why can't it? You can do everything yourself, it's a little different, but everything is possible.

It wouldn't have the same syntax. Procedurally, it would be a function with a parameter (hopefully not a procedure that has to raise an exception to notify the caller that an error occurred.)
I see. Logic and, just simple observation, sees that there is a worm (properties) in the apple (OOP.) I'll pass on the apple pie. :-)
Rubbish. Your statement - that because there's a flaw in the definition or implementation of Object Pascal then OOP is tainted - would imply that because you saw a worm in one apple you'd never eat anything again.

Now, that's a creative piece of "logic" there. The logical conclusion is: worms eat apples and I don't eat apples with worms.
A unit can't be instantiated, therefore it's not possible to argue that the presence of units implies OOP.

That's true but, units existed in Pascal before any OOP features were added to it. That factually proves that units are not associated with OOP.
Therefore the mere presence of properties doesn't imply OOP.

Could you please point to a Pascal compiler that supports properties but not OOP? Just in case: not limited to Windows, any Pascal compiler ever written for any architecture is acceptable as an example.
OK, example to support that: a unit can have properties which allow variable-like access to a getter/setter or to a global variable.

From your words I conclude that you have "got lost" in OOP? Did you not read my post above? Functional programming is the foundation. On that foundation you can build everything! Absolutely everything! You could even rebuild OOP itself, with all its properties and all its subtleties. OOP is based on functional programming, and in no way the other way around.
A unit can't be instantiated, therefore it's not possible to argue that the presence of units implies OOP.
Therefore the mere presence of properties doesn't imply OOP.
MarkMLl
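If I read the example right, it refers to FPC's global (unit-level) properties; a sketch of what I believe the syntax looks like in {$mode objfpc} (unit and identifier names are my own, and I have not checked every FPC version):

```pascal
{$mode objfpc}
unit Tunable;

interface

function GetThreshold: Integer;
procedure SetThreshold(AValue: Integer);

// a unit-level property: variable-like access through a getter/setter,
// with no class involved and nothing to instantiate
property Threshold: Integer read GetThreshold write SetThreshold;

implementation

var
  FThreshold: Integer;

function GetThreshold: Integer;
begin
  Result := FThreshold;
end;

procedure SetThreshold(AValue: Integer);
begin
  FThreshold := AValue;
end;

end.
```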
According to you, I conclude that you are "lost" in the OOP? Did you not initially read my post above?
@Seenkao,

Thank you. Yes, you are right, here I am once again "in my own thoughts" and wandered off in a different direction. I mean procedural programming; I confuse the two from time to time. Although for the most part it is all interconnected.
Can you clarify please, what do you mean under the term "functional programming"? Lambda calculus? Or actually, you mean "procedural programming" which in turn is derivation of "imperative programming"?
(functional programming ≠ imperative programming)
Yes, and I'm agreeing with you you twp!

What is "TWP"? My translator renders it as a not very nice word... ))) (but if that is the correct translation, then good luck to you MarkMLl)
Now, that's a creative piece of "logic" there. The logical conclusion is, worms eat apples and I don't eat apples with worms.
That's true but, units existed in Pascal before any OOP features were added to it. That factually proves that units are not associated with OOP.
Could you please point to a Pascal compiler that supports properties but not OOP ? just in case, not limited to Windows, any Pascal compiler ever written for any architecture is acceptable as an example.
In that case I will conclude that most of the discussion is provoked by misunderstanding of what is written, either by confusion of terms or by use of automatic translation. I'd say that I know Russian fairly well (not my native tongue, though) but still can't get your point.

@Seenkao, Thank you. Yes, you are right, here I am once again "in my own thoughts" and wandered off in a different direction. I mean procedural programming; I confuse the two from time to time. Although for the most part it is all interconnected.
Can you clarify please, what do you mean under the term "functional programming"? Lambda calculus? Or actually, you mean "procedural programming" which in turn is derivation of "imperative programming"?
(functional programming ≠ imperative programming)
yandex translate:
Thanks. Yes, you are right, here I am again "in my thoughts", and I went to a slightly different steppe. I'm talking about procedural programming, I get confused from time to time. Although for the most part all this is interconnected.
*snip*
Then the 440bx standpoint that is actually revealed quite late into the discussion - exception handling is bad, because it is the only way to return an error when assigning to a property, which is part of the Pascal implementation of OOP. Because of that the developer is forced to use it, and that is presumably a bad thing.

I'm not sure why you say "quite late" given that I stated my view on the matter in my very first post in this thread.
@440bx,

Exceptions are great when a program has to deal with unpredictable behavior such as accessing some resource that is _not_ under its control. That's what exceptions are for. Using exceptions to simply inform the caller that something unexpected happened is a gross misuse of exceptions. A prime example is their use in setters to inform the caller an error or unexpected condition took place.
That is the way I'm getting it, correct me if I'm wrong.
As late as of reply #46. There is your first mentioning of "properties".

Then the 440bx standpoint that is actually revealed quite late into the discussion - exception handling is bad, because it is the only way to return an error when assigning to a property, which is part of the Pascal implementation of OOP. Because of that the developer is forced to use it, and that is presumably a bad thing.

I'm not sure why you say "quite late" given that I stated my view on the matter in my very first post in this thread.
Why do you consider that wrong?

@440bx,

Exceptions are great when a program has to deal with unpredictable behavior such as accessing some resource that is _not_ under its control. That's what exceptions are for. Using exceptions to simply inform the caller that something unexpected happened is a gross misuse of exceptions. A prime example is their use in setters to inform the caller an error or unexpected condition took place.
That is the way I'm getting it, correct me if I'm wrong.
*snip*

The prevalence of the finally is because it is a common practice to do this:
As per statistics finally is 70X more used than except. Still to me try..except is the more natural use (just like in C++ there is only try..catch).
L := TStringList.Create;
try
  // use L
finally
  L.Free;
end;
As late as of reply #46. There is your first mentioning of "properties".

I mentioned the problem in my very first post. I mentioned properties as an example of a construct that causes the problem in post #46.
Why do you consider that wrong?

Because cross stack frame gotos should not be used to handle run-of-the-mill errors. Cross stack frame gotos (exceptions) can cause a myriad of problems; they should only be used when they are the only way to handle an error (for instance, access violations caused by an attempt to read memory that isn't managed by the program).
because cross stack frame gotos should not be used to handle run of the mill errors. Cross stack frame gotos (exceptions) can cause a myriad of problems, they should be used when they are the only way to handle an error (for instance, access violations caused by an attempt to read memory that isn't managed by the program)
As late as of reply #46. There is your first mentioning of "properties".

I mentioned the problem in my very first post. I mentioned properties as an example of a construct that causes the problem in post #46.
Unfortunately, the Pascal implementation of OOP often forces the use of "try-except" because it's the only way to let the caller of some code know that a problem occurred. Lastly, keep in mind that a "try-except" is nothing more than a cross stack frame goto mechanism which is the worst kind of goto there is.
As late as of reply #46. There is your first mentioning of "properties".

I mentioned the problem in my very first post. I mentioned properties as an example of a construct that causes the problem in post #46.
Here I can see recommendations, not problems.

My thought would be that either I can handle the exception in except and then I do not need finally, the code is OK without it; OR I do not know what went wrong (an exception I cannot handle), but then it might be too risky to continue with any code after that, even in finally. Now I see the benefit of catching some exceptions, but re-raising some others and still doing some final activity. This, however, would make the language improvement request seen elsewhere, to allow try..except..finally..end in one block, make a lot of sense, so one would not need two nested try-s.

I'm going to give you a piece of advice that quite likely many OOP programmers will disagree with, which is: a bug-free program never needs a "try-finally" and extremely rarely needs a "try-except". "try-finally" should never be used, and the only times "try-except" may be acceptable is when writing a program that deals with resources that are not under its control.
IOW, the presence of a "try-finally" is a clear indicator that the code has logical flow deficiencies and, the presence of a "try-except" is almost always an indicator of the same.

Unsubstantiated statements.
Unfortunately, the Pascal implementation of OOP often forces the use of "try-except" because it's the only way to let the caller of some code know that a problem occurred.

Here all readers should guess that you're actually talking about properties.
Lastly, keep in mind that a "try-except" is nothing more than a cross stack frame goto mechanism, which is the worst kind of goto there is.

It is actually more than a goto; it is a language construct with its own defined semantics. Otherwise, we could say of every control structure: it is nothing more than a goto. What about threads? Aren't they worse than "try-except"?
Isn't it an error to try accessing e.g. TStrings.Items[I] with I beyond the current size of the collection?

Why do you consider that wrong?

Because cross stack frame gotos should not be used to handle run-of-the-mill errors. Cross stack frame gotos (exceptions) can cause a myriad of problems; they should only be used when they are the only way to handle an error (for instance, access violations caused by an attempt to read memory that isn't managed by the program).
*snip*

I agree that the implementation of such features could be treacherous. But we're discussing their usage, not implementation.
As you pointed out in your #9... agreed, and the cross-frame issue is also one I tried to mention when I first posted in this thread (#35): it's what distinguishes exceptions from try/finally. And the problems are specifically what Warfley found himself wrestling with when he was working on coroutines a few weeks ago.
Neither OOP itself, nor the way the fpc compiler handles it, introduce any such force.
Neither OOP itself, nor the way the fpc compiler handles it, demand that code that can return an error must go into a property.
Such code can (within OOP) be written in other ways, that can return the error as result (or out param, or via a LastError call). All at the choice of the author of any code (including frameworks).
Yet, if "Pascal implementation" refers to the rtl/fcl/lcl, then yes: the rtl/fcl/lcl introduce such a "possibility" (see below; not always a "force") by introducing classes that contain properties that raise exceptions. But also by introducing non-OOP functions that raise exceptions.
Quote: "But I really don't like the bad argument that condemns OOP because of this one specific problem in the way that people tend to use this one specific language implementation."

I don't condemn OOP just because of that. That's just one (1) of the many reasons I strongly dislike OOP. The reason I'm elaborating on exceptions is that it's closely related to the initial topic of the thread.
Quote: "1) The "Pascal implementation" of non-OOP also forces the user (e.g. StrToInt)."

Yes, that is true, but it would be very easy to provide an implementation of StrToInt that didn't raise an exception in case of a conversion error. The same cannot be said about setters.
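As a side note, such an implementation already exists: SysUtils provides TryStrToInt, which reports a conversion failure through its Boolean result instead of raising EConvertError. A small illustrative sketch:

```pascal
program TryConvertDemo;
{$mode objfpc}
uses
  SysUtils;

var
  Value: Integer;
begin
  // StrToInt('abc') would raise EConvertError;
  // TryStrToInt simply returns False instead.
  if TryStrToInt('123', Value) then
    WriteLn('ok: ', Value)
  else
    WriteLn('not a number');

  if not TryStrToInt('abc', Value) then
    WriteLn('conversion failed, no exception raised');
end.
```

So on the non-OOP side the caller genuinely has the choice between the raising and the non-raising variant.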
Quote: "2) I am assuming that in the quoted text "Pascal implementation" refers to the rtl/fcl/lcl/..."

Actually, it refers to how the OOP portion is designed and implemented. That would include the RTL, but the implementation of the LCL (for instance) is, to a significant extent, a result of how OOP is implemented in Pascal.
Quote: "Neither OOP itself, nor the way the fpc compiler handles it, introduce any such force."

That statement is valid only if properties are not considered part of OOP, which is something some participants in this thread claim. A claim that I consider "unconvincing".
Quote: "Neither OOP itself, nor the way the fpc compiler handles it, demand that code that can return an error must go into a property."

Maybe so, but when a programmer uses properties and encounters an error condition in a setter, there doesn't seem to be any alternative to raising an exception.
Quote: "Such code can (within OOP) be written in other ways, that can return the error as result (or out param, or via a LastError call). All at the choice of the author of any code (including frameworks)."

I can see that, but then it wouldn't be a property anymore. What I'm saying is: the language provides properties, programmers use them because they are available and, as a result, they have to use exceptions. IOW, I don't see Pascal programmers making any effort to avoid setters and the consequent need for exceptions.
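To make the two routes concrete, here is a sketch (the names TAccount, TrySetBalance etc. are made up for illustration, not taken from any framework): the same validation either raises from a property setter, or comes back as a Boolean from a method:

```pascal
program SetterDemo;
{$mode objfpc}
uses
  SysUtils;

type
  TAccount = class
  private
    FBalance: Currency;
    procedure SetBalance(AValue: Currency);
  public
    // Property route: the setter's only way to report an error is an exception.
    property Balance: Currency read FBalance write SetBalance;
    // Method route: the error comes back as a Boolean, no exception needed.
    function TrySetBalance(AValue: Currency): Boolean;
  end;

procedure TAccount.SetBalance(AValue: Currency);
begin
  if AValue < 0 then
    raise Exception.Create('balance cannot be negative');
  FBalance := AValue;
end;

function TAccount.TrySetBalance(AValue: Currency): Boolean;
begin
  Result := AValue >= 0;
  if Result then
    FBalance := AValue;
end;

var
  Acc: TAccount;
begin
  Acc := TAccount.Create;
  try
    if not Acc.TrySetBalance(-5) then
      WriteLn('rejected without an exception');
    Acc.Balance := 10; // fine; Acc.Balance := -5 would raise instead
    WriteLn('balance: ', CurrToStr(Acc.Balance));
  finally
    Acc.Free;
  end;
end.
```

The method route keeps the assignment syntax less convenient but lets the caller handle the error locally, without any try-except.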
Quote: "- Without OOP, instead of TList you use arrays (or similar constructs). For those you need to check bounds in advance. For TList you can check bounds in advance too, avoiding the need for "try except" blocks, since you ensure no exception will be raised. So in this case the rtl/fcl/lcl still do NOT (always or "often") force the user to use "try except"."

From the point of view of what can be done, what you've presented is very reasonable but, from what I've seen, OOP programmers would rather enclose their code in a "try-except" than do any pre-condition checking.
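The pre-condition check described above can look like this for a TStringList (an illustrative sketch; the same idea applies to TList and plain arrays):

```pascal
program BoundsDemo;
{$mode objfpc}
uses
  Classes, SysUtils;

var
  SL: TStringList;
  I: Integer;
begin
  SL := TStringList.Create;
  try
    SL.Add('one');
    I := 5;
    // Checking the bound in advance: no exception can be raised here,
    // so no try-except is needed around the access.
    if (I >= 0) and (I < SL.Count) then
      WriteLn(SL[I])
    else
      WriteLn('index ', I, ' out of range, handled without exceptions');
  finally
    SL.Free;
  end;
end.
```

With the check in place, EStringListError simply cannot occur for this access, so the exception machinery never comes into play.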
Quote: "Some classes may not offer advanced checks to avoid exceptions. But then, neither does StrToInt."

It seems to me that the better solution would be to have implementations that don't depend on exceptions to return errors.
Quote: "The user can then choose not to use those classes at all (same as StrToInt)."
Quote: "2) "often forces": well, actually no. Not forces, merely offers. In many cases alternative options are given. But sometimes, maybe yes: some parts of the framework force the use of exceptions IF the user wants to use that part of the framework (but again, OOP and non-OOP)."

What good is it to have a framework if some of it has to be avoided?
Quote: "Here all readers should guess that you're actually talking about properties."

I'm definitely guilty of expecting the readers to know Object Pascal and be aware of properties, among other things. I guess my expectations are misplaced.
Quote: "Isn't it an error to try accessing e.g. TStrings.Items[I] with I beyond the current size of the collection?"

I've never used TStrings; I'm going to guess that it is an error. Are you using that as an example to justify the use of exceptions?
Quote: "What kind (myriad) of problems can the use of "try-except" cause? Please specify."

Yes, there really are a myriad, but one of them is already enough to use them only when strictly necessary: it is a cross stack frame goto. A goto that does not cross stack frames is already undesirable; one that crosses stack frames is orders of magnitude more undesirable and, you should know why.
Quote: "What good is it to have a framework if some of it has to be avoided?"

What good is it that there is a StrToInt, if you have to avoid it?
Quote: "*snip*"

Yes.

Quote: "Here all readers should guess that you're actually talking about properties."
Quote: "I'm definitely guilty of expecting the readers to know Object Pascal and be aware of properties, among other things. I guess my expectations are misplaced."
Quote: "Isn't it an error to try accessing e.g. TStrings.Items[I] with I beyond the current size of the collection?"
Quote: "I've never used TStrings; I'm going to guess that it is an error. Are you using that as an example to justify the use of exceptions?"
Quote: "What kind (myriad) of problems can the use of "try-except" cause? Please specify."
Quote: "Yes, there really are a myriad, but one of them is already enough to use them only when strictly necessary: it is a cross stack frame goto. A goto that does not cross stack frames is already undesirable; one that crosses stack frames is orders of magnitude more undesirable and, you should know why."

Should I know why? No, I shouldn't. I am a novice; please explain why.
Quote: "You imply (falsely) that properties must be used for everything."

I don't believe I ever implied that.
Quote: "Code that needs exceptions can be implemented as a method, a "function of object" that can return the error. The option of having properties is not a mandate to use nothing else but properties."

If the code isn't dealing with resources it does not manage, then it shouldn't need exceptions.
Quote: "The existence of code throwing exceptions (including in properties) is not mandated by OOP, nor by the compiler, nor by the existence of properties. It is a choice."

It looks like it's a "choice" a bit more common than it should be.
Quote: "Only due to the use in the framework, the choice becomes limited (for both OOP and non-OOP)."

Isn't limiting choices a way of forcing the remaining choices?
Quote: "What good is it to have a framework if some of it has to be avoided?"
Quote: "What good is it that there is a StrToInt, if you have to avoid it?"

Good question.
Quote: "Mind, it does not matter if it is easy to avoid or not. The point is that if it is to be avoided, then why is it there at all."

Because the functionality is useful; unfortunately, it reports errors using an exception mechanism.
Quote: "Anyway, you get the last word, as I do not have anything to offer on the original thread. So if you wish, tear my statement apart."

I'll make an exception and let you have the last word. This is an Object Oriented Post (OOP).
Quote: "I'm definitely guilty of expecting the skeptics of OOP/exceptions to know Object Pascal and be aware of collections, among other things. I guess my expectations are misplaced."

Considering that collections such as TStrings are not part of the language definition, knowing or not knowing about them is not a reflection on an individual's knowledge of Object Pascal.
Quote: "Should I know why? No, I shouldn't. I am a novice, please explain why."

In that case, I recommend you read the book "Oh! Pascal!" by Doug Cooper. Excellent book; it will answer that question for you and many more. Good reading for a novice.
Quote: "As you pointed out in your #9... agreed, and the cross-frame issue is also one I tried to mention when I first posted in this thread (#35): it's what distinguishes exceptions from try/finally. And the problems are specifically what Warfley found himself wrestling with when he was working on coroutines a few weeks ago."
In my understanding try..finally doesn't really handle an exception, as it re-raises it directly after executing the finally block.
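That behaviour is easy to demonstrate with a small, self-contained sketch (names are made up): the finally block runs first, and then the exception continues on to the caller's handler.

```pascal
program PropagateDemo;
{$mode objfpc}
uses
  SysUtils;

procedure Inner;
begin
  try
    raise Exception.Create('boom');
  finally
    WriteLn('finally ran'); // executes first...
  end;                      // ...then the exception keeps propagating
end;

begin
  try
    Inner;
  except
    on E: Exception do
      WriteLn('caught in caller: ', E.Message);
  end;
end.
```

So a try..finally never consumes the exception; it only guarantees the cleanup happens on the way out.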
Quote: "You're arguing that exception handling is bad because it is implied by OOP, which in turn is bad because everything can be written procedurally and without objects? Did I get it right?"

No! No! And once again: no! )))
Quote: "Then the 440bx standpoint, which is actually revealed quite late into the discussion: exception handling is bad because it is the only way to return an error when assigning to a property, which is part of the Pascal implementation of OOP. Because of that the developer is forced to use it, and that is presumably a bad thing."

In essence, I have already answered this. It does not depend on which kind of programming we use, procedural or OOP; it depends on what we do in the code. The final result (as I understand it) simply cannot be handled this way. The mechanism was created for attempting a call to a procedure/function or some code, which may raise an exception. A result cannot raise an exception: the code has already executed, and we either already got the exception or the code simply ran through. )))
So, just concentrating on file I/O: yes, test if the file is there (fingers crossed), or just try opening it and, on that rare occasion where it does not work, rely on try..finally and exceptions. My numbers indicate which is faster!
Hope you don't mind me butting in!
Davo
I want to say that I still consider this a bug in the RTL. SetJmp/LongJmp not handling the exception state transparently means that they simply cannot be used with exceptions. In C++, which also supports SetJmp/LongJmp as well as exceptions, this isn't the case, and these functions handle the exception state for you.
Lastly, I want to repeat what I wrote a little earlier. Try-finally should not only be associated with exceptions. I think this is partly because of the naming (i.e. having the "try" part), but try-finally is much more than that. In fact, even without exceptions, try-finally would be massively useful.
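For instance, even with no exception anywhere, a try..finally still guarantees cleanup across an early Exit. A minimal sketch (names are illustrative):

```pascal
program ExitDemo;
{$mode objfpc}

function FirstPositive(const A: array of Integer): Integer;
var
  I: Integer;
begin
  Result := -1;
  try
    for I := 0 to High(A) do
      if A[I] > 0 then
      begin
        Result := A[I];
        Exit; // early return: no exception involved...
      end;
  finally
    WriteLn('cleanup runs even on Exit'); // ...yet the finally block still runs
  end;
end;

begin
  WriteLn('found: ', FirstPositive([-3, 0, 7, 2]));
end.
```

This is why try..finally is best thought of as a scope-exit guarantee rather than as part of the exception machinery.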
In order to use try..except/try..finally, it is necessary to enclose the called code in them (which usually happens in OOP). In my code, I'm not checking the enclosed code, but the final result. And I can't use these constructs for verification, because there is nothing left to "try": everything is already done and there is nothing left to process. The mistake has already been made (something is my fault here; I didn't understand it initially).
Therefore, we need to look at what is happening. If a procedure/function is called, it can be handled this way: it will try to work, raise an exception and return (as if the code had not run), and then we work around the error. But if we are checking a final result, we cannot handle anything with these constructs.
Quote: "Should I know why? No, I shouldn't. I am a novice, please explain why."
Quote: "In that case, I recommend you read the book "Oh! Pascal!" by Doug Cooper. Excellent book; it will answer that question for you and many more. Good reading for a novice."

Thank you for the recommendation. It is a good book, indeed.
Quote: "There are also extraordinary circumstances in which using gotos is permissible. Most common is the 'I want to get out of here in a hurry' case. Suppose, for example, that program input is coming from punched cards or tape, and an input-checking procedure spots incorrect data. Since we know that there's no point in continuing to process input, we can issue an error message and go to the very end of the program (because it's o.k. to label an end)."

It doesn't prove your point; quite the contrary! It justifies goto in the case of an error.
(code example)
Quote: "The goto is also properly used for beating a hasty retreat from a function whose arguments are determined to be inappropriate. In these cases the desirability of graceful degradation outweighs the stigma attached to using gotos."
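Cooper's "hasty retreat" can be sketched in Free Pascal roughly like this (an illustrative sketch, not the book's original code; FPC needs the {$goto on} directive in objfpc mode):

```pascal
program HastyRetreat;
{$mode objfpc}
{$goto on}
label GiveUp; // Pascal requires labels to be declared before use

var
  Value: Integer;
begin
  Value := -1; // stands in for data read from cards or tape
  if Value < 0 then
  begin
    WriteLn('bad input, abandoning the run');
    goto GiveUp; // the hasty retreat: jump straight to the end
  end;
  WriteLn('processing ', Value);
GiveUp:
end.
```

In modern Object Pascal the same retreat would normally be written with Exit or Halt, which is exactly the point made below about the facilities Cooper's Pascal lacked.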
Quote: "I'm definitely guilty of expecting the skeptics of OOP/exceptions to know Object Pascal and be aware of collections, among other things. I guess my expectations are misplaced."
Quote: "Considering that collections such as TStrings are not part of the language definition, knowing or not knowing about them is not a reflection on an individual's knowledge of Object Pascal."

You continue insisting that the OOP implementation in Object Pascal forces the use of "try-except", based only on the language specification? How can you do that without knowing a bit of the RTL/FCL/LCL (and hence confusing OOP with the framework)? It is like saying that the sole existence of parameter-less functions in Pascal (as somebody already noted earlier) forces you to do the same.
Quote: "It doesn't prove your point, just on the contrary! It justifies goto in the case of error."

It does not in any way do that. The reason he included that _exception_ about the use of goto is that there are no "exit" and other control flow facilities in the Pascal definition he is using, and you know that.
Quote: "You continue insisting that OOP implementation in Object Pascal forces using of "try-except" based only on the language specification? How you can do that without knowing a bit of RTL/FCL/LCL (and hence confusing OOP with the framework)? It is like to say that the sole existence of parameter-less functions in Pascal (as somebody already noted this earlier) is forcing you to do the same."

IOW, no one can tell that an elephant is big and heavy unless they are a veterinarian. It's a fact that when using the Pascal implementation of OOP, particularly the Delphi and FPC implementations, the programmer will be forced to use "try-except" in many cases where it should not be necessary.
Quote: "It does not in any way do that. The reason he included that _exception_ about the use of goto is because there is no "exit" and other control flow facilities in the Pascal definition he is using and, you know that."
Please don't fall into the trap of believing that I am terribly dogmatical about [the goto statement]. I have the uncomfortable feeling that others are making a religion out of it, as if the conceptual problems of programming could be solved by a single trick, by a simple form of coding discipline!
There were hints that the conversation was leading nowhere...
At least I've learned a Welsh word that sounds amazingly like a word from my language with the same meaning.
In any event, I think this discussion has departed a long way from answering OP's question.
Quote: "That said, even this is not inherent to OOP in any shape or form. This is just how it was chosen to implement classes in Object Pascal. This could easily be changed by having the ability to set the return value of the constructor manually rather than automatically."

Although not said explicitly, it sounds as if FPC classes were Pascal OOP and comparable to C++ classes, and this comparison leads to points like stack vs. heap allocation. It is important to mention that class is not core OOP (at least in my eyes), even in Pascal. You can always use object instead of class, and that behaves much more like a C++ class: it can live on the stack, etc.
Other languages like C++, which allow for their classes to be stack allocated, don't run into that problem.
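The class-versus-object difference mentioned above can be sketched as follows (illustrative names): a class instance must be created on the heap via a constructor, while an old-style object can be a plain stack variable:

```pascal
program StackObjDemo;
{$mode objfpc}

type
  // Old-style (TP-style) object: can be a plain stack variable,
  // no Create/Free needed.
  TPointObj = object
    X, Y: Integer;
    function Sum: Integer;
  end;

  // Class: a reference type; instances live on the heap.
  TPointClass = class
    X, Y: Integer;
    function Sum: Integer;
  end;

function TPointObj.Sum: Integer;
begin
  Result := X + Y;
end;

function TPointClass.Sum: Integer;
begin
  Result := X + Y;
end;

var
  PO: TPointObj;   // stack-allocated, usable immediately
  PC: TPointClass; // only a reference; needs Create/Free
begin
  PO.X := 1; PO.Y := 2;
  WriteLn('object on the stack: ', PO.Sum);

  PC := TPointClass.Create;
  try
    PC.X := 3; PC.Y := 4;
    WriteLn('class on the heap: ', PC.Sum);
  finally
    PC.Free; // manual cleanup, typically guarded by try..finally
  end;
end.
```

Note how the heap-allocated class instance is exactly what drags try..finally into everyday code, while the stack-allocated object needs no cleanup at all.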
Quote: "In any event, I think this discussion has departed a long way from answering OP's question."
Yes, but I really enjoyed reading it. Also, I learnt a lot, as I summarized above.
Quote: "It might even be defensible to argue that with exceptions for gross error recovery and a decent coroutine implementation, SetJmp/LongJmp could be removed from the language."

I would agree, but personally I like those really academic and experimental projects which sometimes require you to get "dirty" and often require such low-level functions. So while I don't think such jumps are required in production code, I wouldn't like to see them taken away.