
Author Topic: "Computing Theory" forum board  (Read 7219 times)

Martin_fr

  • Administrator
  • Hero Member
  • *
  • Posts: 9754
  • Debugger - SynEdit - and more
    • wiki
Re: "Computing Theory" forum board
« Reply #15 on: July 21, 2020, 03:22:02 pm »
For example: does a small array have duplicates? All I found was "use brute force". That's something I could come up with myself. Is there a better way?

That is perfectly suited for "programming general".

While in theory there is a difference between discussing an implementation and discussing an algorithm, in practice - especially on a language-specific forum (Pascal) - such discussions overlap 99% of the time.
We would end up with two boards carrying the same kind of questions, or we would have to move and split topics all the time. That is a no-go.

Imho, if we are talking about a collection of "how to" articles or example algorithms and use cases, that would fit perfectly well in a (moderated) section of the wiki.
Ask your question in "programming general", then write up a summary answer on the wiki, and create an overview page linking to such answers.
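For the concrete question quoted above, a common improvement over the O(n²) brute-force pairwise comparison is to sort a copy of the array and scan adjacent elements, which is O(n log n). A minimal Free Pascal sketch (the function name and test values are my own, chosen for illustration):

```pascal
program DupCheck;
{$mode objfpc}

type
  TIntArray = array of Integer;

{ Sort a copy with insertion sort (fine for small arrays); any duplicate
  values then end up adjacent, so one linear scan finds them. }
function HasDuplicates(const A: array of Integer): Boolean;
var
  B: TIntArray;
  i, j, Tmp: Integer;
begin
  SetLength(B, Length(A));
  for i := 0 to High(A) do
    B[i] := A[i];
  // insertion sort on the copy
  for i := 1 to High(B) do
  begin
    Tmp := B[i];
    j := i - 1;
    while (j >= 0) and (B[j] > Tmp) do
    begin
      B[j + 1] := B[j];
      Dec(j);
    end;
    B[j + 1] := Tmp;
  end;
  // duplicates are now adjacent
  Result := False;
  for i := 0 to High(B) - 1 do
    if B[i] = B[i + 1] then
      Exit(True);
end;

begin
  WriteLn(HasDuplicates([3, 1, 4, 1, 5])); // TRUE
  WriteLn(HasDuplicates([2, 7, 1, 8]));    // FALSE
end.
```

For a genuinely small array the brute force is often just as fast in practice; the sorted scan starts to pay off as the array grows.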

Handoko

  • Hero Member
  • *****
  • Posts: 5122
  • My goal: build my own game engine using Lazarus
Re: "Computing Theory" forum board
« Reply #16 on: July 21, 2020, 04:19:08 pm »
Great idea. I would be happy to see "Computing Theory" in this forum.

If anyone is interested, here are a few things I found:

Spatial Hashing

Doing collision detection is easy. Usually we build an array to store the positions of the objects and test them against each other. But if the game contains many objects, we have to optimize the code for better performance. Spatial hashing can really improve performance compared to the brute-force technique. I found a page explaining spatial hashing. The examples aren't in Pascal, but the explanation is easy to understand. It's really worth reading.

https://conkerjo.wordpress.com/2009/06/13/spatial-hashing-implementation-for-fast-2d-collisions/
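The core of the idea can be boiled down to: hash each object's position to a coarse grid cell, then only test pairs that fall into the same bucket instead of all O(n²) pairs. A minimal Free Pascal sketch of just the hash step (the cell size and bucket count are arbitrary values of mine; the two prime multipliers are the ones commonly used for this hash in the literature):

```pascal
program SpatialHashDemo;
{$mode objfpc}

const
  CellSize = 64;     // assumed tuning value: roughly one object diameter
  NumBuckets = 256;  // power of two, so "and (NumBuckets - 1)" acts as modulo

{ Map a world position to a bucket index. Positions in the same grid cell
  always land in the same bucket, so collision tests only need to look at
  pairs within a bucket (plus neighbouring cells, omitted here). }
function CellHash(X, Y: Integer): Integer;
begin
  Result := (((X div CellSize) * 73856093) xor
             ((Y div CellSize) * 19349663)) and (NumBuckets - 1);
end;

begin
  WriteLn(CellHash(10, 10) = CellHash(20, 20));    // same cell: TRUE
  WriteLn(CellHash(10, 10) = CellHash(1000, 900)); // far apart: FALSE here
end.
```

A full implementation would keep a list of object indices per bucket, rebuild (or incrementally update) the buckets each frame, and also check the neighbouring cells so objects straddling a cell boundary are not missed.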

List of Algorithms

Oh my, I never knew there were so many algorithms. I wish I had more time to study them. Here is a list of all known algorithms; unfortunately, the explanations are too hard to understand for commoners like me.

https://en.wikipedia.org/wiki/List_of_algorithms

Top 10 Algorithms

This one is shorter, but it still mentions many things I didn't know.

https://www.quora.com/What-are-the-top-10-algorithms-every-software-engineer-should-know-by-heart
« Last Edit: July 21, 2020, 06:22:45 pm by Handoko »

440bx

  • Hero Member
  • *****
  • Posts: 3921
Re: "Computing Theory" forum board
« Reply #17 on: July 21, 2020, 04:39:07 pm »
Oh my, I never knew there were so many algorithms. I wish I had more time to study them. Here is a list of all known algorithms; unfortunately, the explanations are too hard to understand for commoners like me.
and that can be considered a short list.

I would suggest this: read what each algorithm is for (its purpose, what problem it solves), and read the algorithm itself too, but don't be concerned if you don't understand it (iow, don't try hard to understand it). What happens in reality is this: if you've read about an algorithm and the problem it solves, then the day you need it you'll remember reading about it and go search for it. Because you are then in a situation where you actually need to solve the problem the algorithm addresses, your chances of understanding how it works are much better than when you were just satisfying your curiosity or thirst for knowledge. The difference is that, when the need is real, your mind wants a solution to the problem you're facing, not just the satisfaction of academic curiosity.

Necessity is the mother of all understanding ;)



(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v3.2) on Windows 7 SP1 64bit.

Thaddy

  • Hero Member
  • *****
  • Posts: 14169
  • Probably until I exterminate Putin.
Re: "Computing Theory" forum board
« Reply #18 on: July 21, 2020, 05:53:39 pm »
Necessity is the mother of all understanding ;)
No, curiosity is... ::) ;D (e.g. there is no necessity to go to the Moon or Mars)
« Last Edit: July 21, 2020, 05:55:55 pm by Thaddy »
Specialize a type, not a var.

ASBzone

  • Hero Member
  • *****
  • Posts: 678
  • Automation leads to relaxation...
    • Free Console Utilities for Windows (and a few for Linux) from BrainWaveCC
Re: "Computing Theory" forum board
« Reply #19 on: July 21, 2020, 05:55:57 pm »
Necessity is the mother of all understanding ;)

Ha, ha!

Despite your hijacking of that well known proverb, your advice was spot on!
-ASB: https://www.BrainWaveCC.com/

Lazarus v2.2.7-ada7a90186 / FPC v3.2.3-706-gaadb53e72c
(Windows 64-bit install w/Win32 and Linux/Arm cross-compiles via FpcUpDeluxe on both instances)

My Systems: Windows 10/11 Pro x64 (Current)

Thaddy

  • Hero Member
  • *****
  • Posts: 14169
  • Probably until I exterminate Putin.
Re: "Computing Theory" forum board
« Reply #20 on: July 21, 2020, 06:42:28 pm »
And he is wrong...
https://www.modernagespirituality.com/not-necessity-but-curiosity-is-the-mother-of-invention/

BTW, that is not all: it was simply Plato who coined it first.
« Last Edit: July 21, 2020, 06:49:17 pm by Thaddy »
Specialize a type, not a var.

MarkMLl

  • Hero Member
  • *****
  • Posts: 6647
Re: "Computing Theory" forum board
« Reply #21 on: July 24, 2020, 06:10:55 pm »
Oh my, I never knew there were so many algorithms. I wish I had more time to study them. Here is a list of all known algorithms; unfortunately, the explanations are too hard to understand for commoners like me.
and that can be considered a short list.

If I might be permitted to witter for just a moment... this book http://web.engr.oregonstate.edu/~budd/Books/leda/ describes an attempt at a multi-paradigm language. The first three chapters are available for download, chapter 2 demonstrates the various paradigms (imperative, object-oriented, functional, logical) and then chapter 3 demonstrates (among other things) using the unification algorithm normally considered to be part of logic programming in the context of operations on a graph.

Knowing all the algorithms in the book is just one small part of it. Knowing the various linguistic paradigms and having sufficient experience to choose the appropriate language is just one small part of it. Knowing how to express a problem such that an existing well-studied algorithm can be applied is a much bigger part.

Which I suppose leads me back on topic: typical commercial and network programming, i.e. the sort of things at which Delphi and Lazarus excel, puts much more emphasis on information preparation and presentation than on "clever" internal algorithms.

MarkMLl


MT+86 & Turbo Pascal v1 on CCP/M-86, multitasking with LAN & graphics in 128Kb.
Pet hate: people who boast about the size and sophistication of their computer.
GitHub repositories: https://github.com/MarkMLl?tab=repositories

440bx

  • Hero Member
  • *****
  • Posts: 3921
Re: "Computing Theory" forum board
« Reply #22 on: July 24, 2020, 06:50:55 pm »
If I might be permitted to witter for just a moment...
Absolutely :)

this book http://web.engr.oregonstate.edu/~budd/Books/leda/ describes an attempt at a multi-paradigm language. The first three chapters are available for download, chapter 2 demonstrates the various paradigms (imperative, object-oriented, functional, logical) and then chapter 3 demonstrates (among other things) using the unification algorithm normally considered to be part of logic programming in the context of operations on a graph.
Thank you for that book reference. I admire the ability some people have to mix programming paradigms when developing a program - something I am not particularly good at. I will check it out.

Knowing the various linguistic paradigms and having sufficient experience to choose the appropriate language is just one small part of it. Knowing how to express a problem such that an existing well-studied algorithm can be applied is a much bigger part.
I remember having a hard time thinking in Prolog. It took a while to break (or at least relax) the imperative, procedural habits ingrained in the way I think about algorithms, and I managed it only with limited, basic success, but it was a good experience.

Which I suppose leads me back on topic: typical commercial and network programming, i.e. the sort of things at which Delphi and Lazarus excel, puts much more emphasis on information preparation and presentation than on "clever" internal algorithms.
I believe that knowing "clever" algorithms is good because they show different ways a problem can be solved; it expands one's horizons. I also believe that "information preparation" is the key to simple, easy-to-understand algorithms. In a way, different paradigms are just different ways of looking at the information to be managed.

That's why I think it's not crucial to understand how many of these clever algorithms work. Knowing they exist is what matters; that way, if one ever needs to solve such a problem, one knows what to look for. Paraphrasing a mathematician's adage: to understand a solution, one has to understand the problem first (a correct understanding of a problem is half its solution... something like that).
(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v3.2) on Windows 7 SP1 64bit.

MarkMLl

  • Hero Member
  • *****
  • Posts: 6647
Re: "Computing Theory" forum board
« Reply #23 on: July 24, 2020, 07:43:21 pm »
Thank you for that book reference. I admire the ability some people have to mix programming paradigms when developing a program - something I am not particularly good at. I will check it out.

The author's interesting: he's got "form" in Smalltalk and APL, amongst others.

Quote
I remember having a hard time thinking in Prolog. It took a while to break (or at least relax) the imperative, procedural habits ingrained in the way I think about algorithms, and I managed it only with limited, basic success, but it was a good experience.

I can't remember whether I've used Prolog "in anger", but I've definitely worked through a few exercises, e.g. coding this:

Code: [Select]
(*  1. The only animals in this house are cats;                         *)
(*  2. Every animal is suitable for a pet, that loves to gaze at the moon; *)
(*  3. When I detest an animal, I avoid it;                             *)
(*  4. No animals are carnivorous, unless they prowl at night;          *)
(*  5. No cat fails to kill mice;                                       *)
(*  6. No animals ever take to me, except what are in this house;       *)
(*  7. Kangaroos are not suitable for pets;                             *)
(*  8. None but carnivora kill mice;                                    *)
(*  9. I detest animals that do not take to me;                         *)
(* 10. Animals, that prowl at night, always love to gaze at the moon.   *)

in_this_house(cats) .
pet(X) :- gazes_at_moon(X) .
avoid(X) :- detest(X) .
prowls_at_night(X) :- carnivore(X) .
kills_mice(cats) .
takes_to_me(X) :- in_this_house(X) .
not_pet(kangaroos) .
carnivore(X) :- kills_mice(X) .
detest(X) :- not_take_to_me(X) .
gazes_at_moon(X) :- prowls_at_night(X) .

?- pet(X) .

which was an interesting exercise in itself, because even doing it by hand illustrated that there are two disjoint sets of rules.

I think https://www.amzi.com/AdventureInProlog/a1start.php might interest.

Quote
I believe that knowing "clever" algorithms is good because they show different ways a problem can be solved; it expands one's horizons. I also believe that "information preparation" is the key to simple, easy-to-understand algorithms. In a way, different paradigms are just different ways of looking at the information to be managed.

Yes, but at the same time a mature language can be expected to have some degree of intelligence when it comes to selecting its own algorithms. As a specific example, a significant part of APL is the fact that it can invert a matrix, which I believe is equivalent to solving a set of simultaneous equations. That is something which has been studied exhaustively for many years, and it's reasonable to expect that an APL implementation (or any competent mathematical package) will study its parameters carefully, since selecting the right algorithm has massive performance implications (Budd touches on that in his first chapter).
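To make the matrix point concrete: inverting a matrix and solving simultaneous equations are two faces of the same problem, and which algorithm is appropriate depends heavily on the matrix itself. For a 2x2 system, Cramer's rule is enough; this is a toy sketch of my own, not what a serious package would do for large or ill-conditioned systems:

```pascal
program SolveDemo;
{$mode objfpc}

{ Solve  a*X + b*Y = e,  c*X + d*Y = f  by Cramer's rule.
  Only sensible for tiny systems; larger ones call for LU decomposition
  or better - exactly the kind of algorithm choice a mature APL or
  mathematical package is expected to make for you. }
procedure Solve2(a, b, c, d, e, f: Double; out X, Y: Double);
var
  Det: Double;
begin
  Det := a * d - b * c;        // zero would mean no unique solution
  X := (e * d - b * f) / Det;
  Y := (a * f - e * c) / Det;
end;

var
  X, Y: Double;
begin
  // 2X + Y = 5 and X + 3Y = 10 has the solution X = 1, Y = 3.
  Solve2(2, 1, 1, 3, 5, 10, X, Y);
  WriteLn(X:0:1, ' ', Y:0:1); // 1.0 3.0
end.
```

The performance gulf between a naive method and the right one grows quickly with matrix size, which is why a package that inspects its parameters before choosing can win so decisively.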

Quote
That's why I think it's not crucial to understand how many of these clever algorithms work. Knowing they exist is what matters; that way, if one ever needs to solve such a problem, one knows what to look for. Paraphrasing a mathematician's adage: to understand a solution, one has to understand the problem first (a correct understanding of a problem is half its solution... something like that).

So at that point, there's less imperative for the user to know the details of the algorithms. But there's a very big imperative for him to recognise e.g. when a problem can be best-described using matrices.

My feeling is that comparatively large languages such as Leda are on the wrong track. Far more important is for a core language to have an extensible syntax such that it can express idioms in a way that appear familiar to practitioners used to a range of paradigms: something that very few are up to. It's easy enough to express (say) the semantics of a set of PROLOG rules using Pascal or C++ syntax, but it's not really possible to use syntax which is sufficiently PROLOG-like to make somebody who was good at using PROLOG to solve problems feel at home.

MarkMLl
MT+86 & Turbo Pascal v1 on CCP/M-86, multitasking with LAN & graphics in 128Kb.
Pet hate: people who boast about the size and sophistication of their computer.
GitHub repositories: https://github.com/MarkMLl?tab=repositories

440bx

  • Hero Member
  • *****
  • Posts: 3921
Re: "Computing Theory" forum board
« Reply #24 on: July 25, 2020, 05:59:59 am »
So at that point, there's less imperative for the user to know the details of the algorithms. But there's a very big imperative for him to recognise e.g. when a problem can be best-described using matrices.
That makes me think of SQL. SQL makes it really easy for just about anyone to manipulate a database, but those without any knowledge of what may be going on inside (the indexes and algorithms used) can cause some real problems (performance problems being the most common).

My feeling is that comparatively large languages such as Leda are on the wrong track.
I definitely tend to agree, but I have mixed feelings about that. It's nice to have something that operates at a "descriptive" level of some kind (such as SQL, and even Delphi/Lazarus to some extent) as long as the user can "see" (or at least has a reasonable idea of) what's going on behind the scenes. That's important for the tool/language to be used effectively. The problem is that it's not always obvious what is happening in there, which can easily lead to poor use of the tool/language.

Bottom line: the results show that cushy high-level languages can't make up for a user's lack of knowledge.


(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v3.2) on Windows 7 SP1 64bit.

devEric69

  • Hero Member
  • *****
  • Posts: 648
Re: "Computing Theory" forum board
« Reply #25 on: July 25, 2020, 09:04:47 am »
Quote
Handoko wrote:
Doing collision detection is easy. Usually we build an array to store the positions of the objects and test them against each other. But if the game contains many objects, we have to optimize the code for better performance. [snip]

Btw, for information, there is an interesting discussion on the optimal use/performance of some existing containers - their cost for insertion and lookup, whether they are sorted or not, the number of elements, the type of index - here (the benchmarks are impressive): https://forum.lazarus.freepascal.org/index.php/topic,34348.msg359058.html#msg359058
use: Linux 64 bits (Ubuntu 20.04 LTS).
Lazarus version: 2.0.4 (svn revision: 62502M) compiled with fpc 3.0.4 - fpDebug \ Dwarf3.

MarkMLl

  • Hero Member
  • *****
  • Posts: 6647
Re: "Computing Theory" forum board
« Reply #26 on: July 25, 2020, 09:18:02 am »
So at that point, there's less imperative for the user to know the details of the algorithms. But there's a very big imperative for him to recognise e.g. when a problem can be best-described using matrices.
That makes me think of SQL. SQL makes it really easy for just about anyone to manipulate a database, but those without any knowledge of what may be going on inside (the indexes and algorithms used) can cause some real problems (performance problems being the most common).

SQL is a good example. In the hands of somebody with the right mindset it can do some /very/ complex things, and a good implementation will optimise them well. But at that point one has to ask whether it's appropriate to "massage" an operation into the declarative paradigm, or whether one would be better leaving it as an algorithm written in the style to which workers in that particular field are accustomed... hence in the case of SQL various server-side languages such as PL/SQL.

(As an aside, since an algorithm is normally understood to describe a sequence of steps, I'm not sure whether the word really applies to the declarative paradigm where a sequence of steps can be replaced by a single SQL or APL etc. expression.)

I don't have a good SQL example to hand, but I've seen descriptions of techniques which have a lot in common with https://en.wikipedia.org/wiki/APL_(programming_language)#Prime_numbers

Quote
My feeling is that comparatively large languages such as Leda are on the wrong track.
I definitely tend to agree, but I have mixed feelings about that. It's nice to have something that operates at a "descriptive" level of some kind (such as SQL, and even Delphi/Lazarus to some extent) as long as the user can "see" (or at least has a reasonable idea of) what's going on behind the scenes. That's important for the tool/language to be used effectively. The problem is that it's not always obvious what is happening in there, which can easily lead to poor use of the tool/language.

Bottom line: the results show that cushy high-level languages can't make up for a user's lack of knowledge.

Undoubtedly true. However, I'd still like to emphasise the usefulness of being able to merge programming styles, ideally in the same unit (e.g. you're assigning something to a Pascal variable which you can see is being selected using PROLOG-like logic). But I think that my major point is that a language should be extensible in directions its designers haven't anticipated: it's not enough for its designers to say "In version x.y we've added syntax to support logic programming", it should be possible at the user (or at least library-writer) level to add a particular style of logic programming even if this demands syntax enhancements.

As a particular example, ALGOL had its I/O operations hardcoded into the language, and in one anecdote, when somebody from the sales team complained that customers were having a hard time with it, the star compiler writer spent a weekend of his own time sorting out the semantics. Pascal is obviously in a similar position, with the implied polymorphism of WriteLn() etc. being handled as a "special" by the compiler.

It's not generally known, but several versions of the NT family of OSes (certainly v3 up as far as XP, and possibly later) embed a PROLOG engine in the program that handles network connections. If you pull a particular DLL to pieces you can see the rules that control it.

MarkMLl
MT+86 & Turbo Pascal v1 on CCP/M-86, multitasking with LAN & graphics in 128Kb.
Pet hate: people who boast about the size and sophistication of their computer.
GitHub repositories: https://github.com/MarkMLl?tab=repositories

 
