
Author Topic: Why isn't Lazarus / Free Pascal more popular?  (Read 20059 times)

korba812

  • Sr. Member
  • ****
  • Posts: 468
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #75 on: May 02, 2025, 11:47:20 pm »
I wonder how many ERP applications have been developed in Fortran or GNU Octave?

AmatCoder

  • Jr. Member
  • **
  • Posts: 67
    • My site
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #76 on: May 03, 2025, 01:00:58 am »
As everyone can see VC++ created code that executes almost twice as fast as the code created by Lazarus.

Okay, let's multiply two 1000x1000 matrices.
[...]
Final comparative results:

Code: Text  [Select][+][-]
  1. Fortran Execution Time, ms:        1140
  2. Cpp Execution Time, ms:            1433
  3. FPC Execution Time, ms:            2437
  4. FPC Naive Execution Time, ms:      1390
  5. FPC AVX Execution Time, ms:        32
;D

You are cheating here. Why don't you apply the same optimizations in the C++ example?

Right now, the conclusions drawn from that benchmark are:
  • Standard C++ code is almost twice as fast as the standard code produced by FPC (as Lenny33 said).
  • FPC code requires hand optimization to match the speed of standard C++ code.
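For context, the "FPC Naive" row in such benchmarks typically corresponds to the textbook triple loop. A minimal C sketch of that kind of kernel (illustrative only; this is not the benchmark's actual code):

```c
#include <stddef.h>

/* Naive O(n^3) matrix multiply: C = A * B, all n x n, row-major. */
void matmul_naive(size_t n, const double *A, const double *B, double *C)
{
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
    }
}
```

An "optimized" variant (loop interchange, cache blocking, or AVX intrinsics, as in the "FPC AVX" row) computes the same result with a very different constant factor, which is exactly why applying such rewrites on one side only skews the comparison.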

Seenkao

  • Hero Member
  • *****
  • Posts: 711
    • New ZenGL.
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #77 on: May 03, 2025, 01:38:26 am »
You are cheating here.
Since when did using your head become cheating?

Quote
Why don't you apply the same optimizations in C++ example?
Can you give an example of how the same optimizations the C++ compiler applies can be used with another compiler?
------

For you personally: mechanically rewriting code from one language to another reduces the programmer to a typewriter. Why translate code mindlessly when you can see right away what can be optimized, shortening the code in the process?
At the very beginning of this topic I removed a couple of lines from the Pascal code, replaced other lines with equivalent expressions, and got at least a twofold speed-up. Nothing about the code's behavior changed.

A programmer writes code, sees where it can be optimized, and optimizes along the way. Who does that hurt?

And what is wrong with someone deciding to go over their code and improving the whole program? Who does that hurt? You? Because the C++ programmer did not want to use his head and relied on the compiler instead?

People have grown lazy and decided the compiler can do everything for them. Meanwhile, someone optimizes their code here and now. That is far from a bad thing!
I strive to create applications that are minimal and reasonably fast.
Working on ZenGL

440bx

  • Hero Member
  • *****
  • Posts: 5454
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #78 on: May 03, 2025, 01:50:04 am »
A good compiler should generate _better_ code than an expert programmer.

The reason is simple: a compiler should be the sum of experts in code generation and CPU architecture.  As a piece of software running on modern hardware, a compiler can execute anywhere from 3 to 5 billion instructions per second (I've never known anyone who could do that), so it should easily outdo an expert.  Plus, today's processors have become so complex that producing optimal code for them is beyond the reach of most humans.

This concept that the compiler should be more capable than even an expert programmer is nothing new.  When IBM produced the first Fortran compiler, one of the goals was for the compiler to generate faster code than an expert programmer could.

The one thing the programmer is responsible for is: selecting what is hopefully the best algorithm to solve the problem.  Generating the best possible code is what the compiler, not the programmer, is for.
(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v4.0rc3) on Windows 7 SP1 64bit.

Martin_fr

  • Administrator
  • Hero Member
  • *
  • Posts: 11331
  • Debugger - SynEdit - and more
    • wiki
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #79 on: May 03, 2025, 01:57:34 am »
Since when did using your head become cheating?

Doing it biased for one target but not the other... (according to the replies / haven't double checked)

But all that is beside the point.

In another example, a classic optimization:
Writing
Code: Pascal  [Select][+][-]
  1. p := @data[0];
  2. e:= @data[n];
  3. while p < e do begin
  4.   p^ := foo;
  5.   inc(p);
  6. end;

is way more likely to hide some error than
Code: Pascal  [Select][+][-]
  1. for i := 0 to n-1 do
  2.   data[i] := foo;

If it wasn't, we wouldn't need anything but assembler. We use high level languages so we can write more readable code, and avoid some mistakes.

If I have to "reduce the level" of the language, then that is the exact opposite of why I am using it...

AmatCoder

  • Jr. Member
  • **
  • Posts: 67
    • My site
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #80 on: May 03, 2025, 02:10:57 am »

Since when did using your head become cheating?
[...]
People got lazy. And decided that the compiler can do everything for them. And someone optimizes the code here and now. And this is far from bad!
What? I am not against optimizing. Where did I write that? Do you understand English?

We're talking about benchmarks.
You should not pit optimized code against non-optimized code for a performance comparison. If you do, you're cheating.

Seenkao

  • Hero Member
  • *****
  • Posts: 711
    • New ZenGL.
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #81 on: May 03, 2025, 02:22:03 am »
440bx, judging by your words, a programmer is not needed; what is needed is someone who assembles programs from ready-made components.

Algorithms are born and die. You cannot put every algorithm into the compiler, for the simple reason that there are so many of them that the compiler would spend not hours but months searching through them for a single program. Given that compilers stopped being single-pass long ago, your first build would finish only after several years.

It is clear that human knowledge is invested in a compiler. The more popular compilers develop faster, the less popular ones more slowly. Compilers do not make many passes over a program for nothing: first they restructure the programmer's code so it is easier to process, then, pass after pass, they apply the optimizations built into them. In the end we get a result that is better in some places and worse in others.

By writing more optimized code ourselves, we take some decisions away from the compiler, and the final code can come out better, either because the programmer indicated how to structure the program (FPC optimizes much better when given hints), or because the compiler can spend the freed-up passes on additional optimization. With very good compilers we may even get shorter compile times (the compiler finds nothing left to optimize and finishes early).

And now about the programs themselves.
A person writes the program. A program typically has several modules in which some data is interrelated. As the program grows, a person may notice that a change in one module can affect the behavior of another. In the first module it may not matter how long an operation takes, because it barely affects the program as a whole; but changes in the other module are critical, and changing the code in both modules together will speed the program up.
No compiler will see this! Compilers are not designed to analyze code across modules and restructure data so that some other part of the program can be optimized for speed.

And no matter how hard you try, you will not be able to improve a compiler to the point where it can do this.
Even the new "AI" cannot, even though far more resources have been spent on it.
I strive to create applications that are minimal and reasonably fast.
Working on ZenGL

Seenkao

  • Hero Member
  • *****
  • Posts: 711
    • New ZenGL.
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #82 on: May 03, 2025, 02:28:54 am »
You should not pit optimized code against non-optimized code for a performance comparison. If you do, you're cheating.
Here I "changed" the Pascal program. I did not actually do anything, yet I got the speed-up, as I wrote above. How can this be considered cheating if I did not change the benchmark itself?

As for the rest, I also disagree that using assembler in some language counts as cheating, because in the end it is the finished program that gets tested, not the test application.

My English is poor, and the translator may have distorted something. I read very slowly, and I still need hints for translation.
« Last Edit: May 03, 2025, 02:32:45 am by Seenkao »
I strive to create applications that are minimal and reasonably fast.
Working on ZenGL

AmatCoder

  • Jr. Member
  • **
  • Posts: 67
    • My site
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #83 on: May 03, 2025, 03:20:57 am »
I will explain it to you like you're five:

Here we are comparing the performance of C++ vs FPC.

Example: if you use assembler to optimize the FPC code but do not use assembler to optimize the C++ code, then you cannot say "FPC is faster!"

You need to use assembler in both programs (or in neither) for a reliable comparison.
« Last Edit: May 03, 2025, 03:23:53 am by AmatCoder »

Seenkao

  • Hero Member
  • *****
  • Posts: 711
    • New ZenGL.
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #84 on: May 03, 2025, 04:55:26 am »
I will explain it to you like you're five:
Fine, I will answer you in your own words, since that is what you want.

I will explain it to you like a 7-year-old:
Suppose you write a program that has to process as much data as possible. If you used every capability of the language to do that, then it does not matter which capabilities you used: assembler is part of your language, and it helps you improve your program.

As a result, the Pascal program may turn out faster precisely because its programmer used every tool available to him, while the C++ programmer relied on the compiler.

Also, go back 20 years and you will find that, to prove the C/C++ compiler was better, people back then did not hesitate to reach for assembler either.

Now I will explain it to you like a 3-year-old:
As I understand it, you grab a 100-liter barrel when you want a sip of water, and you compare that with the canteen next door, where a machine poured water into cups long ago and people simply walk up and take one. To you these are the same thing.

Drinking from the barrel is Pascal; taking water from the canteen counter is C/C++.
Personally, to drink from the barrel I will take a cup and scoop, rather than lift the whole barrel (as you suggest). And to pour for others I will scoop from the barrel rather than tip it over (which is what you propose).

P.S. Benchmarks exist to compare the performance of programs, not to compare the work of compilers.
!!!
« Last Edit: May 03, 2025, 05:35:07 am by Seenkao »
I strive to create applications that are minimal and reasonably fast.
Working on ZenGL

440bx

  • Hero Member
  • *****
  • Posts: 5454
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #85 on: May 03, 2025, 05:51:50 am »
@Seenkao,

you have a very "Personal Computer" view of what a compiler is supposed to be (or is in the Personal Computer world.)

440bx, judging by your words, a programmer is not needed, but an assembler of programs from components is needed.
That is a grossly incorrect conclusion.  The programmer is needed to design and select algorithms to solve a problem.  A compiler knows _nothing_ about how to do that, and it is rather unlikely it ever will, because you can't tell a compiler "write me a word processor" or whatever it is your little heart desires at the time.

You will not be able to enter all the algorithms into the compiler,
The thought of entering algorithms into the compiler wouldn't even cross a compiler writer's mind.  There is no point whatsoever in doing that.

What the compiler should know are things like the CPU instruction set; how to create dependency chains (so it can internally build dependency diagrams); how many registers it can use, and any special abilities (or restrictions) some registers may have; the number of clock cycles used by every instruction; and the number of pipelines the CPU uses.  That's just a _few_ of the things a good compiler should know about the processor it generates code for, and just some of the basics: in many cases the compiler should also have basic knowledge of algebra to enable it to restructure expressions into a better-optimized form.

Initially, they structure the programmer's code so that it can be processed more easily. Then, pass after pass, they apply the optimizations already embedded in them. And in the end, we get a result, somewhere better, somewhere worse.
That's the most basic stuff: the peephole optimizer.  That "optimization" mostly removes redundancy, which does make the program faster and smaller, but that's optimization 0.0001.

By making more optimized code, we automatically remove some of the decisions from the compiler and the final code can be more optimal, due to the fact that either the programmer indicated how to better structure the program (FPC with hints optimizes much better), or the compiler can spend the freed passes on additional optimization. In the case of very good compilers, we will get less compilation time (the compiler will not see what can be optimized and will finish the work).
Programmer time is expensive (even a run-of-the-mill programmer is expensive); CPU cycles are cheap and getting cheaper every day.  The work should be done by the part that is cheap, which is the CPU.  Another reason the CPU should do the hard work is that it can analyze large numbers of alternatives in a tiny fraction of the time it would take a human to analyze just one.

A person makes a program. Typically a program has several modules in which some data is interrelated. After some time, the program is filled and a person can see that changes in one module can affect the work in another module. At the same time, in the first module it does not matter how much time it takes, because it almost does not affect the program itself. But changes in another module are quite critical, and changing the code in the first module with a change in the code in another module will increase the speed of the program. No compiler will see this! The compiler is not designed to analyze the code in different modules or change data structures so that in some part of the program it would be possible to optimize other code to speed up its operation.
A good optimizing compiler is designed to see all the code.  Even FPC supposedly can do that with WPO.

And even if you try your best, you will not be able to optimize the compiler so much that it can do this.
It is a lot easier to produce a really good optimizing compiler than to hand-pick what you _believe_ is a good set of instructions for every piece of code of every program you write.  That's extremely inefficient (not to mention error-prone).

One central reason RISC processors were created was to move the optimization burden from the programmer to the compiler, simply because the compiler operates at a speed humans can't even dream of.  Even a late-1950s IBM machine could do 50,000 multiplications per second; find a human being who can do 1/100th of that. Good luck!

The only area where the human being can beat a computer is in the selection of the proper algorithm to solve a problem, simply because it is the human being who understands the problem and has the critical information needed to make the correct decisions.  For instance, if you need to sort a list of 4 elements, a quicksort is not a good choice, but it could be a very good choice in other cases.  Neither a compiler nor a computer can make that choice, simply because they don't understand the problem and lack the information necessary to make it.
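That sorting example can be made concrete: practical sort routines usually combine both choices, recursing with quicksort but switching to insertion sort below a small cutoff where quicksort's overhead dominates. A minimal C sketch (the CUTOFF value here is an arbitrary illustration; real libraries tune it empirically):

```c
#define CUTOFF 8  /* arbitrary illustrative threshold; real libraries tune this */

/* Insertion sort: O(n^2), but lowest overhead on tiny ranges. */
static void insertion_sort(int *a, long lo, long hi)
{
    for (long i = lo + 1; i <= hi; i++) {
        int key = a[i];
        long j = i;
        while (j > lo && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

/* Quicksort that hands small partitions to insertion sort. */
void hybrid_sort(int *a, long lo, long hi)
{
    while (lo < hi) {
        if (hi - lo < CUTOFF) {
            insertion_sort(a, lo, hi);
            return;
        }
        int pivot = a[lo + (hi - lo) / 2];
        long i = lo, j = hi;
        while (i <= j) {
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++; j--;
            }
        }
        hybrid_sort(a, lo, j);  /* recurse on the left part */
        lo = i;                 /* iterate on the right part */
    }
}
```

The interesting part is precisely the decision the compiler cannot make: knowing that the inputs are tiny (or huge, or nearly sorted) is information only the programmer has.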

Think about it: the great majority of bugs in programs are caused by _humans_ not by the compiler.  As it is, humans already make enough mistakes without the burden of "helping the compiler" optimize code. 

A good compiler should be an expert system whose purpose is to generate superb code for its target processor.  It's the compiler's job to produce optimized code.  It's the programmer's job to select the best algorithms to solve a problem.

I'm not sure, but if FPC were a really good optimizing compiler, maybe it would be more popular (a lot of programmers love to brag about how fast their code is, and if the compiler helps in that area, it's usually a "welcome feature".)
(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v4.0rc3) on Windows 7 SP1 64bit.

LV

  • Sr. Member
  • ****
  • Posts: 286
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #86 on: May 03, 2025, 06:17:48 am »
You are cheating here.

I don't understand what you are talking about. I have done my job; let the readers draw their own conclusions.
In my opinion, a FORTRAN veteran is a good candidate for heavy math and engineering calculations. Modern FORTRAN compilers (GCC, Intel oneAPI, NVIDIA, AMD, Oracle/Sun...) support parallelism, GPU acceleration, and much more out of the box.  ;)


JD

  • Hero Member
  • *****
  • Posts: 1906
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #87 on: May 03, 2025, 10:10:49 am »
Ladies and gentlemen, passionate Lazarus/FPC users, volunteers and developers,

Lend me your ears. May I kindly request that we mind our language when we express our opinions? I'm no snowflake but calling someone a novice, stupid or infantile does not help our cause. We are a small community, and we are at risk of making it smaller by trading insults and belittling each other. Our levels of expertise differ, but of what use is the shiniest and best invention the brightest one among us can build if it does not attract others? Edison was not the brightest (compared to Tesla) but he had mindshare and, I would argue, more emotional intelligence.

That said, you are welcome to take out ALL your frustrations on me. I don't care and can handle it  :D. But many cannot and leave if the history of this forum is anything to go by. So, a shrinking user base is not the objective. We all have a stone that we can use to build this edifice and keep Lazarus/FPC alive.

 :D :D :D :D
« Last Edit: May 03, 2025, 10:55:57 am by JD »
Linux Mint - Lazarus 4.0/FPC 3.2.2,
Windows - Lazarus 4.0/FPC 3.2.2

mORMot 2, PostgreSQL & MariaDB.

Hansvb

  • Hero Member
  • *****
  • Posts: 818
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #88 on: May 03, 2025, 10:22:45 am »
+1 (JD)

Seenkao

  • Hero Member
  • *****
  • Posts: 711
    • New ZenGL.
Re: Why isn't Lazarus / Free Pascal more popular?
« Reply #89 on: May 03, 2025, 01:46:03 pm »
What the compiler should know are things like, the CPU instruction set, it should know how to create dependency chains (so it can internally create dependency diagrams), it should know how many registers it can use, any special abilities (or restrictions) some registers may have, it should know about the number of clock cycles used by every instruction, it should know the number of pipelines the CPU uses and, that's just a _few_ of the things a good compiler should know about the processor it generates code for.  That's just some of the basics, in many cases the compiler should also have basic knowledge of algebra to enable it to restructure expressions in a form that is better optimized.
The algorithms are also built into the compiler, even into FPC. I searched long and hard for a division optimization and found an algorithm on old forums. As it turned out, the search was unnecessary: looking at the compiled code (without any intervention on my part), the compiler had inserted the division algorithm itself. Division by 10, if memory serves.

Quote
That's the most basic stuff.  The peephole optimizer.  That "optimization" is mostly to get rid of redundancy which does make the program faster and smaller but that's optimization 0.0001.
Yes and no. Sometimes it helps, in critical places (see the example earlier, on page 3, where we got a speed-up of 2x or more without really doing anything).

Quote
A person makes a program. [...] No compiler will see this! The compiler is not designed to analyze the code in different modules or change data structures so that in some part of the program it would be possible to optimize other code to speed up its operation.
A good optimizing compiler is designed to see all the code.  Even FPC supposedly can do that with WPO.
Seeing the code and analyzing it are two different things.

Quote
It is a lot easier to produce a really good optimizing compiler than having to select, what you _believe_ is a good set of instructions for every piece of code of every program you write.  That's extremely inefficient (not to mention error prone.)
The quote above does not say the code should be changed only locally!
Do you really think I only do micro-optimizations?  :D I gave an example above where the compiler cannot cope. And that is not micro-optimization but code study (deliberate or accidental, it does not matter).

And you confirmed my words yourself:
Quote
The only area where the human being can beat a computer is in the selection of the proper algorithm to solve a problem simply because it is the human being who understands the problem and has critical information needed to make the correct decisions.
:)

Quote
A good compiler should be an expert system whose purpose is to generate superb code for its target processor.  It's the compiler's job to produce optimized code.  It's the programmer's job to select the best algorithms to solve a problem.
Its purpose should be to help the programmer, so that the final code is better optimized. The compiler should not do anything on a person's behalf unless told to (for example, by specifying an optimization level we state that the program may be transformed, and how far we allow it to go).

You always have to judge whether optimization is worth doing at all. In the early stages of development it often is not. Optimization is worth doing once you know the program works and you are about to release it publicly. That does not mean reviewing all the code and sprinkling micro-optimizations everywhere; it means optimizing the places you can see right now.
Later, while going over the program, you may notice that something somewhere can still be optimized. Why not do it? And of course you should optimize when optimization is actually needed, when the program is visibly slow.  :)
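The divide-by-10 rewrite mentioned above is the classic "magic number" strength reduction: the compiler replaces an integer division by a constant with a multiply by a fixed-point reciprocal plus a shift. A hedged C sketch of the well-known 32-bit unsigned form (illustrative; the exact sequence FPC emits may differ):

```c
#include <stdint.h>

/* Unsigned divide by 10 without a DIV instruction:
 * multiply by the fixed-point reciprocal 0xCCCCCCCD (about 2^35 / 10),
 * then shift right by 35. This form is exact for all 32-bit inputs. */
uint32_t div10(uint32_t x)
{
    return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
}
```

Since integer division is typically many times slower than multiplication on modern CPUs, mainstream compilers perform this rewrite even at modest optimization levels.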
I strive to create applications that are minimal and reasonably fast.
Working on ZenGL

 
