
Author Topic: IRC channel  (Read 10944 times)

PierceNg

  • Sr. Member
  • ****
  • Posts: 369
    • SamadhiWeb
Re: IRC channel
« Reply #75 on: December 04, 2022, 06:42:15 am »
Another way to learn this stuff is to read some books. Here's one: Bertrand Meyer's Introduction to the Theory of Programming Languages: full ebook now freely available
« Last Edit: December 04, 2022, 06:46:54 am by PierceNg »

Seenkao

  • Hero Member
  • *****
  • Posts: 545
    • New ZenGL.
Re: IRC channel
« Reply #76 on: December 04, 2022, 09:12:54 am »
I don’t suppose that there is a way to make assembly language cross platform?
I don't think you should go that deep into assembler, because at that point it isn't really assembler anymore.
You can look here for an introduction.
I don't quite understand why so many people think that assembler can't be cross-platform.

Quote

Is there a way to improve the fpc compiler to be as fast for that as assembly language maybe?
It can be sped up, but it can't fully catch up. Still, that speed-up work needs to be done, and at the moment nobody is doing it. The ARM assembler output is hardly optimized at all. The issues reported on the bug tracker relating to improved code compilation are not being addressed at this time.
I just hope they have at least started thinking about improving code compilation!
But I suppose there are enough other problems as it is.
Personally, I don't really want to dig into the FPC internals, especially without knowing where and what to look for. It's easier for me to write the assembler inserts I need myself.
So we will wait. The difference in compilation between FPC 3.0.4 and FPC 3.2.0 is enough to see that there is progress.
I strive to create applications that are minimal and reasonably fast.
Working on ZenGL

Joanna

  • Hero Member
  • *****
  • Posts: 701
Re: IRC channel
« Reply #77 on: December 06, 2022, 12:22:09 am »
Quote
real time "chatty" mediums have those characteristics but, for topics that are often centered on problem solving, a static thread of text messages (such as this forum) is usually a much better medium because the progression to the "solution" or best answer isn't simply lost. 
It isn't about comparing forums to IRC. Of course forums have the advantage of being more organized, and people online on different dates and times can have a conversation.

IRC has advantages over forums because you're more likely to learn things you weren't expecting to, since people happen to be talking about them. Things learned in this way can be useful later. Also, as I said before, IRC is better for asking about things you know nothing about than forums, where people expect you to know the correct terminology for the question you are asking. IRC is more for people interested in a topic who like to talk. Although people with Pascal problems do come to IRC, it is not necessary to have an actual problem to talk about Pascal there.

As for the discussion of avoiding memory fragmentation with good memory management, how would that be done besides only allocating variables once? I am in the habit of deallocating things as soon as they are out of view and recreating them if needed. Would I eventually use up all the memory?
✨ 🙋🏻‍♀️ More Pascal enthusiasts are needed on IRC .. https://libera.chat/guides/ IRC.LIBERA.CHAT  Ports [6667 plaintext ] or [6697 secure] channel #fpc  Please private Message me if you have any questions or need assistance. 💁🏻‍♀️

Martin_fr

  • Administrator
  • Hero Member
  • *
  • Posts: 9754
  • Debugger - SynEdit - and more
    • wiki
Re: IRC channel
« Reply #78 on: December 06, 2022, 12:49:29 am »
As for the discussion of avoiding memory fragmentation with good memory management, how would that be done besides only allocating variables once? I am in the habit of deallocating things as soon as they are out of view and recreating them if needed. Would I eventually use up all the memory?

For your average app => the mem manager does a good enough job. However, if you run a server app that is expected to run 24/7 all year round, then you may want to take steps.

Though of course some desktop apps may also need it. The memory manager may be good at dealing with it, but if you have an app that consumes a lot of memory and you have to add a few percent for ongoing fragmentation, then you may just cross the critical line.

I am not an expert on it - and if you google it, there will probably be a lot more.

One effective measure that is relatively easily done is to use your own pools for specific data. (Note that many memory managers already do that for small chunks.)
If you often instantiate (and then free) a specific class (i.e. all instances have the exact same size), you could allocate a pool of memory fitting 100 such instances.
There will be no fragmentation build-up, since any gap is guaranteed to be filled by the next alloc (remember: same size => the next alloc will be a perfect fit).

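Something along these lines, as a minimal untested sketch (chunk size and count picked arbitrarily, error handling omitted):

Code: Pascal
{$mode objfpc}{$H+}
program FixedPoolSketch;

const
  ChunkSize  = 64;   // arbitrary example size of one "instance"
  ChunkCount = 100;  // maximum number of live chunks in the pool

var
  PoolMem: Pointer;                                // one big allocation
  FreeChunks: array[0..ChunkCount - 1] of Pointer; // stack of free chunks
  FreeTop: Integer;

procedure PoolInit;
var
  i: Integer;
begin
  GetMem(PoolMem, ChunkSize * ChunkCount);
  for i := 0 to ChunkCount - 1 do
    FreeChunks[i] := Pointer(PtrUInt(PoolMem) + PtrUInt(i) * ChunkSize);
  FreeTop := ChunkCount;
end;

function PoolGet: Pointer;
begin
  if FreeTop = 0 then
    Exit(nil);                  // pool exhausted; caller must handle it
  Dec(FreeTop);
  Result := FreeChunks[FreeTop];
end;

procedure PoolPut(p: Pointer);
begin
  FreeChunks[FreeTop] := p;     // this slot is a perfect fit for the next PoolGet
  Inc(FreeTop);
end;

var
  a, b: Pointer;
begin
  PoolInit;
  a := PoolGet;
  b := PoolGet;
  PoolPut(a);      // freeing leaves a "gap"...
  a := PoolGet;    // ...which the next allocation fills exactly
  PoolPut(a);
  PoolPut(b);
  FreeMem(PoolMem);
end.
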
Or pre-allocate chunks of pre-defined sizes for certain tasks, so you can return the entire chunk when you are done.
You can also run that work in a different process => then it won't affect your main process's memory.

Work with as little dynamic allocation as possible.
I wouldn't think about that unless it's mission-critical server software,
or for embedded and other systems with strictly limited memory.


440bx

  • Hero Member
  • *****
  • Posts: 3921
Re: IRC channel
« Reply #79 on: December 06, 2022, 02:39:53 am »
As for the discussion of avoiding memory fragmentation with good memory management, how would that be done besides only allocating variables once? I am in the habit of deallocating things as soon as they are out of view and recreating them if needed. Would I eventually use up all the memory?
Fragmentation occurs because there is a significant variation in the lifetimes of the various entities allocated in a single large global pool (usually a heap.)

One of the first rules to follow to avoid fragmentation is to create separate pools for distinct types of data that usually have similar lifetimes (group by data type, since items of the same type often have similar lifetimes.) It is not possible to give universal rules because the pools required are determined by the nature of the problem being solved.

In Windows, there are two features that greatly simplify memory management. 

One is the ability to create individual, extensible heaps. Those are excellent for keeping large numbers of relatively small, heterogeneous, relatively short-lived pieces of data that must be accessible in more than one function. Once the data is no longer needed, the _entire_ heap is destroyed, with no piecemeal deletions (which, in addition to causing fragmentation, are also the main contributors to memory leaks.) As a bonus, if the heaps are created in such a way that they are guaranteed to be accessed by only one thread at a time, then they don't need to use a synchronization object, which makes allocations and deallocations faster.
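
A bare-bones sketch of what that looks like from FPC (my illustration, not production code; Windows only, using the Windows unit, error checking omitted):

Code: Pascal
{$mode objfpc}
program PrivateHeapSketch;

uses
  Windows;

var
  Pool: THandle;
  p1, p2: Pointer;
begin
  // Create a growable private heap. HEAP_NO_SERIALIZE skips the internal
  // lock, which is only safe if a single thread ever touches this heap.
  Pool := HeapCreate(HEAP_NO_SERIALIZE, 0, 0);

  // Allocate from the private heap instead of the default process heap.
  p1 := HeapAlloc(Pool, 0, 256);
  p2 := HeapAlloc(Pool, 0, 1024);

  // ... use p1 and p2 ...

  // No piecemeal HeapFree calls: destroying the heap releases everything
  // allocated from it in one go.
  HeapDestroy(Pool);
end.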

The other feature which greatly simplifies memory management in any demand-paged O/S (such as Windows and most O/Ss today) is the ability to allocate large blocks of memory but have the O/S allocate/commit the memory _only_ when it is actually needed/used. For instance, in 64 bit Windows, a programmer can allocate "n" blocks of 1 GB (that's gigabytes) of memory and, after the allocation is done, the _total_ amount of memory allocated/used will be "n" x 4 _kilobytes_; the rest will only be allocated by the O/S if and when the program reads/writes an address in the range. You can verify that's how much is allocated by Windows using either Process Hacker 2 or Process Explorer (look at the working set size.) For the sake of example, if "n" is 16, the _total_ amount of memory Windows will set aside to cover the 16 blocks of 1 GB each will be a "whopping" 64 kilobytes (16 * 4 KB.)

In the above case, it is quite common to see code that only reserves the range and then has an _unnecessary_ explicit exception handler to commit more as needed. The exception handler is unnecessary because any demand-paged O/S does the allocation automatically, and only when needed, if the memory is committed in addition to being reserved. The problem is that it is commonly (and erroneously) believed that if the memory is committed then it will also be entirely allocated during the call that makes the allocation (it is not; most O/Ss will only allocate one page.)
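
To illustrate (untested sketch, Windows only, 64 bit assumed, no error checking): commit the whole range up front and simply use it; the O/S backs pages as they are touched, no exception handler needed.

Code: Pascal
{$mode objfpc}
program DemandPagingSketch;

uses
  Windows;

const
  OneGB = 1024 * 1024 * 1024;

var
  Block: Pointer;
begin
  // Reserve AND commit 1 GB in a single call. Only address space and
  // bookkeeping are set up here; no physical pages are handed out yet.
  Block := VirtualAlloc(nil, OneGB, MEM_RESERVE or MEM_COMMIT, PAGE_READWRITE);

  // Touching one byte makes the O/S back exactly one page (typically 4 KB).
  PByte(Block)^ := 42;

  // Touching a byte 100 MB further in backs one more page, not 100 MB.
  PByte(PtrUInt(Block) + 100 * 1024 * 1024)^ := 42;

  // Release the whole range at once.
  VirtualFree(Block, 0, MEM_RELEASE);
end.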

As a bonus, when using virtual memory, the programmer can create multiple regions within a range. By marking some regions as inaccessible before and after an accessible one, a programmer can more easily test for overruns and errant pointers.
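
A sketch of that guard-region idea (again untested, Windows only; the 4 KB page size is an assumption, real code would query GetSystemInfo):

Code: Pascal
{$mode objfpc}
program GuardRegionSketch;

uses
  Windows;

const
  PageSize = 4096;  // assumed; query GetSystemInfo in real code

var
  Base: Pointer;
  Usable: PByte;
begin
  // Reserve three pages of address space without committing any of them.
  Base := VirtualAlloc(nil, 3 * PageSize, MEM_RESERVE, PAGE_NOACCESS);

  // Commit only the middle page as read/write; the page before and the
  // page after it remain inaccessible.
  Usable := VirtualAlloc(Pointer(PtrUInt(Base) + PageSize), PageSize,
                         MEM_COMMIT, PAGE_READWRITE);

  Usable^ := 1;   // fine: inside the committed page

  // Writing one byte below Usable, or PageSize bytes past it, would hit
  // an uncommitted page and raise an access violation immediately,
  // instead of silently corrupting a neighbouring allocation.

  VirtualFree(Base, 0, MEM_RELEASE);
end.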

Lastly, it's unlikely that there is another feature that has more potential to simplify code than custom memory management (not to mention making the code faster.)

HTH.
« Last Edit: December 06, 2022, 02:46:08 am by 440bx »
(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v3.2) on Windows 7 SP1 64bit.

Joanna

  • Hero Member
  • *****
  • Posts: 701
Re: IRC channel
« Reply #80 on: December 08, 2022, 04:45:36 pm »
Thanks for the explanation. I'm not sure how to create pools from inside Pascal code, or is it set somewhere else? If the memory does get too fragmented, it seems like it could be defragmented similar to how a disk is defragmented? Which I assume would involve copying things to temporary locations and then writing them more compactly?
✨ 🙋🏻‍♀️ More Pascal enthusiasts are needed on IRC .. https://libera.chat/guides/ IRC.LIBERA.CHAT  Ports [6667 plaintext ] or [6697 secure] channel #fpc  Please private Message me if you have any questions or need assistance. 💁🏻‍♀️

KodeZwerg

  • Hero Member
  • *****
  • Posts: 2006
  • Fifty shades of code.
    • Delphi & FreePascal
Re: IRC channel
« Reply #81 on: December 08, 2022, 04:56:56 pm »
You should not deal with memory at all; the memory manager does that for you.

440bx

  • Hero Member
  • *****
  • Posts: 3921
Re: IRC channel
« Reply #82 on: December 08, 2022, 05:54:32 pm »
Thanks for the explanation.
You're welcome, but the problem is that just about any explanation will be deficient in some way, because correct memory management depends on the situation. All that can be accurately presented are the methods; their correct application depends, as previously stated, on the situation at hand.

I'm not sure how to create pools from inside Pascal code, or is it set somewhere else?
TTBOMK, there is no mechanism in Pascal, including FPC, to create separate/independent pools of memory. In Windows those can be created using HeapCreate/HeapDestroy for heaps and VirtualAlloc(Ex)/VirtualFree for virtual memory. I know that Linux offers similar functions for virtual memory management, but I don't know if it offers creation and destruction of heaps.
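
For the virtual memory side on Linux, an untested sketch using FPC's BaseUnix unit (anonymous private mapping; constant names as defined for Linux, error handling omitted) would look something like this:

Code: Pascal
{$mode objfpc}
program MmapSketch;

uses
  BaseUnix;

const
  OneGB = 1024 * 1024 * 1024;

var
  Block: Pointer;
begin
  // Ask the kernel for 1 GB of anonymous, private memory. As with
  // VirtualAlloc, physical pages are only supplied when first touched.
  Block := Fpmmap(nil, OneGB, PROT_READ or PROT_WRITE,
                  MAP_PRIVATE or MAP_ANONYMOUS, -1, 0);

  // Touch one byte => the kernel backs a single page.
  PByte(Block)^ := 42;

  // Return the whole range in one call; no per-object bookkeeping.
  Fpmunmap(Block, OneGB);
end.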

If the memory does get too fragmented, it seems like it could be defragmented similar to how a disk is defragmented?
Yes, it is similar, but not quite the same. The difference is mostly that on a disk there is an atomic allocation size (usually the cluster, which in Windows is typically 4 KB), and a file can be made of a linked list of clusters. The file is considered fragmented when its clusters aren't contiguous.

In memory, particularly heaps, the problem is different. Free heap blocks, unlike disk clusters, can vary greatly in size, which causes problems when attempting to reuse them: either they may be too small, and therefore unusable, or too large, which, if used for a smaller structure, means some of the memory is simply wasted.

Which I assume would involve copying things to temporary locations and then writing them more compactly?
There are lots of problems associated with attempting to defragment memory. It's rarely worth the time or the code.



You should not deal with memory at all; the memory manager does that for you.
I disagree.  Good memory management is one of the major factors in writing a program that is resource efficient and responsive.  Languages and "systems" (read VMs) that automate memory management for the programmer run into limitations that are a direct result of the "convenience".
(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v3.2) on Windows 7 SP1 64bit.

Martin_fr

  • Administrator
  • Hero Member
  • *
  • Posts: 9754
  • Debugger - SynEdit - and more
    • wiki
Re: IRC channel
« Reply #83 on: December 08, 2022, 06:09:49 pm »
You should not deal with memory at all; the memory manager does that for you.

+1 for 99% of cases in Desktop apps...

Thanks for the explanation. I’m not sure how to create pools from inside pascal code or is it set somewhere else? If the memory does get too fragmented it seems like it could be de fragmented similar to how a disk is defragmented? Which I assume would involve copying things to temporary locations and then writing them more compactly?

No, you can't defragment it (well, not in any practical sense).

If your code allocated some memory for whatever reason, there could be any number of pointers to it. You can't move that memory unless you can, at the same time, update every single one of those pointers. So only the code that holds the pointers can "ask whether re-allocation would help". But there may be many different bits of code holding that pointer, and they would all have to do it together... => so it is practically impossible.



As for how to manage this stuff yourself... there are many ways. They also often cross over into performance optimization.

Like the earlier mention (by 440bx) of allocating dedicated pages. This can also reduce swapping (if cleverly done).

Examples:

Lists have a "Capacity" property, so you can pre-allocate what you need, rather than increasing usage in small bits.
That way you never have to free those small bits, and you don't create gaps by doing so.
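
For instance (a tiny untested sketch using TFPList from the Classes unit):

Code: Pascal
{$mode objfpc}
program CapacitySketch;

uses
  Classes;

var
  L: TFPList;
  i: Integer;
begin
  L := TFPList.Create;
  // One allocation up front, instead of letting the list grow and
  // re-allocate in many small steps.
  L.Capacity := 10000;
  for i := 1 to 10000 do
    L.Add(Pointer(PtrInt(i)));  // no re-allocation happens in this loop
  L.Free;
end.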

Classes have "NewInstance", in which you can allocate the memory yourself.
So if you have a class of which you frequently create and destroy instances, and you know the maximum you will ever have created is 1000, then you allocate the memory for those 1000 up front. You essentially handle that as an array of mem_chunks_with_size_of_the_instance.
This pre-allocated memory will of course have gaps at times. But because all chunks (free/gap and used) have the same size, those gaps can always be reused. That pre-allocated memory will always be able to hold those 1000 instances.
Of course, instead you could create the 1000 instances once and then re-use them without destroying them.
The key is that you pre-determine the maximum you will ever need.
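
A rough, untested sketch of the NewInstance route (a fixed pool of 1000 slots; exhaustion and thread safety are not handled):

Code: Pascal
{$mode objfpc}{$H+}
program InstancePoolSketch;

const
  MaxInstances = 1000;   // the pre-determined maximum ever needed

type
  TPooled = class
  public
    class function NewInstance: TObject; override;
    procedure FreeInstance; override;
  end;

var
  PoolMem: Pointer;                                   // one block for all slots
  FreeSlots: array[0..MaxInstances - 1] of Pointer;   // stack of free slots
  FreeTop: Integer;

class function TPooled.NewInstance: TObject;
begin
  Dec(FreeTop);
  // InitInstance zero-fills the slot and sets up the VMT pointer.
  Result := InitInstance(FreeSlots[FreeTop]);
end;

procedure TPooled.FreeInstance;
begin
  CleanupInstance;                       // finalize managed fields, if any
  FreeSlots[FreeTop] := Pointer(Self);   // hand the slot back to the pool
  Inc(FreeTop);
end;

procedure PoolInit;
var
  i: Integer;
  sz: SizeInt;
begin
  sz := TPooled.InstanceSize;
  GetMem(PoolMem, sz * MaxInstances);
  for i := 0 to MaxInstances - 1 do
    FreeSlots[i] := Pointer(PtrUInt(PoolMem) + PtrUInt(i) * PtrUInt(sz));
  FreeTop := MaxInstances;
end;

var
  a, b: TPooled;
begin
  PoolInit;
  a := TPooled.Create;   // memory comes from the pool via NewInstance
  b := TPooled.Create;
  a.Free;                // its slot goes back and is reused...
  a := TPooled.Create;   // ...by the very next Create
  a.Free;
  b.Free;
  FreeMem(PoolMem);
end.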

So the perfect app (as far as avoiding fragmentation goes) will never grow its allocations (neither in size, nor in count).
Determine at the start what you need, allocate it once, and be done with allocating => then there won't be fragmentation.

From memory / last time I looked at it: "NGINX" does that.
It allocates a fixed number of "connection" objects. If they are all used up, it will refuse new incoming connections.
The sys-admin can configure that number before starting the nginx server.
The limit is fine, because in normal operation of a webserver, once a connection is established the server is busy (using CPU) computing the response. Since the server has a limited amount of CPU (even if indirectly, by forwarding to other servers whose capacity is also fixed), it will only be able to compute so much. So there is no point in accepting a new connection if it is clear that it will be on hold for an excessively long time. Such connections may as well be refused straight away. And hence the maximum number of objects required can be pre-calculated.

On the other hand, "Apache" (at least version 1) would solve any memory issue by simply restarting each child process at regular intervals. Since each process has its own virtual address space, this solves any memory problem (fragmentation, leaks, ...). It is an expensive operation, though.

« Last Edit: December 08, 2022, 06:12:56 pm by Martin_fr »

Joanna

  • Hero Member
  • *****
  • Posts: 701
Re: IRC channel
« Reply #84 on: December 09, 2022, 01:17:51 am »
Quote
You can't move that memory, unless you can at the same time update every single of those pointers.
That’s interesting I didn’t think about the pointers being fixed addresses.
If I allocate and deallocate the same 5 classes whose size doesn’t change, will they stay at the same addresses in memory each time?
✨ 🙋🏻‍♀️ More Pascal enthusiasts are needed on IRC .. https://libera.chat/guides/ IRC.LIBERA.CHAT  Ports [6667 plaintext ] or [6697 secure] channel #fpc  Please private Message me if you have any questions or need assistance. 💁🏻‍♀️

440bx

  • Hero Member
  • *****
  • Posts: 3921
Re: IRC channel
« Reply #85 on: December 09, 2022, 02:22:15 am »
That’s interesting I didn’t think about the pointers being fixed addresses.
If I allocate and deallocate the same 5 classes whose size doesn’t change, will they stay at the same addresses in memory each time?
_maybe_ BUT, you most definitely should _not_ rely on that in any way.

(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v3.2) on Windows 7 SP1 64bit.

KodeZwerg

  • Hero Member
  • *****
  • Posts: 2006
  • Fifty shades of code.
    • Delphi & FreePascal
Re: IRC channel
« Reply #86 on: December 09, 2022, 02:55:12 am »
You should not deal with memory at all; the memory manager does that for you.
I disagree.  Good memory management is one of the major factors in writing a program that is resource efficient and responsive.  Languages and "systems" (read VMs) that automate memory management for the programmer run into limitations that are a direct result of the "convenience".
Can you give me an example where you needed to "defragment" memory? Just to be sure, we are now talking about RAM in this IRC channel thread, right?

440bx

  • Hero Member
  • *****
  • Posts: 3921
Re: IRC channel
« Reply #87 on: December 09, 2022, 03:49:38 am »
Can you give me an example where you needed to "defragment" memory? Just to be sure, we are now talking about RAM in this IRC channel thread, right?
Fragmentation is a symptom of, let's call it, poor memory management, and one of the ways memory is managed poorly is when a program relies heavily on a single pool of memory (usually the default process heap.)

You could rightfully ask ... what's the problem with relying on a single heap for all memory allocations? ... fair question. One of the problems, which makes testing and debugging more difficult, is that a heap is like an alphabet soup of memory allocations, i.e., all data types are found in the same bucket, in a sequence that can easily have no correlation with the order in which they were allocated. This makes it very difficult to visualize what the code did and what it should do next.

Compare that situation, which is the common situation when using a single heap, with that of an application that either creates dedicated heaps or allocates blocks of virtual memory for specific data types. IOW, one memory pool per data type. Among the many advantages: there is only one data type, which means no need to mentally identify and separate different structures. It is common for the structures to be in the order in which they were allocated (except if the program mixes deletions with allocations - which is something that should be avoided.) Since the bucket (heap) was created by the code, there is usually a pointer to that bucket that can be used in an external memory viewer to visually verify that everything is as expected while the code is running (unlike when using a debugger for the same purpose, a custom heap is always in scope, therefore always inspectable.)

Memory fragmentation in today's O/Ss results in memory being wasted.  Wasting memory is not desirable but considering how much memory is available these days, unless the waste is above 20 percent, I personally wouldn't worry about it.  What I do "worry" about is that "alphabet soup" effect a single memory pool has on data and how much harder it makes testing and program verification (and ensuring there are no memory leaks, among other things.)

To see how bad memory fragmentation can be and the problems it creates, the 16 bit version of Windows is ideal. In that version of Windows, many of the memory allocation routines returned a "handle" to memory; that handle was actually a pointer to a pointer. The reason it did that is that in 16 bit, fragmentation often got really bad, and returning a pointer to a pointer allowed the O/S to move memory around, which meant updating the first-level pointer. That wasn't a problem because the application/program wasn't using those pointers, it was using handles, which meant that, to get access to a memory block, it needed to first call APIs such as GlobalLock or GlobalFix. Memory management in 16 bit Windows was a _mess_, and that mess was almost entirely due to the fragmentation of a single "generic" memory pool (i.e., a heap.)

Fortunately, things aren't nearly that bad in 32 bit (thank paging for that), but having a single memory bucket for all memory management is, let's say, "less than ideal". One of the problems is fragmentation (which results in wasted memory) but, IMO, worse is all the complications it causes in the _human_ visualization and tracking of data, which has a direct impact on program correctness and verification (and performance, but that is a secondary concern.)


(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v3.2) on Windows 7 SP1 64bit.

Martin_fr

  • Administrator
  • Hero Member
  • *
  • Posts: 9754
  • Debugger - SynEdit - and more
    • wiki
Re: IRC channel
« Reply #88 on: December 09, 2022, 04:00:11 am »
Just to be sure, we are now talking about RAM in this IRC channel thread, right?

You mean, yet another topic? But the thread is called "IRC channel". So we do what one would do on an IRC channel. We chat about this, that and everything else...  ;)

Bogen85

  • Hero Member
  • *****
  • Posts: 595
Re: IRC channel
« Reply #89 on: December 09, 2022, 04:13:25 am »
Just to be sure, we are now talking about RAM in this IRC channel thread, right?

I already answered this.  ::)
https://forum.lazarus.freepascal.org/index.php/topic,61328.msg462496.html#msg462496

That’s interesting I didn’t think about the pointers being fixed addresses.
If I allocate and deallocate the same 5 classes whose size doesn’t change, will they stay at the same addresses in memory each time?
_maybe_ BUT, you most definitely should _not_ rely on that in any way.

It is very rare for programs that are interactive, or that otherwise don't have predictable timing and order of inputs, to follow repeatable, consistent memory allocation patterns.

Take this program for example (not the most efficient, but it demonstrates what is going on, and that is the only reason I wrote it: to explain what is being discussed).

Code: Pascal
  1. {$mode objfpc}{$H+}
  2. program ClassMemLoc;
  3.  
  4. uses sysutils;
  5.  
  6. type ABCpointers = array [1..25] of pointer;
  7.  
  8. var
  9.   n: integer = 0;
  10.   abc_pointers: ABCpointers;
  11.  
  12. type
  13.  
  14.   BooClass = class
  15.   private
  16.     id: integer;
  17.     tag: string;
  18.   public
  19.     constructor create(const _tag: string);
  20.     destructor destroy; override;
  21.   end;
  22.  
  23.   BooPointers = array of BooClass;
  24.  
  25. procedure track_init;
  26.   var i: integer;
  27.   begin
  28.     for i := low(abc_pointers) to high(abc_pointers) do abc_pointers[i] := nil;
  29.   end;
  30.  
  31. function track(const boo: pointer): string;
  32.   var i: integer;
  33.   begin
  34.     for i := low(abc_pointers) to high(abc_pointers) do begin
  35.       if (abc_pointers[i] = nil) or (abc_pointers[i] = boo) then begin
  36.         abc_pointers[i] := boo;
  37.         case i of
  38.           1: exit(' (pointer  1)');
  39.           2: exit(' (pointer  2)');
  40.           3: exit(' (pointer  3)');
  41.           4: exit(' (pointer  4)');
  42.           5: exit(' (pointer  5)');
  43.           6: exit(' (pointer  6)');
  44.           7: exit(' (pointer  7)');
  45.           8: exit(' (pointer  8)');
  46.           9: exit(' (pointer  9)');
  47.          10: exit(' (pointer 10)');
  48.          11: exit(' (pointer 11)');
  49.          12: exit(' (pointer 12)');
  50.          13: exit(' (pointer 13)');
  51.          14: exit(' (pointer 14)');
  52.          15: exit(' (pointer 15)');
  53.          16: exit(' (pointer 16)');
  54.          17: exit(' (pointer 17)');
  55.          18: exit(' (pointer 18)');
  56.          19: exit(' (pointer 19)');
  57.          20: exit(' (pointer 20)');
  58.          21: exit(' (pointer 21)');
  59.          22: exit(' (pointer 22)');
  60.          23: exit(' (pointer 23)');
  61.          24: exit(' (pointer 24)');
  62.          25: exit(' (pointer 25)');
  63.         end;
  64.       end;
  65.     end;
  66.     result := ' (untracked)';
  67.   end;
  68.  
  69. constructor BooClass.create(const _tag: string);
  70.   begin
  71.     tag := _tag;
  72.     inc(n);
  73.     id := n;
  74.     writeln(format('Boo Create: %s %02d %p %s', [tag, id, pointer(self), track(pointer(self))]));
  75.   end;
  76.  
  77. destructor BooClass.destroy;
  78.   begin
  79.     writeln(format('    Boo Destroy: %s %02d %p %s', [tag, id, pointer(self), track(pointer(self))]));
  80.     inherited;
  81.   end;
  82.  
  83. var
  84.   i : integer;
  85.   a,b,c,d: BooClass;
  86.   bp: BooPointers;
  87.  
  88.   function newBoo(const tag: string): BooClass;
  89.     begin
  90.       result := BooClass.create(tag);
  91.     end;
  92.  
  93. begin
  94.   track_init;
  95.   for i := 1 to 20 do begin
  96.     a := newBoo('a');
  97.     b := newBoo('b');
  98.     c := newBoo('c');
  99.     d := newBoo('d');
  100.     writeln;
  101.     a.free;
  102.     b.free;
  103.     c.free;
  104.     writeln;
  105.     insert(d, bp, length(bp));
  106.   end;
  107.   writeln('free all d');
  108.   for i := low(bp) to high(bp) do begin
  109.     bp[i].free;
  110.   end;
  111. end.
  112.  

Code: Text
  1. $ fpc classmemloc.pas
  2. Free Pascal Compiler version 3.2.2 [2022/08/17] for x86_64
  3. Copyright (c) 1993-2021 by Florian Klaempfl and others
  4. Target OS: Linux for x86-64
  5. Compiling classmemloc.pas
  6. classmemloc.pas(105,26) Warning: Variable "bp" of a managed type does not seem to be initialized
  7. Linking classmemloc
  8. 111 lines compiled, 0.2 sec
  9. 1 warning(s) issued
  10.  

And when run:

Code: Text
  1. Boo Create: a  1 00007FC2EF24C060  (pointer  1)
  2. Boo Create: b  2 00007FC2EF24C080  (pointer  2)
  3. Boo Create: c  3 00007FC2EF24C0A0  (pointer  3)
  4. Boo Create: d  4 00007FC2EF24C0C0  (pointer  4)
  5.  
  6.     Boo Destroy: a  1 00007FC2EF24C060  (pointer  1)
  7.     Boo Destroy: b  2 00007FC2EF24C080  (pointer  2)
  8.     Boo Destroy: c  3 00007FC2EF24C0A0  (pointer  3)
  9.  
  10. Boo Create: a  5 00007FC2EF24C080  (pointer  2)
  11. Boo Create: b  6 00007FC2EF24C060  (pointer  1)
  12. Boo Create: c  7 00007FC2EF24C0E0  (pointer  5)
  13. Boo Create: d  8 00007FC2EF24C100  (pointer  6)
  14.  
  15.     Boo Destroy: a  5 00007FC2EF24C080  (pointer  2)
  16.     Boo Destroy: b  6 00007FC2EF24C060  (pointer  1)
  17.     Boo Destroy: c  7 00007FC2EF24C0E0  (pointer  5)
  18.  
  19. Boo Create: a  9 00007FC2EF24C0A0  (pointer  3)
  20. Boo Create: b 10 00007FC2EF24C0E0  (pointer  5)
  21. Boo Create: c 11 00007FC2EF24C060  (pointer  1)
  22. Boo Create: d 12 00007FC2EF24C080  (pointer  2)
  23.  
  24.     Boo Destroy: a  9 00007FC2EF24C0A0  (pointer  3)
  25.     Boo Destroy: b 10 00007FC2EF24C0E0  (pointer  5)
  26.     Boo Destroy: c 11 00007FC2EF24C060  (pointer  1)
  27.  
  28. Boo Create: a 13 00007FC2EF24C060  (pointer  1)
  29. Boo Create: b 14 00007FC2EF24C0E0  (pointer  5)
  30. Boo Create: c 15 00007FC2EF24C0A0  (pointer  3)
  31. Boo Create: d 16 00007FC2EF24C120  (pointer  7)
  32.  
  33.     Boo Destroy: a 13 00007FC2EF24C060  (pointer  1)
  34.     Boo Destroy: b 14 00007FC2EF24C0E0  (pointer  5)
  35.     Boo Destroy: c 15 00007FC2EF24C0A0  (pointer  3)
  36.  
  37. Boo Create: a 17 00007FC2EF24C0A0  (pointer  3)
  38. Boo Create: b 18 00007FC2EF24C0E0  (pointer  5)
  39. Boo Create: c 19 00007FC2EF24C060  (pointer  1)
  40. Boo Create: d 20 00007FC2EF24C140  (pointer  8)
  41.  
  42.     Boo Destroy: a 17 00007FC2EF24C0A0  (pointer  3)
  43.     Boo Destroy: b 18 00007FC2EF24C0E0  (pointer  5)
  44.     Boo Destroy: c 19 00007FC2EF24C060  (pointer  1)
  45.  
  46. Boo Create: a 21 00007FC2EF24C060  (pointer  1)
  47. Boo Create: b 22 00007FC2EF24C0E0  (pointer  5)
  48. Boo Create: c 23 00007FC2EF24C0A0  (pointer  3)
  49. Boo Create: d 24 00007FC2EF24C160  (pointer  9)
  50.  
  51.     Boo Destroy: a 21 00007FC2EF24C060  (pointer  1)
  52.     Boo Destroy: b 22 00007FC2EF24C0E0  (pointer  5)
  53.     Boo Destroy: c 23 00007FC2EF24C0A0  (pointer  3)
  54.  
  55. Boo Create: a 25 00007FC2EF24C0A0  (pointer  3)
  56. Boo Create: b 26 00007FC2EF24C0E0  (pointer  5)
  57. Boo Create: c 27 00007FC2EF24C060  (pointer  1)
  58. Boo Create: d 28 00007FC2EF24C180  (pointer 10)
  59.  
  60.     Boo Destroy: a 25 00007FC2EF24C0A0  (pointer  3)
  61.     Boo Destroy: b 26 00007FC2EF24C0E0  (pointer  5)
  62.     Boo Destroy: c 27 00007FC2EF24C060  (pointer  1)
  63.  
  64. Boo Create: a 29 00007FC2EF24C060  (pointer  1)
  65. Boo Create: b 30 00007FC2EF24C0E0  (pointer  5)
  66. Boo Create: c 31 00007FC2EF24C0A0  (pointer  3)
  67. Boo Create: d 32 00007FC2EF24C1A0  (pointer 11)
  68.  
  69.     Boo Destroy: a 29 00007FC2EF24C060  (pointer  1)
  70.     Boo Destroy: b 30 00007FC2EF24C0E0  (pointer  5)
  71.     Boo Destroy: c 31 00007FC2EF24C0A0  (pointer  3)
  72.  
  73. Boo Create: a 33 00007FC2EF24C0A0  (pointer  3)
  74. Boo Create: b 34 00007FC2EF24C0E0  (pointer  5)
  75. Boo Create: c 35 00007FC2EF24C060  (pointer  1)
  76. Boo Create: d 36 00007FC2EF24C1C0  (pointer 12)
  77.  
  78.     Boo Destroy: a 33 00007FC2EF24C0A0  (pointer  3)
  79.     Boo Destroy: b 34 00007FC2EF24C0E0  (pointer  5)
  80.     Boo Destroy: c 35 00007FC2EF24C060  (pointer  1)
  81.  
  82. Boo Create: a 37 00007FC2EF24C060  (pointer  1)
  83. Boo Create: b 38 00007FC2EF24C0E0  (pointer  5)
  84. Boo Create: c 39 00007FC2EF24C0A0  (pointer  3)
  85. Boo Create: d 40 00007FC2EF24C1E0  (pointer 13)
  86.  
  87.     Boo Destroy: a 37 00007FC2EF24C060  (pointer  1)
  88.     Boo Destroy: b 38 00007FC2EF24C0E0  (pointer  5)
  89.     Boo Destroy: c 39 00007FC2EF24C0A0  (pointer  3)
  90.  
  91. Boo Create: a 41 00007FC2EF24C0A0  (pointer  3)
  92. Boo Create: b 42 00007FC2EF24C0E0  (pointer  5)
  93. Boo Create: c 43 00007FC2EF24C060  (pointer  1)
  94. Boo Create: d 44 00007FC2EF24C200  (pointer 14)
  95.  
  96.     Boo Destroy: a 41 00007FC2EF24C0A0  (pointer  3)
  97.     Boo Destroy: b 42 00007FC2EF24C0E0  (pointer  5)
  98.     Boo Destroy: c 43 00007FC2EF24C060  (pointer  1)
  99.  
  100. Boo Create: a 45 00007FC2EF24C060  (pointer  1)
  101. Boo Create: b 46 00007FC2EF24C0E0  (pointer  5)
  102. Boo Create: c 47 00007FC2EF24C0A0  (pointer  3)
  103. Boo Create: d 48 00007FC2EF24C220  (pointer 15)
  104.  
  105.     Boo Destroy: a 45 00007FC2EF24C060  (pointer  1)
  106.     Boo Destroy: b 46 00007FC2EF24C0E0  (pointer  5)
  107.     Boo Destroy: c 47 00007FC2EF24C0A0  (pointer  3)
  108.  
  109. Boo Create: a 49 00007FC2EF24C0A0  (pointer  3)
  110. Boo Create: b 50 00007FC2EF24C0E0  (pointer  5)
  111. Boo Create: c 51 00007FC2EF24C060  (pointer  1)
  112. Boo Create: d 52 00007FC2EF24C240  (pointer 16)
  113.  
  114.     Boo Destroy: a 49 00007FC2EF24C0A0  (pointer  3)
  115.     Boo Destroy: b 50 00007FC2EF24C0E0  (pointer  5)
  116.     Boo Destroy: c 51 00007FC2EF24C060  (pointer  1)
  117.  
  118. Boo Create: a 53 00007FC2EF24C060  (pointer  1)
  119. Boo Create: b 54 00007FC2EF24C0E0  (pointer  5)
  120. Boo Create: c 55 00007FC2EF24C0A0  (pointer  3)
  121. Boo Create: d 56 00007FC2EF24C260  (pointer 17)
  122.  
  123.     Boo Destroy: a 53 00007FC2EF24C060  (pointer  1)
  124.     Boo Destroy: b 54 00007FC2EF24C0E0  (pointer  5)
  125.     Boo Destroy: c 55 00007FC2EF24C0A0  (pointer  3)
  126.  
  127. Boo Create: a 57 00007FC2EF24C0A0  (pointer  3)
  128. Boo Create: b 58 00007FC2EF24C0E0  (pointer  5)
  129. Boo Create: c 59 00007FC2EF24C060  (pointer  1)
  130. Boo Create: d 60 00007FC2EF24C280  (pointer 18)
  131.  
  132.     Boo Destroy: a 57 00007FC2EF24C0A0  (pointer  3)
  133.     Boo Destroy: b 58 00007FC2EF24C0E0  (pointer  5)
  134.     Boo Destroy: c 59 00007FC2EF24C060  (pointer  1)
  135.  
  136. Boo Create: a 61 00007FC2EF24C060  (pointer  1)
  137. Boo Create: b 62 00007FC2EF24C0E0  (pointer  5)
  138. Boo Create: c 63 00007FC2EF24C0A0  (pointer  3)
  139. Boo Create: d 64 00007FC2EF24C2A0  (pointer 19)
  140.  
  141.     Boo Destroy: a 61 00007FC2EF24C060  (pointer  1)
  142.     Boo Destroy: b 62 00007FC2EF24C0E0  (pointer  5)
  143.     Boo Destroy: c 63 00007FC2EF24C0A0  (pointer  3)
  144.  
  145. Boo Create: a 65 00007FC2EF24C0A0  (pointer  3)
  146. Boo Create: b 66 00007FC2EF24C0E0  (pointer  5)
  147. Boo Create: c 67 00007FC2EF24C060  (pointer  1)
  148. Boo Create: d 68 00007FC2EF24C2C0  (pointer 20)
  149.  
  150.     Boo Destroy: a 65 00007FC2EF24C0A0  (pointer  3)
  151.     Boo Destroy: b 66 00007FC2EF24C0E0  (pointer  5)
  152.     Boo Destroy: c 67 00007FC2EF24C060  (pointer  1)
  153.  
  154. Boo Create: a 69 00007FC2EF24C060  (pointer  1)
  155. Boo Create: b 70 00007FC2EF24C0E0  (pointer  5)
  156. Boo Create: c 71 00007FC2EF24C0A0  (pointer  3)
  157. Boo Create: d 72 00007FC2EF24C2E0  (pointer 21)
  158.  
  159.     Boo Destroy: a 69 00007FC2EF24C060  (pointer  1)
  160.     Boo Destroy: b 70 00007FC2EF24C0E0  (pointer  5)
  161.     Boo Destroy: c 71 00007FC2EF24C0A0  (pointer  3)
  162.  
  163. Boo Create: a 73 00007FC2EF24C0A0  (pointer  3)
  164. Boo Create: b 74 00007FC2EF24C0E0  (pointer  5)
  165. Boo Create: c 75 00007FC2EF24C060  (pointer  1)
  166. Boo Create: d 76 00007FC2EF24C300  (pointer 22)
  167.  
  168.     Boo Destroy: a 73 00007FC2EF24C0A0  (pointer  3)
  169.     Boo Destroy: b 74 00007FC2EF24C0E0  (pointer  5)
  170.     Boo Destroy: c 75 00007FC2EF24C060  (pointer  1)
  171.  
  172. Boo Create: a 77 00007FC2EF24C060  (pointer  1)
  173. Boo Create: b 78 00007FC2EF24C0E0  (pointer  5)
  174. Boo Create: c 79 00007FC2EF24C0A0  (pointer  3)
  175. Boo Create: d 80 00007FC2EF24C320  (pointer 23)
  176.  
  177.     Boo Destroy: a 77 00007FC2EF24C060  (pointer  1)
  178.     Boo Destroy: b 78 00007FC2EF24C0E0  (pointer  5)
  179.     Boo Destroy: c 79 00007FC2EF24C0A0  (pointer  3)
  180.  
  181. free all d
  182.     Boo Destroy: d  4 00007FC2EF24C0C0  (pointer  4)
  183.     Boo Destroy: d  8 00007FC2EF24C100  (pointer  6)
  184.     Boo Destroy: d 12 00007FC2EF24C080  (pointer  2)
  185.     Boo Destroy: d 16 00007FC2EF24C120  (pointer  7)
  186.     Boo Destroy: d 20 00007FC2EF24C140  (pointer  8)
  187.     Boo Destroy: d 24 00007FC2EF24C160  (pointer  9)
  188.     Boo Destroy: d 28 00007FC2EF24C180  (pointer 10)
  189.     Boo Destroy: d 32 00007FC2EF24C1A0  (pointer 11)
  190.     Boo Destroy: d 36 00007FC2EF24C1C0  (pointer 12)
  191.     Boo Destroy: d 40 00007FC2EF24C1E0  (pointer 13)
  192.     Boo Destroy: d 44 00007FC2EF24C200  (pointer 14)
  193.     Boo Destroy: d 48 00007FC2EF24C220  (pointer 15)
  194.     Boo Destroy: d 52 00007FC2EF24C240  (pointer 16)
  195.     Boo Destroy: d 56 00007FC2EF24C260  (pointer 17)
  196.     Boo Destroy: d 60 00007FC2EF24C280  (pointer 18)
  197.     Boo Destroy: d 64 00007FC2EF24C2A0  (pointer 19)
  198.     Boo Destroy: d 68 00007FC2EF24C2C0  (pointer 20)
  199.     Boo Destroy: d 72 00007FC2EF24C2E0  (pointer 21)
  200.     Boo Destroy: d 76 00007FC2EF24C300  (pointer 22)
  201.     Boo Destroy: d 80 00007FC2EF24C320  (pointer 23)
  202.  

It takes a few iterations for a, b, c to fall into a predictable pattern, but they eventually do.
But this is a contrived example, as one can't actually predict the input order and timing in most programs.
In this case the input is from a loop, and what is being allocated is always the same size, so the pattern of allocation requests is obviously predictable.

Even so, one can't rely on the allocator falling into this predictable pattern.
The 1, 5, 3 and 3, 5, 1 patterns for a, b, c are what ended up being the predictable patterns, but the allocator could have assigned those addresses to d, which may have resulted in no predictable pattern.

« Last Edit: December 09, 2022, 01:43:29 pm by Bogen85 »

 
