Using handles that point to pointers to memory locations seems like a good idea: being able to defragment memory matters for programs expected to run for a long time without stopping.
The logic of defragmenting memory seems fairly simple, kind of like defragmenting people in a theatre by making them all scoot over to remove the gaps.
But at a higher level, how does this translate into use by the data structures and currently live stack frames that are using the pointers?
If you move memory blocks around in a processor-level multi-threaded application, everything using those blocks is going to crash.
The handles would then have to be indirect pointers (so regular code never gets actual pointers any more). Even with that indirection, there would need to be synchronization to make sure nothing is using a live pointer, so something like a stop-the-world garbage collection sweep. And on top of that, the handles are now essentially data structures themselves, so what is going to defragment them?
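To make the double-indirection idea concrete, here is a minimal single-threaded sketch (all names hypothetical). Code holds an integer handle, never a raw pointer; the real pointer lives in a table slot that compaction is free to rewrite. Note that any raw pointer obtained via h_ptr() before a compaction is dangling afterwards, which is exactly the synchronization problem described above.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical handle table: code holds an index, never a raw pointer.
 * The real pointer (offset) lives in one slot the compactor may rewrite. */
#define MAX_HANDLES 64
#define HEAP_SIZE   1024

static char   heap[HEAP_SIZE];
static size_t heap_top = 0;

typedef struct {
    size_t offset;   /* block start within heap */
    size_t size;     /* 0 => table slot is unused */
    int    live;     /* freed blocks leave gaps until compaction */
} Slot;

static Slot table[MAX_HANDLES];

typedef int Handle;

Handle h_alloc(size_t size) {
    for (Handle h = 0; h < MAX_HANDLES; h++) {
        if (table[h].size == 0) {
            table[h].offset = heap_top;
            table[h].size   = size;
            table[h].live   = 1;
            heap_top += size;
            return h;
        }
    }
    return -1;  /* heap/table exhausted */
}

void h_free(Handle h) { table[h].live = 0; }

/* Dereference: the returned pointer is only valid until the next
 * compaction -- this is the "stop the world" constraint. */
void *h_ptr(Handle h) { return &heap[table[h].offset]; }

/* Compaction: scoot live blocks down, theatre-seat style, and patch
 * the table. Handles remain valid; raw pointers do not. */
void compact(void) {
    size_t dst = 0;
    for (Handle h = 0; h < MAX_HANDLES; h++) {
        if (table[h].size && table[h].live) {
            memmove(&heap[dst], &heap[table[h].offset], table[h].size);
            table[h].offset = dst;
            dst += table[h].size;
        } else {
            table[h].size = 0;  /* reclaim the slot */
        }
    }
    heap_top = dst;
}
```

For example, after allocating three blocks and freeing the middle one, compact() slides the third block down: its address from h_ptr() changes, but access through the handle still works.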
You can't defragment the virtual address space, as that would require changing the pointers that are stored e.g. on the stack or wherever. You can only defragment your own heap, by recombining deallocated areas or whatever, depending on your heap management algorithm.
My point was that, theoretically, by using the MMU, fragmentation could be avoided by redirecting/merging physical RAM pages in the address translation table (which could be seen as an additional level of redirection, like handles in GlobalLock). But since the MMU is managed exclusively by the operating system, this is impractical unless one is working on bare metal.
If we are talking about defragmenting at the MMU level, then with processor-level multi-threading that is still likely going to mean stopping the world for the process whose region is being defragmented.
Then there is page size thrown into the mix, plus OS-level memory allocation and freeing both being expensive time-wise (on OSes where crossing the syscall boundary is expensive).
This bug report highlights the page size issue.
https://bugzilla.redhat.com/show_bug.cgi?id=2001569

"Description of problem: At the moment the aarch64 kernel builds of RHEL8 assume a 64kB pagesize, which works fine most of the time. However when trying to run in a virtualized environment on Apple M1 devices via Parallels Desktop it is not possible to boot RHEL8 because the M1 chips only support 4kB & 16kB pagesizes. Ubuntu and Debian are compatible with Parallels on these devices, so it should be possible for RHEL as well."
On Linux at least (and this would likely apply to other *nix systems like the common BSDs), allocation at the syscall level is both expensive and not fine-grained: you can't just allocate 32 bytes for a string data structure, you have to allocate at least the page size, be that 4K, 16K, or 64K as discussed in the above bug report. So most language runtimes do their own heap management, and the individual pointers to data structures are not mapped at the logical-to-physical MMU translation layer.
As such, the runtimes do their own pool management, often allocating much larger memory pools than what was requested, and they don't give those pools back to the operating system right away when data is freed (which would be expensive); instead they make the memory blocks within those pools available to other parts of your program.
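A toy version of that pool strategy, with all names hypothetical: one large region is obtained from the OS up front (simulated here with a static array), sliced into fixed-size blocks, and freed blocks go onto a free list for reuse inside the process rather than back to the kernel.

```c
#include <stddef.h>

/* Hypothetical fixed-size-block pool. A real runtime would mmap this
 * region once; a static array stands in for it here. */
#define BLOCK_SIZE  32
#define POOL_BLOCKS 128

static union block {
    union block *next;          /* link, used while on the free list */
    char payload[BLOCK_SIZE];   /* user data, used while allocated */
} pool[POOL_BLOCKS];

static union block *free_list = NULL;
static size_t used = 0;         /* blocks ever carved from the pool */

void *pool_alloc(void) {
    if (free_list) {                 /* recycle a previously freed block */
        union block *b = free_list;
        free_list = b->next;
        return b;
    }
    if (used < POOL_BLOCKS)          /* carve a fresh block from the pool */
        return &pool[used++];
    return NULL;                     /* pool exhausted */
}

void pool_free(void *p) {
    union block *b = p;
    b->next = free_list;             /* no syscall: the block stays ours */
    free_list = b;
}
```

Allocate, free, allocate again and you get the same block back: the memory never left the process, which is the cheapness the runtimes are buying, at the cost of the pool-level fragmentation discussed above.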
Most of the memory fragmentation being discussed here, I'm assuming, is taking place at the language runtime level, and defragmenting at that level works with logical addresses, not physical addresses.
And just using "handles" everywhere is not a simple solution to the problem, as I discussed at the beginning of this reply.