Strange memory allocation problem


MarkMLl:

--- Quote from: jollytall on December 01, 2024, 04:08:59 pm ---My guess is that it is because the array is continuously resized and moved in memory, leaving a lot of holes. So I tried tricks. E.g. at the beginning I set the array size of my points to a very large number and then set it back to the starting point (100 points to start with), from where it can grow again. I thought that with this trick the array would reserve the large space and would not need to be moved around as it grows (speed, memory efficiency). Unfortunately the used memory can be seen to jump down, so the heap space is probably not kept reserved.

--- End quote ---

BTDT, you have to allocate a "reasonable" maximum size and then keep track of how much space you're actually using. Thaddy has in the past recommended that when you hit the limit you multiply the size by the Golden Ratio, but didn't provide any rationale.
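The "allocate generously, track usage" idea can be sketched as below. This is an illustrative C sketch, not anyone's actual code: the names (GrowBuf, grow_push) are invented, and the golden-ratio growth factor is the one MarkMLl attributes to Thaddy; any factor > 1 gives the same amortised behaviour.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical growable array: capacity is what we have allocated,
 * count is how much we actually use. When full, capacity is multiplied
 * by ~1.618 (the Golden Ratio mentioned above), so the number of
 * reallocations grows only logarithmically with the element count. */
typedef struct {
    double *data;
    size_t  count;     /* elements in use */
    size_t  capacity;  /* elements allocated */
} GrowBuf;

int grow_push(GrowBuf *b, double v)
{
    if (b->count == b->capacity) {
        size_t newcap = b->capacity ? (size_t)(b->capacity * 1.618) + 1 : 16;
        double *p = realloc(b->data, newcap * sizeof *p);
        if (!p)
            return -1;        /* out of memory: old buffer is still valid */
        b->data = p;
        b->capacity = newcap;
    }
    b->data[b->count++] = v;
    return 0;
}
```

Shrinking is simply `b->count = n;` with the allocation left alone, which is exactly the "lastindex" trick discussed later in the thread.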

There's some classic Knuth code where he implements multiple heaps, with different heaps being used for different-sized allocations... I'm not sure I've got a copy but it's probably in his Magnum Opus (The Art of Computer Programming).

MarkMLl

440bx:

--- Quote from: jollytall on December 01, 2024, 04:08:59 pm ---I think I have a workaround (it needs a rewrite of the program): I reserve and keep the array at a very large size and use a "lastindex" variable to track how large it really is.

--- End quote ---
Disclaimer: my knowledge of Linux internals is _zero_.

However, I believe that, like most (all?) modern O/Ss, it is demand paged. If that is the case, you can allocate (fully commit) a block of virtual memory larger than the amount required by the largest array you anticipate in the worst case.

In a demand-paged O/S, physical memory is only allocated for the pages that actually contain data.

The only thing to be aware of is that virtual address space (not actual memory) is consumed in the full allocated amount. This can occasionally be a problem on 32-bit systems.

In Windows the allocation is done using VirtualAlloc, Linux has an equivalent function but I don't remember its name.

Basically that would implement what you mentioned without using more actual memory than necessary.

HTH.

Thaddy:
It is just a dead pointer and you forgot to allocate memory for it. <sigh>

LV:
What you described closely resembles the functionality of Particle-in-Cell codes. This category of tasks is well-developed, particularly for parallel computing, and I assume any issues related to memory allocation have been addressed. It may be worthwhile to explore existing software implementations of these tasks.

jollytall:
Thanks, I will do the workaround and will let you know. I will make the large array a fixed-size one (dynamic, on the heap), as I know its final size in advance. The smaller ones I need to make truly dynamic, as reserving the maximum in each cell would be too much when most of the cells will be empty.

@LV: I will definitely search for it.

@Thaddy, I do not get the "dead pointer" hint. Would that be a memory leak, i.e. something that heaptrc would find?
