Large (huge) pages in Lazarus

B4R:
Previously, I worked with Delphi 5-8 and a little bit with XE.
Now, I am looking to create an application that scans through large files and computes HMAC-SHA256 over blocks of data contained in them. I figured that Lazarus with Indy could handle the task.

Ideally, the buffer in which these files will be kept should be allocated as large pages, also known as huge pages. This is trivial for me in C++, but I am very reluctant to code this program in C++ because it needs a GUI, and I am not up to speed with that.

I searched but failed to find information on whether they are available in Lazarus. Are they, and if so, how can I enable them?

Many thanks!

440bx:
You didn't mention the O/S you're using. 

If you're using Windows then you use VirtualAlloc (or its Ex sibling) with the MEM_LARGE_PAGES flag to allocate large pages for a given allocation (typically 2MB on x64; GetLargePageMinimum tells you the exact size on your system).
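For illustration, a minimal sketch of such an allocation in Free Pascal (untested; the MEM_LARGE_PAGES constant and the GetLargePageMinimum import are declared locally in case your Windows unit does not provide them, and the call will fail unless the SeLockMemoryPrivilege is enabled first, more on that further down the thread):

--- Code: ---
program LargePageDemo;

{$mode objfpc}

uses
  Windows;

const
  MEM_LARGE_PAGES = $20000000;  // flag value from the Windows SDK

// kernel32 export; declared here in case the Windows unit lacks it
function GetLargePageMinimum: PtrUInt; stdcall; external 'kernel32.dll';

var
  LPMin, Size: PtrUInt;
  Buf: Pointer;
begin
  LPMin := GetLargePageMinimum;           // typically 2MB on x64
  if LPMin = 0 then
  begin
    WriteLn('Large pages are not supported on this system');
    Halt(1);
  end;

  // The size passed with MEM_LARGE_PAGES must be a multiple of the
  // large-page minimum, so round the request up.
  Size := 64 * 1024 * 1024;               // example: 64MB buffer
  Size := (Size + LPMin - 1) and not (LPMin - 1);

  Buf := VirtualAlloc(nil, Size,
                      MEM_RESERVE or MEM_COMMIT or MEM_LARGE_PAGES,
                      PAGE_READWRITE);
  if Buf = nil then
    // typical failures: ERROR_PRIVILEGE_NOT_HELD (1314) without the
    // "Lock pages in memory" right, or ERROR_NO_SYSTEM_RESOURCES
    // (1450) when physical memory is too fragmented
    WriteLn('VirtualAlloc failed, error ', GetLastError)
  else
  begin
    WriteLn(Size div (1024 * 1024), 'MB of large pages allocated');
    VirtualFree(Buf, 0, MEM_RELEASE);
  end;
end.
--- End code ---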

If you're using a different O/S then I cannot provide an answer.

HTH.


B4R:
Windows.

Is VirtualAlloc wrapped in Lazarus, or am I to import it from Windowze?

And will only my own buffers be allocated as large pages? I mean, when I call Indy to compute the HMACs, will it use regular pages internally? Is there a way to use large pages globally, for all allocations in the whole process?

Red_prig:
If you want this globally, then you need to write your own custom memory manager; otherwise each allocation is an individual VirtualAlloc call.
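To make that concrete: FPC lets you install such a manager with SetMemoryManager. Below is a deliberately naive, untested sketch; the field names follow the TMemoryManager record of recent FPC RTLs, so check the system unit of your version. It spends at least one whole page per GetMem call, which is exactly why doing this globally is usually a bad idea:

--- Code: ---
unit LPHeap;  // must be the FIRST unit in the program's uses clause so
              // it is installed before any heap allocation happens;
              // anything allocated earlier will leak when freed here

{$mode objfpc}

interface

implementation

uses
  Windows;

const
  MEM_LARGE_PAGES = $20000000;

function LPGetMem(Size: PtrUInt): Pointer;
begin
  // One VirtualAlloc per GetMem: simple, but wastes at least one
  // whole page per allocation.
  Result := VirtualAlloc(nil, Size,
                         MEM_RESERVE or MEM_COMMIT or MEM_LARGE_PAGES,
                         PAGE_READWRITE);
  if Result = nil then  // no privilege or fragmented RAM: fall back
    Result := VirtualAlloc(nil, Size, MEM_RESERVE or MEM_COMMIT,
                           PAGE_READWRITE);
end;

function LPFreeMem(P: Pointer): PtrUInt;
begin
  VirtualFree(P, 0, MEM_RELEASE);
  Result := 0;
end;

function LPFreeMemSize(P: Pointer; Size: PtrUInt): PtrUInt;
begin
  Result := LPFreeMem(P);
end;

function LPMemSize(P: Pointer): PtrUInt;
var
  Info: TMemoryBasicInformation;
begin
  // The region VirtualAlloc created is the usable size.
  if VirtualQuery(P, @Info, SizeOf(Info)) = SizeOf(Info) then
    Result := Info.RegionSize
  else
    Result := 0;
end;

function LPReAllocMem(var P: Pointer; Size: PtrUInt): Pointer;
var
  OldSize: PtrUInt;
  NewP: Pointer;
begin
  NewP := nil;
  if Size > 0 then
  begin
    NewP := LPGetMem(Size);
    if P <> nil then
    begin
      OldSize := LPMemSize(P);
      if OldSize > Size then OldSize := Size;
      Move(P^, NewP^, OldSize);
    end;
  end;
  if P <> nil then LPFreeMem(P);
  P := NewP;
  Result := NewP;
end;

var
  MM: TMemoryManager;

initialization
  GetMemoryManager(MM);          // keep the default thread/status hooks
  MM.GetMem      := @LPGetMem;
  MM.AllocMem    := @LPGetMem;   // MEM_COMMIT memory is zero-filled
  MM.FreeMem     := @LPFreeMem;
  MM.FreeMemSize := @LPFreeMemSize;
  MM.ReAllocMem  := @LPReAllocMem;
  MM.MemSize     := @LPMemSize;
  SetMemoryManager(MM);
end.
--- End code ---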

440bx:

--- Quote from: B4R on June 29, 2024, 07:56:01 pm ---Windows.

Is VirtualAlloc wrapped in Lazarus, or am I to import it from Windowze?

And will only my own buffers be allocated as large pages? I mean, when I call Indy to compute the HMACs, will it use regular pages internally? Is there a way to use large pages globally, for all allocations in the whole process?

--- End quote ---
Answer to the first question: it's included in the Windows unit.  As long as you have "uses Windows", you can use VirtualAlloc and VirtualAllocEx.

Answer to the second question: _you_ tell VirtualAlloc, for each individual allocation, whether it should use large pages (via the MEM_LARGE_PAGES flag).  You need to do this for every buffer you allocate.  Therefore, controlling the page size can only be done for the buffers you allocate.   Strictly speaking that last sentence is not absolutely true: it is possible to force code in other used units to also use large pages (by replacing the memory manager, as Red_prig pointed out) but, that's a lot, and I really mean a LOT, more complicated.
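For completeness, enabling the required privilege typically looks like the following untested sketch. The account must already hold the "Lock pages in memory" user right (granted by an administrator via secpol.msc), otherwise nothing is enabled; parameter styles may differ slightly between FPC Windows-header versions:

--- Code: ---
program EnableLPPriv;

{$mode objfpc}

uses
  Windows;

// Returns True when SeLockMemoryPrivilege is enabled for this process.
function EnableLockMemoryPrivilege: Boolean;
var
  Token: THandle;
  Tp: TOKEN_PRIVILEGES;
begin
  Result := False;
  if not OpenProcessToken(GetCurrentProcess,
                          TOKEN_ADJUST_PRIVILEGES or TOKEN_QUERY,
                          @Token) then
    Exit;
  try
    if not LookupPrivilegeValue(nil, 'SeLockMemoryPrivilege',
                                @Tp.Privileges[0].Luid) then
      Exit;
    Tp.PrivilegeCount := 1;
    Tp.Privileges[0].Attributes := SE_PRIVILEGE_ENABLED;
    if not AdjustTokenPrivileges(Token, False, @Tp, 0, nil, nil) then
      Exit;
    // AdjustTokenPrivileges can "succeed" without granting anything,
    // so the error code must be checked explicitly.
    Result := GetLastError = ERROR_SUCCESS;
  finally
    CloseHandle(Token);
  end;
end;

begin
  if EnableLockMemoryPrivilege then
    WriteLn('SeLockMemoryPrivilege enabled')
  else
    WriteLn('Could not enable privilege, error ', GetLastError);
end.
--- End code ---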

Therefore, it is neither possible nor desirable to use large pages for all allocations in the whole process.  It makes no sense to grab a 2MB page to store a 10 character window title or class name or whathaveyou. 

Another thing which is very important when allocating large pages is that the O/S must find a single _physical_ block of contiguous memory (2MB on x64) for each large page.  The problem is, since most allocations consist of 4KB pages, it is quite possible that there are few or no such contiguous blocks left.  IOW, if you ask for 1GB made of large pages, the O/S needs to find 512 physically contiguous blocks of 2MB each.  Physical memory fragmentation may leave fewer 2MB blocks than that, in which case the allocation will fail.

OTOH, that same 1GB in regular pages is 262,144 pages of 4KB, and none of those pages need to be physically consecutive (though many will be), therefore that allocation is almost guaranteed to succeed because Windows can shuffle things around to satisfy the request.  Another big disadvantage is that, unlike regular-size pages, large pages are _not_ doled out on demand: the entire allocation is committed to physical memory, and locked there, up front (because Windows won't take the risk of later running out of 2MB blocks, a problem it doesn't have to face when dealing with 4KB pages.)

The bottom line is this: it is _extremely_ rare for a user mode program to derive a significant benefit from using large pages.  From what you described, it sounds like you should simply map the files (see the sketch below).  It's very doubtful you'd see a perceptible gain from large pages.
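A minimal, untested sketch of that suggestion using the plain Win32 mapping calls; the ProcessMappedFile name and the HMAC step are placeholders for your own routine:

--- Code: ---
program MapFileDemo;

{$mode objfpc}

uses
  Windows;

// Maps a whole file read-only and hands the view to the caller's
// block-processing code. On 32-bit, map smaller windows with
// MapViewOfFile offsets instead of the whole file at once.
procedure ProcessMappedFile(const FileName: UnicodeString);
var
  hFile, hMap: THandle;
  View: Pointer;
  SizeLo, SizeHi: DWORD;
  FileSize: Int64;
begin
  hFile := CreateFileW(PWideChar(FileName), GENERIC_READ, FILE_SHARE_READ,
                       nil, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  if hFile = INVALID_HANDLE_VALUE then Exit;
  try
    SizeLo := GetFileSize(hFile, @SizeHi);
    FileSize := (Int64(SizeHi) shl 32) or SizeLo;
    if FileSize = 0 then Exit;  // CreateFileMapping rejects empty files

    hMap := CreateFileMappingW(hFile, nil, PAGE_READONLY, 0, 0, nil);
    if hMap = 0 then Exit;
    try
      View := MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);  // whole file
      if View = nil then Exit;
      try
        // Feed the FileSize bytes at View^ to the HMAC-SHA256 routine
        // block by block; the OS pages the file in on demand.
        WriteLn('Mapped ', FileSize, ' bytes');
      finally
        UnmapViewOfFile(View);
      end;
    finally
      CloseHandle(hMap);
    end;
  finally
    CloseHandle(hFile);
  end;
end;

begin
  if ParamCount >= 1 then
    ProcessMappedFile(UnicodeString(ParamStr(1)));
end.
--- End code ---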

HTH.
