Have I bought a useless computer?
The data must be accessed repeatedly out of order. The alternative is to re-read the data file perhaps 2000 times. There is nothing wrong with large arrays per se, although I would be happy to avoid them.
Anyway, the problem appears to be with the iMac.
> Is it an FPC problem or an OS issue?

Well, Jonas is the expert: an OS issue.
> Whether int64 and qword are taken as ordinal seems to depend on 32-bit / 64-bit.

No, it is not. The code compiles even on 32-bit ARM. As Jonas pointed out, this is an OS limitation.
I think the TS's main intention is not to focus on the use case; he/she just wanted to push FPC to its limits to see how much memory it can handle, or at least to make sure it can handle large amounts of memory.
> No, it is not. The code compiles even on 32-bit ARM. As Jonas pointed out, this is an OS limitation.

Interesting. My result with the cross-compiler for arm-linux is:
> First, my delay in replying is because I live in Australia and the time difference causes problems.

No, it does not.
Jonas wrote "OS X does not support more than 4GB of statically allocated data, even on 64 bit platforms (probably for efficiency reasons)."
But this contradicts the Apple developer documentation.
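For illustration, the declaration under discussion is presumably something like the following (the first post is not quoted here; the name BigArray and the roughly 9.6GB size are taken from the later discussion). On OS X this fails because the image would contain more than 4GB of statically allocated data:

```pascal
program StaticBig;
{$mode objfpc}

var
  // 1 200 000 000 * 8 bytes = ~9.6 GB of statically allocated data.
  // On OS X this exceeds the 4GB static-data limit described above.
  BigArray: array[0..1199999999] of Int64;

begin
  BigArray[0] := 1;
end.
```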
> Also, it seems pointless to have 64-bit addressing if it cannot be used.

It can perfectly well be used, just not for statically allocated data. You can dynamically allocate as much as you want (up to the limits mentioned in the Apple developer documentation). In FPC you can do this using, among others, getmem/freemem/new/dispose, dynamic arrays, and classes.
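A minimal sketch of the dynamic-array route (the ~9.6 GB size is again assumed from this thread):

```pascal
program BigDynArray;
{$mode objfpc}

var
  BigArray: array of Int64;  // dynamic array: lives on the heap, not in the data segment
  i: SizeInt;

begin
  // Allocates ~9.6 GB at run time from the heap manager, so the
  // 4GB static-data limit does not apply.
  SetLength(BigArray, 1200000000);
  for i := 0 to High(BigArray) do
    BigArray[i] := i;
  writeln('Last element: ', BigArray[High(BigArray)]);
end.
```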
> This test appears to show that 64-bit addressing is working. If so, why don't the declarations in the first post work?

It's because the limit is related to addresses that are encoded directly in an instruction. As long as such an address is within 4GB of the instruction that accesses it, things will work.

The address that is encoded in your test program is the start address of BigArray, and that address is well within 4GB of your main program. If you tried to access BigArray[high(BigArray)] directly, you would again get an error, because that address is located 9.6GB away from your main program.

The reason it works if you use an index is that, again, only the start address of BigArray is directly encoded in the instruction; afterwards the index is loaded, multiplied by 8, and added to this first address (all using, semantically, separate operations). So the part of the address that is encoded in the instruction still lies within the 4GB limit.
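To make the distinction concrete, here is a sketch (assuming a target where the large static array links at all; the constant-index line is commented out because it is the access that would exceed the 4GB limit):

```pascal
program AddrLimit;
{$mode objfpc}

var
  BigArray: array[0..1199999999] of Int64;  // ~9.6 GB static array
  i: SizeInt;

begin
  // Works: the instruction encodes only the start address of BigArray;
  // the index is loaded into a register, scaled by 8 and added at run time.
  i := High(BigArray);
  BigArray[i] := 42;

  // Would fail: with a constant index the offset can be folded into the
  // encoded address, putting it ~9.6GB away from the instruction.
  // BigArray[High(BigArray)] := 42;

  writeln(BigArray[i]);
end.
```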
Why doesn't it bomb out with EOutOfMemory?