The following code fails to write to a file:
procedure writefail; // requires Classes in the uses clause
var
  x: TFileStream;
  p: pointer;
begin
  p := GetMem(MaxLongint + 1);
  x := TFileStream.Create('/home/user/bin.txt', fmCreate);
  x.WriteBuffer(p^, MaxLongint + 1); // fails to write the (garbage) buffer on linux-x86_64
  x.Free;
  FreeMem(p); // missing in the original snippet
end;
The result of "MaxLongint + 1" is Int64. Try "MaxLongint - 1".

In FreePascal, simply use High(<type>);
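Besides staying below the limit, a workaround for the original snippet is to issue the write in chunks that fit in a LongInt. A minimal sketch, assuming the per-call limit is the culprit (WriteLarge and the 256 MB chunk size are invented for illustration):

uses
  Classes;

// Split one huge write into sub-2 GB calls to WriteBuffer.
procedure WriteLarge(Stream: TStream; Data: PByte; Size: Int64);
const
  ChunkMax = 256 * 1024 * 1024; // arbitrary, comfortably below High(LongInt)
var
  chunk: LongInt;
begin
  while Size > 0 do
  begin
    if Size > ChunkMax then
      chunk := ChunkMax
    else
      chunk := LongInt(Size);
    Stream.WriteBuffer(Data^, chunk);
    Inc(Data, chunk); // advance the source pointer
    Dec(Size, chunk);
  end;
end;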
AFAIK address space is limited to High(QWord), which is a lot, but single reads/writes are limited to High(NativeInt), which is indeed 2G.
In streams.inc we have:
procedure TStream.ReadBuffer(var Buffer: TBytes; Offset, Count: NativeInt);
procedure TStream.WriteBuffer(const Buffer: TBytes; Offset, Count: NativeInt);
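With those overloads, Offset and Count are 64-bit on 64-bit targets. A usage sketch, assuming an FPC version that ships them (file name and sizes invented):

uses
  Classes, SysUtils;

var
  data: TBytes;
  fs: TFileStream;
begin
  SetLength(data, 4096);
  fs := TFileStream.Create('/tmp/demo.bin', fmCreate);
  try
    // Write the second half of the array; Offset and Count are NativeInt.
    fs.WriteBuffer(data, 2048, 2048);
  finally
    fs.Free;
  end;
end.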
In the context of 32 bit, Sarah. That's why I suggested it to express the address space.
Note that in practice the limit is the available memory; otherwise an EOutOfMemory exception is raised.
Note that for 32-bit Windows there is a PE flag (since Windows 7) that extends the available memory to 4G, but still with the limitation for single reads/writes.
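For reference, FPC can set PE header flags at compile time; a hedged sketch, assuming the {$SETPEFLAGS} directive and the standard IMAGE_FILE_LARGE_ADDRESS_AWARE bit ($20):

{$IFDEF WINDOWS}
  // $20 = IMAGE_FILE_LARGE_ADDRESS_AWARE in the PE header:
  // marks a 32-bit image as able to use addresses above 2 GB.
  {$SETPEFLAGS $20}
{$ENDIF}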
The following function returns a negative number when passed a 3 GB file. The problem is in FileSeek.
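The function itself did not survive in the quote, but a hedged reconstruction of the classic pattern (GetFileSize is a hypothetical name): a bare 0 selects the LongInt overload of FileSeek, whose LongInt result wraps negative past 2 GB; forcing the Int64 overload avoids that.

{$mode objfpc}
uses
  SysUtils;

// Hypothetical reconstruction, not the poster's actual code.
function GetFileSize(const FileName: string): Int64;
var
  h: THandle;
begin
  h := FileOpen(FileName, fmOpenRead or fmShareDenyNone);
  if h = feInvalidHandle then
    Exit(-1);
  try
    // Result := FileSeek(h, 0, fsFromEnd);     // LongInt overload: wraps negative beyond 2 GB
    Result := FileSeek(h, Int64(0), fsFromEnd); // Int64 overload: correct for large files
  finally
    FileClose(h);
  end;
end;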
What is the speed increase of using the largest buffer the OS will give you over that 1 GB buffer? Is that measurable?
I mean, you need at least an SSD, which writes in blocks the size of megabytes. Many random reads will overflow its cache memory, because the individual blocks are too small.
So, the difference between one and many IOPS?
Then again, the OS will try to use all free memory as disk cache as well.
What is faster: a single large buffer managed by the application with only a small cache managed by the OS, or the other way around? Does the OS limit the cache size for a single IOP?
In other words: do use a buffer and don't write each byte individually, but leave the rest to the OS and hardware?
Maybe if you write disk-cloning software, larger buffers are more worthwhile, but even that will have diminishing returns as the buffer gets larger.
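One way to answer the "is it measurable" question empirically; a rough sketch with an arbitrary path and chunk sizes, keeping in mind that the OS page cache means this measures cached-write speed unless the file is flushed:

{$mode objfpc}
program bufbench;
// Write the same 1 GB with three different chunk sizes and compare timings.
uses
  SysUtils, Classes;

const
  Total = 1024 * 1024 * 1024; // 1 GB in total per run
  Sizes: array[0..2] of SizeInt = (64 * 1024, 16 * 1024 * 1024, 256 * 1024 * 1024);

var
  buf: array of byte;
  fs: TFileStream;
  i: integer;
  written: Int64;
  t0: QWord;
begin
  for i := 0 to High(Sizes) do
  begin
    SetLength(buf, Sizes[i]);
    fs := TFileStream.Create('/tmp/bench.bin', fmCreate);
    try
      t0 := GetTickCount64;
      written := 0;
      while written < Total do
      begin
        fs.WriteBuffer(buf[0], Length(buf));
        Inc(written, Length(buf));
      end;
      WriteLn(Sizes[i] div 1024, ' KiB chunks: ', GetTickCount64 - t0, ' ms');
    finally
      fs.Free;
    end;
  end;
end.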
I think this is a question for a separate topic. On Windows, a 3 GB file of zeros is written successfully.
If SaveToFile calls something like "WriteBuffer(Memory^, Size);" where Size is Int64 on x86_64 targets, then the RTL devs might have hit the same problem as I did.

The WriteBuffer function has a limitation, but the SaveToFile function does not.
It's strange, because I have the same test code, and I got a 0-sized file.

Your code also successfully creates a 3 GB file. Maybe there is no disk space, or you did not wait for the end of the program?