Thanks for the feedback, guys. It's much appreciated. Let me address some of the points raised and clarify what I'm doing at the bottom:
OP: you're asking a specific question in a general forum while giving absolutely no information on the platform(s) you're using.
So, pro forma: I'm running on Windows 10 or 11. On the other hand, shouldn't a call that takes 64-bit integers behave consistently across platforms? (Or raise an error where it can't?)
d2010
I'm sorry, I don't really understand what you're getting at. The mission is: Read the entire file into memory in one call. 25 years ago I could read a 1GB file into memory on a machine with 1GB, no problem, I feel like 35GB should be well within reach now for a machine with 64GB. (It's not, of course, which is why I'm here.)
BlockRead() internally calls the Do_Read() function, whose len parameter is only 32 bits. I think this is a bug.
I think it's a bug, too! I mean, maybe for whatever reason it can't be done, but I think it should be indicated somehow, somewhere.
has anyone tried using a "Cardinal" instead for "fs" ?
I just did. It does not error out. However, it manages this by being completely wrong about the filesize, returning 1 billion something.
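For what it's worth, the bogus number is consistent with plain 32-bit truncation: a Cardinal is an unsigned 32-bit type, so any size at or above 4GB wraps around modulo 2^32. A minimal sketch (the 35 GB figure here is just my ballpark, not the exact size):

```pascal
program CardinalWrap;
{$mode objfpc}
var
  realSize: Int64;
  fs: Cardinal;
begin
  realSize := 35000000000;   // hypothetical 35 GB file size
  fs := Cardinal(realSize);  // keeps only the low 32 bits
  WriteLn(fs);               // prints 640261632, i.e. realSize mod 2^32
end.
```

A reported size in the one-billion-plus range is what you'd expect from a file somewhere in the mid-30-GB range.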
If you want to read large files, map them into memory instead.
No! Or, more politely, no, thank you!

(I'll explain at the bottom.)
With both of those you'll get a pointer to the raw data of the file, which you can access as if you had read the whole file using a FileStream. But there are two main advantages:
1. Multiple processes that map the same file as read-only will share the same virtual memory for that file, meaning the data is only read from disk once.
2. The file is not read all at once; whenever you access a part of the file that has not yet been loaded, the OS loads it on demand.
1. There are no multiple processes.
2. I am accessing it all at once. Serially, from top-to-bottom every time.
If you insist on reading a large file, you should split the read into multiple smaller reads, preferably not larger than a page size (4K). That way the OS-level call will not block for as long, reducing the chance of hitting a signal during the read; and if a signal does interrupt the read, you only need to redo that block rather than the whole read.
I mean, I can stream the stupid thing in. I can break it into chunks. I'm coming from the modern dynamic-language world, which would love nothing more than to put it in a big bloated tree of some kind. I will elaborate further:
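(For the record, if I do give in and chunk it, the workaround itself is mechanical. Something like this untested sketch, with each individual read kept comfortably below the 32-bit line; at least TFileStream.Size is an Int64, so the size comes back right. Names and the chunk size are mine, and it assumes a 64-bit target so SetLength can allocate past 4GB.)

```pascal
program ChunkedRead;
{$mode objfpc}
uses
  SysUtils, Classes;

const
  ChunkSize = 64 * 1024 * 1024;  // 64 MB per call, far below any 32-bit limit

// Read an arbitrarily large file into one contiguous buffer,
// issuing many small reads instead of one giant one.
function ReadWholeFile(const FileName: string): TBytes;
var
  src: TFileStream;
  total, done: Int64;
  n: LongInt;
begin
  src := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    total := src.Size;           // Int64, so the size itself is correct
    SetLength(Result, total);
    done := 0;
    while done < total do
    begin
      if total - done > ChunkSize then
        n := ChunkSize
      else
        n := LongInt(total - done);
      // ReadBuffer raises on a short read, so no partial-read bookkeeping
      src.ReadBuffer(Result[done], n);
      Inc(done, n);
    end;
  finally
    src.Free;
  end;
end;
```

The 64 MB figure is arbitrary; anything under 2 GB dodges the limit, and past a few megabytes the chunk size stops mattering much for throughput.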
Why would you even want to read such a big file at once?
I'm glad you asked that!

do you really not want to provide feedback for the user or give them the chance to abort an operation (depending on the use case)?
There is no user. There is no operation. There is only analysis.
Thus it would be better anyway to use smaller transfer sizes (e.g. 2 or 4 MB is rather nice) and read the file in multiple parts. Thus you can not only provide feedback to the user (especially on slower drives), but also won't have to deal with such size limits.
So, here's the situation: I'm slicing up a database table to learn about its contents. I say to my program "Here's a file. Figure out what's REALLY in it—because DB specs are lazy and sloppy and abused—and give me a report." Generally, I'm able to go through an entire table at once. This file is big enough that I can only do individual fields, but even those run to tens of gigabytes.
By far the most time-consuming part of this process is reading from the disk. It's the difference between seconds or minutes (depending on file size) and near-instantaneous. It's the difference between being able to run something dozens of times in quick succession to tweak parameters and filter out garbage, and having to wait ten minutes and then pick the context back up.
I might have to go to C++.

Though, if it is an underlying OS issue, I guess that won't help.
(I've been doing this so long, by the way, that I originally had to read in 64KB blocks, because THAT was the largest value you could BlockRead/BlockWrite. It was needlessly complicated, because the data generally comes in as a text file, and the block boundaries don't fall neatly on line ends. I swore, with God As My Witness, that I would never go back to reading files in chunks once 32-bit addressing came around.)