To be more specific, it will have the largest size that can be addressed by FreePascal and the system it's running on.
The requirements you're describing are difficult to manage. On Windows, the largest file size (at least theoretically) is 16 TB; that's terabytes, over 16,000 gigabytes.
Once your files go over some reasonable size, say about 300 MB, what you want to do is map sections of the file (never the entire file.) Depending on what you're doing and how the contents of the file may be manipulated, you may need more than one section. You don't change or manipulate the data in the sections directly; you accumulate changes in memory buffers that apply to a given section. Once the user is done fiddling with the file, you apply the changes to the mappings as you create a new file (that includes the changes.)
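By way of illustration, here is a minimal FreePascal sketch of mapping one section of a file on Windows. The file name `huge.dat`, the 64 MB section size, and the chosen offset are all assumptions (the file is assumed to be large enough to cover the mapped range); the offset must be a multiple of the system's allocation granularity:

```pascal
{$mode objfpc}
uses
  Windows, SysUtils;

const
  SECTION_SIZE = 64 * 1024 * 1024; // assumed section size: 64 MB

var
  hFile, hMapping: THandle;
  SysInfo: SYSTEM_INFO;
  Offset: Int64;
  View: Pointer;
begin
  // Open the file read-only; 'huge.dat' is a placeholder name.
  hFile := CreateFile('huge.dat', GENERIC_READ, FILE_SHARE_READ, nil,
                      OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  if hFile = INVALID_HANDLE_VALUE then
    raise Exception.Create('CreateFile: ' + SysErrorMessage(GetLastError));

  // A mapping object covering the whole file (0/0 = current size).
  // The object itself is cheap; it's the views that consume address space.
  hMapping := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, 0, nil);
  if hMapping = 0 then
    raise Exception.Create('CreateFileMapping: ' + SysErrorMessage(GetLastError));

  // View offsets must be multiples of the allocation granularity
  // (64 KB on most systems).
  GetSystemInfo(SysInfo);
  Offset := Int64(SysInfo.dwAllocationGranularity) * 16; // arbitrary aligned offset

  // Map only one SECTION_SIZE window of the file, never the whole thing.
  View := MapViewOfFile(hMapping, FILE_MAP_READ,
                        DWORD(Offset shr 32), DWORD(Offset and $FFFFFFFF),
                        SECTION_SIZE);
  if View = nil then
    raise Exception.Create('MapViewOfFile: ' + SysErrorMessage(GetLastError));

  // ... read through View here ...

  UnmapViewOfFile(View);
  CloseHandle(hMapping);
  CloseHandle(hFile);
end.
```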
That's the basic concept used to manage large, multi-terabyte databases.
The most important thing when you are dealing with really large files is to avoid doing I/O. Delay I/O as long as possible, and when you do need it, batch the operations into file-local groups; that often lets you consolidate a number of small I/Os into a smaller number of larger ones (that's the objective.)
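A sketch of that batching idea, assuming pending writes are kept sorted by offset, don't overlap, and carry non-empty data (`TPendingWrite` and `FlushBatched` are hypothetical names, not part of any library): contiguous writes are merged so several small I/Os become one larger one.

```pascal
{$mode objfpc}
uses
  SysUtils, Classes;

type
  // A pending write: a file offset plus the bytes that go there.
  TPendingWrite = record
    Offset: Int64;
    Data: TBytes;
  end;

// Consolidate contiguous pending writes into one larger write each.
// Assumes Pending is sorted by Offset and ranges don't overlap.
procedure FlushBatched(const Pending: array of TPendingWrite; Stream: TStream);
var
  i, j, OldLen: Integer;
  Run: TBytes;
begin
  i := 0;
  while i <= High(Pending) do
  begin
    Run := Copy(Pending[i].Data, 0, Length(Pending[i].Data));
    j := i + 1;
    // Extend the run while the next write starts exactly where this one ends.
    while (j <= High(Pending)) and
          (Pending[j].Offset = Pending[i].Offset + Length(Run)) do
    begin
      OldLen := Length(Run);
      SetLength(Run, OldLen + Length(Pending[j].Data));
      Move(Pending[j].Data[0], Run[OldLen], Length(Pending[j].Data));
      Inc(j);
    end;
    // One seek and one write for the whole consolidated run.
    Stream.Seek(Pending[i].Offset, soBeginning);
    Stream.WriteBuffer(Run[0], Length(Run));
    i := j;
  end;
end;
```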
The important thing to have very clear in mind is that the way you deal with very large files depends on how the files need to be manipulated. There are some basic rules of thumb that apply, but ultimately the algorithms to use depend on how the data inside the file needs to be managed.
To recap:
1. Use file mappings
2. Map _sections_ of the file, not the entire file (the O/S may not even allow mapping the entire file.)
3. Buffer and accumulate any changes that must be made to the sections.
4. Consolidate the changes in memory
5. Merge the mapped buffers and the changes as the new (changed) file is created (a sketch of this merge follows the list.)
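Here is roughly what the merge step (point 5) might look like, assuming each change is recorded relative to the start of its section and fits entirely inside it (`MergeSection` and `TChange` are illustrative names):

```pascal
{$mode objfpc}
uses
  SysUtils, Classes;

type
  // One accumulated change, relative to the start of its section.
  TChange = record
    SectionOffset: Integer; // where in the section the change applies
    Data: TBytes;           // the replacement bytes
  end;

// Merge one mapped section with its buffered changes and append the
// result to the output stream that will become the new file.
procedure MergeSection(View: Pointer; SectionLen: Integer;
                       const Changes: array of TChange; Output: TStream);
var
  Buf: TBytes;
  c: Integer;
begin
  // Start from the unmodified bytes of the mapped section.
  SetLength(Buf, SectionLen);
  Move(PByte(View)^, Buf[0], SectionLen);

  // Overlay each accumulated change on top of the original bytes.
  for c := 0 to High(Changes) do
    Move(Changes[c].Data[0], Buf[Changes[c].SectionOffset],
         Length(Changes[c].Data));

  // The merged section goes into the new file.
  Output.WriteBuffer(Buf[0], SectionLen);
end;
```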
(It is usually desirable to have a log of changes: it allows for undo, and for recovering the user's work in case something bad happens to their system. How you do that is yet another ball of wax.)
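As a rough idea of what such a log could record, here's an assumed append-only entry format; keeping both the old and the new bytes is what makes undo and recovery replay possible (the names and layout are my own, not a standard format):

```pascal
{$mode objfpc}
uses
  SysUtils, Classes;

type
  // One undo/recovery log entry: enough to replay or reverse a change.
  TChangeLogEntry = record
    Stamp: TDateTime; // when the change was made
    Offset: Int64;    // where in the file it applies
    OldData: TBytes;  // bytes being replaced (needed for undo)
    NewData: TBytes;  // bytes being written (needed for redo/recovery)
  end;

// Append one entry to an on-disk log stream: a fixed header, then the
// variable-length old and new byte runs.
procedure AppendLogEntry(Log: TStream; const E: TChangeLogEntry);
var
  OldLen, NewLen: Integer;
begin
  OldLen := Length(E.OldData);
  NewLen := Length(E.NewData);
  Log.WriteBuffer(E.Stamp, SizeOf(E.Stamp));
  Log.WriteBuffer(E.Offset, SizeOf(E.Offset));
  Log.WriteBuffer(OldLen, SizeOf(OldLen));
  Log.WriteBuffer(NewLen, SizeOf(NewLen));
  if OldLen > 0 then Log.WriteBuffer(E.OldData[0], OldLen);
  if NewLen > 0 then Log.WriteBuffer(E.NewData[0], NewLen);
end;
```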
HTH.