ALGOL is Pascal's immediate predecessor. The thing I was trying to illustrate was not so much the syntax, as the way that setup directives, program source and raw data were in that era bundled together into a job. Multiple jobs (e.g. for a lecture group of students) would have been assembled into a batch, and run through the computer with the operator handling the printer output etc.
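To make that concrete, a job deck of that era might have looked loosely like the sketch below. The directive syntax here is modelled very approximately on IBM-style job control and is purely illustrative; the actual cards varied from manufacturer to manufacturer:

```
//STUDENT1  JOB                   <- accounting/setup directive
//STEP1     EXEC ALGOL            <- invoke the compiler
'BEGIN'                           <- start of the ALGOL source
  ...program text...
'END'
//SYSIN     DD *                  <- raw data cards follow
42  17  99
/*                                <- end of the job
```

Several such jobs, one per student, would be stacked into a single batch and fed through the machine in one run.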
It seems you're providing really rich information, which indicates that you have wide academic knowledge of computers and programming; I'm trying to reach the same level as you. From what you mentioned about ALGOL, it seems to be a language aimed at a particular direction of programming, not made for beginners or ordinary hobbyists. I think it belongs to those who have no problem with the depth of knowledge required about computer architecture and that level of understanding.
BASIC was one of the first languages which provided support for multiple interactive users. To be honest, interactive use was a bit of an afterthought for Pascal, and if you look at the earliest documentation you will also see that it assumes that associating I/O with specific filenames was considered to be outside the scope of the language, i.e. it was handled by some sort of manufacturer-specific job control language, as in the example I've given you.
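To illustrate the filename point, here is a minimal sketch in classic (Jensen-and-Wirth style) Pascal. The program heading lists the logical files it uses, but nothing in the source says which physical file `data` refers to; that binding was left to the surrounding job control. The names here are invented for illustration, and a modern Free Pascal program would use `AssignFile` instead:

```pascal
program Tally(input, output, data);   { logical files named only in the heading }
var
  data: file of integer;    { which disk file is this? The job control decides }
  n, total: integer;
begin
  total := 0;
  reset(data);              { open whatever the system bound to 'data' }
  while not eof(data) do
  begin
    read(data, n);
    total := total + n
  end;
  writeln(total)
end.
```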
I think this is because BASIC serves another level and direction of programming. First of all, BASIC is for absolute beginners; it tends towards high-level programming, so caring about low-level details is usually outside its interest, although there are some low-level tools in it. But BASIC is a limited high-level language. On the other side we have Pascal, which is also a high-level language but is not limited like BASIC. I think Pascal is the reasonable next step for those who are not satisfied with BASIC and want more.
Perhaps the main problem with simply appending data to program source is that there's no error checking involved: it's outside the scope of the compiler so it's not possible to warn that a float is about to be allocated when an integer is expected, and it's not possible to use a data structure (e.g. a record) and specify which fields are to be populated. And not allowing the compiler to check predefined input is, apart from anything else, a recipe for security violations (trust me, early mainframes were by no means immune to that sort of thing).
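As a sketch of that failure mode, assuming a deck where the data simply follows the source: the compiler checks the program text, but it never sees the data, so a stray float only surfaces at run time rather than as a compile-time diagnostic:

```pascal
program ReadAges(input, output);
var
  age: integer;
begin
  while not eof do
  begin
    readln(age);        { the compiler knows age is an integer... }
    writeln(age)
  end
end.

{ Data appended to the deck -- never seen by the compiler:
    21
    34.5    <- a float where an integer is expected: a runtime
              failure, not a compile-time warning
    19
}
```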
I got the general idea of this paragraph, though to be honest not all the details. If I understood correctly, the effort has to be focused on preparing the program itself (before its final executable version, which will have no compiler help or checking) to expect and deal with wrong data entered into its combined data files. For example, a database program will check the user's input before saving it into the data files or using it in processes that expect particular data types.
Purists would say that if you want predefined data (and assuming that you're using Lazarus) then you embed it into the executable as a resource. But that has much the same problems as I've listed above, and I'd suggest that a better solution would be to embed it as a constant (i.e. initialisation of a constant record or array) contained in an include file: that has the advantage that the include file can be generated by some other program if you want e.g. a big actuarial table.
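A minimal sketch of the include-file approach, with hypothetical names (the actual table could equally be emitted by another program). The point is that, unlike appended raw data, every field here is type-checked by the compiler:

```pascal
{ mortality.inc -- hypothetical, possibly machine-generated }
type
  TMortalityRow = record
    Age: integer;
    Rate: double
  end;

const
  { a typed constant: the compiler checks every field and the array bounds }
  MortalityTable: array[0..3] of TMortalityRow = (
    (Age: 60; Rate: 0.011),
    (Age: 61; Rate: 0.012),
    (Age: 62; Rate: 0.014),
    (Age: 63; Rate: 0.015)
  );
```

In the main source you would then just write `{$I mortality.inc}` at the appropriate point, and a regenerated table is picked up on the next compile.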
MarkMLl
What the purists say is a correct and practical opinion in the case of big programs, what we call "projects", but in the case of small programs the opinion is somewhat weak, because small programs don't deserve that kind of care, like some hobbyists' BASIC programs.