Again: Cut down/customize the msdos one for your purposes,
Which I'm willing to try to do if I can make sense of this convoluted mess.
or create a new one based on the minimal skeletal "embedded" rtl.
Which I'm willing to do to an extent, but I'm noticing even that has more crud in it than I want... Though I think much of that is just the generic nature of the beast overall.
What I'm really looking for is a sub 2k single file .pp that just BS's it into compiling. Just list out what HAS to be included/set in a device neutral manner... I'm starting to suspect that unlike C compilers, FPC just isn't set up internally to be THAT generic/vanilla an implementation of Pascal.
But I think cutting down the msdos one is easier.
Not from what I'm seeing; it spans so many separate files and pulls in such massive amounts of the "generic" cross-platform codebase that everything I "tug on" just breaks it.
Dozens of people manage to on a daily basis. Try harder.
Can you point me to examples of this? How are they figuring this out? If dozens of people are "doing this daily" then WHERE ARE THEY? That reeks of pulling a number out of one's backside.
.h style would be even more files.
Done properly it would be ONE master file giving you ALL the prototypes for the SINGLE library. Admittedly, a concept that a LOT of people seem to find alien these days.
(Could be worse, could be full-stack JS.) It's called an INTERFACE file for a reason.
Well, it is good that you managed to make a step forward. What optimization settings did you use?
I forget right now, but I'm rejecting FPC for now as it's too much hassle; since I've already lost two years to my health, I don't want to waste a year screwing with the tools.
Actually it looks like I'm moving to Open Watcom C since it does what I want done out of the box (well, other than the fact that it is C). Compiles to 16-bit TINY from a 32/64-bit command line, smallest .COM of any higher-level language I've tested, etc, etc... Even so I'm gonna keep playing with FPC as I PREFER Pascal to C; it's just that the state of compilers keeps making the choice for me.
Had a LOT of people trying to point me at Microsoft C++ 8.0's command line compiler, that's good for a laugh as it makes FPC's implementation shine by comparison.
(Gee, inefficient compilers from Microsoft? SAY NOT SO!!!) and still fails to meet my actual goal of something I can call from a Windows command line under Win 8.1 to compile to 16 bit. (since compiling inside DOSBOX sucks, even at max cycles)
Admittedly the guys who keep pointing me at MASM and Microsoft C are all ex big-iron guys, so they come from a whole different world than those of us who started out in the microcomputer era on things like the ELF and Trash-80. Laughably, they fit in better with today's "Oh just throw more code and hardware at the problem" mentality.
No, true constants are not loaded in the binary. Only typed constants are. (also in TP)
I should have specified, my bad:
Seg0040: Word = $0040; { Selector for segment $0040 }
SegA000: Word = $A000; { Selector for segment $A000 }
SegB000: Word = $B000; { Selector for segment $B000 }
SegB800: Word = $B800; { Selector for segment $B800 }
That's right off the Borland Pascal 7 CD. THOUGH I had someone explain to me WHY they do that!
mov es, [Seg0040] ; 8 + EA, 3 bytes if DS reference
3 fetch (BIU empty) * 4 + 8 exec + 9 ea + 2 bytes data * 4 = 37
With fetch, mem, and EA calculated that's a worst case of 37 clock cycles BIU empty on a 8088, in 3 bytes.
That is less code, and doesn't need an extra register compared to:
mov ax, 0x0040 ; 4 clocks, 3 bytes
mov es, ax ; 2 clocks, 2 bytes
3 fetch * 4 + 4 exec + (2 fetch * 4 - 4 BIU free) + 2 = 22
22 clocks but 5 bytes and you need a register free to do it.
So if that is used ONCE, you break even. More than once you save two bytes and no need for getting an extra register involved... but it costs you 15 clock cycles to do it.
Since you cannot actually do:
mov es, 0x0040 ; this will not assemble!
Since there is no such thing as "mov segreg, immed" -- just "mov segreg, mem16" and "mov segreg, reg16".
Which x86 systems are nowadays really targeted at 16-bit development? (as opposed to 32/64-bit ones that can still just boot 16-bit?)
VERY popular to do with the Intel Quark D series -- While they are 32-bit x86 code compatible, NOBODY runs them in 32-bit protected mode. This is because the overhead of the virtual page-space ALONE would be murder when on the low end (D1000) you have 32k of instruction RAM and 8k of SRAM, and at the high end (SE) we're talking 384k of instruction space and 80k of RAM.

You might use the 32-bit math and memory-copy capabilities, but when you only have a total combined address space on the primary models of 52k or less, you're not building as anything but TINY (so you can keep all your segments the same). Even with the SE, you're hard pressed to find a legitimate reason NOT to run it in "REAL" mode since that tops out at 464k total combined address space. The ideal on that larger address space is most likely COMPACT -- aka CS, DS, and SS each getting their own 64k limit, with the rest as a heap. Maybe LARGE, allowing functions to be FAR with their own CS if needed, but really if you're that lean on RAM it's unlikely you'd want those extra 8-16 bytes of overhead added to every function call in the code, much less 2 more bytes on the stack.
(hell, that's part of what made me start down this path in the first place, reducing function call overhead!)

You only tread into high-end specs when you get into the SOCs like the Quark X1000 with its 400MHz clock and external DRAM capabilities. I've seen dev boards for that ranging from 256 megs all the way up to 2 gigs.
There are also a LOT of embedded systems that still run DOS... on real 80186/lower PDIP chips. You'll find them in the same type of environments you'll still find 8052's chugging along. Changes in the mechanical engineering of the robots or other machines they are controlling means they still need people to write new code for them -- PARTICULARLY since many modern chips don't want to let you get that far into the low-end I/O, and convincing them to move to AVR or ARM presents them with an unknown.
Though it's funny how even a 16MHz ATMEGA32 kicks a 25MHz 386DX's backside on raw computing power... But try convincing people with million-dollar machines they've been using for 40 years to switch to the same chip that's in an Arduino Micro. What's wrong with the existing one?!? *SIGH*

No, it shouldn't. You are mixing things up.
Again, my bad, I should have been more specific -- people seem to treat ALL constants that way. That it's so easy to mess up is why the distinction of "typed" vs. "untyped" is not just confusing, it's annoying... and poorly documented! Having a different name for them to explicitly say "don't do that" would REALLY help with language clarity... and here I thought language clarity was one of Pascal's strengths and goals.
But again, the implementation often differs from the original ideals.
Side note, DAMN I hate C... not even one include into porting this over and I'm already cursing at how silly a language it is. STILL not convinced this is a joke:
https://www.gnu.org/fun/jokes/unix-hoax.html

As it is, I've set up the "new" C99 integer types just for code clarity's sake... Whoever thought "char" should be a signed 8-bit integer needs a good swift kick in the groin... but then I don't think I've EVER written anything that wasn't for an 8-bit processor that used an 8-bit signed integer.