
Author Topic: array of Byte and conversion negative number  (Read 3303 times)

PascalDragon

  • Hero Member
  • *****
  • Posts: 5446
  • Compiler Developer
Re: array of Byte and conversion negative number
« Reply #30 on: September 15, 2022, 09:10:08 am »
What Delphi also does the right way: it doesn't add a stack frame if the procedure/function doesn't have parameters on the stack. FPC requires nostackframe.

This allows more flexibility for the user: they can decide whether they want a stack frame for their assembly routine or not, as that results in different ways of accessing the parameters.
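For instance, a minimal sketch (assuming an x86_64 Linux target, where per the SysV ABI the first parameter arrives in RDI and the result is returned in RAX; the routine name is made up):

Code: [Select]
{$ASMMODE INTEL}
function AddOne(X: PtrUInt): PtrUInt; assembler; nostackframe;
asm
  // nostackframe: no prologue/epilogue is emitted and RBP/RSP stay untouched,
  // so the parameter is taken straight from its register
  lea rax, [rdi + 1]
end;

With a stack frame the compiler would set up RBP first and the parameter could be addressed relative to the frame, which is the difference in parameter access mentioned above.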

Of this I am not sure. In Turbo Pascal and then Delphi it was also controlled by a switch. We may argue about reasonable defaults, but the switch was there too.

Quote
The {$W} directive controls the generation of stack frames for procedures and functions.

In the {$W+} state, stack frames are always generated for procedures and functions, even when they're not needed.
In the {$W-} state, stack frames are only generated when they're required, as determined by the routine's use of local variables.

FPC also supports the {$W} directive.
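For illustration, a rough sketch of how the switch is used in FPC syntax ({$W} being equivalent to {$STACKFRAMES}; the function is just an example):

Code: [Select]
{$W-}  // same as {$STACKFRAMES OFF}: emit a frame only when the routine needs one
function Twice(X: LongInt): LongInt;
begin
  Result := X * 2;  // no local variables, so the frame may be omitted
end;

{$W+}  // same as {$STACKFRAMES ON}: always emit a frame (e.g. to ease debugging)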

Previously they required the "assembler" keyword.
Today they seem to demand that the body contain asm/end without an outer begin/end.

That's because the developers of Delphi never bothered to implement a real inline assembler for the non-i386 platforms. In FPC you can use an asm…end block at any time inside a function on any platform; in Delphi you can only do this on i386.
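For example, a minimal sketch (x86/x86_64 targets only; the names are made up) of an asm…end block mixed with ordinary Pascal statements in the same begin…end body:

Code: [Select]
{$ASMMODE INTEL}
function DoubleIt(X: LongInt): LongInt;
var
  Tmp: LongInt;
begin
  Tmp := X;        // ordinary Pascal statement...
  asm
    shl Tmp, 1     // ...an inline asm block referring to a local by name...
  end;
  Result := Tmp;   // ...and back to Pascal, all inside one begin/end body
end;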

Arioch

  • Sr. Member
  • ****
  • Posts: 421
Re: array of Byte and conversion negative number
« Reply #31 on: September 15, 2022, 06:51:49 pm »
That's because the developers of Delphi never bothered to implement a real inline assembler for the non-i386 platforms.

I don't see how that is related. To me it is orthogonal: either the code generator goes through a stack-frame generation phase, or it skips it. This should not depend on the number of back-ends.

Arioch

  • Sr. Member
  • ****
  • Posts: 421
Re: array of Byte and conversion negative number
« Reply #32 on: September 15, 2022, 08:41:35 pm »
No, that's /wrong/. The 68K was a predecessor

No, that is /right/. Whatever it was, it was! In the past. Today the 68Ks that survived are low-power, low-performance microcontrollers.

In the same way, ARM was once a top-end attempt at high-performance computing. Was. Then it became a low-power phone processor. Now it seems to be going back to the performance competition.
In the same way, Itanium once, for a while, outperformed x86. Does that make VLIW suitable for general-purpose computing? No. Itanium lost the race and was moved into an ever-shrinking niche.

IF (which I doubt) there really were hardware limitations that made the 68K benefit from the LSGA byte order, then those probably were the limitations that helped it lose the race to other CPU lines.

AFAIR the PowerPC architecture also had selectable endianness. Did it help PowerPC overpower Pentiums in the long run, or did it lose the race and make Apple escape to x64?

The observable experience shows that reversed endianness does not help performance and maybe harms it (for example, indirectly, by spending CPU developers' man-hours on workarounds to compensate for endianness; but this is sheer speculation).
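To tie this back to the thread topic, a rough sketch (program and variable names made up) of what the byte order actually changes when a negative number is viewed as an array of Byte:

Code: [Select]
program EndianDemo;
uses SysUtils;
var
  V: LongInt;
  B: array[0..3] of Byte absolute V;  // view the same four bytes as an array of Byte
begin
  V := -2;  // stored as $FFFFFFFE in two's complement
  // Little-endian CPUs (x86, typical ARM) print FE FF FF FF here;
  // big-endian ones (classic 68K, PowerPC) would print FF FF FF FE.
  WriteLn(Format('%.2x %.2x %.2x %.2x', [B[0], B[1], B[2], B[3]]));
  // SwapEndian (System unit) reverses the byte order explicitly:
  WriteLn(IntToHex(SwapEndian(V), 8));
end.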

-----

BTW, from Thinking Machines (which look great, I admit, a real show of force in elegant design; I remember I used to ogle those photos), I came to this: https://en.wikipedia.org/wiki/MasPar

Quote
The DEC researchers enhanced the architecture by:  .... making the processor elements to be 4-bit instead of 1-bit

Sounds like doom was spelled for pure bit-slicers. Whatever elegance they had in abstract math, the real world disagreed and insisted on multi-bit fixed data frames being read/written. Which, given some decades, made 64 bits the minimal RAM data exchange frame. Which invalidated the whole "compare only the first byte, save on bandwidth" argument.

And personally, I have my sweet spot too: I believe balanced ternary computers were the most elegant. But... I would not argue we must try to resurrect them and replace current architectures with them. Sad, but... the case is long closed.

MarkMLl

  • Hero Member
  • *****
  • Posts: 6676
Re: array of Byte and conversion negative number
« Reply #33 on: September 15, 2022, 10:05:46 pm »
No, that is /right/. Whatever it was, it was! In the past. Today the 68Ks that survived are low-power, low-performance microcontrollers.

No, it is /wrong/. Saying "because the only surviving 68Ks are microcontrollers therefore larger-scale implementations are irrelevant" is the same as saying "because all current Fords are great they never produced a lemon"... you /have/ heard of the Edsel I presume? :-)

Quote
BTW, from Thinking Machines (which look great, I admit, a real show of force in elegant design; I remember I used to ogle those photos), I came to this: https://en.wikipedia.org/wiki/MasPar

Quote
The DEC researchers enhanced the architecture by:  .... making the processor elements to be 4-bit instead of 1-bit

Sounds like doom was spelled for pure bit-slicers. Whatever elegance they had in abstract math, the real world disagreed and insisted on multi-bit fixed data frames being read/written. Which, given some decades, made 64 bits the minimal RAM data exchange frame. Which invalidated the whole "compare only the first byte, save on bandwidth" argument.

That's actually a very interesting one. My understanding is that the Thinking Machine processors were single-bit serial (i.e. like many classic systems designed to minimise active-device count like the LGP-30) and I don't know whether expanding that to handle four bits per cycle makes it into something that could be called a bitslice architecture. My suspicion is that it could not, since the salient characteristic of a bitslice architecture is that there are multiple physical slices working in parallel, while an n-bit serial CPU has hardware which processes the first n bits of the operand followed by the second n bits and so on.

So while I like what DEC did there, it didn't convert a serial ALU into a bitsliced one and probably isn't directly relevant to the discussion.

I had an interesting personal revelation relating to what could probably be called a bitsliced machine. I got into work one morning and continued running a software checkout on the mini I was working on, but it told me that the middle eight bits of the ALU were faulty. On opening it up I discovered that I'd pulled the middle of the three identical CPU cards the previous evening. It doesn't sound like much, but it does emphasise how on a "real" computer the ALU is just one subsystem of many: though I say so myself I've written some pretty good embedded system startup test code, but even assuming a "switches and lights" display I can't imagine a microprocessor delivering that sort of targeted error information.

MarkMLl
MT+86 & Turbo Pascal v1 on CCP/M-86, multitasking with LAN & graphics in 128Kb.
Pet hate: people who boast about the size and sophistication of their computer.
GitHub repositories: https://github.com/MarkMLl?tab=repositories

PascalDragon

  • Hero Member
  • *****
  • Posts: 5446
  • Compiler Developer
Re: array of Byte and conversion negative number
« Reply #34 on: September 16, 2022, 03:45:15 pm »
No, that's /wrong/. The 68K was a predecessor

No, that is /right/. Whatever it was, it was! In the past. Today the 68Ks that survived are low-power, low-performance microcontrollers.

Then you've never heard of the APOLLO 68080 which powers the Vampire V4. ::) And I wouldn't call NXP's ColdFire low performance either, considering that it can run a full-blown Linux...

Arioch

  • Sr. Member
  • ****
  • Posts: 421
Re: array of Byte and conversion negative number
« Reply #35 on: September 16, 2022, 04:09:06 pm »
And the Intel 432/960 derivative BiiN is said to still power the F-22 Raptor, but those are really niche cases.

The industry moved on, suggesting those approaches were creating more problems than people wanted to work around :-)

In particular, the 432 aligned instructions on bit boundaries, not byte or word boundaries; wasn't that ideal from the "read only the few bits you need first" approach? :-)

But in practice a RAM/cache read today is most often 64 bits in parallel, and 32 bits in most of the remaining cases ;-)
« Last Edit: September 16, 2022, 04:31:42 pm by Arioch »

superc

  • Full Member
  • ***
  • Posts: 241
Re: array of Byte and conversion negative number
« Reply #36 on: September 16, 2022, 04:13:18 pm »
And the Intel 432/960 derivative BiiN is said to still power the F-22 Raptor, but those are really niche cases.

How did you get this information? It's very interesting; don't get me wrong, it's just curiosity ...

Arioch

  • Sr. Member
  • ****
  • Posts: 421
Re: array of Byte and conversion negative number
« Reply #37 on: September 16, 2022, 04:30:46 pm »
And the Intel 432/960 derivative BiiN is said to still power the F-22 Raptor, but those are really niche cases.

How did you get this information? It's very interesting; don't get me wrong, it's just curiosity ...

Me too; I was wiki-surfing yesterday just for the sake of the forum flame, and then THAT.

It is a one-liner (the last line in "History") at https://en.wikipedia.org/wiki/BiiN, given without sources.

It could be baseless gossip or a leak.

 
