
Author Topic: Float in non-float-processor  (Read 4548 times)

x2nie

  • Hero Member
  • *****
  • Posts: 515
  • Impossible=I don't know the way
    • impossible is nothing - www.x2nie.com
Float in non-float-processor
« on: April 26, 2017, 04:26:34 pm »
Hi all,


I don't know how to start explaining my problem, so here is the background:


I finished porting SourceAFIS[1] from C# .NET 3.5 to .NET 2.0 (from desktop to Windows Mobile),
but the performance is very bad: comparing 20 fingerprints takes about 1 minute.
After I told the author, he said that the hardware I use (Motorola MC75) doesn't support floating point natively.
Meaning: my several months of work were useless.
But we still can't just abandon that hardware, because it meets the explosion-proof requirement. Well, I am not sure about the marketing side of the deal, I am just a programmer :p


So, my plan is to translate SourceAFIS to a Lazarus/FreePascal *.dll and call it from C#.
But, before starting that big job, can anyone tell me how to deal with float (double) calculations on a processor that doesn't support floating point natively?
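For the DLL side, something like this skeleton is what I have in mind (just a sketch; the exported function name and signature are made up, the real SourceAFIS entry points will differ):
Code: Pascal
  library sourceafis;
  
  // hypothetical export returning an integer score, so no float crosses the boundary
  function MatchScore(probe, candidate: PByte; len: LongInt): LongInt; cdecl;
  begin
    Result := 0; // matching code would go here
  end;
  
  exports
    MatchScore;
  
  begin
  end.
On the C# side this would be called through P/Invoke (DllImport).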


My personal stupid approach is to get rid of floats completely and use only integer/longint in every calculation needed.
But, again, before that big job, is there an alternative / easier way to do this with FreePascal / Lazarus?


Thanks in advance. :)




[1] https://sourceforge.net/projects/sourceafis/
Lazarus Github @ UbuntuCinnamon-v22.04.1 + LinuxMintDebianEdition5

Leledumbo

  • Hero Member
  • *****
  • Posts: 8746
  • Programming + Glam Metal + Tae Kwon Do = Me
Re: Float in non-float-processor
« Reply #1 on: April 26, 2017, 05:36:59 pm »
My personal stupid approach is to get rid of floats completely and use only integer/longint in every calculation needed.
That doesn't sound stupid, as opposed to writing floating point emulation code, which in the end will still be slow. I think that is exactly what the .NET runtime does when it detects a processor without floating point support.

Nitorami

  • Sr. Member
  • ****
  • Posts: 481
Re: Float in non-float-processor
« Reply #2 on: April 26, 2017, 05:58:38 pm »
I would agree that changing to integer arithmetic will probably be the most reasonable option. Apart from that, I am not sure whether FPC supports floats on processors without a coprocessor. The documentation (prog.pdf) says that:

- the compiler has an emulator under DOS (go32v2), which requires the emu87 unit;
- under Linux and most Unixes the kernel takes care of coprocessor support;
- for the Motorola 680x0, internal runtime routines are called to do the calculations.

Thaddy

  • Hero Member
  • *****
  • Posts: 14204
  • Probably until I exterminate Putin.
Re: Float in non-float-processor
« Reply #3 on: April 26, 2017, 06:17:16 pm »
- FPC has had soft float since day one.
- What you need is fixed point math <<<--- BASICS on slow platforms.
- You should have known the above before you started.
Specialize a type, not a var.

avra

  • Hero Member
  • *****
  • Posts: 2514
    • Additional info
Re: Float in non-float-processor
« Reply #4 on: April 26, 2017, 06:45:28 pm »
ct2laz - Conversion between Lazarus and CodeTyphon
bithelpers - Bit manipulation for standard types
pasettimino - Siemens S7 PLC lib

x2nie

  • Hero Member
  • *****
  • Posts: 515
  • Impossible=I don't know the way
    • impossible is nothing - www.x2nie.com
Re: Float in non-float-processor
« Reply #5 on: April 26, 2017, 08:28:58 pm »
My personal stupid approach is to get rid of floats completely and use only integer/longint in every calculation needed.
That doesn't sound stupid, as opposed to writing floating point emulation code, which in the end will still be slow. I think that is exactly what the .NET runtime does when it detects a processor without floating point support.
Absolutely. That is what I worry about: the compiler's emulation (of floating point the processor doesn't support) will drop the performance again.


My stupid idea in detail: I will just multiply every decimal number used by a constant; e.g. 3.14f = 314000i = 3.14 * 100000 (constant).
That's a bad idea, isn't it?
Because my approach still requires floats for the conversion, which in turn will (again) require floating-point emulation in the process. %)
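In code, what I mean is roughly this (just a sketch; the SCALE constant and the helper name are made up):
Code: Pascal
  const
    SCALE  = 100000;   // 5 decimal digits of "fraction"
    PI_FIX = 314159;   // 3.14159 * SCALE, precomputed by hand
  
  // adding two scaled numbers needs no correction, but multiplying them
  // doubles the scale factor, so it has to be divided back out
  function ScaledMul(a, b: Int64): Int64;
  begin
    Result := (a * b) div SCALE;
  end;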
----------------


@Nitorami, big thanks. After a few minutes I found the document you mentioned: ftp://ftp.freepascal.org/fpc/docs-pdf/prog.pdf
It's very useful to know those compiler options, to get greater control over the code generated by the compiler/fpc.
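For example, I guess a directive like this would force soft float (assuming the ARM target accepts the SOFT FPU type; I have not verified it on my device):
Code: Pascal
  {$IFDEF CPUARM}
    {$FPUTYPE SOFT}   // or pass -CfSOFT on the fpc command line
  {$ENDIF}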
----------------


@Thaddy, I needed about an hour to understand your suggestion. Now I know, after reading @avra's articles.
----------------


@avra, God bless you for kindly letting us know about those great Pascal units (plus quickly updating the broken link).
So, @avra, by using AFP/Fix.pas, we only have one chance per application to decide the split between the integer and fraction parts, right?
I can accept that situation, because usually we only work in one domain at a time.
E.g. in the geographic (spatial) world, only a small integer range is needed but a wider fraction part.
In the accounting world, the requirement is a wide integer range with a small fraction portion.
In the biometric identification world, I am not yet sure which one is needed most, because I have spent weeks checking the accuracy of the results (at a high level).
Now I am getting down to it. 8-)


Thank you everybody :-*
Lazarus Github @ UbuntuCinnamon-v22.04.1 + LinuxMintDebianEdition5

avra

  • Hero Member
  • *****
  • Posts: 2514
    • Additional info
Re: Float in non-float-processor
« Reply #6 on: April 27, 2017, 09:06:09 am »
@avra, God bless you for kindly letting us know about those great Pascal units (plus quickly updating the broken link).
You're most welcome. I thank you for your kind words.

So, @avra, by using AFP/Fix.pas, we only have one chance per application to decide the split between the integer and fraction parts, right?
No, you can change it programmatically whenever you want. In the "FixedPointMath.pas" example for E-Lab AvrCo Pascal you can see this line:
Code: Pascal
  fixInit(16); // try 10, 16, 20 or some other number of bits that you want for the fractional part
This is the place where you decide the number of fractional bits. You can call it as many times as you like, with just one remark: all your existing TFix fixed point numbers keep the old number of bits, so you have to dump them and recalculate them with the new number of bits if you need such a feature. Floating point numbers can be used temporarily for storing the old values of such variables.

The AFP (arbitrary fixed point) library is very flexible, but if speed is your main goal things can be faster. When the number of bits is fixed, many optimizations become possible. For example, I made a Fix64 lib with s31.32 format (signed, 31 integer bits and 32 fractional bits) that is fast and usable even on an 8-bit microcontroller. That lib is closed source and part of the AvrCo compiler, with the TFix64 type supported by the compiler core, so I cannot publish that source, but if you decide to go that path I can give you valuable hints. Having said that about AFP, the other library (FPMATH) uses the s15.16 format and is written natively in FPC, so your job would be easier. Having fewer bits, and a fixed number of them, it should (I haven't checked) already be faster and more optimized, especially if you need trigonometry, because of its precalculated sine table.
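To show what an s15.16 multiply boils down to, here is a generic sketch (not code taken from FPMATH itself):
Code: Pascal
  type
    TFix1516 = LongInt;  // s15.16: real value = raw / 65536
  
  function Fix1516Mul(a, b: TFix1516): TFix1516;
  begin
    // widen to 64 bit, multiply, then take out the extra 16 fraction bits;
    // div (rather than shr) keeps the handling of negative values correct
    Result := TFix1516((Int64(a) * Int64(b)) div 65536);
  end;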
ct2laz - Conversion between Lazarus and CodeTyphon
bithelpers - Bit manipulation for standard types
pasettimino - Siemens S7 PLC lib

 
