Couldn't you do your floating point calculations with fixed point instead?
This won't be easy.
The neural network calculation relies on floating point operations... Computing an exp or tanh function in fixed point arithmetic while keeping good accuracy is not easy, and I suspect I would end up at more or less the same speed as now.
You can take a look at AFP (Arbitrary Fixed Point Lib,
http://forum.e-lab.de/topic.php?t=2387 and
http://www.avrfreaks.net/index.php?module=Freaks%20Academy&func=viewItem&item_type=project&item_id=2351). It's not FPC, but it's a close enough Pascal variant to be usable with some minor adaptations. It was originally made for the AvrCo Pascal compiler (AVR 8-bit microcontrollers,
http://www.e-lab.de/AVRco/index_en.html). The lib is free, very fast, and supports fixed point numbers with an arbitrary split between integer and fractional bits, with the total fixed at 32 bits (which allows formats like s15.16 or s21.10). It even has basic trigonometry functions implemented, and it has been used in real-time 3D calculations. I have also made a 64-bit version fixed to s31.32 (for further speed optimizations), but that version has become part of the compiler and I cannot publish the sources. However, if you just need to add exp() to AFP, I can help.