
Author Topic: FPC & Efficient Neural Networks  (Read 3545 times)

schuler

  • Full Member
  • ***
  • Posts: 223
FPC & Efficient Neural Networks
« on: June 20, 2019, 05:04:00 pm »
:) Hello :),
I’m frequently asked why I use Pascal to work with neural networks, and my usual reply is: compiled Pascal code is typically very efficient. With efficiency in mind, I would like to comment on a neural network architecture that is efficient in both trainable parameters (memory) and computing requirements (CPU/GPU/FPGA/…). It seems perfectly logical to me to implement efficient neural architectures in an efficient language (Pascal).

I’ve heard about teams working with 60 video cards in parallel and training times of 1 to 2 months. Adding neurons and layers is certainly a possible way to improve artificial neural networks if you have enough hardware and computing time. If you can’t afford that time and hardware, you’ll look for efficiency. There is an inspirational paper here: https://arxiv.org/abs/1704.04861 . They base their work on separable convolutions:
https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728 .

As can be seen in the links above, a separable convolution is a composition of 2 building blocks: a depth-wise convolution followed by a point-wise convolution. Although I won’t explain here what these 2 building blocks do, I would like to note that they allow artificial neural network architectures to solve classification problems with a fraction of the trainable parameters and computing power.

The 2 separable convolution building blocks (depth-wise and point-wise convolutions) are ready to use:
Code: Pascal
TNNetDepthwiseConv.Create(pMultiplier, pFeatureSize, pInputPadding, pStride: integer);
TNNetDepthwiseConvLinear.Create(pMultiplier, pFeatureSize, pInputPadding, pStride: integer);
TNNetDepthwiseConvReLU.Create(pMultiplier, pFeatureSize, pInputPadding, pStride: integer);
TNNetPointwiseConvReLU.Create(pNumFeatures: integer; pSuppressBias: integer = 0);
TNNetPointwiseConvLinear.Create(pNumFeatures: integer; pSuppressBias: integer = 0);
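For illustration, the two blocks could be chained by hand like this (a minimal sketch, assuming NN is a TNNet that already has an input layer; the parameter values are only examples, not taken from the post):
Code: Pascal
// A separable convolution composed by hand from the two building blocks above.
// Depth-wise step: one 3x3 filter per input channel (pMultiplier = 1).
NN.AddLayer( TNNetDepthwiseConvReLU.Create({pMultiplier}1, {pFeatureSize}3, {pInputPadding}1, {pStride}1) );
// Point-wise step: 32 1x1 filters that mix the channels together.
NN.AddLayer( TNNetPointwiseConvReLU.Create({pNumFeatures}32) );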

Or, if you just want to give an out-of-the-box separable convolution a go, you can add one with these methods:

Code: Pascal
TNNet.AddSeparableConvReLU(pNumFeatures{filters}, pFeatureSize, pInputPadding, pStride: integer; pDepthMultiplier: integer = 1; pSuppressBias: integer = 0; pAfterLayer: TNNetLayer = nil): TNNetLayer;
TNNet.AddSeparableConvLinear(pNumFeatures{filters}, pFeatureSize, pInputPadding, pStride: integer; pDepthMultiplier: integer = 1; pSuppressBias: integer = 0; pAfterLayer: TNNetLayer = nil): TNNetLayer;
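As a usage sketch (everything apart from AddSeparableConvReLU is an assumption on my part: TNNetInput, TNNetMaxPool, TNNetFullConnectLinear and TNNetSoftMax are other layers from the same unit, and the sizes are illustrative):
Code: Pascal
// Sketch: a tiny CIFAR-10-style classifier built around AddSeparableConvReLU.
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(32, 32, 3) );       // 32x32 RGB input
NN.AddSeparableConvReLU({filters}32, {pFeatureSize}3, {pInputPadding}1, {pStride}1);
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddSeparableConvReLU({filters}64, {pFeatureSize}3, {pInputPadding}1, {pStride}1);
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetFullConnectLinear.Create(10) );  // 10 classes
NN.AddLayer( TNNetSoftMax.Create() );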

The most recent convolutional API can be found here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uconvolutionneuralnetwork.pas

:) Wish everyone happy pascal coding. :)

Thaddy

  • Hero Member
  • *****
  • Posts: 14205
  • Probably until I exterminate Putin.
Re: FPC & Efficient Neural Networks
« Reply #1 on: June 20, 2019, 05:15:14 pm »
The same to you. I really enjoy your coding (and knowledge about the subject) and have already applied it in some applications.
Thank you very much!   8) 8) 8) 8) 8) 8) (Actually beyond cool)
« Last Edit: June 20, 2019, 05:22:32 pm by Thaddy »
Specialize a type, not a var.

schuler

  • Full Member
  • ***
  • Posts: 223
Re: FPC & Efficient Neural Networks
« Reply #2 on: June 27, 2019, 04:15:57 pm »
@Thaddy, thank you for your kind words. They encourage me to write a bit more.
I would like to comment on another two architectures:

In case anyone reading this intends to build ResNet- or DenseNet-style architectures, there are building blocks (neural network layers) ready for this. ResNet requires summing the outputs of two paths into a single path. This sum can easily be added with TNNetSum, as in the example below:

Code: Pascal
var
  PreviousLayer: TNNetLayer;
begin
  PreviousLayer := NN.GetLastLayer();
  NN.AddLayer( TNNetConvolutionReLU.Create(pNeurons, {featuresize}3, {padding}1, {stride}1) );
  NN.AddLayer( TNNetConvolutionLinear.Create(pNeurons, {featuresize}3, {padding}1, {stride}1) );
  // Sum the two-convolution path with the layer we branched from.
  NN.AddLayer( TNNetSum.Create([PreviousLayer, NN.GetLastLayer()]) );
end;

Unlike ResNets, DenseNets concatenate outputs instead of summing them. In FPC, this can be done with:
Code: Pascal
NN.AddLayer( TNNetDeepConcat.Create([PreviousLayer, NN.GetLastLayer()]) );
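To make the pattern concrete, a small DenseNet-style block could look like this (an illustrative sketch reusing only the layer classes shown above; the growth rate of 12 is an arbitrary example):
Code: Pascal
// Sketch: a DenseNet-style block. Each iteration concatenates the new
// convolution output with everything produced so far, so the channel
// count grows by 12 per step.
var
  BlockInput: TNNetLayer;
  i: integer;
begin
  for i := 1 to 4 do
  begin
    BlockInput := NN.GetLastLayer();
    NN.AddLayer( TNNetConvolutionReLU.Create({pNeurons}12, {featuresize}3, {padding}1, {stride}1) );
    NN.AddLayer( TNNetDeepConcat.Create([BlockInput, NN.GetLastLayer()]) );
  end;
end;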

These are the original papers:
https://arxiv.org/abs/1512.03385 (ResNet)
https://arxiv.org/abs/1608.06993 (DenseNet)
« Last Edit: July 01, 2019, 12:14:36 am by schuler »

jwdietrich

  • Hero Member
  • *****
  • Posts: 1232
    • formatio reticularis
Re: FPC & Efficient Neural Networks
« Reply #3 on: June 27, 2019, 04:59:50 pm »
Thanks, that is very interesting. Did you already write some documentation for your code? It could be useful for one of my projects, too.
function GetRandomNumber: integer; // xkcd.com
begin
  GetRandomNumber := 4; // chosen by fair dice roll. Guaranteed to be random.
end;

http://www.formatio-reticularis.de

Lazarus 2.2.6 | FPC 3.2.2 | PPC, Intel, ARM | macOS, Windows, Linux

Thaddy

  • Hero Member
  • *****
  • Posts: 14205
  • Probably until I exterminate Putin.
Re: FPC & Efficient Neural Networks
« Reply #4 on: June 27, 2019, 05:32:25 pm »
Quote from: jwdietrich on June 27, 2019, 04:59:50 pm
Thanks, that is very interesting. Did you already write some documentation for your code? It could be useful for one of my projects, too.
It is certainly worth the trouble. The code is self-documenting to a certain extent, but you need a little background knowledge.
I have a dynamic traffic light simulation (2-way axis) based on traffic throughput (a simple example) and a heating domotics example (slightly more complex).
Maybe I can add some simple examples, together with schuler, to the wiki. I'll see what I can do.
Schuler has a pretty impressive recognition example. This library makes a good addition to the standard libraries that ship with Free Pascal; we don't have anything like that as standard yet...
Specialize a type, not a var.

schuler

  • Full Member
  • ***
  • Posts: 223
Re: FPC & Efficient Neural Networks
« Reply #5 on: July 01, 2019, 12:10:32 am »
@jwdietrich,
There is a simple example here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/supersimple/
There are also far more complex examples, but I think the above is a good start.

@Thaddy,
If you have your code available in a public repository, I would be happy to add links to it, maybe in CAI’s readme. What do you think?
I would like to add some more features before proposing it as a standard library.

Given that I’m already typing here and this topic is about efficiency, I would like to comment on a recent paper, https://arxiv.org/abs/1803.05407 , which shows very interesting properties of averaged NN weights. The averaging method from the paper was implemented in CAI as follows:
Code: Pascal
procedure TNNet.AddToWeightAverage(NewElement: TNNet; CurrentElementCount: integer);
begin
  MulMulAddWeights(CurrentElementCount/(CurrentElementCount+1), 1/(CurrentElementCount+1), NewElement);
end;
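With CurrentElementCount = n, this computes the running average W := (n·W + W_new)/(n+1). A usage sketch follows; AverageNN is assumed to be a second TNNet with the same architecture, initialised from NN’s weights, and TrainOneEpoch is a hypothetical training routine:
Code: Pascal
// Sketch: keep a running average of the weights, updated once per epoch.
// AverageNN is assumed to start as a copy of NN's weights, so passing
// Epoch as the element count averages element Epoch+1 into the mean.
for Epoch := 1 to MaxEpochs do
begin
  TrainOneEpoch(NN);                        // hypothetical training step
  AverageNN.AddToWeightAverage(NN, Epoch);  // running average of weights
end;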

In contrast to the above, some TensorFlow examples apply an exponential moving average: https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage . The exponential moving average has been implemented as follows:
Code: Pascal
procedure TNNet.AddToExponentialWeightAverage(NewElement: TNNet; Decay: TNeuralFloat);
begin
  MulMulAddWeights(Decay, 1 - Decay, NewElement);
end;
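Per the implementation above, each call computes W := Decay·W + (1−Decay)·W_new, so usage reduces to one call per training step (the 0.999 decay is TensorFlow’s commonly suggested value, not something from this post):
Code: Pascal
// Sketch: exponential moving average, updated after every training step.
// AvgNN is assumed to be a same-architecture TNNet holding the average.
AvgNN.AddToExponentialWeightAverage(NN, 0.999);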

:) Live long and prosper :)
« Last Edit: July 01, 2019, 12:20:50 am by schuler »

 
