
Author Topic: Conscious Artificial Intelligence - Project Update  (Read 1542 times)

schuler

  • Jr. Member
  • **
  • Posts: 57
Conscious Artificial Intelligence - Project Update
« on: November 24, 2017, 12:54:49 am »
 :) Hello :)
As some of you already know, I've been developing Neural Networks with Free Pascal / Lazarus and sometimes it's good to show progress.

The newest addition to the project is a convolutional neural network Pascal unit that supports convolution and other commonly needed operations. These are the available layers (a short usage sketch follows the list):
* TNNetInput (Input/output: 1D, 2D or 3D).
* TNNetFullConnect (Input/output: 1D, 2D or 3D).
* TNNetFullConnectReLU (Input/output: 1D, 2D or 3D).
* TNNetLocalConnect (Input/output: 1D, 2D or 3D - feature size: 1D or 2D). Similar to full connect with individual neurons.
* TNNetLocalConnectReLU (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolution (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolutionReLU (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetMaxPool (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConcat (Input/output: 1D, 2D or 3D) - Allows concatenating the results from previous layers.
* TNNetReshape (Input/output: 1D, 2D or 3D).
* TNNetSoftMax (Input/output: 1D, 2D or 3D).
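
As a quick illustration, here is a minimal sketch (not taken from the project sources) chaining some of these layers into a small sequential CNN for a 32x32x3 input. NumClasses is assumed to be a predeclared integer, and the parameterless TNNetSoftMax constructor is an assumption:
Code: Pascal
// Minimal hedged sketch using only the layers listed above;
// constructor arguments follow the patterns shown later in this post.
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(32, 32, 3) );
NN.AddLayer( TNNetConvolutionReLU.Create(16, 5, 0, 0) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetConvolutionReLU.Create(32, 5, 0, 0) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetFullConnectReLU.Create(32) );
NN.AddLayer( TNNetFullConnectReLU.Create(NumClasses) );
NN.AddLayer( TNNetSoftMax.Create() );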

These are the available weight initializers (a usage sketch follows the list):
* InitUniform(Value: TNeuralFloat = 1);
* InitLeCunUniform(Value: TNeuralFloat = 1);
* InitHeUniform(Value: TNeuralFloat = 1);
* InitGlorotBengioUniform(Value: TNeuralFloat = 1);
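
Assuming these initializers are methods that can be called on an individual layer (an assumption; check the unit for the exact call site), re-initializing a freshly added layer could look like:
Code: Pascal
// Hedged sketch: InitHeUniform is assumed callable on the layer object.
ConvLayer := NN.AddLayer( TNNetConvolutionReLU.Create(16, 5, 0, 0) );
ConvLayer.InitHeUniform(1);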

The source code is located here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uconvolutionneuralnetwork.pas

There is an extensive CIFAR-10 testing script located at:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/testcnnalgo/testcnnalgo.lpr

The API lets you create networks that diverge into parallel branches and later converge again, as in the example below:

Code: Pascal
// The Neural Network
NN := TNNet.Create();

// Network input; it will split into two branches that are later concatenated
TA := NN.AddLayer(TNNetInput.Create(32, 32, 3));

// First branch starting from TA (5x5 features)
NN.AddLayerAfter(TNNetConvolutionReLU.Create(16, 5, 0, 0), TA);
NN.AddLayer(TNNetMaxPool.Create(2));
NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));
NN.AddLayer(TNNetMaxPool.Create(2));
TB := NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));

// Another branch starting from TA (3x3 features)
NN.AddLayerAfter(TNNetConvolutionReLU.Create(16, 3, 0, 0), TA);
NN.AddLayer(TNNetMaxPool.Create(2));
NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));
NN.AddLayer(TNNetMaxPool.Create(2));
TC := NN.AddLayer(TNNetConvolutionReLU.Create(64, 6, 0, 0));

// Concatenates both branches so the NN has only one output path
NN.AddLayer(TNNetConcat.Create([TB, TC]));
NN.AddLayer(TNNetFullConnectReLU.Create(64));
NN.AddLayer(TNNetFullConnectReLU.Create(NumClasses));

As a simpler case, the example below shows how to create a small fully connected network: 3 inputs followed by two fully connected layers of 3 neurons each:
Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(3) );
NN.AddLayer( TNNetFullConnectReLU.Create(3) );
NN.AddLayer( TNNetFullConnectReLU.Create(3) );
NN.SetLearningRate(0.01, 0.8);

In the above example, the learning rate is 0.01 and the inertia (momentum) is 0.8.
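
For reference, these two parameters control the classic momentum update, which in textbook form is (a sketch of the standard rule, not necessarily the unit's exact internals):
Code: Pascal
// Standard momentum SGD update, per weight:
Delta  := Inertia * Delta - LearningRate * Gradient;
Weight := Weight + Delta;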

The following example shows how to train the network:
Code: Pascal
// InputVolume, PredictedVolume and vDesiredVolume are of the type TNNetVolume
NN.Compute(InputVolume);
NN.GetOutput(PredictedVolume);
vDesiredVolume.SetClassForReLU(DesiredClass);
NN.Backpropagate(vDesiredVolume);
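
Putting these steps together, a training pass over a labelled dataset could look like the hedged sketch below; GetSample and SampleCount are hypothetical helpers standing in for your own data access code:
Code: Pascal
// Hedged sketch of an epoch loop; GetSample/SampleCount are hypothetical.
for Epoch := 1 to 10 do
  for I := 0 to SampleCount - 1 do
  begin
    GetSample(I, InputVolume, DesiredClass); // fills input and label
    NN.Compute(InputVolume);
    NN.GetOutput(PredictedVolume);
    vDesiredVolume.SetClassForReLU(DesiredClass);
    NN.Backpropagate(vDesiredVolume);
  end;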

Recently, an AVX/AVX2-accelerated single-precision API was added so the Neural Networks can benefit from SIMD instructions:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uvolume.pas
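
As a hedged illustration of using the volume unit directly (the constructor arguments and the Add method are assumptions; check uvolume.pas for the actual API):
Code: Pascal
// Hedged sketch: element-wise addition of two volumes, which is the
// kind of operation the AVX/AVX2 code paths accelerate.
A := TNNetVolume.Create(32, 32, 3);
B := TNNetVolume.Create(32, 32, 3);
A.Add(B);
A.Free;
B.Free;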

Two videos showing this unit were created:
https://www.youtube.com/watch?v=qGnfwpKUTIQ
https://www.youtube.com/watch?v=Pnv174V_emw

In preparation for future OpenCL-based Neural Networks, an OpenCL wrapper was created:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ueasyopencl.pas
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ueasyopenclcl.pas

There is an example for this OpenCL wrapper here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/opencl/easy-trillion-test/

 :) Wishing everyone happy Pascal coding  :)
« Last Edit: November 25, 2017, 02:46:43 am by schuler »

DonAlfredo

  • Hero Member
  • *****
  • Posts: 876
Re: Conscious Artificial Intelligence - Project Update
« Reply #1 on: November 24, 2017, 09:51:35 am »
Very impressive. And OpenCL is a very useful feature for speeding things up.
Thanks.

schuler

  • Jr. Member
  • **
  • Posts: 57
Re: Conscious Artificial Intelligence - Project Update
« Reply #2 on: May 16, 2018, 01:28:01 am »
 :) Hello,  :)
I haven't updated this post for a long time. There are plenty of new features since the last update:

A new pooling layer:
* TNNetAvgPool (input/output: 1D, 2D or 3D)

Layers for inverse operations:
* TNNetDeLocalConnect (input/output: 1D, 2D or 3D)
* TNNetDeLocalConnectReLU (input/output: 1D, 2D or 3D)
* TNNetDeconvolution (input/output: 1D, 2D or 3D)
* TNNetDeconvolutionReLU (input/output: 1D, 2D or 3D)
* TNNetDeMaxPool (input/output: 1D, 2D or 3D)

New normalization layers:
* TNNetLayerMaxNormalization (input/output: 1D, 2D or 3D)
* TNNetLayerStdNormalization (input/output: 1D, 2D or 3D)

A dropout layer (a usage sketch for these new layers follows):
* TNNetDropout (input/output: 1D, 2D or 3D)
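
As a quick illustration, a sketch of plugging some of the new layers into a stack; the 0.5 rate argument to TNNetDropout.Create and the parameterless normalization constructor are assumptions about the new API:
Code: Pascal
// Hedged sketch: normalization and dropout in a small stack.
NN.AddLayer( TNNetConvolutionReLU.Create(64, 3, 0, 0) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetLayerStdNormalization.Create() );
NN.AddLayer( TNNetDropout.Create(0.5) );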

L2 regularization has been added. It's now possible to apply L2 to all layers or only to convolutional layers.
Code: Pascal
procedure SetL2DecayToConvolutionalLayers(pL2Decay: TNeuralFloat);
procedure SetL2Decay(pL2Decay: TNeuralFloat);
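
A hedged usage sketch of the two calls above (the 0.00001 decay is just an arbitrary example value):
Code: Pascal
NN.SetL2Decay(0.00001);                       // L2 on all layers
NN.SetL2DecayToConvolutionalLayers(0.00001);  // or only on convolutional layers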

A new video:
Increasing Image Resolution with Neural Networks
https://www.youtube.com/watch?v=jdFixaZ2P4w

Data Parallelism has been implemented. Experiments have been run with up to 40 parallel threads on virtual machines with up to 32 virtual cores.

Data Parallelism is implemented by TNNetDataParallelism. For example, you can create and run 40 parallel neural networks with:
Code: Pascal
NN := TNNet.Create();
...
FThreadNN := TNNetDataParallelism.Create(NN, 40);
...
ProcThreadPool.DoParallel(@RunNNThread, 0, FThreadNN.Count-1, Nil, FThreadNN.Count);
FThreadNN.AvgWeights(NN);
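
The body of RunNNThread is not shown here. A hedged sketch of what such a worker could look like, assuming the MTProcs callback signature and that TNNetDataParallelism allows indexed access (both assumptions; check the unit for details):
Code: Pascal
// Hedged sketch of a DoParallel worker; TMyTrainer and the indexed
// access into FThreadNN are assumptions, not code from the project.
procedure TMyTrainer.RunNNThread(Index: PtrInt; Data: Pointer;
  Item: TMultiThreadProcItem);
var
  LocalNN: TNNet;
begin
  // Each thread trains its own copy; AvgWeights later merges them.
  LocalNN := FThreadNN[Index];
  // ... Compute and Backpropagate on this thread's share of the data ...
end;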

 :) Wishing everyone happy coding  :)

 
