
Author Topic: Conscious Artificial Intelligence - Project Update  (Read 60456 times)

schuler

  • Full Member
  • ***
  • Posts: 223
Conscious Artificial Intelligence - Project Update
« on: November 24, 2017, 12:54:49 am »
 :) Hello :)
As some of you already know, I've been developing neural networks with Free Pascal / Lazarus, and sometimes it's good to show progress.

The newest addition to the project is a convolutional neural network Pascal unit supporting convolution and other common operations. These are the available layers:
* TNNetInput (Input/output: 1D, 2D or 3D).
* TNNetFullConnect (Input/output: 1D, 2D or 3D).
* TNNetFullConnectReLU (Input/output: 1D, 2D or 3D).
* TNNetLocalConnect (Input/output: 1D, 2D or 3D - feature size: 1D or 2D). Similar to full connect with individual neurons.
* TNNetLocalConnectReLU (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolution (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolutionReLU (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetMaxPool (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConcat (Input/output: 1D, 2D or 3D) - Allows concatenating the result from previous layers.
* TNNetReshape (Input/output: 1D, 2D or 3D).
* TNNetSoftMax (Input/output: 1D, 2D or 3D).

These are the available weight initializers (a usage sketch follows the list):
* InitUniform(Value: TNeuralFloat = 1);
* InitLeCunUniform(Value: TNeuralFloat = 1);
* InitHeUniform(Value: TNeuralFloat = 1);
* InitGlorotBengioUniform(Value: TNeuralFloat = 1);
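
A hedged usage sketch (assumption: these initializers are methods of the layer object returned by AddLayer; the layer type and parameters below are only illustrative):
Code: Pascal
// minimal sketch, assuming the initializers above are methods of the
// layer object returned by AddLayer
MyLayer := NN.AddLayer( TNNetConvolutionReLU.Create(16, 5, 2, 1) );
MyLayer.InitHeUniform();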

The source code is located here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uconvolutionneuralnetwork.pas

There is an extensive CIFAR-10 testing script located at:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/testcnnalgo/testcnnalgo.lpr

The API allows you to create divergent/parallel and convergent layers, as per the example below:

Code: Pascal
// Creates The Neural Network
NN := TNNet.Create();

// This network splits into 2 paths and then is later concatenated
InputLayer := NN.AddLayer(TNNetInput.Create(32, 32, 3));

// First branch starting from InputLayer (5x5 features)
NN.AddLayerAfter(TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}2, {Stride=}1), InputLayer);
NN.AddLayer(TNNetMaxPool.Create(2));
NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}5, {Padding=}2, {Stride=}1));
NN.AddLayer(TNNetMaxPool.Create(2));
EndOfFirstPath := NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}5, {Padding=}2, {Stride=}1));

// Another branch starting from InputLayer (3x3 features)
NN.AddLayerAfter(TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}3, {Padding=}1, {Stride=}1), InputLayer);
NN.AddLayer(TNNetMaxPool.Create(2));
NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1));
NN.AddLayer(TNNetMaxPool.Create(2));
EndOfSecondPath := NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1));

// Concats both branches into one branch.
NN.AddLayer(TNNetDeepConcat.Create([EndOfFirstPath, EndOfSecondPath]));
NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1));
NN.AddLayer(TNNetLayerFullConnectReLU.Create(64));
NN.AddLayer(TNNetLayerFullConnectReLU.Create(NumClasses));

As a simpler case, this example shows how to create a small fully connected feed-forward network:
Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(3) );
NN.AddLayer( TNNetLayerFullConnectReLU.Create(3) );
NN.AddLayer( TNNetLayerFullConnectReLU.Create(3) );
NN.SetLearningRate(0.01, 0.8);

In the above example, the learning rate is 0.01 and the inertia (momentum) is 0.8.

The following example shows how to train your network:
Code: Pascal
// InputVolume, PredictedVolume and vDesiredVolume are of the type TNNetVolume
NN.Compute(InputVolume);
NN.GetOutput(PredictedVolume);
vDesiredVolume.SetClassForReLU(DesiredClass);
NN.Backpropagate(vDesiredVolume);
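
For context, these calls typically sit inside a loop over the training set. A minimal sketch, where Samples, DesiredClasses and SampleCount are illustrative names rather than part of the API:
Code: Pascal
// hypothetical epoch loop around the calls above
for SampleIdx := 0 to SampleCount - 1 do
begin
  InputVolume.Copy(Samples[SampleIdx]);                      // load one sample
  NN.Compute(InputVolume);                                   // forward pass
  NN.GetOutput(PredictedVolume);                             // read the prediction
  vDesiredVolume.SetClassForReLU(DesiredClasses[SampleIdx]); // build the target
  NN.Backpropagate(vDesiredVolume);                          // update weights
end;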

Recently, an ultra-fast single-precision AVX/AVX2 API was coded so neural networks could benefit from it:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uvolume.pas

Two videos were created showing this unit:
https://www.youtube.com/watch?v=qGnfwpKUTIQ
https://www.youtube.com/watch?v=Pnv174V_emw

In preparation for future OpenCL-based neural networks, an OpenCL wrapper was created:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ueasyopencl.pas
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ueasyopenclcl.pas

There is an example for this OpenCL wrapper here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/opencl/easy-trillion-test/

 :) Wish everyone happy pascal coding  :)
« Last Edit: October 15, 2020, 10:21:34 am by schuler »

DonAlfredo

  • Hero Member
  • *****
  • Posts: 1738
Re: Conscious Artificial Intelligence - Project Update
« Reply #1 on: November 24, 2017, 09:51:35 am »
Very impressive. And OpenCL is a very useful feature for speeding things up.
Thanks.

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #2 on: May 16, 2018, 01:28:01 am »
 :) Hello,  :)
I haven't updated this post for a long time. There are plenty of new features since the last update:

A new pooling layer:
* TNNetAvgPool (input/output: 1D, 2D or 3D)

Layers for inverse operations:
* TNNetDeLocalConnect (input/output: 1D, 2D or 3D)
* TNNetDeLocalConnectReLU (input/output: 1D, 2D or 3D)
* TNNetDeconvolution (input/output: 1D, 2D or 3D)
* TNNetDeconvolutionReLU (input/output: 1D, 2D or 3D)
* TNNetDeMaxPool (input/output: 1D, 2D or 3D)

New normalization layers:
* TNNetLayerMaxNormalization (input/output: 1D, 2D or 3D)
* TNNetLayerStdNormalization (input/output: 1D, 2D or 3D)

A dropout layer:
* TNNetDropout (input/output: 1D, 2D or 3D)

L2 regularization has been added. It's now possible to apply L2 to all layers or only to convolutional layers.
Code: Pascal
procedure SetL2DecayToConvolutionalLayers(pL2Decay: TNeuralFloat);
procedure SetL2Decay(pL2Decay: TNeuralFloat);
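
For instance, assuming these are methods of the network object (the decay value is only an illustrative guess, not a recommendation):
Code: Pascal
NN.SetL2DecayToConvolutionalLayers(0.00001); // L2 on convolutional layers only
// or, to apply it to all layers:
NN.SetL2Decay(0.00001);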

A new video:
Increasing Image Resolution with Neural Networks
https://www.youtube.com/watch?v=jdFixaZ2P4w

Data parallelism has been implemented. Experiments have been made with up to 40 parallel threads on virtual machines with up to 32 virtual cores.

Data parallelism is implemented in TNNetDataParallelism. For example, you can create and run 40 parallel neural networks with:
Code: Pascal
NN := TNNet.Create();
...
FThreadNN := TNNetDataParallelism.Create(NN, 40);
...
ProcThreadPool.DoParallel(@RunNNThread, 0, FThreadNN.Count-1, Nil, FThreadNN.Count);
FThreadNN.AvgWeights(NN);
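
The @RunNNThread callback isn't shown above. A hedged sketch of what such a method might look like, assuming the standard MTProcs callback signature, an illustrative TTrainer class holding FThreadNN as a field, and that TNNetDataParallelism exposes the per-thread networks by index:
Code: Pascal
// hypothetical sketch; requires the MTProcs unit (ProcThreadPool)
procedure TTrainer.RunNNThread(Index: PtrInt; Data: Pointer;
  Item: TMultiThreadProcItem);
var
  LocalNN: TNNet;
begin
  LocalNN := FThreadNN[Index]; // each thread trains its own copy of the network
  // ...Compute/Backpropagate over this thread's share of the samples...
end;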

 :) Wish everyone happy coding  :)
« Last Edit: July 31, 2018, 05:30:11 am by schuler »

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #3 on: July 31, 2018, 05:21:45 am »
It's time for new features and examples!

Here are some new examples:

CIFAR-10 with OpenCL on convolution forward step: https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/visualCifar10OpenCL/

CIFAR-10 batch updates example:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/

Mario Werner independently coded and shared examples for CIFAR-10 and MNIST:
https://bitbucket.org/108bits/cai-implementations/src/c8c027b1a0d636713f7ebb70a738f1cd7117a7a4?at=master

BTW, if you have examples that you've shared in public repositories, please let me know. I love seeing what you are doing.

These are new features:

OpenCL-based convolution: on tiny images (32x32x3 or 28x28x1), you won't get much improvement over the well-optimized AVX code, but you'll find impressive improvement on large input volumes. OpenCL can be enabled by calling:
Code: Pascal
procedure EnableOpenCL(platform_id: cl_platform_id; device_id: cl_device_id);
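
A hedged usage sketch (assuming the procedure is called on the network instance; PlatformId and DeviceId are placeholders you would obtain from the OpenCL wrapper units listed earlier):
Code: Pascal
// PlatformId: cl_platform_id and DeviceId: cl_device_id are placeholders
NN.EnableOpenCL(PlatformId, DeviceId);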

Batch Updates Support
Batch updates can be enabled/disabled via:
Code: Pascal
procedure SetBatchUpdate(pBatchUpdate: boolean);
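
A minimal sketch of how batch mode might pair with the training calls (the explicit weight update at the end is an assumption, not confirmed API usage):
Code: Pascal
NN.SetBatchUpdate(true); // accumulate deltas instead of updating per sample
// ...Compute/Backpropagate over one batch of samples...
NN.UpdateWeights();      // assumption: apply the accumulated deltas at batch end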

New Normalization layers:
* TNNetLocalResponseNorm2D: (input/output: 2D or 3D).
* TNNetLocalResponseNormDepth: (input/output: 2D or 3D).

New Concatenation / Splitting layers:
* TNNetDeepConcat (input/output: 1D, 2D or 3D) - Concatenates independent NN branches into the Depth axis.
* TNNetSplitChannels (input: 1D, 2D or 3D / output: 1D, 2D or 3D). Splits layers/channels from input. This is useful for creating divergent NN branches.

New ready-to-use data augmentation methods in TVolume (a short illustration follows the list):
* procedure FlipX();
* procedure FlipY();
* procedure CopyCropping(Original: TVolume; StartX, StartY, pSizeX, pSizeY: integer);
* procedure CopyResizing(Original: TVolume; NewSizeX, NewSizeY: integer);
* procedure AddGaussianNoise(pMul: TNeuralFloat);
* procedure AddSaltAndPepper(pNum: integer; pSalt: integer = 2; pPepper: integer = -2);
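
As a quick illustration of these calls on an input volume (the variable name and parameter values are arbitrary):
Code: Pascal
ImgInput.FlipX();               // horizontal mirror
ImgInput.AddGaussianNoise(0.1); // mild additive noise
ImgInput.AddSaltAndPepper(10);  // 10 points with the default salt/pepper values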

Final random thought:
As Pascal is easy to teach and learn (in my opinion), it might be perfect for teaching/learning neural networks.

:) Wish everyone happy pascal coding :)
« Last Edit: July 31, 2018, 05:29:06 am by schuler »

Thaddy

  • Hero Member
  • *****
  • Posts: 14159
  • Probably until I exterminate Putin.
Re: Conscious Artificial Intelligence - Project Update
« Reply #4 on: July 31, 2018, 08:57:54 am »
Quote
As Pascal is easy to teach and learn (in my opinion), it might be perfect for teaching/learning neural networks.
Actually, that has been the case for a very long time; in the early 80's it was pretty much the standard.
That's the way I learned some of the basics and taught some of the basics at university back in the 80's.
(Basically I learned it, wrote examples and taught it - almost - in the same week, writing the curriculum on the go... <hmm..> O:-) :-[ )
We used it in teaching for problems like Traveling Salesman and Knapsack. Lots of fun. I must still have some line-printer code from those days... the green/white striped paper with holes on both sides.
Let's see if I can put it into your way more advanced software form... Great job.
« Last Edit: July 31, 2018, 09:05:36 am by Thaddy »
Specialize a type, not a var.

Trenatos

  • Hero Member
  • *****
  • Posts: 533
    • MarcusFernstrom.com
Re: Conscious Artificial Intelligence - Project Update
« Reply #5 on: July 31, 2018, 08:18:43 pm »
This is very cool to see, many thanks for sharing!

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #6 on: August 02, 2018, 11:30:33 pm »
Hello,
I got a couple of private questions from forum members and decided to post a general reply here, as it might help others.

This link gives a general view of CNNs:
http://cs231n.github.io/convolutional-networks/

The above link mentions some well-known NN architectures. I would like to show here how to implement some of them with CAI:

Yann LeCun's LeNet-5:

Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(28, 28, 1) );
NN.AddLayer( TNNetConvolution.Create(6, 5, 0, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetConvolution.Create(16, 5, 0, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetFullConnect.Create(120) );
NN.AddLayer( TNNetFullConnect.Create(84) );
NN.AddLayer( TNNetFullConnectLinear.Create(10) );
NN.AddLayer( TNNetSoftMax.Create() );

AlexNet can be implemented as follows:

Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(227, 227, 3) );

//Conv1 + ReLU
NN.AddLayer( TNNetConvolutionReLU.Create(96, 11, 0, 4) );
NN.AddLayer( TNNetMaxPool.Create(3,2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create(5) );

//Conv2 + ReLU
NN.AddLayer( TNNetConvolutionReLU.Create(256, 5, 2, 1) );
NN.AddLayer( TNNetMaxPool.Create(3,2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create(5) );

//Conv3,4,5 + ReLU (384, 384 and 256 features as in the original AlexNet)
NN.AddLayer( TNNetConvolutionReLU.Create(384, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(384, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(3,2) );

// Dropouts and Dense Layers
NN.AddLayer( TNNetDropout.Create(0.5) );
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetDropout.Create(0.5) );
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetFullConnectLinear.Create(1000) );
NN.AddLayer( TNNetSoftMax.Create() );

The above implementation is based on:
https://medium.com/@smallfishbigsea/a-walk-through-of-alexnet-6cbd137a5637

VGGNet has an interesting architecture: it always uses 3x3 filters, making its implementation easy to understand:

Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(224, 224, 3) );
NN.AddLayer( TNNetConvolutionReLU.Create(64, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(64, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//112x112x64
NN.AddLayer( TNNetConvolutionReLU.Create(128, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(128, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//56x56x128
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//28x28x256
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//14x14x512
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//7x7x512
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetFullConnectLinear.Create(1000) );
NN.AddLayer( TNNetSoftMax.Create() );

The above architectures have been added to the API as follows:
Code: Pascal
THistoricalNets = class(TNNet)
  public
    procedure AddLeCunLeNet5(IncludeInput: boolean);
    procedure AddAlexNet(IncludeInput: boolean);
    procedure AddVGGNet(IncludeInput: boolean);
end;
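
Based on the declaration above, using one of them might look like this (a sketch, not tested code):
Code: Pascal
NN := THistoricalNets.Create();
NN.AddLeCunLeNet5({IncludeInput=}true); // builds the whole LeNet-5 stack
NN.SetLearningRate(0.01, 0.8);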

Hope it helps and happy pascal coding!
« Last Edit: August 06, 2018, 09:45:26 pm by schuler »

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #7 on: March 13, 2019, 10:19:49 pm »
Hello,
This message is intended for those who love Pascal and neural networks.

After a long time, here we go with a new post. This time, the focus is more on how to do things than on listing new features. I would like to show how to replicate examples made for Keras and TensorFlow with CAI. As shown in the paper https://arxiv.org/abs/1412.6806, it’s possible to create efficient neural networks with just convolutional layers. Therefore, we’ll start from convolutional layers. There is a simple Keras example located at https://keras.io/examples/cifar10_cnn/. In this code, convolutional layers are added with:
Code: Python
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))

The above can be done with CAI with just one line of code:
Code: Pascal
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );

It’s also possible to add it with two lines of code, but this isn’t recommended as it’s less efficient in CAI:
Code: Pascal
NN.AddLayer( TNNetConvolutionLinear.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetReLU.Create() );

This is the MaxPooling from Keras:
Code: Python
model.add(MaxPooling2D(pool_size=(2, 2)))

And this is the equivalent from CAI:
Code: Pascal
NN.AddLayer( TNNetMaxPool.Create(2) );

The full convolutional neural network can be defined in plain Free Pascal with:
Code: Pascal
NN.AddLayer( TNNetInput.Create(32,32,iInputDepth) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetFullConnectReLU.Create({neurons}512) );
NN.AddLayer( TNNetFullConnectLinear.Create(NumClasses) );
NN.AddLayer( TNNetSoftMax.Create() );

As you can see above, there aren’t any dropout layers in the Free Pascal implementation. This was done intentionally as I prefer using data augmentation instead of dropout layers. The data augmentation technique used here is very simple:
  • We crop the image and then stretch back to the original size.
  • We randomly flip the image horizontally.
  • We randomly make the image gray.
The above can be easily done in Free Pascal:
Code: Pascal
// Crop and Stretch
CropSizeX := random(9);
CropSizeY := random(9);
ImgInputCp.CopyCropping(ImgVolumes[ImgIdx], random(CropSizeX), random(CropSizeY), ImgVolumes[ImgIdx].SizeX-CropSizeX, ImgVolumes[ImgIdx].SizeY-CropSizeY);
ImgInput.CopyResizing(ImgInputCp, ImgVolumes[ImgIdx].SizeX, ImgVolumes[ImgIdx].SizeY);
// Flip the Image
if Random(1000) > 500 then
begin
  ImgInput.FlipX();
end;
// Make it Gray
if (Random(1000) > 750) then
begin
  ImgInput.MakeGray(color_encoding);
end;

Running the above in CAI gives really impressive results (they seem to me better than Keras itself…). In case you would like to try it yourself, you can get this Lazarus project:
https://sourceforge.net/p/cai/svncode/724/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/

When you compile and run it, select “2 – Keras inspired” in the “Algo” dropdown, check “Multiple Samples at Validation” and then start it. Although this example looks simple, it’s no numerical toy: it actually has more than 2.1 million neuronal weights/connections!

I would like to also show you another example ported to Free Pascal. This time it’s an example from TensorFlow: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py
The network structure can be found in “def inference(images)”. This structure was ported to Free Pascal as follows:
Code: Pascal
NN.AddLayer( TNNetInput.Create(24,24,3) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}2, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create({size}3,{stride}2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create({diameter}9) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}2, {stride}1) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create({diameter}9) );
NN.AddLayer( TNNetMaxPool.Create({size}3,{stride}2) );
NN.AddLayer( TNNetFullConnectReLU.Create({neurons}384) );
NN.AddLayer( TNNetFullConnectReLU.Create({neurons}192) );
NN.AddLayer( TNNetFullConnectLinear.Create(NumClasses) );
NN.AddLayer( TNNetSoftMax.Create() );

Running it in CAI/Free Pascal, you’ll get classification accuracy similar to the results reported on the TensorFlow site. In case you would like to run this yourself, select “3 – TF inspired”, “Multiple Samples at Validation” and “24x24 image cropping”. The data augmentation technique found in this example is a bit different from before: we crop the original image to 24x24 at random positions. We also randomly flip it and sometimes make it gray.

:-) I wish everyone happy pascal neural networking! :-)
« Last Edit: March 13, 2019, 10:30:53 pm by schuler »

avra

  • Hero Member
  • *****
  • Posts: 2514
    • Additional info
Re: Conscious Artificial Intelligence - Project Update
« Reply #8 on: March 13, 2019, 10:55:30 pm »
Quote
this post is more about how to do things than just listing new features
Much appreciated. Thank you!
ct2laz - Conversion between Lazarus and CodeTyphon
bithelpers - Bit manipulation for standard types
pasettimino - Siemens S7 PLC lib

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #9 on: April 21, 2019, 08:50:10 am »
 @Avra,
I'm starting to believe that most of the examples showing how to use CAI are too advanced.

I just added two entry-level examples. The first shows how to learn the boolean functions XOR, AND and OR:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/supersimple/

The next is the same as above, with Pearson correlation:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/supersimplecorrelation/
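
For reference, a network for two-input boolean functions can be tiny. A sketch along the lines of those examples (not their exact code):
Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(2) );           // two boolean inputs
NN.AddLayer( TNNetFullConnectReLU.Create(3) ); // small hidden layer
NN.AddLayer( TNNetFullConnectReLU.Create(1) ); // single output
NN.SetLearningRate(0.01, 0.8);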

:) I hope it helps and happy pascal coding to everyone. :)

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #10 on: July 29, 2019, 12:31:10 pm »
 :) Hello :)

Instead of reporting new features (there are plenty of new neuronal layer types), I'll report a test here. For the past 2 years, I've probably spent more time testing than coding. The test is simple: using a DenseNet-like architecture with 1M trainable parameters, normalization, L2 regularization, a cyclical learning rate and weight averaging, how high can the CIFAR-10 image classification accuracy get?

The experiment was done with SVN revision 907:
https://sourceforge.net/p/cai/svncode/907/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/

In the case that you intend to reproduce it, you should use exactly the same parameters found in the attached screenshot except for "Logical Threads" and "Physical Threads" (use your CPU core count instead). This experiment was run on the slowest (and cheapest) machine I have access to.

On my end, I got about 94% CIFAR-10 classification accuracy.

:) Wish everyone happy pascal coding. :)
« Last Edit: July 29, 2019, 12:34:30 pm by schuler »

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #11 on: July 29, 2019, 06:37:32 pm »
 :) Hello :)
I promise I'll try not to post anything after this for some time.

I've just done an experiment comparing CAI with Keras/TF. CAI and TF were both compiled with the AVX instruction set (no AVX2 or AVX-512). This is the FPC code:
Code: Pascal
NN.AddLayer( TNNetInput.Create(32, 32, 3) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}0, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create(4) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetFullConnectReLU.Create(32) );
NN.AddLayer( TNNetFullConnectReLU.Create(32) );
NN.AddLayer( TNNetFullConnectLinear.Create(10) );
NN.AddLayer( TNNetSoftMax.Create() );

This is the equivalent Keras code:
Code: Python
model = Sequential()
model.add(Conv2D(64, (5, 5), padding='valid',
                 input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(4, 4)))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dense(num_classes))
model.add(Activation('softmax'))

Using the very same machine (Google Cloud with 16 virtual cores and no GPU), I ran both models. This is the output from Keras showing that each epoch takes 20 seconds to compute:
Quote
Epoch 24/350
312/312 [==============================] - 20s 64ms/step - loss: 1.3232 - acc: 0.5270 - val_loss: 1.2861 - val_acc: 0.5369
Epoch 25/350
312/312 [==============================] - 20s 63ms/step - loss: 1.3297 - acc: 0.5226 - val_loss: 1.2873 - val_acc: 0.5361
Epoch 26/350
312/312 [==============================] - 20s 64ms/step - loss: 1.3144 - acc: 0.5295 - val_loss: 1.2868 - val_acc: 0.5378
Epoch 27/350
312/312 [==============================] - 20s 63ms/step - loss: 1.3156 - acc: 0.5284 - val_loss: 1.2915 - val_acc: 0.5356
Epoch 28/350
312/312 [==============================] - 20s 63ms/step - loss: 1.3150 - acc: 0.5281 - val_loss: 1.2776 - val_acc: 0.5397
Epoch 29/350
312/312 [==============================] - 20s 64ms/step - loss: 1.3057 - acc: 0.5332 - val_loss: 1.2737 - val_acc: 0.5415

In CAI, each epoch is computed in about 16 or 17 seconds as per the log (time in seconds is incremental and is the last field below; for example, epoch 32 finishes at 539 s and epoch 31 at 523 s, a difference of 16 seconds):
Quote
31,0.7509,0.7379,0.6683,0.7780,0.6491,0.6031,0.0010000,523
32,0.7565,0.6392,0.6981,0.7679,0.6725,0.6202,0.0010000,539
33,0.7482,0.8476,0.7243,0.7678,0.6851,0.6034,0.0010000,555
34,0.7653,0.5730,0.5781,0.7665,0.6971,0.6179,0.0010000,571
35,0.7668,0.7007,0.6337,0.7800,0.6464,0.5884,0.0010000,588

Am I saying that CAI will always perform better? Absolutely not. CAI is likely to be faster on some models and slower on others. I haven't tested enough to be able to indicate when it will be faster or slower.

:) Anyway, wish everyone happy pascal coding. :)
« Last Edit: July 29, 2019, 06:39:13 pm by schuler »

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #12 on: September 02, 2019, 10:33:01 pm »
 :) Hello :),
I'm thinking about making a change that will break existing code, but it will make this library look a lot better. Before I do this change, I would like to have some feedback please.

The change is: renaming library units and then creating a package (version 1.0).

All library units will start with "Neural". These are some renaming examples (a migration sketch follows the list):
uconvolutionneuralnetwork -> NeuralNet
uvolume -> NeuralVolume
ueasyopencl -> NeuralOpenCl
uevolutionary -> NeuralEvolution
ucifar10 -> NeuralDatasets
ubyteprediction -> NeuralPrediction
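
For existing code, the migration should mostly mean updating the uses clause, e.g.:
Code: Pascal
// before
uses uvolume, uconvolutionneuralnetwork;
// after (with the proposed names)
uses NeuralVolume, NeuralNet;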

What do you think? Please feel free to ask for other changes.

avra

  • Hero Member
  • *****
  • Posts: 2514
    • Additional info
Re: Conscious Artificial Intelligence - Project Update
« Reply #13 on: September 03, 2019, 12:11:36 am »
Quote
I'm thinking about making a change that will break existing code, but it will make this library look a lot better. Before I do this change, I would like to have some feedback please.
I do not mind the suggested changes, but it would be nice if this thread and the other examples get corrected so that they still work.
ct2laz - Conversion between Lazarus and CodeTyphon
bithelpers - Bit manipulation for standard types
pasettimino - Siemens S7 PLC lib

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #14 on: September 23, 2019, 04:06:10 am »
... I'm approaching 1000 commits ... And this is the "new SVN repository" since I moved from CVS to SVN many years ago. I'm getting old...  Might be time to move to git ... Anyway ...

@avra
A new Neural API version with major changes/improvements is now under construction:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/neural/

To whom it may concern,
All unit names now start with "neural". All examples have already been fixed. The old "libs" folder has been kept for backward compatibility, so the next version won't break any existing code.

Looking for sources of inspiration: Keras has a useful "fit" method:
https://keras.rstudio.com/reference/fit.html

"fit" like methods and classes are now under construction for CAI! YAY! neuralfit.pas has already more than 700 lines of source code.

Soon, I intend to have deep learning examples ready to run on Google Colab with Lazarus/FPC/CAI NEURAL API.

:) Wish everyone happy pascal coding :)
« Last Edit: September 23, 2019, 11:59:59 am by schuler »

 
