Lazarus

Announcements => Third party => Topic started by: schuler on November 24, 2017, 12:54:49 am

Title: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 24, 2017, 12:54:49 am
 :) Hello :)
As some of you already know, I've been developing Neural Networks with Free Pascal / Lazarus and sometimes it's good to show progress.

The newest addition to the project is a convolutional neural network Pascal unit that supports convolution and other commonly used operations. These are the available layers:
* TNNetInput (Input/output: 1D, 2D or 3D).
* TNNetFullConnect (Input/output: 1D, 2D or 3D).
* TNNetFullConnectReLU (Input/output: 1D, 2D or 3D).
* TNNetLocalConnect (Input/output: 1D, 2D or 3D - feature size: 1D or 2D). Similar to full connect with individual neurons.
* TNNetLocalConnectReLU (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolution (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolutionReLU (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetMaxPool (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConcat (Input/output: 1D, 2D or 3D) - Allows concatenating the result from previous layers.
* TNNetReshape (Input/output: 1D, 2D or 3D).
* TNNetSoftMax (Input/output: 1D, 2D or 3D).

These are the available weight initializers:
* InitUniform(Value: TNeuralFloat = 1);
* InitLeCunUniform(Value: TNeuralFloat = 1);
* InitHeUniform(Value: TNeuralFloat = 1);
* InitGlorotBengioUniform(Value: TNeuralFloat = 1);
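The initializers above follow the usual fan-in based uniform schemes. As a rough illustration (this is not CAI code, and the exact scaling factors CAI applies may differ), here is a Python sketch of the He and Glorot/Bengio uniform ranges:

```python
import math
import random

def he_uniform(fan_in, count):
    # He et al. (2015) uniform initializer: U(-limit, limit), limit = sqrt(6 / fan_in)
    limit = math.sqrt(6.0 / fan_in)
    return [random.uniform(-limit, limit) for _ in range(count)]

def glorot_uniform(fan_in, fan_out, count):
    # Glorot & Bengio (2010) uniform initializer: limit = sqrt(6 / (fan_in + fan_out))
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [random.uniform(-limit, limit) for _ in range(count)]
```

The general idea in both cases is the same: scale the random range by the number of incoming (and, for Glorot, outgoing) connections so activations neither explode nor vanish.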

The source code is located here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uconvolutionneuralnetwork.pas

There is an extensive CIFAR-10 testing script located at:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/testcnnalgo/testcnnalgo.lpr

The API allows you to create divergent/parallel and convergent layers, as in the example below:

Code: Pascal
// The Neural Network
NN := TNNet.Create();

// Network that splits into two branches that are later concatenated
TA := NN.AddLayer(TNNetInput.Create(32, 32, 3));

// First branch starting from TA (5x5 features)
NN.AddLayerAfter(TNNetConvolutionReLU.Create(16, 5, 0, 0), TA);
NN.AddLayer(TNNetMaxPool.Create(2));
NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));
NN.AddLayer(TNNetMaxPool.Create(2));
TB := NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));

// Another branch starting from TA (3x3 features)
NN.AddLayerAfter(TNNetConvolutionReLU.Create(16, 3, 0, 0), TA);
NN.AddLayer(TNNetMaxPool.Create(2));
NN.AddLayer(TNNetConvolutionReLU.Create(64, 5, 0, 0));
NN.AddLayer(TNNetMaxPool.Create(2));
TC := NN.AddLayer(TNNetConvolutionReLU.Create(64, 6, 0, 0));

// Concatenates both branches so the NN has only one end
NN.AddLayer(TNNetConcat.Create([TB, TC]));
NN.AddLayer(TNNetLayerFullConnectReLU.Create(64));
NN.AddLayer(TNNetLayerFullConnectReLU.Create(NumClasses));

As a simpler case, this example shows how to create a small fully connected feed-forward network with 3 inputs and two 3-neuron layers:
Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(3) );
NN.AddLayer( TNNetLayerFullConnectReLU.Create(3) );
NN.AddLayer( TNNetLayerFullConnectReLU.Create(3) );
NN.SetLearningRate(0.01, 0.8);

In the example above, the learning rate is 0.01 and the inertia (momentum) is 0.8.
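To illustrate what learning rate and inertia do to a single weight update, here is a hedged Python sketch of classic SGD with momentum (illustrative only, not CAI code; CAI's exact update rule may differ):

```python
def sgd_momentum_step(weights, velocities, grads, lr=0.01, inertia=0.8):
    # Each step: velocity = inertia * velocity - lr * gradient, then weight += velocity.
    # The inertia term keeps part of the previous update direction, smoothing training.
    for i in range(len(weights)):
        velocities[i] = inertia * velocities[i] - lr * grads[i]
        weights[i] += velocities[i]
    return weights, velocities
```

With a constant gradient, the velocity builds up over successive steps, which is exactly why a momentum/inertia term accelerates descent along consistent directions.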

The following example shows how to train the network:
Code: Pascal
// InputVolume and vDesiredVolume are of the type TNNetVolume
NN.Compute(InputVolume);
NN.GetOutput(PredictedVolume);
vDesiredVolume.SetClassForReLU(DesiredClass);
NN.Backpropagate(vDesiredVolume);

Recently, a very fast single-precision AVX/AVX2 API was coded so neural networks can benefit from it:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uvolume.pas

Two videos were created showing this unit:
https://www.youtube.com/watch?v=qGnfwpKUTIQ
https://www.youtube.com/watch?v=Pnv174V_emw

In preparation for future OpenCL based Neural Networks, an OpenCL wrapper was created:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ueasyopencl.pas
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ueasyopenclcl.pas

There is an example for this OpenCL wrapper here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/opencl/easy-trillion-test/

 :) Wish everyone happy pascal coding  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: DonAlfredo on November 24, 2017, 09:51:35 am
Very impressive. And OpenCL is a very useful feature for speeding things up.
Thanks.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 16, 2018, 01:28:01 am
 :) Hello,  :)
I haven't updated this post for a long time. There are plenty of new features since the last update:

A new pooling layer:
* TNNetAvgPool (input/output: 1D, 2D or 3D)

Layers for inverse operations:
* TNNetDeLocalConnect (input/output: 1D, 2D or 3D)
* TNNetDeLocalConnectReLU (input/output: 1D, 2D or 3D)
* TNNetDeconvolution (input/output: 1D, 2D or 3D)
* TNNetDeconvolutionReLU (input/output: 1D, 2D or 3D)
* TNNetDeMaxPool (input/output: 1D, 2D or 3D)

New normalization layers:
* TNNetLayerMaxNormalization (input/output: 1D, 2D or 3D)
* TNNetLayerStdNormalization (input/output: 1D, 2D or 3D)

A dropout layer:
* TNNetDropout (input/output: 1D, 2D or 3D)

L2 regularization has been added. It's now possible to apply L2 to all layers or only to convolutional layers.
Code: Pascal
procedure SetL2DecayToConvolutionalLayers(pL2Decay: TNeuralFloat);
procedure SetL2Decay(pL2Decay: TNeuralFloat);
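Conceptually, L2 decay shrinks every weight toward zero by a small fraction on each update. A minimal Python sketch of the idea (illustrative only; CAI's exact formula may also factor in the learning rate):

```python
def apply_l2_decay(weights, l2_decay):
    # L2 regularization: shrink each weight toward zero in proportion to its value.
    # Large weights are penalized more, which discourages overfitting.
    return [w * (1.0 - l2_decay) for w in weights]
```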

A new video:
Increasing Image Resolution with Neural Networks
https://www.youtube.com/watch?v=jdFixaZ2P4w (https://www.youtube.com/watch?v=jdFixaZ2P4w)

Data Parallelism has been implemented. Experiments have been made with up to 40 parallel threads in virtual machines with up to 32 virtual cores.

Data parallelism is implemented in TNNetDataParallelism. For example, you can create and run 40 parallel neural networks with:
Code: Pascal
NN := TNNet.Create();
...
FThreadNN := TNNetDataParallelism.Create(NN, 40);
...
ProcThreadPool.DoParallel(@RunNNThread, 0, FThreadNN.Count-1, Nil, FThreadNN.Count);
FThreadNN.AvgWeights(NN);
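The idea behind the AvgWeights step is that each thread trains its own replica of the network and the replicas are then averaged back into the main network. A minimal Python sketch of weight averaging (illustrative only, not CAI code):

```python
def avg_weights(replicas):
    # Average the corresponding weights of every replica into a single weight list.
    # Each replica is a flat list of weights; all replicas have the same length.
    n = len(replicas)
    return [sum(ws) / n for ws in zip(*replicas)]
```

In the Pascal code above, each of the 40 replicas would be trained on a different slice of the data before the averaged weights are written back to NN.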

 :) Wish everyone happy coding  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 31, 2018, 05:21:45 am
It's time for new features and examples!

Here are some new examples:

CIFAR-10 with OpenCL on convolution forward step: https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/visualCifar10OpenCL/

CIFAR-10 batch updates example:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/

Mario Werner independently coded and shared examples for CIFAR-10 and MNIST:
https://bitbucket.org/108bits/cai-implementations/src/c8c027b1a0d636713f7ebb70a738f1cd7117a7a4?at=master

BTW, if you have examples that you've shared in public repositories, please let me know. I love seeing what you are doing.

These are new features:

OpenCL-based convolution: on tiny images (32x32x3 or 28x28x1), you won't see much improvement over the well-tuned AVX code, but you'll find an impressive improvement on large input volumes. OpenCL can be enabled by calling:
Code: Pascal
procedure EnableOpenCL(platform_id: cl_platform_id; device_id: cl_device_id);

Batch Updates Support
Batch updates can be enabled/disabled via:
Code: Pascal
procedure SetBatchUpdate(pBatchUpdate: boolean);
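Conceptually, batch updates accumulate gradients over many samples and apply one weight update, instead of updating after every sample. A Python sketch of the idea (illustrative only, not CAI's actual implementation):

```python
def batch_update(weights, per_sample_grads, lr=0.1):
    # Accumulate (average) the gradients over the whole batch,
    # then apply a single weight update for the batch.
    n = len(per_sample_grads)
    avg_grad = [sum(g) / n for g in zip(*per_sample_grads)]
    return [w - lr * g for w, g in zip(weights, avg_grad)]
```

Beyond the numerical effect, batching also lets implementations amortize per-update overhead, which is one reason it helps throughput.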

New Normalization layers:
* TNNetLocalResponseNorm2D: (input/output: 2D or 3D).
* TNNetLocalResponseNormDepth: (input/output: 2D or 3D).

New Concatenation / Splitting layers:
* TNNetDeepConcat (input/output: 1D, 2D or 3D) - Concatenates independent NN branches into the Depth axis.
* TNNetSplitChannels (input: 1D, 2D or 3D / output: 1D, 2D or 3D). Splits layers/channels from input. This is useful for creating divergent NN branches.

New ready-to-use data augmentation methods in TVolume:
* procedure FlipX();
* procedure FlipY();
* procedure CopyCropping(Original: TVolume; StartX, StartY, pSizeX, pSizeY: integer);
* procedure CopyResizing(Original: TVolume; NewSizeX, NewSizeY: integer);
* procedure AddGaussianNoise(pMul: TNeuralFloat);
* procedure AddSaltAndPepper(pNum: integer; pSalt: integer = 2; pPepper: integer = -2);
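As an illustration of one of these methods, here is a Python sketch of salt-and-pepper noise in the spirit of AddSaltAndPepper (illustrative only; CAI's exact behavior may differ):

```python
import random

def add_salt_and_pepper(img, num, salt=2.0, pepper=-2.0):
    # Overwrite `num` random pixels with the salt value and `num` with the pepper
    # value. img is a list of rows; the image is modified in place.
    rows, cols = len(img), len(img[0])
    for _ in range(num):
        img[random.randrange(rows)][random.randrange(cols)] = salt
        img[random.randrange(rows)][random.randrange(cols)] = pepper
    return img
```

Note the defaults of 2 and -2 mirror the pSalt/pPepper defaults in the declaration above; they make sense for inputs normalized around zero rather than 0..255 pixel values.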

Final random thought:
As pascal is easy to teach and learn (in my opinion), pascal might be perfect for teaching/learning neural networks.

:) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on July 31, 2018, 08:57:54 am
As pascal is easy to teach and learn (in my opinion), pascal might be perfect for teaching/learning neural networks.
Actually, that has been the case for a very long time; in the early 80's it was pretty much the standard.
That's the way I learned some of the basics and taught some of the basics at university back in the 80's.
(Basically I learned it, wrote examples and taught it - almost - in the same week, writing the curriculum on the go... <hmm..> O:-) :-[ )
We used it in teaching for problems like Traveling Salesman and Knapsack. Lots of fun. I must still have some line-printed code from those days... the green/white fanfold paper with holes on both sides.
Let's see if I can put it into your way more advanced software form... Great job.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Trenatos on July 31, 2018, 08:18:43 pm
This is very cool to see, many thanks for sharing!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on August 02, 2018, 11:30:33 pm
Hello,
I got a couple of private questions from forum members and decided to post a general reply here, as it might help others.

This link gives a general view of CNNs:
http://cs231n.github.io/convolutional-networks/

The link above mentions some well-known NN architectures. I would like to show here how to implement some of them with CAI:

Yann LeCun LeNet-5:

Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(28, 28, 1) );
NN.AddLayer( TNNetConvolution.Create(6, 5, 0, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetConvolution.Create(16, 5, 0, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetFullConnect.Create(120) );
NN.AddLayer( TNNetFullConnect.Create(84) );
NN.AddLayer( TNNetFullConnectLinear.Create(10) );
NN.AddLayer( TNNetSoftMax.Create() );
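To follow how the spatial dimensions shrink through the LeNet-5 layers above, one can apply the standard convolution/pooling output-size formulas. A Python sketch (assuming the constructor parameters shown are neurons, feature size, padding and stride, as the later annotated examples in this thread suggest):

```python
def conv_out(size, feature_size, padding, stride):
    # Standard convolution output size: (size + 2*padding - feature_size) / stride + 1
    return (size + 2 * padding - feature_size) // stride + 1

def pool_out(size, pool_size):
    # Non-overlapping pooling divides the spatial size by the pool size
    return size // pool_size

# Spatial sizes through the LeNet-5 listing above (28x28 input)
s = 28
s = conv_out(s, 5, 0, 1)  # 5x5 convolution, no padding -> 24
s = pool_out(s, 2)        # 2x2 max pooling -> 12
s = conv_out(s, 5, 0, 1)  # -> 8
s = pool_out(s, 2)        # -> 4, i.e. 4x4x16 entering the dense layers
```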

AlexNet can be implemented as follows:

Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(227, 227, 3) );

//Conv1 + ReLU
NN.AddLayer( TNNetConvolutionReLU.Create(96, 11, 0, 4) );
NN.AddLayer( TNNetMaxPool.Create(3, 2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create(5) );

//Conv2 + ReLU
NN.AddLayer( TNNetConvolutionReLU.Create(256, 5, 2, 1) );
NN.AddLayer( TNNetMaxPool.Create(3, 2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create(5) );

//Conv3,4,5 + ReLU
NN.AddLayer( TNNetConvolutionReLU.Create(384, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(384, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(3, 2) );

// Dropouts and Dense Layers
NN.AddLayer( TNNetDropout.Create(0.5) );
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetDropout.Create(0.5) );
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetFullConnectLinear.Create(1000) );
NN.AddLayer( TNNetSoftMax.Create() );

The above implementation is based on:
https://medium.com/@smallfishbigsea/a-walk-through-of-alexnet-6cbd137a5637

VGGNet has an interesting architecture: it uses only 3x3 filters, which makes its implementation easy to understand:

Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(224, 224, 3) );
NN.AddLayer( TNNetConvolutionReLU.Create(64, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(64, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//112x112x64
NN.AddLayer( TNNetConvolutionReLU.Create(128, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(128, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//56x56x128
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//28x28x256
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//14x14x512
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
//7x7x512
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetFullConnectLinear.Create(1000) );
NN.AddLayer( TNNetSoftMax.Create() );

The architectures above have been added to the API as follows:
Code: Pascal
THistoricalNets = class(TNNet)
  public
    procedure AddLeCunLeNet5(IncludeInput: boolean);
    procedure AddAlexNet(IncludeInput: boolean);
    procedure AddVGGNet(IncludeInput: boolean);
end;

Hope it helps and happy pascal coding!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on March 13, 2019, 10:19:49 pm
Hello,
This message is intended for those who love Pascal and neural networks.

After a long time, here we go with a new post. This time, instead of reporting new features, this post is more about how to do things than just listing features. I would like to show how to replicate examples made for Keras and TensorFlow with CAI. As shown in the paper https://arxiv.org/abs/1412.6806, it’s possible to create efficient neural networks with just convolutional layers. Therefore, we’ll start from convolutional layers. There is a simple example from Keras located at https://keras.io/examples/cifar10_cnn/. In this code, we can find convolutional layers being added with:
Code: Python
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))

The above can be done with CAI with just one line of code:
Code: Pascal
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );

It’s also possible to add it with 2 lines of code, but this is not recommended as it’s less efficient in CAI:
Code: Pascal
NN.AddLayer( TNNetConvolutionLinear.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetReLU.Create() );

This is the MaxPooling from Keras:
Code: Python
model.add(MaxPooling2D(pool_size=(2, 2)))

And this is the equivalent from CAI:
Code: Pascal
NN.AddLayer( TNNetMaxPool.Create(2) );

The full convolutional neural network can be defined in plain Free Pascal with:
Code: Pascal
NN.AddLayer( TNNetInput.Create(32, 32, iInputDepth) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetFullConnectReLU.Create({neurons}512) );
NN.AddLayer( TNNetFullConnectLinear.Create(NumClasses) );
NN.AddLayer( TNNetSoftMax.Create() );

As you can see above, there aren’t any dropout layers in the Free Pascal implementation. This was done intentionally, as I prefer using data augmentation instead of dropout layers. The data augmentation technique used here is very simple: random cropping and stretching, horizontal flipping, and occasionally making the image gray. It can be done easily in Free Pascal:
Code: Pascal
// Crop and Stretch
CropSizeX := random(9);
CropSizeY := random(9);
ImgInputCp.CopyCropping(ImgVolumes[ImgIdx], random(CropSizeX), random(CropSizeY), ImgVolumes[ImgIdx].SizeX-CropSizeX, ImgVolumes[ImgIdx].SizeY-CropSizeY);
ImgInput.CopyResizing(ImgInputCp, ImgVolumes[ImgIdx].SizeX, ImgVolumes[ImgIdx].SizeY);
// Flip the Image
if Random(1000) > 500 then
begin
  ImgInput.FlipX();
end;
// Make it Gray
if (Random(1000) > 750) then
begin
  ImgInput.MakeGray(color_encoding);
end;
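For comparison, the crop-and-stretch part can be sketched in Python using plain lists and nearest-neighbour resizing (illustrative only; the hypothetical helpers below mimic CopyCropping and CopyResizing, but CAI's resizing may interpolate differently):

```python
import random

def copy_cropping(img, start_x, start_y, size_x, size_y):
    # Take a size_x by size_y window starting at (start_x, start_y)
    return [row[start_x:start_x + size_x] for row in img[start_y:start_y + size_y]]

def copy_resizing(img, new_x, new_y):
    # Nearest-neighbour stretch to new_x by new_y
    rows, cols = len(img), len(img[0])
    return [[img[y * rows // new_y][x * cols // new_x] for x in range(new_x)]
            for y in range(new_y)]

def crop_and_stretch(img):
    # Crop up to 8 pixels from each dimension at a random offset,
    # then stretch back to the original size (as in the Pascal code above).
    rows, cols = len(img), len(img[0])
    crop_x, crop_y = random.randint(0, 8), random.randint(0, 8)
    cropped = copy_cropping(img, random.randint(0, crop_x), random.randint(0, crop_y),
                            cols - crop_x, rows - crop_y)
    return copy_resizing(cropped, cols, rows)
```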

Running the above in CAI gives really impressive results (they seem to me better than on Keras itself…). In case you would like to try it yourself, you can get this Lazarus project:
https://sourceforge.net/p/cai/svncode/724/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/

When you compile and run it, select “2 – Keras inspired” in the “Algo” dropdown, check “Multiple Samples at Validation” and then start it. Although this example looks simple, it’s no numerical toy: it actually has more than 2.1 million neuronal weights/connections!

I would also like to show you another example ported to Free Pascal. This time it’s an example from TensorFlow: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py
The network structure can be found in “def inference(images)”. This structure was ported to Free Pascal as follows:
Code: Pascal
NN.AddLayer( TNNetInput.Create(24, 24, 3) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}2, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create({size}3, {stride}2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create({diameter}9) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}2, {stride}1) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create({diameter}9) );
NN.AddLayer( TNNetMaxPool.Create({size}3, {stride}2) );
NN.AddLayer( TNNetFullConnectReLU.Create({neurons}384) );
NN.AddLayer( TNNetFullConnectReLU.Create({neurons}192) );
NN.AddLayer( TNNetFullConnectLinear.Create(NumClasses) );
NN.AddLayer( TNNetSoftMax.Create() );

Running it in CAI/Free Pascal, you’ll get classification accuracy similar to the results reported on the TensorFlow site. In case you would like to run this yourself, select “3 – TF inspired”, “Multiple Samples at Validation” and “24x24 image cropping”. The data augmentation technique found in this example is a bit different from before: we crop the original image into 24x24 images at random places. We also randomly flip it and sometimes make it gray.

:-) I wish everyone happy pascal neural networking! :-)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: avra on March 13, 2019, 10:55:30 pm
this post is more about how to do things than just listing new features
Much appreciated. Thank you!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on April 21, 2019, 08:50:10 am
 @Avra,
I'm starting to believe that most of the examples showing how to use CAI are too advanced.

I've just added 2 entry-level examples. The first shows how to learn the boolean functions XOR, AND and OR:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/supersimple/

The next is the same as above, with Pearson correlation:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/supersimplecorrelation/
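For reference, the Pearson correlation coefficient used in that second example measures the linear relationship between two sequences (+1 for a perfect positive relationship, -1 for a perfect negative one). A standard formula, sketched in Python (not CAI code):

```python
import math

def pearson(xs, ys):
    # Pearson correlation: covariance divided by the product of standard deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```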

:) I hope it helps and happy pascal coding to everyone. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 29, 2019, 12:31:10 pm
 :) Hello :)

Instead of reporting new features (there are plenty of new neuronal layer types), I'll report a test here. For the past 2 years, I've probably spent more time testing than coding. The test is simple: using a DenseNet-like architecture with 1M trainable parameters, normalization, L2 regularization, a cyclical learning rate and weight averaging, how accurate can CIFAR-10 image classification be?

The experiment was done with SVN revision 907:
https://sourceforge.net/p/cai/svncode/907/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/
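One of the techniques mentioned above, the cyclical learning rate, oscillates the learning rate between a lower and an upper bound instead of keeping it constant. As a sketch of the common triangular schedule (illustrative only; the exact schedule used in this experiment may differ):

```python
def triangular_clr(step, base_lr, max_lr, cycle_len):
    # Triangular cyclical schedule: rise linearly from base_lr to max_lr over the
    # first half of the cycle, fall back over the second half, then repeat.
    pos = step % cycle_len
    half = cycle_len / 2.0
    frac = pos / half if pos <= half else (cycle_len - pos) / half
    return base_lr + (max_lr - base_lr) * frac
```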

In case you intend to reproduce it, you should use exactly the same parameters found in the attached screenshot except for "Logical Threads" and "Physical Threads" (use your CPU core count instead). This experiment was run on the slowest (and cheapest) machine I have access to.

On my end, I got about 94% CIFAR-10 classification accuracy.

:) Wish everyone happy pascal coding. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 29, 2019, 06:37:32 pm
 :) Hello :)
I promise that I'll try not to post anything after this for some time.

I've just done an experiment comparing CAI with Keras/TF. CAI and TF were both compiled with the AVX instruction set (no AVX2 nor AVX512). This is the FPC code:
Code: Pascal
NN.AddLayer( TNNetInput.Create(32, 32, 3) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}0, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create(4) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetFullConnectReLU.Create(32) );
NN.AddLayer( TNNetFullConnectReLU.Create(32) );
NN.AddLayer( TNNetFullConnectLinear.Create(10) );
NN.AddLayer( TNNetSoftMax.Create() );

This is the equivalent Keras code:
Code: Python
model = Sequential()
model.add(Conv2D(64, (5, 5), padding='valid',
                 input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(4, 4)))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dense(num_classes))
model.add(Activation('softmax'))

Using the very same machine (Google Cloud with 16 virtual cores and no GPU), I ran both models. This is the output from Keras showing that each epoch takes 20 seconds to compute:
Quote
Epoch 24/350
312/312 [==============================] - 20s 64ms/step - loss: 1.3232 - acc: 0.5270 - val_loss: 1.2861 - val_acc: 0.5369
Epoch 25/350
312/312 [==============================] - 20s 63ms/step - loss: 1.3297 - acc: 0.5226 - val_loss: 1.2873 - val_acc: 0.5361
Epoch 26/350
312/312 [==============================] - 20s 64ms/step - loss: 1.3144 - acc: 0.5295 - val_loss: 1.2868 - val_acc: 0.5378
Epoch 27/350
312/312 [==============================] - 20s 63ms/step - loss: 1.3156 - acc: 0.5284 - val_loss: 1.2915 - val_acc: 0.5356
Epoch 28/350
312/312 [==============================] - 20s 63ms/step - loss: 1.3150 - acc: 0.5281 - val_loss: 1.2776 - val_acc: 0.5397
Epoch 29/350
312/312 [==============================] - 20s 64ms/step - loss: 1.3057 - acc: 0.5332 - val_loss: 1.2737 - val_acc: 0.5415

In CAI, each epoch is computed in about 16 or 17 seconds as per the log (time in seconds is cumulative and is the last field below):
Quote
31,0.7509,0.7379,0.6683,0.7780,0.6491,0.6031,0.0010000,523
32,0.7565,0.6392,0.6981,0.7679,0.6725,0.6202,0.0010000,539
33,0.7482,0.8476,0.7243,0.7678,0.6851,0.6034,0.0010000,555
34,0.7653,0.5730,0.5781,0.7665,0.6971,0.6179,0.0010000,571
35,0.7668,0.7007,0.6337,0.7800,0.6464,0.5884,0.0010000,588

Am I saying that CAI will always perform better? Absolutely not. CAI is likely to be faster on some models and slower on others. I haven't tested enough to be able to tell when it will be faster or slower.

:) Anyway, wish everyone happy pascal coding. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 02, 2019, 10:33:01 pm
 :) Hello :),
I'm thinking about making a change that will break existing code but will make this library look a lot better. Before I make this change, I would like to have some feedback please.

The change is: renaming library units and then creating a package (version 1.0).

All library units will start with "Neural". So, these are some renaming examples:
uconvolutionalneuralnetwork -> NeuralNet
uvolume -> NeuralVolume
ueasyopencl -> NeuralOpenCl
uevolutionary -> NeuralEvolution
ucifar10 -> NeuralDatasets
ubyteprediction -> NeuralPrediction

What do you think? Please feel free to ask for other changes.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: avra on September 03, 2019, 12:11:36 am
I'm thinking about making a change that will break existing code but will make this library look a lot better. Before I make this change, I would like to have some feedback please.
I do not mind the suggested changes, but it would be nice if this thread and the other examples get corrected so that they still work.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 23, 2019, 04:06:10 am
... I'm approaching 1000 commits ... And this is the "new SVN repository" since I moved from CVS to SVN many years ago. I'm getting old...  Might be time to move to git ... Anyway ...

@avra
A new Neural API version with major changes/improvements is now under construction:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/neural/

To whom may concern,
All unit names now start with "neural". All examples have already been fixed. The old "libs" folder has been kept for backward compatibility, so the next version won't break any existing code.

Looking for sources of inspiration, Keras has a useful "fit" method:
https://keras.rstudio.com/reference/fit.html

"fit"-like methods and classes are now under construction for CAI! YAY! neuralfit.pas already has more than 700 lines of source code.

Soon, I intend to have deep learning examples ready to run on Google Colab with Lazarus/FPC/CAI NEURAL API.

:) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on September 23, 2019, 08:46:11 am
Better and better! I will enjoy playing with your code this week. TNX!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 23, 2019, 11:58:25 am
@Thaddy,
It's not ready yet! I intend to have the first version ready in 2 weeks.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on September 23, 2019, 12:34:44 pm
Ok. I have some patience... :D
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 23, 2019, 05:46:08 pm
Thank you for the interest! Makes life happier!

This is an example of the work in progress:
https://github.com/joaopauloschuler/neural-api#how-does-the-code-look-like-for-a-cifar-10-classification-example

A good introductory example is:
https://github.com/joaopauloschuler/neural-api/tree/master/examples/SimpleImageClassifier

:) Wishing everyone happy pascal coding :)

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: nouzi on October 03, 2019, 02:17:40 pm
Good job 'bravo '
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 08, 2019, 12:46:20 pm
@nouzi, thank you!

@Thaddy, I think that the new CAI Neural API subproject currently placed at https://github.com/joaopauloschuler/neural-api is ready for testing.

If no major bug is found in the near future, I'll call it CAI Neural API v1.0.

:) wish everyone happy pascal solution construction :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 19, 2019, 07:08:25 pm
:) Hi :)
For anyone interested in computer vision who has never had a look at MNIST (http://yann.lecun.com/exdb/mnist/) or Fashion MNIST (https://github.com/zalandoresearch/fashion-mnist), I highly recommend clicking on these links. MNIST contains images of handwritten digits, while Fashion MNIST contains 10 clothing classes.

These datasets now have API support, and source code examples for MNIST and Fashion MNIST have just been added.
:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: avra on October 20, 2019, 06:12:42 pm
Nice! Things are getting better and better.  :P
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: nouzi on October 20, 2019, 08:40:51 pm
@schuler

you are a hero
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 26, 2019, 01:39:47 pm
@avra and @nouzi,
Thank you for your support.

I've just asked for a Free Pascal benchmark to be included in the Fashion MNIST benchmark list:
https://github.com/zalandoresearch/fashion-mnist/issues/153

I got 94% accuracy with just 174K parameters. In the benchmark list (https://github.com/zalandoresearch/fashion-mnist), you'll find results that use 26M parameters...

:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on October 26, 2019, 01:42:36 pm
That is extremely good!!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 05, 2019, 12:42:22 am
 :) ,
I would like to ask everyone: what feature or example would you like to see added? Feel free to ask. If you ask for something that I can do quickly, or that I already have half-baked, I might just spend some minutes typing code.

:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: MathMan on November 05, 2019, 01:43:33 pm
:) ,
I would like to ask everyone: what is the feature or example that you would like to be added? Feel free to ask. In the case that you ask for something that I can do quickly or I already have it half baked, I might just spend some minutes typing code.

:) wish everyone happy pascal coding :)

Before I answer, I have to admit that I have neither played with CAI, nor will I have time to do so in the foreseeable future. So, if you don't mind suggestions from a purely theoretical perspective:

* (fully) binarized neural network - (F)BNN
* bayesian confidence propagation neural network - BCPNN

Both look to me, from what I have read, like interesting approaches addressing topics like reduced power consumption, inference capabilities, etc. But, also from what I have read, both might be major developments.

Cheers,
MathMan
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: circular on November 05, 2019, 03:54:06 pm
That looks amazing: being able to program neural networks in Pascal. I wonder if it could be useful in LazPaint? Some image processing features might be more efficient with neural networks.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on November 05, 2019, 05:29:11 pm
Quote
That looks amazing, being able to program neural networks in Pascal. I wonder if that can be useful in LazPaint? Some image processing features might be more efficient with neural networks.
Well, one of the examples is about recognition, but it needs to learn first. That process is quite slow (hours, days, even months), but the code renders quite amazing results after that, and once the network is established to your specification (you have to stop learning at some point!) it is fast.
And well written pure pascal.
What is maybe missing is some pre-processing along the lines of my fuzzy code (on this forum, Zadeh logic) that would help a bit in some scenarios as a filter, but that is actually a completely different subject: it is AI, but not the machine learning AI that schuler's library is all about.
If you are interested: the code is really good and you can pick it up quite fast if you have just a tiny bit of background. It is one of my favorite Free Pascal projects.

Example: if you feed it enough number plate examples (say, a million, from all angles) and run it for some months, you have fast number plate recognition software with a high probability of being correct. But the real fun is to use it on comic book examples and let it filter out Daffy Duck...
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: zulof on November 12, 2019, 11:05:27 am
Could/Should this be included in the online package manager?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: m.abudrais on November 22, 2019, 08:14:54 am
Thank you for making this good library.
I am an absolute beginner in NN.
I ran the CIFAR-10 example (for 50 epochs only) and the SimpleImageClassifier.nn file was generated.
Then I tried to classify some images with this code:
Code: Pascal  [Select][+][-]
  1. procedure TForm1.Button1Click(Sender: TObject);
  2. var
  3.   NN: THistoricalNets;
  4.   O1: array of TNeuralFloat;
  5.   pOutPut, pInput: TNNetVolume;
  6.   k, y, x: integer;
  7.   OK: extended;
  8.   img: TTinyImage;
  9. begin
  10.   Image1.Picture.LoadFromFile('C:\Users\moh\Desktop\dog1.png');
  11.   NN := THistoricalNets.Create();
  12.   NN.AddLayer([TNNetInput.Create(32, 32, 3),
  13.     TNNetConvolutionLinear.Create(64, 5, 2, 1, 1).InitBasicPatterns(),
  14.     TNNetMaxPool.Create(4), TNNetConvolutionReLU.Create(64, 3, 1, 1, 1),
  15.     TNNetConvolutionReLU.Create(64, 3, 1, 1, 1),
  16.     TNNetConvolutionReLU.Create(64, 3, 1, 1, 1),
  17.     TNNetConvolutionReLU.Create(64, 3, 1, 1, 1), TNNetDropout.Create(0.5),
  18.     TNNetMaxPool.Create(2), TNNetFullConnectLinear.Create(10),
  19.     TNNetSoftMax.Create()]);
  20.   NN.LoadDataFromFile('SimpleImageClassifier.nn');
  21.   for y := 0 to 31 do
  22.   begin
  23.     for x := 0 to 31 do
  24.     begin
  25.       img.R[y, x] := Red(Image1.Canvas.Pixels[y, x]);
  26.       img.G[y, x] := Green(Image1.Canvas.Pixels[y, x]);
  27.       img.B[y, x] := Blue(Image1.Canvas.Pixels[y, x]);
  28.     end;
  29.   end;
  30.   pInput := TNNetVolume.Create(32, 32, 3, 1);
  31.   pOutPut := TNNetVolume.Create(10, 1, 1, 1);
  32.   LoadTinyImageIntoNNetVolume(img, pInput);
  33.   NN.Compute(pInput);
  34.   NN.GetOutput(pOutPut);
  35.   OK := 0.0;
  36.   for k := 0 to 9 do
  37.   begin
  38.     OK := OK + pOutPut.Raw[k];
  39.     WriteLn(pOutPut.Raw[k]);
  40.   end;
  41.   WriteLn();
  42.   WriteLn(OK);
  43. end;  
I tried images of a dog and a car from the same dataset, but I always get 1 in pOutPut.Raw[0]!
Can you please add an example showing how to use the classifier after training?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 25, 2019, 04:56:02 pm
Please forgive me for taking so long to reply.

First of all, you can completely remove the "AddLayer" call, as CAI stores both the architecture and the weights in the same .nn file.

Then, you need to add a RgbImgToNeuronalInput call as follows:

Code: Pascal  [Select][+][-]
  1. LoadTinyImageIntoNNetVolume(img, pInput);
  2. pInput.RgbImgToNeuronalInput(csEncodeRGB);

If the above doesn't work, I'll try to compile at my end.

As you have SoftMax, you can print the output class probabilities with:
Code: Pascal  [Select][+][-]
  1. pOutPut.Print();

You can get the inferred output class with:
Code: Pascal  [Select][+][-]
  1. WriteLn('Inferred Class: ', csTinyImageLabel[pOutPut.GetClass()]);
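For readers wondering what SoftMax and GetClass compute conceptually, here is a minimal pure-Pascal sketch (an illustration only, not CAI's actual implementation): SoftMax turns raw outputs into probabilities that sum to 1, and the inferred class is simply the index of the largest output.

```pascal
program SoftMaxSketch;
{$mode objfpc}

type
  TFloats = array of Double;

// Softmax: exponentiate each raw output and normalize so the sum is 1.
function SoftMax(const Raw: TFloats): TFloats;
var
  i: Integer;
  Total: Double;
begin
  SetLength(Result, Length(Raw));
  Total := 0;
  for i := 0 to High(Raw) do
  begin
    Result[i] := Exp(Raw[i]);
    Total := Total + Result[i];
  end;
  for i := 0 to High(Result) do
    Result[i] := Result[i] / Total;
end;

// Conceptual equivalent of GetClass: index of the largest output.
function ArgMax(const V: TFloats): Integer;
var
  i: Integer;
begin
  Result := 0;
  for i := 1 to High(V) do
    if V[i] > V[Result] then
      Result := i;
end;

var
  Raw, Probs: TFloats;
  i: Integer;
  Sum: Double;
begin
  // Dummy raw outputs for a 3-class network.
  SetLength(Raw, 3);
  Raw[0] := 0.1; Raw[1] := 2.5; Raw[2] := -1.0;
  Probs := SoftMax(Raw);
  Sum := 0;
  for i := 0 to High(Probs) do
    Sum := Sum + Probs[i];
  WriteLn('Sum of probabilities: ', Sum:0:4);  // 1.0000
  WriteLn('Inferred class: ', ArgMax(Probs));  // 1
end.
```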
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: m.abudrais on November 25, 2019, 07:40:52 pm
Thank you for replying.
Quote
First of all, you can completely remove "AddLayer" call as CAI stores both architecture and weights into the same nn file.

I used the wrong function, LoadDataFromFile; the correct one is LoadFromFile.
Now it works :).
Here is the correct code, in case anyone is interested:
Code: Pascal  [Select][+][-]
  1. procedure TForm1.Button1Click(Sender: TObject);
  2. var
  3.   NN: THistoricalNets;
  4.   O1: array of TNeuralFloat;
  5.   pOutPut, pInput: TNNetVolume;
  6.   k, y, x: integer;
  7.   OK: extended;
  8.   img: TTinyImage;
  9. begin
  10.   NN := THistoricalNets.Create();
  11.   NN.LoadFromFile('SimpleImageClassifier.nn');
  12.   for y := 0 to 31 do
  13.   begin
  14.     for x := 0 to 31 do
  15.     begin
  16.       img.R[y, x] := Red(Image1.Canvas.Pixels[y, x]);
  17.       img.g[y, x] := Green(Image1.Canvas.Pixels[y, x]);
  18.       img.B[y, x] := Blue(Image1.Canvas.Pixels[y, x]);
  19.     end;
  20.   end;
  21.   pInput := TNNetVolume.Create(32, 32, 3, 1);
  22.   pOutPut := TNNetVolume.Create(10, 1, 1, 1);
  23.   LoadTinyImageIntoNNetVolume(img, pInput);
  24.   pInput.RgbImgToNeuronalInput(csEncodeRGB);
  25.   NN.Compute(pInput);
  26.   NN.GetOutput(pOutPut);
  27.   WriteLn('Inferred Class: ', csTinyImageLabel[pOutPut.GetClass()]);
  28. end;
I moved the image loading to TForm1.FormCreate; the image doesn't load when I do it in the TForm1.Button1Click function!
Thank you again for making this good library.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 25, 2019, 09:55:28 pm
@m.abudrais
Are you placing your code on a public repo?

Anyway, I just realized that you might be able to replace both for loops by using the neuralvolumev unit https://github.com/joaopauloschuler/neural-api/blob/master/neural/neuralvolumev.pas (https://github.com/joaopauloschuler/neural-api/blob/master/neural/neuralvolumev.pas):

Code: Pascal  [Select][+][-]
  1. /// Loads a Picture into a Volume
  2. procedure LoadPictureIntoVolume(LocalPicture: TPicture; Vol:TNNetVolume); {$IFDEF Release} inline; {$ENDIF}
  3.  
  4. /// Loads a Bitmap into a Volume
  5. procedure LoadBitmapIntoVolume(LocalBitmap: TBitmap; Vol:TNNetVolume);

The code will look like:
Code: Pascal  [Select][+][-]
  1. LoadPictureIntoVolume(Image1.Picture, pInput);
  2. pInput.RgbImgToNeuronalInput(csEncodeRGB);

In the case that the input image isn't 32x32, you can resize it (via copying) with:
Code: Pascal  [Select][+][-]
  1. TVolume.CopyResizing(Original: TVolume; NewSizeX, NewSizeY: integer);

Last idea: given that you have a trained NN, you could call this:
Code: Pascal  [Select][+][-]
  1. procedure TNeuralImageFit.ClassifyImage(pNN: TNNet; pImgInput, pOutput: TNNetVolume);
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: m.abudrais on November 26, 2019, 04:58:28 pm
Quote
Are you placing your code on a public repo?
I have just uploaded the code to:
https://github.com/mabudrais/CAI-NEURAL-API-Test (https://github.com/mabudrais/CAI-NEURAL-API-Test)
Your library has many useful functions; I think more examples could be added to show them.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 06, 2019, 06:57:25 pm
:) Hello :)

It's usually very hard to understand, neuron by neuron, how a neural network dedicated to image classification works internally. One technique that helps with understanding what individual neurons represent is called Gradient Ascent. You can find more about gradient ascent at http://yosinski.com/deepvis .

In this technique, an arbitrary neuron is forced to activate, and then the same backpropagation method used for learning is applied to the input image, producing the image that this neuron expects to see. I'm happy to report that we have a working example purely coded in FPC/Lazarus at: https://github.com/joaopauloschuler/neural-api/tree/master/examples/GradientAscent

To run this example, you'll need to load an already trained neural network file and then select the layer you intend to visualise. I've attached an example from the last layer (can you see any patterns?) of a neural network trained to classify the CIFAR-10 dataset. Can you find patterns for horse, deer, ship, car or truck?

:) Long life to Pascal :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 21, 2019, 01:28:15 pm
Just found this thread; it seems to be a very interesting topic!
Thanks for all the hard work you did!

While I still prefer the python/tensorflow/keras approach for training CNNs, I'd like to use the resulting CNN in my Delphi project to help me sort sounds. Do you have any method to export the weights from a Keras model and load them into your library, so I can do the NN magic in pure pascal without any additional DLLs?

Thank you in advance!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 23, 2019, 06:29:55 pm
Hello agorka! Glad that you like it.

Supporting the Keras file format isn't on the near horizon. At the moment, I'm polishing a super advanced example about producing artificial art: https://github.com/joaopauloschuler/neural-api/tree/master/examples/VisualGAN

Other feature requests are listed as github issues.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 24, 2019, 07:38:50 am
Wow! I couldn't imagine this could be done in Lazarus without heavy frameworks like tensorflow or pytorch...
Still, my question is: which file formats can be used to load existing weights into the convolution layers?
I could find a converter or write my own export function in Python...
Is there any json, csv or other format?

Thanks!
Oleg.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: avra on December 24, 2019, 08:17:21 am
Quote
Is there any json, csv, any other format?
It would also help for MatLab.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 24, 2019, 11:35:49 am
@agorka,
In the case that you try loading weights by yourself, here we go:

A neural network TNNet has Layers:
Code: Pascal  [Select][+][-]
  1. property Layers: TNNetLayerList read FLayers;

Each Layer (TNNetLayer) has Neurons:
Code: Pascal  [Select][+][-]
  1. property Neurons: TNNetNeuronList read FNeurons;

Each Neuron (TNNetNeuron) has weights:
Code: Pascal  [Select][+][-]
  1. property Weights: TNNetVolume read FWeights;

Accessing weights from the second neuron from the second layer will look like this:
Code: Pascal  [Select][+][-]
  1. NN.Layers[1].Neurons[1].Weights

As you can see, Weights is a TNNetVolume.

TNNetVolume has 2 methods that might interest you:
Code: Pascal  [Select][+][-]
  1.     // load and save functions
  2.     function SaveToString(): string;
  3.     procedure LoadFromString(strData: string);

Having a look at the implementation, you'll find that weights are saved as a ";" separated string. The first 4 values are structural: version, SizeX, SizeY and depth. The following values are the actual weights.
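Based on the format described above, a standalone parsing sketch could look like the following. The field order (version; SizeX; SizeY; depth; then the weights) is taken from this post, but check SaveToString/LoadFromString in the library before relying on the details:

```pascal
program WeightStringSketch;
{$mode objfpc}

uses
  SysUtils, Classes;

type
  TWeightArray = array of Double;

// Parses a "version;SizeX;SizeY;Depth;w0;w1;..." string as described above.
procedure ParseWeightString(const S: string;
  out Version, SizeX, SizeY, Depth: Integer; out Weights: TWeightArray);
var
  Parts: TStringList;
  Fmt: TFormatSettings;
  i: Integer;
begin
  Fmt := DefaultFormatSettings;
  Fmt.DecimalSeparator := '.';  // weights use '.' regardless of locale
  Parts := TStringList.Create;
  try
    Parts.Delimiter := ';';
    Parts.StrictDelimiter := True;
    Parts.DelimitedText := S;
    // The first 4 values are structural.
    Version := StrToInt(Parts[0]);
    SizeX   := StrToInt(Parts[1]);
    SizeY   := StrToInt(Parts[2]);
    Depth   := StrToInt(Parts[3]);
    // The following values are the actual weights.
    SetLength(Weights, Parts.Count - 4);
    for i := 4 to Parts.Count - 1 do
      Weights[i - 4] := StrToFloat(Parts[i], Fmt);
  finally
    Parts.Free;
  end;
end;

var
  V, X, Y, D: Integer;
  W: TWeightArray;
begin
  ParseWeightString('1;3;3;1;0.5;-0.25;0.0;1.0;0.75;-1.5;0.125;2.0;-0.5',
    V, X, Y, D, W);
  WriteLn('Volume: ', X, 'x', Y, 'x', D, ', weights: ', Length(W));  // Volume: 3x3x1, weights: 9
end.
```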

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 24, 2019, 11:47:46 am
Quote
Wow! I couldn't imagine this could be done in Lazarus without such heavy frameworks like tensorflow or pytorch
Lazarus/FPC can do impressive things such as compiling to other targets (android and raspberry).

Artificial Art can be useful for texture generation on an artificial environment (such as computer games) as long as you have a pre-trained network. I'll eventually add an example creating very large images (and not just 32x32 pixels).
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 24, 2019, 10:48:38 pm
Thank you, I'm getting it running with 1 convolution layer.
One thing I got stuck on was the weights not updating after loading:
          NN.Layers[I].Neurons[N].LoadFromString(SL[N]);
I had to add
        NN.Layers[I].AfterWeightUpdate;
and make this AfterWeightUpdate procedure public.
It should probably be called from the LoadFromString function to avoid these inconsistencies.

Thank you for your hard work! I really hope to port my keras network to Pascal tomorrow! =)

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 24, 2019, 11:09:42 pm
Sorry, I've run into a problem again.
My network is now as simple as this:

      TNNetInput.Create(512, 512, 1),
      TNNetConvolutionReLU.Create(24, 7, 0, 1, 0).InitUniform(0),
      TNNetConvolutionReLU.Create(1, 5, 0, 1, 0).InitUniform(0)

However, I run out of memory in
procedure TNNetConvolutionBase.SetPrevLayer(pPrevLayer: TNNetLayer);
when it tries to allocate a 502x502x5x5x24 volume, which is ~600MB.
As I understand it, this is needed to optimize the convolution calculation, but in my 32-bit application I just can't allocate that much, and my real network needs many more layers.
Is there any way to work around this problem?
For me it's better to run slowly than not to run at all. =)

Thanks!
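For reference, the ~600MB figure follows directly from the layer geometry: with no padding and stride 1, each convolution shrinks the map by FeatureSize minus 1, so 512 becomes 506 after the 7x7 convolution and 502 after the 5x5 one, and the buffer holds one 5x5x24 patch per output position. A quick sketch of the arithmetic (assuming 4-byte floats, which is an assumption about how TNeuralFloat is compiled):

```pascal
program ConvMemorySketch;
{$mode objfpc}

// Output size of a convolution with no padding and stride 1.
function ConvOutSize(InSize, FeatureSize: Integer): Integer;
begin
  Result := InSize - FeatureSize + 1;
end;

var
  Map1, Map2: Integer;
  PatchFloats, Bytes: Int64;
begin
  Map1 := ConvOutSize(512, 7);   // first conv: 512 -> 506
  Map2 := ConvOutSize(Map1, 5);  // second conv: 506 -> 502
  // One 5x5 patch across 24 input channels per output position:
  PatchFloats := Int64(Map2) * Map2 * 5 * 5 * 24;
  Bytes := PatchFloats * 4;      // assuming 4 bytes per float
  WriteLn(Map2, 'x', Map2, ' positions, ', Bytes div (1024 * 1024), ' MB');  // 502x502 positions, 576 MB
end.
```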
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 25, 2019, 02:15:01 am
Good idea about AfterWeightUpdate... Thank you.

Do you have the freedom to try something like this (a bigger stride compensated by more neurons in the first convolution)?
Code: Pascal  [Select][+][-]
  1.       TNNetInput.Create(512, 512, 1),
  2.       TNNetConvolutionReLU.Create({neurons=}128, {FeatureSize=}7, {Padding=}0, {Stride=}7, {SupressBias=}0),
  3.       ...

If not, I'll eventually code a workaround (slower, with less memory).
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 26, 2019, 10:45:41 am
Thank you!

I reduced the input size to 256x256 and was able to train the net to similar precision.
With this size, there's still enough memory to create the network in Delphi.
Now I'm adding a 3rd layer (MaxPooling) and getting another inconsistency with Keras:

My layer is
Code: Pascal  [Select][+][-]
  1.       TNNetMaxPool.Create(4, 2)
  2.  

With input size 246 it produces output size 123.
Keras produces size 122.

Code: Pascal  [Select][+][-]
  1. function TNNetPoolBase.CalcOutputSize(pInputSize: integer): integer;
  2. begin
  3.   Result := (pInputSize + FPadding) div FStride;
  4.   if ((pInputSize + FPadding) mod FStride > 0) then Inc(Result);
  5. end;
  6.  

I mostly find this formula for output size:
Out=(W−F+2P)/S+1

Could this be corrected to improve compatibility with other neural network frameworks?

Thanks for your continuous help =)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 27, 2019, 01:32:35 am
Hello agorka,
Thank you for bringing this to my attention.

I've just created a new layer called TNNetMaxPoolPortable with your requested behavior. So, if you get the latest github version, you'll be able to replace your existing max pools with this new one.

The original implementation has a motivation behind it. Assume an 11x11 input followed by a 2x2 max pooling. In this scenario, CAI will output a 6x6 map, while the portable implementation will output a 5x5 map ((11 - 2 + 2*0) div 2 + 1 = 5), completely missing the 11th row and column, with a loss of information. Therefore, I still like CAI's original implementation.
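The two behaviours can be compared with a small sketch. CAIMaxPoolOut follows TNNetPoolBase.CalcOutputSize as quoted earlier in the thread (a ceiling division by the stride, so no input row or column is ever dropped), while PortableMaxPoolOut follows the Out = (W - F + 2P) div S + 1 convention used by Keras and most other frameworks:

```pascal
program PoolSizeSketch;
{$mode objfpc}

// CAI's original rule (from TNNetPoolBase.CalcOutputSize): ceiling
// division by the stride; the pool size does not affect the output size.
function CAIMaxPoolOut(InSize, Padding, Stride: Integer): Integer;
begin
  Result := (InSize + Padding) div Stride;
  if (InSize + Padding) mod Stride > 0 then
    Inc(Result);
end;

// The convention used by Keras and most frameworks:
// Out = (W - F + 2P) div S + 1.
function PortableMaxPoolOut(InSize, PoolSize, Padding, Stride: Integer): Integer;
begin
  Result := (InSize - PoolSize + 2 * Padding) div Stride + 1;
end;

begin
  // The 246-input, 4x4 pool, stride 2 case from this thread:
  WriteLn('CAI:      ', CAIMaxPoolOut(246, 0, 2));          // 123
  WriteLn('Portable: ', PortableMaxPoolOut(246, 4, 0, 2));  // 122
  // The 11x11 input with 2x2 max pooling example above:
  WriteLn('CAI:      ', CAIMaxPoolOut(11, 0, 2));           // 6
  WriteLn('Portable: ', PortableMaxPoolOut(11, 2, 0, 2));   // 5
end.
```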

May I ask you what is your target machine? Does it have AVX or OpenCL?

In the case that you have AVX, are you curious to benchmark CAI against another API on your own environment? If yes, I can send you equivalent implementations on both APIs.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 28, 2019, 09:03:57 am
Thank you!
I'm updating right now.
About performance: I mostly care about compatibility right now, so performance doesn't matter much to me. I might return to performance after I have everything working. =)

BTW, I know it's strange, but I still use Delphi 7 in parallel with Lazarus, so I had to change some parts of your code to make it compatible.
I don't know if you'd like to have it compatible with D7, but here are the changes:
Code: Pascal  [Select][+][-]
  1. procedure TVolume.FillForIdx(c: T; const aIdx: array of integer);
  2. var
  3.   Idx: integer;
  4. begin
  5.   for Idx in aIdx do
  6.     FData[Idx] := c;
  7. end;
I change to
Code: Pascal  [Select][+][-]
  1. procedure TVolume.FillForIdx(c: T; const aIdx: array of integer);
  2. var
  3.   I, Idx: integer;
  4. begin
  5.   for I:=0 to Length(aIdx)-1 do
  6.   begin
  7.     Idx:=aIdx[i];
  8.     FData[Idx] := c;
  9.   end;
  10. end;
I know it's not that beautiful, but hopefully no less efficient. =)

Also in CopyChannels:
Code: Pascal  [Select][+][-]
  1.       for i:=0 to Length(aChannels)-1 do
  2.       begin
  3.         InputDepth:=aChannels[i];
  4.         Self[X, Y, OutputDepth] := Original[X, Y, InputDepth];
  5.         Inc(OutputDepth);
  6.       end;

I'll write back as soon as I get my FFT data into the net and compare it with the Keras result. I had some mismatches when processing some test data, but probably that was because of the different MaxPooling layer...
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 29, 2019, 11:46:50 pm
@agorka,
You aren't the first to ask for D7 support. At the other extreme, at this moment, I'm retesting AVX512 support with FPC trunk...

I believe that if you open source your work on porting trained networks to pascal, it would benefit the pascal user base.

In Keras, do your convolutional layers have biases? If yes, you'll need to bring those over too.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 30, 2019, 11:31:01 am
Oh, great!
D7 support requires very few changes, such as removing inline directives, removing StrictDelimiter and a few lines more.

I finally got it kinda working. Not a very precise classification, but at least something!
So of course I can share my code, but it's far from ideal.
My convolutions have biases of course, but I thought that part already works. Still, I need to compare Keras and Pascal predictions to check if there are any differences.
Also, I removed all batch normalization layers from the network to avoid possible differences.
And my Keras network had a Flatten layer, which I didn't find in your library, but it seems that a Dense layer works right after a Convolution layer (though I'm not sure the order of the weights gets transferred correctly). Removing the Flatten layer in Keras produces the wrong dimension after the Dense layer, so maybe you can implement a similar Flatten layer, or confirm it should work correctly right away?

Here goes the Python Keras export part:
Code: Python  [Select][+][-]
  1. def SaveModelToTXT(model, filename):
  2.     print("Saving model to", filename, "...")
  3.     print("Found ", len(model.layers), " layers")
  4.     for i in range(len(model.layers)):
  5.         layer=model.layers[i]
  6.         layerFileName = filename+"_layer"+str(i)+".csv"
  7.         print(layer.name, ' to ', layerFileName)
  8.         if isinstance(layer, keras.layers.Conv2D):
  9.             with open(layerFileName, "w") as f:
  10.                 w=layer.get_weights()
  11.                 for filter in range(w[0].shape[3]):
  12.                     bias=w[1][filter]
  13.                     weights=w[0][:,:,:,filter].flatten()
  14.                     title = "Layer" +str(i)+"  Filter "+str(filter)+"  WShape:"+str(w[0].shape)
  15.                     #print(title)
  16.                     dimensions='1;'+str(w[0].shape[0])+';'+str(w[0].shape[1])+';'+str(w[0].shape[2])
  17.                     weights=["%.6f" % number for number in weights]
  18.                     weights=';'.join(weights)
  19.                     f.write(str(bias)+']'+dimensions+';'+weights+"\n")
  20.         if isinstance(layer, keras.layers.Dense):
  21.             with open(filename+"_layer"+str(i)+".csv", "w") as f:
  22.                 w=layer.get_weights()
  23.                 print(w[0].shape)
  24.                 for filter in range(w[0].shape[1]):
  25.                     bias=w[1][filter]
  26.                     weights=w[0][:,filter].flatten()
  27.                     title = "Layer" +str(i)+"  Filter "+str(filter)+"  WShape:"+str(w[0].shape)
  28.                     dimensions='1;'+str(w[0].shape[0])+';1;1'
  29.                     weights=["%.6f" % number for number in weights]
  30.                     weights=';'.join(weights)
  31.                     f.write(str(bias)+']'+dimensions+';'+weights+"\n")
  32.     print("Done!")
  33. SaveModelToTXT(model, modelPath+"ModelNetworkWeights")

And here's the pascal reading of the weights:

Code: Pascal  [Select][+][-]
  1. procedure TSampleCNN.LoadWeights(FN: String);
  2. var I, N : Integer;
  3.     S : String;
  4.     SL : TStringList;
  5.     Size, SX, SY : Integer;
  6.     LayerFN : TFileNameUnicodeString;
  7. begin
  8. For I:=0 to NN.Layers.Count-1 do
  9.   Begin
  10.   If (NN.Layers[i] is TNNetConvolutionLinear) or
  11.      (NN.Layers[i] is TNNetConvolutionReLU) or
  12.      (NN.Layers[i] is TNNetFullConnectLinear) then
  13.      Begin
  14.      Try
  15.      SL:=TStringList.Create;
  16.      LayerFN:=FN+'_layer'+IntToStr(I)+'.csv';
  17.      If not FileExistsUTF8(LayerFN) then
  18.         Begin
  19.         ShowMessage('File not found: '+LayerFN);
  20.         Continue
  21.         End;
  22.      SL.LoadFromFile(LayerFN);
  23.      If NN.Layers[I].Neurons.Count<>SL.Count then
  24.         Begin
  25.         ShowMessage('Number of neurons mismatch: '+IntToStr(NN.Layers[I].Neurons.Count)+' vs '+IntToStr(SL.Count));
  26.         End
  27.      else
  28.         Begin
  29.         For N:=0 to NN.Layers[I].Neurons.Count-1 do
  30.           Begin
  31.           Size:=NN.Layers[I].Neurons[N].Weights.Size;
  32.           SX:=NN.Layers[I].Neurons[N].Weights.SizeX;
  33.           SY:=NN.Layers[I].Neurons[N].Weights.SizeY;
  34.           NN.Layers[I].Neurons[N].LoadFromString(SL[N]);
  35.           If NN.Layers[I].Neurons[N].Weights.Size<>Size then ShowMessageU('Incorrect size loaded to layer '+IntToStr(I));
  36.           If NN.Layers[I].Neurons[N].Weights.SizeX<>SX then ShowMessageU('Incorrect size loaded to layer '+IntToStr(I));
  37.           If NN.Layers[I].Neurons[N].Weights.SizeY<>SY then ShowMessageU('Incorrect size loaded to layer '+IntToStr(I));
  38.           End;
  39.         NN.Layers[I].AfterWeightUpdate;
  40.         End
  41.      Except
  42.         ShowMessage('Exception loading weights for layer '+IntToStr(I));
  43.      End;
  44.      SL.Free;
  45.      End;
  46.   End;
  47. end;

Also, I was thinking: how much could this be optimized if the network is only used for prediction? I've got more projects that would benefit from adding a NN, but memory consumption will be much higher there...

Oleg.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 30, 2019, 04:41:32 pm
Thank you for sharing your code! I'm certainly going to give it a go.

A beautiful aspect of TNNetVolume is that it can represent data as both 1D and 3D at any time. Therefore, flatten layers aren't required in this API.

In regards to memory requirements, you may consider using separable convolutions: https://github.com/joaopauloschuler/neural-api/tree/master/examples/SeparableConvolution

If you still need common convolutions with lower memory requirements, I'll add it to my feature list.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: fastblade on January 03, 2020, 09:17:18 am
I am a beginner in AI and NN.
Can this API do object detection, and are there some examples?

Thanks for your great work!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 05, 2020, 01:42:19 am
@fastblade,
Very welcome!

Although there are plenty of examples about image classification, there is no example about object detection yet. I was almost starting to code one, and then I ended up coding a generative adversarial example instead. Porting an existing example from Keras to this API is doable.

BTW, as I'm heading for version 1.0, the time to report bugs is now.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on January 05, 2020, 09:44:04 pm
I don't know if it's a bug report or a feature request, but I think some networks could benefit from non-square convolutions or poolings;
for example, I'm now training my FFT detector on inputs of 512x128, and with only square operations I seem to be quite limited in options. =)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 06, 2020, 02:30:25 pm
Thank you for the feedback agorka!

Totally off topic: yesterday I was googling to find out how other APIs support Weight Averaging (https://arxiv.org/abs/1803.05407) and I ended up discovering that plenty of them have no support for it.

I'm glad to inform that CAI does have support. This is an example showing how to configure 10 epochs weight averaging for image classification:
Code: Pascal  [Select][+][-]
  1. NeuralFit := TNeuralImageFit.Create;
  2. NeuralFit.AvgWeightEpochCount := 10; // the number of epochs used for weight averaging
  3. NeuralFit.Fit(NN, ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes, {NumClasses=}10, {batchsize=}128, {epochs=}100);

Long life to pascal!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Root on January 07, 2020, 02:10:10 pm
Hi, first of all great work. I would like to ask if this code can be ported to your library.
Thank you very much.

https://github.com/erohkohl/question-tagging/blob/master/src/model.py
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 08, 2020, 07:14:33 pm
@Root,
Yes. It can be done.

Inner layers are TNNetFullConnectReLU.

The last 2 layers are (in this order): TNNetFullConnectLinear and TNNetSoftMax.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Root on January 12, 2020, 09:35:48 pm
Are you planning to implement these two algorithms?

Recurrent Neural Networks
LSTM Networks

ref.
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
I think it is only a matter of adding the layer specifications and related calculations.

This is a library that implements them linearly:
https://github.com/cvsandbox/ANNT

Thank you ;)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 14, 2020, 05:15:59 pm
:) Hello Root, :)
RNN and LSTM are desirable but aren't currently planned.

Next step in this API:

Soon, we'll have a new example with the Tiny ImageNet dataset https://tiny-imagenet.herokuapp.com/ (https://tiny-imagenet.herokuapp.com/). This dataset has 64x64 RGB images and 200 classes.

I'm happy to help anyone trying to code new layers with a forked github repo.

:) Wish everyone happy pascal coding. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on January 21, 2020, 11:01:09 am
Hello!
I've got some news on my progress porting the Keras model to your library!
I finally trained it in Keras to 95% accuracy, tried to use it in Pascal, and faced this problem:

My model looks like
Code: Pascal  [Select][+][-]
  1.       TNNetInput.Create(512, 128, 1),
  2.       TNNetConvolutionReLU.Create(24, 7, 0, 1, 0).InitUniform(0),
  3.       TNNetConvolutionReLU.Create(24, 5, 0, 1, 0).InitUniform(0),
  4.       TNNetMaxPoolPortable.Create(2, 2),
  5.       TNNetConvolutionReLU.Create(24*2, 5, 0, 1, 0).InitUniform(0),
  6.       TNNetMaxPoolPortable.Create(2, 2),
  7.       TNNetConvolutionReLU.Create(24*4, 5, 0, 1, 0).InitUniform(0),
  8.       TNNetMaxPoolPortable.Create(2, 2),
  9.       TNNetConvolutionReLU.Create(24*4, 5, 0, 1, 0).InitUniform(0),
  10.       TNNetMaxPoolPortable.Create(2, 2),
  11.       TNNetFullConnectLinear.Create(NumClasses)
It seems that the X and Y coordinates are swapped when loading Conv layer weights from strings saved in Keras.
I found that by looking at the feature maps: Keras contained vertical lines while pascal contained horizontal ones, and vice versa. I used numpy.swapaxes on the weights of the convolution layers and it did the trick, though the images are still not 100% the same; maybe I'll also need to flip the convolution filters to match Keras.
The next problem was the dense layer.
In Keras I had a flatten layer before the dense layer, so the weights of the dense layer were flat. I had to reshape them manually to 27x3x96, swap axes, flatten, and then save. This seems too cumbersome, because I had to know the original image shape, and it can't be used in a generic weight saving function...
Having non-square convolutions could have solved my problem: I could use a 27x3 convolution instead of the dense layer.
Maybe you have a different idea about how this problem could be solved more elegantly?

The good news is that after all those tricks I got very good prediction accuracy in Pascal! Thank you again for making this possible! =)
Oleg.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 29, 2020, 05:09:59 pm
@agorka,
Thank you for sharing the good news! I'm curious to look at your code. Have you thought about open sourcing it? Your code could help others.

Maybe, with time, we could polish both ends and make porting tasks easier.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Firewrath on January 29, 2020, 09:50:17 pm
OK, so this is going to be a little OT from the current discussion, sorry. >.>

And first of all, I know pretty much nothing about AI other than the articles I've read and such.
I'd like to play with it, but I have an 11 y/o PC and almost all the tutorials are in python, so bleh. :P

But I was reading this guys blog:
http://blog.dlib.net/
and he posted something I found rather interesting that I haven't seen anywhere else, called a 'tiled pyramid'
(but mind, I don't follow AI news / updates a lot)

So I was wondering if it could be implemented in your work for training.
Post:
http://blog.dlib.net/2017/08/vehicle-detection-with-dlib-195_27.html
and Pic:
Shortened pic link (https://2.bp.blogspot.com/-xl3Qne3TWjk/WaHJgQZkH8I/AAAAAAAAAuc/qu037v4HxCUjFinIp6Ze5Oou7rhS1OU1QCLcBGAs/s1600/mmod_tutorial_image_pyramid.jpg)

To me, it seems a really smart way to train an AI, but that's coming from someone who doesn't know how it all works.
So what do I know? ;)
But I wanted to pass it along anyway. ^-^
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on March 12, 2020, 05:35:55 pm
@Firewrath, I owe you a reply. Will do eventually.

I would like to share a trick that you can use with this API, or any other API, when working with image classification: you can increase the input image size.

As per the following example, by increasing the CIFAR-10 input image size from 32x32 to 48x48, you can gain up to 2% in classification accuracy.

You can change image sizes with:
Code: Pascal  [Select][+][-]
  1. ImgTrainingVolumes.ResizeImage(48, 48);
  2. ImgValidationVolumes.ResizeImage(48, 48);
  3. ImgTestVolumes.ResizeImage(48, 48);

Then, you need to change the input image size in your NN model:
Code: Pascal  [Select][+][-]
  1. NN.AddLayer([
  2. TNNetInput.Create(48, 48, 3),

There is a CIFAR-10 example showing this trick here:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierResize48.lpr (https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierResize48.lpr)

 :) On the power of native code, I wish everyone happy pascal coding!  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on April 12, 2020, 02:30:46 am
 :) hello :),

I would like to mention a new feature and new examples that have just been added:
There are plenty of image datasets where each folder represents a class of images. To make the Pascal coding simpler, a new procedure CreateVolumesFromImagesFromFolder has been added, as exemplified below:
Code: Pascal  [Select][+][-]
  1.     // change ProportionToLoad to a smaller number if you don't have 6GB of RAM available.
  2.     ProportionToLoad := 1;
  3.     WriteLn('Loading ', Round(ProportionToLoad*100), '% of the Tiny ImageNet 200 dataset into memory.');
  4.     CreateVolumesFromImagesFromFolder
  5.     (
  6.       ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes,
  7.       {FolderName=}'tiny-imagenet-200/train', {pImageSubFolder=}'images',
  8.       {color_encoding=}0{RGB},
  9.       {TrainingProp=}0.9*ProportionToLoad,
  10.       {ValidationProp=}0.05*ProportionToLoad,
  11.       {TestProp=}0.05*ProportionToLoad
  12.     );

The above example loads the Tiny ImageNet 200 dataset (https://tiny-imagenet.herokuapp.com/) into RAM. 90% is loaded as training data, while 5% each is loaded for validation and testing.

The next example shows how to load the dataset used for the paper Identification of Plant Leaf Diseases Using a 9-layer Deep Convolutional Neural Network (https://data.mendeley.com/datasets/tywbtsjrjv/1):

Code: Pascal  [Select][+][-]
  1.     // change ProportionToLoad to a smaller number if you don't have 32GB of RAM available.
  2.     ProportionToLoad := 1;
  3.     WriteLn('Loading ', Round(ProportionToLoad*100), '% of the Plant leave disease dataset into memory.');
  4.     CreateVolumesFromImagesFromFolder
  5.     (
  6.       ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes,
  7.       {FolderName=}'plant', {pImageSubFolder=}'',
  8.       {color_encoding=}0{RGB},
  9.       {TrainingProp=}0.9*ProportionToLoad,
  10.       {ValidationProp=}0.05*ProportionToLoad,
  11.       {TestProp=}0.05*ProportionToLoad,
  12.       {NewSizeX=}128, {NewSizeY=}128
  13.     );

As you can see above, besides loading the dataset, the images are also resized to 128x128. As I don't have 32GB of RAM in my development machine, I've tested the above code on https://vast.ai/ using this Jupyter notebook: https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimplePlantLeafDisease/SimplePlantLeafDisease.ipynb. You can find the raw results at: https://github.com/joaopauloschuler/neural-api/tree/master/examples/SimplePlantLeafDisease/results.

As you can see in these raw results, the test classification (i.e. the plant disease diagnosis) accuracy is 98.95% (this is pretty high - I wonder if we could do the same with coronavirus).

CreateVolumesFromImagesFromFolder uses parallel code, so classes are loaded into memory in parallel. I consider this code tremendously fast, and it outperforms code that I have seen in other APIs. I should say thank you to the FPC developers for coding FPImage and plenty of other super-fast bits and pieces that allowed me to implement a fast CreateVolumesFromImagesFromFolder.

This is the neural network architecture used for classification:
Code: Pascal  [Select][+][-]
  1.     NN.AddLayer([
  2.       TNNetInput.Create(128, 128, 3),
  3.       TNNetConvolutionLinear.Create({Features=}64, {FeatureSize=}5, {Padding=}4, {Stride=}2),
  4.       TNNetMaxPool.Create(2),
  5.       TNNetMovingStdNormalization.Create(),
  6.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1),
  7.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1),
  8.       TNNetMaxPool.Create(2),
  9.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1),
  10.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1),
  11.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}2),
  12.       TNNetDropout.Create(0.5),
  13.       TNNetMaxPool.Create(2),
  14.       TNNetFullConnectLinear.Create(39),
  15.       TNNetSoftMax.Create()
  16.     ]);
The complete source code can be found at: https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimplePlantLeafDisease/SimplePlantLeafDisease.pas (https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimplePlantLeafDisease/SimplePlantLeafDisease.pas)

 :) I wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: eljo on April 12, 2020, 07:07:56 am
I just wanted to say I love your work. Thank you for your support.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on April 14, 2020, 09:21:47 am
@eljo,
Thank you for your kind words. They are motivating.

I created a full Tiny ImageNet 200 example: https://github.com/joaopauloschuler/neural-api/tree/master/examples/SimpleTinyImageNet

:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 03, 2020, 09:24:20 am
 :) Hello :),
I have news about Increasing Image Resolution with Neural Networks.

I created a command line tool so everyone can now increase image resolution with no more than FPC (no external libraries are required) via the command line:
Code: Pascal  [Select][+][-]
  1. #SuperResolution -i street.png -o street3.png
  2. Loading input file: street.png
  3. Input image size: 158x214x3
  4. Creating Neural Network...
  5. Resizing with tiles...
  6. Neural network file found at ../../../examples/SuperResolution : super-resolution-7-64-sep.nn
  7. Padding input image.
  8. Resizing with tiles to: 288x416x3
  9. Saving output file: street3.png

-i defines the input image file while -o defines the output image file.

You can get this command line tool from:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/SuperResolution/README.md

:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Alienigene on May 23, 2020, 02:56:48 am
Hello everyone, I don't know if this is the right place to ask, but I know this problem is rather frequent and it is related to Free Pascal/Lazarus IDE, so I figure here is a good place to ask.

The thing is very simple. My operating system is Windows 10 v1909 x64. I downloaded Conscious Artificial Intelligence (the project on SourceForge.net); the files in it are 64-bit. I extracted the files, ran ArtificialArt.exe, pressed Start and nothing happened. I noticed a console window beside the GUI window; it told me it couldn't find a bin file, told me to download CIFAR, and provided the address. Simple enough: I downloaded and extracted the files to the CAI folder.

Then I pressed Start again; this time, and every time after, an error message window pops up saying: Access violation. Press OK to ignore and risk data corruption. Press Abort to kill the program. This always happens after loading the .bin files, after 'Creating Neural Networks...' and immediately after 'Creating generative'. If I press OK, nothing happens. If I press Abort, the program doesn't end; the console says 'Sending STOP request', but the program doesn't stop, requiring me to close the console to kill the program.

The same problem applies to GradientAscent.exe; it happens immediately after I load a neural network.
I noticed their icons are those of programs developed with the Lazarus IDE. I searched for the problem online and found it rather frequent, but with no solutions, so I figured I would ask here.

I have installed the Free Pascal IDE 3.0.4 (x86) and Lazarus IDE 2.0.6 (x86_64), and placed the executable path of FPC in the system PATH environment variable. I have had no problem using the Lazarus IDE itself so far. Any ideas on what caused this and how to solve it?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 24, 2020, 01:12:35 am
Hello Alienigene,
Can you please confirm whether or not your processor has the AVX instruction set? In case it doesn't, I can prepare a compiled version for you without the AVX instruction set.

The current binaries require the AVX instruction set, and they will raise an exception when the NN is run on a processor that doesn't have AVX.

In case you would like to try compiling it yourself with Lazarus, go to Options/Project Options/Custom Options/Defines and then uncheck AVX. After this, go to Run/Build and then Run/Run.
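For readers unfamiliar with FPC defines: a compile-time switch like this is usually implemented with conditional compilation. The following is only an illustrative sketch (not the project's actual source) of how an AVX define set in Custom Options can gate code paths:

```pascal
program AvxDefineSketch;
begin
  {$IFDEF AVX}
  // Compiled only when the AVX define is checked in Custom Options.
  WriteLn('AVX code paths enabled.');
  {$ELSE}
  // Compiled when the AVX define is unchecked: plain Pascal fallback.
  WriteLn('Plain Pascal code paths used.');
  {$ENDIF}
end.
```

With the define unchecked, only the fallback branch is compiled, so no AVX instructions end up in the binary.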
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Alienigene on May 28, 2020, 10:35:37 am
I am sorry that I have only just seen this; I have been busy for a while, so I didn't reply instantly. My CPU is an Intel Celeron J1900 @ 1.99GHz, a quad-core CPU. I looked at CPU-Z: it supports the MMX, SSE1-4.2, EM64T and VT-x instruction sets, but no AVX, so I think that is the cause of the problem.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Alienigene on May 29, 2020, 06:41:35 am
I've got a problem, a big problem. I have downloaded the svncode trunk of CAI from SourceForge, intending to compile all the stuff by myself. However, there aren't any project files (.lpi); there are lots of .pas files, some .inc files, and some .lpr, .lfm and .cl files. Obviously multiple files are included in one project, but I honestly don't know which files belong to which project, nor what each project corresponds to (e.g. .exe, .dll, .lib, .nn). I don't know the correspondences, so I can't compile anything...
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 31, 2020, 03:29:18 am
Hi Alienigene, these projects are probably a good start (open with Project/Open Project):
Alternatively, if you have a gmail account, you can just open the following link, select "Runtime" and then "Run all":
https://colab.research.google.com/github/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierCPU.ipynb

:) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 30, 2020, 08:04:14 pm
:) Hello - this is a quick update :)

Here is the news from my end:

:) Wish everyone happy pascal coding. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 16, 2020, 11:11:05 pm
I'm trying to get your super-simple example to compile. I have your original uconvolutionneuralnetwork file. Do you have a new set of code that defines the units neuralnetwork, neuralvolume and neuralfit?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 17, 2020, 05:57:02 am
 :) Hi Daringly :)
You'll need this folder to compile: https://github.com/joaopauloschuler/neural-api/tree/master/neural

Alternatively, in the case that you would like to try image classification without having to install anything, you can run via browser at:

https://colab.research.google.com/github/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierCPU.ipynb
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 17, 2020, 08:47:11 pm
Thanks!

I do a lot of sports actuarial analysis (probabilities for each match outcome), and this will hopefully let me do some much more complex analysis than what I am presently doing.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Blade on September 19, 2020, 07:26:33 pm
Very nice.  Great project.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 21, 2020, 08:44:13 pm
Is there a set of documentation for these libraries?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 22, 2020, 01:10:38 pm
:) Hi daringly :)
Except for this read me (https://github.com/joaopauloschuler/neural-api), not much. Is there any particular part of the API that you would like me to detail further?


Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 22, 2020, 04:51:28 pm
I'm not particularly good at Python/Keras, whereas I'm much more fluent in Pascal. I'd love a set of documents so that someone who has never used Python or Keras could do things from the ground up.

Here's the meat of a keras model I wrote, and I have no idea how to implement this in your library:

model=tf.keras.Sequential()
model.add(layers.Dense(12, input_dim=14, activation='relu'))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics =['accuracy'])

print(model.summary)

monitor_val_acc=EarlyStopping(monitor='val_loss', patience=5)
model.fit(x,y, epochs=200, validation_data=(x_test, y_test), callbacks=[monitor_val_acc])
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 22, 2020, 04:54:55 pm
How about an example where you have 2 inputs (x,y) and an output (z) which is the hypotenuse of a right-angle triangle?

So in our dataset, Z^2 = X^2 + Y^2... use maybe 10 data points.

A simple start-to-finish example of "solving" and improving on the solution to this problem could be extrapolated to all other sorts of problems.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 24, 2020, 05:02:08 am
 :) Hello daringly :)

Your Keras model translates to this:
Code: Pascal  [Select][+][-]
  1. var
  2.   Model: TNNet;
  3. begin
  4.   Model := TNNet.Create();
  5.   Model.AddLayer([
  6.     TNNetInput.Create(14),
  7.     TNNetFullConnectReLU.Create(12),
  8.     TNNetFullConnectReLU.Create(4),
  9.     TNNetFullConnect.Create(1) // This is hyperbolic tangent - not Sigmoid
  10.   ]);

print(model.summary) translates to:
Code: Pascal  [Select][+][-]
  1. Model.DebugStructure();

The above example has a hyperbolic tangent instead of a sigmoid.

For the fitting, you might follow this example:
https://github.com/joaopauloschuler/neural-api/tree/master/examples/XorAndOr

The fitting method also allows you to pass validation and test pairs:
Code: Pascal  [Select][+][-]
  1. procedure Fit(pNN: TNNet;
  2.   pTrainingVolumes, pValidationVolumes, pTestVolumes: TNNetVolumePairList;
  3.   pBatchSize, Epochs: integer);
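Assuming the training, validation and test pair lists have already been created (the variable names below are purely illustrative, not from the example), a call matching this signature would look like:

```pascal
// NN: TNNet; TrainingPairs, ValidationPairs, TestPairs: TNNetVolumePairList.
// Batch size and epoch count here are arbitrary illustrative values.
Fit(NN, TrainingPairs, ValidationPairs, TestPairs,
  {pBatchSize=}32, {Epochs=}200);
```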

You won't need callbacks for monitoring. The fitting already saves the best solution for you and displays progress.

In the case that you really need callbacks, you can look at:
Code: Pascal  [Select][+][-]
  1.       property OnAfterStep: TNotifyEvent read FOnAfterStep write FOnAfterStep;
  2.       property OnAfterEpoch: TNotifyEvent read FOnAfterEpoch write FOnAfterEpoch;
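As these are standard TNotifyEvent properties, they are wired by assigning an object method. The sketch below is illustrative only: TTrainingMonitor and MyFit are hypothetical names, not part of the API.

```pascal
type
  // Hypothetical class that holds the callback method.
  TTrainingMonitor = class
    procedure AfterEpoch(Sender: TObject);
  end;

procedure TTrainingMonitor.AfterEpoch(Sender: TObject);
begin
  // Called by the fitting object at the end of each epoch.
  WriteLn('Epoch finished.');
end;

// Wiring, assuming MyFit is the fitting object and Monitor an instance:
//   MyFit.OnAfterEpoch := Monitor.AfterEpoch;
```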

CAI is SGD-based (I don't have Adam). I'm passionate about SGD.

About Z^2 = X^2 + Y^2, I created a feature request at:
https://github.com/joaopauloschuler/neural-api/issues/33
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 24, 2020, 02:14:49 pm
Thanks! I'll fire this up this weekend.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 29, 2020, 04:35:58 am
@daringly

I coded the example you asked for:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/Hypotenuse/Hypotenuse.lpr

Training, validation and testing pairs are created with:
Code: Pascal  [Select][+][-]
  1.     Result := TNNetVolumePairList.Create();
  2.     for Cnt := 1 to MaxCnt do
  3.     begin
  4.       LocalX := Random(100);
  5.       LocalY := Random(100);
  6.       Hypotenuse := sqrt(LocalX*LocalX + LocalY*LocalY);
  7.  
  8.       Result.Add(
  9.         TNNetVolumePair.Create(
  10.           TNNetVolume.Create([LocalX, LocalY]),
  11.           TNNetVolume.Create([Hypotenuse])
  12.         )
  13.       );
  14.     end;

The neural network is created with 2 inputs and one output:
Code: Pascal  [Select][+][-]
  1. NN.AddLayer( TNNetInput.Create(2) );
  2. NN.AddLayer( TNNetFullConnectReLU.Create(32) );
  3. NN.AddLayer( TNNetFullConnectReLU.Create(32) );
  4. NN.AddLayer( TNNetFullConnectReLU.Create(1) );

These are results from my own run:
Code: Pascal  [Select][+][-]
  1. Inputs:19.00, 71.00 - Output:73.53  Desired Output:73.50
  2. Inputs:28.00,  6.00 - Output:28.66  Desired Output:28.64
  3. Inputs:34.00, 53.00 - Output:63.02  Desired Output:62.97
  4. Inputs:20.00, 35.00 - Output:40.25  Desired Output:40.31
  5. Inputs:53.00, 28.00 - Output:59.98  Desired Output:59.94
  6. Inputs:56.00, 12.00 - Output:57.31  Desired Output:57.27
  7. Inputs:77.00, 29.00 - Output:82.35  Desired Output:82.28
  8. Inputs:69.00, 31.00 - Output:75.65  Desired Output:75.64
  9. Inputs:24.00, 78.00 - Output:81.65  Desired Output:81.61
  10. Inputs:15.00, 22.00 - Output:26.66  Desired Output:26.63

 :) Wish everyone happy pascal coding  :)