Lazarus

Announcements => Third party => Topic started by: schuler on November 24, 2017, 12:54:49 am

Title: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 24, 2017, 12:54:49 am
 :) Hello :)
As some of you already know, I've been developing Neural Networks with Free Pascal / Lazarus and sometimes it's good to show progress.

The newest addition to the project is a convolutional neural network pascal unit supporting convolution and other common operations. These are the available layers:
* TNNetInput (Input/output: 1D, 2D or 3D).
* TNNetFullConnect (Input/output: 1D, 2D or 3D).
* TNNetFullConnectReLU (Input/output: 1D, 2D or 3D).
* TNNetLocalConnect (Input/output: 1D, 2D or 3D - feature size: 1D or 2D). Similar to full connect with individual neurons.
* TNNetLocalConnectReLU (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolution (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConvolutionReLU (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetMaxPool (Input/output: 1D, 2D or 3D - feature size: 1D or 2D).
* TNNetConcat (Input/output: 1D, 2D or 3D) - Allows concatenating the result from previous layers.
* TNNetReshape (Input/output: 1D, 2D or 3D).
* TNNetSoftMax (Input/output: 1D, 2D or 3D).

These are the available weight initializers:
* InitUniform(Value: TNeuralFloat = 1);
* InitLeCunUniform(Value: TNeuralFloat = 1);
* InitHeUniform(Value: TNeuralFloat = 1);
* InitGlorotBengioUniform(Value: TNeuralFloat = 1);
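These initializers are called on a layer after it is created. A hedged sketch of how such a call might be chained (based on the InitUniform(0) calls that appear later in this thread; the layer type and parameter values are illustrative only):
Code: Pascal
// Hedged sketch: the initializer is assumed to return the layer itself,
// so it can be chained directly after Create (values are illustrative).
NN.AddLayer( TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}2, {Stride=}1).InitHeUniform() );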

The source code is located here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uconvolutionneuralnetwork.pas

There is an extensive CIFAR-10 testing script located at:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/testcnnalgo/testcnnalgo.lpr

The API allows you to create divergent/parallel and convergent layers, as in the example below:

Code: Pascal  [Select][+][-]
  1. // Creates The Neural Network
  2. NN := TNNet.Create();
  3.  
  4. // This network splits into 2 paths and then is later concatenated
  5. InputLayer := NN.AddLayer(TNNetInput.Create(32, 32, 3));
  6.  
  7. // First branch starting from InputLayer (5x5 features)
  8. NN.AddLayerAfter(TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}2, {Stride=}1), InputLayer);
  9. NN.AddLayer(TNNetMaxPool.Create(2));
  10. NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}5, {Padding=}2, {Stride=}1));
  11. NN.AddLayer(TNNetMaxPool.Create(2));
  12. EndOfFirstPath := NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}5, {Padding=}2, {Stride=}1));
  13.  
  14. // Another branch starting from InputLayer (3x3 features)
  15. NN.AddLayerAfter(TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}3, {Padding=}1, {Stride=}1), InputLayer);
  16. NN.AddLayer(TNNetMaxPool.Create(2));
  17. NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1));
  18. NN.AddLayer(TNNetMaxPool.Create(2));
  19. EndOfSecondPath := NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1));
  20.  
  21. // Concats both branches into one branch.
  22. NN.AddLayer(TNNetDeepConcat.Create([EndOfFirstPath, EndOfSecondPath]));
  23. NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1));
  24. NN.AddLayer(TNNetLayerFullConnectReLU.Create(64));
  25. NN.AddLayer(TNNetLayerFullConnectReLU.Create(NumClasses));

As a simpler example, this shows how to create a small fully connected 3x3 network:
Code: Pascal  [Select][+][-]
  1. NN := TNNet.Create();
  2. NN.AddLayer( TNNetInput.Create(3) );
  3. NN.AddLayer( TNNetLayerFullConnectReLU.Create(3) );
  4. NN.AddLayer( TNNetLayerFullConnectReLU.Create(3) );
  5. NN.SetLearningRate(0.01,0.8);

In the above example, the learning rate is 0.01 and the inertia is 0.8.

The following example shows how to train your network:
Code: Pascal  [Select][+][-]
  1. // InputVolume and vDesiredVolume are of the type TNNetVolume
  2. NN.Compute(InputVolume);
  3. NN.GetOutput(PredictedVolume);
  4. vDesiredVolume.SetClassForReLU(DesiredClass);
  5. NN.Backpropagate(vDesiredVolume);
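For context, a hedged sketch of how these calls might sit inside a training loop. TrainingInputs and TrainingClasses are assumed, user-supplied arrays of TNNetVolume and class labels; the epoch count is illustrative:
Code: Pascal
// Hedged sketch: repeat the four calls above over a user-supplied training set.
for Epoch := 1 to 100 do
  for i := Low(TrainingInputs) to High(TrainingInputs) do
  begin
    NN.Compute(TrainingInputs[i]);
    NN.GetOutput(PredictedVolume);
    vDesiredVolume.SetClassForReLU(TrainingClasses[i]);
    NN.Backpropagate(vDesiredVolume);
  end;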

Recently, an ultra-fast single-precision AVX/AVX2 API was coded so neural networks could benefit from it:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uvolume.pas

2 videos were created showing this unit:
https://www.youtube.com/watch?v=qGnfwpKUTIQ
https://www.youtube.com/watch?v=Pnv174V_emw

In preparation for future OpenCL based Neural Networks, an OpenCL wrapper was created:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ueasyopencl.pas
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ueasyopenclcl.pas

There is an example for this OpenCL wrapper here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/opencl/easy-trillion-test/

 :) Wish everyone happy pascal coding  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: DonAlfredo on November 24, 2017, 09:51:35 am
Very impressive. And OpenCL is a very useful feature for speeding things up.
Thanks.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 16, 2018, 01:28:01 am
 :) Hello,  :)
I haven't updated this post for a long time. There are plenty of new features since the last update:

A new pooling layer:
* TNNetAvgPool (input/output: 1D, 2D or 3D)

Layers for inverse operations:
* TNNetDeLocalConnect (input/output: 1D, 2D or 3D)
* TNNetDeLocalConnectReLU (input/output: 1D, 2D or 3D)
* TNNetDeconvolution (input/output: 1D, 2D or 3D)
* TNNetDeconvolutionReLU (input/output: 1D, 2D or 3D)
* TNNetDeMaxPool (input/output: 1D, 2D or 3D)

New normalization layers:
* TNNetLayerMaxNormalization (input/output: 1D, 2D or 3D)
* TNNetLayerStdNormalization (input/output: 1D, 2D or 3D)

A dropout layer:
* TNNetDropout (input/output: 1D, 2D or 3D)

L2 regularization has been added. It's now possible to apply L2 to all layers or only to convolutional layers.
Code: Pascal  [Select][+][-]
  1.       procedure SetL2DecayToConvolutionalLayers(pL2Decay: TNeuralFloat);
  2.       procedure SetL2Decay(pL2Decay: TNeuralFloat);
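A hedged usage sketch of the two procedures above (the decay value is illustrative):
Code: Pascal
// Apply L2 decay to every layer, or only to the convolutional layers.
NN.SetL2Decay(0.00001);
// or
NN.SetL2DecayToConvolutionalLayers(0.00001);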

A new video:
Increasing Image Resolution with Neural Networks
https://www.youtube.com/watch?v=jdFixaZ2P4w

Data Parallelism has been implemented. Experiments have been made with up to 40 parallel threads in virtual machines with up to 32 virtual cores.

Data parallelism is implemented in TNNetDataParallelism. For example, you can create and run 40 parallel neural networks with:
Code: Pascal  [Select][+][-]
  1. NN := TNNet.Create();
  2. ...
  3. FThreadNN := TNNetDataParallelism.Create(NN, 40);
  4. ...
  5. ProcThreadPool.DoParallel(@RunNNThread, 0, FThreadNN.Count-1, Nil, FThreadNN.Count);
  6. FThreadNN.AvgWeights(NN);
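RunNNThread above is a user-supplied callback. A hedged sketch of what it might look like, assuming the MTProcs unit (MultiThreadProcs package, which provides ProcThreadPool.DoParallel) and assuming TNNetDataParallelism can be indexed like a list; the class name and fields are hypothetical and the body is illustrative only:
Code: Pascal
procedure TTrainingRun.RunNNThread(Index: PtrInt; Data: Pointer;
  Item: TMultiThreadProcItem);
var
  LocalNN: TNNet;
begin
  // Each thread works on its own copy of the network (hypothetical fields).
  LocalNN := FThreadNN[Index];
  LocalNN.Compute(FInputVolumes[Index]);
  LocalNN.GetOutput(FPredicted[Index]);
  LocalNN.Backpropagate(FDesired[Index]);
end;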

 :) Wish everyone happy coding  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 31, 2018, 05:21:45 am
It's time for new features and examples!

Here are some new examples:

CIFAR-10 with OpenCL on convolution forward step: https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/visualCifar10OpenCL/

CIFAR-10 batch updates example:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/

Mario Werner independently coded and shared examples for CIFAR-10 and MNIST:
https://bitbucket.org/108bits/cai-implementations/src/c8c027b1a0d636713f7ebb70a738f1cd7117a7a4?at=master

BTW, if you have examples that you've shared in public repositories, please let me know. I love seeing what you are doing.

These are new features:

OpenCL-based convolution: on tiny images (32x32x3 or 28x28x1), you won't get much improvement over the well-made AVX code, but you'll find an impressive improvement on large input volumes. OpenCL can be enabled by calling:
Code: Pascal  [Select][+][-]
  1. procedure EnableOpenCL(platform_id: cl_platform_id; device_id: cl_device_id);

Batch Updates Support
Batch updates can be enabled/disabled via:
Code: Pascal  [Select][+][-]
  1. procedure SetBatchUpdate(pBatchUpdate: boolean);
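A hedged usage sketch of the procedure above:
Code: Pascal
// Enable batch updates on an existing TNNet instance.
NN.SetBatchUpdate(true);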

New Normalization layers:
* TNNetLocalResponseNorm2D: (input/output: 2D or 3D).
* TNNetLocalResponseNormDepth: (input/output: 2D or 3D).

New Concatenation / Splitting layers:
* TNNetDeepConcat (input/output: 1D, 2D or 3D) - Concatenates independent NN branches into the Depth axis.
* TNNetSplitChannels (input: 1D, 2D or 3D / output: 1D, 2D or 3D). Splits layers/channels from input. This is useful for creating divergent NN branches.

New ready-to-use data augmentation methods in TVolume:
* procedure FlipX();
* procedure FlipY();
* procedure CopyCropping(Original: TVolume; StartX, StartY, pSizeX, pSizeY: integer);
* procedure CopyResizing(Original: TVolume; NewSizeX, NewSizeY: integer);
* procedure AddGaussianNoise(pMul: TNeuralFloat);
* procedure AddSaltAndPepper(pNum: integer; pSalt: integer = 2; pPepper: integer = -2);
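A hedged sketch showing a few of these calls on a volume named ImgInput (assumed to already hold an image; the noise parameters are illustrative):
Code: Pascal
ImgInput.FlipX();                 // horizontal flip
ImgInput.AddGaussianNoise(0.1);   // illustrative noise multiplier
ImgInput.AddSaltAndPepper(10);    // illustrative number of affected pixels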

Final random thought:
As pascal is easy to teach and learn (in my opinion), pascal might be perfect for teaching/learning neural networks.

:) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on July 31, 2018, 08:57:54 am
As pascal is easy to teach and learn (in my opinion), pascal might be perfect for teaching/learning neural networks.
Actually, that has been the case for a very long time; in the early 80s it was pretty much the standard.
That's the way I learned some of the basics and taught some of the basics at university back in the 80s.
(Basically I learned it, wrote examples and taught it - almost - in the same week, writing the curriculum on the go... <hmm..> O:-) :-[ )
We used it in teaching for problems like Traveling Salesman and Knapsack. Lots of fun. I must still have some line-printed code from those days... the green/white line stuff with holes on both sides.
Let's see if I can put it into your way more advanced software form... Great job.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Trenatos on July 31, 2018, 08:18:43 pm
This is very cool to see, many thanks for sharing!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on August 02, 2018, 11:30:33 pm
Hello,
I got a couple of private questions from forum members and decided to post here a general reply as it might help others.

This link gives a general view of CNNs:
http://cs231n.github.io/convolutional-networks/

The above link mentions some well-known NN architectures. I would like to show here how to implement some of them with CAI:

Yann LeCun LeNet-5:

Code: Pascal  [Select][+][-]
  1. NN := TNNet.Create();
  2. NN.AddLayer( TNNetInput.Create(28, 28, 1) );
  3. NN.AddLayer( TNNetConvolution.Create(6, 5, 0, 1) );
  4. NN.AddLayer( TNNetMaxPool.Create(2) );
  5. NN.AddLayer( TNNetConvolution.Create(16, 5, 0, 1) );
  6. NN.AddLayer( TNNetMaxPool.Create(2) );
  7. NN.AddLayer( TNNetFullConnect.Create(120) );
  8. NN.AddLayer( TNNetFullConnect.Create(84) );
  9. NN.AddLayer( TNNetFullConnectLinear.Create(10) );
  10. NN.AddLayer( TNNetSoftMax.Create() );

AlexNet can be implemented as follows:

Code: Pascal  [Select][+][-]
  1. NN := TNNet.Create();
  2. NN.AddLayer( TNNetInput.Create(227, 227, 3) );
  3.  
  4. //Conv1 + ReLU
  5. NN.AddLayer( TNNetConvolutionReLU.Create(96, 11, 0, 4) );
  6. NN.AddLayer( TNNetMaxPool.Create(3,2) );
  7. NN.AddLayer( TNNetLocalResponseNormDepth.Create(5) );
  8.  
  9. //Conv2 + ReLU
  10. NN.AddLayer( TNNetConvolutionReLU.Create(256, 5, 2, 1) );
  11. NN.AddLayer( TNNetMaxPool.Create(3,2) );
  12. NN.AddLayer( TNNetLocalResponseNormDepth.Create(5) );
  13.  
  14. //Conv3,4,5 + ReLU
  15. NN.AddLayer( TNNetConvolutionReLU.Create(384, 3, 1, 1) );
  16. NN.AddLayer( TNNetConvolutionReLU.Create(384, 3, 1, 1) );
  17. NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
  18. NN.AddLayer( TNNetMaxPool.Create(3,2) );
  19.  
  20. // Dropouts and Dense Layers
  21. NN.AddLayer( TNNetDropout.Create(0.5) );
  22. NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
  23. NN.AddLayer( TNNetDropout.Create(0.5) );
  24. NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
  25. NN.AddLayer( TNNetFullConnectLinear.Create(1000) );
  26. NN.AddLayer( TNNetSoftMax.Create() );

The above implementation is based on:
https://medium.com/@smallfishbigsea/a-walk-through-of-alexnet-6cbd137a5637

The VGGNet has an interesting architecture. It always uses 3x3 filters making its implementation easy to understand:

Code: Pascal  [Select][+][-]
  1. NN := TNNet.Create();
  2. NN.AddLayer( TNNetInput.Create(224, 224, 3) );
  3. NN.AddLayer( TNNetConvolutionReLU.Create(64, 3, 1, 1) );
  4. NN.AddLayer( TNNetConvolutionReLU.Create(64, 3, 1, 1) );
  5. NN.AddLayer( TNNetMaxPool.Create(2) );
  6. //112x112x64
  7. NN.AddLayer( TNNetConvolutionReLU.Create(128, 3, 1, 1) );
  8. NN.AddLayer( TNNetConvolutionReLU.Create(128, 3, 1, 1) );
  9. NN.AddLayer( TNNetMaxPool.Create(2) );
  10. //56x56x128
  11. NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
  12. NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
  13. NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
  14. //28x28x256
  15. NN.AddLayer( TNNetMaxPool.Create(2) );
  16. NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
  17. NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
  18. NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
  19. NN.AddLayer( TNNetMaxPool.Create(2) );
  20. //14x14x512
  21. NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
  22. NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
  23. NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
  24. NN.AddLayer( TNNetMaxPool.Create(2) );
  25. //7x7x512
  26. NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
  27. NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
  28. NN.AddLayer( TNNetFullConnectLinear.Create(1000) );
  29. NN.AddLayer( TNNetSoftMax.Create() );

The above architectures have been added to the API as follows:
Code: Pascal  [Select][+][-]
  1.   THistoricalNets = class(TNNet)
  2.     public
  3.       procedure AddLeCunLeNet5(IncludeInput: boolean);
  4.       procedure AddAlexNet(IncludeInput: boolean);
  5.       procedure AddVGGNet(IncludeInput: boolean);
  6.   end;
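A hedged usage sketch of the class above (IncludeInput = true is assumed to add the TNNetInput layer as well):
Code: Pascal
NN := THistoricalNets.Create();
NN.AddLeCunLeNet5({IncludeInput=}true);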

Hope it helps and happy pascal coding!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on March 13, 2019, 10:19:49 pm
Hello,
This message is intended for those who love pascal and neural networks.

After a long time, here we go with a new post. This time, instead of reporting new features, this post is more about how to do things than just listing new features. I would like to show how to replicate examples made for Keras and TensorFlow with CAI. As shown in the paper https://arxiv.org/abs/1412.6806, it’s possible to create efficient neural networks with just convolutional layers. Therefore, we’ll start from convolutional layers. There is a simple example from Keras located at https://keras.io/examples/cifar10_cnn/. In this code, we can find convolutional layers being added with:
Code: Python  [Select][+][-]
  1. model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
  2. model.add(Activation('relu'))

The above can be done with CAI with just one line of code:
Code: Pascal  [Select][+][-]
  1. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );

It’s also possible to add it with two lines of code, but this isn’t recommended as it’s less efficient in CAI:
Code: Pascal  [Select][+][-]
  1. NN.AddLayer( TNNetConvolutionLinear.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
  2. NN.AddLayer( TNNetReLU.Create() );

This is the MaxPooling from Keras:
Code: Python  [Select][+][-]
  1. model.add(MaxPooling2D(pool_size=(2, 2)))

And this is the equivalent from CAI:
Code: Pascal  [Select][+][-]
  1. NN.AddLayer( TNNetMaxPool.Create(2) );

The full convolutional neural network can be defined in plain Free Pascal with:
Code: Pascal  [Select][+][-]
  1. NN.AddLayer( TNNetInput.Create(32,32,iInputDepth) );
  2. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
  3. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
  4. NN.AddLayer( TNNetMaxPool.Create(2) );
  5. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
  6. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
  7. NN.AddLayer( TNNetMaxPool.Create(2) );
  8. NN.AddLayer( TNNetFullConnectReLU.Create({neurons}512) );
  9. NN.AddLayer( TNNetFullConnectLinear.Create(NumClasses) );
  10. NN.AddLayer( TNNetSoftMax.Create() );

As you can see above, there aren’t any dropout layers in the Free Pascal implementation. This was done intentionally, as I prefer using data augmentation instead of dropout layers. The data augmentation technique used here is very simple: random cropping followed by stretching back to the original size, horizontal flipping, and occasionally converting the image to grayscale. This can be easily done in Free Pascal:
Code: Pascal  [Select][+][-]
  1. // Crop and Stretch
  2. CropSizeX := random(9);
  3. CropSizeY := random(9);
  4. ImgInputCp.CopyCropping(ImgVolumes[ImgIdx], random(CropSizeX), random(CropSizeY),ImgVolumes[ImgIdx].SizeX-CropSizeX, ImgVolumes[ImgIdx].SizeY-CropSizeY);
  5. ImgInput.CopyResizing(ImgInputCp, ImgVolumes[ImgIdx].SizeX, ImgVolumes[ImgIdx].SizeY);
  6. // Flip the Image
  7. if Random(1000) > 500 then
  8. begin
  9.   ImgInput.FlipX();
  10. end;
  11. // Make it Gray
  12. if (Random(1000) > 750) then
  13. begin
  14.   ImgInput.MakeGray(color_encoding);
  15. end;

Running the above in CAI gives really impressive results (they seem to me better than on Keras itself…). In case you would like to try it yourself, you can get this Lazarus project:
 https://sourceforge.net/p/cai/svncode/724/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/

When you compile and run it, select “2 – Keras inspired” in the “Algo” dropdown, check “Multiple Samples at Validation” and then start it. Although this example looks simple, it’s no numerical toy. It actually has more than 2.1 million neuronal weights/connections!

I would also like to show you another example ported to Free Pascal. This time it’s an example from TensorFlow: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py
The neural network structure can be found in the “def inference(images)” function. This structure was ported to Free Pascal as follows:
Code: Pascal  [Select][+][-]
  1. NN.AddLayer( TNNetInput.Create(24,24,3) );
  2. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}2, {stride}1) );
  3. NN.AddLayer( TNNetMaxPool.Create({size}3,{stride}2) );
  4. NN.AddLayer( TNNetLocalResponseNormDepth.Create({diameter}9) );
  5. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}2, {stride}1) );
  6. NN.AddLayer( TNNetLocalResponseNormDepth.Create({diameter}9) );
  7. NN.AddLayer( TNNetMaxPool.Create({size}3,{stride}2) );
  8. NN.AddLayer( TNNetFullConnectReLU.Create({neurons}384) );
  9. NN.AddLayer( TNNetFullConnectReLU.Create({neurons}192) );
  10. NN.AddLayer( TNNetFullConnectLinear.Create(NumClasses) );
  11. NN.AddLayer( TNNetSoftMax.Create() );

Running it in CAI/Free Pascal, you’ll get results (classification accuracy) similar to those reported on the TensorFlow site. In case you would like to run this yourself, select “3 – TF inspired”, “Multiple Samples at Validation” and “24x24 image cropping”. The data augmentation technique found in this example is a bit different from before: we crop the original image into 24x24 images at random positions. We also flip it horizontally and sometimes convert it to grayscale.

:-) I wish everyone happy pascal neural networking! :-)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: avra on March 13, 2019, 10:55:30 pm
this post is more about how to do things than just listing new features
Much appreciated. Thank you!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on April 21, 2019, 08:50:10 am
 @Avra,
I'm starting to believe that most of the examples showing how to use CAI are too advanced.

I've just added two entry-level examples. The first shows how to learn the boolean functions XOR, AND and OR:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/supersimple/

The next is the same as above with Pearson Correlation:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/supersimplecorrelation/

:) I hope it helps and happy pascal coding to everyone. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 29, 2019, 12:31:10 pm
 :) Hello :)

Instead of reporting new features (there are plenty of new neuronal layer types), I'll report a test here. For the past 2 years, I've probably spent more time testing than coding. The test is simple: using a DenseNet-like architecture with 1M trainable parameters, normalization, L2, a cyclical learning rate and weight averaging, how accurate can the CIFAR-10 image classification be?

The experiment was done with SVN revision 907:
https://sourceforge.net/p/cai/svncode/907/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/

In case you intend to reproduce it, you should use exactly the same parameters found in the attached screenshot except for "Logical Threads" and "Physical Threads" (use your CPU core count instead). This experiment was run on the slowest (and cheapest) machine I have access to.

At my end, I got about 94% CIFAR-10 classification accuracy.

:) Wish everyone happy pascal coding. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 29, 2019, 06:37:32 pm
 :) Hello :)
I promise I'll try not to post anything after this for some time.

I've just done an experiment comparing CAI with Keras/TF. CAI and TF were both compiled with the AVX instruction set (no AVX2 nor AVX512). This is the FPC code:
Code: Pascal  [Select][+][-]
  1. NN.AddLayer( TNNetInput.Create(32, 32, 3) );
  2. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}0, {stride}1) );
  3. NN.AddLayer( TNNetMaxPool.Create(4) );
  4. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
  5. NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
  6. NN.AddLayer( TNNetFullConnectReLU.Create(32) );
  7. NN.AddLayer( TNNetFullConnectReLU.Create(32) );
  8. NN.AddLayer( TNNetFullConnectLinear.Create(10) );
  9. NN.AddLayer( TNNetSoftMax.Create() );

This is the equivalent Keras code:
Code: Python  [Select][+][-]
  1. model = Sequential()
  2. model.add(Conv2D(64, (5, 5), padding='valid',
  3.                  input_shape=x_train.shape[1:]))
  4. model.add(Activation('relu'))
  5. model.add(MaxPooling2D(pool_size=(4, 4)))
  6. model.add(Conv2D(64, (3, 3), padding='same'))
  7. model.add(Activation('relu'))
  8. model.add(Conv2D(64, (3, 3), padding='same'))
  9. model.add(Activation('relu'))
  10. model.add(Flatten())
  11. model.add(Dense(32))
  12. model.add(Activation('relu'))
  13. model.add(Dense(32))
  14. model.add(Activation('relu'))
  15. model.add(Dense(num_classes))
  16. model.add(Activation('softmax'))

Using the very same machine (Google Cloud with 16 virtual cores and no GPU), I ran both models. This is the output from Keras showing that each epoch takes 20 seconds to compute:
Quote
Epoch 24/350
312/312 [==============================] - 20s 64ms/step - loss: 1.3232 - acc: 0.5270 - val_loss: 1.2861 - val_acc: 0.5369
Epoch 25/350
312/312 [==============================] - 20s 63ms/step - loss: 1.3297 - acc: 0.5226 - val_loss: 1.2873 - val_acc: 0.5361
Epoch 26/350
312/312 [==============================] - 20s 64ms/step - loss: 1.3144 - acc: 0.5295 - val_loss: 1.2868 - val_acc: 0.5378
Epoch 27/350
312/312 [==============================] - 20s 63ms/step - loss: 1.3156 - acc: 0.5284 - val_loss: 1.2915 - val_acc: 0.5356
Epoch 28/350
312/312 [==============================] - 20s 63ms/step - loss: 1.3150 - acc: 0.5281 - val_loss: 1.2776 - val_acc: 0.5397
Epoch 29/350
312/312 [==============================] - 20s 64ms/step - loss: 1.3057 - acc: 0.5332 - val_loss: 1.2737 - val_acc: 0.5415

In CAI, each epoch is computed in about 16 or 17 seconds as per log (time in seconds is incremental and is the last field below):
Quote
31,0.7509,0.7379,0.6683,0.7780,0.6491,0.6031,0.0010000,523
32,0.7565,0.6392,0.6981,0.7679,0.6725,0.6202,0.0010000,539
33,0.7482,0.8476,0.7243,0.7678,0.6851,0.6034,0.0010000,555
34,0.7653,0.5730,0.5781,0.7665,0.6971,0.6179,0.0010000,571
35,0.7668,0.7007,0.6337,0.7800,0.6464,0.5884,0.0010000,588

Am I saying that CAI will always perform better? Absolutely not. CAI is likely to be faster on some models and slower on others. I haven't tested enough to be able to indicate when it will be faster or slower.

:) Anyway, wish everyone happy pascal coding. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 02, 2019, 10:33:01 pm
 :) Hello :),
I'm thinking about making a change that will break existing code but will make this library look a lot better. Before I make this change, I would like to have some feedback please.

The change is: renaming library units and then creating a package (version 1.0).

All library units will start with "Neural". So, these are some renaming examples:
uconvolutionalneuralnetwork -> NeuralNet
uvolume -> NeuralVolume
ueasyopencl -> NeuralOpenCl
uevolutionary -> NeuralEvolution
ucifar10 -> NeuralDatasets
ubyteprediction -> NeuralPrediction

What do you think? Please feel free to ask for other changes.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: avra on September 03, 2019, 12:11:36 am
I'm thinking about making a change that will break existing code but will make this library look a lot better. Before I make this change, I would like to have some feedback please.
I do not mind the suggested changes, but it would be nice if this thread and the other examples get corrected so that they still work.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 23, 2019, 04:06:10 am
... I'm approaching 1000 commits ... And this is the "new SVN repository" since I moved from CVS to SVN many years ago. I'm getting old...  Might be time to move to git ... Anyway ...

@avra
A new Neural API version with major changes/improvements is now under construction:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/neural/

To whom it may concern,
All unit names now start with "neural". All examples have already been fixed. The old "libs" folder has been kept for backward compatibility, so the next version won't break any existing code.

Looking for sources of inspiration, Keras has a useful "fit" method:
https://keras.rstudio.com/reference/fit.html

"fit" like methods and classes are now under construction for CAI! YAY! neuralfit.pas has already more than 700 lines of source code.

Soon, I intend to have deep learning examples ready to run on Google Colab with Lazarus/FPC/CAI NEURAL API.

:) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on September 23, 2019, 08:46:11 am
Better and better! I will enjoy playing with your code this week. TNX!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 23, 2019, 11:58:25 am
@Thaddy,
It's not ready yet! I intend to have the first version ready in 2 weeks.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on September 23, 2019, 12:34:44 pm
Ok. I have some patience... :D
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 23, 2019, 05:46:08 pm
Thank you for the interest! Makes life happier!

This is an example of the work in progress:
https://github.com/joaopauloschuler/neural-api#how-does-the-code-look-like-for-a-cifar-10-classification-example

A good introductory example is:
https://github.com/joaopauloschuler/neural-api/tree/master/examples/SimpleImageClassifier

:) Wishing everyone happy pascal coding :)

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: nouzi on October 03, 2019, 02:17:40 pm
Good job 'bravo '
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 08, 2019, 12:46:20 pm
@nouzi, thank you!

@Thaddy, I think that the new CAI Neural API subproject currently placed at https://github.com/joaopauloschuler/neural-api is ready for testing.

In case no major bug is found in the near future, I'll call it CAI Neural API v1.0.

:) wish everyone happy pascal solution construction :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 19, 2019, 07:08:25 pm
:) Hi :)
To anyone interested in computer vision who has never had a look at MNIST (http://yann.lecun.com/exdb/mnist/) or Fashion MNIST (https://github.com/zalandoresearch/fashion-mnist), I highly recommend clicking on these links. MNIST contains images of handwritten digits while Fashion MNIST contains 10 clothing classes.

These datasets now have API support, and source code examples for MNIST and Fashion MNIST have just been added.
:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: avra on October 20, 2019, 06:12:42 pm
Nice! Things are getting better and better.  :P
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: nouzi on October 20, 2019, 08:40:51 pm
@schuler

you are a hero
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 26, 2019, 01:39:47 pm
@avra and @nouzi,
Thank you for your support.

I've just asked for a Free Pascal benchmark to be included in the Fashion MNIST benchmark list:
https://github.com/zalandoresearch/fashion-mnist/issues/153

Got 94% accuracy with just 174K parameters. In the benchmark list (https://github.com/zalandoresearch/fashion-mnist), you'll find results with 26M parameters...

:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on October 26, 2019, 01:42:36 pm
That is extremely good!!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 05, 2019, 12:42:22 am
 :) ,
I would like to ask everyone: which feature or example would you like to see added? Feel free to ask. If you ask for something that I can do quickly or already have half baked, I might just spend a few minutes typing code.

:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: MathMan on November 05, 2019, 01:43:33 pm
:) ,
I would like to ask everyone: which feature or example would you like to see added? Feel free to ask. If you ask for something that I can do quickly or already have half baked, I might just spend a few minutes typing code.

:) wish everyone happy pascal coding :)

Before I answer, I have to admit that I have neither played with CAI, nor will I have time to do so in the foreseeable future. So, if you don't mind suggestions from a purely theoretical perspective:

* (fully) binarized neural network - (F)BNN
* bayesian confidence propagation neural network - BCPNN

Both look to me, from what I have read, like interesting approaches addressing topics such as reduced power consumption, inference capabilities, etc. But, also from what I have read, both might be major developments.

Cheers,
MathMan
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: circular on November 05, 2019, 03:54:06 pm
That looks amazing, being able to program neural networks in Pascal. I wonder if that can be useful in LazPaint? Some image processing features might be more efficient with neural networks.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on November 05, 2019, 05:29:11 pm
That looks amazing, being able to program neural networks in Pascal. I wonder if that can be useful in LazPaint? Some image processing features might be more efficient with neural networks.
Well, one of the examples is about recognition, but it needs to learn. That process is quite slow (hours, days, even months), but the code renders quite amazing results after that, and once established - to your specification; you have to stop learning at some point - it is fast.
And well written pure pascal.
What is maybe missing is just some pre-processing along the lines of my fuzzy code (on this forum, Zadeh logic) that would help a bit in some scenarios as a filter, but that is actually a completely different subject: it is AI, but not the machine learning AI that schuler's library is all about.
If you are interested: the code is really good and you'll pick it up quite fast if you have just a tiny bit of background. It is one of my favorite Free Pascal projects.

Example: if you feed it enough number plate examples (say, a million, from all angles) and let it run for some months, you have fast number plate recognition software with a high probability of being correct. But the real fun is to use it on comic book examples and let it filter out Daffy Duck...
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: zulof on November 12, 2019, 11:05:27 am
Could/should this be included in the Online Package Manager?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: m.abudrais on November 22, 2019, 08:14:54 am
Thank you for making this good library.
I am an absolute beginner in NN.
I ran the CIFAR-10 example (for 50 epochs only) and SimpleImageClassifier.nn was generated.
Then I tried to classify some images with this code:
Code: Pascal  [Select][+][-]
  1. procedure TForm1.Button1Click(Sender: TObject);
  2. var
  3.   NN: THistoricalNets;
  4.   O1: array of TNeuralFloat;
  5.   pOutPut, pInput: TNNetVolume;
  6.   k, y, x: integer;
  7.   OK: extended;
  8.   img: TTinyImage;
  9. begin
  10.   Image1.Picture.LoadFromFile('C:\Users\moh\Desktop\dog1.png');
  11.   NN := THistoricalNets.Create();
  12.   NN.AddLayer([TNNetInput.Create(32, 32, 3),
  13.     TNNetConvolutionLinear.Create(64, 5, 2, 1, 1).InitBasicPatterns(),
  14.     TNNetMaxPool.Create(4), TNNetConvolutionReLU.Create(64, 3, 1, 1, 1),
  15.     TNNetConvolutionReLU.Create(64, 3, 1, 1, 1),
  16.     TNNetConvolutionReLU.Create(64, 3, 1, 1, 1),
  17.     TNNetConvolutionReLU.Create(64, 3, 1, 1, 1), TNNetDropout.Create(0.5),
  18.     TNNetMaxPool.Create(2), TNNetFullConnectLinear.Create(10),
  19.     TNNetSoftMax.Create()]);
  20.   NN.LoadDataFromFile('SimpleImageClassifier.nn');
  21.   for y := 0 to 31 do
  22.   begin
  23.     for x := 0 to 31 do
  24.     begin
  25.       img.B[y, x] := Red(Image1.Canvas.Pixels[y, x]);
  26.       img.g[y, x] := Green(Image1.Canvas.Pixels[y, x]);
  27.       img.B[y, x] := Blue(Image1.Canvas.Pixels[y, x]);
  28.     end;
  29.   end;
  30.   pInput := TNNetVolume.Create(32, 32, 3, 1);
  31.   pOutPut := TNNetVolume.Create(10, 1, 1, 1);
  32.   LoadTinyImageIntoNNetVolume(img, pInput);
  33.   NN.Compute(pInput);
  34.   NN.GetOutput(pOutPut);
  35.   OK := 0.0;
  36.   for k := 0 to 9 do
  37.   begin
  38.     OK := OK + pOutPut.Raw[k];
  39.     WriteLn(pOutPut.Raw[k]);
  40.   end;
  41.   WriteLn();
  42.   WriteLn(OK);
  43. end;  
I tried images of a dog and a car from the same dataset, but I always get 1 in pOutPut.Raw[0]!
Can you please add an example showing how to use the classifier after training?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 25, 2019, 04:56:02 pm
Please forgive me for taking so long to reply.

First of all, you can completely remove the "AddLayer" call, as CAI stores both the architecture and the weights in the same .nn file.

Then, you need to add RgbImgToNeuronalInput as follows:

Code: Pascal  [Select][+][-]
  1. LoadTinyImageIntoNNetVolume(img, pInput);
  2. pInput.RgbImgToNeuronalInput(csEncodeRGB);

If the above doesn't work, I'll try to compile it at my end.

As you have SoftMax, you can print output class probabilities with:
Code: Pascal  [Select][+][-]
  1. pOutPut.Print();

You can get the inferred output class with:
Code: Pascal  [Select][+][-]
  1. WriteLn('Inferred Class: ', csTinyImageLabel[pOutPut.GetClass()]);
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: m.abudrais on November 25, 2019, 07:40:52 pm
Thank you for replying.
Quote
First of all, you can completely remove the "AddLayer" call, as CAI stores both the architecture and the weights in the same .nn file.

I used the wrong function, LoadDataFromFile; the correct one is LoadFromFile.
Now it works :).
This is the correct code, in case anyone is interested:
Code: Pascal  [Select][+][-]
  1. procedure TForm1.Button1Click(Sender: TObject);
  2. var
  3.   NN: THistoricalNets;
  4.   O1: array of TNeuralFloat;
  5.   pOutPut, pInput: TNNetVolume;
  6.   k, y, x: integer;
  7.   OK: extended;
  8.   img: TTinyImage;
  9. begin
  10.   NN := THistoricalNets.Create();
  11.   NN.LoadFromFile('SimpleImageClassifier.nn');
  12.   for y := 0 to 31 do
  13.   begin
  14.     for x := 0 to 31 do
  15.     begin
  16.       img.R[y, x] := Red(Image1.Canvas.Pixels[y, x]);
  17.       img.g[y, x] := Green(Image1.Canvas.Pixels[y, x]);
  18.       img.B[y, x] := Blue(Image1.Canvas.Pixels[y, x]);
  19.     end;
  20.   end;
  21.   pInput := TNNetVolume.Create(32, 32, 3, 1);
  22.   pOutPut := TNNetVolume.Create(10, 1, 1, 1);
  23.   LoadTinyImageIntoNNetVolume(img, pInput);
  24.   pInput.RgbImgToNeuronalInput(csEncodeRGB);
  25.   NN.Compute(pInput);
  26.   NN.GetOutput(pOutPut);
  27.   WriteLn('Inferred Class: ', csTinyImageLabel[pOutPut.GetClass()]);
  28. end;
I moved the image loading to TForm1.FormCreate; the image doesn't load when I load it in the TForm1.Button1Click function!
Thank you again for making this good library.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 25, 2019, 09:55:28 pm
@m.abudrais
Are you placing your code on a public repo?

Anyway, I just realized that you might be able to replace both for loops by using the neuralvolumev unit (https://github.com/joaopauloschuler/neural-api/blob/master/neural/neuralvolumev.pas):

Code: Pascal  [Select][+][-]
  1. /// Loads a Picture into a Volume
  2. procedure LoadPictureIntoVolume(LocalPicture: TPicture; Vol:TNNetVolume); {$IFDEF Release} inline; {$ENDIF}
  3.  
  4. /// Loads a Bitmap into a Volume
  5. procedure LoadBitmapIntoVolume(LocalBitmap: TBitmap; Vol:TNNetVolume);

The code will look like:
Code: Pascal  [Select][+][-]
  1. LoadPictureIntoVolume(Image1.Picture, pInput);
  2. pInput.RgbImgToNeuronalInput(csEncodeRGB);

In the case that the input image isn't 32x32, you can resize it (via copying) with:
Code: Pascal  [Select][+][-]
  1. TVolume.CopyResizing(Original: TVolume; NewSizeX, NewSizeY: integer);

Last idea: given that you have a trained NN, you could call this:
Code: Pascal  [Select][+][-]
  1. procedure TNeuralImageFit.ClassifyImage(pNN: TNNet; pImgInput, pOutput: TNNetVolume);
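A hedged sketch of that last idea, reusing the pInput/pOutPut volumes from the code above (the TNeuralImageFit instance name is illustrative):
Code: Pascal
Fit := TNeuralImageFit.Create;
Fit.ClassifyImage(NN, pInput, pOutPut);
WriteLn('Inferred Class: ', csTinyImageLabel[pOutPut.GetClass()]);
Fit.Free;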
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: m.abudrais on November 26, 2019, 04:58:28 pm
Quote
Are you placing your code on a public repo?
I have just uploaded the code to:
https://github.com/mabudrais/CAI-NEURAL-API-Test
Your library has many useful functions; I think more examples could be added to show them.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 06, 2019, 06:57:25 pm
:) Hello :)

It's usually very hard to understand, neuron by neuron, how a neural network dedicated to image classification works internally. One technique used to help understand what individual neurons represent is called gradient ascent. You can find more about gradient ascent at http://yosinski.com/deepvis .

In this technique, an arbitrary neuron is required to activate, and then the same backpropagation method used for learning is applied to an input image, producing an image that this neuron expects to see. I'm happy to report that we have a working example purely coded in FPC/Lazarus at: https://github.com/joaopauloschuler/neural-api/tree/master/examples/GradientAscent

To be able to run this example, you'll need to load an already trained neural network file and then select the layer you intend to visualise. I've attached an example from the last layer (can you see any pattern?) of a neural network trained to classify CIFAR-10 dataset. Can you find patterns for horse, deer, ship, car or truck?

:) Long life to Pascal :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 21, 2019, 01:28:15 pm
I just found this thread; it seems to be a very interesting topic!
Thanks for all the hard work you did!

While I still prefer the Python/TensorFlow/Keras approach to train CNNs, I'd like to use the resulting CNN in my Delphi project to help me sort sounds. Do you have any method to export the weights from a Keras model and load them into your library, so I can do that NN magic in pure pascal without any additional DLLs?

Thank you in advance!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 23, 2019, 06:29:55 pm
Hello agorka! Glad that you like it.

Supporting Keras file format isn't in the near horizon. At this moment, I'm polishing a super advanced example about producing artificial art: https://github.com/joaopauloschuler/neural-api/tree/master/examples/VisualGAN

Other feature requests are listed as github issues.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 24, 2019, 07:38:50 am
Wow! I couldn't imagine this could be done in Lazarus without heavy frameworks like TensorFlow or PyTorch...
Still, my question is: what file format could be used to load existing weights into the convolution layers?
I could find a converter or make my own export function from Python...
Is there any JSON, CSV or other format?

Thanks!
Oleg.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: avra on December 24, 2019, 08:17:21 am
Is there any JSON, CSV or other format?
It would also help for MatLab.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 24, 2019, 11:35:49 am
@agorka,
In case you want to try loading the weights yourself, here we go:

A neural network TNNet has Layers:
Code: Pascal  [Select][+][-]
  1. property Layers: TNNetLayerList read FLayers;

Each Layer (TNNetLayer) has Neurons:
Code: Pascal  [Select][+][-]
  1. property Neurons: TNNetNeuronList read FNeurons;

Each Neuron (TNNetNeuron) has weights:
Code: Pascal  [Select][+][-]
  1. property Weights: TNNetVolume read FWeights

Accessing weights from the second neuron from the second layer will look like this:
Code: Pascal  [Select][+][-]
  1. NN.Layers[1].Neurons[1].Weights

As you can see, Weights is a TNNetVolume.

TNNetVolume has 2 methods that might interest you:
Code: Pascal  [Select][+][-]
  1.     // load and save functions
  2.     function SaveToString(): string;
  3.     procedure LoadFromString(strData: string);

Having a look at the implementation, you'll find that weights are saved as a ";" separated string. The first 4 values are structural: version, SizeX, SizeY and depth. The following values are the actual weights.
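A hedged round-trip sketch based on the description above (WeightString is assumed to be a local string variable):
Code: Pascal
// Export the weights of one neuron as a ';' separated string and load them back.
WeightString := NN.Layers[1].Neurons[1].Weights.SaveToString();
NN.Layers[1].Neurons[1].Weights.LoadFromString(WeightString);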

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 24, 2019, 11:47:46 am
Quote
Wow! I couldn't imagine this could be done in Lazarus without such heavy frameworks like tensorflow or pytorch
Lazarus/FPC can do impressive things such as compiling to other targets (android and raspberry).

Artificial Art can be useful for texture generation on an artificial environment (such as computer games) as long as you have a pre-trained network. I'll eventually add an example creating very large images (and not just 32x32 pixels).
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 24, 2019, 10:48:38 pm
Thank you, I'm getting it running with 1 convolution layer.
One thing I got stuck on was the weights not updating after loading:
          NN.Layers[I].Neurons[N].LoadFromString(SL[N]);
I had to add
        NN.Layers[I].AfterWeightUpdate;
and make the AfterWeightUpdate procedure public.
It should probably be called from the LoadFromString function to avoid these inconsistencies.

Thank you for your hard work! I really hope to port my keras network to Pascal tomorrow! =)

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 24, 2019, 11:09:42 pm
Sorry, I've run into a problem again.
My network now is as simple as this:

      TNNetInput.Create(512, 512, 1),
      TNNetConvolutionReLU.Create(24, 7, 0, 1, 0).InitUniform(0),
      TNNetConvolutionReLU.Create(1, 5, 0, 1, 0).InitUniform(0)

Though, I run out of memory in
procedure TNNetConvolutionBase.SetPrevLayer(pPrevLayer: TNNetLayer);
when it tries to allocate a 502x502x5x5x24 volume, which is ~600MB.
As I understand it, this is needed to optimize the convolution calculation, but in my 32-bit application I just can't allocate that much, and my real network needs many more layers.
Is there any way to work around this problem?
For me it's better to run slowly than not to run at all. =)

Thanks!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 25, 2019, 02:15:01 am
Good idea about AfterWeightUpdate... Thank you.

Do you have freedom to try something like this (a bigger stride compensated by more neurons in the first convolution)?
Code: Pascal  [Select][+][-]
  1.       TNNetInput.Create(512, 512, 1),
  2.       TNNetConvolutionReLU.Create({neurons=}128, {FeatureSize=}7, {Padding=}0, {Stride=}7, {SupressBias=}0),
  3.       ...

If not, I'll eventually code a workaround (slower, but with less memory).
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 26, 2019, 10:45:41 am
Thank you!

I reduced the input size to 256x256 and was able to train the net to similar precision.
With this size, it's still enough memory to create the network in Delphi.
Now I'm adding the 3rd layer (MaxPooling) and getting another inconsistency with Keras:

My layer is
Code: Pascal  [Select][+][-]
  1.       TNNetMaxPool.Create(4, 2)
  2.  

With input size 246 it produces output size 123.
Keras produces size 122.

Code: Pascal  [Select][+][-]
  1. function TNNetPoolBase.CalcOutputSize(pInputSize: integer): integer;
  2. begin
  3.   Result := (pInputSize + FPadding) div FStride;
  4.   if ((pInputSize + FPadding) mod FStride > 0) then Inc(Result);
  5. end;
  6.  

I mostly find this formula for output size:
Out=(W−F+2P)/S+1
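With the numbers above (W = 246, F = 4, P = 0, S = 2) this gives (246 - 4)/2 + 1 = 122, which matches Keras, while CAI's ceiling-based formula gives ceil((246 + 0)/2) = 123.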

Could this be corrected to improve compatibility with other neural network frameworks?

Thanks for your continuous help =)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 27, 2019, 01:32:35 am
Hello agorka,
Thank you for bringing this to my attention.

I've just created a new layer called TNNetMaxPoolPortable with your requested behavior. So, if you get the latest github version, you'll be able to replace your existing maxpools by this new one.

The original implementation has a motivation behind it. Assume an 11x11 input followed by a 2x2 max pooling. In this scenario, CAI will output a 6x6 map while the portable implementation will output a 5x5 map (((11 - 2 + 2*0) div 2) + 1 = 5), completely missing the 11th row and column, with loss of information. Therefore, I still like CAI's original implementation.

May I ask what your target machine is? Does it have AVX or OpenCL?

In case you have AVX, are you curious to benchmark CAI against another API in your own environment? If yes, I can send you equivalent implementations for both APIs.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 28, 2019, 09:03:57 am
Thank you!
I'm updating right now.
About the performance, I mostly care about compatibility right now, so performance doesn't matter much for me. I might return to the performance after I'll have everything working. =)

BTW, I know it's strange, but I still use Delphi 7 in parallel with Lazarus, so I have to change some parts of your code to make it compatible.
I don't know if you'd like to have it compatible with D7, but here are the changes anyway:
Code: Pascal  [Select][+][-]
  1. procedure TVolume.FillForIdx(c: T; const aIdx: array of integer);
  2. var
  3.   Idx: integer;
  4. begin
  5.   for Idx in aIdx do
  6.     FData[Idx] := c;
  7. end;
I changed it to
Code: Pascal  [Select][+][-]
  1. procedure TVolume.FillForIdx(c: T; const aIdx: array of integer);
  2. var
  3.   I, Idx: integer;
  4. begin
  5.   for I:=0 to Length(aIdx)-1 do
  6.   begin
  7.     Idx:=aIdx[i];
  8.     FData[Idx] := c;
  9.   end;
  10. end;
I know it's not that beautiful but hopefully not less efficient. =)

Also in CopyChannels:
Code: Pascal  [Select][+][-]
  1.       for i:=0 to Length(aChannels)-1 do
  2.       begin
  3.         InputDepth:=aChannels[i];
  4.         Self[X, Y, OutputDepth] := Original[X, Y, InputDepth];
  5.         Inc(OutputDepth);
  6.       end;

I'll write back as soon as I get my FFT data into the net and compare it with the Keras result. I had some mismatches when processing some test data, but that was probably because of the different MaxPooling layer...
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 29, 2019, 11:46:50 pm
@agorka,
You aren't the first to ask for D7 support. At the other extreme, at this moment, I'm retesting AVX512 support with FPC trunk...

I believe that if you open-source your work on porting trained networks to pascal, it would benefit the pascal user base.

In Keras, do your convolutional layers have biases? If yes, you'll need to bring them over too.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on December 30, 2019, 11:31:01 am
Oh, great!
D7 support requires very few changes, such as removing inline directives, removing StrictDelimiter and a few lines more.

I finally got it kind of working. Not a very precise classification, but at least something!
So of course I can share my code, but it's far from ideal.
My convolutions have biases of course, but I think that part already works. Still, I need to compare the Keras and Pascal predictions to check if there are any differences.
Also, I removed all batch normalization layers from the network to avoid possible differences.
And my Keras network had a Flatten layer which I didn't find in your library, but it seems that a Dense layer works right after a Convolution layer (although I'm not sure the order of weights gets transferred correctly). Removing the Flatten layer in Keras produces the wrong dimension after the Dense layer, so maybe you can implement a similar Flatten layer, or confirm it should work correctly right away?

Here goes the Python Keras export part:
Code: Python  [Select][+][-]
  1. def SaveModelToTXT(model, filename):
  2.     print("Saving model to", filename, "...")
  3.     print("Found ", len(model.layers), " layers")
  4.     for i in range(len(model.layers)):
  5.         layer=model.layers[i]
  6.         layerFileName = filename+"_layer"+str(i)+".csv"
  7.         print(layer.name, ' to ', layerFileName)
  8.         if isinstance(layer, keras.layers.Conv2D):
  9.             with open(layerFileName, "w") as f:
  10.                 w=layer.get_weights()
  11.                 for filter in range(w[0].shape[3]):
  12.                     bias=w[1][filter]
  13.                     weights=w[0][:,:,:,filter].flatten()
  14.                     title = "Layer" +str(i)+"  Filter "+str(filter)+"  WShape:"+str(w[0].shape)
  15.                     #print(title)
  16.                     dimensions='1;'+str(w[0].shape[0])+';'+str(w[0].shape[1])+';'+str(w[0].shape[2])
  17.                     weights=["%.6f" % number for number in weights]
  18.                     weights=';'.join(weights)
  19.                     f.write(str(bias)+']'+dimensions+';'+weights+"\n")
  20.         if isinstance(layer, keras.layers.Dense):
  21.             with open(filename+"_layer"+str(i)+".csv", "w") as f:
  22.                 w=layer.get_weights()
  23.                 print(w[0].shape)
  24.                 for filter in range(w[0].shape[1]):
  25.                     bias=w[1][filter]
  26.                     weights=w[0][:,filter].flatten()
  27.                     title = "Layer" +str(i)+"  Filter "+str(filter)+"  WShape:"+str(w[0].shape)
  28.                     dimensions='1;'+str(w[0].shape[0])+';1;1'
  29.                     weights=["%.6f" % number for number in weights]
  30.                     weights=';'.join(weights)
  31.                     f.write(str(bias)+']'+dimensions+';'+weights+"\n")
  32.     print("Done!")
  33. SaveModelToTXT(model, modelPath+"ModelNetworkWeights")

And here's the pascal code that reads the weights:

Code: Pascal  [Select][+][-]
  1. procedure TSampleCNN.LoadWeights(FN: String);
  2. var I, N : Integer;
  3.     S : String;
  4.     SL : TStringList;
  5.     Size, SX, SY : Integer;
  6.     LayerFN : TFileNameUnicodeString;
  7. begin
  8. For I:=0 to NN.Layers.Count-1 do
  9.   Begin
  10.   If (NN.Layers[i] is TNNetConvolutionLinear) or
  11.      (NN.Layers[i] is TNNetConvolutionReLU) or
  12.      (NN.Layers[i] is TNNetFullConnectLinear) then
  13.      Begin
  14.      Try
  15.      SL:=TStringList.Create;
  16.      LayerFN:=FN+'_layer'+IntToStr(I)+'.csv';
  17.      If not FileExistsUTF8(LayerFN) then
  18.         Begin
  19.         ShowMessage('File not found: '+LayerFN);
  20.         Continue
  21.         End;
  22.      SL.LoadFromFile(LayerFN);
  23.      If NN.Layers[I].Neurons.Count<>SL.Count then
  24.         Begin
  25.         ShowMessage('Number of neurons mismatch: '+IntToStr(NN.Layers[I].Neurons.Count)+' vs '+IntToStr(SL.Count));
  26.         End
  27.      else
  28.         Begin
  29.         For N:=0 to NN.Layers[I].Neurons.Count-1 do
  30.           Begin
  31.           Size:=NN.Layers[I].Neurons[N].Weights.Size;
  32.           SX:=NN.Layers[I].Neurons[N].Weights.SizeX;
  33.           SY:=NN.Layers[I].Neurons[N].Weights.SizeY;
  34.           NN.Layers[I].Neurons[N].LoadFromString(SL[N]);
  35.           If NN.Layers[I].Neurons[N].Weights.Size<>Size then ShowMessageU('Incorrect size loaded to layer '+IntToStr(I));
  36.           If NN.Layers[I].Neurons[N].Weights.SizeX<>SX then ShowMessageU('Incorrect size loaded to layer '+IntToStr(I));
  37.           If NN.Layers[I].Neurons[N].Weights.SizeY<>SY then ShowMessageU('Incorrect size loaded to layer '+IntToStr(I));
  38.           End;
  39.         NN.Layers[I].AfterWeightUpdate;
  40.         End
  41.      Except
  42.         ShowMessage('Exception loading weights for layer '+IntToStr(I));
  43.      End;
  44.      SL.Free;
  45.      End;
  46.   End;
  47. end;

Also, I was thinking: how much could this be optimized if the network is only used for prediction? I've got more projects that would benefit from adding a NN, but memory consumption will be much higher there...

Oleg.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 30, 2019, 04:41:32 pm
Thank you for sharing your code! I'm certainly going to give it a go.

A beautiful aspect of TNNetVolume is that it can represent data as both 1D and 3D at any time. Therefore, flatten layers aren't required in this API.

Regarding memory requirements, you may consider using separable convolutions: https://github.com/joaopauloschuler/neural-api/tree/master/examples/SeparableConvolution

If you still need common convolutions with lower memory requirements, I'll add this to my feature list.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: fastblade on January 03, 2020, 09:17:18 am
I am a beginner in AI and NN.
Can this API do object detection, and are there any examples?

Thanks for your great work!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 05, 2020, 01:42:19 am
@fastblade,
Very welcome!

Although there are plenty of examples of image classification, there is no object detection example yet. I was about to start coding one, but I ended up coding a generative adversarial example instead. Porting an existing example from Keras to this API is doable.

BTW, as I'm heading for version 1.0, the time to report bugs is now.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on January 05, 2020, 09:44:04 pm
I don't know if this is a bug report or a feature request, but I think some networks could benefit from non-square convolutions or poolings.
For example, right now I'm training my FFT detector on inputs of 512x128, and with only square operations I seem to be quite limited in options. =)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 06, 2020, 02:30:25 pm
Thank you for the feedback agorka!

Totally off topic: yesterday I was googling to find out how other APIs support Weight Averaging (https://arxiv.org/abs/1803.05407) and I ended up discovering that plenty of them don't support it.

I'm glad to report that CAI does support it. This is an example showing how to configure weight averaging over 10 epochs for image classification:
Code: Pascal  [Select][+][-]
  1. NeuralFit := TNeuralImageFit.Create;
  2. NeuralFit.AvgWeightEpochCount := 10; // the number of epochs used for weight averaging
  3. NeuralFit.Fit(NN, ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes, {NumClasses=}10, {batchsize=}128, {epochs=}100);

Long life to pascal!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Root on January 07, 2020, 02:10:10 pm
Hi, first of all, great work. I would like to ask if this code can be converted to your library.
Thank you very much.

https://github.com/erohkohl/question-tagging/blob/master/src/model.py
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 08, 2020, 07:14:33 pm
@Root,
Yes. It can be done.

Inner layers are TNNetFullConnectReLU.

The last 2 layers are (in this order): TNNetFullConnectLinear and TNNetSoftMax.
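
As a rough sketch only (the input size, hidden layer widths and number of classes below are placeholders for illustration and are not taken from model.py), the translation could look like this:

Code: Pascal  [Select][+][-]
  1. // Hypothetical sizes - adjust them to match the original Keras model.
  2. NN := TNNet.Create();
  3. NN.AddLayer([
  4.   TNNetInput.Create({InputSize=}300),
  5.   TNNetFullConnectReLU.Create(128),
  6.   TNNetFullConnectReLU.Create(64),
  7.   TNNetFullConnectLinear.Create({NumClasses=}10),
  8.   TNNetSoftMax.Create()
  9. ]);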
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Root on January 12, 2020, 09:35:48 pm
Are you planning to implement these two algorithms?

Recurrent Neural Networks
LSTM Networks

ref.
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
I think it would only require adding the layer specifications and the related calculations.

This is a library that implements them:
https://github.com/cvsandbox/ANNT

Thank you ;)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 14, 2020, 05:15:59 pm
:) Hello Root, :)
RNN and LSTM are desirable but aren't currently planned.

Next steps in this API are:

Soon, we'll have a new example with the Tiny ImageNet dataset https://tiny-imagenet.herokuapp.com/. This dataset has 64x64 RGB images and 200 classes.

I'm happy to help anyone trying to code new layers on a forked GitHub repo.

:) Wish everyone happy pascal coding. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: agorka on January 21, 2020, 11:01:09 am
Hello!
I've got some news on my progress porting a Keras model to your library!
I've finally trained it in Keras to 95% accuracy, tried to use it in Pascal, and faced this problem:

My model looks like this:
Code: Pascal  [Select][+][-]
  1.       TNNetInput.Create(512, 128, 1),
  2.       TNNetConvolutionReLU.Create(24, 7, 0, 1, 0).InitUniform(0),
  3.       TNNetConvolutionReLU.Create(24, 5, 0, 1, 0).InitUniform(0),
  4.       TNNetMaxPoolPortable.Create(2, 2),
  5.       TNNetConvolutionReLU.Create(24*2, 5, 0, 1, 0).InitUniform(0),
  6.       TNNetMaxPoolPortable.Create(2, 2),
  7.       TNNetConvolutionReLU.Create(24*4, 5, 0, 1, 0).InitUniform(0),
  8.       TNNetMaxPoolPortable.Create(2, 2),
  9.       TNNetConvolutionReLU.Create(24*4, 5, 0, 1, 0).InitUniform(0),
  10.       TNNetMaxPoolPortable.Create(2, 2),
  11.       TNNetFullConnectLinear.Create(NumClasses)
It seems that the X and Y coordinates are swapped when loading Conv layer weights from strings saved in Keras.
I found that by looking at the feature maps: Keras contained vertical lines while Pascal contained horizontal ones, and vice versa. I used numpy.swapaxes on the weights of the convolution layers and it did the trick, though the images are still not 100% the same; maybe I'll also need to flip the convolution filters to match Keras.
The next problem was the dense layer.
In Keras I had a flatten layer before the dense layer, so the weights of the dense layer were flat. I had to reshape them manually to 27x3x96, swap axes, flatten, and then save. This seems too cumbersome, because I had to know the original feature map shape, and it can't be used in a generic weight saving function...
Having non-square convolutions could have solved my problem, so I could use a 27x3 convolution instead of the dense layer.
Maybe you have a different idea of how this problem could be solved in a more elegant way?

The good news is that after all those tricks I've got very good prediction accuracy in Pascal! Thank you again for making this possible! =)
Oleg.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 29, 2020, 05:09:59 pm
@agorka,
Thank you for sharing the good news! I'm curious to look at your code. Have you thought about open sourcing it? Your code could help others.

Maybe, with time, we could polish both ends and make porting tasks easier.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Firewrath on January 29, 2020, 09:50:17 pm
ok, so this is going to be a little OT from the current discussion going on, sorry. >.>

and first of all, I know pretty much nothing about AI other than articles I've read and such.
I'd like to play with it, but I have an 11 y/o PC and almost all the tutorials are in Python, so bleh. :P

But I was reading this guys blog:
http://blog.dlib.net/
and he posted something I found rather interesting that I haven't seen anywhere else, called a 'tiled pyramid'
(but mind, I don't follow AI news / updates a lot)

So I was wondering if it could be implemented in your work for training.
Post:
http://blog.dlib.net/2017/08/vehicle-detection-with-dlib-195_27.html
and Pic:
Shortened pic link (https://2.bp.blogspot.com/-xl3Qne3TWjk/WaHJgQZkH8I/AAAAAAAAAuc/qu037v4HxCUjFinIp6Ze5Oou7rhS1OU1QCLcBGAs/s1600/mmod_tutorial_image_pyramid.jpg)

To me, it seems a really smart way to train an AI, but that's coming from someone who doesn't know how it all works.
So what do I know? ;)
but wanted to pass it along anyways. ^-^
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on March 12, 2020, 05:35:55 pm
@Firewrath, I owe you a reply. Will do eventually.

I would like to comment here a trick that you can do with this API or any other API when working with image classification: you can increase the input image size.

As per the following example, by increasing CIFAR-10 input image sizes from 32x32 to 48x48, you can gain up to 2% in classification accuracy.

You can change image sizes with:
Code: Pascal  [Select][+][-]
  1. ImgTrainingVolumes.ResizeImage(48, 48);
  2. ImgValidationVolumes.ResizeImage(48, 48);
  3. ImgTestVolumes.ResizeImage(48, 48);

Then, you need to change the input image size in your NN model:
Code: Pascal  [Select][+][-]
  1. NN.AddLayer([
  2. TNNetInput.Create(48, 48, 3),

There is a CIFAR-10 example showing this trick here:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierResize48.lpr

 :) On the power of native code, I wish everyone happy pascal coding!  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on April 12, 2020, 02:30:46 am
 :) hello :),

I would like to comment on a new feature and new examples that have just been added:
There are plenty of image datasets where each folder represents a class of images. To make the Pascal coding simpler, a new procedure CreateVolumesFromImagesFromFolder has been added, as shown below:
Code: Pascal  [Select][+][-]
  1.     // change ProportionToLoad to a smaller number if you don't have available 6GB of RAM.
  2.     ProportionToLoad := 1;
  3.     WriteLn('Loading ', Round(ProportionToLoad*100), '% of the Tiny ImageNet 200 dataset into memory.');
  4.     CreateVolumesFromImagesFromFolder
  5.     (
  6.       ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes,
  7.       {FolderName=}'tiny-imagenet-200/train', {pImageSubFolder=}'images',
  8.       {color_encoding=}0{RGB},
  9.       {TrainingProp=}0.9*ProportionToLoad,
  10.       {ValidationProp=}0.05*ProportionToLoad,
  11.       {TestProp=}0.05*ProportionToLoad
  12.     );

The above example loads the Tiny ImageNet 200 dataset https://tiny-imagenet.herokuapp.com/ into RAM. 90% is loaded as training data, while 5% each is loaded for validation and testing.

The next example shows how to load the dataset used for the paper Identification of Plant Leaf Diseases Using a 9-layer Deep Convolutional Neural Network https://data.mendeley.com/datasets/tywbtsjrjv/1:

Code: Pascal  [Select][+][-]
  1.     // change ProportionToLoad to a smaller number if you don't have available 32GB of RAM.
  2.     ProportionToLoad := 1;
  3.     WriteLn('Loading ', Round(ProportionToLoad*100), '% of the Plant leave disease dataset into memory.');
  4.     CreateVolumesFromImagesFromFolder
  5.     (
  6.       ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes,
  7.       {FolderName=}'plant', {pImageSubFolder=}'',
  8.       {color_encoding=}0{RGB},
  9.       {TrainingProp=}0.9*ProportionToLoad,
  10.       {ValidationProp=}0.05*ProportionToLoad,
  11.       {TestProp=}0.05*ProportionToLoad,
  12.       {NewSizeX=}128, {NewSizeY=}128
  13.     );

As you can see above, besides loading the dataset, the images are resized to 128x128. As I don't have 32GB of RAM in my development machine, I've tested the above code with https://vast.ai/ using this Jupyter notebook: https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimplePlantLeafDisease/SimplePlantLeafDisease.ipynb. You can find the raw results at: https://github.com/joaopauloschuler/neural-api/tree/master/examples/SimplePlantLeafDisease/results .

As you can see in these raw results, the test classification (i.e. the plant disease diagnostic) accuracy is 98.95% (this is pretty high; I wonder if we could do the same with the coronavirus).

CreateVolumesFromImagesFromFolder is parallelized, so the classes are loaded into memory in parallel. I consider this code tremendously fast, and it outperforms code that I have seen in other APIs. I should say thank you to the FPC developers for coding FPImage and plenty of other super fast bits and pieces that allowed me to implement a fast CreateVolumesFromImagesFromFolder.

This is the neural network architecture used for classification:
Code: Pascal  [Select][+][-]
  1.     NN.AddLayer([
  2.       TNNetInput.Create(128, 128, 3),
  3.       TNNetConvolutionLinear.Create({Features=}64, {FeatureSize=}5, {Padding=}4, {Stride=}2),
  4.       TNNetMaxPool.Create(2),
  5.       TNNetMovingStdNormalization.Create(),
  6.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1),
  7.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1),
  8.       TNNetMaxPool.Create(2),
  9.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1),
  10.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1),
  11.       TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}2),
  12.       TNNetDropout.Create(0.5),
  13.       TNNetMaxPool.Create(2),
  14.       TNNetFullConnectLinear.Create(39),
  15.       TNNetSoftMax.Create()
  16.     ]);
The complete source code can be found at: https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimplePlantLeafDisease/SimplePlantLeafDisease.pas

 :) I wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: eljo on April 12, 2020, 07:07:56 am
I just wanted to say I love your work. Thank you for your support.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on April 14, 2020, 09:21:47 am
@eljo,
Thank you for your kind words. They are motivating.

Created a full Tiny ImageNet 200 example: https://github.com/joaopauloschuler/neural-api/tree/master/examples/SimpleTinyImageNet

:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 03, 2020, 09:24:20 am
 :) Hello :),
I have news about increasing image resolution with neural networks.

I created a command line tool so everyone can now increase image resolution with nothing more than FPC (no external libraries are required) via the command line:
Code: Pascal  [Select][+][-]
  1. #SuperResolution -i street.png -o street3.png
  2. Loading input file: street.png
  3. Input image size: 158x214x3
  4. Creating Neural Network...
  5. Resizing with tiles...
  6. Neural network file found at ../../../examples/SuperResolution : super-resolution-7-64-sep.nn
  7. Padding input image.
  8. Resizing with tiles to: 288x416x3
  9. Saving output file: street3.png

-i defines the input image file while -o defines the output image file.

You can get this command line tool from:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/SuperResolution/README.md

:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Alienigene on May 23, 2020, 02:56:48 am
Hello everyone, I don't know if this is the right place to ask, but I know this problem is rather frequent and it is related to the Free Pascal/Lazarus IDE, so I figure here is a good place to ask.

The thing is very simple. My operating system is Windows 10 v1909 x64. I downloaded Conscious Artificial Intelligence (the project on Sourceforge.net); the files in it are 64-bit. I extracted the files, ran ArtificialArt.exe, pressed Start and nothing happened. I noticed a console window beside the GUI window, and it told me it couldn't find a bin file, told me to download CIFAR, and provided the address. Simple enough: I downloaded and extracted the files into the CAI folder.

Then I pressed Start again. This time, and every time after, an error message window pops up and says: Access violation. Press OK to ignore and risk data corruption. Press Abort to kill the program. This always happens after loading the .bin files and "Creating Neural Networks...", immediately after "Creating generative". If I press OK, nothing happens. If I press Abort, the program doesn't end; the console says "Sending STOP request", but the program doesn't stop, requiring me to close the console to kill the program.

The same problem applies to GradientAscent.exe; it happens immediately after I load a neural network.
I noticed their icons are those of programs built with the Lazarus IDE. I searched for the problem online and found it rather frequent, but without solutions, so I figured I would ask here.

I have installed the Free Pascal IDE 3.0.4 (x86) and Lazarus IDE 2.0.6 (x86_64), and placed the FPC executable path in the system PATH environment variable. I have had no problem using the Lazarus IDE itself so far. Any ideas on what caused this and how to solve it?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 24, 2020, 01:12:35 am
Hello Alienigene,
Can you please confirm whether your processor has the AVX instruction set? In case it hasn't, I can prepare a compiled version for you without AVX.

The current binaries require the AVX instruction set and will raise an exception when the NN runs on a processor without AVX.

In case you would like to try compiling it yourself with Lazarus, go to Options/Project Options/Custom Options/Defines and uncheck AVX. After this, go to Run/Build and then Run/Run.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Alienigene on May 28, 2020, 10:35:37 am
I am sorry that I have only just seen this; I've been busy for a while, so I didn't reply instantly. My CPU is an Intel Celeron J1900 @ 1.99GHz, a quad-core CPU. I looked at CPU-Z: it supports the MMX, SSE1-4.2, EM64T and VT-x instruction sets, but no AVX, so I think that is the cause of the problem.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Alienigene on May 29, 2020, 06:41:35 am
I've got a problem, a big problem. I downloaded the svncode trunk of CAI from SourceForge, intending to compile all the stuff by myself. However, there aren't any project files (.lpi); there are lots of .pas files, some .inc files, and some .lpr, .lfm and .cl files. Obviously multiple files are included in one project, but I honestly don't know which files belong to which project, nor what file each project produces (e.g. .exe, .dll, .lib, .nn). I don't know the correspondences, so I can't compile anything...
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 31, 2020, 03:29:18 am
Hi Alienigene, these projects are probably a good start (open with Project/Open Project):
Alternatively, if you have a gmail account, you can just open the following link, select "Runtime" and then "Run all":
https://colab.research.google.com/github/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierCPU.ipynb

:) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 30, 2020, 08:04:14 pm
:) Hello - this is a quick update :)

These are the news from my end:

:) Wish everyone happy pascal coding. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 16, 2020, 11:11:05 pm
I'm trying to get your super simple example to compile. I have your original uconvolutionneuralnetwork file. Do you have a new set of code that defines the units neuralnetwork, neuralvolume and neuralfit?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 17, 2020, 05:57:02 am
 :) Hi Daringly :)
You'll need this folder to compile: https://github.com/joaopauloschuler/neural-api/tree/master/neural

Alternatively, in the case that you would like to try image classification without having to install anything, you can run via browser at:

https://colab.research.google.com/github/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierCPU.ipynb
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 17, 2020, 08:47:11 pm
Thanks!

I do a lot of sports actuarial analysis (probabilities for each match outcome), and this will hopefully let me do some much more complex analysis than what I am presently doing.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Blade on September 19, 2020, 07:26:33 pm
Very nice.  Great project.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 21, 2020, 08:44:13 pm
Is there a set of documentation for these libraries?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 22, 2020, 01:10:38 pm
:) Hi daringly :)
Except for this read me (https://github.com/joaopauloschuler/neural-api), not much. Is there any particular part of the API that you would like me to detail further?


Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 22, 2020, 04:51:28 pm
I'm not particularly good at Python/Keras, whereas I'm much more fluent in Pascal. I'd love a set of documents so that someone who has never used Python or Keras could do things from the ground up.

Here's the meat of a Keras model I wrote, and I have no idea how to implement it in your library:

# Imports needed by the snippet below.
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.callbacks import EarlyStopping

model = tf.keras.Sequential()
model.add(layers.Dense(12, input_dim=14, activation='relu'))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

print(model.summary())

monitor_val_acc = EarlyStopping(monitor='val_loss', patience=5)
model.fit(x, y, epochs=200, validation_data=(x_test, y_test), callbacks=[monitor_val_acc])
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 22, 2020, 04:54:55 pm
How about an example where you have 2 inputs (x, y) and an output (z) which is the hypotenuse of a right-angle triangle?

So in our dataset, Z^2 = X^2 + Y^2. Use maybe 10 data points.

A simple start-to-finish on "solving" and improving on the solution to this problem could be extrapolated to all other sorts of problems.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 24, 2020, 05:02:08 am
 :) Hello daringly :)

Your Keras model translates to this:
Code: Pascal  [Select][+][-]
  1. var
  2.   Model: TNNet;
  3. begin
  4.   Model := TNNet.Create();
  5.   Model.AddLayer([
  6.     TNNetInput.Create(14),
  7.     TNNetFullConnectReLU.Create(12),
  8.     TNNetFullConnectReLU.Create(4),
  9.     TNNetFullConnect.Create(1) // This is hyperbolic tangent - not Sigmoid
  10.   ]);

print(model.summary()) translates to:
Code: Pascal  [Select][+][-]
  1. Model.DebugStructure();

The above example has a hyperbolic tangent instead of a sigmoid.

For the fitting, you might follow this example:
https://github.com/joaopauloschuler/neural-api/tree/master/examples/XorAndOr

The fitting method allows you to pass validation and test pairs also:
Code: Pascal  [Select][+][-]
  1. procedure Fit(pNN: TNNet;
  2.   pTrainingVolumes, pValidationVolumes, pTestVolumes: TNNetVolumePairList;
  3.   pBatchSize, Epochs: integer);

You won't need callbacks for monitoring. The fitting already saves the best solution for you and displays progress.

In the case that you really need callbacks, you can look at:
Code: Pascal  [Select][+][-]
  1.       property OnAfterStep: TNotifyEvent read FOnAfterStep write FOnAfterStep;
  2.       property OnAfterEpoch: TNotifyEvent read FOnAfterEpoch write FOnAfterEpoch;
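
For instance, a minimal sketch of hooking the per-epoch callback (TMyMonitor and AfterEpoch are illustrative names; NFit is assumed to be a TNeuralFit instance, and TNotifyEvent is the standard FPC event type):

Code: Pascal  [Select][+][-]
  1. type
  2.   // Hypothetical helper class used only for this sketch.
  3.   TMyMonitor = class
  4.     procedure AfterEpoch(Sender: TObject);
  5.   end;
  6.  
  7. procedure TMyMonitor.AfterEpoch(Sender: TObject);
  8. begin
  9.   WriteLn('One more epoch has finished.');
  10. end;
  11. ...
  12. var
  13.   Monitor: TMyMonitor;
  14. ...
  15.   Monitor := TMyMonitor.Create;
  16.   NFit.OnAfterEpoch := @Monitor.AfterEpoch; // called after every epoch
  17.   // NFit.Fit(...) runs as usual and fires the callback.
  18.   Monitor.Free;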

CAI is SGD based (I don't have Adam). I'm passionate about SGD.

About Z^2 = X^2 + Y^2, I created a feature request at:
https://github.com/joaopauloschuler/neural-api/issues/33
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 24, 2020, 02:14:49 pm
Thanks! I'll fire this up this weekend.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 29, 2020, 04:35:58 am
@daringly

I coded the example you asked for:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/Hypotenuse/Hypotenuse.lpr

Training, validation and testing pairs are created with:
Code: Pascal  [Select][+][-]
  1.   function CreateHypotenusePairList(MaxCnt: integer): TNNetVolumePairList;
  2.   var
  3.     Cnt: integer;
  4.     LocalX, LocalY, Hypotenuse: TNeuralFloat;
  5.   begin
  6.     Result := TNNetVolumePairList.Create();
  7.     for Cnt := 1 to MaxCnt do
  8.     begin
  9.       LocalX := Random(100);
  10.       LocalY := Random(100);
  11.       Hypotenuse := sqrt(LocalX*LocalX + LocalY*LocalY);
  12.  
  13.       Result.Add(
  14.         TNNetVolumePair.Create(
  15.           TNNetVolume.Create([LocalX, LocalY]),
  16.           TNNetVolume.Create([Hypotenuse])
  17.         )
  18.       );
  19.     end;
  20.   end;
  21. ...
  22.     TrainingPairs := CreateHypotenusePairList(10000);
  23.     ValidationPairs := CreateHypotenusePairList(1000);
  24.     TestPairs := CreateHypotenusePairList(1000);

The neural network is created with 2 inputs and one output:
Code: Pascal  [Select][+][-]
  1. NN := TNNet.Create();
  2. NN.AddLayer( TNNetInput.Create(2) );
  3. NN.AddLayer( TNNetFullConnectReLU.Create(32) );
  4. NN.AddLayer( TNNetFullConnectReLU.Create(32) );
  5. NN.AddLayer( TNNetFullConnectReLU.Create(1) );

The fitting is called with:
Code: Pascal  [Select][+][-]
  1.   // Returns TRUE if difference is smaller than 0.1 .
  2.   function LocalFloatCompare(A, B: TNNetVolume; ThreadId: integer): boolean;
  3.   begin
  4.     Result := ( Abs(A.FData[0]-B.FData[0])<0.1 );
  5.   end;  
  6. ...
  7.     NFit := TNeuralFit.Create();
  8.     NFit.InitialLearningRate := 0.00001;
  9.     NFit.LearningRateDecay := 0;
  10.     NFit.L2Decay := 0;
  11.     NFit.InferHitFn := @LocalFloatCompare;
  12.     NFit.Fit(NN, TrainingPairs, ValidationPairs, TestPairs, {batchsize=}32, {epochs=}50);

A final test is done with:
Code: Pascal  [Select][+][-]
  1.     // tests the learning
  2.     for Cnt := 0 to 9 do
  3.     begin
  4.       NN.Compute(TestPairs[Cnt].I);
  5.       NN.GetOutput(pOutPut);
  6.       WriteLn
  7.       ( 'Inputs:',
  8.         TestPairs[Cnt].I.FData[0]:5:2,', ',
  9.         TestPairs[Cnt].I.FData[1]:5:2,' - ',
  10.         'Output:',
  11.         pOutPut.Raw[0]:5:2,' ',
  12.         ' Desired Output:',
  13.         TestPairs[Cnt].O.FData[0]:5:2
  14.       );
  15.     end;

These are results from my own run:
Code: Pascal  [Select][+][-]
  1. Inputs:19.00, 71.00 - Output:73.53  Desired Output:73.50
  2. Inputs:28.00,  6.00 - Output:28.66  Desired Output:28.64
  3. Inputs:34.00, 53.00 - Output:63.02  Desired Output:62.97
  4. Inputs:20.00, 35.00 - Output:40.25  Desired Output:40.31
  5. Inputs:53.00, 28.00 - Output:59.98  Desired Output:59.94
  6. Inputs:56.00, 12.00 - Output:57.31  Desired Output:57.27
  7. Inputs:77.00, 29.00 - Output:82.35  Desired Output:82.28
  8. Inputs:69.00, 31.00 - Output:75.65  Desired Output:75.64
  9. Inputs:24.00, 78.00 - Output:81.65  Desired Output:81.61
  10. Inputs:15.00, 22.00 - Output:26.66  Desired Output:26.63

 :) Wish everyone happy pascal coding  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on September 30, 2020, 02:26:20 am
When I compile the hypotenuse example, I get "FATAL: Cannot find MTPCPU used by neuralthread".

I did get the examples included in your package working.

$0.01 for your thoughts.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on September 30, 2020, 10:11:54 am
You'll need the package multithreadprocslaz.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on October 02, 2020, 03:31:12 pm
The example works, thanks!

I tried modifying it, and broke it.

I wanted to expand the example. With 3 inputs, I have the dimensions of a hexahedron. With 2 outputs, I'd calculate the hypotenuse of opposite corners, and the volume.

I changed the initial NN definition to:

NN.AddLayer( TNNetInput.Create(3) );
NN.AddLayer( TNNetFullConnectReLU.Create(32) );
NN.AddLayer( TNNetFullConnectReLU.Create(32) );
NN.AddLayer( TNNetFullConnectReLU.Create(2) );     

The pair list creation function was modified:
for Cnt := 1 to MaxCnt do
    begin
      LocalX := Random(100);
      LocalY := Random(100);
      LocalZ := Random(100);
      Hypotenuse := sqrt(LocalX*LocalX + LocalY*LocalY + LocalZ * LocalZ);
      Volume := LocalX * LocalY * LocalZ;

      Result.Add(
        TNNetVolumePair.Create(
          TNNetVolume.Create([LocalX, LocalY, LocalZ]),
          TNNetVolume.Create([Hypotenuse, Volume])
        )
      );   

I adjusted the outputs, but that isn't causing the problems. All my outputs are now 0. What have I done?

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on October 02, 2020, 07:57:18 pm
What exactly does this statement do?

pOutPut := TNNetVolume.Create(1,1,1,1);
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 04, 2020, 07:05:51 am
@daringly,
Very interesting!

Code: Pascal  [Select][+][-]
  1. pOutPut := TNNetVolume.Create({pSizeX=}1, {pSizeY=}1, {pDepth=}1, {FillValue=}1);
creates a 3D memory structure that can be accessed as a 1D, 2D and 3D construct at the same time. The above example will create a 1x1x1 array filled with the value 1.

Given that your output now has size 2, you should try this:
Code: Pascal  [Select][+][-]
  1. pOutPut := TNNetVolume.Create({pSizeX=}2, {pSizeY=}1, {pDepth=}1, {FillValue=}1);

Given your use cases, I'm now considering adding non-linear neural network layers so the network can learn expressions such as "sqr(A) + sqrt(A*B)" faster.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 04, 2020, 07:29:11 am
This is in reply to "All my outputs are now 0. What have I done?".

Volume is a much bigger number than the hypotenuse. So, the network will converge towards fixing volume first. To solve the problem, you can rescale volume with:
Code: Pascal  [Select][+][-]
  1. Volume := (LocalX * LocalY * LocalZ)/(100*100);

With the above, your network should converge.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on October 04, 2020, 10:09:04 pm
Thanks. That adjustment did make it converge, and I can now run with 2 outputs. I'm still getting an output of 0 for the second output (volume) while the desired outputs are correct. It's not critical, since I could simply rerun the nn for each output, but slightly frustrating.

The next thing I'm starting is much more interesting. Using about 8 custom features to predict spread and total results in a sport. I've had a lot of success doing this in the past without neural nets, and I'm wondering how much better this might be.

Edit: I figured out what was making the second output 0: dividing by (100*100) made the scale too small, but 100 worked reasonably well. Is it better to build separate models for unrelated outputs? In the next major project, most projected spreads (the target solution for my features) will typically be -25 to 25, and the target for totals is typically 40-80.

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on October 04, 2020, 11:57:53 pm
As long as I'm thinking wishfully, can we have GPU support?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 05, 2020, 08:40:11 am
In regards to "can we have GPU support?":
Yes we can!

GPUs are run via OpenCL. Have a look at this example:
https://github.com/joaopauloschuler/neural-api/tree/master/examples/SimpleImageClassifierGPU

Fully connected (dense) layers have an OpenCL implementation for the forward pass. While other APIs support only NVIDIA, GPUs from major vendors are supported here.

In the case that you have more than one GPU in your environment, you can have a look here (it shows how to select the correct GPU from a GPU list):
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/visualCifar10OpenCL/

In the case that you need to infer 20 partially unrelated outputs in the same network, you can try a multi-path neural network (this is fully supported by this API).

These Project/Custom (compilation) options might be important to you: AVX, AVX2, AVX512 and OpenCL.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 05, 2020, 08:48:49 am
About "will typically be -25 to 25, and target for totals is typically 40-80", I have a preference for rescaling inputs/outputs to a monopolar range [0.1,0.9] or
 bipolar [-1,+1] or even [-2, +2]. This is a personal preference. Smaller numbers avoid overflows.
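
For example, a minimal sketch of rescaling those ranges to the bipolar range [-1, +1] (the function names are just for illustration):

Code: Pascal  [Select][+][-]
  1. // Maps a spread in [-25, +25] to [-1, +1].
  2. function SpreadToBipolar(Spread: TNeuralFloat): TNeuralFloat;
  3. begin
  4.   Result := Spread / 25;
  5. end;
  6.  
  7. // Maps a total in [40, 80] to [-1, +1]: subtract the midpoint (60) and divide by the half-range (20).
  8. function TotalToBipolar(Total: TNeuralFloat): TNeuralFloat;
  9. begin
  10.   Result := (Total - 60) / 20;
  11. end;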
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: josiaslg on October 07, 2020, 02:00:28 am
Hi.
First of all, congratulations on this project. Such a vision of performance and optimization is really amazing.
I'm new to this field and learning about the project and the process.
I started by compiling SimpleFashionMNIST and made the changes to be OpenCL compatible. It works very well without any problem.
Now I have the challenge of reading directories with plain photos (png, jpeg, bmp, etc.).
Reading them is OK, but how can I implement this in the neuraldatasets unit?
The photos are all of one category, but they have different sizes. The other thing is: how can I test the results to know how well the training is going?
Sorry if these questions are too newbie.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 07, 2020, 08:29:24 pm
@josiaslg,
Welcome aboard! Glad to see you here!

Your question is a very good question. Sometimes, we have images in folders like this:
Code: Pascal  [Select][+][-]
  1. /myfolder/<class_id>/img1.jpg
  2. /myfolder/<class_id>/img2.jpg
  3. /myfolder/<class_id>/img3.jpg

For folder structures like the above, you can look here: https://github.com/joaopauloschuler/neural-api#one-class-per-folder-with-image-classification

Sometimes, inside the class folder, you have a subfolder such as "images". Here is an example:
Code: Pascal  [Select][+][-]
  1. /myfolder/<class_id>/images/img1.jpg
  2. /myfolder/<class_id>/images/img2.jpg
  3. /myfolder/<class_id>/images/img3.jpg

In this case, you'll pass 'images' to the parameter pImageSubFolder. It should look like this:
Code: Pascal  [Select][+][-]
  1. // change ProportionToLoad to a smaller number if you don't have enough RAM.
  2. ProportionToLoad := 1;
  3. WriteLn('Loading ', Round(ProportionToLoad*100), '% of the dataset into memory.');
  4. CreateVolumesFromImagesFromFolder
  5. (
  6.   ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes,
  7.   {FolderName=}'/myfolder', {pImageSubFolder=}'images',
  8.   {color_encoding=}csRGB{RGB},
  9.   {TrainingProp=}0.9*ProportionToLoad,
  10.   {ValidationProp=}0.05*ProportionToLoad,
  11.   {TestProp=}0.05*ProportionToLoad,
  12.   {NewSizeX=}64, {NewSizeY=}64
  13. );

The above code will resize input images to 64x64.

I'll eventually add an example loading images "on the fly" for very large datasets. If you need large amounts of RAM such as 128GB to run your models, you can consider running in the cloud.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on October 11, 2020, 04:15:47 pm
Edit: Issue resolved. Switching to Linear worked, since the output was typically -1 to 1, and ReLU only outputs non-negative values.

I'm having another "0 Output" issue. All my outputs show 0, even though everything seemed to run correctly.

I started with the Hypotenuse example, and changed the first layer to allow for 22 features:

NN.AddLayer( TNNetInput.Create(22) );
    NN.AddLayer( TNNetFullConnectReLU.Create(32) );
    NN.AddLayer( TNNetFullConnectReLU.Create(32) );
    NN.AddLayer( TNNetFullConnectReLU.Create(1) );       

The datapoint creation was also changed:
 Result.Add(
         TNNetVolumePair.Create(
           TNNetVolume.Create([Features[6], Features[7], Features[8], Features[9],
           Features[10], Features[11], Features[12], Features[13], Features[14],
           Features[15], Features[16], Features[17], Features[18], Features[19],
           Features[20], Features[21], Features[22], Features[23], Features[24],
           Features[25], Features[26], Features[27]]),
           TNNetVolume.Create([noutputs[28]/50])           

The output is typically a number from -50 to 50, and usually -20 to 20.

I used weird indexing because it made it easier to visualize where pieces of a .csv file are going -- the features are columns 6-27, and the desired output is in 28 (and 29).

Any idea how I blew this up?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 11, 2020, 07:09:13 pm
What would the expected output range be (-50 to 50)?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on October 11, 2020, 07:19:27 pm
Before scaling, -50 to 50. I divided the expected output by 50, so it is typically -1 to 1.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on October 11, 2020, 07:20:52 pm
One other weird issue:

When I executed the following statement, I inspected Features to ensure they were non-zero. After processing and showing the inputs, Features 24-27 are always 0 in the output (although they aren't initially). Something isn't copying over somehow.

Result.Add(
         TNNetVolumePair.Create(
           TNNetVolume.Create([Features[6], Features[7], Features[8], Features[9],
           Features[10], Features[11], Features[12], Features[13], Features[14],
           Features[15], Features[16], Features[17], Features[18], Features[19],
           Features[20], Features[21], Features[22], Features[23], Features[24],
           Features[25], Features[26], Features[27]]),
           TNNetVolume.Create([noutputs[28]/50])           
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: josiaslg on October 13, 2020, 03:56:52 am
Hi.
I try to do but not generate nn file.
When execute, the images is founded on folders (the folders "images" is separated in 3 classes).
But, simple don't generate nn file.
The source file is here: https://github.com/josiaslg/cai-exemple-photo-from-directory/blob/main/neural.lpr (i not paste code here to not mess up the text and forum).
The OpenCL is ok, i test other example files and everything goes fine (the driver is installed and working on CAi OpenCL examples).
 

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 13, 2020, 12:31:06 pm
Quote
Before scaling, -50 to 50. I divided the expected output by 50, so it is typically -1 to 1...are always 0 on output

If your expected output can be negative and your last layer is a ReLU, then replacing the ReLU layer (as a ReLU result is never negative) with a linear layer might fix the problem.
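
For example, applied to the model posted above, only the last layer changes (a sketch keeping the same hidden sizes):

Code: Pascal  [Select][+][-]
  1. NN.AddLayer( TNNetInput.Create(22) );
  2. NN.AddLayer( TNNetFullConnectReLU.Create(32) );
  3. NN.AddLayer( TNNetFullConnectReLU.Create(32) );
  4. // A linear output layer can produce negative values, unlike ReLU.
  5. NN.AddLayer( TNNetFullConnectLinear.Create(1) );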
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 13, 2020, 12:32:23 pm
Quote
But, simple don't generate nn file.

You need validation data to generate the nn file. The nn file is saved each time the validation result improves.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on October 15, 2020, 12:30:45 am
Is there a limit to how many inputs you can add? It looks like adding features stopped working after 21 inputs in the code I ran.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 15, 2020, 11:08:46 am
Some of my own models have millions of trainable parameters with 224x224x3 (150k) inputs. So, input size isn't a limit.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: daringly on October 21, 2020, 07:35:33 pm
I figured out the error, thanks. I was doing something idiotic.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on October 22, 2020, 07:42:42 pm
Cool.

In case you are interested, I coded a new Hypotenuse example without preallocating all the training data in memory. This is useful if you have a dataset that is too large to fit into memory (I've seen training datasets approaching 1TB).

https://github.com/joaopauloschuler/neural-api/blob/master/examples/HypotenuseFitLoading/HypotenuseFitLoading.lpr

:) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: nouzi on October 22, 2020, 08:04:14 pm
Very nice  8-)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 03, 2020, 01:02:18 am
@nouzi, thank you for your kind words.

I have 2 pieces of news:

Beautiful Images
I've attached some images created by a generative adversarial network. The source code can be found here:
https://sourceforge.net/p/cai/svncode/1485/tree/trunk/lazarus/examples/VisualGANTinyImagenet/

TNNetConvolutionSharedWeights
Today, a new convolutional layer type was added to the API: TNNetConvolutionSharedWeights. Instead of having its own neurons and weights, this convolutional layer uses the same neurons and weights as another existing layer. So, if you need 2 layers with the same neurons, you can add a TNNetConvolutionSharedWeights to your network. Why would you need something like this? Maybe your NN needs to learn the same patterns at different scales (big and small dogs are all dogs).

I added an example at:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierSharedWeights.lpr

:) wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: PierceNg on January 31, 2021, 08:56:51 am
@schuler This is great stuff! Thanks for making this!

My setup: Windows 10 Home 2H20, Ryzen 2500U, latest AMD drivers, Lazarus 2.0.10 with FPC 3.2.0. The GPU probe shows the following:

Platform info: 0 ---------------------
PROFILE: FULL_PROFILE
VERSION: OpenCL 2.1 AMD-APP (3188.4)
NAME: AMD Accelerated Parallel Processing
VENDOR: Advanced Micro Devices, Inc.
EXTENSIONS: cl_khr_icd cl_khr_d3d10_sharing cl_khr_d3d11_sharing cl_khr_dx9_media_sharing cl_amd_event_callback cl_amd_offline_devices
<and so on>


However, when running, say, opencl-dot-product-test.exe, I get many lines of the following:

Error: TDotProductCL.Compute - Failed setting parameters: -48

I am not familiar with OpenCL programming. Is -48 the OpenCL error CL_INVALID_KERNEL? More importantly, does the error mean the GPU wasn't used?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on February 06, 2021, 06:49:48 pm
Hi PierceNg,
Do you have any of the following messages?

Code: Pascal  [Select][+][-]
  1. clCreateContext OK!
  2. clCreateCommandQueue OK!
  3. clCreateProgramWithSource OK!
  4. clBuildProgram OK!
  5. clCreateKernel cai_dot_product OK!
  6. clCreateKernel cai_dot_product OK!

I'm wondering if the file cai_dot_product.cl is being loaded correctly.

I logged a bug report for this:
https://github.com/joaopauloschuler/neural-api/issues/41

Can you please confirm if you are using an APU or a discrete video card? In the case of a discrete video card, what is the model?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on February 07, 2021, 06:53:04 pm
@PierceNg,
I've just tested with a Ryzen APU and was able to reproduce the problem and fix it:
https://github.com/joaopauloschuler/neural-api/issues/41

 :) Wish everyone happy pascal coding  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: PierceNg on February 08, 2021, 11:03:33 am
@PierceNg,
I've just tested with a Ryzen APU and was able to reproduce the problem and fix it:
https://github.com/joaopauloschuler/neural-api/issues/41

 :) Wish everyone happy pascal coding  :)

Thank you!

After pulling from your repos, the dot product and visual CIFAR-10 OpenCL versions run, and I see the "OK!" messages in the command prompt window. My machine is using a Ryzen APU.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: mika on February 11, 2021, 06:46:54 pm
HypotenuseFitLoading.lpr
Code: Pascal  [Select][+][-]
  1.     Write('Press ENTER to exit.');
  2.     ReadLn;
  3.   end;
  4.  

should be

Code: Pascal  [Select][+][-]
  1.     Write('Press ENTER to exit.');
  2.     ReadLn;
  3.     Terminate;
  4.   end;
  5.  
Otherwise it keeps repeating the DoRun method over and over.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on February 14, 2021, 07:49:28 pm
@mika,
the bug has been fixed as per your suggestion (thank you!):
https://github.com/joaopauloschuler/neural-api/issues/42
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: PierceNg on March 09, 2021, 11:58:32 am
@schuler,

I am happy to report that I have OpenCL Dot Product example running also on my Macbook Pro 2012 model, which has Intel i7 CPU, integrated HD Graphics 4000, and GeForce GT 650M.

Running it directly by double-clicking the app in Finder appears to work (the total OpenCL runtime reports 0.02s), but not really. When I run the executable from the shell, the program reports:

File neural.cl could not be found.
Error: Failed to create compute kernel:cai_dot_product
<many many lines of error messages>


I copied neural.cl into the app bundle directory where the executable opencl-dot-product-test is, then ran the executable from the shell again:

clCreateContext OK!
clCreateCommandQueue OK!
clCreateProgramWithSource OK!
clBuildProgram OK!
clCreateKernel cai_dot_product OK!
clCreateKernel cai_dot_product OK!


Screenshot of successful run attached.

But double-clicking the app from Finder still didn't work even with neural.cl already copied in, as the OpenCL runtime is again 0.02s.

Oh, I also had to fix a FPC_FULLVERSION test in the file Lazarus/components/multithreadprocs/mtpcpu.pas in order to build the program.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on March 11, 2021, 09:07:01 am
@PierceNg,
Loved seeing the screenshot from your experiment! Thank you for sharing your test results.

:) May the (fo/sou)rce be with you :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on March 23, 2021, 01:42:50 pm
 :) Hello :)

I would like to comment on a data augmentation approach commonly used with CIFAR-10: the dataset is padded with 4 px, increasing the original size from 32x32 to 40x40. Then, at training time, a random crop is done, bringing the size back to 32x32 px. An example of this approach can be found at https://medium.com/fenwicks/tutorial-2-94-accuracy-on-cifar10-in-2-minutes-7b5aaecd9cdd.

Implementing this in Pascal is very simple:

Code: Pascal  [Select][+][-]
  1. const
  2.   // Padding and cropping constants.
  3.   csPadding = 4;
  4.   csCropSize = csPadding * 2;  
  5. ...
  6.     // Load the CIFAR-10 dataset
  7.     CreateCifar10Volumes(ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes);
  8.  
  9.     // Add padding to dataset resizing from 32x32 to 40x40
  10.     ImgTrainingVolumes.AddPadding(csPadding);
  11.     ImgValidationVolumes.AddPadding(csPadding);
  12.     ImgTestVolumes.AddPadding(csPadding);
  13.  
  14.     NeuralFit := TNeuralImageFit.Create;
  15.  
  16.     // Enable cropping while fitting.
  17.     NeuralFit.HasImgCrop := true;
  18.     NeuralFit.MaxCropSize := csCropSize;
  19.     NeuralFit.HasFlipX := true;

This simple trick actually seems to outperform CAI's default image data augmentation method. A fully functional source code example can be found at: https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierPaddingCropping.lpr

:) May the source be with you :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on April 01, 2021, 03:46:08 pm
:) Hello :)
I think that I have never commented on the Neural API threading in detail here. I got a couple of questions about this recently. I hope that I'm not typing too much.

As an example, assuming that you need to run a procedure 10 times in parallel, you can create 10 thread workers as follows:

Code: Pascal  [Select][+][-]
  1. FProcs := TNeuralThreadList.Create( 10 );

As an example, this is the procedure that we intend to run in parallel:

Code: Pascal  [Select][+][-]
  1. procedure MyClassName.RunNNThread(index, threadnum: integer);
  2. begin
  3.   WriteLn('This is thread ',index,' out of ',threadnum,' threads.');
  4. end;

Then, to run the RunNNThread procedure (passed as a parameter) 10 times in parallel, do this:

Code: Pascal  [Select][+][-]
  1. FProcs.StartProc(@RunNNThread);

You can control the blocking mode (waiting for threads to finish before the program continues) via this declaration:

Code: Pascal  [Select][+][-]
  1. procedure StartProc(pProc: TNeuralProc; pBlock: boolean = true);

Or, if you prefer, you can specifically say when to wait for threads to finish as per this example:

Code: Pascal  [Select][+][-]
  1. FProcs.StartProc(@RunNNThread, false);
  2. // insert your code here
  3. FProcs.WaitForProc(); // waits until all threads are finished.

When you are done, you should call:

Code: Pascal  [Select][+][-]
  1. FProcs.Free;

neuralthread is the unit that allows parallel processing when using neural fit: https://github.com/joaopauloschuler/neural-api/blob/master/neural/neuralfit.pas
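
Putting the pieces above together, a minimal console sketch could look like this (the program, class and procedure names are illustrative; the thread procedure simply prints its index):

Code: Pascal  [Select][+][-]
  1. program ThreadSketch;
  2. {$mode objfpc}{$H+}
  3. uses {$ifdef UNIX}cthreads,{$endif} neuralthread;
  4.  
  5. type
  6.   // Hypothetical class that only hosts the thread procedure.
  7.   TMyWorker = class
  8.     procedure RunNNThread(index, threadnum: integer);
  9.   end;
  10.  
  11. procedure TMyWorker.RunNNThread(index, threadnum: integer);
  12. begin
  13.   WriteLn('This is thread ', index, ' out of ', threadnum, ' threads.');
  14. end;
  15.  
  16. var
  17.   FProcs: TNeuralThreadList;
  18.   Worker: TMyWorker;
  19. begin
  20.   Worker := TMyWorker.Create;
  21.   FProcs := TNeuralThreadList.Create(10); // 10 thread workers
  22.   FProcs.StartProc(@Worker.RunNNThread);  // blocking call by default
  23.   FProcs.Free;
  24.   Worker.Free;
  25. end.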

Wish long and prosper life to FPC and Lazarus!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: iLya2IK on July 02, 2021, 10:34:07 am
Thank you for a wonderful project.

I would like to start classifying text phrases according to certain patterns. It is about recognizing and classifying car brands and models. The number of combinations is about 2000. But all my attempts run into a fundamental misunderstanding of which class to choose for modeling; right now I use TNNetForByteProcessing. Here is the code I am using for modeling:
Code: Pascal  [Select][+][-]
  1. const MAX_IN_SIZE = 25;
  2.       HIGH_IN_VAL = MAX_IN_SIZE-1;
  3.       MAX_OUT_SIZE = 256;
  4.       HIGH_OUT_VAL = MAX_OUT_SIZE-1;
  5.  
  6. type TInpArray = Array [0..HIGH_IN_VAL] of Byte;
  7.      TOutArray = Array [0..HIGH_OUT_VAL] of Byte;
  8.  
  9. var
  10.   DS : TSqlite3Dataset;
  11.   MMList : TStringList;
  12.  
  13. procedure SetCorrectOut(var aOut : TOutArray; value : Word);
  14. var Pos : Word;
  15.     Shift : Byte;
  16. begin
  17.   Pos := value div 8;
  18.   Shift := value mod 8;
  19.   aOut[Pos] := $1 shl Shift;
  20. end;
  21.  
  22. function GetCorrectOut(const aOut : TOutArray) : Word;
  23. var Pos : Word;
  24.     Shift : Byte;
  25. begin
  26.   for Pos := 0 to HIGH_OUT_VAL do
  27.   begin
  28.     if (aOut[Pos] > 0) then
  29.     begin
  30.       case aOut[Pos] of
  31.         2   : Shift := 1;
  32.         4   : Shift := 2;
  33.         8   : Shift := 3;
  34.         16  : Shift := 4;
  35.         32  : Shift := 5;
  36.         64  : Shift := 6;
  37.         128 : Shift := 7;
  38.       else
  39.         Shift := 0;
  40.       end;
  41.       Exit(Pos * 8 + Shift);
  42.     end;
  43.   end;
  44.   Result := 0;
  45. end;
  46.  
  47. function GetCorrectOutList(const aOut : TOutArray) : String;
  48. var Pos : Word;
  49.     Cnt : Byte;
  50.     S : String;
  51.  
  52. procedure AddOption(W : Word);
  53. begin
  54.   if length(S) > 0 then S := S + ' ';
  55.   S := S + MMList.Values[inttostr(W)];
  56.   Inc(Cnt);
  57. end;
  58.  
  59. begin
  60.   S := '';
  61.   Cnt := 0;
  62.   for Pos := 0 to HIGH_OUT_VAL do
  63.   begin
  64.     if (aOut[Pos] > 0) then
  65.     begin
  66.       if aOut[Pos] and $1 > 0 then AddOption(Pos shl 3);
  67.       if aOut[Pos] and $2 > 0 then AddOption(Pos shl 3 or $1);
  68.       if aOut[Pos] and $4 > 0 then AddOption(Pos shl 3 or $2);
  69.       if aOut[Pos] and $8 > 0 then AddOption(Pos shl 3 or $3);
  70.       if aOut[Pos] and $10 > 0 then AddOption(Pos shl 3 or $4);
  71.       if aOut[Pos] and $20 > 0 then AddOption(Pos shl 3 or $5);
  72.       if aOut[Pos] and $40 > 0 then AddOption(Pos shl 3 or $6);
  73.       if aOut[Pos] and $80 > 0 then AddOption(Pos shl 3 or $7);
  74.     end;
  75.     if Cnt > 15 then break;
  76.   end;
  77.   Result := S;
  78. end;
  79.  
  80. procedure RunAlgo();
  81. const MAX_EPOCH_CNT = 1000;
  82.       MAX_EPOCH_SIZE = 512;
  83.       MAX_ERROR_CNT  = 1;
  84. var
  85.   NN: TNNetForByteProcessing;
  86.   BAInput : TInpArray;
  87.   BAExpected, BAOutput :TOutArray;
  88.   L, epoch : integer;
  89.   Txt1, Res, Outv : String;
  90.   MM, MM1 : Word;
  91.   ErrorCnt, EpochSize : Integer;
  92.   NeedNextEpoch : Boolean;
  93.   Key : char;
  94. begin
  95.   NN := TNNetForByteProcessing.Create();
  96.  
  97.   try
  98.     NN.AddBasicByteProcessingLayers(MAX_IN_SIZE, MAX_OUT_SIZE, 8, 32);
  99.     NN.SetLearningRate(0.1, 0.0);
  100.  
  101.     WriteLn('Computing...');
  102.     epoch := 1;
  103.     NeedNextEpoch := true;
  104.  
  105.     ErrorCnt := MAX_ERROR_CNT + 1;
  106.     EpochSize := MAX_EPOCH_SIZE;
  107.     while NeedNextEpoch and (epoch <= MAX_EPOCH_CNT) and (ErrorCnt > MAX_ERROR_CNT) do
  108.     begin
  109.       WriteLn('Epoch #', epoch);
  110.       inc(epoch);
  111.       DS.SQL := 'select mrkorig, uniq_ats_schem.id as mmid, '+
  112.       'ats_schem.mrk||" "||ats_schem.model as expc '+
  113.       'from ats_schem, uniq_ats_marks, uniq_ats_schem '+
  114.       'where length(mrkorig) < 25 and uniq_ats_marks.mrk = ats_schem.mrk and '+
  115.       '      uniq_ats_schem.mrk = ats_schem.mrk and '+
  116.       '      uniq_ats_schem.model = ats_schem.model order by random() limit ' + inttostr(EpochSize);
  117.       DS.Open;
  118.       try
  119.         ErrorCnt := 0;
  120.         // tests the learning
  121.         while not DS.EOF do
  122.         begin
  123.           if Keypressed then // if user provided input
  124.           begin
  125.             Key := readkey;
  126.             if Key = #27 then begin
  127.               NeedNextEpoch := false;
  128.               Break;
  129.             end;
  130.           end;
  131.  
  132.           FillChar(BAInput,    MAX_IN_SIZE, #0);
  133.           FillChar(BAExpected, MAX_OUT_SIZE, #0);
  134.           FillChar(BAOutput,   MAX_OUT_SIZE, #0);
  135.           Txt1 := UTF8ToConsole(UTF8LowerCase(DS.FieldByName('mrkorig').AsString));
  136.           Res := UTF8ToConsole(DS.FieldByName('expc').AsString);
  137.           MM := DS.FieldByName('mmid').AsInteger;
  138.  
  139.           L := Length(Txt1);
  140.           if L > HIGH_IN_VAL then
  141.           begin
  142.             L := HIGH_IN_VAL;
  143.           end;
  144.           Move((@(Txt1[1]))^, (@(BAInput[0]))^, L);
  145.           SetCorrectOut(BAExpected, MM);
  146.           NN.Compute(BAInput);
  147.           NN.GetOutput(BAOutPut);
  148.           NN.Backpropagate(BAExpected);
  149.  
  150.           MM1 := GetCorrectOut(BAOutput);
  151.           if MM <> MM1 then Inc(ErrorCnt);
  152.           Outv := GetCorrectOutList(BAOutput);
  153.           WriteLn
  154.           ( 'In:',
  155.             ConsoleToUTF8(PChar(@(BAInput[0]))),
  156.             ' Out:',
  157.             Outv,
  158.             ' Expect:',
  159.             Res
  160.           );
  161.           DS.Next;
  162.         end;
  163.       finally
  164.         DS.Close;
  165.       end;
  166.     end;
  167.   finally
  168.     NN.Free;
  169.     Write('Press ENTER to exit.');
  170.     ReadLn;
  171.   end;  
  172. end;

I have a feeling that I have chosen the wrong approach. Could you suggest how best to approach this problem? If it isn't too difficult, please give an example of solving similar text analysis problems.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 06, 2021, 11:24:18 am
Hi iLya2IK!
Thank you for sharing your source code example with bleeding edge new features from the API.

If I understood your code correctly, you are placing characters into bytes. Although doable, this is an unusual approach, but I must say I'm curious to see what the NN will find...

Have you tried much lower learning rates such as 0.001? I would give these a try:
Code: Pascal  [Select][+][-]
  1.     NN.SetLearningRate(0.001, 0.9);
  2.     NN.SetLearningRate(0.0001, 0.9);

Usually, when you see no convergence at all, you can try lower learning rates.

Are you able to share your output?
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: iLya2IK on July 06, 2021, 02:46:47 pm
Are you able to share your output?
Thanks for the answer!

I'm afraid there is nothing to show  :(. Whatever parameters I try, the approximation degrades to an empty output (after a large number of training rounds the neural network produces only zero bits).
I think I have found a possible alternative for building a neural network. If something works out, I will certainly post the results and the source code.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 26, 2021, 07:16:09 am
 :) Hello Pascal Lovers!  :)

Besides wishing a long and prosperous life to Lazarus/FPC, after 6 months without showing up here I would like to comment on some enhancements recently made to the API.

As usual, I'll work around the CIFAR-10 (https://www.cs.toronto.edu/~kriz/cifar.html) image classification task. First, I would like to comment on things that you can now do to increase the learning capability of your neural network without adding more neurons/connections to it.

Just by updating to the latest library on GitHub, CreateCifar10Volumes and CreateCifar100Volumes now load 2000 images for validation instead of 10000. This small change pushes the classification accuracy up a bit. Here is an example:

Code: Pascal  [Select][+][-]
  1. CreateCifar10Volumes(ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes, csEncodeRGB, {ValidationSampleSize=}2000);

We have new activation functions, including TNNetReLU6, TNNetSwish6 and TNNetReLUL.

Regarding TNNetReLU6, TNNetSwish6 and TNNetReLUL, limiting the outputs can both prevent overflows and make learning faster. Here is an example:

Code: Pascal  [Select][+][-]
  1. TNNetReLUL.Create(-3, 3, 0)

You could define a ReLU6 equivalent activation function with:

Code: Pascal  [Select][+][-]
  1. TNNetReLUL.Create(0, 6, 0)

I coded an example with Swish at https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimpleImageClassifier/SimpleImageClassifierPaddingCroppingSwish.lpr. From experience, Swish doesn't work well in the first layer or two. On deeper layers, it works very well. In my experiments, it gives a classification accuracy improvement of 1% to 2%.
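
Just to illustrate that advice, a network can keep ReLU in the first convolution and only switch to Swish deeper in the model. I'm sketching this from memory, so please double-check the layer names (TNNetConvolutionLinear, TNNetSwish) and constructors against the linked example:

Code: Pascal  [Select][+][-]
  NN := TNNet.Create();
  NN.AddLayer([
    TNNetInput.Create(32, 32, 3),
    // Keep ReLU in the first convolution (Swish tends to underperform here).
    TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}5, {Padding=}2, {Stride=}1),
    TNNetMaxPool.Create(2),
    // Deeper layers: linear convolution followed by a Swish activation layer.
    TNNetConvolutionLinear.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1),
    TNNetSwish.Create(),
    TNNetMaxPool.Create(2),
    TNNetFullConnectReLU.Create(64),
    TNNetFullConnectLinear.Create(NumClasses),
    TNNetSoftMax.Create()
  ]);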

In my opinion, one of the most interesting neural network architectures is ResNet: https://arxiv.org/abs/1512.03385. The ResNet-20 neural network model has been ported to FPC: https://github.com/joaopauloschuler/neural-api/blob/master/examples/ResNet/ResNet20.lpr. The ResNet-20 model is particularly interesting because it's a very floating-point- and parameter-efficient image classification architecture. Another version of this model was coded taking advantage of the CAI-specific activation function TNNetReLUL: https://github.com/joaopauloschuler/neural-api/blob/master/examples/ResNet/CaiResNet20.lpr.
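
For readers curious about what makes ResNet special: each building block adds the block's input back onto its output (an "identity shortcut"), so the block only has to learn a residual. From memory, one residual block looks roughly like the sketch below; the TNNetSum, TNNetConvolutionLinear and TNNetReLU layer names are assumptions on my part, so please refer to ResNet20.lpr for the exact layers and parameters:

Code: Pascal  [Select][+][-]
  // Rough sketch of one residual block; see ResNet20.lpr for the real code.
  BlockInput := NN.AddLayer(TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}3, {Padding=}1, {Stride=}1));
  // Residual path: two 3x3 convolutions.
  NN.AddLayer(TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}3, {Padding=}1, {Stride=}1));
  Residual := NN.AddLayer(TNNetConvolutionLinear.Create({Features=}16, {FeatureSize=}3, {Padding=}1, {Stride=}1));
  // Identity shortcut: add the block input to the residual path output (similarly to
  // how TNNetDeepConcat takes the layers to be combined).
  NN.AddLayer(TNNetSum.Create([BlockInput, Residual]));
  NN.AddLayer(TNNetReLU.Create());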

 :) Long and prosperous life to Lazarus and FPC!  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 29, 2022, 08:39:33 pm
 :) Hello :)
I found today a scientific paper incorrectly referencing this API (it's super cool to find a scientific paper referencing a Free Pascal based work - this is possible thanks to the many people who have worked on FPC and Lazarus). In case anyone needs to reference it, here it is in BibTeX format:
Code: Pascal  [Select][+][-]
  1. @software{cai_neural_api_2021_5810077,
  2.   author       = {Joao Paulo Schwarz Schuler},
  3.   title        = {CAI NEURAL API},
  4.   month        = dec,
  5.   year         = 2021,
  6.   publisher    = {Zenodo},
  7.   version      = {v1.0.6},
  8.   doi          = {10.5281/zenodo.5810077},
  9.   url          = {https://doi.org/10.5281/zenodo.5810077}
  10. }

BTW, a recommendation via ResearchGate would be greatly appreciated: https://www.researchgate.net/publication/357717402_CAI_NEURAL_API .

:) Happy Pascal Coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: kagamma on February 17, 2022, 03:59:31 am
It's nice to see Pascal has a very good native NN library. Too bad it doesn't support RNN/LSTM/GRU so I can't move some of my NLP stuff from PyTorch to it :(
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: domasz on March 15, 2022, 10:05:49 am
First of all- amazing work!

I don't understand how to use all those neural nets but I'd love to see some nets as sort of ready-to-use functions for morons like me.

Perhaps this is something you could consider?

What would be great:
1) colorizing black & white photos:
https://github.com/mlhubber/colorize

2) upscale:
https://github.com/Araxeus/PNG-Upscale
https://github.com/IBM/MAX-Image-Resolution-Enhancer

3) removing background:
https://github.com/danielgatis/rembg
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on April 05, 2022, 02:20:47 pm
@domasz,
regarding upscaling, there is a ready-to-use example:
https://github.com/joaopauloschuler/neural-api/tree/master/examples/SuperResolution

You can find some of the testing at:
https://github.com/joaopauloschuler/neural-api/issues/26

Thank you for the other ideas.

:) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on September 18, 2022, 07:42:17 pm
Hello Joao,

I just wanted to tell you that this is a fresh approach in a neural network field that is a little too limited to Python and OpenCV (apart from Darknet).

Pascal/Lazarus is a great development tool that has the advantage of being independent of obscure external libraries and of working on the 3 (4 with Android) main platforms: Linux, Mac and Windows.

I hope you will continue to develop CAI.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 22, 2022, 06:48:19 pm
:) Hello :)

I have a selfish post to make: yesterday, I finished my PhD course in AI. YAY!

My PhD thesis can certainly give clues about where this API is heading: super-efficient neural networks.

:) I wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: nummer8 on November 22, 2022, 07:42:15 pm
Congrats,

Finishing a PhD is a hell of a job.
Enjoy!!
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on November 22, 2022, 10:07:58 pm
:) Hello :)

I have a selfish post to make: yesterday, I finished my PhD course in AI. YAY!
Many, many congratulations, Dr Schuler.
By coincidence I referred to your work at this forum today...

I will owe you some beers and a good meal when you visit the Netherlands.
Yay,
Thaddy
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 22, 2022, 11:05:59 pm
Dear Thaddy,
You supported my work in its very early days. I'm very thankful for the positive thoughts you've provided me.

@nummer8, it's very hard work indeed.

 :D Dr. Schuler :D
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: metis on November 24, 2022, 12:02:09 pm
@Dr. Schuler

Congratulations !
Go for academics' beer & food and sun in Tenerife.  ;)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Thaddy on November 24, 2022, 03:20:36 pm
Well,  with a last name like that, you obviously felt the pressure to become a scholar  O:-)
( for others: https://de.wikipedia.org/wiki/Scholar )
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on December 09, 2022, 11:45:22 am
Dr Schuler  :D,


Congratulation!!!

Great job.

B->
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 10, 2022, 04:26:26 am
@metis and @Dzandaa,
Thank you!

@Thaddy,
Quote
Well, with a last name like that, you obviously felt the pressure to become a scholar.
You certainly have a point.

Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 10, 2022, 04:46:52 am
... the coding continues ...

The paper Searching for Activation Functions (https://arxiv.org/abs/1710.05941) describes the search for a better activation function than ReLU. In their work, the authors found Swish to be the best replacement for ReLU. The downside of Swish is that it requires a lot of computation. Later, the paper Searching for MobileNetV3 (https://arxiv.org/pdf/1905.02244v5.pdf) introduced the Hard Swish activation function. Hard Swish gives results similar to Swish with a lot less computation.
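
For reference, Swish is x * sigmoid(x), while Hard Swish replaces the sigmoid with a cheap piecewise-linear gate: x * ReLU6(x + 3) / 6. As a stand-alone illustration of the formula (not the API's internal implementation):

Code: Pascal  [Select][+][-]
  function HardSwish(x: Single): Single;
  var
    Gate: Single;
  begin
    // ReLU6(x + 3): clamp x + 3 into the [0, 6] range.
    Gate := x + 3;
    if Gate < 0 then Gate := 0
    else if Gate > 6 then Gate := 6;
    // Hard Swish: x * ReLU6(x + 3) / 6, a cheap approximation of x * sigmoid(x).
    Result := x * Gate / 6;
  end;

In practice you would just use the API's own layer; the function above is only meant to show why Hard Swish is cheaper than Swish.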

I've just added the Hard Swish implementation to the source code and done new runs comparing ReLU, Swish and Hard Swish. Source codes and results follow.

Source codes are:

Results are (CIFAR-10 test classification accuracy):

Looks pretty good to me!

:) Wish everyone happy pascal coding. :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on December 10, 2022, 04:10:42 pm
Hi,
thank you for the update.

Have you tested the "mish" activation?
https://ai-scholar.tech/en/articles/treatise/mish-ai-374 (https://ai-scholar.tech/en/articles/treatise/mish-ai-374)

It seems easy to implement.

B->
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 11, 2022, 04:14:55 am
@Dzandaa,
It looks good.

I've just added a feature request to this: https://github.com/joaopauloschuler/neural-api/issues/106 .
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 21, 2022, 06:14:42 pm
DrDub from GitHub coded a simple web server for a neural network that receives an image and classifies it. I did plenty of updates on top of his code. The code can be found at:

https://github.com/joaopauloschuler/neural-api/tree/master/examples/ResNet/server
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on January 09, 2023, 05:34:18 pm
 :) Hello :)
Just finished updating the autoencoder example: https://github.com/joaopauloschuler/neural-api/tree/master/examples/VisualAutoencoder .

This example uses the freshly tested (but coded long ago) TNNetUpsample layer for increasing the resolution on the decoder side. TNNetUpsample converts channels (depth) into spatial data. For example, a 128x128x256 activation map will be converted to 256x256x64. This layer type does a similar job to TensorFlow's tf.nn.depth_to_space.
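
In other words, each group of 4 channels is rearranged into a 2x2 spatial block, so width and height double while the depth is divided by 4. A rough usage sketch on the decoder side follows; I'm quoting the constructor from memory, so please check the autoencoder example for the exact call:

Code: Pascal  [Select][+][-]
  // Decoder step: double the spatial resolution by moving depth into space.
  // If the previous layer outputs 128x128x256, this layer outputs 256x256x64.
  NN.AddLayer(TNNetUpsample.Create());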

 :) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on February 12, 2023, 12:26:59 pm
Your sources are available on 2 platforms.
Which one is the most recent?

Thank you.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 25, 2023, 08:15:26 am
:) Hello :)

Sorry for being away for so long.

Given that everyone is talking about ChatGPT and language models in general, I'll join the trend.

I've been doing experiments with ChatGPT 3.5, ChatGPT 4 and Hugging Face. For some questions, they all fail terribly. For other questions, one or another gives a good reply.

Anyway, here follows a very interesting experiment about Free Pascal.

Schuler:

Hello. I hope that you are doing well. I'm trying to understand the following free pascal source code. Do you know what it does please?

Code: Pascal  [Select][+][-]
  1. NN := TNNet.Create();
  2. NN.AddLayer([
  3.   TNNetInput.Create(32, 32, 3), //32x32x3 Input Image
  4.   TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  5.   TNNetMaxPool.Create({Size=}2),
  6.   TNNetConvolutionReLU.Create({Features=}32, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  7.   TNNetMaxPool.Create({Size=}2),
  8.   TNNetConvolutionReLU.Create({Features=}32, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  9.   TNNetFullConnectReLU.Create({Neurons=}32),
  10.   TNNetFullConnectLinear.Create(NumClasses),
  11.   TNNetSoftMax.Create()
  12. ]);

GPT-3.5 :

The given source code is creating a neural network model using the Free Pascal programming language. Here's a breakdown of what the code does:

NN := TNNet.Create(); - This line creates an instance of a neural network model called NN using the TNNet class.

NN.AddLayer([...]); - This line adds layers to the neural network model. Each layer is specified as an array of objects representing different types of layers in the network.

a. TNNetInput.Create(32, 32, 3) - This layer represents the input layer of the network and expects 32x32x3 input images (32 pixels width, 32 pixels height, and 3 color channels for RGB).

b. TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0) - This layer is a convolutional layer with Rectified Linear Unit (ReLU) activation. It performs 2D convolutions on the input images with 16 features (output channels), a feature size of 5x5, no padding, a stride of 1, and bias not suppressed.

c. TNNetMaxPool.Create({Size=}2) - This layer is a max pooling layer that downsamples the input by selecting the maximum value within a 2x2 window.

d. The above two layers (convolution + max pooling) are repeated twice with different feature sizes (32 features) but with the same configuration.

e. TNNetFullConnectReLU.Create({Neurons=}32) - This layer is a fully connected layer with ReLU activation. It connects all the neurons from the previous layer to the current layer with 32 neurons.

f. TNNetFullConnectLinear.Create(NumClasses) - This layer is a fully connected layer with linear activation. It connects the previous layer to the output layer, where NumClasses represents the number of classes in the classification task.

g. TNNetSoftMax.Create() - This layer is the output layer that applies the softmax function to produce probability scores for each class.

By adding these layers to the neural network model, the code creates a convolutional neural network (CNN) architecture commonly used for image classification tasks. The specific architecture details (number of layers, feature sizes, etc.) may vary depending on the specific problem domain and requirements.

Schuler :
Regarding the provided source code above, can you rewrite it adding more and better source code comments please?

GPT-3.5 :
Certainly! Here's the rewritten source code with more comprehensive comments:

Code: Pascal  [Select][+][-]
  1. NN := TNNet.Create(); // Create a new neural network model
  2.  
  3. NN.AddLayer([
  4.   TNNetInput.Create(32, 32, 3), // Input layer: Expects 32x32x3 input images (32 pixels width, 32 pixels height, and 3 color channels for RGB)
  5.  
  6.   TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  7.   // Convolutional layer with ReLU activation:
  8.   // - Features: 16 output channels
  9.   // - FeatureSize: 5x5 filter size
  10.   // - Padding: 0 (no additional padding)
  11.   // - Stride: 1 (movement of the filter by 1 pixel at a time)
  12.   // - SuppressBias: 0 (include bias parameters)
  13.  
  14.   TNNetMaxPool.Create({Size=}2), // Max pooling layer: Downsampling the input by selecting the maximum value within a 2x2 window
  15.  
  16.   TNNetConvolutionReLU.Create({Features=}32, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  17.   // Another Convolutional layer with ReLU activation:
  18.   // - Features: 32 output channels
  19.   // - FeatureSize: 5x5 filter size
  20.   // - Padding: 0
  21.   // - Stride: 1
  22.   // - SuppressBias: 0
  23.  
  24.   TNNetMaxPool.Create({Size=}2), // Another Max pooling layer
  25.  
  26.   TNNetConvolutionReLU.Create({Features=}32, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  27.   // Another Convolutional layer with ReLU activation:
  28.   // - Features: 32 output channels
  29.   // - FeatureSize: 5x5 filter size
  30.   // - Padding: 0
  31.   // - Stride: 1
  32.   // - SuppressBias: 0
  33.  
  34.   TNNetFullConnectReLU.Create({Neurons=}32), // Fully connected layer with ReLU activation: Connects all neurons from the previous layer to the current layer with 32 neurons
  35.  
  36.   TNNetFullConnectLinear.Create(NumClasses), // Fully connected layer with linear activation: Connects the previous layer to the output layer with 'NumClasses' number of neurons/classes
  37.  
  38.   TNNetSoftMax.Create() // Softmax layer: Produces probability scores for each class
  39. ]);
  40.  

--
Additional comments from Schuler:
GPT-3.5 was able to go vastly beyond the original source code comments. The only mistake I can see is "repeated twice"; I would say "repeated" or "repeated once".

What is now clear to me: GPT-3.5 can help explain existing Free Pascal source code and can help add documentation.

Another topic: I'll be at https://www.pascalcongress.com/ ...

--

I then asked GPT-4 the very first question and this is what I got:

GPT-4 :
This Pascal code is defining a Convolutional Neural Network (CNN) using the TNNet library. CNNs are a class of deep neural networks, most commonly used to analyze visual images.

Let's break down the code:

TNNet.Create(): This line is creating a new instance of a neural network object.

NN.AddLayer(): This method is adding layers to the neural network. In this case, the network contains the following layers:

TNNetInput.Create(32, 32, 3): This is the input layer of the neural network. It's expecting input images of size 32x32 with 3 channels (probably RGB color images).

TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0): This is a convolutional layer with a Rectified Linear Unit (ReLU) activation function. It uses 16 filters, each with a size of 5x5. There's no padding and the stride is 1. SuppressBias is set to 0, which means bias is not suppressed (bias is a constant value added to the weighted inputs and the bias before passing through the activation function).

TNNetMaxPool.Create({Size=}2): This is a max pooling layer, which reduces the spatial size of the representation to reduce the amount of parameters and computation in the network. This layer is using a 2x2 pooling window.

Another convolutional ReLU layer and max pooling layer are added, this time with 32 filters.

TNNetFullConnectReLU.Create({Neurons=}32): This is a fully connected layer with a ReLU activation function. The layer has 32 neurons.

TNNetFullConnectLinear.Create(NumClasses): This is a fully connected linear layer. The number of neurons equals to the number of classes in the problem at hand, which is defined elsewhere in the code as NumClasses.

TNNetSoftMax.Create(): The final layer is a SoftMax layer, which outputs a vector that represents the probability distributions of a list of potential outcomes. It's most commonly used in applied machine learning, where it can be used for multi-class classification.

So, the code is building a Convolutional Neural Network with specific architecture to be trained and used for image classification tasks. The specific task would be defined by the training data and the NumClasses variable.

--

More comments from Schuler: GPT-4 hasn't made a single error explaining this code. Users could also ask things like "please clarify this line or that line further", "please teach me the concept of ..." or "please improve this portion of code" (which would not make sense, as my source code is perfect...  O:-)  ).

--
HuggingChat OpenAssistant/oasst-sft-6-llama-30b:

This Free Pascal source code appears to define an object called NN, which is an instance of class TNNet from some package. From looking at this code snippet alone, we can deduce that this code creates layers in NN using methods from different classes defined within the TNNet module (or other dependent packages). These classes include TNNetInput for input data layer, TNNetConvolutionReLU/TNNetMaxPool sequence for convolutional layers, TNNetConvolutionReLU/TNNetMaxPool again and then FullConnect layers before finishing off with a softmax output activation function (SMO) layer, ultimately producing outputs for multiple different classes NumClasses. But without knowing more details about how these components work together it’s impossible to tell precisely what these weights compute specifically.
 
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on May 25, 2023, 04:19:38 pm
Hi
A little scary, isn't it?
Especially since ChatGPT was trained in part with data from GitHub without asking for approval.  :D

One question:
Do you have an idea for training a network to detect atoms from spectra?

I thought about an auto-encoder, but I'm stuck for the rest.

Have a nice day

B->
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on May 26, 2023, 05:48:22 pm
@dzandaa,
regarding "scary", I would look at the following from gpt 3.5:

Code: Pascal  [Select][+][-]
  1.   TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  2.   // Convolutional layer with ReLU activation:
  3.   // - Features: 16 output channels
  4.   // - FeatureSize: 5x5 filter size
  5.   // - Padding: 0 (no additional padding)
  6.   // - Stride: 1 (movement of the filter by 1 pixel at a time)
  7.   // - SuppressBias: 0 (include bias parameters)

The comments above are correct. It (or he - I don't know) guessed by itself "16 output channels", "no additional padding", "movement of the filter by 1 pixel at a time" and "include bias parameters". All these guesses are correct. Most impressive of all, "SuppressBias" is very particular to this API...

Still regarding "scary", I hope people will not try to use these technologies for automating decision making. The chances for a disaster are total. I also fear that people might end blindly believing on whatever these technologies say. It's absolutely important for users to keep critical thinking when using social media, search engines and language models. Language models could be used to provide information that fits a specific narrative and users would think that they arrive to conclusions by themselves when they were actually manipulated into certain beliefs.

Anyway, it's a fantastic technology.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 30, 2023, 02:18:42 am
 :) Hello :)

I've just published the source code and trained neural networks for the Malaria, Colorectal Cancer and Plant Village image classification datasets. The test classification accuracies are 95.63%, 94.26% and 99.03% respectively. I showed these models (neural networks) for the first time at the International Pascal Congress 2023. These models are now public.

You can find details (source code, dataset, trained model for download) at:
https://github.com/joaopauloschuler/pre-trained-neural-api-networks

:) Wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on July 30, 2023, 03:44:04 pm
Hi,

Thank you very much,
I'll test it tomorrow.

Happy Coding!!!

B->
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 30, 2023, 11:15:59 pm
I'll test it tomorrow.

I'll add code examples for loading and testing trained models. I'll let you know when ready.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on July 31, 2023, 05:47:34 pm
I'll test it tomorrow.

I've just added examples to make the testing easier. For testing only one image, you can follow this example:

Code: Pascal  [Select][+][-]
  1.   procedure DoRun;
  2.   var
  3.     NN: TNNet;
  4.     ImageFileName: string;
  5.     NeuralFit: TNeuralImageFit;
  6.     vInputImage, vOutput: TNNetVolume;
  7.     InputSizeX, InputSizeY, NumberOfClasses: integer;
  8.   begin
  9.     WriteLn('Loading Neural Network...');
  10.     NN := TNNet.Create;
  11.     NN.LoadFromFile('SimplePlantLeafDisease-20230720.nn');
  12.     NN.DebugStructure();
  13.     InputSizeX := NN.Layers[0].Output.SizeX;
  14.     InputSizeY := NN.Layers[0].Output.SizeY;
  15.     NumberOfClasses := NN.GetLastLayer().Output.Size;
  16.  
  17.     NeuralFit := TNeuralImageFit.Create;
  18.     vInputImage := TNNetVolume.Create();
  19.     vOutput := TNNetVolume.Create(NumberOfClasses);
  20.  
  21.     ImageFileName := 'plant/Apple___Black_rot/image (1).JPG';
  22.     WriteLn('Loading image: ',ImageFileName);
  23.  
  24.     if LoadImageFromFileIntoVolume(
  25.       ImageFileName, vInputImage, InputSizeX, InputSizeY,
  26.       {EncodeNeuronalInput=}csEncodeRGB) then
  27.     begin
  28.       WriteLn('Classifying the image:', ImageFileName);
  29.       vOutput.Fill(0);
  30.       NeuralFit.ClassifyImage(NN, vInputImage, vOutput);
  31.       WriteLn('The image belongs to the class of images: ', vOutput.GetClass());
  32.     end
  33.     else
  34.     begin
  35.       WriteLn('Failed loading image: ',ImageFileName);
  36.     end;
  37.  
  38.     vInputImage.Free;
  39.     vOutput.Free;
  40.     NeuralFit.Free;
  41.  
  42.     WriteLn('Press ENTER to quit.');
  43.     ReadLn();
  44.  
  45.     NN.Free;
  46.   end;
  47.  

The above source code is located at:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimplePlantLeafDisease/TestPlantLeafDiseaseTrainedModelOneImage.pas

If you would like to test against the actual training dataset, you can follow this example:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimplePlantLeafDisease/TestPlantLeafDiseaseTrainedModel.pas
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on August 01, 2023, 12:13:55 pm
Hi,

I have a problem with TrainingAccuracy: my program runs well and the results are good, but TrainingAccuracy is always zero.

I've sent you a personal message with the code.

B->
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on August 02, 2023, 06:10:17 pm
Hi Schuler,

I tested "plant leaf diseases" and "Colorectal" networks and  it's working :)


B->
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 27, 2023, 12:10:56 am
@Dzandaa,
What a beautiful screenshot! Wondering if I can find your code in an open repository...
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 27, 2023, 12:48:23 am
:) Hello pascal lovers :) ,
I did an experiment a few days ago that I would like to share. The experiment is about predicting the next character in a string.

To make things very simple, the dataset has 3 strings:
Code: Pascal  [Select][+][-]
  1.     FDataset[0] := 'happy good morning.'+chr(1);
  2.     FDataset[1] := 'fantastic good evening.'+chr(1);
  3.     FDataset[2] := 'superb good night.'+chr(1);

The neural network is built with:

Code: Pascal  [Select][+][-]
  1.     FNN.AddLayer([
  2.       TNNetInput.Create(csContextLen, 1, csVocabSize),
  3.       TNNetPointwiseConvReLU.Create(32),
  4.       TNNetFullConnectReLU.Create(32),
  5.       TNNetFullConnectReLU.Create(32),
  6.       TNNetFullConnectLinear.Create(csVocabSize),
  7.       TNNetSoftMax.Create()
  8.     ]);

The constants above are defined with:
Code: Pascal  [Select][+][-]
  1. const
  2.   csContextLen = 64;  // The input string can have up to 64 characters.
  3.   csVocabSize  = 128; // Character based vocabulary/dictionary.
  4.   csMinSampleSize = 3; // Minimum of 3 characters.
  5.  
After training the NN, I tested with:

Code: Pascal  [Select][+][-]
  1.     WriteLn('Testing.');
  2.     WriteLn(GenerateStringFromChars(FNN, 'happy'));
  3.     WriteLn(GenerateStringFromChars(FNN, 'fantastic'));
  4.     WriteLn(GenerateStringFromChars(FNN, 'superb'));
  5.  
The NN then gets it 100% right with the following output:
Quote
happy good morning.
fantastic good evening.
superb good night.

How does it work?
Each character is encoded as a number from 0 to 127. This number is then transformed into a 128-element vector where only one element is set to 1. This is called “one-hot encoding” (https://en.wikipedia.org/wiki/One-hot), and it is why the input is defined with (csContextLen, 1, csVocabSize). Then, to reduce the dimensionality of these 128-dimensional vectors, a pointwise convolution is used (TNNetPointwiseConvReLU). The input characters are encoded in reverse order, so “good” becomes “doog”. The last layer of the NN has 128 elements, so the element with the highest value (or probability) is the next predicted character.

The one-hot encoding is done with one API call:
Code: Pascal  [Select][+][-]
  1. procedure TVolume.OneHotEncodingReversed(aTokens: string); overload;

If your input is not a string, you could call the following instead:

Code: Pascal  [Select][+][-]
  1. procedure TVolume.OneHotEncoding(aTokens: array of integer); overload;

So, each predicted character is appended to the input string and the process is repeated until a termination token (chr(0) or chr(1)) is found. This is how calling GenerateStringFromChars(FNN, 'superb') outputs ‘superb good night’.
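
Roughly, that loop looks like the sketch below. This is not the actual GenerateStringFromChars implementation, just an outline of the idea using a greedy choice of the most probable character via GetClass():

Code: Pascal  [Select][+][-]
  function SketchGenerateFromChars(NN: TNNet; const Prompt: string): string;
  var
    vInput, vOutput: TNNetVolume;
    NextChar: integer;
  begin
    Result := Prompt;
    vInput  := TNNetVolume.Create(csContextLen, 1, csVocabSize);
    vOutput := TNNetVolume.Create(csVocabSize);
    repeat
      vInput.Fill(0);
      vInput.OneHotEncodingReversed(Result); // one-hot encodes the current string in reverse order
      NN.Compute(vInput);
      NN.GetOutput(vOutput);
      NextChar := vOutput.GetClass();        // position of the highest output = predicted character
      if NextChar > 1 then Result := Result + Chr(NextChar);
    until (NextChar <= 1) or (Length(Result) >= csContextLen);
    vInput.Free;
    vOutput.Free;
  end;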

The source code can be found at:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/StringManipulation/StringManipulation.lpr

:) I wish everyone happy pascal coding  :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on November 27, 2023, 03:50:13 pm
Hi,

Another update :)
Thank you.

Do you know this ?

https://analyticsindiamag.com/beginners-guide-to-text-generation-with-rnns/ (https://analyticsindiamag.com/beginners-guide-to-text-generation-with-rnns/)

B->
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on November 28, 2023, 04:37:48 pm
@Dzandaa,
Your question is intelligent and very proper.

At this moment, I'm not coding RNNs, nor is it clear to me whether I will ever code them. To the other readers of this forum who are interested in RNNs, I would recommend the following:

https://www.youtube.com/watch?v=AsNTP8Kwu80 - RNN
https://www.youtube.com/watch?v=YCzL96nL7j0 - LSTM
https://www.youtube.com/watch?v=L8HKweZIOmg - seq2seq
https://www.youtube.com/watch?v=PSs6nxngL6k - Attention

BTW, it's not crazy to think about NLP with convolutional layers:
https://towardsdatascience.com/nlp-with-cnns-a6aa743bdc1e
https://dennybritz.com/posts/wildml/understanding-convolutional-neural-networks-for-nlp/
https://slds-lmu.github.io/seminar_nlp_ss20/convolutional-neural-networks-and-their-applications-in-nlp.html

If you google for "nlp convolutional neural network", you'll find plenty of info.

The example above is a "hello world". ATM, I have 4 VMs running experiments in this area...
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: Dzandaa on November 29, 2023, 01:07:14 pm
Hi,

@schuler

I reproduced your String Manipulation program using forms and a chart.

Feel free to change it, share it (check for errors!!!) and comment.

Don't forget to change AVX and OpenCL in Project Options!!!

I just tested it with a NVidia card on Windows and Linux Mint.

You have to download CAI from schuler on github : https://github.com/joaopauloschuler/neural-api

and put the "neural" directory from CAI in ../neural , or you can change the path  in Options->Project Options->Path

Have a good day.

B->


 
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 02, 2023, 07:34:31 am
I've just pushed a tremendous memory optimization (I still have more optimizations to go). It dropped the single-thread memory requirement from 4.5GB to 1.7GB for the model below. ATM, each thread consumes an extra 0.6GB.
The model below has 45.6 million parameters and is also intended to guess the next character in a string from a context of up to 81 characters:
Code: Pascal  [Select][+][-]
  1.     FNN := TNNet.Create();
  2.     NFit := TNeuralDataLoadingFit.Create();
  3.  
  4.     FNN.AddLayer([
  5.       TNNetInput.Create(csContextLen, 1, csVocabSize),
  6.       TNNetPointwiseConv.Create(32,0),
  7.       TNNetPadXY.Create(1,0),
  8.       TNNetConvolutionReLU.Create(64,3,0,1,0),
  9.       TNNetMaxPool.Create(3),
  10.       TNNetPadXY.Create(1,0),
  11.       TNNetConvolutionReLU.Create(128,3,0,1,0),
  12.       TNNetMaxPool.Create(3),
  13.       TNNetFullConnectReLU.Create(1024*6),
  14.       TNNetFullConnectReLU.Create(1024*6),
  15.       TNNetFullConnectLinear.Create(csVocabSize),
  16.       TNNetSoftMax.Create()
  17.     ]);
  18.     NFit.MaxThreadNum := 1;
  19.     NFit.LogEveryBatches := 100;
  20.     NFit.InitialLearningRate := 0.001;
  21.     NFit.LearningRateDecay := 0;
  22.     NFit.L2Decay := 0;
  23.     NFit.EnableClassComparison();
  24.     NFit.EnableDefaultLoss();
  25.     NFit.AvgWeightEpochCount := 1; // This is very important for large models
  26.     NFit.OnAfterEpoch := @OnAfterEpoch;
  27.     NFit.FitLoading(
  28.       FNN,
  29.       {TrainingVolumesCount=}FDatasetSize*20,
  30.       {ValidationVolumesCount=}FDatasetSize,
  31.       {TestVolumesCount=}FDatasetSize,
  32.       {batchsize=}32,
  33.       {epochs=}50,
  34.       @GetTrainingPair, @GetValidationPair, @GetTestPair
  35.     );
  36.  

I have also pushed code for NLP Samplers:
Code: Pascal  [Select][+][-]
  1.   { TNNetSamplerGreedy }
  2.   TNNetSamplerGreedy = class (TNNetSamplerBase)
  3.     public
  4.       function GetToken(Origin: TNNetVolume): integer; override;
  5.   end;
  6.  
  7.   { TNNetSamplerTopK }
  8.   TNNetSamplerTopK = class (TNNetSamplerBase)
  9.     protected
  10.       FTopK: integer;
  11.     public
  12.       constructor Create(TopK: integer);
  13.       function GetToken(Origin: TNNetVolume): integer; override;
  14.   end;
  15.  
  16.   { TNNetSamplerTopP }
  17.   TNNetSamplerTopP = class (TNNetSamplerBase)
  18.     protected
  19.       FTopP: TNeuralFloat;
  20.     public
  21.       constructor Create(TopP: TNeuralFloat);
  22.       function GetToken(Origin: TNNetVolume): integer; override;
  23.   end;
  24.  
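
For readers new to these samplers: the greedy sampler always picks the most probable token, TopK samples among the K most probable tokens, and TopP (nucleus sampling) samples among the smallest set of tokens whose cumulative probability exceeds P. As a conceptual illustration only (this is not the API's implementation), top-k sampling over a plain probability array could look like this:

Code: Pascal  [Select][+][-]
  // Picks a token index among the TopK most probable entries of Probs,
  // sampling proportionally to their probabilities.
  function SampleTopK(const Probs: array of Single; TopK: integer): integer;
  var
    i, j, Tmp: integer;
    Idx: array of integer;
    Sum, R, Acc: Single;
  begin
    if TopK < 1 then TopK := 1;
    if TopK > Length(Probs) then TopK := Length(Probs);
    SetLength(Idx, Length(Probs));
    for i := 0 to High(Probs) do Idx[i] := i;
    // Partial selection sort: bring the TopK largest probabilities to the front of Idx.
    for i := 0 to TopK - 1 do
      for j := i + 1 to High(Probs) do
        if Probs[Idx[j]] > Probs[Idx[i]] then
        begin
          Tmp := Idx[i]; Idx[i] := Idx[j]; Idx[j] := Tmp;
        end;
    // Renormalize over the TopK candidates and sample proportionally to probability.
    Sum := 0;
    for i := 0 to TopK - 1 do Sum := Sum + Probs[Idx[i]];
    R := Random * Sum;
    Acc := 0;
    Result := Idx[TopK - 1]; // fallback for rounding errors
    for i := 0 to TopK - 1 do
    begin
      Acc := Acc + Probs[Idx[i]];
      if R <= Acc then
      begin
        Result := Idx[i];
        Break;
      end;
    end;
  end;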

I can now talk to my own NNs with:
Code: Pascal  [Select][+][-]
  1. procedure TestFromFile;
  2. var
  3.   S: string;
  4.   oSampler: TNNetSamplerBase;
  5.   NN: TNNet;
  6. begin
  7.   oSampler := TNNetSamplerTopP.Create(0.4);
  8.   NN := TNNet.Create();
  9.   WriteLn('Loading neural network.');
  10.   NN.LoadFromFile('JP45A01d2.nn');
  11.   NN.DebugStructure();
  12.   WriteLn();
  13.   WriteLn('Write something and I will reply.');
  14.   repeat
  15.     Write('User: ');
  16.     ReadLn(S);
  17.     WriteLn('Neural network: ',GenerateStringFromChars(NN, LowerCase(S), oSampler),'.');
  18.   until S = 'exit';
  19.   NN.Free;
  20.   oSampler.Free;
  21. end;

And then, I get outputs like these:
Code: Pascal  [Select][+][-]
  1. User: this is good for
  2. Neural network: this is good for the construction of the great deal with the south core comparti.
  3. User: this is not good for
  4. Neural network: this is not good for the present " consistent with the extended to be seen as r.

The output text is still not human-like, but it's improving fast.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: cpicanco on December 07, 2023, 01:18:43 am
Hi schuler, do you consider your system feasible for my speech-to-text needs? Please let me explain.

I am building a custom database of CVCV (consonant-vowel-consonant-vowel) pseudowords. Each pseudoword may have an audio, text and image association. I call them "pseudo" because they have no natural meaning; they were artificially created using a mini alphabet composed of 4 vowels and 4 consonants. As you might guess, my goal is to train a model to recognize those words, and my urgent need is a speech-to-text solution.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on December 09, 2023, 05:08:28 am
@cpicanco,
Thank you for considering CAI. CAI is not ready yet for speech recognition. I'll soon have some source code examples in the NLP area but still far from speech recognition. There are mature solutions around for this so I won't drag you into something experimental.
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: schuler on March 04, 2024, 06:03:16 am
 :) Hello :)

In the hope of making neural networks easier to learn, I created a few videos:

* Basics of Neural Networks in Pascal - Loading and Saving: https://youtu.be/aIy1S7clhQo
* Neural Networks for Absolute Beginners! Learning a Simple Function: https://youtu.be/q56NcgUiAAk
* Coding a Neural Network in Pascal that Learns to Calculate the Hypotenuse: https://youtu.be/PdNTgI_qSyo
* Pre-trained Neural Networks & Transfer Learning with Pascal's CAI Neural API: https://youtu.be/tODsv6Ks2DM
* Coding a Neural Network in Pascal that Learns the OR Boolean Operation: https://youtu.be/f4T9IB-He_k
* A Dive into Identity Shortcut Connection - The ResNet building block: https://youtu.be/o-8NuoSsdck

:) I wish everyone happy pascal coding :)
Title: Re: Conscious Artificial Intelligence - Project Update
Post by: gerardus on March 04, 2024, 06:14:04 pm
Thanks a lot.
You have a voice not unlike David Attenborough's  ;)