Forum > Third party

Conscious Artificial Intelligence - Project Update


Trenatos:
This is very cool to see, many thanks for sharing!

schuler:
Hello,
I got a couple of private questions from forum members and decided to post here a general reply as it might help others.

This link gives a general view of CNNs:
http://cs231n.github.io/convolutional-networks/

The link above mentions some well-known NN architectures. I would like to show how to implement some of them with CAI:

Yann LeCun's LeNet-5:


--- Code: Pascal ---
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(28, 28, 1) );
NN.AddLayer( TNNetConvolution.Create(6, 5, 0, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetConvolution.Create(16, 5, 0, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetFullConnect.Create(120) );
NN.AddLayer( TNNetFullConnect.Create(84) );
NN.AddLayer( TNNetFullConnectLinear.Create(10) );
NN.AddLayer( TNNetSoftMax.Create() );
AlexNet can be implemented as follows:


--- Code: Pascal ---
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(227, 227, 3) );
//Conv1 + ReLU
NN.AddLayer( TNNetConvolutionReLU.Create(96, 11, 0, 4) );
NN.AddLayer( TNNetMaxPool.Create(3, 2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create(5) );
//Conv2 + ReLU
NN.AddLayer( TNNetConvolutionReLU.Create(256, 5, 2, 1) );
NN.AddLayer( TNNetMaxPool.Create(3, 2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create(5) );
//Conv3,4,5 + ReLU
NN.AddLayer( TNNetConvolutionReLU.Create(384, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(384, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(3, 2) );
// Dropouts and Dense Layers
NN.AddLayer( TNNetDropout.Create(0.5) );
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetDropout.Create(0.5) );
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetFullConnectLinear.Create(1000) );
NN.AddLayer( TNNetSoftMax.Create() );
The above implementation is based on:
https://medium.com/@smallfishbigsea/a-walk-through-of-alexnet-6cbd137a5637

The VGGNet has an interesting architecture. It always uses 3x3 filters, making its implementation easy to understand:


--- Code: Pascal ---
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(224, 224, 3) );
NN.AddLayer( TNNetConvolutionReLU.Create(64, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(64, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) ); //112x112x64
NN.AddLayer( TNNetConvolutionReLU.Create(128, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(128, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) ); //56x56x128
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(256, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) ); //28x28x256
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) ); //14x14x512
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetConvolutionReLU.Create(512, 3, 1, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) ); //7x7x512
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetFullConnectReLU.Create(4096) );
NN.AddLayer( TNNetFullConnectLinear.Create(1000) );
NN.AddLayer( TNNetSoftMax.Create() );
The above architectures have been added to the API as follows:

--- Code: Pascal ---
  THistoricalNets = class(TNNet)
    public
      procedure AddLeCunLeNet5(IncludeInput: boolean);
      procedure AddAlexNet(IncludeInput: boolean);
      procedure AddVGGNet(IncludeInput: boolean);
  end;
Hope it helps, and happy Pascal coding!

schuler:
Hello,
This message is intended for those who love Pascal and neural networks.

After a long time, here we go with a new post. This time, this post is more about how to do things than just listing new features. I would like to show how to replicate examples made for Keras and TensorFlow with CAI. As shown in the paper https://arxiv.org/abs/1412.6806, it's possible to create efficient neural networks with just convolutional layers, so we'll start with those. There is a simple Keras example at https://keras.io/examples/cifar10_cnn/. In this code, convolutional layers are added with:

--- Code: Python ---
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
The above can be done in CAI with just one line of code:

--- Code: Pascal ---
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
It's also possible to do this with two lines of code, but it's not recommended as it's less efficient in CAI:

--- Code: Pascal ---
NN.AddLayer( TNNetConvolutionLinear.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetReLU.Create() );
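The `{padding}` and `{stride}` parameters above follow the usual convolution output-size formula, (W - F + 2P) / S + 1. A quick Python check (a hypothetical helper written for illustration, not part of CAI or Keras):

```python
def conv_output_size(input_size, feature_size, padding, stride):
    """Spatial output size of a convolution: (W - F + 2P) // S + 1."""
    return (input_size - feature_size + 2 * padding) // stride + 1

# padding=1, stride=1 with a 3x3 filter preserves the input size,
# which is what Keras calls padding='same':
print(conv_output_size(32, 3, 1, 1))    # 32
# AlexNet's first convolution: 227x227 input, 11x11 filter, stride 4:
print(conv_output_size(227, 11, 0, 4))  # 55
```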
This is the MaxPooling from Keras:

--- Code: Python ---
model.add(MaxPooling2D(pool_size=(2, 2)))
And this is the equivalent from CAI:

--- Code: Pascal ---
NN.AddLayer( TNNetMaxPool.Create(2) );
The full convolutional neural network can be defined in plain Free Pascal with:

--- Code: Pascal ---
NN.AddLayer( TNNetInput.Create(32, 32, iInputDepth) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}32, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}3, {padding}1, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetFullConnectReLU.Create({neurons}512) );
NN.AddLayer( TNNetFullConnectLinear.Create(NumClasses) );
NN.AddLayer( TNNetSoftMax.Create() );
As you can see above, there aren't any dropout layers in the Free Pascal implementation. This is intentional: I prefer using data augmentation instead of dropout layers. The data augmentation technique used here is very simple:

* We crop the image and then stretch it back to the original size.
* We randomly flip the image horizontally.
* We randomly make the image gray.

The above can be easily done in Free Pascal:

--- Code: Pascal ---
// Crop and Stretch
CropSizeX := random(9);
CropSizeY := random(9);
ImgInputCp.CopyCropping(ImgVolumes[ImgIdx], random(CropSizeX), random(CropSizeY),
  ImgVolumes[ImgIdx].SizeX - CropSizeX, ImgVolumes[ImgIdx].SizeY - CropSizeY);
ImgInput.CopyResizing(ImgInputCp, ImgVolumes[ImgIdx].SizeX, ImgVolumes[ImgIdx].SizeY);
// Flip the Image
if Random(1000) > 500 then
begin
  ImgInput.FlipX();
end;
// Make it Gray
if (Random(1000) > 750) then
begin
  ImgInput.MakeGray(color_encoding);
end;
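For comparison only, the same three augmentation steps can be sketched in Python with NumPy. This is hypothetical illustration code, not part of CAI; a simple nearest-neighbour resize stands in for CAI's CopyResizing:

```python
import numpy as np

def augment(img, rng):
    h, w, _ = img.shape
    # 1. Crop up to 8 pixels in each dimension, then stretch back
    #    (mirrors CropSizeX/CropSizeY := random(9) in the Pascal code).
    ch = int(rng.integers(0, 9))
    cw = int(rng.integers(0, 9))
    y = int(rng.integers(0, ch + 1))
    x = int(rng.integers(0, cw + 1))
    cropped = img[y:y + h - ch, x:x + w - cw]
    ys = np.arange(h) * cropped.shape[0] // h   # nearest-neighbour rows
    xs = np.arange(w) * cropped.shape[1] // w   # nearest-neighbour columns
    img = cropped[ys][:, xs]
    # 2. Random horizontal flip (50% of the time).
    if rng.random() > 0.5:
        img = img[:, ::-1]
    # 3. Random grayscale (25% of the time, as in the Pascal code).
    if rng.random() > 0.75:
        img = np.repeat(img.mean(axis=2, keepdims=True), 3, axis=2)
    return img

rng = np.random.default_rng(0)
sample = rng.random((32, 32, 3))   # stand-in for a CIFAR-10 image
out = augment(sample, rng)
print(out.shape)  # (32, 32, 3)
```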
Running the above in CAI gives really impressive results (to me, they seem better than Keras itself…). If you would like to try it yourself, you can get this Lazarus project:
https://sourceforge.net/p/cai/svncode/724/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/

When you compile and run it, select “2 – Keras inspired” in the “Algo” dropdown, check “Multiple Samples at Validation” and then start it. Although this example looks simple, it's no numerical toy: it has more than 2.1 million neuronal weights/connections!
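The weight count can be double-checked with a quick back-of-envelope tally in Python (assuming a 3-channel input, 10 classes, and one bias per neuron; the exact bookkeeping in CAI may differ slightly):

```python
def conv_params(n_out, k, n_in):
    # Weights plus one bias per filter: n_out * (k*k*n_in + 1).
    return n_out * (k * k * n_in + 1)

def fc_params(n_out, n_in):
    # Weights plus one bias per neuron.
    return n_out * (n_in + 1)

# 32x32x3 input; the two 2x2 max-pools shrink 32 -> 16 -> 8.
total = (conv_params(32, 3, 3)        # first 3x3 convolution
       + conv_params(32, 3, 32)       # second 3x3 convolution
       + conv_params(64, 3, 32)       # third 3x3 convolution
       + conv_params(64, 3, 64)       # fourth 3x3 convolution
       + fc_params(512, 8 * 8 * 64)   # dense layer on the flattened 8x8x64
       + fc_params(10, 512))          # 10 CIFAR-10 output classes
print(total)  # 2168362 -> over 2.1 million
```

Almost all of the weights sit in the first fully connected layer, which is a common pattern in this style of network.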

I would also like to show you another example ported to Free Pascal. This time, it's an example from TensorFlow: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py
The network structure can be found in “def inference(images)”. This structure was ported to Free Pascal as follows:

--- Code: Pascal ---
NN.AddLayer( TNNetInput.Create(24, 24, 3) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}2, {stride}1) );
NN.AddLayer( TNNetMaxPool.Create({size}3, {stride}2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create({diameter}9) );
NN.AddLayer( TNNetConvolutionReLU.Create({neurons}64, {featuresize}5, {padding}2, {stride}1) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create({diameter}9) );
NN.AddLayer( TNNetMaxPool.Create({size}3, {stride}2) );
NN.AddLayer( TNNetFullConnectReLU.Create({neurons}384) );
NN.AddLayer( TNNetFullConnectReLU.Create({neurons}192) );
NN.AddLayer( TNNetFullConnectLinear.Create(NumClasses) );
NN.AddLayer( TNNetSoftMax.Create() );
Running it in CAI/Free Pascal, you'll get classification accuracy similar to the results reported on the TensorFlow site. If you would like to run this yourself, select “3 – TF inspired”, “Multiple Samples at Validation” and “24x24 image cropping”. The data augmentation technique in this example is a bit different from before: we crop the original image to 24x24 at random positions, and we also randomly flip it and sometimes make it gray.

:-) I wish everyone happy Pascal neural networking! :-)

avra:

--- Quote from: schuler on March 13, 2019, 10:19:49 pm ---this post is more about how to do things than just listing new features
--- End quote ---
Much appreciated. Thank you!

schuler:
 @Avra,
I'm starting to believe that most of the examples showing how to use CAI are too advanced.

I've just added two entry-level examples. The first shows how to learn the boolean functions XOR, AND and OR:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/supersimple/
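The linked example is written in Pascal with CAI. For intuition only, here is a hypothetical NumPy sketch of the same idea: a tiny 2-3-1 network trained on XOR with plain backpropagation. None of these names come from CAI, and the hyperparameters (3 hidden units, learning rate 1.0, 5000 iterations) are arbitrary choices for the sketch:

```python
import numpy as np

# Truth table for XOR: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros(3)   # input -> hidden
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss():
    return float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())

initial_loss = loss()
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # forward: hidden activations
    out = sigmoid(h @ W2 + b2)            # forward: network output
    d_out = (out - y) * out * (1 - out)   # backward: squared-error gradient
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)   # gradient step, lr = 1.0
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)
final_loss = loss()
print(initial_loss, '->', final_loss)     # loss should shrink as XOR is learned
```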

The next is the same as above, but using Pearson correlation:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/supersimplecorrelation/

:) I hope it helps, and happy Pascal coding to everyone. :)
