Author Topic: Conscious Artificial Intelligence - Project Update  (Read 60693 times)

Dzandaa

  • Full Member
  • ***
  • Posts: 248
  • From C# to Lazarus
Re: Conscious Artificial Intelligence - Project Update
« Reply #135 on: December 09, 2022, 11:45:22 am »
Dr Schuler  :D,


Congratulations!!!

Great job.

B->
Dzandaa

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #136 on: December 10, 2022, 04:26:26 am »
@metis and @Dzandaa,
Thank you!

@Thaddy,
Quote
Well, with a last name like that, you obviously felt the pressure to become a scholar.
You certainly have a point.


schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #137 on: December 10, 2022, 04:46:52 am »
... the coding continues ...

The paper Searching for Activation Functions describes the search for a better activation function than ReLU. In their work, the authors found Swish to be the best replacement for ReLU. The downside of Swish is that it requires a lot of computation. Later, the paper Searching for MobileNetV3 introduced the Hard Swish activation function. Hard Swish gives results similar to Swish with a lot less computation.
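For reference, the two functions differ only in how the gate is computed: Swish is x * sigmoid(x), while Hard Swish replaces the sigmoid with the cheap piecewise-linear gate ReLU6(x+3)/6. A quick sketch in Python of just the math from the papers (not the library's Pascal implementation):

```python
import math

def swish(x):
    # Swish ("Searching for Activation Functions"): x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def hard_swish(x):
    # Hard Swish ("Searching for MobileNetV3"): x * ReLU6(x + 3) / 6
    # ReLU6(v) = min(max(v, 0), 6) needs no exponential at all.
    return x * max(0.0, min(6.0, x + 3.0)) / 6.0

# The two curves are close; Hard Swish avoids the exponential entirely.
for x in (-4.0, -1.0, 0.0, 1.0, 4.0):
    print(f"{x:+.1f}  swish={swish(x):+.4f}  hard_swish={hard_swish(x):+.4f}")
```

Outside roughly [-3, 3] the two functions coincide exactly (Hard Swish is 0 below -3 and the identity above +3), which is why the accuracy results stay so close while the computation drops.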

I've just added the Hard Swish implementation to the source code and done new runs comparing ReLU, Swish and Hard Swish. Source code and results follow.

Source codes are:

Results are (CIFAR-10 test classification accuracy):

Looks pretty good to me!

:) Wish everyone happy pascal coding. :)
« Last Edit: December 11, 2022, 03:21:32 am by schuler »

Dzandaa

  • Full Member
  • ***
  • Posts: 248
  • From C# to Lazarus
Re: Conscious Artificial Intelligence - Project Update
« Reply #138 on: December 10, 2022, 04:10:42 pm »
Hi,
thank you for the update.

Have you tested the "mish" activation?
https://ai-scholar.tech/en/articles/treatise/mish-ai-374

It seems easy to implement.
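For the record, Mish is defined as x * tanh(softplus(x)), so it really is easy to implement. A small Python sketch of just the formula from the Mish paper (not neural-api code):

```python
import math

def softplus(x):
    # Numerically stable softplus: log(1 + e^x) = max(x, 0) + log(1 + e^-|x|)
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def mish(x):
    # Mish (Misra, 2019): x * tanh(softplus(x))
    return x * math.tanh(softplus(x))

# Smooth, slightly non-monotonic below zero, approximately identity for large x:
for x in (-2.0, 0.0, 2.0, 10.0):
    print(f"mish({x:+.1f}) = {mish(x):+.6f}")
```

Like Swish, it keeps a smooth gradient everywhere, at the cost of an exponential and a tanh per activation.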

B->
Dzandaa

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #139 on: December 11, 2022, 04:14:55 am »
@Dzandaa,
It looks good.

I've just added a feature request to this: https://github.com/joaopauloschuler/neural-api/issues/106 .

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #140 on: December 21, 2022, 06:14:42 pm »
DrDub from GitHub coded a simple web server that receives an image and classifies it with a neural network. I did plenty of updates on top of his code. The code can be found at:

https://github.com/joaopauloschuler/neural-api/tree/master/examples/ResNet/server

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #141 on: January 09, 2023, 05:34:18 pm »
 :) Hello :)
Just finished updating the autoencoder example: https://github.com/joaopauloschuler/neural-api/tree/master/examples/VisualAutoencoder .

This example uses the freshly tested (but coded long ago) TNNetUpsample layer to increase resolution on the decoder side. TNNetUpsample converts channels (depth) into spatial data. For example, a 128x128x256 activation map is converted to 256x256x64. This layer type does a similar job to TensorFlow's tf.nn.depth_to_space.
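The rearrangement itself is easy to picture. Below is a tiny Python sketch of the channels-to-space idea (it follows tf.nn.depth_to_space's channel ordering for illustration; TNNetUpsample's internal ordering may differ):

```python
def depth_to_space(x, block):
    # x is a nested list with shape [H][W][C]; C must be divisible by block*block.
    # Returns shape [H*block][W*block][C // (block*block)]: each group of
    # block*block channels of a pixel is laid out as a block x block spatial tile.
    h, w, c = len(x), len(x[0]), len(x[0][0])
    oc = c // (block * block)
    out = [[[0] * oc for _ in range(w * block)] for _ in range(h * block)]
    for i in range(h):
        for j in range(w):
            for di in range(block):
                for dj in range(block):
                    for k in range(oc):
                        out[i * block + di][j * block + dj][k] = \
                            x[i][j][(di * block + dj) * oc + k]
    return out

# A single 1x1x4 "pixel" becomes a 2x2x1 tile:
print(depth_to_space([[[0, 1, 2, 3]]], 2))  # [[[0], [1]], [[2], [3]]]
```

With block = 2, a 128x128x256 activation map becomes 256x256x64, matching the numbers in the post above: resolution doubles in each spatial dimension while depth shrinks by a factor of four.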

 :) Wish everyone happy pascal coding :)

Dzandaa

  • Full Member
  • ***
  • Posts: 248
  • From C# to Lazarus
Re: Conscious Artificial Intelligence - Project Update
« Reply #142 on: February 12, 2023, 12:26:59 pm »
Your sources are available on two platforms.
Which one is the most recent?

Thank you.
Dzandaa

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #143 on: May 25, 2023, 08:15:26 am »
:) Hello :)

Sorry for being away for so long.

Given that everyone is talking about chatGPT and language models in general, I'll join the trend.

I've been doing experiments with chatGPT 3.5, chatGPT 4 and Hugging Face. For some questions, they all fail terribly. For other questions, one or another gives a good reply.

Anyway, here follows a very interesting experiment with Free Pascal.

Schuler:

Hello. I hope that you are doing well. I'm trying to understand the following free pascal source code. Do you know what it does please?

Code: Pascal
NN := TNNet.Create();
NN.AddLayer([
  TNNetInput.Create(32, 32, 3), //32x32x3 Input Image
  TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  TNNetMaxPool.Create({Size=}2),
  TNNetConvolutionReLU.Create({Features=}32, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  TNNetMaxPool.Create({Size=}2),
  TNNetConvolutionReLU.Create({Features=}32, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  TNNetFullConnectReLU.Create({Neurons=}32),
  TNNetFullConnectLinear.Create(NumClasses),
  TNNetSoftMax.Create()
]);

GPT-3.5 :

The given source code is creating a neural network model using the Free Pascal programming language. Here's a breakdown of what the code does:

NN := TNNet.Create(); - This line creates an instance of a neural network model called NN using the TNNet class.

NN.AddLayer([...]); - This line adds layers to the neural network model. Each layer is specified as an array of objects representing different types of layers in the network.

a. TNNetInput.Create(32, 32, 3) - This layer represents the input layer of the network and expects 32x32x3 input images (32 pixels width, 32 pixels height, and 3 color channels for RGB).

b. TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0) - This layer is a convolutional layer with Rectified Linear Unit (ReLU) activation. It performs 2D convolutions on the input images with 16 features (output channels), a feature size of 5x5, no padding, a stride of 1, and bias not suppressed.

c. TNNetMaxPool.Create({Size=}2) - This layer is a max pooling layer that downsamples the input by selecting the maximum value within a 2x2 window.

d. The above two layers (convolution + max pooling) are repeated twice with different feature sizes (32 features) but with the same configuration.

e. TNNetFullConnectReLU.Create({Neurons=}32) - This layer is a fully connected layer with ReLU activation. It connects all the neurons from the previous layer to the current layer with 32 neurons.

f. TNNetFullConnectLinear.Create(NumClasses) - This layer is a fully connected layer with linear activation. It connects the previous layer to the output layer, where NumClasses represents the number of classes in the classification task.

g. TNNetSoftMax.Create() - This layer is the output layer that applies the softmax function to produce probability scores for each class.

By adding these layers to the neural network model, the code creates a convolutional neural network (CNN) architecture commonly used for image classification tasks. The specific architecture details (number of layers, feature sizes, etc.) may vary depending on the specific problem domain and requirements.

Schuler :
Regarding the provided source code above, can you rewrite it adding more and better source code comments please?

GPT-3.5 :
Certainly! Here's the rewritten source code with more comprehensive comments:

Code: Pascal
NN := TNNet.Create(); // Create a new neural network model

NN.AddLayer([
  TNNetInput.Create(32, 32, 3), // Input layer: Expects 32x32x3 input images (32 pixels width, 32 pixels height, and 3 color channels for RGB)

  TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  // Convolutional layer with ReLU activation:
  // - Features: 16 output channels
  // - FeatureSize: 5x5 filter size
  // - Padding: 0 (no additional padding)
  // - Stride: 1 (movement of the filter by 1 pixel at a time)
  // - SuppressBias: 0 (include bias parameters)

  TNNetMaxPool.Create({Size=}2), // Max pooling layer: Downsampling the input by selecting the maximum value within a 2x2 window

  TNNetConvolutionReLU.Create({Features=}32, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  // Another Convolutional layer with ReLU activation:
  // - Features: 32 output channels
  // - FeatureSize: 5x5 filter size
  // - Padding: 0
  // - Stride: 1
  // - SuppressBias: 0

  TNNetMaxPool.Create({Size=}2), // Another Max pooling layer

  TNNetConvolutionReLU.Create({Features=}32, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  // Another Convolutional layer with ReLU activation:
  // - Features: 32 output channels
  // - FeatureSize: 5x5 filter size
  // - Padding: 0
  // - Stride: 1
  // - SuppressBias: 0

  TNNetFullConnectReLU.Create({Neurons=}32), // Fully connected layer with ReLU activation: Connects all neurons from the previous layer to the current layer with 32 neurons

  TNNetFullConnectLinear.Create(NumClasses), // Fully connected layer with linear activation: Connects the previous layer to the output layer with 'NumClasses' number of neurons/classes

  TNNetSoftMax.Create() // Softmax layer: Produces probability scores for each class
]);

--
Additional comments from Schuler:
GPT-3.5 went vastly beyond the original source code comments. The only mistake I can see is "repeated twice". I would say "repeated" or "repeated once".

What is now clear to me: GPT-3.5 can help explain existing Free Pascal source code and can help add documentation.

Another topic: I'll be at https://www.pascalcongress.com/ ...

--

I then asked the very first question to GPT-4 and this is what I've got:

GPT-4 :
This Pascal code is defining a Convolutional Neural Network (CNN) using the TNNet library. CNNs are a class of deep neural networks, most commonly used to analyze visual images.

Let's break down the code:

TNNet.Create(): This line is creating a new instance of a neural network object.

NN.AddLayer(): This method is adding layers to the neural network. In this case, the network contains the following layers:

TNNetInput.Create(32, 32, 3): This is the input layer of the neural network. It's expecting input images of size 32x32 with 3 channels (probably RGB color images).

TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0): This is a convolutional layer with a Rectified Linear Unit (ReLU) activation function. It uses 16 filters, each with a size of 5x5. There's no padding and the stride is 1. SuppressBias is set to 0, which means bias is not suppressed (bias is a constant value added to the weighted inputs and the bias before passing through the activation function).

TNNetMaxPool.Create({Size=}2): This is a max pooling layer, which reduces the spatial size of the representation to reduce the amount of parameters and computation in the network. This layer is using a 2x2 pooling window.

Another convolutional ReLU layer and max pooling layer are added, this time with 32 filters.

TNNetFullConnectReLU.Create({Neurons=}32): This is a fully connected layer with a ReLU activation function. The layer has 32 neurons.

TNNetFullConnectLinear.Create(NumClasses): This is a fully connected linear layer. The number of neurons equals to the number of classes in the problem at hand, which is defined elsewhere in the code as NumClasses.

TNNetSoftMax.Create(): The final layer is a SoftMax layer, which outputs a vector that represents the probability distributions of a list of potential outcomes. It's most commonly used in applied machine learning, where it can be used for multi-class classification.

So, the code is building a Convolutional Neural Network with specific architecture to be trained and used for image classification tasks. The specific task would be defined by the training data and the NumClasses variable.

--

More comments from Schuler: GPT-4 hasn't made a single error explaining this code. Users could also ask things like "please clarify this line or that line", "please teach me the concept of ...", or "please improve this portion of code" (which would not make sense as my source code is perfect...  O:-)  ).

--
HuggingChat OpenAssistant/oasst-sft-6-llama-30b:

This Free Pascal source code appears to define an object called NN, which is an instance of class TNNet from some package. From looking at this code snippet alone, we can deduce that this code creates layers in NN using methods from different classes defined within the TNNet module (or other dependent packages). These classes include TNNetInput for input data layer, TNNetConvolutionReLU/TNNetMaxPool sequence for convolutional layers, TNNetConvolutionReLU/TNNetMaxPool again and then FullConnect layers before finishing off with a softmax output activation function (SMO) layer, ultimately producing outputs for multiple different classes NumClasses. But without knowing more details about how these components work together it’s impossible to tell precisely what these weights compute specifically.
 
« Last Edit: May 26, 2023, 05:17:06 pm by schuler »

Dzandaa

  • Full Member
  • ***
  • Posts: 248
  • From C# to Lazarus
Re: Conscious Artificial Intelligence - Project Update
« Reply #144 on: May 25, 2023, 04:19:38 pm »
Hi
A little scary isn't it?
Especially since ChatGPT was trained in part with data from GitHub without asking for approval.  :D

One question:
Do you have an idea for training a network to detect atoms from spectra?

I thought about an auto-encoder, but I'm stuck on the rest.

Have a nice day

B->
« Last Edit: May 25, 2023, 07:17:29 pm by Dzandaa »
Dzandaa

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #145 on: May 26, 2023, 05:48:22 pm »
@dzandaa,
regarding "scary", I would look at the following from GPT-3.5:

Code: Pascal
  TNNetConvolutionReLU.Create({Features=}16, {FeatureSize=}5, {Padding=}0, {Stride=}1, {SuppressBias=}0),
  // Convolutional layer with ReLU activation:
  // - Features: 16 output channels
  // - FeatureSize: 5x5 filter size
  // - Padding: 0 (no additional padding)
  // - Stride: 1 (movement of the filter by 1 pixel at a time)
  // - SuppressBias: 0 (include bias parameters)
The comments above are correct. It (or he - I don't know) guessed by itself "16 output channels", "no additional padding", "movement of the filter by 1 pixel at a time" and "include bias parameters". All of these guesses are correct. Most impressive of all, "SuppressBias" is very particular to this API...

Still regarding "scary", I hope people will not try to use these technologies to automate decision making. The chances of disaster are enormous. I also fear that people might end up blindly believing whatever these technologies say. It's absolutely important for users to keep thinking critically when using social media, search engines and language models. Language models could be used to provide information that fits a specific narrative, and users would think that they arrived at conclusions by themselves when they were actually manipulated into certain beliefs.

Anyway, it's a fantastic technology.

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #146 on: July 30, 2023, 02:18:42 am »
 :) Hello :)

I've just published the source code and trained neural networks for the Malaria, Colorectal Cancer and Plant Village image classification datasets. The test classification accuracies are 95.63%, 94.26% and 99.03% respectively. I showed these models (neural networks) for the first time at the 2023 International Pascal Congress. These models are now public.

You can find details (source code, dataset, trained model for download) at:
https://github.com/joaopauloschuler/pre-trained-neural-api-networks

:) Wish everyone happy pascal coding :)

Dzandaa

  • Full Member
  • ***
  • Posts: 248
  • From C# to Lazarus
Re: Conscious Artificial Intelligence - Project Update
« Reply #147 on: July 30, 2023, 03:44:04 pm »
Hi,

Thank you very much,
I'll test it tomorrow.

Happy Coding!!!

B->
Dzandaa

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #148 on: July 30, 2023, 11:15:59 pm »
Quote
I'll test it tomorrow.

I'll add code examples for loading and testing trained models. I'll let you know when ready.

schuler

  • Full Member
  • ***
  • Posts: 223
Re: Conscious Artificial Intelligence - Project Update
« Reply #149 on: July 31, 2023, 05:47:34 pm »
Quote
I'll test it tomorrow.

I've just added examples to make the testing easier. For testing only one image, you can follow this example:

Code: Pascal
procedure DoRun;
var
  NN: TNNet;
  ImageFileName: string;
  NeuralFit: TNeuralImageFit;
  vInputImage, vOutput: TNNetVolume;
  InputSizeX, InputSizeY, NumberOfClasses: integer;
begin
  WriteLn('Loading Neural Network...');
  NN := TNNet.Create;
  NN.LoadFromFile('SimplePlantLeafDisease-20230720.nn');
  NN.DebugStructure();
  InputSizeX := NN.Layers[0].Output.SizeX;
  InputSizeY := NN.Layers[0].Output.SizeY;
  NumberOfClasses := NN.GetLastLayer().Output.Size;

  NeuralFit := TNeuralImageFit.Create;
  vInputImage := TNNetVolume.Create();
  vOutput := TNNetVolume.Create(NumberOfClasses);

  ImageFileName := 'plant/Apple___Black_rot/image (1).JPG';
  WriteLn('Loading image: ', ImageFileName);

  if LoadImageFromFileIntoVolume(
    ImageFileName, vInputImage, InputSizeX, InputSizeY,
    {EncodeNeuronalInput=}csEncodeRGB) then
  begin
    WriteLn('Classifying the image: ', ImageFileName);
    vOutput.Fill(0);
    NeuralFit.ClassifyImage(NN, vInputImage, vOutput);
    WriteLn('The image belongs to the class of images: ', vOutput.GetClass());
  end
  else
  begin
    WriteLn('Failed loading image: ', ImageFileName);
  end;

  vInputImage.Free;
  vOutput.Free;
  NeuralFit.Free;

  WriteLn('Press ENTER to quit.');
  ReadLn();

  NN.Free;
end;

The above source code is located at:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimplePlantLeafDisease/TestPlantLeafDiseaseTrainedModelOneImage.pas

If you would like to test against the actual training dataset, you can follow this example:
https://github.com/joaopauloschuler/neural-api/blob/master/examples/SimplePlantLeafDisease/TestPlantLeafDiseaseTrainedModel.pas
