
### Author Topic: Conscious Artificial Intelligence - Project Update  (Read 24353 times)

#### schuler

• Full Member
• Posts: 178
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #45 on: December 25, 2019, 02:15:01 am »
Good idea about AfterWeightUpdate... Thank you.

Do you have freedom to try something like this (a bigger stride compensated by more neurons in the first convolution)?
Code: Pascal
TNNetInput.Create(512, 512, 1),
TNNetConvolutionReLU.Create({neurons=}128, {FeatureSize=}7, {Padding=}0, {Stride=}7, {SuppressBias=}0),
...

If not, I'll eventually code a workaround (slower, but with less memory).

#### agorka

• New Member
• Posts: 25
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #46 on: December 26, 2019, 10:45:41 am »
Thank you!

I reduced the input size to 256x256 and was able to train the net to similar precision.
With this size, there's still enough memory to create the network in Delphi.
Now I'm adding the 3rd layer (MaxPooling) and getting another inconsistency with Keras:

My layer is
Code: Pascal
TNNetMaxPool.Create(4, 2)

With input size 246 it produces output size 123.
Keras produces size 122.

Code: Pascal
function TNNetPoolBase.CalcOutputSize(pInputSize: integer): integer;
begin
  Result := (pInputSize + FPadding) div FStride;
  if ((pInputSize + FPadding) mod FStride > 0) then Inc(Result);
end;

I mostly find this formula for output size:
Out = (W − F + 2P) / S + 1
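
The two size formulas can be compared directly. This is a small illustrative sketch in Python; the helper names are made up and are not part of either API:

```python
def cai_pool_out(input_size, stride, padding=0):
    # CAI's TNNetPoolBase.CalcOutputSize: ceiling division, so no input is dropped.
    out = (input_size + padding) // stride
    if (input_size + padding) % stride > 0:
        out += 1
    return out

def keras_pool_out(input_size, pool_size, stride, padding=0):
    # The common formula Out = (W - F + 2P)/S + 1 with floor division
    # (what Keras 'valid' pooling does).
    return (input_size - pool_size + 2 * padding) // stride + 1

print(cai_pool_out(246, stride=2))                 # 123 (CAI)
print(keras_pool_out(246, pool_size=4, stride=2))  # 122 (Keras)
```

This reproduces the 123-vs-122 mismatch reported above for an input of 246.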

Could this be corrected to improve compatibility with other neural network frameworks?

Thanks for your continuous help =)

#### schuler

• Full Member
• Posts: 178
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #47 on: December 27, 2019, 01:32:35 am »
Hello agorka,
Thank you for bringing this to my attention.

I've just created a new layer called TNNetMaxPoolPortable with your requested behavior. So, if you get the latest GitHub version, you'll be able to replace your existing max pools with this new one.

The original implementation has a motivation behind it. Assume an 11x11 input followed by a 2x2 max pooling. In this scenario, CAI outputs a 6x6 map, while the portable implementation outputs a 5x5 map (((11 − 2 + 2*0) div 2) + 1 = 5), completely discarding the 11th row and column with a loss of information. Therefore, I still prefer CAI's original implementation.
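
The 11x11 example above can be checked in a few lines. This is an illustrative sketch with invented helper names, not code from either library:

```python
def ceil_pool_out(w, stride, padding=0):
    # CAI-style output size: round up, so trailing rows/columns still contribute.
    return -(-(w + padding) // stride)

def floor_pool_out(w, pool, stride, padding=0):
    # Portable/Keras-style output size: round down, trailing rows/columns are dropped.
    return (w - pool + 2 * padding) // stride + 1

print(ceil_pool_out(11, stride=2))           # 6 (CAI)
print(floor_pool_out(11, pool=2, stride=2))  # 5 (portable); the 11th row/column is ignored
```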

May I ask you what is your target machine? Does it have AVX or OpenCL?

In the case that you have AVX, are you curious to benchmark CAI against another API on your own environment? If yes, I can send you equivalent implementations on both APIs.

#### agorka

• New Member
• Posts: 25
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #48 on: December 28, 2019, 09:03:57 am »
Thank you!
I'm updating right now.
About performance: I mostly care about compatibility right now, so performance doesn't matter much for me. I might return to it after I have everything working. =)

BTW, I know it's strange, but I still use Delphi 7 in parallel with Lazarus, so I have to change some parts of your code to make it compatible.
I don't know if you'd like to keep it compatible with D7, but here are the changes:
Code: Pascal
procedure TVolume.FillForIdx(c: T; const aIdx: array of integer);
var
  Idx: integer;
begin
  for Idx in aIdx do
    FData[Idx] := c;
end;
I change it to:
Code: Pascal
procedure TVolume.FillForIdx(c: T; const aIdx: array of integer);
var
  I, Idx: integer;
begin
  for I := 0 to Length(aIdx) - 1 do
  begin
    Idx := aIdx[I];
    FData[Idx] := c;
  end;
end;
I know it's not as beautiful, but hopefully no less efficient. =)

Also in CopyChannels:
Code: Pascal
for i := 0 to Length(aChannels) - 1 do
begin
  InputDepth := aChannels[i];
  Self[X, Y, OutputDepth] := Original[X, Y, InputDepth];
  Inc(OutputDepth);
end;

I'll write back as soon as I get my FFT data into the net and compare it with the Keras result. I had some mismatches when processing some test data, but that was probably because of the different MaxPooling layer...

#### schuler

• Full Member
• Posts: 178
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #49 on: December 29, 2019, 11:46:50 pm »
@agorka,
You aren't the first to ask for D7 support. At the other extreme, at this moment, I'm retesting AVX512 support with FPC trunk...

I believe that if you open-sourced your work on porting trained networks to Pascal, it would benefit the Pascal user base.

In Keras, do your convolutional layers have biases? If yes, you'll need to bring them across too.

#### agorka

• New Member
• Posts: 25
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #50 on: December 30, 2019, 11:31:01 am »
Oh, great!
D7 support requires very few changes, such as removing inline directives, removing StrictDelimiter, and a few lines more.

I finally got it kinda working. Not a very precise classification, but at least something!
So of course I can share my code, but it's far from ideal.
My convolutions have biases, of course, but I thought that part already works. I still need to compare the Keras and Pascal predictions to check if there are any differences.
Also, I removed all batch normalization layers from the network to avoid possible differences.
And my Keras network had a Flatten layer, which I didn't find in your library, but it seems that a Dense layer works right after a Convolution layer (though I'm not sure the order of weights gets transferred correctly). Removing the Flatten layer in Keras produces the wrong dimension after the Dense layer, so maybe you can implement a similar Flatten layer, or confirm it should work correctly right away?
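
For what it's worth, Keras's Flatten is just a reshape, so a dense layer after a convolution output is equivalent as long as both sides flatten in the same element order. A small plain-NumPy sketch (illustrative only, with made-up shapes):

```python
import numpy as np

# A fake 2x2x3 convolution output (height, width, channels).
conv_out = np.arange(12).reshape(2, 2, 3)

# Keras's Flatten layer is just a reshape to 1D (row-major / C order by default).
flat = conv_out.reshape(-1)

# A dense layer on the flattened vector: y = W @ flat + b.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, flat.size))
b = np.zeros(4)
y = W @ flat + b

# The same result without an explicit Flatten layer, as long as the
# weight order matches the flattening order.
y2 = np.tensordot(W.reshape(4, 2, 2, 3), conv_out, axes=3) + b
assert np.allclose(y, y2)
```

So the key question when skipping Flatten is only whether the two frameworks flatten (H, W, C) in the same order when transferring weights.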

Here goes the Python Keras export part:
Code: Python
def SaveModelToTXT(model, filename):
    print("Saving model to", filename, "...")
    print("Found ", len(model.layers), " layers")
    for i in range(len(model.layers)):
        layer = model.layers[i]
        layerFileName = filename + "_layer" + str(i) + ".csv"
        print(layer.name, ' to ', layerFileName)
        if isinstance(layer, keras.layers.Conv2D):
            with open(layerFileName, "w") as f:
                w = layer.get_weights()
                for filter in range(w[0].shape[3]):
                    bias = w[1][filter]
                    weights = w[0][:, :, :, filter].flatten()
                    title = "Layer" + str(i) + "  Filter " + str(filter) + "  WShape:" + str(w[0].shape)
                    #print(title)
                    dimensions = '1;' + str(w[0].shape[0]) + ';' + str(w[0].shape[1]) + ';' + str(w[0].shape[2])
                    weights = ["%.6f" % number for number in weights]
                    weights = ';'.join(weights)
                    f.write(str(bias) + ']' + dimensions + ';' + weights + "\n")
        if isinstance(layer, keras.layers.Dense):
            with open(filename + "_layer" + str(i) + ".csv", "w") as f:
                w = layer.get_weights()
                print(w[0].shape)
                for filter in range(w[0].shape[1]):
                    bias = w[1][filter]
                    weights = w[0][:, filter].flatten()
                    title = "Layer" + str(i) + "  Filter " + str(filter) + "  WShape:" + str(w[0].shape)
                    dimensions = '1;' + str(w[0].shape[0]) + ';1;1'
                    weights = ["%.6f" % number for number in weights]
                    weights = ';'.join(weights)
                    f.write(str(bias) + ']' + dimensions + ';' + weights + "\n")
    print("Done!")

SaveModelToTXT(model, modelPath + "ModelNetworkWeights")

And here's the Pascal reading of the weights:

Code: Pascal
procedure TSampleCNN.LoadWeights(FN: String);
var I, N : Integer;
    S : String;
    SL : TStringList;
    Size, SX, SY : Integer;
    LayerFN : TFileNameUnicodeString;
begin
For I:=0 to NN.Layers.Count-1 do
  Begin
  If (NN.Layers[I] is TNNetConvolutionLinear) or
     (NN.Layers[I] is TNNetConvolutionReLU) or
     (NN.Layers[I] is TNNetFullConnectLinear) then
     Begin
     Try
     SL:=TStringList.Create;
     LayerFN:=FN+'_layer'+IntToStr(I)+'.csv';
     If not FileExistsUTF8(LayerFN) then
        Begin
        Continue
        End;
     SL.LoadFromFile(LayerFN); // reconstructed: the paste skipped this line, but SL.Count is used below
     If NN.Layers[I].Neurons.Count<>SL.Count then
        Begin
        ShowMessage('Number of neurons mismatch: '+IntToStr(NN.Layers[I].Neurons.Count)+' vs '+IntToStr(SL.Count));
        End
     else
        Begin
        For N:=0 to NN.Layers[I].Neurons.Count-1 do
          Begin
          Size:=NN.Layers[I].Neurons[N].Weights.Size;
          SX:=NN.Layers[I].Neurons[N].Weights.SizeX;
          SY:=NN.Layers[I].Neurons[N].Weights.SizeY;
          // ... (the line that actually loads the weights from SL[N] is missing from the paste)
          If NN.Layers[I].Neurons[N].Weights.Size<>Size then ShowMessage('Incorrect size loaded to layer '+IntToStr(I));
          If NN.Layers[I].Neurons[N].Weights.SizeX<>SX then ShowMessage('Incorrect size loaded to layer '+IntToStr(I));
          If NN.Layers[I].Neurons[N].Weights.SizeY<>SY then ShowMessage('Incorrect size loaded to layer '+IntToStr(I));
          End;
        NN.Layers[I].AfterWeightUpdate;
        End
     Except
        ShowMessage('Exception loading weights for layer '+IntToStr(I));
     End;
     SL.Free;
     End;
  End;
end;
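
For reference, the lines written by the exporter above have the shape `<bias>]1;<dimX>;<dimY>;<dimZ>;<w0>;<w1>;...` (from the `f.write` calls). A quick Python sketch of parsing one back, with a made-up helper name:

```python
def parse_weight_line(line):
    # Split off the bias at the ']' separator, then the ';'-separated fields:
    # the first four are the dimensions header, the rest are the weights.
    bias_str, rest = line.strip().split(']', 1)
    fields = rest.split(';')
    dims = [int(v) for v in fields[:4]]
    weights = [float(v) for v in fields[4:]]
    return float(bias_str), dims, weights

bias, dims, weights = parse_weight_line("0.5]1;3;3;2;0.1;0.2;0.3\n")
print(bias, dims, weights)  # 0.5 [1, 3, 3, 2] [0.1, 0.2, 0.3]
```

This mirrors what the Pascal loader has to do per CSV line, one neuron per line.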

Also, I was thinking: how much could this be optimized if the network is only used for prediction? I've got more projects that would benefit from adding a NN, but memory consumption will be much higher there...

Oleg.

#### schuler

• Full Member
• Posts: 178
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #51 on: December 30, 2019, 04:41:32 pm »
Thank you for sharing your code! I'm certainly going to give it a go.

A beautiful aspect of TNNetVolume is that it can represent data as both 1D and 3D at any time. Therefore, flatten layers aren't required in this API.

In regards to memory requirements, you may consider using separable convolutions: https://github.com/joaopauloschuler/neural-api/tree/master/examples/SeparableConvolution
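
To see why separable convolutions cut memory, compare parameter counts. A back-of-the-envelope sketch with made-up function names, not CAI's actual implementation:

```python
def standard_conv_params(k, c_in, c_out):
    # A regular k x k convolution learns k*k*c_in weights per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise separable: one k x k filter per input channel (depthwise),
    # then a 1x1 convolution mixing channels (pointwise).
    return k * k * c_in + c_in * c_out

print(standard_conv_params(3, 64, 128))   # 73728
print(separable_conv_params(3, 64, 128))  # 8768
```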

If you still need common convolutions with lower memory requirements, I'll add that to my feature list.

• New Member
• Posts: 20
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #52 on: January 03, 2020, 09:17:18 am »
I am a beginner in AI and NN.
Can this API do object detection, and are there some examples?

Thanks for your great work!

#### schuler

• Full Member
• Posts: 178
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #53 on: January 05, 2020, 01:42:19 am »
Very welcome!

Although there are plenty of examples about image classification, there is no example about object detection yet. I was about to start coding one, but ended up coding a generative adversarial example instead. Porting an existing example from Keras to this API is doable.

BTW, as I'm heading for version 1.0, the time to report bugs is now.

#### agorka

• New Member
• Posts: 25
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #54 on: January 05, 2020, 09:44:04 pm »
I don't know if it's a bug report or a feature request, but I think some networks could benefit from non-square convolutions or poolings.
For example, I'm now training my FFT detector on inputs of 512x128, and with only square operations I seem to be quite limited in options. =)

#### schuler

• Full Member
• Posts: 178
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #55 on: January 06, 2020, 02:30:25 pm »
Thank you for the feedback agorka!

Totally off topic: yesterday I was googling to find out how other APIs support weight averaging (https://arxiv.org/abs/1803.05407) and I ended up discovering that plenty of them don't support it.

I'm glad to inform that CAI does have support. This is an example showing how to configure 10 epochs weight averaging for image classification:
Code: Pascal  [Select][+][-]
1. NeuralFit := TNeuralImageFit.Create;
2. NeuralFit.AvgWeightEpochCount := 10; // the number of epochs used for weight averaging
3. NeuralFit.Fit(NN, ImgTrainingVolumes, ImgValidationVolumes, ImgTestVolumes, {NumClasses=}10, {batchsize=}128, {epochs=}100);
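
The idea behind the linked paper is simply to average the weights collected over the last N epochs. An illustrative NumPy sketch of that averaging step, with invented names (not CAI's implementation):

```python
import numpy as np

def average_weights(epoch_weights, last_n):
    # epoch_weights: one list of weight tensors per epoch.
    # Average each tensor position over the last `last_n` epochs.
    recent = epoch_weights[-last_n:]
    return [np.mean(stack, axis=0) for stack in zip(*recent)]

# Fake "weights" for 5 epochs of a model with two weight tensors.
history = [[np.full(3, e), np.full((2, 2), e * 10.0)] for e in range(1, 6)]

avg = average_weights(history, last_n=3)  # averages epochs 3, 4, 5
print(avg[0])  # [4. 4. 4.]
print(avg[1])  # a 2x2 array of 40s
```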

Long life to pascal!

#### Root

• Newbie
• Posts: 4
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #56 on: January 07, 2020, 02:10:10 pm »
Hi, first of all, great work. I would like to ask if this code can be converted with your library.
Thank you very much.

https://github.com/erohkohl/question-tagging/blob/master/src/model.py

#### schuler

• Full Member
• Posts: 178
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #57 on: January 08, 2020, 07:14:33 pm »
@Root,
Yes. It can be done.

Inner layers are TNNetFullConnectReLU.

The last 2 layers are (in this order): TNNetFullConnectLinear and TNNetSoftMax.

#### Root

• Newbie
• Posts: 4
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #58 on: January 12, 2020, 09:35:48 pm »
Are you planning to implement these two algorithms?

Recurrent Neural Networks
LSTM Networks

ref.
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
I think it only requires adding the layer specifications and the related calculations.

This is a library that implements them:
https://github.com/cvsandbox/ANNT

Thank you

#### schuler

• Full Member
• Posts: 178
##### Re: Conscious Artificial Intelligence - Project Update
« Reply #59 on: January 14, 2020, 05:15:59 pm »
Hello Root,
RNN and LSTM are desirable but aren't currently planned.

Next steps in this API are:
• More examples including more generative adversarial networks.
• Add support for other common data sets with examples.
• Improve documentation.
• OpenCL optimizations and better integration with TNNetVolume.
• Multi OpenCL device support so you can run your models across a number of devices with no source code change.

Soon, we'll have a new example with the Tiny ImageNet dataset https://tiny-imagenet.herokuapp.com/. This data set has 64x64 RGB images and 200 classes.

I'm happy to help anyone trying to code new layers with a forked github repo.

Wish everyone happy pascal coding.