Conscious Artificial Intelligence Keras to CAI
Dzandaa:
Hi everybody.
Using Lazarus 2.0.12 on Windows and Linux.
I am trying to translate a little Keras Python test to CAI:
--- Code: Python ---
import cv2
import keras
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

img = cv2.imread("Funny_Ben.jpg")
img = cv2.resize(img, (224, 224))

model = keras.Sequential()

# Input
# 224x224x3

# Block 1
model.add(Conv2D(64, kernel_size=(3, 3), padding="same", activation="relu", input_shape=(224, 224, 3)))
# 224x224x64
model.add(Conv2D(64, kernel_size=(3, 3), padding="same", activation="relu"))
# 224x224x64
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
# 112x112x64

model.build()
model.summary()

# Result
result = model.predict(np.array([img]))

# Feature list
for i in range(64):
    feature_img = result[0, :, :, i]
    ax = plt.subplot(8, 8, i + 1)
    ax.set_xticks([])
    ax.set_yticks([])
    plt.imshow(feature_img, cmap="gray")
plt.show()
--- End code ---
What I have so far:
--- Code: Pascal ---
OpenPicture: TOpenPictureDialog;
InputVolume, PredictedVolume: TNNetVolume;
ImgInput: TImage;
NN: TNNet;

procedure TestForm.BTestClick(Sender: TObject);
var
  Res: Boolean;
  TmpVolume: TNNetVolume;
begin
  InputVolume := TNNetVolume.Create();
  PredictedVolume := TNNetVolume.Create(1);
  if (OpenPicture.Execute) then
  begin
    TmpVolume := TNNetVolume.Create();
    Res := LoadImageFromFileIntoVolume(OpenPicture.FileName, TmpVolume);
    if (Res) then
    begin
      ImgInput.Width := 224;
      ImgInput.Height := 224;
      InputVolume.CopyResizing(TmpVolume, 224, 224);
      LoadVolumeIntoTImage(InputVolume, ImgInput); // OK

      NN := TNNet.Create();
      NN.AddLayer(TNNetInput.Create(224, 224, 3)); // 224x224x3 input image

      // Block 1
      // 224x224x3
      NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1));
      // 224x224x64
      NN.AddLayer(TNNetConvolutionReLU.Create({Features=}64, {FeatureSize=}3, {Padding=}1, {Stride=}1));
      // 224x224x64
      NN.AddLayer(TNNetMaxPool.Create(2));
      // 112x112x64

      NN.SetLearningRate(0.01, 0.8);
      NN.Compute(InputVolume);
      NN.GetOutput(PredictedVolume);

      // Here I want to display the channel list from PredictedVolume
      // in a TImage, but I don't know how...
    end;
    TmpVolume.Free;
  end;
  InputVolume.Free;
  PredictedVolume.Free;
  NN.Free;
end;
--- End code ---
I don't know if this is the right way to do it...
Any help?
Thank you.
schuler:
Hello @Dzandaa,
I presume that you intend to plot each channel of the activation map (not the features, i.e. the neuronal weights).
To everyone else reading this post: if you do not know what an activation map is, have a look at https://towardsdatascience.com/activation-maps-for-deep-learning-models-in-a-few-lines-of-code-ed9ced1e8d21 .
@Dzandaa, in your Python code you are extracting one channel of the activation map: feature_img = result[0, :, :, i].
As this uses the "channels last" notation, i is the channel index.
This is what I would do in Pascal:
To produce an image, you need 3 channels (RGB). So, I would first produce a volume with 3 channels by replicating the same channel with TVolume.CopyChannels(Original: TVolume; aChannels: array of integer) from the neuralvolume unit. In your code, it would look like this:
--- Code: Pascal ---
ImageVolume.CopyChannels(PredictedVolume, [i, i, i]);
--- End code ---
As you have a ReLU, there are no negative values to worry about.
Then, you'll need to scale the volume to [0, 255]:
--- Code: Pascal ---
ImageVolume.NormalizeMax(255);
--- End code ---
Then, to turn ImageVolume into an image, you can call either SaveImageFromVolumeIntoFile or LoadVolumeIntoImage from the neuraldatasets unit.
Alternatively, if your form has a TImage, you can call LoadVolumeIntoTImage from the neuralvolumev unit instead.
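Putting these steps together, here is a minimal, untested sketch of a helper that displays one chosen activation channel. The procedure name ShowActivationChannel and the TImage parameter name ImgOutput are just placeholders, and it assumes the unit already uses neuralvolume, neuralvolumev and ExtCtrls:
--- Code: Pascal ---
// Minimal, untested sketch: show one channel of the activation map in a TImage.
// ShowActivationChannel and ImgOutput are placeholder names.
procedure ShowActivationChannel(PredictedVolume: TNNetVolume;
  ChannelIdx: integer; ImgOutput: TImage);
var
  ImageVolume: TNNetVolume;
begin
  ImageVolume := TNNetVolume.Create();
  try
    // Replicate the chosen channel into R, G and B to get a grayscale image.
    ImageVolume.CopyChannels(PredictedVolume, [ChannelIdx, ChannelIdx, ChannelIdx]);
    // ReLU outputs are non-negative, so scaling to [0, 255] is enough.
    ImageVolume.NormalizeMax(255);
    // Paint the volume into the TImage (LoadVolumeIntoTImage is in neuralvolumev).
    LoadVolumeIntoTImage(ImageVolume, ImgOutput);
  finally
    ImageVolume.Free;
  end;
end;
--- End code ---
You could call it in a loop with ChannelIdx running from 0 to PredictedVolume.Depth - 1 to walk through all 64 channels, or swap LoadVolumeIntoTImage for SaveImageFromVolumeIntoFile if you prefer to write one image file per channel.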
If, instead of plotting channels, you really intend to plot features, you can look at https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/experiments/visualCifar10BatchUpdate/ . In this source code example, the features of the first layer are plotted.
Does this reply solve the question?
Dzandaa:
Hi Joao Paulo,
Thank you very much.
This is just what I want; I will try it tomorrow.
Dzandaa:
Hi,
It's working!!!
Thank you very much.
Next step: OpenCL :)
B->