Oh, great!

D7 support requires very few changes, such as removing inline directives, removing StrictDelimiter, and a few more lines.

I finally got it kind of working. The classification isn't very precise, but at least it's something!

So of course I can share my code, but it's far from ideal.

My convolutions have biases, of course, and I think it already works. Still, I need to compare Keras and Pascal predictions to check whether there are any differences.
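
To make that comparison concrete, here is a small sketch of how I'd measure the gap. It assumes both sides dump their predictions to a CSV with one semicolon-separated row of floats per input sample; the file names and that format are my own convention, not part of either library.

```python
import numpy as np

def max_prediction_diff(keras_csv, pascal_csv):
    """Largest absolute difference between two prediction dumps.

    Each file is assumed to hold one semicolon-separated row of
    floats per input sample (my own convention, not either library's)."""
    a = np.loadtxt(keras_csv, delimiter=';', ndmin=2)
    b = np.loadtxt(pascal_csv, delimiter=';', ndmin=2)
    assert a.shape == b.shape, "prediction shapes differ"
    return float(np.abs(a - b).max())
```

With biases exported and batch norm out of the way, I'd expect this to come out near float32 rounding error if the weight transfer is correct.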

I also removed all batch normalization layers from the network to avoid possible differences.
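
Instead of removing the batch normalization layers entirely, it's usually possible to fold them into the preceding convolution, since at inference time BN is just a per-channel affine transform. Here's a sketch with plain NumPy (my own helper, not from either library; `eps` must match the Keras layer's `epsilon`, which defaults to 1e-3):

```python
import numpy as np

def fold_batchnorm(kernel, bias, gamma, beta, mean, var, eps=1e-3):
    """Fold a BatchNormalization layer that directly follows a Conv2D
    into the conv's kernel and bias, so the BN layer can be dropped
    at export time.

    kernel: (H, W, C_in, C_out) as Keras stores it; the other arguments
    are the per-output-channel vectors from the BN layer."""
    scale = gamma / np.sqrt(var + eps)            # per output channel
    return kernel * scale, (bias - mean) * scale + beta
```

That way the exported conv weights already include the BN effect, and the accuracy loss from just deleting BN goes away.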

And my Keras network had a Flatten layer, which I didn't find in your library, but it seems that a Dense layer works right after a Convolution layer (though I'm not sure the order of the weights gets transferred correctly). Removing the Flatten layer in Keras produces the wrong dimensions after the Dense layer, so maybe you could implement a similar Flatten layer, or confirm it should work correctly right away?
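
On the weight order: Keras's Flatten on a channels-last conv output of shape (H, W, C) is a plain row-major flatten, so Dense weight row k corresponds to position (y, x, c) = unravel(k). If your library stores the conv output in some other order, the Dense rows would need a matching permutation. The channel-major target order below is only an assumed example (I don't know your actual internal layout), but it shows how to build and verify such a permutation:

```python
import numpy as np

H, W, C, UNITS = 2, 2, 3, 4

# Flat index of each (y, x, c) position under Keras's row-major flatten.
idx = np.arange(H * W * C).reshape(H, W, C)

# Hypothetical target layout: conv output stored channel-major (C, H, W).
perm = np.transpose(idx, (2, 0, 1)).flatten()

x = np.random.rand(H, W, C)                  # a fake conv activation
dense_w = np.random.rand(H * W * C, UNITS)   # Keras Dense kernel: (inputs, units)

# Permuting the Dense rows the same way keeps the layer output identical.
out_keras = x.flatten() @ dense_w
out_other = np.transpose(x, (2, 0, 1)).flatten() @ dense_w[perm]
assert np.allclose(out_keras, out_other)
```

If the two layouts happen to agree, `perm` is the identity and no reordering is needed, which would explain Dense working right after Convolution as-is.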

Here goes the Python Keras export part:

def SaveModelToTXT(model, filename):
    print("Saving model to", filename, "...")
    print("Found", len(model.layers), "layers")
    for i in range(len(model.layers)):
        layer = model.layers[i]
        layerFileName = filename + "_layer" + str(i) + ".csv"
        print(layer.name, ' to ', layerFileName)
        if isinstance(layer, keras.layers.Conv2D):
            with open(layerFileName, "w") as f:
                # w[0]: kernels of shape (H, W, C_in, C_out); w[1]: biases (C_out,)
                w = layer.get_weights()
                for filter in range(w[0].shape[3]):
                    bias = w[1][filter]
                    weights = w[0][:, :, :, filter].flatten()
                    title = "Layer" + str(i) + " Filter " + str(filter) + " WShape:" + str(w[0].shape)
                    # print(title)
                    dimensions = '1;' + str(w[0].shape[0]) + ';' + str(w[0].shape[1]) + ';' + str(w[0].shape[2])
                    weights = ["%.6f" % number for number in weights]
                    weights = ';'.join(weights)
                    # ']' separates the bias from the dimensions and the weights
                    f.write(str(bias) + ']' + dimensions + ';' + weights + "\n")
        if isinstance(layer, keras.layers.Dense):
            with open(layerFileName, "w") as f:
                # w[0]: kernel of shape (inputs, units); w[1]: biases (units,)
                w = layer.get_weights()
                print(w[0].shape)
                for filter in range(w[0].shape[1]):
                    bias = w[1][filter]
                    weights = w[0][:, filter].flatten()
                    title = "Layer" + str(i) + " Filter " + str(filter) + " WShape:" + str(w[0].shape)
                    dimensions = '1;' + str(w[0].shape[0]) + ';1;1'
                    weights = ["%.6f" % number for number in weights]
                    weights = ';'.join(weights)
                    f.write(str(bias) + ']' + dimensions + ';' + weights + "\n")
    print("Done!")

SaveModelToTXT(model, modelPath + "ModelNetworkWeights")
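
For what it's worth, here's a quick sanity check I can run on the exported lines. It's my own helper, matching the format written above: the bias, then ']', then four dimensions and the weights, all ';'-separated, with the weight count expected to equal the product of the dimensions.

```python
def check_neuron_line(line):
    """Parse one exported neuron line 'bias]d0;d1;d2;d3;w0;w1;...'
    and verify the weight count matches the declared dimensions."""
    bias_str, rest = line.strip().split(']')
    fields = rest.split(';')
    dims = [int(v) for v in fields[:4]]
    weights = [float(v) for v in fields[4:]]
    expected = dims[0] * dims[1] * dims[2] * dims[3]
    assert len(weights) == expected, (len(weights), expected)
    return float(bias_str), dims, weights
```

Running this over every line of every exported CSV catches dimension mismatches before they ever reach the Pascal side.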

And here's the Pascal reading of the weights:

procedure TSampleCNN.LoadWeights(FN: String);
var
  I, N: Integer;
  S: String;
  SL: TStringList;
  Size, SX, SY: Integer;
  LayerFN: TFileNameUnicodeString;
begin
  for I := 0 to NN.Layers.Count - 1 do
  begin
    if (NN.Layers[I] is TNNetConvolutionLinear) or
       (NN.Layers[I] is TNNetConvolutionReLU) or
       (NN.Layers[I] is TNNetFullConnectLinear) then
    begin
      SL := TStringList.Create;
      try
        try
          LayerFN := FN + '_layer' + IntToStr(I) + '.csv';
          if not FileExistsUTF8(LayerFN) then
          begin
            ShowMessage('File not found: ' + LayerFN);
            Continue;
          end;
          SL.LoadFromFile(LayerFN);
          if NN.Layers[I].Neurons.Count <> SL.Count then
          begin
            ShowMessage('Number of neurons mismatch: ' +
              IntToStr(NN.Layers[I].Neurons.Count) + ' vs ' + IntToStr(SL.Count));
          end
          else
          begin
            for N := 0 to NN.Layers[I].Neurons.Count - 1 do
            begin
              // Remember the expected weight dimensions so we can verify
              // that LoadFromString didn't change them.
              Size := NN.Layers[I].Neurons[N].Weights.Size;
              SX := NN.Layers[I].Neurons[N].Weights.SizeX;
              SY := NN.Layers[I].Neurons[N].Weights.SizeY;
              NN.Layers[I].Neurons[N].LoadFromString(SL[N]);
              if NN.Layers[I].Neurons[N].Weights.Size <> Size then
                ShowMessageU('Incorrect size loaded to layer ' + IntToStr(I));
              if NN.Layers[I].Neurons[N].Weights.SizeX <> SX then
                ShowMessageU('Incorrect size loaded to layer ' + IntToStr(I));
              if NN.Layers[I].Neurons[N].Weights.SizeY <> SY then
                ShowMessageU('Incorrect size loaded to layer ' + IntToStr(I));
            end;
            NN.Layers[I].AfterWeightUpdate;
          end;
        except
          ShowMessage('Exception loading weights for layer ' + IntToStr(I));
        end;
      finally
        // try..finally ensures SL is freed even when Continue skips
        // the rest of the loop body.
        SL.Free;
      end;
    end;
  end;
end;

Also, I was thinking: how much could this be optimized if the network is used only for prediction? I've got more projects that would benefit from adding a NN, but memory consumption will be much higher there...

Oleg.