
Author Topic: A neural network framework (with autograd)  (Read 558 times)

ghora

  • Newbie
  • Posts: 3
A neural network framework (with autograd)
« on: February 12, 2020, 10:59:05 am »
Hi,

I am working on (yet) a(nother) framework for building neural networks in Object Pascal. It supports automatic gradient computation (first-order derivatives only for now), which lets you design your model at various levels of abstraction rather than only through the predefined layers, i.e., something you could also do in Keras or PyTorch. You may use the high-level API like this:

Code: Pascal
{ Load and prepare the data. }
Dataset := ReadCSV('data.csv');
X       := GetColumnRange(Dataset, 0, 4);
X       := StandardScaler(X);

{ Convert raw labels to one-hot vectors. }
Enc := TOneHotEncoder.Create;
y   := GetColumn(Dataset, 4);
y   := Enc.Encode(Squeeze(y));

{ Initialize the model. }
NNModel := TModel.Create([
  TDenseLayer.Create(NInputNeuron, 32),
  TReLULayer.Create(),
  TDropoutLayer.Create(0.2),
  TDenseLayer.Create(32, 16),
  TReLULayer.Create(),
  TDropoutLayer.Create(0.2),
  TDenseLayer.Create(16, NOutputNeuron),
  TSoftMaxLayer.Create(1)
]);

{ Initialize the optimizer. There are several other optimizers too. }
Optimizer := TAdamOptimizer.Create;
Optimizer.LearningRate := 0.003;

for i := 0 to MAX_EPOCH - 1 do
begin
  { Make a prediction and compute the loss. }
  yPred := NNModel.Eval(X);
  Loss  := CrossEntropyLoss(yPred, y) + L2Regularization(NNModel);

  { Update the model parameters w.r.t. the loss. }
  Optimizer.UpdateParams(Loss, NNModel.Params);
end;

or go down to a lower level and define your own model and loss function by writing the math directly, like this:

Code: Pascal
{ Weights and biases. }
W1 := RandomTensorNormal([NInputNeuron, NHiddenNeuron]);
W2 := RandomTensorNormal([NHiddenNeuron, NOutputNeuron]);
b1 := CreateTensor([1, NHiddenNeuron], (1 / NHiddenNeuron ** 0.5));
b2 := CreateTensor([1, NOutputNeuron], (1 / NOutputNeuron ** 0.5));

{ Since we need the gradients of the weights and biases, it is mandatory to set
  the RequiresGrad property to True. It can also be set individually for each
  parameter, e.g., `W1.RequiresGrad := True;`. }
SetRequiresGrad([W1, W2, b1, b2], True);

Optimizer := TAdamOptimizer.Create;
Optimizer.LearningRate := 0.003;

Lambda := 0.001;

for i := 0 to MAX_EPOCH - 1 do
begin
  { Make the prediction. }
  yPred := SoftMax(ReLU(X.Dot(W1) + b1).Dot(W2) + b2, 1);

  { Compute the cross-entropy loss. }
  CrossEntropyLoss := -Mean(y * Log(yPred));

  { Your usual L2 regularization term. }
  L2Reg := Sum(W1 * W1) + Sum(W2 * W2);

  TotalLoss := CrossEntropyLoss + Lambda * L2Reg;

  { Update the network weights. }
  Optimizer.UpdateParams(TotalLoss, [W1, W2, b1, b2]);
end;
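For reference, the quantity being minimized in the loop above is the usual regularized cross-entropy; in the notation of the code it is

$$L \;=\; -\,\mathrm{mean}\left(y \odot \log \hat{y}\right) \;+\; \lambda\left(\lVert W_1 \rVert_F^2 + \lVert W_2 \rVert_F^2\right),$$

i.e., exactly what the `CrossEntropyLoss`, `L2Reg`, and `TotalLoss` lines compute.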

You can even skip the predefined optimizer and define your own weight update rule:

Code: Pascal
for i := 0 to MAX_EPOCH - 1 do
begin
  { Zero the gradients of all parameters from the previous iteration. }
  ZeroGradGraph(TotalLoss);

  { Make the prediction. }
  yPred := SoftMax(ReLU(X.Dot(W1) + b1).Dot(W2) + b2, 1);

  { Compute the cross-entropy loss. }
  CrossEntropyLoss := -Mean(y * Log(yPred));

  { Your usual L2 regularization term. }
  L2Reg := Sum(W1 * W1) + Sum(W2 * W2);

  TotalLoss := CrossEntropyLoss + Lambda * L2Reg;

  { Compute all gradients by simply triggering the `Backpropagate` method on
    `TotalLoss`. }
  TotalLoss.Backpropagate;

  { Vanilla gradient descent update rule. }
  W1.Data := W1.Data - LearningRate * W1.Grad;
  W2.Data := W2.Data - LearningRate * W2.Grad;
  b1.Data := b1.Data - LearningRate * b1.Grad;
  b2.Data := b2.Data - LearningRate * b2.Grad;
end;
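To make the autograd part concrete, here is a minimal sketch of computing a gradient for a tiny expression by hand. The variable names and values are made up for illustration, and it assumes scalar-tensor arithmetic works the way it is used above:

Code: Pascal
{ A toy expression: z = sum(x * x + 3 * x), so dz/dx = 2 * x + 3. }
x := CreateTensor([1, 3], 2.0);   { a 1x3 tensor filled with 2.0 }
x.RequiresGrad := True;

z := Sum(x * x + 3 * x);

{ Triggering Backpropagate fills x.Grad with dz/dx. }
z.Backpropagate;

{ Each element of x.Grad should now hold 2 * 2.0 + 3 = 7.0. }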

If you are interested, kindly check out the project here: https://github.com/ariaghora/noe. Note that, at this stage, the framework is still a proof of concept, so please bear with the leaks here and there, as well as the lack of rigorous optimization. :-[

Cheers

mr-highball

  • Full Member
  • ***
  • Posts: 175
    • Highball Github
Re: A neural network framework (with autograd)
« Reply #1 on: February 12, 2020, 02:25:08 pm »
neat, repo starred

nouzi

  • Full Member
  • ***
  • Posts: 178
Re: A neural network framework (with autograd)
« Reply #2 on: February 12, 2020, 02:28:39 pm »
hi @ghora
nice
see this
https://forum.lazarus.freepascal.org/index.php/topic,39049.0.html

my english is bad
Lazarus 2.0.6 free pascal 3.0.4
Lazarus trunk  free pascal trunk 
System : linux mint 19.3 64bit  windows 7 64bit

ghora

  • Newbie
  • Posts: 3
Re: A neural network framework (with autograd)
« Reply #3 on: February 12, 2020, 02:55:13 pm »
Quote from: mr-highball on February 12, 2020, 02:25:08 pm
neat, repo starred
Thanks! Your ezjson is neat too. Starred it. Now I'm considering using it to save the neural net models, btw  :D.

Quote from: nouzi on February 12, 2020, 02:28:39 pm
hi @ghora
nice
see this
https://forum.lazarus.freepascal.org/index.php/topic,39049.0.html
Yeap! I've been following CAI for a while. It is an awesome project. The integration with OpenCL is nice and I believe the performance is superb. Tbh, CAI inspired me to start my project. However, I'm aiming for a higher degree of flexibility to design custom layers and custom loss functions (be it for classification or regression), and to go from a research paper (a bunch of equations) to prototype code. That's why I highlighted automatic gradient computation. Idk if CAI supports this; I didn't get a chance to scan through the source code thoroughly. Let me know if I missed something. :)
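For instance, a custom regression loss can be written with the same low-level ops shown in the first post. A minimal sketch (XTrain, yTrain, and the sizes here are placeholder names, not part of noe, and it assumes tensor subtraction is available alongside the ops shown above):

Code: Pascal
{ A single linear layer trained with a hand-written mean-squared-error loss. }
W := RandomTensorNormal([NInputNeuron, 1]);
b := CreateTensor([1, 1], 0.0);
SetRequiresGrad([W, b], True);

Optimizer := TAdamOptimizer.Create;
Optimizer.LearningRate := 0.01;

for i := 0 to MAX_EPOCH - 1 do
begin
  { Linear prediction. }
  yPred := XTrain.Dot(W) + b;

  { Custom loss: mean squared error, written directly as math. }
  MSELoss := Mean((yPred - yTrain) * (yPred - yTrain));

  { Autograd provides the gradients for this custom loss as well. }
  Optimizer.UpdateParams(MSELoss, [W, b]);
end;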

mr-highball

  • Full Member
  • ***
  • Posts: 175
    • Highball Github
Re: A neural network framework (with autograd)
« Reply #4 on: February 12, 2020, 03:11:29 pm »
Quote from: ghora on February 12, 2020, 02:55:13 pm
neat, repo starred
Thanks! Your ezjson is neat too. Starred it. Now I'm considering using it to save the neural net models, btw  :D.

Cool, still working through some rtti quirks, but it's getting there :)
updates made here: https://forum.lazarus.freepascal.org/index.php/topic,47982.msg344879.html#msg344879

Root

  • Newbie
  • Posts: 4
Re: A neural network framework (with autograd)
« Reply #5 on: February 12, 2020, 04:53:43 pm »
great,
Are you planning to implement these two algorithms?

Recurrent Neural Networks
LSTM Networks

Thank you  ;)

ghora

  • Newbie
  • Posts: 3
Re: A neural network framework (with autograd)
« Reply #6 on: February 12, 2020, 05:16:01 pm »
Quote from: Root on February 12, 2020, 04:53:43 pm
great,
Are you planning to implement these two algorithms?

Recurrent Neural Networks
LSTM Networks

Thank you  ;)

Yes. Those cells (including the GRU cell) are on my list, at least once I get what I am currently implementing (a 2D convolution layer) to work.

 
