Quote: Try here: http://forum.lazarus.freepascal.org/index.php/topic,32620.msg210473.html#msg210473
Nope, you missed the point. That is actually very nice and concise code... Short is better than long. Playing with it...
Would you be so kind as to explain why? I don't get it. What's wrong with this simple back-propagation Pascal code that you get after following the first link?
Quote: Try here: http://forum.lazarus.freepascal.org/index.php/topic,32620.msg210473.html#msg210473
Nope, you missed the point.
:) Hello :)
Ah... if only you had made it 7 years earlier, I might have gotten an A in that class :P
Just to let you know that I've just implemented a backpropagation algorithm in Lazarus:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ubackpropagation.pas
:) Have fun :)
Thanks for the reply and explanation.
But I'm not sure if Self.Size represents the number of records in the List, like List.Count.
Anyone interested in TensorFlow instead of rolling your own neural network?
I'm currently writing a paper for peer review with impressive results regarding CIFAR-10 classification.
What was the reason for implementing something like this?
Ok, I just saw that Size represents the product of all dimensions of the Volume:
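As a hedged illustration of that point (variable names here are my assumptions, not CAI's actual fields): for a 32x32 RGB volume, Size is the product of the three dimensions rather than a List.Count-style record count.

```pascal
program VolumeSizeDemo;
// Illustrative sketch only: Size as the product of a volume's
// dimensions (e.g. a 32x32x3 CIFAR-10 image), not a record count.
var
  SizeX, SizeY, Depth, Size: Integer;
begin
  SizeX := 32;
  SizeY := 32;
  Depth := 3;
  Size := SizeX * SizeY * Depth;  // 3072 elements in total
  WriteLn('Size = ', Size);
end.
```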
Hi Phil,
Thank you for sharing the links; they give me an opportunity to see an implementation for MNIST.
I needed this:
TNeuralFloatArr = array[0..300000000] of TNeuralFloat;
To be able to declare this:
TNeuralFloatArrPtr = ^TNeuralFloatArr;
With this type, I can then use:
[...]
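A minimal sketch of how such a pointer type can be used (everything beyond the two declarations above is my assumption): the huge static array type is never allocated; it only lets a pointer into dynamically allocated memory be indexed like an array.

```pascal
program PtrTypeDemo;
// Sketch: the big static array type is only a typing device for the
// pointer; the 300000001-element array itself is never allocated.
type
  TNeuralFloat = Single;
  TNeuralFloatArr = array[0..300000000] of TNeuralFloat;
  TNeuralFloatArrPtr = ^TNeuralFloatArr;
var
  Buf: array of TNeuralFloat;   // the real, dynamically sized storage
  P: TNeuralFloatArrPtr;
begin
  SetLength(Buf, 1000);         // only 1000 floats are ever allocated
  P := @Buf[0];                 // typed view over the dynamic array
  P^[999] := 1.5;               // array-style access via the pointer
  WriteLn(P^[999]:0:1);
end.
```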
Yes, I saw that. But are you aware that this array clogs 1.2 GB of the max. 2 GB global memory? Why does it need to have 300 million elements? Why not 250 million, 100 million, or 10 million? Is there any specific reason for that number?
https://macpgmr.github.io/MacXPlatform/PascalForTensorFlow.html
Most processes are multi-step (a flowchart): they require multiple actions in sequence (probably each a neural network as well) to get a result. How do you calculate a learning score from that?
Quote: But how do you pinpoint the exact step that was weakest and should be tweaked most? Or would you need a kind of unit test for each step?
This is done via derivatives/slope/gradient and delta/error/distance/loss. If any given neuron/weight has a big slope with a big delta, there will be a big learning correction.
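The rule of thumb above can be sketched as a single weight update (the learning rate, names, and values are illustrative, not CAI's API):

```pascal
program WeightUpdateDemo;
// Sketch of gradient-descent-style learning: the correction applied to
// a weight grows with both the slope (derivative) and the delta (error).
var
  Weight, Slope, Delta, LearningRate: Single;
begin
  Weight := 0.3;
  Slope := 0.8;        // big slope ...
  Delta := 0.5;        // ... times big delta ...
  LearningRate := 0.1;
  Weight := Weight - LearningRate * Slope * Delta;  // ... big correction
  WriteLn(Weight:0:3); // a near-zero slope or delta means little learning
end.
```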
Is the learning always a separate pass that creates a file with biases and weights, or can the network keep on learning as it goes? It would need some feedback for that, which is probably generated by a different process, so it might have a different format and might have to be processed itself before it becomes useful. How would you do that? Use another neural network to process the feedback? But that one should receive learning feedback as well.
As an example, when you are happy with the quality of a "face detection" NN, you can freeze your NN, save your NN to file, compile the code to an Android device (resource limited) and run just the forward pass in your mobile device.
@Phil
Quote: https://macpgmr.github.io/MacXPlatform/PascalForTensorFlow.html
Very interesting project. I might need your help to properly benchmark CAI against TF in the future.
I have a question for you: have you benchmarked PascalForTensorFlow against Python with TF? I would presume that Pas2TF is a lot faster than Python + TF.
In human vision, first we detect edges and then shapes. In machine vision, it's exactly the same.
Quote: But you are aware that this array clogs 1.2 GB of the max. 2 GB global memory?
I think that SymbolicFrank got the idea. This array is never ever allocated; I just need a pointer type. The big number is there just to avoid range checks when debugging. BTW, I think that I should replace the big number with a constant, as shown in SymbolicFrank's post.
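That suggested cleanup could look like this (the constant name is my invention, not CAI's actual identifier):

```pascal
// Sketch: give the upper bound a name so the "big number" appears in
// exactly one place. csMaxNeuralArrIdx is an invented name.
const
  csMaxNeuralArrIdx = 300000000;
type
  TNeuralFloat = Single;
  TNeuralFloatArr = array[0..csMaxNeuralArrIdx] of TNeuralFloat;
  TNeuralFloatArrPtr = ^TNeuralFloatArr;
```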
I don't have a Python version of MNIST, just the Swift version to check my Pascal version's results against. I would guess that almost all processing takes place within the TensorFlow library, not in the calling code, so the choice of language shouldn't matter much for this example (mostly iterative execution of tensor operations by TensorFlow; not much really going on in the Swift or Pascal code).
mw108, it is already dynamic. There is no array taking up 1.2 GB.
Long story short: All good. Thanks for the clarification. Forget what I said. :D
Although that is a smart approach, it is not a fast one. Speed is usually of foremost importance here, and string manipulation does not help.
Now we just need a non-TensorFlow version of MNIST to test against.
I did an MNIST implementation using the CAI framework.
The NN layout is based on one of the CAI CIFAR learning applications, inspired by TF.
Ok, I did some reading and it seems that a batch of 128 indeed completes in approx. 0.5 s in TF.
https://stackoverflow.com/questions/48035125/speed-of-logistic-regression-on-mnist-with-tensorflow
That's really fast, yeah. And CAI is very slow compared to that. But then, in the end, it is still a WIP. Maybe João can make it faster over time.
And I personally don't understand TF at all. IMHO it is very complicated for beginners to get a handle on.
Quote: https://towardsdatascience.com/machine-learning-with-swift-for-tensorflow-9167df128912?gi=aadadd2fbc78
Ok, I will check that out. Thanks. :)