Forum > General

Trying to get started with deep neural networks


mikerabat:
First... great work with the conscious neural network :)  Mr. Schuler! ...
https://github.com/joaopauloschuler/neural-api


My latest research was in the direction of automatic apnea detection using ECG only. There are a few
papers around that heavily depend on deep convolutional neural networks, and since my own
tests with simpler implementations didn't get over 82% accuracy, I thought I'd give convolutional networks a try
(they promise over 96% accuracy... wow).

So... I started, but ran into some heavy problems with access violations and other issues while trying to map the papers'
suggested network layouts to the framework... Most likely I didn't understand the framework's parameters, I guess.

There are two papers that I wanted to try out. The first uses a publicly available ECG apnea database:
"Automatic Detection of Obstructive Sleep Apnea Events Using a Deep CNN-LSTM Model" by Zhang et al.

The second one - a more straightforward approach, I guess - is:
"Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead ECG Using a CNN",
but it uses a non-public database.

Here is what I'm stuck with:

First, these papers use batch normalization as a first step, which I think can be easily mapped to nnet.AddMovingNorm, right?
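For reference, here is a tiny NumPy sketch of what a normalization layer computes conceptually: batch normalization standardizes each feature using batch statistics (before its learned scale and shift), while a moving-norm variant would track running estimates instead. Whether AddMovingNorm matches this exactly is an assumption on my part:

```python
import numpy as np

def batch_normalize(x, eps=1e-5):
    """Normalize a batch of 1-D signals to zero mean, unit variance
    per feature over the batch axis, as batch normalization does
    before applying its learned scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

# 32 ECG windows of 1000 samples, deliberately shifted and scaled
batch = np.random.randn(32, 1000) * 5.0 + 2.0
normed = batch_normalize(batch)
print(normed.mean(), normed.std())  # close to 0 and 1
```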

The first paper actually splits the input path into 3 convolutional layers with different kernel sizes and concatenates them.
In a second step they take the normalized input again and add it to the output of the concatenated convolutional layers. I think
this might not work, since I cannot imagine how the input-space sizes can match...
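If it helps: the usual trick in these multi-kernel papers is "same" padding, i.e. pad = (kernel - 1) / 2 for odd kernels, so every branch keeps the input length and element-wise addition stays possible. A quick sketch of the length arithmetic, assuming the standard convolution formula output = (len + 2*pad - kernel) / stride + 1 (whether the framework follows exactly this is my assumption):

```python
def conv_out_len(length, kernel, pad=0, stride=1):
    """Output length of a 1-D convolution (floor division)."""
    return (length + 2 * pad - kernel) // stride + 1

length = 1000          # 100 Hz * 10 s ECG window
kernels = [125, 15, 5]

# 'valid' convolution (pad = 0): every branch ends up with a different length
print([conv_out_len(length, k) for k in kernels])  # [876, 986, 996]

# 'same' padding: all branches keep the input length, so they can be summed
print([conv_out_len(length, k, pad=(k - 1) // 2) for k in kernels])  # [1000, 1000, 1000]
```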

The next step is a standard max-pooling layer and a densely connected layer (leaky ReLU), and here I'm stuck a second time...
Adding a fully connected layer here would result in hundreds of millions of weights, obviously overwhelming the PC.
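The weight count of a fully connected layer is (number of inputs) x (number of outputs), so on a still-long time axis it explodes quickly. Rough numbers for this net - the exact sizes depend on how the framework rounds convolution and pooling outputs, which I am guessing at here:

```python
# approximate branch lengths after valid conv + max pool of 2:
# (1000 - k + 1) // 2 for k = 125, 15, 5
b1, b2, b3 = 438, 493, 498
concat_x = b1 + b2 + b3    # 1429 positions, 24 channels after the concat
mp_x = concat_x // 3       # 476 after the max pool of 3

inputs = mp_x * 24         # 11,424 input values into the dense layer
outputs = mp_x * 48        # a full connect of SizeX x 1 x 48 outputs
weights = inputs * outputs
print(f"{weights:,}")      # ~261 million weights

# collapsing the time axis first (e.g. global pooling down to 1 x 1 x 24)
# tames the dense layer to a tiny weight matrix
print(24 * 48)             # 1,152 weights
```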

As a last point: when I try to run the net as I created it below, I get exceptions in
procedure TVolume.CopyNoChecks(Original: TVolume);
(obviously running in debug mode ;) )
when the net is copied over while using threading.

Here is the code that creates the net:

numFeatures := 1000;  // 100 Hz ECG * 10 seconds

inputLayer := TNNetInput.Create(numFeatures);
NN.AddLayer( inputLayer );
normLayer := NN.AddMovingNorm(False, inputLayer);

// three parallel convolutional branches with different kernel sizes
NN.AddLayerAfter( TNNetConvolutionReLU.Create( 24, 125, 0, 1 ), normLayer );
B1 := NN.AddLayer( TNNetMaxPool.Create( 2, 0 ) );

NN.AddLayerAfter( TNNetConvolutionReLU.Create( 24, 15, 0, 1 ), normLayer );
B2 := NN.AddLayer( TNNetMaxPool.Create( 2, 0 ) );

NN.AddLayerAfter( TNNetConvolutionReLU.Create( 24, 5, 0, 1 ), normLayer );
B3 := NN.AddLayer( TNNetMaxPool.Create( 2, 0 ) );

NN.AddLayer( TNNetConcat.Create( B1.Output.SizeX + B2.Output.SizeX + B3.Output.SizeX, 1, 24, [B1, B2, B3] ) );
mp := NN.AddLayer( TNNetMaxPool.Create( 3, 0 ) );

//NN.AddLayer( TNNetSum.Create( [normLayer, mp] ) ); // does not work - the sizes do not match; maybe introduce a resampling layer?
//NN.AddLayer( TNNetFullConnectReLU.Create( mp.Output.SizeX, 1, 48 ) ); // should be a fully connected leaky ReLU ... this actually creates hundreds of millions of weights
NN.AddLayer( TNNetConvolutionReLU.Create( 48, 100, 0, 1 ) ); // stand-in for the fully connected leaky ReLU
NN.AddLayer( TNNetDropout.Create( 0.25 ) );

NN.AddLayer( TNNetMaxChannel.Create );
//NN.AddLayer( TNNetMaxPool.Create( 2 ) );
NN.AddLayer( TNNetFullConnectLinear.Create(2) );
NN.AddLayer( TNNetSoftMax.Create );
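To debug the exception, it may help to trace the expected shape after every layer on paper. Here is a small Python walk-through of the layout above, assuming valid convolutions (output = input - kernel + 1) and floor-dividing pools - if the framework rounds differently, the numbers shift slightly. Notably, the three branches come out with different lengths, which fits the size-mismatch trouble around the concat and the sum:

```python
def conv_len(n, k):  # valid 1-D convolution, stride 1
    return n - k + 1

def pool_len(n, p):  # non-overlapping max pooling (floor)
    return n // p

n = 1000
branches = {k: pool_len(conv_len(n, k), 2) for k in (125, 15, 5)}
print(branches)  # {125: 438, 15: 493, 5: 498} - three different lengths

concat_x = sum(branches.values())  # 1429 x 1 x 24 after TNNetConcat
mp_x = pool_len(concat_x, 3)       # 476 x 1 x 24 after TNNetMaxPool(3)
print(concat_x, mp_x)

# the commented-out TNNetSum cannot work: normLayer keeps the input
# shape (1000 x 1 x 1), while mp is 476 x 1 x 24 - neither the length
# nor the depth matches, so element-wise addition is impossible here
```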


Does anyone have a hint here for me?

kind regards
   Mike

Hightower:
1. What are DNNs?
A Deep Neural Network (DNN) is a class of artificial neural network that solves a variety of problems using multilayer perceptrons. They have been successful at tasks like speech recognition, computer vision, natural language processing, robotics, reinforcement learning, translation, game playing, generative art, etc.
2. What do they learn?
A DNN learns to perform complex functions through multiple layers of neurons. Unlike shallow networks, DNNs can learn hierarchical feature representations directly from raw data, which makes them useful where hand-engineered features are hard to design. The network works by passing the input data through several layers of nonlinear transformations, each feeding the next. The output layer produces the prediction (for classification, typically one neuron per class). Through backpropagation, the errors between the desired output values and the actual outputs are propagated back to update the weights.
3. How does it work?
As mentioned above, the first step in creating a DNN is to define the structure of the network. This can be done manually, by defining each connection between nodes, or automatically, for example via a genetic algorithm. Once this has been done, the input data set is fed into the network. Each node receives inputs from previous nodes - some nodes receive multiple inputs, others just one. When all inputs are received, the node applies a transformation function to these values; in many cases this is a linear combination of the inputs followed by a nonlinearity. The transformed values are passed on to other nodes, and the process repeats until the final result is obtained. After each pass, the error between the expected and actual results is calculated. If this error is below a certain threshold, the model is considered to have learned the target task; otherwise, training is repeated until the error drops below the specified threshold.
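The description above can be condensed into a minimal runnable sketch: a two-layer network doing the forward pass (linear combination plus nonlinearity per node) and a backpropagation step that nudges the weights to reduce the error, here on the classic XOR toy task:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: XOR, a task a single-layer network cannot learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# one hidden layer of 8 tanh units, one sigmoid output neuron
W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # nonlinear transformation
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # output neuron
    return h, out

losses = []
for _ in range(2000):
    h, out = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))  # squared error
    # backpropagation: gradients of the error w.r.t. each weight
    # (up to a constant factor absorbed by the learning rate 0.5)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(losses[0], losses[-1])  # the error drops as training is repeated
```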
