Neural networks


Backpropagation.

As with gradient descent, the idea is simple:

  1. find the partial derivatives $\partial J/\partial w$ and $\partial J/\partial b$ of the objective function J for each parameter of the neural network (weights $w$, biases $b$)
  2. change the weights in the direction that decreases J (minimize); see the sketch after this list
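A minimal gradient-descent update illustrating step 2, in Python. The function name gradient_descent_step, the learning rate value and the toy arrays are assumptions for illustration, not part of this page; the gradients themselves are what backpropagation will give us.

    import numpy as np

    def gradient_descent_step(params, grads, lr=0.1):
        # one update: move each parameter a small step against its gradient dJ/d(param)
        return [p - lr * g for p, g in zip(params, grads)]

    # toy usage: one weight matrix and one bias vector with made-up gradients
    W, b = np.ones((2, 3)), np.zeros(3)
    dW, db = 0.5 * np.ones((2, 3)), 0.1 * np.ones(3)
    W, b = gradient_descent_step([W, b], [dW, db])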

Can we start from the end (the final total error) and work backwards?

We have to understand how each particular weight impacts the final objective function J by passing through the whole chain of layers.
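Written out with the usual conventions (assumed here: $z_j^{L}$ is the pre-activation of neuron j in layer L, $\sigma$ the activation function, $w_{jk}^{L}$ and $b_j^{L}$ its weights and bias), the chain for a single weight of the last layer reads:

    \frac{\partial J}{\partial w_{jk}^{L}}
    = \frac{\partial J}{\partial a_j^{L}} \,
      \frac{\partial a_j^{L}}{\partial z_j^{L}} \,
      \frac{\partial z_j^{L}}{\partial w_{jk}^{L}}
    = \frac{\partial J}{\partial a_j^{L}} \; \sigma'(z_j^{L}) \; a_k^{L-1}

    \text{with } z_j^{L} = \sum_k w_{jk}^{L} a_k^{L-1} + b_j^{L}, \qquad a_j^{L} = \sigma(z_j^{L})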

Backpropagation derivation.

1. Change in the objective function by shifting $a_k^{L-1}$, the activation of neuron k in layer L-1.

[Figure dJda.jpg: derivative of J with respect to the activation $a_k^{L-1}$]
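In formulas (a sketch under the notation above), a shift of $a_k^{L-1}$ reaches J through every neuron j of layer L that it feeds, so the contributions are summed:

    \frac{\partial J}{\partial a_k^{L-1}}
    = \sum_j \frac{\partial J}{\partial z_j^{L}} \, \frac{\partial z_j^{L}}{\partial a_k^{L-1}}
    = \sum_j \frac{\partial J}{\partial z_j^{L}} \, w_{jk}^{L}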

2. Neuron error in layer L-1 as a function of the neuron error in layer L.

[Figure dJdz.jpg: derivative of J with respect to the pre-activation $z$ (neuron error)]
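Defining the neuron error as $\delta_j^{L} = \partial J / \partial z_j^{L}$ (a standard convention, assumed here rather than stated on this page), the previous relation gives the error of layer L-1 in terms of the error of layer L:

    \delta_k^{L-1}
    = \frac{\partial J}{\partial z_k^{L-1}}
    = \frac{\partial J}{\partial a_k^{L-1}} \, \sigma'(z_k^{L-1})
    = \Big( \sum_j w_{jk}^{L} \, \delta_j^{L} \Big) \, \sigma'(z_k^{L-1})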

Recall $z_j^{L} = \sum_k w_{jk}^{L} a_k^{L-1} + b_j^{L}$ and $a_j^{L} = \sigma(z_j^{L})$.

[Figure backpropa_plan.PNG: overall plan of the backpropagation computation]
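Putting the two steps together, here is a minimal backpropagation sketch in Python for a fully connected network with sigmoid activations and squared-error loss J = 0.5 * ||a^L - y||^2. All of these concrete choices, and the function names, are assumptions made for illustration; the gradient lists it returns feed directly into the gradient-descent step above.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def backprop(x, y, weights, biases):
        # forward pass: store pre-activations z and activations a for every layer
        a, activations, zs = x, [x], []
        for W, b in zip(weights, biases):
            z = W @ a + b
            zs.append(z)
            a = sigmoid(z)
            activations.append(a)

        grad_W = [np.zeros_like(W) for W in weights]
        grad_b = [np.zeros_like(b) for b in biases]

        # output-layer error: delta^L = dJ/da^L * sigma'(z^L)
        delta = (activations[-1] - y) * sigmoid(zs[-1]) * (1 - sigmoid(zs[-1]))
        grad_W[-1] = np.outer(delta, activations[-2])
        grad_b[-1] = delta

        # propagate backwards: delta^{l} = (W^{l+1}.T @ delta^{l+1}) * sigma'(z^{l})
        for l in range(2, len(weights) + 1):
            z = zs[-l]
            delta = (weights[-l + 1].T @ delta) * sigmoid(z) * (1 - sigmoid(z))
            grad_W[-l] = np.outer(delta, activations[-l - 1])
            grad_b[-l] = delta

        return grad_W, grad_b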