Neural networks
Backpropagation.
As with gradient descent, the idea is simple:
- find \(\partial J/\partial w\) for each parameter in the neural network (weights and biases)
- change the weights so that J improves, i.e. is minimized (see the sketch after this list)
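A rough numeric sketch of these two steps on a tiny linear model (all names here, X, y, W, b, lr, are illustrative assumptions, not taken from this page):

import numpy as np

# Tiny linear model: pred = X @ W + b, objective J = mean squared error.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))   # 4 samples, 3 features
y = rng.normal(size=(4, 1))   # targets
W = rng.normal(size=(3, 1))   # weights
b = np.zeros((1, 1))          # bias
lr = 0.1                      # learning rate

# Forward pass: compute the objective J.
pred = X @ W + b
J = np.mean((pred - y) ** 2)

# Step 1: find the gradients dJ/dW and dJ/db.
grad_pred = 2.0 * (pred - y) / len(X)
grad_W = X.T @ grad_pred
grad_b = grad_pred.sum(axis=0, keepdims=True)

# Step 2: change the parameters against the gradient to decrease J.
W -= lr * grad_W
b -= lr * grad_b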
Can we start from the end (final total error)?
We have to understand how each particular weight affects the final objective function J, with its influence passing through the whole chain of layers.
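As an illustration of this chain for a single weight feeding neuron j in the last layer (a standard chain-rule sketch, assuming the notation \(z_j^L = \sum_k w_{jk}^L a_k^{L-1} + b_j^L\), \(a_j^L = \sigma(z_j^L)\)):

\[
\frac{\partial J}{\partial w_{jk}^L}
= \frac{\partial J}{\partial a_j^L}\,\frac{\partial a_j^L}{\partial z_j^L}\,\frac{\partial z_j^L}{\partial w_{jk}^L}
= \frac{\partial J}{\partial a_j^L}\;\sigma'(z_j^L)\;a_k^{L-1}
\]

Backpropagation applies this chain layer by layer, starting from the end.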
Backpropagation derivation.
1. Change in the objective function J from shifting \(a_k^{L-1}\), the activation of neuron k in layer L-1.
2. The neuron error in layer L-1 as a function of the neuron error in layer L (both steps are worked out below).
Recall \(z_j^L = \sum_k w_{jk}^L a_k^{L-1} + b_j^L\), \(a_j^L = \sigma(z_j^L)\), and the neuron error \(\delta_j^L = \partial J/\partial z_j^L\).
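Assuming this standard notation, steps 1 and 2 can be sketched as follows (the usual backward error recursion, written out here for reference):

\[
\frac{\partial J}{\partial a_k^{L-1}}
= \sum_j \frac{\partial J}{\partial z_j^L}\,\frac{\partial z_j^L}{\partial a_k^{L-1}}
= \sum_j \delta_j^L\, w_{jk}^L
\]

\[
\delta_k^{L-1}
= \frac{\partial J}{\partial z_k^{L-1}}
= \frac{\partial J}{\partial a_k^{L-1}}\,\sigma'(z_k^{L-1})
= \Big(\sum_j \delta_j^L\, w_{jk}^L\Big)\,\sigma'(z_k^{L-1})
\]

The first equation is step 1: the sensitivity of J to the activation \(a_k^{L-1}\). The second is step 2: the error of a neuron in layer L-1 expressed through the errors of the neurons in layer L.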