In the earlier post “Genesis of Deep Learning. Neuron & Artificial Neural Network (ANN)”, we discussed the neuron and how a collection of neurons forms an ANN. In this post, let’s build a quick appreciation and intuition for the typical life cycle of training an ANN.
To recap, for the ANN to make predictions, it depends on the ANN’s architecture (how many neurons, layers, etc.) and the weights (think of them as multiplier factors) applied to the input data. I am oversimplifying here for ease of general understanding.
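To make the “weights as multiplier factors” idea concrete, here is a minimal sketch (a hypothetical single neuron, not the actual network from the earlier post): each input is multiplied by its weight, the products are summed with a bias, and an activation squashes the result into a prediction.

```python
import math

def neuron_predict(inputs, weights, bias):
    # Weighted sum: each input is scaled by its weight ("multiplier factor")
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the sum into a value between 0 and 1
    return 1 / (1 + math.exp(-total))

# Two inputs, two weights, one bias -> one prediction between 0 and 1
print(neuron_predict([0.5, 0.2], [0.8, -0.4], 0.1))
```

Notice that changing the weights changes the prediction for the same inputs — which is exactly what training exploits.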
However, the ANN (just like us humans) will not be good at its job/task on its first try! To improve the ANN’s performance, it goes through a training life cycle that generally takes place over 4 stages:
- Forward Pass | Estimates the result by passing input data through the neurons.
- Loss Measurement | Measures the loss (error) between the actual result and the estimate.
- Backward Pass | Determines which weights contributed most to the loss.
- Weights Update | Adjusts the weights for a better estimate next round.
These 4 stages are performed iteratively until the desired performance (e.g. accuracy in classifying an image, ability to predict some values, etc.) is reached. As you may observe, the training focuses a lot on the fine-tuning and updating of weights. The reason is that these weights are what “filter” the input data to “decide” what is significant or insignificant for the job/task the ANN is being trained for.
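The 4 stages above can be sketched in a few lines of code. This is a deliberately tiny, hypothetical example (not from the original post): a single neuron with one weight learning the relationship y = 2x from a handful of data points, using squared error as the loss and gradient descent for the update.

```python
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, actual result) pairs
w = 0.0          # start with an uninformed weight
lr = 0.05        # learning rate: how big each weight update is

for epoch in range(100):             # repeat the 4 stages iteratively
    for x, y_true in data:
        y_est = w * x                     # 1. Forward Pass: estimate the result
        loss = (y_est - y_true) ** 2      # 2. Loss Measurement: squared error
        grad = 2 * (y_est - y_true) * x   # 3. Backward Pass: how the loss changes with w
        w -= lr * grad                    # 4. Weights Update: nudge w to reduce the loss

print(round(w, 3))  # w converges toward 2.0, matching y = 2x
```

Real frameworks automate stages 3 and 4 across millions of weights, but the loop is conceptually the same.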
An analogy could be how your young children (or nephews/nieces) keep telling you about their favourite toys and somehow, you manage to keep an eye out and recognise those particular toys over others when shopping for their presents. In essence, they have “trained” you to assign significance (weights) to certain types of toys. Sounds familiar, doesn’t it? Haha!
And that is basically the overall idea of how an ANN is trained. I hope you’ve enjoyed this and are one step closer towards understanding Deep Learning.