Tips for the Multi-Layer Perceptron demo:
- Focus on the Iris Data Set.
- Try several parameter configurations, but first aim for convergence with the algorithm's default parameters, increasing the number of epochs as needed. It is not necessary to obtain 100% accuracy, since the data are not linearly separable!
- You can switch views while training to get a better picture of how the training evolves.
- Convergence on the Iris Data Set can take 10,000 epochs with the Stochastic Backpropagation algorithm, but far fewer epochs with the Batch Backpropagation algorithm. Why? (The first sketch after this list contrasts the two update rules.)
- As you approach convergence, you can manually lower the learning rate and momentum term so that the updates do not jump over local minima (see the second sketch after this list).
- It is possible for the algorithm to get stuck in a local minimum far from ideal (say, only 66% accuracy). Just press Pause, then Reset, then Train to restart training.
- When there are two hidden layers, the learning rate ought to be higher. The reason is that as the error propagates backward from one layer to the next during weight updates, the updates become smaller and smaller (the third sketch after this list illustrates this shrinkage). For good accuracy, all layers must have 'active' synapses, that is, synapses whose weights are far from zero, so that they really 'participate' in classifying the inputs. You will notice that accuracy increases when the synapses in all layers become 'thicker', that is, when they have higher absolute weight values.
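To make the Stochastic vs. Batch question concrete, here is a minimal Python sketch (not the demo's actual code) contrasting the two update rules on a toy squared-error model; the toy data and learning rates are illustrative assumptions. Stochastic Backpropagation makes one noisy update per sample, while Batch Backpropagation averages the gradient over the whole data set and makes one smoother update per epoch:

    import numpy as np

    # Toy linear model with squared error: loss = 0.5 * (w.x - y)^2.
    # Illustrative sketch only; the data and learning rates are made up.
    def grad(w, x, y):
        # Gradient of the squared error with respect to the weights w.
        return (w @ x - y) * x

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 3))            # 8 toy samples, 3 features
    y = X @ np.array([1.0, -2.0, 0.5])     # targets from known weights

    def stochastic_epoch(w, lr=0.01):
        # One weight update per sample: many small, noisy steps per epoch.
        for x_i, y_i in zip(X, y):
            w = w - lr * grad(w, x_i, y_i)
        return w

    def batch_epoch(w, lr=0.1):
        # One update per epoch from the gradient averaged over all samples:
        # a smoother step, so each epoch tends to make more progress.
        g = np.mean([grad(w, x_i, y_i) for x_i, y_i in zip(X, y)], axis=0)
        return w - lr * g

Because the averaged gradient is much less noisy, the batch step can safely use a larger learning rate, which is one plausible reason it needs fewer epochs.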
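The manual lowering of the learning rate and momentum near convergence could be captured by a hypothetical helper like the one below; the threshold and decay factor are illustrative assumptions, not values from the demo:

    def anneal(lr, momentum, accuracy, threshold=0.90, decay=0.5):
        # Once accuracy is high (i.e. we are near a good minimum), halve the
        # learning rate and momentum so later steps stay inside that minimum
        # instead of jumping over it. All numbers here are illustrative.
        if accuracy >= threshold:
            lr = lr * decay
            momentum = momentum * decay
        return lr, momentum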
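Finally, a small Python sketch of the shrinking-updates effect behind the two-hidden-layer tip: with sigmoid units, each backward step multiplies the error signal by the sigmoid derivative (at most 0.25) and by the layer weights, so the signal, and hence the weight updates, shrinks toward the earlier layers. The network size and weight scale here are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # Three 4x4 sigmoid layers with illustrative random weights.
    Ws = [rng.normal(scale=0.5, size=(4, 4)) for _ in range(3)]

    # Forward pass, keeping each layer's activation for the backward pass.
    a = rng.normal(size=4)
    acts = []
    for W in Ws:
        a = sigmoid(W @ a)
        acts.append(a)

    # Backward pass: watch the error signal (delta) shrink layer by layer.
    delta = np.ones(4) * acts[-1] * (1 - acts[-1])   # output-layer delta
    print(f"layer 3: mean |delta| = {np.mean(np.abs(delta)):.5f}")
    for l in range(len(Ws) - 1, 0, -1):
        a_prev = acts[l - 1]
        delta = (Ws[l].T @ delta) * a_prev * (1 - a_prev)
        print(f"layer {l}: mean |delta| = {np.mean(np.abs(delta)):.5f}")

A higher learning rate partly compensates for this shrinkage in the earlier layers.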
If you have any questions or comments, let me know:
freeisms@gmail.com
Your feedback would help improve these demos.