Perceptron - Test your knowledge!
Try to answer the following questions:
- Why does training some data sets take less time than others? What are the factors involved?
- What would happen if you presented the network with new data that did not belong to the data set? When would the network still give accurate outputs and when not?
- What happens when the learning rate is very large (like 5 or 20)?
- Would the Perceptron Learning Rule still work with a Hardlimit (Step) activation function (i.e. outputs and targets of 1 or 0)?
And here are the answers:
- The training stops when all data vectors are correctly classified. The more data vectors there are, the more weight updates are needed and the longer training takes. Data that are very similar within each class (tightly clustered, well away from the other class) tend to get classified after fewer updates.
- It depends on the data, of course. When the new data 'behave' like the training data, the classification will be quite similar. However, it is possible that, due to a lack of training examples, the network still classifies poorly: the algorithm changes the weights no more than is necessary to classify the training examples, so the decision boundary can end up very close to them. In a given situation, the more training data the better!
- You can see this clearly in the Mapping View during training: a large learning rate makes the decision boundary jump around in much bigger steps. Even so, for the data sets in the demonstrations convergence always occurs eventually, regardless of the learning rate; the only exception is data that are not linearly separable (like the XOR case), which never converge. The code sketch after these answers reproduces both effects.
- Yes. With outputs and targets of 1 or 0 the error (target − output) is 0, +1, or −1, so each update still moves the weights in the correct direction and the rule still converges. The sketch below uses exactly this convention.
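To make the answers concrete, here is a minimal runnable sketch in Python. It is not the code behind these demos, and names such as `hardlim`, `train_perceptron`, and `eta` are made up for illustration, but it implements the Perceptron Learning Rule with a hardlimit activation and 0/1 targets, stops as soon as every vector is well classified, and shows that on a linearly separable set (AND) convergence does not depend on the learning rate, while XOR never converges:

```python
def hardlim(s):
    """Hardlimit (step) activation: output 1 or 0."""
    return 1 if s >= 0 else 0

def train_perceptron(data, eta=0.5, max_epochs=1000):
    """Perceptron Learning Rule with 0/1 targets:
    w <- w + eta * (t - y) * x, b <- b + eta * (t - y).
    Stops as soon as every vector is correctly classified."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for epoch in range(1, max_epochs + 1):
        mistakes = 0
        for x, t in data:
            y = hardlim(sum(wi * xi for wi, xi in zip(w, x)) + b)
            e = t - y  # error is 0, +1 or -1 with 0/1 outputs and targets
            if e != 0:
                w = [wi + eta * e * xi for wi, xi in zip(w, x)]
                b += eta * e
                mistakes += 1
        if mistakes == 0:      # all vectors well classified -> stop
            return w, b, epoch
    return w, b, None          # never converged within max_epochs

# Linearly separable (AND): converges for any positive learning rate.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
# Not linearly separable (XOR): never converges, whatever eta is.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for eta in (0.1, 5, 20):
    _, _, epochs = train_perceptron(AND, eta=eta)
    print(f"AND, eta={eta}: converged after {epochs} epochs")

_, _, epochs = train_perceptron(XOR)
print("XOR:", f"converged after {epochs} epochs" if epochs else "did not converge")
```

Since the weights start at zero here, changing eta only rescales the whole weight vector along an identical trajectory, which is one way to see why the learning rate does not decide whether the rule converges.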
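For the curious, the classical perceptron convergence theorem (Novikoff, 1962) makes the first answer precise. If every input satisfies ‖x‖ ≤ R and some unit-length weight vector separates the two classes with margin γ, then the Perceptron Learning Rule makes at most

$$ k \le \frac{R^2}{\gamma^2} $$

weight updates, for any positive learning rate. A small margin (classes almost touching) gives a large bound and slow training; no margin at all (as with XOR) means the theorem does not apply and training never stops.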
If you have any questions or comments, let me know:
freeisms@gmail.com
It would help improve these demos.