Perceptron - Test your knowledge!

Try to answer the following questions:
  1. Why does training on some data sets take less time than on others? What factors are involved?
  2. What would happen if you presented the network with new data that did not belong to the data set? When would the network still give accurate outputs and when not?
  3. What happens when the learning rate is very large (like 5 or 20)?
  4. Would the Perceptron Learning Rule still work with a Hardlimit (Step) activation function (i.e. outputs and targets of 1 or 0)?
Answers:
  1. Training stops when all data vectors are correctly classified. The more data vectors there are, the more weight updates are typically needed and the longer training takes. When the data within each class are very similar (so the classes are well separated), the vectors tend to be classified more easily and training finishes sooner.
  2. It depends on the data, of course. When the new data 'behave' like the training data, the classification will be of similar quality. However, with too few training examples the network may still classify new data poorly: the algorithm changes the weights no more than is necessary to classify the training examples, so the resulting decision boundary may generalize badly. In a given situation, the more training data the better!
  3. You can see clearly what happens in the Mapping View during training: with a very large learning rate the decision boundary makes much bigger jumps between updates. Even so, for the data sets in the demonstrations convergence still occurs eventually, regardless of the learning rate, except when the data are not linearly separable (as in the XOR case).
  4. Yes. The update rule w := w + lr * (target - output) * input still works: when output and target agree nothing changes, and when they disagree (target - output) is +1 or -1, which moves the weights in the right direction (see the sketch after this list).
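
As an illustration of answers 1, 3 and 4, here is a minimal Python/NumPy sketch of the Perceptron Learning Rule with a hardlimit (step) activation and 0/1 targets. It is not the demo's own code; the data sets, function names and the choice of NumPy are just for illustration.

    import numpy as np

    def hardlim(net):
        # Step activation: output 1 if the net input is >= 0, else 0.
        return 1 if net >= 0 else 0

    def train_perceptron(X, targets, lr=1.0, max_epochs=100):
        # Perceptron Learning Rule: on a misclassified vector,
        #   w := w + lr * (t - y) * x,  b := b + lr * (t - y).
        # Training stops once every data vector is well classified.
        w = np.zeros(X.shape[1])
        b = 0.0
        for epoch in range(max_epochs):
            errors = 0
            for x, t in zip(X, targets):
                y = hardlim(np.dot(w, x) + b)
                if y != t:                  # update only on a mistake
                    w += lr * (t - y) * x
                    b += lr * (t - y)
                    errors += 1
            if errors == 0:                 # all vectors well classified
                return w, b, epoch + 1
        return w, b, max_epochs             # gave up: no convergence

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

    # AND is linearly separable: converges for any positive learning rate.
    for lr in (0.1, 1.0, 20.0):
        w, b, epochs = train_perceptron(X, np.array([0, 0, 0, 1]), lr=lr)
        print("lr =", lr, "-> converged after", epochs, "epochs")

    # XOR is not linearly separable: never converges, whatever the lr.
    w, b, epochs = train_perceptron(X, np.array([0, 1, 1, 0]), lr=1.0)
    print("XOR: still misclassifying after", epochs, "epochs")

Note that, starting from zero weights, changing lr only rescales w and b by a common factor without changing which side of the boundary any vector falls on, so the three AND runs converge after the same number of epochs; this matches answer 3. If the demo itself uses -1/+1 targets with a signum activation, the same rule applies, with (t - y) equal to +2 or -2 on a mistake.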


If you have any questions or comments, let me know:
freeismsATgmail.com
It would help improve these demos.