Single-Layer Perceptron Neural Networks

A single-layer perceptron network consists of one or more artificial neurons in parallel. The neurons may be of the same type we've seen in the Artificial Neuron Applet.
[Figure: a layer of several units]
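
Concretely, such a layer applies each neuron's weighted sum and threshold to the same input vector. The sketch below is a minimal illustration of this forward pass; the layer size, weights, and use of NumPy are illustrative assumptions, not part of the applet.

    import numpy as np

    def perceptron_layer(x, W, b):
        """Forward pass of a single-layer perceptron.

        x : input vector, shape (n_inputs,)
        W : weight matrix, shape (n_units, n_inputs)
        b : bias vector, shape (n_units,)
        Each unit outputs 1 if its weighted sum plus bias is >= 0, else 0.
        """
        return (W @ x + b >= 0).astype(int)

    # Example: a layer of two units over three inputs (weights chosen arbitrarily)
    x = np.array([1, 0, 1])
    W = np.array([[ 0.5, -0.2, 0.3],
                  [-0.4,  0.1, 0.6]])
    b = np.array([-0.1, 0.2])
    print(perceptron_layer(x, W, b))   # e.g. [1 1]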
The perceptron learning rule, which we study next, provides a simple algorithm for training a perceptron neural network. However, as we will see, single-layer perceptron networks cannot learn everything: they can compute only linearly separable functions, so they are not computationally complete. As mentioned in the introduction, two-input networks cannot compute the XOR (or XNOR) functions. Of the 2^(2^n) possible Boolean functions of n inputs (16 when n = 2), a two-input perceptron can compute only 14. As the number of inputs, n, increases, the proportion of functions that can be computed decreases rapidly.
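
To make the XOR limitation concrete, the sketch below applies a standard form of the perceptron learning rule (studied in the next section; the applet's exact update may differ) to a single two-input unit. Training converges on AND, which is linearly separable, but never on XOR. The training data, learning rate, and epoch limit are illustrative assumptions.

    import numpy as np

    def train_perceptron(X, t, epochs=100, lr=1.0):
        """Perceptron learning rule on a single threshold unit.

        Returns the learned weights, bias, and whether training converged
        (i.e. an epoch passed with no misclassified patterns).
        """
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            errors = 0
            for x, target in zip(X, t):
                y = int(w @ x + b >= 0)
                if y != target:
                    # Update only on misclassified patterns
                    w += lr * (target - y) * x
                    b += lr * (target - y)
                    errors += 1
            if errors == 0:
                return w, b, True
        return w, b, False

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    and_targets = np.array([0, 0, 0, 1])
    xor_targets = np.array([0, 1, 1, 0])

    print(train_perceptron(X, and_targets)[2])  # True  -- AND is linearly separable
    print(train_perceptron(X, xor_targets)[2])  # False -- XOR never converges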

Later, we will investigate multilayer perceptrons.

