Associative Memory: Hopfield model

Introduction

The Hopfield model is a distributed model of associative memory. Neurons correspond to pixels and can take the values -1 (off) or +1 (on). The network stores a certain number of pixel patterns. During the retrieval phase, the network is started from some initial configuration, and the network dynamics evolve towards the stored pattern that is closest to this initial configuration.

In the Hopfield model each neuron is connected to every other neuron (full connectivity). The connection matrix is

$$w_{ik} = \frac{1}{N} \sum_{m=1}^{p} \xi_i^{m} \xi_k^{m}$$

where $N$ is the number of neurons, $\xi_k^{m}$ is the value of neuron $k$ in pattern number $m$, and the sum runs over all patterns from $m = 1$ to $m = p$. This is a simple correlation-based learning rule (Hebbian learning). Since it is not an iterative rule, it is sometimes called one-shot learning. The learning rule works best if the patterns to be stored are random patterns with equal probability for on (+1) and off (-1). In a large network ($N \to \infty$), the number of random patterns that can be stored is approximately $0.14\,N$.
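A minimal sketch of this one-shot storage rule in Python with NumPy (the function name store_patterns is illustrative, not part of the applet). Self-connections are zeroed here, a common convention not stated explicitly above:

```python
import numpy as np

def store_patterns(patterns):
    """Build the Hopfield weight matrix from p patterns of N +/-1 pixels.

    patterns: array of shape (p, N) with entries in {-1, +1}.
    Returns W with w_ik = (1/N) * sum_m xi_i^m * xi_k^m.
    """
    patterns = np.asarray(patterns)
    p, N = patterns.shape
    W = patterns.T @ patterns / N   # Hebbian outer-product sum over patterns
    np.fill_diagonal(W, 0)          # zero self-connections (common convention)
    return W
```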

Credits

This applet was written by Olivier Michel (adapted from Matt Hill -- mlh1@cornell.edu).

Instructions

Use the mouse to enter a pattern by clicking squares inside the rectangle to switch them "on" or "off". Then have the network store your pattern by pressing "Memorize". After storing some patterns (typically two), enter a new pattern to use as a test pattern. Do not memorize this new pattern; instead, use it as the initial state of the network. Press "Test" repeatedly to watch the network settle into one of the previously stored patterns.
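The dynamics behind "Test" can be sketched as a sign-threshold update. Below is a minimal version, assuming a synchronous sweep over all neurons (the applet itself may update neurons asynchronously); it reuses store_patterns from the sketch above:

```python
import numpy as np

def test_step(W, state):
    """One synchronous update sweep: every neuron takes the sign of its
    summed input. Repeated calls mimic pressing "Test" repeatedly."""
    h = W @ state                       # local field of every neuron
    return np.where(h >= 0, 1, -1)      # sign threshold (ties go to +1)

# Example: store two random 4x4 patterns, then retrieve from a noisy cue.
rng = np.random.default_rng(0)
pats = rng.choice([-1, 1], size=(2, 16))
W = store_patterns(pats)                # from the sketch above
cue = pats[0].copy()
cue[:3] *= -1                           # flip three pixels
s = cue
for _ in range(5):
    s = test_step(W, s)
print(np.array_equal(s, pats[0]))       # True if the pattern is recovered
```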

First exercise: 4x4 nodes

  1. What is the experimental maximum number of random classes the network is able to memorize? Store more and more random patterns and test each pattern immediately after storing it. The first few patterns should be stored perfectly, but then the performance degrades.
  2. What is the theoretical maximum number of random classes the network is able to memorize?
  3. Do the experimental results agree with the theory? To check this, repeat experiment 1 several times.
  4. What is the theoretical maximum number of orthogonal classes the network is able to memorize? Hint: How many orthogonal states (=vectors) can there be in a network of size N?
  5. What is the experimental maximum number of orthogonal classes the network is able to memorize? Try to construct orthogonal patterns systematically (see the sketch after this list).
  6. Do the experimental results agree with the theory?
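For questions 4 and 5, one systematic way to construct mutually orthogonal ±1 patterns is the Sylvester (Hadamard) construction. A sketch for the 4x4 network (N = 16), with an illustrative helper name:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    Its n rows are mutually orthogonal +/-1 vectors, i.e. the maximum
    possible number of orthogonal patterns for a network of N = n neurons."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(16)        # 16 orthogonal candidate patterns for the 4x4 grid
print(H @ H.T)          # 16 * identity matrix: all pairwise dot products are 0
```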

Second exercise: 10x10 nodes

  1. What is the theoretical maximum number of random classes the network is able to memorize?
  2. What is the experimental maximum number of random classes the network is able to memorize?
  3. Do the experimental results agree with the theory?
  4. Store a finite number of random patterns, e.g., 8. How many wrong pixels can the network tolerate in the initial state and still settle into the correct pattern? (A sketch of this experiment follows this list.)
  5. Try to store characters as the relevant patterns. How good is the retrieval? What is the reason?
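A sketch of the noise-tolerance experiment in question 4, assuming the store_patterns and test_step helpers from the sketches above; the flip counts and trial numbers are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 100, 8                                  # 10x10 network, 8 patterns
pats = rng.choice([-1, 1], size=(p, N))
W = store_patterns(pats)

for flips in range(0, 55, 5):                  # corrupt 0..50 pixels
    ok = 0
    for _ in range(50):                        # 50 trials per noise level
        cue = pats[0].copy()
        idx = rng.choice(N, size=flips, replace=False)
        cue[idx] *= -1                         # flip the chosen pixels
        s = cue
        for _ in range(20):                    # iterate to a fixed point
            s_new = test_step(W, s)
            if np.array_equal(s_new, s):
                break
            s = s_new
        ok += int(np.array_equal(s, pats[0]))
    print(f"{flips:2d} flipped pixels -> retrieval rate {ok/50:.2f}")
```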