Competitive Learning
Introduction
This applet, DemoGNG, implements several methods related to competitive
learning. It allows you to experiment with the methods on various data
distributions and to observe the learning process. A common terminology is
used to make it easy to compare the different methods.
Hopefully, experimenting with the models will increase one's intuitive
understanding of them and make it easier to judge their particular strengths
and weaknesses. A link to the updated version of the applet is available here.
The following algorithms are available:
- Growing Neural Gas (Fritzke)
- Hard Competitive Learning (standard algorithm)
- Neural Gas (Martinetz and Schulten)
- Neural Gas with Competitive Hebbian Learning (Martinetz and Schulten)
- Competitive Hebbian Learning (Martinetz and Schulten)
- LBG (Linde, Buzo, Gray)
- Growing Grid (Fritzke)
- Self-Organizing Map (Kohonen)
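All of these methods share the same elementary adaptation step: for each input
signal, the reference vector of the closest unit (and, in the soft methods,
those of its neighbors) is moved toward the input. Below is a minimal sketch of
this winner-take-all update, the core of Hard Competitive Learning, written in
Java; the number of units, the learning rate, and the uniform input
distribution are illustrative assumptions, not values taken from the applet.

    import java.util.Random;

    // Winner-take-all update: only the closest reference vector is adapted.
    public class HardCL {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            int units = 10;        // number of reference vectors (illustrative)
            double eps = 0.05;     // learning rate (illustrative)
            double[][] w = new double[units][2];
            for (double[] wi : w) {            // random init in the unit square
                wi[0] = rnd.nextDouble();
                wi[1] = rnd.nextDouble();
            }
            for (int t = 0; t < 10000; t++) {
                // draw an input signal (uniform distribution as a stand-in)
                double x = rnd.nextDouble(), y = rnd.nextDouble();
                // find the winner: the unit with the smallest squared distance
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int i = 0; i < units; i++) {
                    double dx = w[i][0] - x, dy = w[i][1] - y;
                    double d = dx * dx + dy * dy;
                    if (d < bestDist) { bestDist = d; best = i; }
                }
                // move only the winner a fraction eps toward the input
                w[best][0] += eps * (x - w[best][0]);
                w[best][1] += eps * (y - w[best][1]);
            }
            for (double[] wi : w)
                System.out.printf("%.3f %.3f%n", wi[0], wi[1]);
        }
    }

Methods such as Neural Gas and the Self-Organizing Map differ from this mainly
in also adapting the winner's neighbors, with a strength that decreases with
distance rank or grid distance.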
Credits
The original applet was written by Hartmut S. Loos and Bernd Fritzke and
slightly modified by Olivier Michel and Sébastien Baehni.
Applet
DemoGNG (Version 1.3)
Questions
- Hard Competitive Learning: explain the behavior of the iterative
LBG method (k-means); a sketch of its batch update is given after this list.
Test this algorithm on the ring distribution and on the HiLo density.
- Kohonen (Self-Organizing Map): what are the principles of this algorithm?
Test it with the ring distribution and the HiLo density. Explain the
experimental results according to the theory.
- Configure the Kohonen algorithm with a grid size of 1x30 and choose the
rectangular data distribution. What is the role of the sigmaf parameter? For
which value of sigmaf does the representation change from a straight line to
an oscillatory line? (A sketch of the corresponding update is given after
this list.)
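For the first question, the following is a hedged sketch of the batch
LBG / k-means iteration, assuming 2-D data; the data set, the codebook size k,
and the fixed number of iterations are illustrative choices. Each iteration
first assigns every data point to its nearest reference vector and then moves
each reference vector to the centroid of its assigned points.

    import java.util.Random;

    // Batch LBG / k-means: alternate assignment and centroid steps.
    public class LbgSketch {
        public static void main(String[] args) {
            Random rnd = new Random(1);
            int n = 1000, k = 8;               // data size and codebook size
            double[][] data = new double[n][2];
            for (double[] p : data) { p[0] = rnd.nextDouble(); p[1] = rnd.nextDouble(); }
            double[][] w = new double[k][2];
            for (int i = 0; i < k; i++)        // initialize from random data points
                w[i] = data[rnd.nextInt(n)].clone();
            int[] assign = new int[n];
            for (int iter = 0; iter < 20; iter++) {
                // assignment step: each point goes to its nearest reference vector
                for (int p = 0; p < n; p++) {
                    int best = 0; double bd = Double.MAX_VALUE;
                    for (int i = 0; i < k; i++) {
                        double dx = w[i][0] - data[p][0], dy = w[i][1] - data[p][1];
                        double d = dx * dx + dy * dy;
                        if (d < bd) { bd = d; best = i; }
                    }
                    assign[p] = best;
                }
                // update step: move each reference vector to the centroid of its points
                double[][] sum = new double[k][2];
                int[] cnt = new int[k];
                for (int p = 0; p < n; p++) {
                    sum[assign[p]][0] += data[p][0];
                    sum[assign[p]][1] += data[p][1];
                    cnt[assign[p]]++;
                }
                for (int i = 0; i < k; i++)
                    if (cnt[i] > 0) {
                        w[i][0] = sum[i][0] / cnt[i];
                        w[i][1] = sum[i][1] / cnt[i];
                    }
            }
            for (double[] wi : w) System.out.printf("%.3f %.3f%n", wi[0], wi[1]);
        }
    }

Neither step can increase the quantization error, so the procedure converges
to a local minimum that depends on the initialization.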
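For the last two questions, here is a hedged sketch of a one-dimensional
Kohonen map with 30 units on a rectangular (uniform) input distribution. It
assumes the usual parameterization in which the Gaussian neighborhood width
decays exponentially from an initial value sigmai to a final value sigmaf over
the run; all parameter values below are illustrative, not the applet's
defaults.

    import java.util.Random;

    // 1-D Kohonen map (grid 1x30) on a rectangular input distribution.
    public class Som1D {
        public static void main(String[] args) {
            Random rnd = new Random(7);
            int n = 30;                              // 1x30 chain of units
            int tMax = 20000;
            double epsI = 0.5, epsF = 0.05;          // learning rate: initial, final
            double sigmaI = n / 2.0, sigmaF = 0.5;   // neighborhood width: initial, final
            double[][] w = new double[n][2];
            for (double[] wi : w) { wi[0] = rnd.nextDouble(); wi[1] = rnd.nextDouble(); }
            for (int t = 0; t < tMax; t++) {
                double frac = (double) t / tMax;
                // exponential decay of learning rate and neighborhood width
                double eps = epsI * Math.pow(epsF / epsI, frac);
                double sigma = sigmaI * Math.pow(sigmaF / sigmaI, frac);
                // input drawn from the rectangular (uniform) distribution
                double x = rnd.nextDouble(), y = rnd.nextDouble();
                int s = 0; double best = Double.MAX_VALUE;
                for (int i = 0; i < n; i++) {
                    double dx = w[i][0] - x, dy = w[i][1] - y;
                    double d = dx * dx + dy * dy;
                    if (d < best) { best = d; s = i; }
                }
                // adapt all units, with a Gaussian falloff in grid distance to the winner
                for (int r = 0; r < n; r++) {
                    double h = Math.exp(-(r - s) * (r - s) / (2.0 * sigma * sigma));
                    w[r][0] += eps * h * (x - w[r][0]);
                    w[r][1] += eps * h * (y - w[r][1]);
                }
            }
            for (double[] wi : w) System.out.printf("%.3f %.3f%n", wi[0], wi[1]);
        }
    }

Varying sigmaF in this sketch illustrates the transition the question asks
about: with a large final width the chain stays stiff and roughly straight,
while a small final width lets individual units follow the local input
density, so one would expect the chain to fold into an oscillating,
space-filling shape; the exact crossover value depends on the run.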