Clustering and adaptive maps

These notes have not yet been finalized for Winter, 1999.

Readings

Required readings: Chapter 13, pages 452-459 (Radial Basis Functions and interpolation); Chapter 10, pages 286-290, 304-329, and 340 to the end of the chapter (representations); Chapter 14; and Section 10.4.2 of the "Neural network approaches to solving hard problems" article in the coursepack.

Types of representations

There are two major kinds of representations in brains and in artificial neural networks:
  1. Distributed
  2. Localist
Anderson refers to the localist variety as a "grandmother cell" representation, because it suggests that one would have a cell tuned uniquely to each possible pattern, including a cell for detecting one's grandmother. Localist representations make it difficult to respond to in-between stimuli: for example, if there is a white-car detector and a black-car detector, a grey car will not be recognized. Distributed representations are much more economical (they require fewer neurons) and, unlike localist ones, they allow interpolation and generalization. On the other hand, highly distributed representations are harder to interpret unambiguously, and therefore harder to respond to quickly. If an animal has a single cell that responds to "a lion approaching", it can react quickly to this important stimulus by associating a motor response ("flee") directly with the firing of that one lion-detecting cell.
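As a toy illustration of this distinction (the car features and numbers below are invented for the example, not taken from the readings), compare a localist one-hot code with a small distributed feature code:

    import numpy as np

    # Localist: one dedicated unit per known pattern ("grandmother cells").
    localist_patterns = {
        "white car": np.array([1, 0]),   # unit 0 fires only for white cars
        "black car": np.array([0, 1]),   # unit 1 fires only for black cars
    }

    # Distributed: each pattern is a vector of shared feature activities,
    # here (brightness, size).  A grey car falls between white and black.
    distributed_patterns = {
        "white car": np.array([1.0, 0.6]),
        "black car": np.array([0.1, 0.6]),
    }
    grey_car = np.array([0.5, 0.6])

    # With the localist code there is simply no unit for a grey car.
    # With the distributed code, the grey car lies between the two stored
    # patterns and is moderately close to both, so a network can interpolate.
    for name, p in distributed_patterns.items():
        print(name, "distance to grey car:", float(np.linalg.norm(p - grey_car)))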

Representations of maps in cortex

Two-dimensional feature maps are a type of representation common to many sensory areas of the cerebral cortex. The defining characteristic of a feature map is that features are laid out in an orderly, topographic fashion across the cortical surface, so that neighboring cells tend to respond to similar feature values. We see this pattern in the layout of the body surface (tactile receptors) in somatosensory cortex, in the layout of auditory space and frequency in auditory cortex, and in the layout of orientation responses in visual cortex, to name just a few examples. Cortical maps may be thought of as falling somewhere in between highly distributed and extremely localist representations. For simple input patterns, the resulting activation in the cortical map tends to be focused on one small cluster or region in the map. For example, in an orientation map, a single vertical bar in the middle of the retina would strongly activate vertically tuned cells in the corresponding part of the brain's orientation map, but would also weakly activate cells in the same region that are tuned to similar (but not identical) orientations. Thus, with relatively few cells tuned to particular orientations, the brain can detect other orientations with high precision from the relative firing rates of similarly tuned units.
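The following is a small sketch of that last idea (the Gaussian tuning curves, preferred orientations, and tuning width are assumptions made for the example, not measured cortical values): a few broadly tuned units, read out from their relative firing rates, can pin down an orientation that no single unit is tuned to exactly.

    import numpy as np

    # Toy population code: four units with broad Gaussian tuning to orientation.
    preferred = np.array([0.0, 45.0, 90.0, 135.0])   # preferred orientations (deg)
    sigma = 30.0                                     # tuning width (assumed)

    def responses(theta):
        """Firing rate of each unit to a bar at orientation theta (degrees)."""
        d = np.abs(theta - preferred)
        d = np.minimum(d, 180.0 - d)        # orientation repeats every 180 degrees
        return np.exp(-d**2 / (2 * sigma**2))

    def decode(rates):
        """Estimate orientation from the relative firing rates (population vector)."""
        # Map each orientation onto a full 360-degree cycle (x2), average the
        # unit vectors weighted by firing rate, then halve the angle back.
        angles = np.deg2rad(2 * preferred)
        x = np.sum(rates * np.cos(angles))
        y = np.sum(rates * np.sin(angles))
        return (np.rad2deg(np.arctan2(y, x)) / 2) % 180

    theta = 70.0                      # an orientation no unit prefers exactly
    print(decode(responses(theta)))   # recovered estimate is close to 70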

Competitive Learning and Self-Organizing Feature Maps (SOFM)

Self-Organizing Feature Maps are a version of competitive learning, proposed by Kohonen, that allows a network to develop a feature map such as the ones described in the previous section. Self-organized learning can be characterized as "global order emerging from local interactions", and the SOFM algorithm is one example of it in a neural network. There are three important principles from which the SOFM algorithm is derived:
  1. Self-amplification
  2. Competition
  3. Co-operation

We can define these principles for the case of the SOFM as follows:

  1. Self-amplification: units that are active together tend to become more strongly connected; thus, positive connections tend to be self-amplifying. This is just the Hebbian learning principle.

  2. Competition: Units enter into a competition according to which one responds "best" to the input. "Best" is typically defined by either (i) the Euclidean distance between the unit's weight vector and the input, or (ii) the dot product between the unit's weight vector and the input. Provided the vectors are normalized, minimizing the Euclidean distance is equivalent to maximizing the dot product (since the squared distance is ||w||^2 + ||x||^2 - 2 w.x, and the first two terms are fixed), so it does not matter which you choose. The best-matching unit is deemed the winner of the competition.

  3. Co-operation: In the SOFM, each unit in the "competing layer" is fully connected to the input layer. Further, each competing unit is assigned a location on the map. Most often a two-dimensional map is used, so the units are given locations on a 2-D lattice (maps of one dimension, or of more than two dimensions, are also possible). Whenever a given unit wins the competition, its neighbors are also given a chance to learn. The rule for deciding which units count as neighbors may be the "nearest neighbor" rule, i.e. only the 4 nearest units in the lattice are in the neighborhood; it could be "2 nearest neighbors"; or the amount of learning could be a shrinking function of the lattice distance between each unit and the winner. Whatever the basis for determining neighborhood membership, the winner and all of its neighbors do some Hebbian learning, while units outside the neighborhood do not learn for a given pattern (see the sketch below).
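Putting the three principles together, here is a minimal sketch of SOFM training in one common formulation (the lattice size, learning rate, neighborhood width, and random training data are assumptions chosen for illustration, not values from the readings or the in-class demo):

    import numpy as np

    rng = np.random.default_rng(0)

    # A 10x10 lattice of competing units, each fully connected to a 3-D input.
    rows, cols, n_inputs = 10, 10, 3
    weights = rng.random((rows, cols, n_inputs))

    # Lattice coordinates of every unit, used to decide who is in the neighborhood.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    def train_step(x, lr=0.1, radius=2.0):
        """One SOFM update: competition, co-operation, then Hebbian-style learning."""
        # Competition: the winner is the unit whose weight vector is closest to the
        # input (minimum Euclidean distance; for normalized vectors this picks the
        # same unit as the maximum dot product).
        dists = np.linalg.norm(weights - x, axis=-1)
        winner = np.unravel_index(np.argmin(dists), dists.shape)

        # Co-operation: units near the winner on the lattice also get to learn,
        # weighted here by a Gaussian function of lattice distance.
        lattice_dist = np.linalg.norm(grid - np.array(winner), axis=-1)
        neighborhood = np.exp(-lattice_dist**2 / (2 * radius**2))

        # Self-amplification: each learning unit moves its weights toward the input.
        weights[...] += lr * neighborhood[..., None] * (x - weights)

    # Train on random 3-D inputs; nearby units end up with similar weight
    # vectors, forming a topographic map of the input space.
    for _ in range(2000):
        train_step(rng.random(n_inputs))

The Gaussian neighborhood in this sketch is the "shrinking function of lattice distance" option described above; swapping in a hard nearest-neighbor rule would only change how the neighborhood array is computed.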

Competitive learning is closely related to the SOFM. In fact, one commonly used version of competitive learning simply consists of a winner-take-all activation function combined with Hebbian learning. This is like the SOFM except that there is no co-operative mechanism (no neighborhood), so only the winning unit adapts its weights.
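A corresponding sketch of simple competitive learning (with the same caveats about made-up sizes and data) makes the difference concrete: the neighborhood disappears, and only the winner's weights move.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simple competitive learning: 5 competing units, 3-D inputs, no lattice.
    weights = rng.random((5, 3))

    def train_step(x, lr=0.1):
        # Winner-take-all: find the best-matching unit...
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        # ...and only that unit adapts, moving its weights toward the input.
        weights[winner] += lr * (x - weights[winner])

    for _ in range(2000):
        train_step(rng.random(3))

    # Each unit's weight vector ends up representing one region (cluster) of the
    # input space, but unlike the SOFM there is no map: units with nearby
    # indices need not have similar weights.
    print(weights)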

The online demos illustrate the difference between the behavior of the SOFM (Kohonen's model) and simple competitive learning on a simple problem: learning to cluster images of oriented bars. The SOFM was demonstrated on this problem in class.

