Hopfield networks and Boltzmann machines

The section on Boltzmann machines will not be covered in 1999.

Readings

Required readings: Chapter 12 and sections 10.1 and 10.8 of the coursepack article "Neural network approaches to solving hard problems". In 1999, we will only cover the sections of chapter 12 up to page 413.

Definition of a Binary Hopfield Network

The standard binary Hopfield network is a recurrently connected network with the following features:

- binary units: each unit is either on or off
- symmetric connection weights (the weight from unit i to unit j equals the weight from j to i) and no self-connections
- asynchronous updates: units are updated one at a time, each adopting whichever state lowers (or leaves unchanged) a global energy function

The most important features of the Hopfield network are:

- every update lowers (or leaves unchanged) the global energy, so the network is guaranteed to settle into a stable state, which is a local minimum of the energy
- it can act as a content-addressable memory: stored patterns are attractors, so a noisy or incomplete pattern will settle to a nearby stored pattern

However, there are some serious drawbacks to Hopfield networks:

- the network settles to a local minimum of the energy, which may be a spurious attractor rather than a global minimum or a stored pattern
- the storage capacity is limited: storing many more than about 0.15N patterns in a network of N units causes recall to break down

The Boltzmann machine, described below, was designed to overcome these limitations.
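As a concrete sketch, the energy function and the asynchronous update rule of a binary Hopfield network can be written as follows. This is an illustrative Python/NumPy version, not course code, using the convention on = +1 and off = -1:

```python
import numpy as np

def energy(W, s):
    # Global energy E = -1/2 * sum_ij w_ij s_i s_j (no bias terms).
    return -0.5 * s @ W @ s

def update_async(W, s, rng):
    # Visit units in random order; each unit adopts the state that
    # lowers (or leaves unchanged) the global energy, i.e. the sign
    # of its net input (ties resolved to +1).
    s = s.copy()
    for i in rng.permutation(len(s)):
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Two units joined by a symmetric weight of 1: after one sweep of
# updates they agree, which is the minimum-energy configuration.
rng = np.random.default_rng(0)
W = np.array([[0., 1.], [1., 0.]])
s = update_async(W, np.array([1., -1.]), rng)
```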

Hopfield network demo

The following Matlab program sets up a Hopfield network whose units are arranged in a 2-dimensional grid with local connectivity. Each unit is symmetrically connected to its four nearest neighbors on the grid, and all connection weights are equal to 1.0. This example is described further in Anderson, Chapter 12. To achieve a minimum in energy, every unit should agree with its four neighbors. Thus, all units should be in the same state to achieve a minimum energy configuration.

The states of the units in the network are initialized as follows: a central square of units is turned on, the rest are turned off, and then every unit's state is randomly flipped with probability 0.1.
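This initialization can be sketched in Python/NumPy (an illustrative stand-in for initHopfield, not the actual course code; on = +1, off = -1, and the grid size and seed are arbitrary choices):

```python
import numpy as np

def init_grid_states(n=10, p_flip=0.1, seed=0):
    # Start with all units off (-1), turn on a central square (+1),
    # then flip each unit independently with probability p_flip.
    rng = np.random.default_rng(seed)
    s = -np.ones((n, n))
    q = n // 4
    s[q:n - q, q:n - q] = 1.0           # central square turned on
    flips = rng.random((n, n)) < p_flip
    s[flips] *= -1.0                    # random noise
    return s

s0 = init_grid_states()
```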

Here is an example of the initial state of the network.

For this network, there are only two global energy minima: one with all units on and one with all units off. After settling, however, the network usually ends up in a "blend state" that mixes the two global minima, with some patches of the grid all on and others all off.

Here is an example of the final state of the network after settling. This is a local minimum, sometimes called a spurious attractor.

Matlab code:

You can run the above demo by loading initHopfield, and then repeatedly loading forwardHopfield. After each run of forwardHopfield, call plotHopActivations to display the states of all units in the network in a 2-dimensional layout.
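The settling process itself (the role played by repeated runs of forwardHopfield) can be sketched in Python/NumPy. This is an illustrative reimplementation, not the course's Matlab file, again with on = +1 and off = -1:

```python
import numpy as np

def grid_energy(s):
    # E = -sum over neighboring pairs of s_i * s_j (all weights 1.0);
    # minimized when every unit agrees with its four neighbors.
    return -(np.sum(s[:-1, :] * s[1:, :]) + np.sum(s[:, :-1] * s[:, 1:]))

def settle(s, n_sweeps=20, seed=0):
    # Asynchronous updates: visit units in random order, setting each
    # to the sign of the sum of its four nearest neighbors (units on
    # the border simply have fewer neighbors; no wrap-around).
    rng = np.random.default_rng(seed)
    s = s.copy()
    rows, cols = s.shape
    for _ in range(n_sweeps):
        for idx in rng.permutation(rows * cols):
            i, j = divmod(idx, cols)
            total = 0.0
            if i > 0:        total += s[i - 1, j]
            if i < rows - 1: total += s[i + 1, j]
            if j > 0:        total += s[i, j - 1]
            if j < cols - 1: total += s[i, j + 1]
            s[i, j] = 1.0 if total >= 0 else -1.0
    return s

# Settle a random grid: the energy never increases, and the result is
# typically a "blend" of all-on and all-off patches.
rng = np.random.default_rng(1)
s0 = np.where(rng.random((10, 10)) < 0.5, 1.0, -1.0)
s1 = settle(s0)
```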

Boltzmann Machines

The binary Boltzmann machine is very similar to the binary Hopfield network, with the addition of three features:

- stochastic units: rather than deterministically adopting the lower-energy state, a unit turns on with probability 1/(1 + exp(-ΔE/T)), where ΔE is the energy gap between its off and on states and T is a temperature parameter
- simulated annealing: the temperature starts high, allowing the network to escape local minima, and is gradually lowered so that the network settles near a global minimum
- a learning procedure: the weights can be learned from examples by comparing unit co-occurrence statistics in a phase where the visible units are clamped to data and a phase where the network runs freely
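The stochastic update rule of the Boltzmann machine, combined with an annealing schedule, can be sketched as follows. This is an illustrative Python/NumPy version (0/1 units and no bias terms are assumed; boltzmann_settle is an illustrative name, not course code):

```python
import numpy as np

def boltzmann_settle(W, s, temps, seed=0):
    # Simulated annealing for a binary (0/1) Boltzmann machine.
    # At temperature T, unit i turns on with probability
    #   p = 1 / (1 + exp(-gap_i / T)),  gap_i = sum_j w_ij * s_j,
    # so a high T lets the network jump out of poor local minima and
    # a low T makes it behave like a deterministic Hopfield network.
    rng = np.random.default_rng(seed)
    s = s.copy()
    for T in temps:                     # a decreasing schedule
        for i in rng.permutation(len(s)):
            gap = W[i] @ s
            p_on = 1.0 / (1.0 + np.exp(-gap / T))
            s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

# Two mutually excitatory units: the annealed network usually ends
# with both units on, the minimum-energy state.
W = np.array([[0., 1.], [1., 0.]])
s = boltzmann_settle(W, np.array([1., 0.]), temps=[4.0, 2.0, 1.0, 0.1])
```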

In class, we saw a demonstration on overhead transparencies of the Boltzmann machine performing figure-ground segregation. This network was hard-wired (i.e. the weights were hand-selected; no learning). Some units were designated as "edge units", each with a particular direction and orientation, while others were designated as "figure-ground units". At each image location there was a full set of edge units in every possible orientation (horizontal or vertical) and direction (left, right, up or down), and a full set of figure/ground units (one of each).

The weights could be excitatory or inhibitory, and represented particular constraints among the figure/ground and edge units. For example, an edge unit at one location would inhibit the edge unit of the same orientation but opposite direction at the same location. As another example, a vertical, rightward-pointing edge unit would excite the figure unit at the next image location to the right and inhibit the ground unit at that location, while inhibiting the figure unit to the left and exciting the ground unit to the left.

The entire network was initialized with the appropriate edge units turned on and all other units off, and then every unit's state was randomly flipped with some small probability so that the input was noisy. Unit states were then updated using simulated annealing. The network was shown to be able to fill in continuous regions and label them as either figure or ground, even when the region was non-convex (e.g. the letter C). The network could also fill in non-continuous edges, exhibiting "illusory contours".

