What is a restricted Boltzmann machine?

A restricted Boltzmann machine (RBM) is a simple stochastic artificial neural network. It was developed by Geoffrey Hinton and Terrence J. Sejnowski in 1985. Such networks are named after the Boltzmann distribution. Unfortunately, a Boltzmann machine without restrictions on its connections is difficult to train. Restricting the connections between neurons, however, drastically simplifies the learning process; the result is a restricted Boltzmann machine, which is suitable for solving many practical problems.

Such a Boltzmann machine is a network of neurons for which, as in a Hopfield network, an energy is defined. The neurons take on only binary values, 0 or 1, and behave stochastically. The restricted Boltzmann machine consists of visible units and hidden units.
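The energy mentioned above can be written down concretely. A minimal sketch in NumPy (the variable names, shapes, and example values are our own illustration, not from the text): the energy of a joint configuration of visible units v and hidden units h, with weights W and biases a and b, is E(v, h) = -a·v - b·h - v·W·h.

```python
import numpy as np

def rbm_energy(v, h, W, a, b):
    """Energy of a joint configuration (v, h) of a binary RBM:
    E(v, h) = -a.v - b.h - v.W.h"""
    return -np.dot(a, v) - np.dot(b, h) - v @ W @ h

# Tiny example: 3 visible units, 2 hidden units
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 2))   # edge weights
a = np.zeros(3)                          # visible biases
b = np.zeros(2)                          # hidden biases
v = np.array([1.0, 0.0, 1.0])            # a binary visible state
h = np.array([0.0, 1.0])                 # a binary hidden state
print(rbm_energy(v, h, W, a, b))
```

Low-energy configurations are the probable ones: the network assigns each configuration a probability proportional to exp(-E(v, h)), which is exactly the Boltzmann distribution the networks are named after.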

A feature vector is applied to the visible units. There are no connections among the visible units and none among the hidden units, but every visible unit is connected to every hidden unit.

Together they form a bipartite, undirected graph. The parameters to be learned are the weights of the edges between the visible and hidden units, as well as the so-called bias vectors of the hidden and visible units. Learning is done using a contrastive divergence algorithm.
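The contrastive divergence step can be sketched as follows. This is an illustrative CD-1 implementation under our own assumptions (binary units, NumPy arrays, learning rate 0.1), not the text's reference code: sample the hidden units from the data (positive phase), run one Gibbs step back to the visible layer and up again (negative phase), and update weights and biases by the difference of the two correlation statistics.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, rng, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0   : batch of visible vectors, shape (batch, n_visible)
    W    : weights, shape (n_visible, n_hidden)
    a, b : visible and hidden bias vectors
    """
    # Positive phase: hidden probabilities given the data, then a sample
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step down to the visible layer and up again
    pv1 = sigmoid(h0 @ W.T + a)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)
    # Update: data correlations minus reconstruction correlations
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

# Example: 6 visible units, 3 hidden units, a batch of 4 binary vectors
rng = np.random.default_rng(42)
W = rng.normal(scale=0.01, size=(6, 3))
a, b = np.zeros(6), np.zeros(3)
data = (rng.random((4, 6)) < 0.5).astype(float)
W, a, b = cd1_update(data, W, a, b, rng)
```

Repeating this update over many batches pushes the model distribution toward the data distribution without ever computing the intractable partition function.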

What are restricted Boltzmann machines used for?

Restricted Boltzmann machines play an important role in dimensionality reduction, classification and regression, collaborative filtering, feature selection and feature extraction, feature learning, and topic modelling.

Where does the name Boltzmann machines come from?

Boltzmann machines are named after the Austrian physicist Ludwig Boltzmann, who derived the Boltzmann distribution.

How does the simulated annealing process work?

This process uses an annealing schedule: the initial temperature, the final temperature, and a temperature-decay formula together determine how the temperature is lowered over time. At each temperature, the transition probabilities of the neurons are calculated. The positive learning phase is carried out with a self-associative or hetero-associative BM network; self-associative BM networks additionally have a reverse learning phase. Here, signals are presented to the network, and the input and output nodes as well as the hidden nodes can change their states freely.
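The schedule and the transition probabilities can be sketched in a few lines. This is a minimal illustration under our own assumptions (a geometric decay formula and the standard sigmoid transition rule; the function names are ours): the temperature is repeatedly multiplied by a decay factor until it falls below the final temperature, and at temperature T a neuron switches on with probability 1 / (1 + exp(-ΔE / T)), where ΔE is the energy gap of that flip.

```python
import math

def geometric_schedule(t_start, t_end, decay):
    """Yield temperatures from t_start down to t_end,
    multiplying by `decay` (0 < decay < 1) each step."""
    t = t_start
    while t > t_end:
        yield t
        t *= decay

def flip_probability(delta_e, temperature):
    """Transition probability of a neuron: chance of switching on,
    given the energy gap delta_e of the flip, at this temperature."""
    return 1.0 / (1.0 + math.exp(-delta_e / temperature))

temps = list(geometric_schedule(10.0, 0.1, 0.9))
print(len(temps), temps[0], temps[-1])
```

At high temperature the flip probability stays near 0.5 regardless of ΔE, so the network explores freely; as the temperature decays, the neurons increasingly commit to energy-lowering states.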

How can a Boltzmann machine be used to solve NP-complete problems?

For such problems, a direct calculation of the solution is not feasible. The solution space is therefore represented by a probability distribution, and the algorithm performs a random walk through this solution space, designed so that the solution states are as probable as possible. The Boltzmann machine provides the most general formulation of this approach, and with its help the solution space can be searched as efficiently as possible.
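As a concrete illustration of such a random walk, here is a toy example of our own (not from the text): simulated annealing on Max-Cut, an NP-complete problem, over binary node assignments. Good cuts are treated as low-energy states, so the walk accepts improvements always and worsening moves with a Boltzmann probability that shrinks as the temperature decays.

```python
import math
import random

def max_cut_value(assignment, edges):
    """Number of edges cut by a 0/1 node assignment."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def anneal_max_cut(n_nodes, edges, steps=5000, t0=2.0, t_end=0.01, seed=0):
    """Random walk over assignments, biased toward high-value cuts."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_nodes)]
    cur = max_cut_value(state, edges)
    best, best_val = state[:], cur
    t = t0
    decay = (t_end / t0) ** (1.0 / steps)
    for _ in range(steps):
        i = rng.randrange(n_nodes)
        state[i] ^= 1                      # propose flipping one node
        new = max_cut_value(state, edges)
        # Accept better cuts always; worse cuts with Boltzmann probability
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new
            if cur > best_val:
                best, best_val = state[:], cur
        else:
            state[i] ^= 1                  # reject: undo the flip
        t *= decay
    return best, best_val

# 4-cycle graph: an alternating assignment cuts all 4 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best, val = anneal_max_cut(4, edges)
print(val)
```

The early high-temperature phase lets the walk escape poor assignments; the late low-temperature phase makes the most probable states coincide with the best cuts found.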