Backpropagation

What is backpropagation?

Backpropagation (also called backpropagation of error, or error backpropagation) is a widely used technique in AI for training artificial neural networks. It belongs to the group of supervised learning methods and can be seen as a generalisation of the delta rule to multilayer networks: an external teacher must supply the desired output, the target value, for every input. Backpropagation is a special case of a general gradient method, optimising with respect to the mean squared error.
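The delta rule mentioned above can be sketched in a few lines: a single linear neuron is trained by gradient descent on the mean squared error. The data and learning rate below are made-up illustration values, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(100, 3))           # inputs
true_w = np.array([1.5, -2.0, 0.5])     # weights the "teacher" knows
y = X @ true_w                          # teacher-provided target values

w = np.zeros(3)                         # weights to be learned
lr = 0.1                                # learning rate (assumed value)

for _ in range(200):
    y_hat = X @ w                       # forward pass of the linear neuron
    error = y_hat - y                   # difference to the desired output
    grad = X.T @ error / len(X)         # gradient of the MSE w.r.t. w
    w -= lr * grad                      # delta-rule step against the gradient

print(np.round(w, 2))                   # converges towards true_w
```

Backpropagation extends exactly this update to networks with hidden layers, where the error must first be propagated backwards before the same kind of step can be taken.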

The procedure was already proposed by several authors in the 1970s, but then fell into oblivion for about a decade until it was rediscovered independently by various researchers. Today, backpropagation is one of the most important methods for training artificial neural networks. Its key advantage over the plain delta rule is that hidden layers can be included and trained in a meaningful way; as with all supervised methods, the desired output must be known for each training pattern.

How does error minimisation work?

The backpropagation algorithm proceeds in the following phases. First, an input pattern is applied and propagated forward through the network. Then all outputs of the network are compared with the desired outputs, and the difference between the two is taken as the error of the network. This error is propagated back from the output layer towards the input layer. Finally, the weights of the neuron connections are adjusted according to their influence on the error. In this way, when the input is applied again, the output moves closer to the desired and expected one; the task is simply to minimise the error function.
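The phases above can be sketched for a tiny two-layer network. This is a hedged illustration, not the article's own code; the XOR data, layer sizes, and learning rate are assumptions chosen to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])          # desired outputs (XOR)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 1.0, []
for _ in range(5000):
    # 1. propagate the input pattern forward through the network
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # 2. compare outputs with the desired outputs: the difference is the error
    losses.append(np.mean((out - y) ** 2))
    # 3. propagate the error back from the output layer ...
    delta2 = (out - y) * out * (1 - out)
    # ... through to the hidden layer
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    # 4. adjust the weights according to their influence on the error
    W2 -= lr * h.T @ delta2; b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1; b1 -= lr * delta1.sum(axis=0)
```

After training, the recorded loss has decreased, which is exactly the error minimisation the section describes.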

What does backpropagation do?

In backpropagation, the weights are adjusted against the error, so an error signal that is as exact as possible is required as input. The error is calculated at the output layer via an error function (loss function), typically the mean squared error (MSE) or the cross entropy. Backpropagation therefore consists of two steps: the error calculation, by comparing the prediction at the output layer with the target values, and the error feedback to the individual neurons of the hidden layers, where the weights are adjusted against the calculated gradient of the loss function.
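The two loss functions named above can be computed directly; the target and prediction values below are made up purely for illustration.

```python
import numpy as np

y_true = np.array([1.0, 0.0, 1.0, 1.0])        # target values
y_pred = np.array([0.9, 0.2, 0.8, 0.6])        # network predictions

# Mean squared error: average of the squared differences
mse = np.mean((y_true - y_pred) ** 2)

# Binary cross entropy: penalises confident wrong probabilities heavily
cross_entropy = -np.mean(
    y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)
)

print(round(mse, 4))             # 0.0625
print(round(cross_entropy, 4))   # 0.2656
```

MSE is the usual choice for regression-style outputs, cross entropy for probabilistic (classification) outputs.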

What is backpropagation through time?

Backpropagation through time (BPTT) is a gradient-based technique for training certain types of recurrent neural networks (RNNs), for example Elman networks. The algorithm was derived independently by several researchers. BPTT expands the computational graph of an RNN one time step at a time to expose the dependencies between model variables and parameters; backpropagation is then applied, using the chain rule, to calculate and store the gradients.
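A minimal sketch of this unrolling, assuming a toy Elman-style cell (the sequence length, sizes, and tanh activation are illustration choices, not from the article): the forward pass stores one hidden state per time step, and the backward pass applies the chain rule through the stored steps in reverse.

```python
import numpy as np

rng = np.random.default_rng(2)

T, d_in, d_h = 5, 3, 4
xs = rng.normal(size=(T, d_in))                 # input sequence
target = rng.normal(size=d_h)                   # target for the final state

W_x = rng.normal(scale=0.5, size=(d_in, d_h))   # input weights
W_h = rng.normal(scale=0.5, size=(d_h, d_h))    # recurrent weights

# Forward pass: expand the graph one time step at a time, keeping all states.
hs = [np.zeros(d_h)]
for t in range(T):
    hs.append(np.tanh(xs[t] @ W_x + hs[-1] @ W_h))

loss = 0.5 * np.sum((hs[-1] - target) ** 2)

# Backward pass: chain rule backwards through the stored time steps.
dW_x = np.zeros_like(W_x)
dW_h = np.zeros_like(W_h)
dh = hs[-1] - target                            # gradient at the last state
for t in reversed(range(T)):
    dz = dh * (1 - hs[t + 1] ** 2)              # through the tanh
    dW_x += np.outer(xs[t], dz)                 # accumulate per-step gradients
    dW_h += np.outer(hs[t], dz)
    dh = dz @ W_h.T                             # pass gradient to step t-1
```

Because the same weight matrices act at every time step, their gradients are accumulated across all unrolled steps, which is the defining feature of BPTT.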

What is the biological context?

The procedure is part of machine learning and uses a mathematically grounded learning mechanism. Although artificial neural networks are employed, there is no attempt to model biological learning mechanisms. Backpropagation is most likely not used by biological neurons, even though the method is mathematically sound: it is unclear how information about the target values could reach the synaptic clefts of the corresponding upstream neuronal layers. Moreover, biological neurons communicate via binary state changes (so-called spikes) rather than continuous values, and they are time-sensitive, whereas backpropagation assumes perfectly synchronised, discrete steps in time.
