Backpropagation is performed in a similar way to that of the feed-forward MLP. The difference here is that the connections between the input and hidden layers for each time segment represent a single set of real connections. The weight adjustment for each connection is therefore calculated as the sum of the individual $\Delta w$ values over all time segments (see equation 6):

$$\Delta w_{jk}(t) = \sum_{s} \left[ \eta\, \delta_{j,s}\, o_{k,s} + \alpha\, \Delta w_{jk,s}(t-1) \right] \qquad (6)$$

where: $\Delta w_{jk,s}(t-1)$ is the previous change in the weight between nodes k and j for segment s
    $\delta_{j,s}$ is the error signal at node j and segment s
    $o_{k,s}$ is the output at node k and segment s
    $\eta$ is the learning rate and $\alpha$ is the momentum term
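To make the shared-weight update concrete, the following is a minimal sketch of accumulating equation 6 over time segments, not taken from the project's code. The function name `bptt_weight_update`, the NumPy representation, and the default values of `eta` and `alpha` are illustrative assumptions.

```python
import numpy as np

def bptt_weight_update(inputs, deltas, prev_dw, eta=0.1, alpha=0.9):
    """Sum the per-segment changes for one shared input-to-hidden
    weight matrix, as in equation 6 (illustrative sketch).

    inputs  : per-segment input-node output vectors o_{k,s}
    deltas  : per-segment hidden-node error vectors delta_{j,s}
    prev_dw : per-segment weight changes from the previous update,
              i.e. the momentum terms Delta w_{jk,s}(t-1)
    """
    dw_segments = []
    for o_k, delta_j, last_dw in zip(inputs, deltas, prev_dw):
        # eta * delta_{j,s} * o_{k,s} + alpha * Delta w_{jk,s}(t-1)
        dw_segments.append(eta * np.outer(delta_j, o_k) + alpha * last_dw)
    # The single set of real connections receives the sum over segments;
    # the per-segment changes are kept for the next momentum terms.
    return sum(dw_segments), dw_segments
```

Note that the per-segment lists in this sketch are exactly the stored values the next paragraph describes as the technique's storage cost.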

This technique has the disadvantage of needing to store the input values of each pattern in the sequence, along with the different values of $\delta$ for the hidden nodes at each segment. However, as this project will not be looking at particularly long sequences, this was not regarded as a major problem.

3.5. Kohonen Self-Organising Maps

The backpropagation techniques used in training MLPs rely on the target classification for each training pattern being known in advance; this type of training is therefore referred to as 'supervised learning'. Self-Organising Maps (SOMs) do not require this information during training and therefore undergo 'unsupervised learning'.
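As a concrete illustration of unsupervised training, the following is a minimal sketch of a Kohonen SOM update loop; it is not the project's implementation, and the grid size, linear decay schedules, and Gaussian neighbourhood function are illustrative choices only.

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), epochs=100,
              lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D Kohonen SOM on unlabelled data (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    weights = rng.random((rows, cols, data.shape[1]))
    # Grid coordinates of every map node, for neighbourhood distances.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)                # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3   # shrinking neighbourhood
        for x in rng.permutation(data):
            # Best-matching unit: node whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU on the grid.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))
            # Move all nodes towards x, weighted by the neighbourhood.
            weights += lr * h[..., None] * (x - weights)
    return weights
```

No class labels appear anywhere in the loop: the map organises itself purely from the structure of the input vectors.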