3.4.2. Recurrent MLPs

The MLP described above can respond only to spatial information, i.e. data that does not vary with time. In order to present both spatial and temporal information to a network, the network must be able to hold information about its previous states, i.e. the states arising from previous input patterns. One way of achieving this is to introduce a context layer that is fed by certain nodes within the network. The output of these context nodes is then presented back to the network on the next forward pass. A complete piece of input data presented to the network is thus made up of a finite number of consecutive input patterns.
The type of recurrent network that will be used in this project is the type developed by Elman, the structure of which is given in figure 4 (Elman, 1990). It uses the outputs of the hidden-layer nodes as the input to the context layer, which has the same number of nodes as the hidden layer. The solid arrows represent complete connectivity between layers; the dotted arrows represent single connections between each hidden node and its corresponding context node.
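The forward pass of an Elman network can be sketched as follows. This is a minimal illustration only, not the project's actual implementation: the layer sizes, random weights and sigmoid activation are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 4, 2            # layer sizes (illustrative)

W_ih = rng.normal(size=(n_hid, n_in))   # input -> hidden weights
W_ch = rng.normal(size=(n_hid, n_hid))  # context -> hidden weights
W_ho = rng.normal(size=(n_out, n_hid))  # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(sequence):
    """Run one input sequence through the network, carrying the
    context (a copy of the previous hidden activations) forward."""
    context = np.zeros(n_hid)           # context starts empty
    outputs = []
    for x in sequence:
        # hidden layer sees the current input pattern AND the context
        hidden = sigmoid(W_ih @ x + W_ch @ context)
        outputs.append(sigmoid(W_ho @ hidden))
        # dotted arrows in figure 4: one-to-one copy back to context
        context = hidden.copy()
    return outputs

seq = [rng.normal(size=n_in) for _ in range(5)]
outs = forward(seq)
```

Note that the context-to-hidden connections (`W_ch`) are fully trainable weights, while the hidden-to-context copy itself is a fixed one-to-one transfer.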

3.4.3. Unfolding the network in time

Backpropagation can be performed for this type of network if the network is 'unfolded in time' (Mozer, 1989). This technique involves creating a model of the network that contains a copy of the input and hidden layers for each input pattern in the input sequence. This model can be seen in figure 5.
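The unfolding idea can be sketched in code: the forward pass keeps one copy of the hidden state per time step, and the backward pass walks back through those copies, with every copy contributing to the gradient of the single shared weight matrices. The sizes, tanh activation and squared-error loss on the final hidden state are assumptions for illustration, not the project's training setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 2, 3                       # illustrative sizes

W_ih = rng.normal(size=(n_hid, n_in)) * 0.5   # input -> hidden (shared)
W_ch = rng.normal(size=(n_hid, n_hid)) * 0.5  # context -> hidden (shared)

def forward(xs):
    """Forward pass, storing one hidden-state 'copy' per time step,
    exactly as in the unfolded model."""
    hs = [np.zeros(n_hid)]
    for x in xs:
        hs.append(np.tanh(W_ih @ x + W_ch @ hs[-1]))
    return hs

def loss(xs, target):
    # squared-error loss on the final hidden state (assumed for the demo)
    return 0.5 * np.sum((forward(xs)[-1] - target) ** 2)

def bptt(xs, target):
    """Backward pass through the time-unfolded copies: gradients from
    every copy are summed into the shared weight matrices."""
    hs = forward(xs)
    dW_ih = np.zeros_like(W_ih)
    dW_ch = np.zeros_like(W_ch)
    dh = hs[-1] - target                 # error at the final copy
    for t in range(len(xs) - 1, -1, -1):
        dz = dh * (1.0 - hs[t + 1] ** 2) # back through tanh
        dW_ih += np.outer(dz, xs[t])     # this copy's contribution
        dW_ch += np.outer(dz, hs[t])
        dh = W_ch.T @ dz                 # pass error to the earlier copy
    return dW_ih, dW_ch
```

A finite-difference check confirms that the summed per-copy gradients match the gradient of the loss with respect to the shared weights, which is the point of the unfolding construction.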