3.3. Neural Networks

3.3.1.  Background

A neural network, in basic terms, is a computational device whose design draws on the workings of the biological brain. It comprises a number of nodes (or neurons), interconnected by adjustable coefficients known as weights (or synaptic weights). It is in these weights that the computational power of the network is stored, and their values are set by a process known as 'training'.

3.4.  Multi-Layer Perceptron (MLP)

The basic structure of the MLP is given in figure 3. An input vector representing the pattern to be learned or identified is presented to the input nodes. Each node calculates its output by forming the weighted sum of its inputs and passing this value through an activation function (in the case of the input layer, the input vector is simply transferred to the outgoing connections).

The activation function of each node has an associated threshold, provided by the weighted connection from a bias node. The activation function used here is the sigmoid with a unity slope parameter. The output of each node is given by:

  $o_j = \varphi(v_j) = \dfrac{1}{1 + e^{-v_j}}$, with $v_j = \sum_k w_{jk}\, o_k$

  where:  $w_{jk}$ is the weight of the connection from node $k$ to node $j$
          $o_k$ is the output of node $k$
          $o_j$ is the output of node $j$  (Haykin, 1999)
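To make the node computation concrete, the following short Python sketch (assuming NumPy; the function and variable names are illustrative choices, not taken from the text) computes the output of a single node, with the threshold folded in as the weight from a constant-output bias node:

    import numpy as np

    def node_output(weights, inputs):
        # v_j = sum_k w_jk * o_k  (the threshold enters as the weight
        # from a bias node whose output is held at a constant 1.0)
        v = np.dot(weights, inputs)
        # Unity-slope sigmoid: o_j = 1 / (1 + e^(-v_j))
        return 1.0 / (1.0 + np.exp(-v))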

The action of presenting an input pattern to the network and obtaining output values is referred to as a forward pass.
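A forward pass can thus be sketched as the layer-by-layer repetition of this node computation. The following is a minimal Python/NumPy sketch; the layer sizes, the random weights, and the convention of appending a constant 1.0 as the bias node's output are assumptions made for the example:

    import numpy as np

    def forward_pass(input_vector, layer_weights):
        # layer_weights is a list of weight matrices, one per layer;
        # row j of each matrix holds the weights w_jk feeding node j.
        o = np.asarray(input_vector, dtype=float)
        for W in layer_weights:
            o = np.append(o, 1.0)            # constant bias node output
            v = W @ o                        # weighted sums v_j
            o = 1.0 / (1.0 + np.exp(-v))     # sigmoid outputs o_j
        return o

    # Example: 3 inputs -> 4 hidden nodes -> 2 output nodes
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((4, 3 + 1)),   # hidden layer (+ bias)
               rng.standard_normal((2, 4 + 1))]   # output layer (+ bias)
    print(forward_pass([0.2, -0.5, 0.9], weights))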