Neural network implementations

This chapter presents four neural network implementations: back-propagation neural networks, the learning vector quantizer (LVQ), Kohonen's self-organizing feature map networks, and evolutionary multilayer perceptron neural networks. The back-propagation source code supports one or more hidden layers; the number of hidden layers and the number of processing elements (PEs) in each layer can be specified in the run file. The classification of Iris data is included as a benchmark problem to be solved. The chapter also discusses issues, such as topology, that arise when implementing neural networks on personal computers. All four of the neural networks implemented are layered networks. The back-propagation networks have more than two layers (at least one hidden layer), while the Kohonen networks have only two: the LVQ-I and self-organizing feature map networks each consist of a two-layer feedforward topology in which the input layer is fully connected to the output layer.
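To make the back-propagation configuration concrete, the following is a minimal sketch (not the chapter's source code) of a feedforward network whose hidden-layer sizes are given as a list, mirroring the idea of specifying the number of hidden layers and PEs per layer in a run file. The layer sizes, learning rate, and the XOR training set (used here as a tiny stand-in for the Iris benchmark) are all illustrative assumptions.

```python
import math
import random

def make_net(sizes, seed=1):
    """sizes = [n_inputs, hidden1, ..., n_outputs]; returns weight matrices.

    weights[k][j][i] connects PE i of layer k to PE j of layer k+1;
    the last weight in each row is the bias.
    """
    rng = random.Random(seed)
    return [[[rng.uniform(-0.5, 0.5) for _ in range(sizes[k] + 1)]
             for _ in range(sizes[k + 1])]
            for k in range(len(sizes) - 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(net, inputs):
    """Returns the activations of every layer, input layer first."""
    acts = [list(inputs)]
    for layer in net:
        prev = acts[-1] + [1.0]          # append constant bias input
        acts.append([sigmoid(sum(w * p for w, p in zip(row, prev)))
                     for row in layer])
    return acts

def train_pattern(net, inputs, targets, lr=0.5):
    """One back-propagation step on one pattern; returns summed squared error."""
    acts = forward(net, inputs)
    # Output-layer deltas.
    deltas = [[(t - o) * o * (1.0 - o) for t, o in zip(targets, acts[-1])]]
    # Hidden-layer deltas, propagated backwards through each weight layer.
    for k in range(len(net) - 1, 0, -1):
        layer_deltas = []
        for i, a in enumerate(acts[k]):
            err = sum(deltas[0][j] * net[k][j][i] for j in range(len(net[k])))
            layer_deltas.append(err * a * (1.0 - a))
        deltas.insert(0, layer_deltas)
    # Weight updates (including bias weights).
    for k, layer in enumerate(net):
        prev = acts[k] + [1.0]
        for j, row in enumerate(layer):
            for i in range(len(row)):
                row[i] += lr * deltas[k][j] * prev[i]
    return sum((t - o) ** 2 for t, o in zip(targets, acts[-1]))

# A 2-4-1 topology (one hidden layer of 4 PEs) trained on XOR.
patterns = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = make_net([2, 4, 1])
errors = []
for epoch in range(5000):
    errors.append(sum(train_pattern(net, x, t) for x, t in patterns))
```

Changing the first argument of `make_net`, e.g. to `[2, 6, 4, 1]`, adds a second hidden layer without touching the training code, which is the flexibility the run-file topology specification provides.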