In this paper we present an approach to training feedforward neural networks on massively parallel SIMD architectures. To cover a wide range of applications, we focus on the flexibility of the load-balancing routines. Our approach is characterized by three important properties: 1. All four types of parallelism inherent in the training phase are exploited. 2. In a preprocessing step, neural networks are transformed into equivalent topologies that are better suited for parallel computation. 3. Each learning task can be parallelized in several different ways, and the best one is chosen according to estimates of computing efficiency.
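The third property, selecting among alternative parallelizations by estimated efficiency, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual cost model: the scheme names, the `comm_cost` values, and the efficiency formula (useful work divided by work plus an assumed per-PE communication overhead) are all placeholder assumptions.

```python
# Hypothetical sketch of the scheme-selection step: each learning task can be
# parallelized in several ways, and the variant with the highest estimated
# computing efficiency is chosen. The cost figures below are invented.

def estimated_efficiency(scheme, layers, batch_size, num_pes):
    """Crude estimate: useful work divided by work plus an assumed
    per-processing-element communication overhead for the scheme."""
    # Arithmetic work of one training pass: weight-matrix sizes times batch size.
    work = sum(a * b for a, b in zip(layers, layers[1:])) * batch_size
    overhead = scheme["comm_cost"] * num_pes
    return work / (work + overhead)

def choose_scheme(schemes, layers, batch_size, num_pes):
    """Pick the parallelization scheme with the best estimated efficiency."""
    return max(schemes,
               key=lambda s: estimated_efficiency(s, layers, batch_size, num_pes))

# Illustrative candidate schemes with made-up communication costs.
schemes = [
    {"name": "node-parallel",     "comm_cost": 50.0},
    {"name": "training-parallel", "comm_cost": 500.0},
    {"name": "layer-pipelined",   "comm_cost": 200.0},
]

best = choose_scheme(schemes, layers=[64, 32, 10], batch_size=128, num_pes=1024)
print(best["name"])  # prints "node-parallel" under these assumed costs
```

Under this toy model, the scheme with the lowest communication overhead relative to arithmetic work wins; in the paper's setting the estimate would instead reflect the actual SIMD machine's load-balancing and communication characteristics.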