Massively Parallel Training of Multilayer Perceptrons with Irregular Topologies

In this paper we present an approach to the training of feedforward neural networks on massively parallel SIMD architectures. To cover a wide range of applications, we focus on the flexibility of the load-balancing routines. Our approach is characterized by three important properties: 1. All four types of parallelism inherent in the training phase are exploited. 2. In a preprocessing step, neural networks are transformed into equivalent topologies that are better suited to parallel computation. 3. Each learning task can be parallelized in several different ways, and the best one is chosen according to estimates of the computing efficiency.
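To illustrate the third property, the following is a minimal sketch of selecting a parallelization scheme by estimated efficiency. The cost model (compute time plus communication time on a given number of processing elements) and the scheme names are illustrative assumptions, not the paper's actual routines.

```python
# Sketch: pick the parallelization scheme with the highest estimated efficiency.
# The cost model and candidate schemes below are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Scheme:
    name: str            # e.g. "training-set parallel", "node parallel"
    compute_time: float  # estimated per-epoch compute time on the SIMD array
    comm_time: float     # estimated per-epoch communication/overhead time
    processors: int      # number of processing elements the scheme occupies

def efficiency(s: Scheme, serial_time: float) -> float:
    """Estimated efficiency = speedup divided by processors used."""
    parallel_time = s.compute_time + s.comm_time
    return (serial_time / parallel_time) / s.processors

def best_scheme(schemes: list[Scheme], serial_time: float) -> Scheme:
    """Choose the candidate parallelization with the highest efficiency estimate."""
    return max(schemes, key=lambda s: efficiency(s, serial_time))

# Hypothetical candidate mappings of one learning task onto the machine.
candidates = [
    Scheme("training-set parallel", compute_time=2.0, comm_time=0.5, processors=64),
    Scheme("node parallel",         compute_time=1.2, comm_time=1.0, processors=128),
]
print(best_scheme(candidates, serial_time=100.0).name)
```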