Massively Parallel Training of Multi Layer Perceptrons With Irregular Topologies
In this paper we present an approach to training feedforward neural networks on massively parallel SIMD architectures. To cover a wide range of applications, we focus on the flexibility of the load-balancing routines. Our approach is characterized by three important properties: 1. All four types of parallelism inherent in the training phase are exploited. 2. In a preprocessing step, neural networks are transformed into equivalent topologies that are better suited to parallel computation. 3. Each learning task can be parallelized in a number of different ways, the best of which is chosen based on estimates of computing efficiency.
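The abstract does not reproduce the efficiency model used for property 3, so the following is only a minimal Python sketch of the selection idea: each candidate parallelization of a learning task is given a crude efficiency score, and the highest-scoring one is chosen. The Strategy fields, the candidate names, and the scoring formula are all illustrative assumptions, not the paper's actual cost model.

```python
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str              # e.g. node, weight, or training-example parallelism (assumed labels)
    ops_per_step: float    # estimated useful arithmetic per training step
    pe_utilization: float  # estimated fraction of processing elements kept busy
    comm_overhead: float   # estimated communication cost per step (same units as ops)

def estimated_efficiency(s: Strategy) -> float:
    """Toy efficiency estimate: useful work scaled by PE utilization and
    discounted by communication overhead. A stand-in for the paper's
    (unspecified here) cost model."""
    return s.ops_per_step * s.pe_utilization / (s.ops_per_step + s.comm_overhead)

def choose_strategy(candidates: list[Strategy]) -> Strategy:
    # Pick the parallelization variant with the highest estimated efficiency.
    return max(candidates, key=estimated_efficiency)

if __name__ == "__main__":
    candidates = [
        Strategy("node parallelism",    ops_per_step=1e6, pe_utilization=0.60, comm_overhead=2e5),
        Strategy("weight parallelism",  ops_per_step=1e6, pe_utilization=0.85, comm_overhead=5e5),
        Strategy("example parallelism", ops_per_step=1e6, pe_utilization=0.95, comm_overhead=1e5),
    ]
    best = choose_strategy(candidates)
    print(f"selected: {best.name} (score {estimated_efficiency(best):.3f})")
```

In this toy setting, example parallelism wins because it combines high utilization with low communication cost; in the paper's framework, such estimates would be computed per learning task and per target machine before training begins.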