Efficient Parallel Learning Algorithms for Neural Networks
Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. Beyond their superior convergence properties, techniques such as the Polak-Ribière conjugate-gradient method are also significantly more efficient than the backpropagation algorithm. These results are based on experiments performed on small Boolean learning problems and on the noisy, real-valued learning problem of handwritten character recognition.
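To make the comparison concrete, here is a minimal sketch (not the paper's implementation) of Polak-Ribière conjugate-gradient training for a small feedforward network on the XOR Boolean problem. The network shape (2-4-1), the backtracking line search, and the PR+ clamping of beta are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch: Polak-Ribiere conjugate gradient on a tiny 2-4-1
# feedforward network learning XOR. All hyperparameters are assumptions.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sizes = [(2, 4), (4, 1)]          # (inputs, hidden), (hidden, outputs)
n = sum(a * b + b for a, b in sizes)  # total number of weights and biases

def unpack(w):
    """Split the flat parameter vector into (W, b) pairs per layer."""
    params, i = [], 0
    for a, b in sizes:
        W = w[i:i + a * b].reshape(a, b); i += a * b
        c = w[i:i + b]; i += b
        params.append((W, c))
    return params

def loss_grad(w):
    """Sum-of-squares loss and its gradient (backprop) for the flat vector w."""
    (W1, b1), (W2, b2) = unpack(w)
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    err = out - y
    L = 0.5 * np.sum(err ** 2)
    d2 = err * out * (1 - out)                  # output-layer delta
    d1 = (d2 @ W2.T) * (1 - h ** 2)             # hidden-layer delta
    g = np.concatenate([(X.T @ d1).ravel(), d1.sum(0),
                        (h.T @ d2).ravel(), d2.sum(0)])
    return L, g

w = rng.normal(0.0, 0.5, n)
L, g = loss_grad(w)
L0 = L                      # remember the initial loss
d = -g                      # first search direction: steepest descent
for _ in range(200):
    # Crude backtracking line search along d.
    alpha = 1.0
    while True:
        Lnew, gnew = loss_grad(w + alpha * d)
        if Lnew < L or alpha < 1e-8:
            break
        alpha *= 0.5
    w = w + alpha * d
    # Polak-Ribiere beta, clamped at zero (PR+), which restarts to
    # steepest descent whenever beta would go negative.
    beta = max(0.0, gnew @ (gnew - g) / (g @ g + 1e-12))
    d = -gnew + beta * d
    L, g = Lnew, gnew
```

Unlike plain gradient descent, each iteration reuses the previous search direction through the beta term, which is what gives conjugate-gradient methods their faster convergence on problems like this.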