Model-free learning for synchronous and asynchronous quasi-static networks is presented. The network weights are continuously perturbed while the time-varying performance index is measured and correlated with the perturbation signals; the correlation output determines the changes in the weights. The perturbation may be applied either via noise sources or via orthogonal signals. Because the mechanism is invariant to the detailed network structure, it mitigates the effects of large variability between supposedly identical networks, as well as of implementation defects. The mechanism is local, regular, and completely distributed; it requires no central control and involves only a few global signals. It therefore allows integrated, on-chip learning in large analog and optical networks.
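The sketch below is a minimal illustration of this perturb-measure-correlate loop, not the paper's exact procedure: it assumes a one-sided simultaneous perturbation with small ±δ noise signals and a quadratic performance index on a toy linear network. All names (`performance_index`, `perturb_scale`, `learning_rate`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover a hidden weight vector from linear measurements.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true

def performance_index(w):
    """Scalar performance index J(w); lower is better. The learner treats
    this as a black box and never sees the network's internal structure."""
    return np.mean((X @ w - y) ** 2)

w = np.zeros(5)
learning_rate = 0.05
perturb_scale = 1e-3  # delta: amplitude of the perturbation signals

for step in range(2000):
    # Perturb all weights simultaneously with small random noise signals.
    z = rng.choice([-1.0, 1.0], size=w.shape) * perturb_scale
    # Measure the change in the performance index caused by the perturbation.
    dJ = performance_index(w + z) - performance_index(w)
    # Correlate the measured change with the perturbation signals:
    # dJ * z / delta^2 is an unbiased (to first order) gradient estimate,
    # obtained without any model of the network. Descend along it.
    w -= learning_rate * dJ * z / perturb_scale**2

print("weight error:", np.linalg.norm(w - w_true))
```

Because the update for each weight uses only that weight's own perturbation signal and the single global measurement dJ, the rule is local and fully distributed, which is what makes it attractive for on-chip analog or optical implementation.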