A hybrid algorithm for finding the global minimum of error function of neural networks and its applications
Yoshio Mogami | Norio Baba | Yasuhiro Shiraishi | Yutaka Yoshida | Motokazu Kohzaki
[1] Roger J.-B. Wets, et al. Minimization by Random Search Techniques, 1981, Math. Oper. Res.
[2] Scott E. Fahlman, et al. An empirical study of learning speed in back-propagation networks, 1988.
[3] C. M. Reeves, et al. Function minimization by conjugate gradients, 1964, Comput. J.
[4] S. Geman, et al. Diffusions for global optimization, 1986.
[5] N. Baba. A hybrid algorithm for finding a global minimum, 1983.
[6] Toshio Shoman, et al. A modified convergence theorem for a random optimization method, 1977, Inf. Sci.
[7] C. D. Gelatt, et al. Optimization by Simulated Annealing, 1983, Science.
[8] L. Mark Berliner. Bayesian control in mixture models, 1987.
[9] V. Černý. Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm, 1985.
[10] C. Hwang, et al. Diffusion for global optimization in R^n, 1987.
[11] N. Baba. Convergence of a random optimization method for constrained optimization problems, 1981.
[12] E. Polak. Introduction to linear and nonlinear programming, 1973.
[13] Norio Baba, et al. A new approach for finding the global minimum of error function of neural networks, 1989, Neural Networks.
[14] Robert A. Jacobs. Increased rates of convergence through learning rate adaptation, 1987, Neural Networks.