An Efficient Learning Algorithm Using Natural Gradient and Second Order Information of Error Surface

The natural gradient learning algorithm, which originated in information geometry, is known to provide a good solution to the slow convergence of gradient descent learning methods. Whereas the natural gradient algorithm is inspired by the geometric structure of the space of learning systems, other approaches accelerate learning by exploiting the second order information of the error surface. Although these second order methods do not achieve solutions as successful as those of natural gradient learning, their results demonstrated the usefulness of second order information in the learning process. In this paper, we combine these two approaches to obtain a more efficient learning algorithm. At each learning step, we compute a search direction by means of the natural gradient; when updating the parameters along that direction, we use the second order information of the error surface to determine an efficient learning rate. Through a simple experiment on a real-world problem, we confirmed that the proposed algorithm converges faster than the pure natural gradient learning algorithm.
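The combined update described above can be sketched as follows. This is a minimal illustration on a toy quadratic loss, not the paper's implementation: the Fisher information matrix `F` is replaced by a crude diagonal stand-in, and the step size is chosen by a Newton-style rule `eta = (g . d) / (d^T H d)`, which exactly minimizes a quadratic loss along the search direction `d`. All names (`A`, `w_star`, `F`) are illustrative assumptions.

```python
import numpy as np

# Toy quadratic loss  L(w) = 0.5 * (w - w_star)^T A (w - w_star);
# A and w_star are made up for illustration only.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive-definite curvature
w_star = np.array([1.0, -2.0])           # minimizer of the toy loss
w = np.zeros(2)

def grad(w):
    return A @ (w - w_star)

def hessian(w):
    return A  # constant Hessian for a quadratic loss

# Stand-in "Fisher information" matrix: here just the diagonal of A.
# In natural gradient learning proper, F comes from the model's
# output distribution, not from the loss curvature.
F = np.diag(np.diag(A))

for _ in range(30):
    g = grad(w)
    if np.linalg.norm(g) < 1e-12:
        break                             # already at the minimum
    d = np.linalg.solve(F, g)             # natural gradient direction F^{-1} g
    H = hessian(w)
    eta = (g @ d) / (d @ H @ d)           # second-order learning rate along d
    w = w - eta * d                        # parameter update
```

On this toy problem the iterates converge to `w_star`; the point of the sketch is only the structure of the update: direction from the (approximate) Fisher matrix, step size from the curvature of the error surface.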