A Fast, Compact Approximation of the Exponential Function
Neural network simulations often spend a large proportion of their time computing exponential functions. Since the exponentiation routines of typical math libraries are rather slow, replacing them with a fast approximation can greatly reduce the overall computation time. This article describes how exponentiation can be approximated by manipulating the components of a standard (IEEE-754) floating-point representation. The result approximates the exponential function as accurately as a lookup table with linear interpolation, but is significantly faster and more compact.
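A minimal C sketch of the idea described above, under stated assumptions: the scale factor 2^20 / ln 2, the bias 1023 * 2^20 shifted into the exponent field, and the error-correction constant 60801 follow values published for this technique; the union-based type punning, the endianness check, and the function name `fast_exp` are illustrative choices here, not verbatim code from the article.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#ifndef M_LN2
#define M_LN2 0.69314718055994530942
#endif

/* Approximate exp(y) by computing the desired IEEE-754 exponent
 * (plus a correction) as an integer and writing it directly into
 * the high 32 bits of a double.  The integer spill-over into the
 * mantissa bits acts like linear interpolation between powers of 2. */
static double fast_exp(double y)
{
    union {
        double d;
        struct {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
            int32_t i, j;   /* big-endian: high word comes first */
#else
            int32_t j, i;   /* little-endian: high word comes second */
#endif
        } n;
    } eco;

    /* 1048576 = 2^20 maps y/ln 2 into the exponent field;
     * 1072693248 = 1023 * 2^20 is the exponent bias in place;
     * 60801 is one published error-correction constant (assumed). */
    eco.n.i = (int32_t)(1048576.0 / M_LN2 * y) + (1072693248 - 60801);
    eco.n.j = 0;            /* clear the low mantissa bits */
    return eco.d;
}

int main(void)
{
    for (double y = -2.0; y <= 2.0; y += 1.0)
        printf("y = %5.1f   fast_exp = %12.6f   exp = %12.6f\n",
               y, fast_exp(y), exp(y));
    return 0;
}
```

Compiled with, e.g., `cc fast_exp.c -lm` on a little-endian machine, the approximation tracks libm's exp to within a few percent relative error over moderate arguments, which is consistent with the lookup-table-with-linear-interpolation accuracy claimed above; arguments must stay small enough that the scaled integer fits in 32 bits.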