The minimax distortion redundancy in empirical quantizer design

We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean-squared distortion of a vector quantizer designed from n independent and identically distributed (i.i.d.) data points using any design algorithm is at least Ω(n^(-1/2)) away from the optimal distortion for some distribution on a bounded subset of R^d. Together with existing upper bounds, this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically of order n^(-1/2). We also derive a new upper bound for the performance of the empirically optimal quantizer.
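The setting above can be illustrated with a small numerical sketch: design a k-level quantizer from n i.i.d. training points and measure its mean-squared distortion on fresh data. This is only an illustration of the design problem, not the paper's construction; the function names (`design_quantizer`, `distortion`) are made up here, and Lloyd's algorithm is used as a stand-in for an idealized empirically optimal design.

```python
import numpy as np

def design_quantizer(samples, k, iters=50, seed=0):
    """Design a k-level scalar quantizer from training samples via
    Lloyd's algorithm (a common surrogate for empirically optimal design)."""
    rng = np.random.default_rng(seed)
    # initialize the codebook with k distinct training points
    codebook = rng.choice(samples, size=k, replace=False).astype(float)
    for _ in range(iters):
        # nearest-codepoint assignment for each sample
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        # move each codepoint to the centroid of its cell
        for j in range(k):
            cell = samples[idx == j]
            if cell.size:
                codebook[j] = cell.mean()
    return codebook

def distortion(samples, codebook):
    """Mean-squared error of nearest-neighbor quantization."""
    return np.min((samples[:, None] - codebook[None, :]) ** 2, axis=1).mean()
```

Evaluating `distortion` on a large independent test sample, and subtracting the distortion of a quantizer designed on a much larger sample, gives an empirical estimate of the distortion redundancy whose decay in n the paper's bounds characterize.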
