Data visualization is one of the major applications of nonlinear dimensionality reduction. From the information retrieval perspective, the quality of a visualization can be evaluated by the extent to which the neighborhood relations of each data point are preserved while the number of unrelated points retrieved is minimized. This property can be quantified as a trade-off between the mean precision and mean recall of the visualization. Although some approaches formulate the visualization objective directly as a weighted sum of precision and recall, there is no systematic way to determine the optimal trade-off between the two, nor a clear interpretation of the optimal value. In this paper, we investigate the properties of the $\alpha$-divergence for information visualization, focusing on a particular range of $\alpha$ values. We show that minimizing the new cost function corresponds to maximizing a geometric mean between precision and recall, parameterized by $\alpha$. In contrast to earlier methods, no hand-tuning is needed: we can rigorously estimate the optimal value of $\alpha$ for given input data. To this end, we provide a statistical framework based on a novel distribution called Exponential Divergence with Augmentation (EDA). Through an extensive set of experiments, we show that the optimal value of $\alpha$ obtained with EDA corresponds to the optimal trade-off between precision and recall for a given data distribution.
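For reference, one common convention for the $\alpha$-divergence between discrete distributions $P = (p_i)$ and $Q = (q_i)$ is sketched below; this particular parameterization is an assumption for illustration and may differ from the exact convention adopted in the paper:

$$
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha(\alpha - 1)} \sum_i \left( p_i^{\alpha}\, q_i^{\,1-\alpha} \;-\; \alpha\, p_i \;+\; (\alpha - 1)\, q_i \right).
$$

Under this convention, the limit $\alpha \to 1$ recovers the Kullback-Leibler divergence $D_{\mathrm{KL}}(P \,\|\, Q)$ and the limit $\alpha \to 0$ recovers the reverse divergence $D_{\mathrm{KL}}(Q \,\|\, P)$; in the information retrieval view of visualization, these two limits emphasize recall and precision, respectively, so intermediate values of $\alpha$ interpolate between the two criteria.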