Handling Missing Values when Applying Classification Models