Sibyl: Understanding and Addressing the Usability Challenges of Machine Learning In High-Stakes Decision Making
Alexandra Zytek | Dongyu Liu | Rhema Vaithianathan | Kalyan Veeramachaneni
[1] Zachary Chase Lipton. The Mythos of Model Interpretability. ACM Queue, 2016.
[2] Minsuk Kahng et al. ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models. IEEE Transactions on Visualization and Computer Graphics, 2017.
[3] Yang Wang et al. Manifold: A Model-Agnostic Framework for Interpretation and Diagnosis of Machine Learning Models. IEEE Transactions on Visualization and Computer Graphics, 2018.
[4] Been Kim et al. Towards a Rigorous Science of Interpretable Machine Learning. arXiv:1702.08608, 2017.
[5] Mennatallah El-Assady et al. explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning. IEEE Transactions on Visualization and Computer Graphics, 2019.
[6] Cynthia Rudin et al. All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously. Journal of Machine Learning Research, 2019.
[7] Lei Xu et al. Modeling Tabular Data Using Conditional GAN. NeurIPS, 2019.
[8] Hyunil Kim et al. Lifetime Prevalence of Investigating Child Maltreatment Among US Children. American Journal of Public Health, 2017.
[9] Alexander M. Rush et al. LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks. IEEE Transactions on Visualization and Computer Graphics, 2016.
[10] Marko Bohanec et al. Perturbation-Based Explanations of Prediction Models. Human and Machine Learning, 2018.
[11] Cynthia Rudin et al. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 2018.
[12] Wendy E. Mackay et al. Human-Centred Machine Learning. CHI Extended Abstracts, 2016.
[13] Scott M. Lundberg et al. Explainable Machine-Learning Predictions for the Prevention of Hypoxaemia During Surgery. Nature Biomedical Engineering, 2018.
[14] Qian Yang et al. Designing Theory-Driven User-Centric Explainable AI. CHI, 2019.
[15] Kenney Ng et al. Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models. CHI, 2016.
[16] Tamara Munzner. A Nested Model for Visualization Design and Validation. IEEE Transactions on Visualization and Computer Graphics, 2009.
[17] Erika Schroeder et al. National Highway Traffic Safety Administration (NHTSA) Notes. Drugged Driving Expert Panel Report: A Consensus Protocol for Assessing the Potential of Drugs to Impair Driving. Annals of Emergency Medicine, 2012.
[18] Scott Lundberg et al. A Unified Approach to Interpreting Model Predictions. NIPS, 2017.
[19] Jimeng Sun et al. RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records. IEEE Transactions on Visualization and Computer Graphics, 2018.
[20] Steven M. Drucker et al. Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models. CHI, 2019.