Pierre Geurts | Henrik Boström | Amir Hossein Akhavan Rahnama | Judith Bütepage
[1] Dumitru Erhan, et al. A Benchmark for Interpretability Methods in Deep Neural Networks, 2018, NeurIPS.
[2] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digit. Signal Process.
[3] Himabindu Lakkaraju, et al. Robust and Stable Black Box Explanations, 2020, ICML.
[4] Been Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.
[5] Thomas Lukasiewicz, et al. Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods, 2019, arXiv.
[6] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, arXiv.
[7] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[8] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[9] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[10] Le Song, et al. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, 2018, ICML.
[11] Bernd Bischl, et al. Visualizing the Feature Importance for Black Box Models, 2018, ECML/PKDD.
[12] Henrik Boström, et al. A study of data and label shift in the LIME framework, 2019, arXiv.
[13] Tommi S. Jaakkola, et al. On the Robustness of Interpretability Methods, 2018, arXiv.