Dylan Slack
Title · Cited by · Year
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
D Slack, S Hilgard, E Jia, S Singh, H Lakkaraju
AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2020
Cited by 131* · 2020
Assessing the Local Interpretability of Machine Learning Models
D Slack, SA Friedler, C Scheidegger, C Dutta Roy
Workshop on Human Centric Machine Learning, NeurIPS, 2019
Cited by 24 · 2019
Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data
D Slack, S Friedler, E Givental
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2020
Cited by 11 · 2020
How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations
D Slack, S Hilgard, S Singh, H Lakkaraju
ICML IMLH Workshop, 2021
Cited by 9 · 2021
Differentially Private Language Models Benefit from Public Pre-training
G Kerrigan, D Slack, J Tuyls
EMNLP PrivateNLP Workshop, 2020
Cited by 1 · 2020
Fair Meta-Learning: Learning How to Learn Fairly
D Slack, S Friedler, E Givental
NeurIPS HCML Workshop, 2019
Cited by 1 · 2019
Counterfactual Explanations Can Be Manipulated
D Slack, S Hilgard, H Lakkaraju, S Singh
arXiv preprint arXiv:2106.02666, 2021
2021
Context, Language Modeling, and Multimodal Data in Finance
S Das, C Goggins, J He, G Karypis, S Krishnamurthy, M Mahajan, ...
The Journal of Financial Data Science, 2021
2021
Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy
D Slack, N Rauschmayr, K Kenthapadi
arXiv preprint arXiv:2102.06162, 2021
2021
On the Lack of Robust Interpretability of Neural Text Classifiers
MB Zafar, M Donini, D Slack, C Archambeau, S Das, K Kenthapadi
Findings of ACL, 2021
2021
Defuse: Debugging Classifiers Through Distilling Unrestricted Adversarial Examples
DZ Slack, N Rauschmayr, K Kenthapadi
2020
Expert-Assisted Transfer Reinforcement Learning
D Slack
Haverford College Thesis, 2019
2019