Amit Dhurandhar


Principal Research Staff Member


IBM Research - Yorktown Heights, Yorktown Heights, NY, USA


Welcome to Amit Dhurandhar's webpage.

I am originally from Pune, India. I am a Principal Research Staff Member at the IBM T.J. Watson Research Center in Yorktown Heights, NY. I completed my Ph.D. in the Department of Computer and Information Science and Engineering at the University of Florida (UF), Gainesville, where my advisor was Dr. Alin Dobra. My primary research areas are machine learning and data mining.

I admire originality and brilliance but believe that having the right attitude is more important in life. 

What's new?

  • Paper on suppressing interpretable concepts accepted to TMLR, 2024.
  • Paper on creating stable and unidirectional explanations accepted to NeurIPS, 2023.
  • (Invited) paper in Cell Patterns on diagnosing the current AI ethics debates featured by Montreal AI Ethics Institute, 2023.
  • Paper on unsupervised domain adaptation early accepted to MICCAI, 2023.
  • AIX360 tutorial with focus on new additions and industrial use cases accepted to KDD, 2023.
  • Paper on reprogramming LLMs for antibody infilling accepted to ICML, 2023.
  • Paper on using language to predict odor mixture similarity accepted to Chemical Senses, 2023.
  • Paper on practitioner-friendly bias mitigation accepted to FAccT, 2023.
  • Paper on clinical toxicity prediction with contrastive explanations accepted to Nature Sci. Reps., 2023.
  • Gave invited talk on CoFrNets at Morgan Stanley, 2023.
  • Paper studying XAI methods for stance detection accepted to CHIIR, 2023.
  • 2 papers (one on single domain generalization and another on RL explainability) accepted to AAAI, 2023.
  • CDO magazine awarded our work with an NGO Data4Good award, 2022.
  • Extended abstract on collaborative text summarization for chronic pain accepted to ML4H, 2022.
  • Paper on contrastive explanations for text accepted to EMNLP, 2022.
  • 2 papers on XAI accepted to NeurIPS, 2022.
  • Paper on knowledge transfer using a novel multihop approach accepted to IEEE ICKG, 2022.
  • Paper on connecting XAI metrics to usage contexts won best paper honorable mention at HCOMP, 2022.
  • Gave invited talk on Explainable AI at New England Statistical Society (NESS) Symposium, 2022.
  • Gave invited talk on Explainable AI, 2022.
  • XAI as applicable to Healthcare and Life Sciences paper accepted to Cell Patterns, 2022.
  • Paper on dynamic knowledge transfer accepted to ICLR, 2022.
  • Paper on cognitive biases in human decision making accepted to CSCW, 2022.
  • Paper on impact (and experiences) of the AIX360 toolkit accepted to IAAI, 2022.
  • Paper on a new interpretable neural architecture accepted to NeurIPS, 2021. 
  • Gave tutorial on Explainable AI: From Correlations to Causations at RBCDSAI DAI Bootcamp, 2021.
  • Served as panelist on ICML workshop on algorithmic recourse, 2021.
  • Gave invited talk at World Economic Forum (WEF) on Trustworthy AI, 2021.
  • Paper on contrastive explanations using high level features accepted (as oral) to KDD, 2021.
  • Gave invited talk at New England Statistical Society (NESS) on Trustworthy AI, 2021.
  • Received Corporate Technical Recognition (CTR) award (highest award given by IBM), 2021. 
  • Gave invited talk at IIT Ropar, 2021.
  • Invited to attend a Schloss Dagstuhl Seminar in 2021.
  • Paper on IRM for ITE accepted to ICASSP, 2021.
  • Paper on OoD generalization accepted to AISTATS, 2021.
  • Paper theoretically comparing ERM to IRM accepted to ICLR, 2021.
  • Gave industry keynote at ACM CODS-COMAD, 2021 (Talk link).
  • Paper on explaining anomalies accepted to AAAI, 2021.
  • Paper on model-agnostic PU learning was selected as Best of ICDM, 2020.
  • Our blog on counterfactual vs contrastive explanations in towardsdatascience, 2020.
    • This led to co-organizing an event on algorithmic recourse, whose recording is available here.
  • My blog on XAI in KDnuggets, 2020.
  • Invited to be on the Scientific Advisory Board for Beyond Explainable Artificial Intelligence initiative led by Andreas Holzinger and Wojciech Samek, 2020.
  • Slides for knowledge transfer to simple models talk given at Harvard, 2020.
  • Two papers on explainable AI accepted to NeurIPS, 2020.
  • Tutorial on Human-Centered Explainability in Healthcare presented at KDD, 2020.
  • Paper on AIX360 explainability toolkit accepted to JMLR, 2020.
  • Two papers (1 explainability and 1 causality) accepted to ICML, 2020.
  • Workshop on Human Interpretability in Machine Learning (WHI) accepted to ICML, 2020.
  • Received Outstanding Technical Achievement (OTA) award (highest award given by IBM Research), 2020.
  • Hands-on tutorial on AI Explainability 360 presented at FAccT, 2020.
  • Two papers mentioned as recent breakthroughs in olfaction using machine learning, 2019.
  • Tutorial on AI Explainability 360 given at MIT, 2019. (Video link)
  • Co-Lead in creation of open source AI Explainability 360 Toolkit, 2019. (Covered by VentureBeat, BetaNews, ZDNet)
  • Paper on selecting prototypical examples accepted to ICDM as regular paper, 2019.
  • Paper on teaching explanations accepted for an oral presentation at AIES, 2019.
  • Invited to attend a Schloss Dagstuhl Seminar in 2019.
  • Our work on improving simple models and contrastive explanations was featured in PC magazine, 2018.
  • Paper on predicting smells using natural language and interpretable methods accepted to Nature Communications, 2018. (Featured in Quartz)
  • Two papers on explainable AI accepted to NeurIPS, 2018.
  • Invited talk on formalizing interpretability given in the interpretability session at the European Conference on Data Analysis, 2018.
  • Our paper on contrastive explanations for deep learning models featured in Forbes, 2018.
  • Predicting Human Olfactory Perception from Chemical Features of Odor Molecules Paper accepted to Science, 2017. (New Yorker, Atlantic, Science News, The Biological Scene)
    • It was highlighted at the annual AAAS meeting as one of the breakthroughs published by Science, and is considered an advance beyond anything seen in the field in the past three decades.
  • Paper on a new clustering paradigm accepted to SDM, 2017.
  • NSF-SBIR Grant Panelist, 2016-2017.



Top collaborators

Pin-Yu Chen

Principal Research Scientist; Chief Scientist, RPI-IBM AI Research Collaboration