"When in Doubt, Cascade: Towards Building Efficient and Capable Guardrails." Manish Nagireddy, Inkit Padhi, et al. AIES 2025.
"WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia." Yufang Hou, Alessandra Pascale, et al. NeurIPS 2024.
"Value Alignment from Unstructured Text." Inkit Padhi, Karthikeyan Natesan Ramamurthy, et al. NeurIPS 2024.
"Final-Model-Only Data Attribution with a Unifying View of Gradient-Based Methods." Dennis Wei, Inkit Padhi, et al. NeurIPS 2024.
"Split, Unlearn, Merge: Leveraging Data Attributes for More Effective Unlearning in LLMs." Swanand Ravindra Kadhe, Farhan Ahmed, et al. ICML 2024.
Patent US11494802: "Guiding Customized Textual Persuasiveness to Meet Persuasion Objectives of a Communication at Multiple Levels." Granted 07 Nov 2022.
News: "IBM’s safety checkers top a new AI benchmark." Kim Martineau, 09 Apr 2025. Topics: AI; AI Transparency; Fairness, Accountability, Transparency; Generative AI; Natural Language Processing.
News: "An AI foundation model that learns the grammar of molecules." Payel Das, Youssef Mroueh, Inkit Padhi, Vijil Chenthamarakshan, Jerret Ross, and Brian Belgodere, 25 Jan 2023. Topics: Accelerated Discovery; AI; Foundation Models; Life Sciences; Materials Discovery.
Research: "IBM researchers check AI bias with counterfactual text." Inkit Padhi, Nishtha Madaan, Naveen Panwar, and Diptikalyan Saha, 05 Feb 2021. 5 minute read. Topics: AI Testing; Fairness, Accountability, Transparency.