The evaluation of Large Language Models (LLMs) increasingly relies on other LLMs acting as judges. However, current evaluation paradigms typically yield a single score or ranking, answering which model is better but not why. While essential for benchmarking, these top-level scores obscure the specific, actionable reasons behind a model's performance. To bridge this gap, we introduce CLEAR, an interactive, open-source package for LLM-based error analysis. CLEAR first generates per-instance textual feedback, then aggregates this feedback into a set of system-level error issues, and quantifies the prevalence of each identified issue. The package also provides an interactive dashboard that supports comprehensive error analysis. We demonstrate CLEAR's analysis on RAG and math benchmarks, and showcase its utility through a user case study.
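To make the three-step pipeline described above concrete, here is a minimal sketch of that kind of error-analysis flow. It is not CLEAR's actual API; all names (analyze_errors, toy_judge, toy_categorize) and the toy data are hypothetical, and the LLM judge and issue-clustering steps are replaced with stand-in functions.

```python
from collections import Counter
from typing import Callable

def analyze_errors(
    records: list[dict],                        # each record: {"input": ..., "output": ...}
    judge: Callable[[dict], str],               # step 1: LLM judge -> per-instance critique
    categorize: Callable[[str], str],           # step 2: map a critique to a system-level issue
) -> Counter:
    """Return per-issue counts (step 3: prevalence) over the evaluated records."""
    feedback = [judge(r) for r in records]      # per-instance textual feedback
    issues = [categorize(f) for f in feedback]  # system-level issue labels
    return Counter(issues)                      # prevalence of each identified issue

# Toy stand-ins for the LLM calls (assumptions for illustration only):
def toy_judge(record: dict) -> str:
    return "ok" if "[source]" in record["output"] else "answer lacks a supporting citation"

def toy_categorize(critique: str) -> str:
    return "No issue" if critique == "ok" else "Unsupported claim"

if __name__ == "__main__":
    data = [
        {"input": "q1", "output": "an answer"},
        {"input": "q2", "output": "an answer [source]"},
    ]
    print(analyze_errors(data, toy_judge, toy_categorize))
    # Counter({'Unsupported claim': 1, 'No issue': 1})
```

The resulting per-issue counts are the kind of aggregate that an interactive dashboard, like the one the abstract mentions, would visualize alongside the underlying per-instance feedback.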