Shivashankar Subramanian, Ioana Baldini, et al.
IAAI 2020
Retrieval-augmented generation (RAG) has recently become a very popular task for Large Language Models (LLMs). Evaluating them on multi-turn RAG conversations, where the system is asked to generate a response to a question in the context of a preceding conversation, is an important and often overlooked task with several additional challenges. We present MTRAG: an end-to-end human-generated multi-turn RAG benchmark that reflects several real-world properties across diverse dimensions for evaluating the full RAG pipeline. MTRAG contains 110 conversations averaging 7.7 turns each across four domains for a total of 842 tasks. We also explore automation paths via synthetic data and LLM-as-a-Judge evaluation. Our human and automatic evaluations show that even state-of-the-art LLM RAG systems struggle on MTRAG. We demonstrate the need for strong retrieval and generation systems that can handle later turns, unanswerable questions, non-standalone questions, and multiple domains.
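To make the multi-turn RAG setting described above concrete, here is a minimal, illustrative Python sketch of the per-turn loop: each question is answered in the context of the preceding conversation, so the retrieval query folds in earlier turns (questions are often non-standalone), and the generator should abstain when the retrieved passages do not support an answer. The data shapes, toy corpus, retrieve(), and generate() stub are assumptions for illustration only, not MTRAG's actual format or API.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    question: str
    response: str = ""          # filled in by the system under evaluation

@dataclass
class Conversation:
    domain: str
    turns: list = field(default_factory=list)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return scored[:k]

def generate(question: str, history: list[Turn], passages: list[str]) -> str:
    """Stub generator; a real system would prompt an LLM with history + passages.
    If no passage supports an answer, the turn should be treated as unanswerable."""
    if not passages:
        return "I don't have enough information to answer that."
    return f"(answer grounded in: {passages[0][:40]}...)"

def run_conversation(conv: Conversation, corpus: list[str]) -> None:
    history: list[Turn] = []
    for turn in conv.turns:
        # Non-standalone questions ("And Mount Everest?") need the preceding
        # turns folded into the retrieval query to be resolvable.
        query = " ".join(t.question for t in history) + " " + turn.question
        passages = retrieve(query, corpus)
        turn.response = generate(turn.question, history, passages)
        history.append(turn)

# Tiny usage example with a hypothetical two-turn conversation.
corpus = ["The Eiffel Tower is 330 metres tall.", "Mount Everest is 8,849 metres high."]
conv = Conversation(domain="general",
                    turns=[Turn("How tall is the Eiffel Tower?"), Turn("And Mount Everest?")])
run_conversation(conv, corpus)
for t in conv.turns:
    print(t.question, "->", t.response)
```

A benchmark like MTRAG evaluates both halves of this loop: whether retrieval surfaces the right passages as the conversation grows longer, and whether generation stays grounded, including on later turns and unanswerable questions.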
Kevin Gu, Eva Tuecke, et al.
ICML 2024
Gabriele Picco, Lam Thanh Hoang, et al.
EMNLP 2021
Hui Wan, Song Feng, et al.
NAACL 2021