Large Language Models (LLMs) show promise in assisting with Register Transfer Level (RTL) design tasks, including code summarization, documentation, and question answering. However, directly applying LLMs to entire RTL codebases often leads to low accuracy in these tasks. This is primarily because LLMs are less exposed to RTL code during pretraining, limiting their ability to understand RTL-specific semantics and structural dependencies. To overcome this challenge, we propose RTLExplain, which builds project-specific knowledge bases to enhance LLM performance on RTL design tasks. Our method is entirely offline and requires no additional training or fine-tuning. Experiments on code summarization using the generated knowledge bases demonstrate consistent improvements across various medium-to-large RTL projects, even when variable names are obfuscated. Furthermore, we use these knowledge bases to support Retrieval-Augmented Generation (RAG) for question answering tasks. Results show that our enhanced knowledge bases, when combined with RAG, improve question-answering accuracy by 37% compared to naïve prompting and 27% compared to conventional RAG.
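The abstract describes retrieval-augmented question answering over a project-specific knowledge base. Below is a minimal, self-contained sketch of that general pattern: retrieve the most relevant knowledge-base entries for a question, then prepend them to the prompt. Everything here is an illustrative assumption, not RTLExplain's implementation — the module summaries are invented, and the keyword-overlap scorer is a stand-in for the embedding-based retriever a real RAG pipeline would use.

```python
# Sketch of retrieval-augmented QA over a project knowledge base.
# All entries and names below are hypothetical, for illustration only.

# Hypothetical knowledge base: one natural-language summary per RTL module.
KNOWLEDGE_BASE = {
    "fifo.v": "Synchronous FIFO with configurable depth and full/empty flags.",
    "uart_tx.v": "UART transmitter; serializes bytes at the configured baud rate.",
    "arbiter.v": "Round-robin arbiter granting one of N requesters per cycle.",
}

def score(question: str, summary: str) -> float:
    """Keyword-overlap relevance score (stand-in for an embedding retriever)."""
    q = set(question.lower().split())
    s = set(summary.lower().split())
    return len(q & s) / max(len(q), 1)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k most relevant knowledge-base entries for the question."""
    ranked = sorted(KNOWLEDGE_BASE.items(),
                    key=lambda item: score(question, item[1]),
                    reverse=True)
    return [f"{name}: {summary}" for name, summary in ranked[:k]]

def build_prompt(question: str) -> str:
    """Augment the question with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(question))
    return (f"Context from the project knowledge base:\n{context}\n\n"
            f"Question: {question}")

if __name__ == "__main__":
    print(build_prompt("How does the arbiter grant requests each cycle?"))
```

In a full pipeline, `build_prompt`'s output would be sent to an LLM, and the lexical scorer would be replaced by dense-vector similarity over precomputed embeddings of the knowledge-base entries.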