Workshop paper
Deductive Verification of LLM-Generated SPARQL Queries
Abstract
Given the growing use of Large Language Models (LLMs) for natural language tasks, this paper presents preliminary findings on a verification component for detecting hallucinations in the output of an LLM that translates natural language questions into SPARQL queries. We propose a logic-based deductive verification of the generated SPARQL query: the query is accepted only if the deep semantic representation of the original natural language question entails the semantic representation of the query.
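To make the entailment check concrete, the following is a minimal, self-contained Python sketch of the verification idea, not the authors' implementation. It assumes, purely for illustration, that both the question and the generated SPARQL query have been reduced to sets of triple patterns (a stand-in for the paper's deeper semantic representations), and it approximates entailment by requiring that every triple pattern used by the query matches some pattern derived from the question under one consistent variable binding. All function names, the representation format, and this entailment proxy are assumptions.

```python
# Toy deductive check: accept a generated SPARQL query only if its semantic
# representation is entailed by the question's representation.
# Representations are sets of (subject, predicate, object) triples;
# terms starting with "?" are variables. This is an illustrative
# approximation of entailment, not the paper's actual procedure.

Triple = tuple[str, str, str]

def unify(pattern: Triple, fact: Triple, binding: dict) -> dict | None:
    """Try to extend `binding` so that `pattern` matches `fact`."""
    new = dict(binding)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):          # variable term in the query pattern
            if p in new and new[p] != f:
                return None            # conflicting binding
            new[p] = f
        elif p != f:                   # constant mismatch
            return None
    return new

def entails(question_rep: set[Triple], query_rep: set[Triple]) -> bool:
    """True iff every query pattern matches some question triple
    under a single consistent variable binding."""
    def search(patterns: list[Triple], binding: dict) -> bool:
        if not patterns:
            return True
        head, rest = patterns[0], patterns[1:]
        for fact in question_rep:
            b = unify(head, fact, binding)
            if b is not None and search(rest, b):
                return True
        return False
    return search(sorted(query_rep), {})

# Hypothetical example: "Which rivers flow through Berlin?"
question_rep = {("?river", "type", "River"), ("?river", "flowsThrough", "Berlin")}
good_query   = {("?r", "type", "River"), ("?r", "flowsThrough", "Berlin")}
hallucinated = {("?r", "type", "River"), ("?r", "flowsThrough", "Paris")}

print(entails(question_rep, good_query))    # True  -> query verified
print(entails(question_rep, hallucinated))  # False -> flagged as hallucination
```

In the paper's setting, the representations would come from a semantic parse of the question and from the SPARQL query itself, and the entailment check would be performed by a logical reasoner rather than the simple pattern matching shown here.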