Designing and implementing LLM guardrails components in production environments
Abstract
With advances in generative AI, there has been a growing demand for tools built on Large Language Models (LLMs). Because these models may produce undesired answers, such outputs must be prevented, especially in enterprise environments. Even when models are trained on safe data, user inputs and model behavior can be unpredictable, leading to problems such as leakage of confidential data that could result in revenue loss. In this paper, we describe our experience developing tools for "guardrailing" LLMs. We describe how we started with a quick monolithic implementation and later transitioned to a microservices architecture. We share the lessons learned throughout the process and show how the re-architecture to microservices led to runtime performance gains, easier maintenance and extensibility, and allowed us to open source the main component of the solution, so anyone can use it and contribute to it.