Retrieval-Augmented Generation (RAG)
Evaluating RAG hallucinations
When using Retrieval-Augmented Generation (RAG), it’s crucial to ensure that model responses accurately reflect the retrieved context. By comparing responses against the provided context, you can detect and measure hallucinations.
Using Retrieved Context
When evaluating RAG responses, Atla can check if the output stays faithful to the provided context using the model_context parameter:
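As a minimal sketch of what this can look like (assuming the Atla Python SDK exposes an Atla client with an evaluation.create method; check the SDK reference for the exact call shape and field names), the retrieved documents are passed as model_context alongside the question and the generated answer:

```python
from atla import Atla

# Assumes the API key is available via the ATLA_API_KEY environment variable.
client = Atla()

retrieved_context = (
    "Our return policy allows customers to return items within 30 days "
    "of purchase, provided the item is unused and in its original packaging."
)

# Evaluate whether the generated answer stays faithful to the retrieved context.
evaluation = client.evaluation.create(
    metric_name="atla_default_faithfulness",
    model_input="What is your return policy?",
    model_output=(
        "You can return unused items in their original packaging "
        "within 30 days of purchase."
    ),
    model_context=retrieved_context,
)
# The response contains the evaluation score and critique; see the API
# reference for the exact response fields.
```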
For detecting RAG hallucinations, we recommend using the default atla_default_faithfulness metric, which evaluates how well responses align with the provided context.
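Continuing the sketch above (same assumed client, call shape, and retrieved_context), a response that contradicts the retrieved context should receive a low faithfulness score, which is how hallucinations surface in practice:

```python
# This answer invents a "90 days" window that the context does not support,
# so the atla_default_faithfulness metric should flag it as unfaithful.
hallucinated_evaluation = client.evaluation.create(
    metric_name="atla_default_faithfulness",
    model_input="What is your return policy?",
    model_output="We accept returns within 90 days, no questions asked.",
    model_context=retrieved_context,
)
# A low score here indicates the output is not grounded in the provided context.
```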