Atla provides a drop-in replacement for OpenAI’s /v1/chat/completions API. You can use it with OpenAI-compatible SDKs (e.g. openai, LangChain, LlamaIndex) by pointing them at our base URL and supplying your Atla API key. Please refer to the models page for the list of supported models.

Quickstart

If you don’t have an Atla API key, you can get one here.
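The snippet below reads your key from the `ATLA_API_KEY` environment variable. One way to set it in your shell (the value shown is a placeholder, not a real key):

```shell
export ATLA_API_KEY="your-atla-api-key"
```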

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("ATLA_API_KEY"),
    base_url="https://api.atla-ai.com/v1",
)

eval_prompt = """You are tasked with evaluating a response based on a given user input and binary scoring rubric that serves as the evaluation standard. Provide comprehensive feedback on the response quality strictly adhering to the scoring rubric, followed by a binary 1/0 judgment. Avoid generating any additional opening, closing, or explanations.

Here are some rules of the evaluation:
(1) You should prioritize evaluating whether the response satisfies the provided rubric. The basis of your score should depend exactly on the rubric. However, the response does not need to explicitly address points raised in the rubric. Rather, evaluate the response based on the criteria outlined in the rubric.

Your reply must strictly follow this format:
**Reasoning:** <Your feedback>

**Result:** <1 or 0>

Here is the data:

Instruction:
```
What is the capital of France?
```

Response:
```
Paris
```

Score Rubrics:
Evaluate the answer based on its factual correctness.
1: The answer is factually correct.
0: The answer is not factually correct.
"""

chat_completion = client.chat.completions.create(
    model="atla-selene",
    messages=[{"role": "user", "content": eval_prompt}],
)

print(chat_completion.choices[0].message.content)
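Because the prompt instructs the model to reply in the `**Reasoning:** ... **Result:** ...` format, you can extract the feedback and the binary score programmatically. Below is a minimal parsing sketch; the `sample` string is an illustrative reply in that format, not actual model output, and `parse_evaluation` is a hypothetical helper, not part of the Atla API:

```python
import re

def parse_evaluation(reply: str) -> tuple[str, int]:
    """Split a reply in the prompt's format into (reasoning, binary score)."""
    match = re.search(
        r"\*\*Reasoning:\*\*\s*(.*?)\s*\*\*Result:\*\*\s*([01])",
        reply,
        re.DOTALL,
    )
    if match is None:
        raise ValueError("Reply did not match the expected format")
    return match.group(1), int(match.group(2))

# Illustrative reply (placeholder, not real model output):
sample = (
    "**Reasoning:** Paris is the capital of France, "
    "so the answer is factually correct.\n\n"
    "**Result:** 1"
)

reasoning, score = parse_evaluation(sample)
print(score)  # → 1
```

In practice you would pass `chat_completion.choices[0].message.content` to the parser instead of `sample`.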

If you are using atla-selene-mini, we strongly advise using the prompt templates linked on this GitHub page.