POST /v1/chat/completions
from atla import Atla

client = Atla()

eval_prompt = """You are an expert evaluator.

You have been asked to evaluate an LLM's response to a given instruction.

Model input:
What is the capital of France?

Model output:
Paris

Score rubric:
Evaluate the answer based on its factual correctness. Assign a score of 1 if the answer is factually correct, otherwise assign a score of 0. Only scores of 0 or 1 are allowed.

Your response should strictly follow this format:
**Reasoning:** <your feedback>

**Result:** <your score>
"""

chat_completion = client.chat.completions.create(
    model="atla-selene",
    messages=[{"role": "user", "content": eval_prompt}],
)

print(chat_completion.choices[0].message.content)

Example response:

{
  "id": "123e4567-e89b-12d3-a456-426614174000",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "**Reasoning:** The model output is factually correct and well-reasoned. It does not provide any additional information not directly supported by the input or context provided.\n\n**Result:** 1",
        "role": "assistant"
      }
    }
  ],
  "created": 694303200,
  "model": "atla-selene",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 10,
    "prompt_tokens": 10,
    "total_tokens": 20
  }
}
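Because the evaluator is instructed to follow the `**Reasoning:** ... **Result:** ...` format, the two sections can be pulled out of the returned content with a small parser. The sketch below uses a hypothetical helper name, `parse_selene_output`, and assumes the response follows the format requested in the prompt:

```python
import re


def parse_selene_output(content: str) -> tuple[str, str]:
    """Split Selene's formatted output into (reasoning, result)."""
    reasoning = re.search(
        r"\*\*Reasoning:\*\*\s*(.*?)\s*\*\*Result:\*\*", content, re.DOTALL
    )
    result = re.search(r"\*\*Result:\*\*\s*(\S+)", content)
    if not (reasoning and result):
        raise ValueError("Response did not follow the expected format")
    return reasoning.group(1), result.group(1)


content = (
    "**Reasoning:** The model output is factually correct and well-reasoned. "
    "It does not provide any additional information not directly supported by "
    "the input or context provided.\n\n**Result:** 1"
)
reasoning, result = parse_selene_output(content)
print(result)  # -> 1
```

Raising on a malformed response (rather than returning a default score) keeps formatting failures from silently polluting evaluation results.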

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
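If you call the endpoint directly rather than through the SDK, the header is constructed as described above. A minimal sketch (`auth_headers` is a hypothetical helper; the token is read from an assumed `ATLA_API_KEY` environment variable):

```python
import os


def auth_headers(token: str) -> dict[str, str]:
    # Bearer authentication header of the form "Bearer <token>".
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }


headers = auth_headers(os.environ.get("ATLA_API_KEY", "<token>"))
```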

Body

application/json

A request to an Atla evaluator via the /eval/chat/completions endpoint.

messages
object[]
required

A list of messages comprising the conversation so far. See the OpenAI API reference for more information.
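Messages follow the familiar OpenAI chat shape; for an evaluation, the full prompt typically goes in a single user message, as in the SDK example above. A minimal sketch (`build_messages` is a hypothetical helper):

```python
def build_messages(eval_prompt: str) -> list[dict[str, str]]:
    # Each message carries a role and string content.
    return [{"role": "user", "content": eval_prompt}]


messages = build_messages("You are an expert evaluator. ...")
```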

model
string
required

The ID or name of the Atla evaluator model to use. This may point to a specific model version or a model family. If a model family is provided, the default model version for that family will be used.

max_completion_tokens
integer | null

An upper bound for the number of tokens that can be generated for an evaluation. See the OpenAI API reference for more information.

max_tokens
integer | null

The maximum number of tokens that can be generated in the evaluation. This value is now deprecated in favor of max_completion_tokens. See the OpenAI API reference for more information.
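Since `max_tokens` is deprecated in favor of `max_completion_tokens`, older request dictionaries can be migrated before sending. A sketch under that assumption (`migrate_token_limit` is a hypothetical helper, not part of the SDK):

```python
def migrate_token_limit(params: dict) -> dict:
    """Rename the deprecated max_tokens key to max_completion_tokens."""
    params = dict(params)  # avoid mutating the caller's dict
    if "max_tokens" in params and "max_completion_tokens" not in params:
        params["max_completion_tokens"] = params.pop("max_tokens")
    return params


migrate_token_limit({"model": "atla-selene", "max_tokens": 256})
```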

temperature
number | null

What sampling temperature to use, between 0 and 2. See the OpenAI API reference for more information.

top_p
number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. See the OpenAI API reference for more information.

Response

200
application/json
Success
id
string
required
choices
object[]
required
created
integer
required
model
string
required
object
string
required
Allowed value: "chat.completion"
service_tier
enum<string> | null
Available options:
scale,
default
system_fingerprint
string | null
usage
object | null
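Putting the response schema together: `id`, `choices`, `created`, `model`, and `object` are required, and `object` must equal `"chat.completion"`. A sketch of checking a deserialized response against those rules (`validate_completion` is a hypothetical helper, not part of the SDK):

```python
def validate_completion(resp: dict) -> dict:
    """Check that a response carries the required chat.completion fields."""
    required = ["id", "choices", "created", "model", "object"]
    missing = [k for k in required if k not in resp]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    if resp["object"] != "chat.completion":
        raise ValueError("object must be 'chat.completion'")
    return resp


example = {
    "id": "123e4567-e89b-12d3-a456-426614174000",
    "choices": [],
    "created": 694303200,
    "model": "atla-selene",
    "object": "chat.completion",
}
validate_completion(example)  # passes
```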