Evaluation Metrics
List Metrics
List all metrics for a project. Optionally include Atla default metrics that the project has access to.
GET /v1/metrics
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Query Parameters
Whether to include the Atla default metrics that the project has access to.
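The request above can be sketched with the Python standard library. This is a minimal illustration, not an official client: the base URL is an assumption (substitute your real API host), and the include-defaults query parameter is omitted because its exact name is not shown here.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # assumed host; replace with the real one


def build_list_metrics_request(token: str) -> urllib.request.Request:
    """Build a GET /v1/metrics request with Bearer authentication."""
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/metrics",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )


def list_metrics(token: str) -> dict:
    """Send the request and decode the JSON response body."""
    with urllib.request.urlopen(build_list_metrics_request(token)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Calling `list_metrics(token)` returns the parsed response dictionary with `request_id`, `status`, and the `metrics` list described below.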
Response
200 - application/json
Successful Response: a response containing the list of retrieved metrics.
{
"request_id": "123e4567-e89b-12d3-a456-426614174000",
"status": "success",
"metrics": [
{
"name": "my_metric",
"description": "An example metric demonstrating functionality.",
"metric_type": "binary",
"required_fields": [
"model_input",
"model_output",
"model_context",
"expected_model_output"
],
"active_prompt_version": 1,
"prompts": {
"1": {
"content": "This is an example prompt for the metric. It is active.",
"created_at": "2025-01-01T12:34:56.789000Z",
"updated_at": "2025-01-01T12:34:56.789000Z",
"version": 1
},
"2": {
"content": "This is an updated example prompt for the metric.",
"created_at": "2025-01-01T12:34:56.789000Z",
"updated_at": "2025-01-01T12:34:56.789000Z",
"version": 2
}
},
"few_shot_examples": [
{
"model_input": "Few-shot `model_input`.",
"model_output": "Few-shot `model_output`.",
"model_context": "Few-shot `model_context`.",
"expected_model_output": "Few-shot `expected_model_output`.",
"score": "1",
"critique": "Critique for the few-shot example explaining why the score is 1."
}
],
"_id": "<string>",
"project_id": "<string>",
"created_at": "2025-01-01T12:34:56.789Z",
"updated_at": "2025-01-01T12:34:56.789Z"
}
]
}
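Since each metric carries every prompt version keyed by version number (as a string) plus an `active_prompt_version` field, a common task is pulling out the currently active prompt for each metric. A small sketch, assuming a response shaped like the example above:

```python
def active_prompts(response: dict) -> dict:
    """Map each metric name to the content of its active prompt version.

    Prompt versions are keyed by stringified version numbers, while
    active_prompt_version is an integer, so we convert before the lookup.
    """
    result = {}
    for metric in response.get("metrics", []):
        version_key = str(metric["active_prompt_version"])
        result[metric["name"]] = metric["prompts"][version_key]["content"]
    return result
```

For the example response above, this would map "my_metric" to the version-1 prompt content, since `active_prompt_version` is 1.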