Attach metadata to traces to track custom system configurations. Metadata makes it possible to compare performance across different setups and to analyze patterns in your AI agent's behavior.

Usage

```python
from atla_insights import configure

# Define the system settings, prompt versions, etc. we'd like to keep track of.
metadata = {
    "model": "gpt-4o-2024-08-06",
    "prompt-version": "v1.4",
}

# All subsequent traces will inherit this metadata.
configure(
    token="<MY_ATLA_INSIGHTS_TOKEN>",
    metadata=metadata,
)
```
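Configuration values are not always strings (a temperature is a float, for instance), so it can help to normalize them before passing the dict to `configure`. The helper below is our own illustrative sketch, not part of the SDK:

```python
def build_metadata(**settings) -> dict[str, str]:
    """Coerce arbitrary setting values to strings so the metadata stays uniform."""
    return {key: str(value) for key, value in settings.items()}

metadata = build_metadata(
    model="gpt-4o-2024-08-06",
    prompt_version="v1.4",
    temperature=0.2,
)
# metadata == {"model": "gpt-4o-2024-08-06", "prompt_version": "v1.4", "temperature": "0.2"}
```

The result can then be passed as the `metadata` argument to `configure` as shown above.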

Dynamic Metadata

Metadata set with the configure function is attached to all traces. You can also set metadata dynamically at runtime. This is useful, for example, to "tag" specific traces with information that only becomes available at runtime, such as a user ID.
```python
from atla_insights import instrument, set_metadata

@instrument("My Agent")
def my_agent(user_id: str, feature_id: str):
    # Add metadata specific to this execution
    set_metadata({"user_id": user_id, "feature_id": feature_id})
```
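If you want runtime tags to extend a set of configured defaults rather than stand alone, one pattern is to merge the two dictionaries before calling `set_metadata`. This merge helper is our own convention, not SDK behavior:

```python
base_metadata = {
    "model": "gpt-4o-2024-08-06",
    "prompt-version": "v1.4",
}

def runtime_metadata(base: dict[str, str], **tags: str) -> dict[str, str]:
    """Return a copy of the base metadata extended with runtime tags."""
    return {**base, **tags}

merged = runtime_metadata(base_metadata, user_id="user-123", feature_id="checkout")
# `merged` now contains the base tags plus the runtime tags,
# and can be passed to set_metadata inside the instrumented function.
```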

Key Metadata Tags

Focus on these three essential metadata tags for effective experiment tracking:
| Tag | Purpose | Examples |
| --- | --- | --- |
| experiment | Track different experiments | "v1.2-rc3", "feature/fancy-update" |
| model | Track different models and versions | "gpt-5", "claude-4-sonnet" |
| prompt | Version control for prompt templates | "baseline", "optimized-v2" |
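Since experiment comparisons depend on these tags being present, a small check before configuring can catch runs that would be hard to analyze later. The function below is an illustrative sketch of ours, not an SDK feature:

```python
REQUIRED_TAGS = {"experiment", "model", "prompt"}

def missing_experiment_tags(metadata: dict[str, str]) -> list[str]:
    """Return the required experiment-tracking tags absent from a metadata dict."""
    return sorted(REQUIRED_TAGS - metadata.keys())

missing_experiment_tags({"model": "gpt-5"})
# → ["experiment", "prompt"]
```

A complete dict such as `{"experiment": "v1.2-rc3", "model": "gpt-5", "prompt": "optimized-v2"}` returns an empty list.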