1. Install the Atla Insights SDK

Start by installing the Atla Insights Python SDK. From the terminal, run the following:
pip install atla-insights
To install with support for a specific provider, add the matching extra, for example:
pip install "atla-insights[litellm]"
2. Configure Atla Insights

Configure Atla Insights with your authentication token at the start of your application:
from atla_insights import configure

configure(token="<MY_ATLA_INSIGHTS_TOKEN>")
You can retrieve your authentication token from the Atla Insights platform.
3. Instrument your AI agent

Add instrumentation to your AI agent by instrumenting your LLM provider and grouping calls into traces:
from atla_insights import configure, instrument, instrument_openai
from openai import OpenAI

# Configure Atla Insights
configure(token="<MY_ATLA_INSIGHTS_TOKEN>")

# Instrument OpenAI
instrument_openai()

client = OpenAI()

@instrument("Weather agent execution")
def run_weather_agent(user_query: str) -> str:
    # This call is captured and grouped under the trace above
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful weather assistant."},
            {"role": "user", "content": user_query}
        ]
    )
    
    return response.choices[0].message.content

# Run your agent
result = run_weather_agent("What's the weather like in San Francisco?")
print(result)
The @instrument decorator groups all LLM calls within the function into a single trace. Without it, each LLM call would be treated as a separate trace.
4. Add metadata (optional)

You can attach metadata to track different experiments, models, or environments:
from atla_insights import configure

metadata = {
    "environment": "dev",
    "prompt-version": "v1.0",
    "model": "gpt-4o",
    "experiment": "weather-agent-test"
}

configure(
    token="<MY_ATLA_INSIGHTS_TOKEN>",
    metadata=metadata
)
This metadata will be attached to all subsequent traces and can be used for experiments and comparisons.
Advanced Instrumentation Patterns

All instrumentation methods support two usage patterns for more control:
  • Session-wide: Enable/disable instrumentation throughout your application with instrument_openai() and uninstrument_openai()
  • Context-based: Use instrumentation as context managers: with instrument_openai(): ...
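The context-based pattern scopes instrumentation to a block: only calls made inside the `with` statement are traced, and instrumentation is removed when the block exits. A minimal sketch of this pattern, reusing the OpenAI setup from step 3 (the token placeholder is yours to fill in):

```python
from atla_insights import configure, instrument_openai
from openai import OpenAI

configure(token="<MY_ATLA_INSIGHTS_TOKEN>")
client = OpenAI()

# Only calls made inside this block are instrumented.
with instrument_openai():
    traced = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
    )

# Instrumentation has been removed again; this call is not traced.
untraced = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello again"}],
)
```

This is useful when you only want to trace a specific portion of your application rather than every LLM call it makes.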

Supported Frameworks

Atla Insights supports instrumentation for popular AI frameworks:
  • LLM Providers: OpenAI, Anthropic, Google GenAI, LiteLLM, Bedrock
  • Agent Frameworks: LangChain, CrewAI, OpenAI Agents, Smolagents, BAML, Agno
  • Tools: MCP (Model Context Protocol)
For framework-specific instrumentation, use functions like instrument_langchain(), instrument_crewai(), etc.
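Framework-level instrumentation follows the same shape as the OpenAI example in step 3; only the instrumentation call changes. A sketch using `instrument_langchain()` (the chain invocation itself is left as a placeholder, since it depends on your LangChain setup):

```python
from atla_insights import configure, instrument, instrument_langchain

configure(token="<MY_ATLA_INSIGHTS_TOKEN>")

# Framework-level instrumentation: LangChain activity is captured
instrument_langchain()

@instrument("LangChain agent execution")
def run_agent(user_query: str) -> str:
    # ... build and invoke your LangChain chain or agent here ...
    ...
```

The `@instrument` decorator works the same way regardless of which provider or framework is instrumented underneath it.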

Need Help?

Don’t hesitate to contact us if you would like us to walk you through the setup: Schedule Onboarding Call