## LLM Providers
We currently support the following LLM providers:

| Provider | Instrumentation Function | Notes |
|---|---|---|
| Anthropic | `instrument_anthropic` | Also supports the `AnthropicBedrock` client from Anthropic |
| Bedrock | `instrument_bedrock` | |
| Google GenAI | `instrument_google_genai` | E.g., Gemini |
| LiteLLM | `instrument_litellm` | Supports all models available in the LiteLLM framework |
| OpenAI | `instrument_openai` | Includes Azure OpenAI |
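As a minimal sketch, instrumenting a provider is a single call made before you use the client. The import path shown here is a placeholder; only the function name `instrument_openai` comes from the table above:

```python
# Sketch only: the SDK import path is a placeholder assumption;
# instrument_openai is the function named in the table above.
from your_sdk import instrument_openai  # placeholder import path
from openai import OpenAI

instrument_openai()  # patch the OpenAI client so its calls are recorded

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
```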
### Basic Provider Usage
By default, instrumented LLM calls are treated independently of one another. To logically group LLM calls into a single trace, use the `@instrument` decorator, as in the sketch below.
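For example (a sketch with the same placeholder import path; whether `@instrument` accepts arguments is not shown here), two calls made inside a decorated function land in one trace:

```python
# Sketch only: placeholder import path; @instrument groups the two
# LLM calls below into a single trace instead of two separate ones.
from your_sdk import instrument, instrument_openai  # placeholder import path
from openai import OpenAI

instrument_openai()
client = OpenAI()

@instrument
def draft_and_refine(topic: str) -> str:
    # Both calls below are recorded as part of the same trace.
    draft = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Draft a tagline for {topic}."}],
    )
    refined = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": f"Refine this: {draft.choices[0].message.content}"}
        ],
    )
    return refined.choices[0].message.content
```

## Agent Frameworks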
We currently support the following frameworks:

| Framework | Instrumentation Function | Notes |
|---|---|---|
| Agno | `instrument_agno` | Supported with `openai`, `google-genai`, `litellm`, and/or `anthropic` models |
| BAML | `instrument_baml` | Supported with `openai`, `anthropic`, or `bedrock` models |
| Claude Code SDK | `instrument_claude_code_sdk` | |
| CrewAI | `instrument_crewai` | |
| Google ADK | `instrument_google_adk` | |
| LangChain | `instrument_langchain` | Also covers LangChain-based frameworks such as LangGraph |
| MCP | `instrument_mcp` | Only includes context propagation; you will need to instrument the model calling the MCP server separately |
| OpenAI Agents | `instrument_openai_agents` | Supported with `openai`, `google-genai`, `litellm`, and/or `anthropic` models |
| Smolagents | `instrument_smolagents` | Supported with `openai`, `google-genai`, `litellm`, and/or `anthropic` models |
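As with providers, framework instrumentation is a single call made at startup. A sketch (placeholder import path; only the function name comes from the table above):

```python
# Sketch only: placeholder import path; instrument_langchain is the
# function named in the table above. Call it once, before running chains.
from your_sdk import instrument_langchain  # placeholder import path

instrument_langchain()

# From here on, LangChain (and LangGraph) executions are traced.
```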
### Framework + Provider Instrumentation
For Agno, BAML, OpenAI Agents, and Smolagents in Python, you will need to instrument both the framework and the underlying LLM provider(s), as sketched below.
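A minimal sketch of dual instrumentation (placeholder import path; the function names come from the tables above):

```python
# Sketch only: placeholder import path. For frameworks such as Agno,
# both the framework and the provider(s) it calls must be instrumented.
from your_sdk import instrument_agno, instrument_openai  # placeholder import path

instrument_agno()    # traces the framework's agent and tool steps
instrument_openai()  # traces the underlying OpenAI calls the agent makes
```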
## Can’t find your framework or LLM provider?

If you are using a framework or LLM provider without native support, you can manually record LLM generations via our lower-level SDK. A manually recorded generation consists of:

- `input_messages`
- `output_messages`
- `tools` (optional but recommended)
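As an illustrative sketch only: the function name `record_generation`, its signature, and the import path below are hypothetical; only the three field names come from the list above.

```python
# Hypothetical sketch: record_generation and the import path are
# assumptions; only the field names come from the docs above.
from your_sdk import record_generation  # placeholder import path

record_generation(
    input_messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    output_messages=[{"role": "assistant", "content": "It's sunny and 21°C."}],
    tools=[  # optional but recommended
        {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        }
    ],
)
```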