Third-highest Python agent framework by download volume, ahead of CrewAI, Google ADK, and Strands. DL/star ratio of 1,003 indicates massive silent adoption.
Pydantic AI
#3 Python agent framework by downloads — 15.6M PyPI downloads/month. Built by the Pydantic team. Runtime type enforcement is a genuine differentiator no other framework offers. V1 shipped with Temporal integration for durable execution and Logfire observability. Emerging pattern: 'Pydantic AI for agent logic, LangGraph for orchestration' (ZenML).
Where it wins
15.6M PyPI downloads/month — #3 Python agent framework by volume
Built by the Pydantic team — unmatched trust signal in Python ecosystem
Runtime type enforcement — genuine differentiator no other framework offers
V1 shipped with Temporal integration for durable execution
Logfire observability built-in
DL/star ratio of 1,003 — massive silent adoption
Where to be skeptical
Higher issue count (583) — may indicate growing pains at scale
No named enterprise customers found (likely many private deployments)
Not a standalone orchestration framework — pairs with LangGraph for multi-agent workflows
Editorial verdict
The type-safe agent logic layer for Python teams. 15.6M downloads/month makes it #3 by volume. Not a competitor to LangGraph — it's a complement. The Pydantic team's reputation is an unmatched trust signal in the Python ecosystem. Best paired with LangGraph for orchestration.
Source
Videos
Reviews, tutorials, and comparisons from the community.
PydanticAI - The NEW Agent Builder on the Block
Building a Research Agent with PydanticAI
Related

Claude Code
98 · Anthropic's official agentic coding CLI. v2.1.81 (Mar 20) shipped `--bare`, smarter worktree resume, and improved MCP OAuth while the repo crossed 82,204 stars and logged ~14 commits/week across 10+ maintainers. Terminal-native, tool-use-driven, with deep file system + shell access, #1 SWE-bench Pro standardized (45.89%), ~4% of GitHub public commits (SemiAnalysis), $2.5B annualized revenue. 8M+ npm weekly downloads. Opus 4.6 with 1M context.
LangGraph
95 · #1 Python agent framework by production evidence — 40.2M PyPI downloads/month, Fortune 500 deployments (LinkedIn, Uber, Replit, Elastic, Klarna, Cloudflare, Coinbase), ~400 LangGraph Platform companies, LangSmith rated best-in-class observability. Stable v1.x API, model-agnostic, MCP support.
AutoGen (Microsoft)
95 · ⚠️ MAINTENANCE MODE — Microsoft officially confirmed bug fixes and security patches only, no new features (VentureBeat 2026-02-19). 55.9K stars but only 1.57M PyPI/month — DL/star ratio of 28, the most inflated among active frameworks. Being replaced by Microsoft Agent Framework (AutoGen + Semantic Kernel merge, GA targeted ~Q2 2026). Teams on AutoGen should plan migration.
CrewAI
93 · #2 Python agent framework — 5.7M PyPI downloads/month (3× growth in 6 months), Fortune 500 customers (PwC, IBM, Capgemini, NVIDIA, DocuSign), YAML-driven role-based orchestration rated 'fastest to prototype' in 2026 independent reviews. CVE-responsive: gitpython path traversal fixed in v1.11.0.
Public evidence
'The pattern gaining the most traction in production AI engineering circles in 2026 is PydanticAI for agent logic and LangGraph for orchestration.' Independent validation of complementary positioning.
'The other two offer partial type hints but don't enforce them at runtime.' Runtime enforcement is the technical differentiator.
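To make the quote concrete, here is a framework-agnostic sketch of what runtime enforcement means, using plain Pydantic (the validation layer Pydantic AI is built on) rather than Pydantic AI itself. The `SupportOutput` schema is a hypothetical example of a structured output an agent might be asked to fill: a well-formed model response validates and coerces at runtime, while a malformed one fails loudly instead of propagating bad data downstream.

```python
from pydantic import BaseModel, Field, ValidationError

# Hypothetical structured-output schema an agent might be asked to fill.
class SupportOutput(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")

# A well-formed response validates and coerces at runtime ('true' -> True).
ok = SupportOutput.model_validate(
    {'support_advice': 'Card frozen.', 'block_card': 'true'}
)
print(ok.block_card)  # True

# A malformed response raises immediately rather than passing silently.
try:
    SupportOutput.model_validate({'support_advice': 'Card frozen.'})
except ValidationError as exc:
    print(exc.error_count())  # 1 (missing 'block_card')
```

Static type checkers would let the second payload through unnoticed at runtime; enforcing the schema on every model response is the difference the quote is pointing at.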
Raw GitHub source
GitHub README peek
Constrained peek so you can sanity-check the source material without leaving the site.
Documentation: ai.pydantic.dev
*Pydantic AI is a Python agent framework designed to help you quickly, confidently, and painlessly build production grade applications and workflows with Generative AI.*
FastAPI revolutionized web development by offering an innovative and ergonomic design, built on the foundation of Pydantic Validation and modern Python features like type hints.
Yet despite virtually every Python agent framework and LLM library using Pydantic Validation, when we began to use LLMs in Pydantic Logfire, we couldn't find anything that gave us the same feeling.
We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI app and agent development.
Why use Pydantic AI
- Built by the Pydantic Team: Pydantic Validation is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. Why use the derivative when you can go straight to the source? :smiley:
- Model-agnostic: Supports virtually every model and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius, OVHcloud, Alibaba Cloud, SambaNova, and Outlines. If your favorite model or provider is not listed, you can easily implement a custom model.
- Seamless Observability: Tightly integrates with Pydantic Logfire, our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can use that too.
- Fully Type-safe: Designed to give your IDE or AI coding agent as much context as possible for auto-completion and type checking, moving entire classes of errors from runtime to write-time for a bit of that Rust "if it compiles, it works" feel.
- Powerful Evals: Enables you to systematically test and evaluate the performance and accuracy of the agentic systems you build, and monitor the performance over time in Pydantic Logfire.
- Extensible by Design: Build agents from composable capabilities that bundle tools, hooks, instructions, and model settings into reusable units. Use built-in capabilities for web search, thinking, and MCP, pick from the Pydantic AI Harness capability library, build your own, or install third-party capability packages. Define agents entirely in YAML/JSON — no code required.
- MCP, A2A, and UI: Integrates the Model Context Protocol, Agent2Agent, and various UI event stream standards to give your agent access to external tools and data, let it interoperate with other agents, and build interactive applications with streaming event-based communication.
- Human-in-the-Loop Tool Approval: Easily lets you flag that certain tool calls require approval before they can proceed, possibly depending on tool call arguments, conversation history, or user preferences.
- Durable Execution: Enables you to build durable agents that can preserve their progress across transient API failures and application errors or restarts, and handle long-running, asynchronous, and human-in-the-loop workflows with production-grade reliability.
- Streamed Outputs: Provides the ability to stream structured output continuously, with immediate validation, ensuring real time access to generated data.
- Graph Support: Provides a powerful way to define graphs using type hints, for use in complex applications where standard control flow can degrade to spaghetti code.
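The human-in-the-loop tool approval idea from the list above can be sketched without any framework at all: gate selected tool calls behind an approval callback whose decision may depend on the call's name and arguments. Everything here (`ToolCall`, `run_tool`, the `policy` function) is illustrative plain Python, not Pydantic AI's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict

def run_tool(call: ToolCall,
             needs_approval: Callable[[ToolCall], bool],
             approve: Callable[[ToolCall], bool]) -> str:
    # Sensitive calls are routed through the approval callback; the
    # policy can inspect the tool name and arguments before deciding.
    if needs_approval(call) and not approve(call):
        return f'{call.name}: denied by reviewer'
    return f'{call.name}: executed with {call.args}'

# Policy: only card-blocking calls need a human sign-off.
policy = lambda call: call.name == 'block_card'

print(run_tool(ToolCall('block_card', {'customer_id': 1}), policy, lambda c: False))
# block_card: denied by reviewer
print(run_tool(ToolCall('get_balance', {'customer_id': 1}), policy, lambda c: False))
# get_balance: executed with {'customer_id': 1}
```

In Pydantic AI the same gate also gets conversation history and user preferences as inputs, per the feature description above.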
Realistically though, no list is going to be as convincing as giving it a try and seeing how it makes you feel!
Hello World Example
Here's a minimal example of Pydantic AI:
```python
from pydantic_ai import Agent

# Define a very simple agent including the model to use, you can also set the model when running the agent.
agent = Agent(
    'anthropic:claude-sonnet-4-6',
    # Register static instructions using a keyword argument to the agent.
    # For more complex dynamically-generated instructions, see the example below.
    instructions='Be concise, reply with one sentence.',
)

# Run the agent synchronously, conducting a conversation with the LLM.
result = agent.run_sync('Where does "hello world" come from?')
print(result.output)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""
```
(This example is complete, it can be run "as is", assuming you've installed the pydantic_ai package)
The exchange will be very short: Pydantic AI will send the instructions and the user prompt to the LLM, and the model will return a text response.
Not very interesting yet, but we can easily add tools, dynamic instructions, structured outputs, or composable capabilities to build more powerful agents.
Here's the same agent with thinking and web search capabilities:
```python
from pydantic_ai import Agent
from pydantic_ai.capabilities import Thinking, WebSearch

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='Be concise, reply with one sentence.',
    capabilities=[Thinking(), WebSearch()],
)

result = agent.run_sync('What was the mass of the largest meteorite found this year?')
print(result.output)
```
Tools & Dependency Injection Example
Here is a concise example using Pydantic AI to build a support agent for a bank:
(Better documented example in the docs)
```python
from dataclasses import dataclass

from pydantic import BaseModel, Field

from pydantic_ai import Agent, RunContext
from bank_database import DatabaseConn


# SupportDependencies is used to pass data, connections, and logic into the model that will be needed when running
# instructions and tool functions. Dependency injection provides a type-safe way to customise the behavior of your agents.
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn


# This Pydantic model defines the structure of the output returned by the agent.
class SupportOutput(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
```