A high raw download number, but 1,038 downloads per star vs CrewAI's 122 is a significant anomaly. Zero HN discussion despite a claimed 14M cumulative downloads. CI/CD pipeline inflation is a credible concern.
AWS Strands Agents SDK
5.5M PyPI downloads/month (claimed 14M+ cumulative since May 2025). v1.30.0 (2026-03-11), A2A protocol, Agents-as-Tools pattern. Internal AWS usage: Amazon Q Developer, AWS Glue, VPC Reachability Analyzer. Best for AWS Bedrock teams only. Anomalous download/star ratio (1,038 DL/star vs CrewAI's 122) — zero organic HN signal.
53/100
Trust
5.3K+
Stars
2
Evidence
Product screenshot

Repo health
1d ago
Last push
428
Open issues
723
Forks
100
Contributors
Editorial verdict
AWS Bedrock teams only. Claimed downloads are high, but the anomalous download/star ratio (1,038 vs CrewAI's 122) and the absence of organic HN discussion despite 14M cumulative downloads raise a CI/CD pipeline-inflation concern. Official AWS tooling is a genuine advantage for Bedrock teams; the lock-in penalty is high for everyone else.
Source
GitHub: strands-agents/sdk-python
Docs: strandsagents.com
Public evidence
Internal dogfooding across major AWS services is a meaningful signal of production viability within the AWS ecosystem.
Where it wins
Official AWS SDK — supported tooling for Bedrock teams
Internal AWS usage: Amazon Q Developer, AWS Glue, VPC Reachability Analyzer
A2A protocol support; Agents-as-Tools pattern
Calculator agent in ~3 lines vs LangGraph's ~40 (AWS benchmark, caveated)
Cold start ~800ms/150MB vs LangGraph ~1,200ms/250MB (AWS benchmark, caveated)
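The Agents-as-Tools pattern named above treats a specialist agent as just another tool an orchestrator can call. The sketch below is a framework-free illustration of that idea, not the Strands API: `research_agent`, `Orchestrator`, and the routing logic are all hypothetical stand-ins.

```python
from typing import Callable, Dict

# Framework-free sketch of the Agents-as-Tools pattern: a specialist
# "agent" is exposed to an orchestrator as an ordinary callable tool.
# All names here are illustrative, not the Strands API.

def research_agent(query: str) -> str:
    # Stand-in for a full specialist agent (model + tools + agent loop).
    return f"[research notes for: {query}]"

class Orchestrator:
    def __init__(self, tools: Dict[str, Callable[[str], str]]):
        self.tools = tools

    def run(self, task: str) -> str:
        # A real orchestrator would let the model choose which tool to
        # invoke; here every task is routed to the single specialist.
        tool = self.tools["research"]
        return tool(task)

orchestrator = Orchestrator(tools={"research": research_agent})
print(orchestrator.run("A2A protocol"))
```

In Strands the same shape is achieved by wrapping an agent-backed function with the `@tool` decorator so a parent agent can delegate to it.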
Where to be skeptical
Anomalous download/star ratio: 5.5M downloads / 5.3K stars = 1,038 DL/star (CrewAI: 122) — CI/CD pipeline inflation concern
Zero HN organic discussion despite claimed 14M cumulative downloads
AWS Bedrock lock-in — not model-agnostic
All benchmark data from AWS-controlled publications, no third-party reproduction
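The ratio in the first bullet is simple arithmetic and can be reproduced from the figures on this page; CrewAI's raw download and star counts are not given here, so only its cited ratio of 122 is used for comparison.

```python
# Reproduce the downloads-per-star ratio cited above.
strands_downloads = 5_500_000   # monthly PyPI downloads (claimed)
strands_stars = 5_300           # GitHub stars (rounded, per the metrics above)

ratio = strands_downloads / strands_stars
print(round(ratio))  # → 1038

# CrewAI's cited ratio, for scale (raw figures not given on this page).
crewai_ratio = 122
print(round(ratio / crewai_ratio, 1))  # → 8.5, i.e. ~8.5x CrewAI's ratio
```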
Similar skills
Claude Code
90
Anthropic's official agentic coding CLI. Terminal-native, tool-use-driven, with deep file system and shell access. #1 SWE-bench Pro standardized (45.89%), ~4% of GitHub public commits (SemiAnalysis), $2.5B annualized revenue (fastest enterprise SaaS to $1B ARR). 8M+ npm weekly downloads. Opus 4.6 with 1M context.
OpenHands
88
Category leader in multi-agent orchestration — 69,352 stars (verified), $18.8M Series A, AMD hardware partnership, 455 contributors, 1M downloads/month PyPI (3.4M all-time). SWE-Bench Verified 72% with Claude 4.5 Extended Thinking (updated 2026-03-19), Multi-SWE-Bench #1 across 8 languages. Gap to #2 is enormous on every axis.
n8n
83
179,860 GitHub stars — largest OSS repo in the adjacent workflow-automation space by 2×. 3,000+ enterprise customers, ~200,000 active users, $60M Series B. 1,100+ ready-to-use integrations, native AI Agent node, MCP client/server support. Best for orchestrating SaaS integrations and processes with AI nodes — not for building agent systems in code.
LangGraph
78
#1 Python agent framework by production evidence — 40.2M PyPI downloads/month, Fortune 500 deployments (LinkedIn, Uber, Replit, Elastic, Klarna, Cloudflare, Coinbase), ~400 LangGraph Platform companies, LangSmith rated best-in-class observability. Stable v1.x API, model-agnostic, MCP support.
Raw GitHub source
GitHub README peek
Constrained peek so you can sanity-check the source material without leaving the site.
Strands Agents is a simple yet powerful SDK that takes a model-driven approach to building and running AI agents. From simple conversational assistants to complex autonomous workflows, from local development to production deployment, Strands Agents scales with your needs.
Feature Overview
- Lightweight & Flexible: Simple agent loop that just works and is fully customizable
- Model Agnostic: Support for Amazon Bedrock, Anthropic, Gemini, LiteLLM, Llama, Ollama, OpenAI, Writer, and custom providers
- Advanced Capabilities: Multi-agent systems, autonomous agents, and streaming support
- Built-in MCP: Native support for Model Context Protocol (MCP) servers, enabling access to thousands of pre-built tools
Quick Start
# Install Strands Agents
pip install strands-agents strands-agents-tools
from strands import Agent
from strands_tools import calculator
agent = Agent(tools=[calculator])
agent("What is the square root of 1764")
Note: For the default Amazon Bedrock model provider, you'll need AWS credentials configured and model access enabled for Claude 4 Sonnet in the us-west-2 region. See the Quickstart Guide for details on configuring other model providers.
Installation
Ensure you have Python 3.10+ installed, then:
# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows use: .venv\Scripts\activate
# Install Strands and tools
pip install strands-agents strands-agents-tools
Features at a Glance
Python-Based Tools
Easily build tools using Python decorators:
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count words in text.

    This docstring is used by the LLM to understand the tool's purpose.
    """
    return len(text.split())

agent = Agent(tools=[word_count])
response = agent("How many words are in this sentence?")
Hot Reloading from Directory:
Enable automatic tool loading and reloading from the ./tools/ directory:
from strands import Agent
# Agent will watch ./tools/ directory for changes
agent = Agent(load_tools_from_directory=True)
response = agent("Use any tools you find in the tools directory")
MCP Support
Seamlessly integrate Model Context Protocol (MCP) servers:
from strands import Agent
from strands.tools.mcp import MCPClient
from mcp import stdio_client, StdioServerParameters
aws_docs_client = MCPClient(
    lambda: stdio_client(StdioServerParameters(command="uvx", args=["awslabs.aws-documentation-mcp-server@latest"]))
)

with aws_docs_client:
    agent = Agent(tools=aws_docs_client.list_tools_sync())
    response = agent("Tell me about Amazon Bedrock and how to use it with Python")
Multiple Model Providers
Support for various model providers:
from strands import Agent
from strands.models import BedrockModel
from strands.models.ollama import OllamaModel
from strands.models.llamaapi import LlamaAPIModel
from strands.models.gemini import GeminiModel
from strands.models.llamacpp import LlamaCppModel
# Bedrock
bedrock_model = BedrockModel(
    model_id="us.amazon.nova-pro-v1:0",
    temperature=0.3,
    streaming=True,  # Enable/disable streaming
)
agent = Agent(model=bedrock_model)
agent("Tell me about Agentic AI")
# Google Gemini
gemini_model = GeminiModel(
client_args={