Sandbox escape via dunder attribute validation bypass in LocalPythonExecutor. Fixed in v1.21.0 but architectural issues persist.
smolagents (HuggingFace)
⚠️ RESEARCH/EXPERIMENTATION ONLY. 26,100 GitHub stars; 443K PyPI/month. CVE-2025-9959 (JFrog, CVSS 7.6): sandbox escape via LocalPythonExecutor. NCC Group (2025-07-28): arbitrary file read/write + RCE via prompt injection — architectural mitigation only. Docker/E2B sandboxing is a hard requirement, not optional.
78/100
Trust
26K+
Stars
2
Evidence
Repo health
6d ago
Last push
439
Open issues
2,380
Forks
199
Contributors
Editorial verdict
Research and experimentation only. LocalPythonExecutor must NOT be used in production under any circumstances — two independent security firms (JFrog and NCC Group) have confirmed the risk. Docker or E2B sandboxing is an architectural requirement. Best for: evaluating the CodeAgent paradigm, HuggingFace model experimentation, academic research.
Source
GitHub: huggingface/smolagents
Docs: huggingface.co
Public evidence
additional_authorized_imports enables arbitrary file read/write and potential RCE via prompt injection. Architectural mitigation only — Docker or E2B required. Two independent firms confirm: do not deploy LocalPythonExecutor in production.
How does this compare?
See side-by-side metrics against other skills in the same category.
Where it wins
26,100 GitHub stars — strong research community
CodeAgent paradigm (code-based tool-calling vs JSON) is genuinely differentiated
Best for HuggingFace model experimentation and academic research
443K PyPI/month — research-grade adoption
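The code-based tool-calling paradigm can be illustrated without the library at all: instead of emitting a rigid JSON tool call, the model emits a Python snippet that the runtime executes against the exposed tools. A toy sketch, where `model_action` stands in for LLM output and `web_search` is a stub tool (neither is smolagents code):

```python
import json

def web_search(query: str) -> str:
    # Stub tool standing in for a real search backend.
    return f"results for {query!r}"

# JSON paradigm: one rigid call per step, dispatched by name.
json_action = '{"tool": "web_search", "arguments": {"query": "leopard top speed"}}'
call = json.loads(json_action)
json_result = {"web_search": web_search}[call["tool"]](**call["arguments"])

# Code paradigm: the model writes a snippet that can compose calls,
# loop, and compute intermediate values — this is what a CodeAgent
# executes (ideally inside a sandbox, per the security notes above).
model_action = (
    "speed_kmh = 58\n"
    "answer = web_search('Pont des Arts length') + f' at {speed_kmh} km/h'"
)
namespace = {"web_search": web_search}
exec(model_action, namespace)
code_result = namespace["answer"]

print(json_result)
print(code_result)
```

The flexibility of the second form is also exactly why it needs sandboxing: `exec` on model output is arbitrary code execution by construction.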
Where to be skeptical
⚠️ CVE-2025-9959 (JFrog, CVSS 7.6): LocalPythonExecutor sandbox escape via dunder attribute validation bypass. Fixed in v1.21.0 but architectural risk remains.
⚠️ NCC Group (2025-07-28): additional_authorized_imports enables arbitrary file read/write + RCE via prompt injection. No code-level patch — Docker/E2B required.
Last stable release v1.24.0 (2026-01-16) — 2 months ago, moderate commit velocity
Cannot be deployed in production with LocalPythonExecutor under any configuration
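To see why static dunder-name filtering is fragile (the general class of bug behind CVE-2025-9959), consider a naive validator. This is a toy illustration, not the actual smolagents interpreter:

```python
def naive_validate(source: str) -> bool:
    """Toy validator: reject source that literally contains a dunder."""
    return "__" not in source

# Caught: a direct dunder access.
assert not naive_validate("().__class__")

# Missed: the same attribute name assembled at runtime, so the source
# text never contains two consecutive underscores.
bypass = 'getattr((), "_" + "_class_" + "_")'
assert naive_validate(bypass)   # passes the filter...
assert eval(bypass) is tuple    # ...but still reaches the dunder
```

Once an attacker can reach `__class__` and similar attributes, the standard object-graph traversal to interpreter internals follows, which is why sandbox escapes in this class are treated as architectural rather than patchable at the string level.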
Ranking in categories
Know a better alternative?
Submit evidence and we'll run the full pipeline.
Similar skills
Claude Code
90
Anthropic's official agentic coding CLI. Terminal-native, tool-use-driven, with deep file system and shell access. #1 SWE-bench Pro standardized (45.89%), ~4% of GitHub public commits (SemiAnalysis), $2.5B annualized revenue (fastest enterprise SaaS to $1B ARR). 8M+ npm weekly downloads. Opus 4.6 with 1M context.
OpenHands
88
Category leader in multi-agent orchestration — 69,352 stars (verified), $18.8M Series A, AMD hardware partnership, 455 contributors, 1M downloads/month PyPI (3.4M all-time). SWE-Bench Verified 72% with Claude 4.5 Extended Thinking (updated 2026-03-19), Multi-SWE-Bench #1 across 8 languages. Gap to #2 is enormous on every axis.
n8n
83
179,860 GitHub stars — largest OSS repo in adjacent workflow-automation space by 2×. 3,000+ enterprise customers, ~200,000 active users, $60M Series B. 1,100+ ready-to-use integrations, native AI Agent node, MCP client/server support. Best for orchestrating SaaS integrations and processes with AI nodes — not for building agent systems in code.
LangGraph
78
#1 Python agent framework by production evidence — 40.2M PyPI downloads/month, Fortune 500 deployments (LinkedIn, Uber, Replit, Elastic, Klarna, Cloudflare, Coinbase), ~400 LangGraph Platform companies, LangSmith rated best-in-class observability. Stable v1.x API, model-agnostic, MCP support.
Raw GitHub source
GitHub README peek
Constrained peek so you can sanity-check the source material without leaving the site.
smolagents is a library that enables you to run powerful agents in a few lines of code. It offers:
✨ Simplicity: the logic for agents fits in ~1,000 lines of code (see agents.py). We kept abstractions to their minimal shape above raw code!
🧑‍💻 First-class support for Code Agents. Our CodeAgent writes its actions in code (as opposed to "agents being used to write code"). To make it secure, we support executing in sandboxed environments via Blaxel, E2B, Modal, Docker, or Pyodide+Deno WebAssembly sandbox.
🤗 Hub integrations: you can share/pull tools or agents to/from the Hub for instant sharing of the most efficient agents!
🌐 Model-agnostic: smolagents supports any LLM. It can be a local transformers or ollama model, one of many providers on the Hub, or any model from OpenAI, Anthropic and many others via our LiteLLM integration.
👁️ Modality-agnostic: Agents support text, vision, video, even audio inputs! Cf this tutorial for vision.
🛠️ Tool-agnostic: you can use tools from any MCP server, from LangChain, you can even use a Hub Space as a tool.
Full documentation can be found here.
[!NOTE] Check our launch blog post to learn more about smolagents!
Quick demo
First install the package with a default set of tools:
pip install "smolagents[toolkit]"
Then define your agent, give it the tools it needs and run it!
from smolagents import CodeAgent, WebSearchTool, InferenceClientModel
model = InferenceClientModel()
agent = CodeAgent(tools=[WebSearchTool()], model=model, stream_outputs=True)
agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
https://github.com/user-attachments/assets/84b149b4-246c-40c9-a48d-ba013b08e600
You can even share your agent to the Hub, as a Space repository:
agent.push_to_hub("m-ric/my_agent")
# agent.from_hub("m-ric/my_agent") to load an agent from Hub
Our library is LLM-agnostic: you could switch the example above to any inference provider.
<details> <summary> <b>InferenceClientModel, gateway for all <a href="https://huggingface.co/docs/inference-providers/index">inference providers</a> supported on HF</b></summary>
from smolagents import InferenceClientModel
model = InferenceClientModel(
model_id="deepseek-ai/DeepSeek-R1",
provider="together",
)
</details>
<details>
<summary> <b>LiteLLM to access 100+ LLMs</b></summary>
import os
from smolagents import LiteLLMModel
model = LiteLLMModel(
model_id="anthropic/claude-4-sonnet-latest",
temperature=0.2,
api_key=os.environ["ANTHROPIC_API_KEY"]
)
</details>
<details>
<summary> <b>OpenAI-compatible servers: Together AI</b></summary>
import os
from smolagents import OpenAIModel
model = OpenAIModel(
model_id="deepseek-ai/DeepSeek-R1",
api_base="https://api.together.xyz/v1/", # Leave this blank to query OpenAI servers.
api_key=os.environ["TOGETHER_API_KEY"], # Switch to the API key for the server you're targeting.
)
</details>
<details>
<summary> <b>OpenAI-compatible servers: OpenRouter</b></summary>
import os
from smolagents import OpenAIModel
model = OpenAIModel(
model_id="openai/gpt-4o",
api_base="https://openrouter.ai/api/v1", # Leave this blank to query OpenAI servers.
api_key=os.environ["OPENROUTER_API_KEY"], # Switch to the API key for the server you're targeting.
)
</details>
<details>
<summary> <b>Local `transformers` model</b></summary>
from smolagents import TransformersModel
model = TransformersModel(
model_id="Qwen/Qwen3-Next-80B-A3B-Thinking",
max_new_tokens=4096,
device_map="auto"
)