skillpack.co
smolagents (HuggingFace)

⚠️ RESEARCH/EXPERIMENTATION ONLY. 26,100 GitHub stars; 443K PyPI/month. CVE-2025-9959 (JFrog, CVSS 7.6): sandbox escape via LocalPythonExecutor. NCC Group (2025-07-28): arbitrary file read/write + RCE via prompt injection — architectural mitigation only. Docker/E2B sandboxing is a hard requirement, not optional.

Expertise: agents · orchestration
Composite complexity: 78/100
Trust: 26K+ stars · 2 evidence sources
Repo health: 78/100
Last push: 6d ago
Open issues: 439
Forks: 2,380
Contributors: 199

Editorial verdict

Research and experimentation only. LocalPythonExecutor must NOT be used in production under any circumstances. Two independent security firms (JFrog + NCC Group) confirmed this. Docker or E2B sandboxing is an architectural requirement. Best for: evaluating CodeAgent paradigm, HuggingFace model experimentation, academic research.
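Concretely, the sandboxing requirement means selecting a remote executor when the agent is constructed. A minimal, untested sketch, assuming smolagents ≥ v1.21 with a Docker daemon running and valid Hugging Face credentials; `executor_type` is the documented parameter for choosing the execution backend, but verify the accepted values against the current smolagents docs before relying on this:

```python
from smolagents import CodeAgent, InferenceClientModel, WebSearchTool

model = InferenceClientModel()

# executor_type="docker" runs every code action inside a container,
# keeping a LocalPythonExecutor-style escape off the host. "e2b" is
# the hosted alternative; never ship executor_type="local".
agent = CodeAgent(
    tools=[WebSearchTool()],
    model=model,
    executor_type="docker",
)
agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
```

The choice of backend is a deployment decision, not a code-level patch: the container boundary is what contains an interpreter escape.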

Public evidence

Where it wins

26,100 GitHub stars — strong research community

CodeAgent paradigm (code-based tool-calling vs JSON) is genuinely differentiated

Best for HuggingFace model experimentation and academic research

443K PyPI/month — research-grade adoption
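The paradigm difference is concrete: a JSON-calling agent emits one tool call per model turn, while a CodeAgent emits a short program that composes tools and intermediate variables in a single step. A toy illustration, with `search` and `compute_seconds` as hypothetical stand-in tools invented for this sketch:

```python
# Hypothetical stand-in tools, invented for this sketch.
def search(query: str) -> dict:
    return {"leopard_speed_kmh": 58, "bridge_length_m": 155}

def compute_seconds(length_m: float, speed_kmh: float) -> float:
    return length_m / (speed_kmh * 1000 / 3600)

# JSON-style tool-calling: one call per model turn, so feeding the
# search result into the calculator costs an extra LLM round trip.
json_action = {"tool": "search", "args": {"query": "leopard top speed"}}

# Code-style action (the CodeAgent idea): the model writes a short
# program that chains both tools and keeps intermediate state itself.
code_action = """
facts = search("leopard top speed; Pont des Arts length")
answer = compute_seconds(facts["bridge_length_m"], facts["leopard_speed_kmh"])
"""
scope = {"search": search, "compute_seconds": compute_seconds}
exec(code_action, scope)
print(round(scope["answer"], 1))  # ~9.6 seconds, resolved in one agent step
```

The same flexibility is the security trade-off: the model's output is executed as code, which is why the executor's sandbox matters so much.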

Where to be skeptical

⚠️ CVE-2025-9959 (JFrog, CVSS 7.6): LocalPythonExecutor sandbox escape via dunder attribute validation bypass. Fixed in v1.21.0 but architectural risk remains.
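A simplified illustration of the bug class (not the actual exploit, and not LocalPythonExecutor's real AST-based validation): filters that look for a literal dunder in the code text fail when the attribute name is assembled at runtime.

```python
# Toy stand-in for a source-level dunder filter.
def naive_check(source: str) -> bool:
    """Reject code whose text contains a literal dunder name."""
    return "__" not in source

payload = "getattr((), chr(95) * 2 + 'class' + chr(95) * 2)"
assert naive_check(payload)   # no literal "__" anywhere in the source
escaped = eval(payload)       # ...but it still reaches ().__class__
assert escaped is tuple       # a dunder attribute, despite the filter
```

Once code can reach `__class__`, standard Python introspection chains lead out of most in-process restrictions, which is why the fix in v1.21.0 reduces, but cannot eliminate, the architectural risk.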

⚠️ NCC Group (2025-07-28): additional_authorized_imports enables arbitrary file read/write + RCE via prompt injection. No code-level patch — Docker/E2B required.
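Why an import allowlist is not a security boundary can be shown with a toy stand-in (this is not smolagents' actual executor; `restricted_exec` and the file path are invented for the sketch): once a module like `os` is allowlisted, injected code inherits everything that module can do.

```python
import importlib
import os
import tempfile

def restricted_exec(code: str, authorized_imports: list[str]) -> None:
    """Toy stand-in for a sandboxed executor with an import allowlist."""
    # Only allowlisted modules are visible; builtins are stripped.
    namespace = {name: importlib.import_module(name) for name in authorized_imports}
    exec(code, {"__builtins__": {}}, namespace)

# A prompt-injected code action: with `os` allowlisted, the "sandbox"
# hands over arbitrary file write (and os.system would give RCE).
target = os.path.join(tempfile.gettempdir(), "smolagents_demo_pwned.txt")
injected = (
    f"fd = os.open({target!r}, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)\n"
    "os.write(fd, b'owned')\n"
    "os.close(fd)\n"
)
restricted_exec(injected, authorized_imports=["os"])
assert open(target, "rb").read() == b"owned"
```

No in-process filter can make an allowlisted `os` safe against a hostile prompt, which is why the only mitigation is process-level isolation (Docker/E2B).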

Last stable release v1.24.0 (2026-01-16) — 2 months ago, moderate commit velocity

Cannot be deployed in production with LocalPythonExecutor under any configuration

Raw GitHub source

GitHub README peek

Constrained peek so you can sanity-check the source material without leaving the site.

<!--- Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0. -->

[README header: "Ask DeepWiki" badge and Hugging Face mascot image; tagline: "Agents that think in code!"]

smolagents is a library that enables you to run powerful agents in a few lines of code. It offers:

✨ Simplicity: the logic for agents fits in ~1,000 lines of code (see agents.py). We kept abstractions to their minimal shape above raw code!

🧑‍💻 First-class support for Code Agents. Our CodeAgent writes its actions in code (as opposed to "agents being used to write code"). To make it secure, we support executing in sandboxed environments via Blaxel, E2B, Modal, Docker, or Pyodide+Deno WebAssembly sandbox.

🤗 Hub integrations: you can share/pull tools or agents to/from the Hub for instant sharing of the most efficient agents!

🌐 Model-agnostic: smolagents supports any LLM. It can be a local transformers or ollama model, one of many providers on the Hub, or any model from OpenAI, Anthropic and many others via our LiteLLM integration.

👁️ Modality-agnostic: Agents support text, vision, video, even audio inputs! Cf this tutorial for vision.

🛠️ Tool-agnostic: you can use tools from any MCP server or from LangChain, and you can even use a Hub Space as a tool.

Full documentation can be found here.

[!NOTE] Check out our launch blog post to learn more about smolagents!

Quick demo

First install the package with a default set of tools:

pip install "smolagents[toolkit]"

Then define your agent, give it the tools it needs and run it!

from smolagents import CodeAgent, WebSearchTool, InferenceClientModel

model = InferenceClientModel()
agent = CodeAgent(tools=[WebSearchTool()], model=model, stream_outputs=True)

agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")

https://github.com/user-attachments/assets/84b149b4-246c-40c9-a48d-ba013b08e600

You can even share your agent to the Hub, as a Space repository:

agent.push_to_hub("m-ric/my_agent")

# agent.from_hub("m-ric/my_agent") to load an agent from Hub

Our library is LLM-agnostic: you could switch the example above to any inference provider.

<details> <summary> <b>InferenceClientModel, gateway for all <a href="https://huggingface.co/docs/inference-providers/index">inference providers</a> supported on HF</b></summary>
from smolagents import InferenceClientModel

model = InferenceClientModel(
    model_id="deepseek-ai/DeepSeek-R1",
    provider="together",
)
</details> <details> <summary> <b>LiteLLM to access 100+ LLMs</b></summary>
import os
from smolagents import LiteLLMModel

model = LiteLLMModel(
    model_id="anthropic/claude-4-sonnet-latest",
    temperature=0.2,
    api_key=os.environ["ANTHROPIC_API_KEY"]
)
</details> <details> <summary> <b>OpenAI-compatible servers: Together AI</b></summary>
import os
from smolagents import OpenAIModel

model = OpenAIModel(
    model_id="deepseek-ai/DeepSeek-R1",
    api_base="https://api.together.xyz/v1/", # Leave this blank to query OpenAI servers.
    api_key=os.environ["TOGETHER_API_KEY"], # Switch to the API key for the server you're targeting.
)
</details> <details> <summary> <b>OpenAI-compatible servers: OpenRouter</b></summary>
import os
from smolagents import OpenAIModel

model = OpenAIModel(
    model_id="openai/gpt-4o",
    api_base="https://openrouter.ai/api/v1", # Leave this blank to query OpenAI servers.
    api_key=os.environ["OPENROUTER_API_KEY"], # Switch to the API key for the server you're targeting.
)
</details> <details> <summary> <b>Local `transformers` model</b></summary>
from smolagents import TransformersModel

model = TransformersModel(
    model_id="Qwen/Qwen3-Next-80B-A3B-Thinking",
    max_new_tokens=4096,
    device_map="auto"
)
</details>