
Auggie CLI (Augment Code)


Augment Code's coding CLI powered by the Augment Context Engine — a semantic codebase index. SWE-bench Pro: 51.80% on Augment's own scaffold (highest raw number in the category). 153 GitHub stars. No public release as of 2026-03-18.

Score: 53

Where it wins

51.80% SWE-bench Pro on Augment scaffold — highest raw number in the category

Augment Context Engine: semantic codebase index with deep code understanding (sketched after this list)

Architecture advantage is credible: same model, better scaffold → +5.91pp over SEAL standardized
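Augment has not published the Context Engine's internals, so the sketch below is not their design. It is only a minimal illustration of what a semantic codebase index generally means: chunk source files, map each chunk to a vector, and retrieve by similarity. The `CodebaseIndex` class, the identifier bag-of-words `embed`, and the cosine scorer are all hypothetical stand-ins; production engines use learned embeddings and AST-aware chunking.

```python
# Toy semantic-style codebase index (illustrative only — Augment's Context
# Engine implementation is not public). Shows the index/retrieve shape:
# embed each file, score against an embedded query, return top matches.
import math
import re
from collections import Counter
from pathlib import Path

IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def embed(text: str) -> Counter:
    """Toy 'embedding': term-frequency counts over code identifiers."""
    return Counter(tok.lower() for tok in IDENT.findall(text))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class CodebaseIndex:
    """Index source files as (path, vector) pairs; retrieve by query similarity."""

    def __init__(self) -> None:
        self.chunks: list[tuple[str, Counter]] = []

    def add_repo(self, root: str, glob: str = "**/*.py") -> None:
        for path in Path(root).glob(glob):
            text = path.read_text(errors="ignore")
            self.chunks.append((str(path), embed(text)))

    def search(self, query: str, k: int = 5) -> list[tuple[float, str]]:
        q = embed(query)
        scored = [(cosine(q, vec), path) for path, vec in self.chunks]
        return sorted(scored, reverse=True)[:k]

if __name__ == "__main__":
    index = CodebaseIndex()
    index.add_repo(".")  # index Python files under the current directory
    for score, path in index.search("parse config file and load settings"):
        print(f"{score:.3f}  {path}")
```

The index/retrieve interface is the part that generalizes; swapping the toy `embed` for a neural embedding model and chunking at function granularity is where a production engine like the Context Engine presumably differs.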

Where to be skeptical

No public release — 153 GitHub stars, not generally available

Benchmark uses Augment's own non-standardized scaffold — not an apples-to-apples comparison

No independent SWE-bench reproduction published

Cannot verify claims without public artifact

Editorial verdict

Highest SWE-bench Pro number in the category (51.80% on Augment scaffold), but the scaffold is not standardized — Augment ran the same Opus 4.5 model that scores 45.89% on SEAL's standardized setup, so the +5.91pp gap reflects scaffolding, not the model. The architecture/scaffolding advantage is credible and meaningful. Cannot rank above tools with millions of verified installs on a single blog-post benchmark. Watch for: public GA release and independent SWE-bench reproduction.

Videos

Reviews, tutorials, and comparisons from the community.

Playwright Can't Do This... But This MCP Can.

Augment Code·2025-08-13

Auggie CLI: Smartest + Most Powerful AI Agentic Coder! RIP Claude Code & Gemini CLI!

WorldofAI·2025-08-29

Meet Auggie CLI The Smartest Coding Agent Yet by Augment Code. CLAUDE CODE KILLER?

The Metaverse Guy·2025-08-26

Related

Claude Code

98

Anthropic's official agentic coding CLI. v2.1.81 (Mar 20) shipped `--bare`, smarter worktree resume, and improved MCP OAuth while the repo passed 82,204 stars and logged ~14 commits/week across 10+ maintainers. Terminal-native, tool-use-driven, with deep file system + shell access. #1 SWE-bench Pro standardized (45.89%), ~4% of GitHub public commits (SemiAnalysis), $2.5B annualized revenue. 8M+ npm weekly downloads. Opus 4.6 with 1M context.

OpenHands

88

Category leader in multi-agent orchestration — 69,352 stars (verified), $18.8M Series A, AMD hardware partnership, 455 contributors, 1M downloads/month PyPI (3.4M all-time). SWE-Bench Verified 72% with Claude 4.5 Extended Thinking (updated 2026-03-19), Multi-SWE-Bench #1 across 8 languages. Gap to #2 is enormous on every axis.

OpenCode

88

Open-source AI coding agent from SST. v1.2.27 active (2026-03-16) — development resumed after a gap. Official OpenAI partnership following the Anthropic OAuth block controversy, which also drove a star surge to 126K+ on GitHub. A known unauthenticated RCE was fixed in v1.1.10+ (CVE, 432 HN pts); CVE-2026-22812 (CVSS 8.8-10.0) is a second serious security incident.

Gemini CLI

88

Google's open-source terminal agent with Gemini 3 models, 1M token context, built-in Google Search grounding, and the best free tier in the category (60 req/min, 1K req/day). v0.35.0 (Mar 24) shipped keybinding, policy, and telemetry fixes while the repo hit 98,957 stars and 12,593 forks. Terminal-Bench 2.0: 78.4% (#1). SWE-bench Pro standardized 43.30% (#3). Plan Mode added March 2026. First-pass correctness ~50-60% (Educative.io).

Public evidence

moderate · Self-reported · 2026-03
Augment blog: Auggie tops SWE-bench Pro at 51.80% (Augment scaffold)

Auggie scores 51.80% on SWE-bench Pro using the Augment Context Engine scaffold — the highest raw number in the category. The blog acknowledges that the same Opus 4.5 model scores 45.89% on SEAL's standardized scaffold, supporting the claim that the gap comes from scaffold architecture, not model capability.

Official Augment blog post · Augment Code team (official, self-reported)
strong · 2026-03
SEAL SWE-bench Pro leaderboard: standardized scores verified 2026-03-18

SEAL's standardized SWE-bench Pro leaderboard confirms Claude Code scaffold at 45.89% (#1). Augment's 51.80% is not on this leaderboard — it uses a non-standardized scaffold. Context for interpreting Augment's claim.

SEAL public leaderboard (Scale AI, standardized) · SEAL (Scale AI Evaluation Lab — independent evaluation)
