
Claude Opus 4.6

Frontier

Anthropic · Released on 2026-02-05

Anthropic's flagship model with 1M token context (now default), adaptive thinking, and the highest agentic coding scores. Introduced Agent Teams for parallel autonomous coding. Nearly doubled ARC-AGI-2 score over Opus 4.5 (68.8% vs 37.6%).

Overall Score: 91

Core Specs

Context Window: 1,000K (1M) tokens
Max Output: 128K tokens
Reasoning: Supported
Open Source: No
Multimodal Support: text, image
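The two headline numbers above interact: the 128K max output must fit inside the 1M context window, so the tokens left for the prompt shrink by whatever is reserved for the response. A minimal sketch of that budget arithmetic (the helper name and API are illustrative, not part of any SDK):

```python
# Illustrative budget math from the spec card: 1M-token context window,
# 128K-token max output. Input room is what remains after reserving
# space for the response.
CONTEXT_WINDOW = 1_000_000   # 1M tokens
MAX_OUTPUT = 128_000         # 128K tokens

def input_budget(reserved_output: int = MAX_OUTPUT) -> int:
    """Tokens left for the prompt after reserving output space."""
    if not 0 <= reserved_output <= MAX_OUTPUT:
        raise ValueError("reserved output must be between 0 and 128K")
    return CONTEXT_WINDOW - reserved_output

print(input_budget())  # 872000 tokens of prompt room at full output reserve
```

Reserving less output (e.g. for short classification replies) frees the difference for input context.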

User Feedback Highlights

Based on community feedback.

Sentiment: 👍 80% · 😐 13% · 👎 7%

Pros & Cons

Pros

  • Highest SWE-bench score (80.8%)
  • 128K max output (doubled from 4.5)
  • Adaptive thinking with effort levels
  • Agent Teams for parallel coding
  • Best instruction following in complex contexts

Cons

  • 2x price of GPT-5.4
  • Response prefilling removed (breaking change)
  • Extended thinking deprecated
  • Rate limits can be hit quickly on entry-level plans

Reliability

SLA: 99.9%
Incidents (30d): 44
Last Incident: 2026-04-08
The 4/8 incident has been resolved (under monitoring). Nine incidents in the first nine days of April, including issues on three consecutive days (4/6–4/8).

Pricing

Input (per 1M tokens): $5.00
Output (per 1M tokens): $25.00
Subscription: $20/month
Updated on 2026-04-21
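At the listed rates, per-request cost is simple linear arithmetic over input and output tokens. A back-of-the-envelope estimator (the function is illustrative, not an official SDK utility):

```python
# Cost estimator using the listed rates:
# $5.00 per 1M input tokens, $25.00 per 1M output tokens.
INPUT_PER_M = 5.00
OUTPUT_PER_M = 25.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-token rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# Example: a 20K-token prompt with a 2K-token reply.
print(round(estimate_cost(20_000, 2_000), 3))  # 0.15
```

Note the 5:1 output-to-input price ratio: for long generations, output tokens dominate the bill.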

Get Started with Claude

1. Sign up for a Claude account
2. Choose Pro ($20/mo) for Opus access
3. Start chatting or try Claude Code

💡 Free tier uses Sonnet. Upgrade to Pro for Opus.
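For API users, a request to this model takes the shape of Anthropic's Messages API. The sketch below only assembles the request body as a plain dict (no network call); the model id "claude-opus-4-6" is an assumed identifier, and adaptive-thinking effort parameters are omitted because their exact field names are not given here:

```python
# Messages-API-style request body sketch. The model id below is an
# assumption for illustration, not a confirmed identifier.
def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble a Messages-API-style request body (no network call)."""
    return {
        "model": "claude-opus-4-6",  # assumed model id
        "max_tokens": max_tokens,    # capped at 128K per the specs above
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Summarize this diff.")
print(req["model"], req["max_tokens"])
```

The same body works whether sent via an official SDK or a raw HTTPS POST; only the transport differs.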

Benchmarks

SWE-bench Verified: 80.8%
Terminal-Bench 2: 65.4%
BrowseComp: 84%
GDPval (Elo): 1606
ARC-AGI-2: 68.8%
GPQA Diamond: 91.3%
BigLaw Bench: 90.2%