Head-to-head
Claude Opus 4.6 vs GPT Image 1
Normalized scores are min-max scaled per benchmark to 0–100 across all models we track. Open the interactive compare view to add benchmarks to the radar chart or pull in more models.
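The min-max scaling described above can be sketched as follows. This is a minimal illustration, not the analyzer's actual code; the function name, the dict-based input, and the tie-handling convention (all tied models get 100) are assumptions.

```python
def min_max_normalize(scores):
    """Scale one benchmark's raw scores to 0-100 via min-max normalization.

    `scores` maps model name -> raw score on that benchmark; models
    without a score are simply absent from the dict.
    """
    lo = min(scores.values())
    hi = max(scores.values())
    if hi == lo:
        # All tracked models are tied; give everyone 100 by convention
        # (an assumption -- the analyzer may handle ties differently).
        return {model: 100.0 for model in scores}
    return {model: 100.0 * (raw - lo) / (hi - lo)
            for model, raw in scores.items()}
```

With this scheme the lowest-scoring tracked model lands at 0 and the highest at 100 on each benchmark, which is why per-benchmark positions are comparable on the radar chart even when raw scales differ (Elo ratings vs. pass rates).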
Claude Opus 4.6 (Anthropic) · GPT Image 1 (OpenAI)

| Benchmark | Claude Opus 4.6 | GPT Image 1 |
|---|---|---|
| Chatbot Arena Elo | — | — |
| SWE-bench Verified | — | — |
| Image Arena Elo | — | — |
| Prompt Adherence | — | — |
| FrontierMath Tiers 1-3 | — | — |
| SimpleQA Verified | — | — |
| OTIS Mock AIME 2024-2025 | — | — |
| Humanity's Last Exam | — | — |
| ARC-AGI 2 | — | — |
| Terminal-Bench 2 | — | — |
| Frontier Composite | — | — |
| Output Stability | — | — |
| Format Adherence | — | — |
| Recovery Rate | — | — |
| Safety Handling | — | — |
Methodology matches the main AI Model Analyzer About page.