Head-to-head
Claude Opus 4.6 vs GPT-5.4
Normalized scores are min–max scaled per benchmark to a 0–100 range across all models we track. Open the interactive compare view to add benchmarks to the radar chart or pull in more models.
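The per-benchmark min–max scaling described above can be sketched as follows. This is an illustrative snippet, not the site's actual implementation; the function name and list-based interface are assumptions for the example.

```python
def min_max_normalize(scores):
    """Scale one benchmark's raw scores to 0-100 via min-max normalization.

    Scaling is done per benchmark across all tracked models, so the
    lowest-scoring model maps to 0 and the highest to 100.
    """
    lo, hi = min(scores), max(scores)
    if hi == lo:  # all models tied on this benchmark: avoid division by zero
        return [100.0 for _ in scores]
    return [100.0 * (s - lo) / (hi - lo) for s in scores]
```

Because the scaling is relative to the tracked model pool, a model's normalized score can change when models are added to or removed from the tracker, even if its raw benchmark result is unchanged.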
Claude Opus 4.6 is developed by Anthropic; GPT-5.4 by OpenAI.

| Benchmark | Claude Opus 4.6 | GPT-5.4 |
|---|---|---|
| Chatbot Arena Elo | | |
| SWE-bench Verified | — | |
| FrontierMath Tiers 1-3 | | |
| SimpleQA Verified | | |
| OTIS Mock AIME 2024-2025 | | |
| Humanity's Last Exam | — | |
| ARC-AGI 2 | | |
| Terminal-Bench 2 | | |
| Frontier Composite | | |
| Output Stability | | |
| Format Adherence | | |
| Recovery Rate | | |
| Safety Handling | | |
Methodology matches the main AI Model Analyzer About page.