# Head-to-head: GPT-5.2 vs o3
Normalized scores are min–max scaled per benchmark to a 0–100 range across all models we track. Open the interactive compare view to add benchmarks to the radar chart or to pull in more models.
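The per-benchmark min–max normalization described above can be sketched as follows. This is an illustrative helper, not the site's actual implementation; the function name `min_max_scale` and the sample values are assumptions for the example.

```python
def min_max_scale(scores):
    """Min-max scale raw benchmark scores to 0-100.

    Each benchmark is scaled independently: the lowest-scoring tracked
    model maps to 0 and the highest to 100, as described above.
    Hypothetical sketch; not the analyzer's real code.
    """
    lo, hi = min(scores), max(scores)
    if hi == lo:
        # All models tied on this benchmark: avoid division by zero.
        return [100.0 for _ in scores]
    return [100.0 * (s - lo) / (hi - lo) for s in scores]


# Example: made-up raw accuracies for three tracked models on one benchmark.
raw = [35.0, 80.0, 92.0]
print(min_max_scale(raw))  # lowest maps to 0.0, highest to 100.0
```

Because the scaling is relative to the tracked model pool, adding or removing a model can shift every normalized score on that benchmark.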
| Benchmark | GPT-5.2 (OpenAI) | o3 (OpenAI) |
|---|---|---|
| Chatbot Arena Elo | | |
| AIME 2024 | — | |
| HumanEval | — | |
| LiveCodeBench | — | |
| SWE-bench Verified | — | |
| MMMU | — | |
| MathVista | — | |
| RULER 128k | — | |
| Output Speed | — | |
| Time to First Token | — | |
| FrontierMath Tiers 1-3 | | |
| SimpleQA Verified | | |
| OTIS Mock AIME 2024-2025 | | |
| Humanity's Last Exam | | |
| ARC-AGI 2 | | |
| Aider Polyglot | — | |
| Terminal-Bench 2 | — | |
| Frontier Composite | | |
| Output Stability | — | |
| Format Adherence | — | |
| Recovery Rate | — | |
| Safety Handling | — | |
Methodology follows the main AI Model Analyzer About page.