Head-to-head
DeepSeek R1 vs GPT-5.2
Normalized scores are min-max scaled per benchmark to a 0–100 range across all models we track. Open the interactive compare view to add benchmarks to the radar chart or pull in more models.
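As a sketch of how per-benchmark min-max normalization works (the function and model names below are illustrative, not taken from the site's actual code): each model's raw score is rescaled so the lowest tracked score maps to 0 and the highest to 100.

```python
def min_max_normalize(scores: dict[str, float]) -> dict[str, float]:
    """Rescale raw benchmark scores to 0-100 via min-max normalization.

    `scores` maps model name -> raw score on one benchmark; normalization
    is done per benchmark, across all tracked models.
    """
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        # All models tied on this benchmark; treat them all as 100.
        return {model: 100.0 for model in scores}
    return {model: 100.0 * (s - lo) / (hi - lo) for model, s in scores.items()}


# Example with hypothetical Elo-style raw scores:
raw = {"model-a": 1200.0, "model-b": 1350.0, "model-c": 1275.0}
print(min_max_normalize(raw))
# -> {'model-a': 0.0, 'model-b': 100.0, 'model-c': 50.0}
```

Note that min-max scaling is sensitive to outliers: a single very weak model on a benchmark compresses every other model toward the top of the range.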
| Benchmark | DeepSeek R1 | GPT-5.2 |
|---|---|---|
| Chatbot Arena Elo | | |
| AIME 2024 | — | |
| HumanEval | — | |
| LiveCodeBench | — | |
| SWE-bench Verified | — | |
| RULER 128k | — | |
| Output Speed | — | |
| Time to First Token | — | |
| FrontierMath Tiers 1-3 | — | |
| SimpleQA Verified | | |
| OTIS Mock AIME 2024-2025 | | |
| Humanity's Last Exam | — | |
| ARC-AGI 2 | | |
| Aider Polyglot | — | |
| Terminal-Bench 2 | — | |
| Frontier Composite | | |
| Output Stability | | |
| Format Adherence | | |
| Recovery Rate | | |
| Safety Handling | | |
Methodology matches the main AI Model Analyzer About page.