Head-to-head
DeepSeek R1 vs o1
Scores are min–max normalized per benchmark to a 0–100 scale across all models we track. Open the interactive compare view to add benchmarks to the radar chart or pull in more models.
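The per-benchmark min–max normalization described above can be sketched as follows. This is an illustrative reconstruction, not the site's actual code; the function name and the sample scores are hypothetical.

```python
def min_max_normalize(scores: dict[str, float]) -> dict[str, float]:
    """Scale one benchmark's raw scores to 0-100 via min-max normalization.

    `scores` maps model name -> raw score on a single benchmark across
    all tracked models. The lowest scorer maps to 0, the highest to 100.
    """
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        # All models tied on this benchmark; no spread to normalize over.
        return {model: 100.0 for model in scores}
    return {model: 100.0 * (v - lo) / (hi - lo) for model, v in scores.items()}


# Hypothetical raw Elo values, for illustration only:
normalized = min_max_normalize({"model-a": 1200, "model-b": 1300, "model-c": 1250})
# model-a -> 0.0, model-b -> 100.0, model-c -> 50.0
```

Because the min and max are taken per benchmark, a normalized score only says where a model sits relative to the other tracked models on that benchmark, not how hard the benchmark is in absolute terms.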
| Benchmark | DeepSeek R1 (DeepSeek) | o1 (OpenAI) |
|---|---|---|
| Chatbot Arena Elo | | |
| AIME 2024 | | |
| HumanEval | | |
| LiveCodeBench | — | |
| MMMU | — | |
| MathVista | — | |
| RULER 128k | | |
| Output Speed | | |
| Time to First Token | | |
| SimpleQA Verified | — | |
| OTIS Mock AIME 2024-2025 | — | |
| ARC-AGI 2 | — | |
| Aider Polyglot | — | |
| Frontier Composite | | |
| Output Stability | — | |
| Format Adherence | — | |
| Recovery Rate | — | |
| Safety Handling | — | |
Methodology matches the main AI Model Analyzer About page.