# Head-to-head

## Gemini 2.5 Pro vs o1
Normalized scores are min-max scaled per benchmark across all models we track, so each benchmark's scores fall on a 0–100 scale.
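The per-benchmark min-max scaling described above can be sketched as follows. This is an illustrative implementation under stated assumptions, not the site's actual code; the model names and raw scores below are hypothetical.

```python
def min_max_normalize(scores):
    """Scale one benchmark's raw scores to 0-100 via min-max normalization.

    `scores` maps model name -> raw score on a single benchmark.
    Models with no score on that benchmark (None) are skipped,
    mirroring the dashes in the table below.
    """
    present = {m: s for m, s in scores.items() if s is not None}
    lo, hi = min(present.values()), max(present.values())
    if hi == lo:
        # All tracked models tied: place everyone at the top of the scale.
        return {m: 100.0 for m in present}
    return {m: round(100 * (s - lo) / (hi - lo), 1) for m, s in present.items()}

# Hypothetical raw accuracies across four tracked models on one benchmark.
raw = {"model-a": 92.0, "model-b": 74.3, "model-c": 83.0, "model-d": None}
print(min_max_normalize(raw))
# The best model maps to 100, the worst to 0, the rest in between.
```

Because the scaling is done per benchmark, a 100 only means "best among tracked models on this benchmark", not a perfect raw score.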
| Benchmark | Gemini 2.5 Pro (Google) | o1 (OpenAI) |
|---|---|---|
| Chatbot Arena Elo | | |
| AIME 2024 | | |
| HumanEval | | |
| LiveCodeBench | — | |
| SWE-bench Verified | — | |
| MMMU | | |
| MathVista | | |
| RULER 128k | | |
| Output Speed | | |
| Time to First Token | | |
| FrontierMath Tiers 1–3 | — | |
| SimpleQA Verified | — | |
| OTIS Mock AIME 2024–2025 | — | |
| Terminal-Bench 2 | — | |
| Frontier Composite | | |
The methodology matches the main AI Model Analyzer About page.