Head-to-head
DeepSeek R1 vs Imagen 4
Normalized scores are min-max scaled per benchmark to 0–100 across all models we track.
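The per-benchmark min-max scaling described above can be sketched as follows. This is a minimal illustration, not the site's actual code; the function name and the tie-handling choice (all models scoring 100 when every raw score is equal) are assumptions.

```python
def min_max_normalize(scores):
    """Scale one benchmark's raw scores to 0-100 via min-max normalization.

    `scores` maps model name -> raw score on a single benchmark; models
    with no reported score are simply absent from the dict.
    """
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        # All models tied on this benchmark: place everyone at 100
        # (an assumed convention, not confirmed by the source).
        return {model: 100.0 for model in scores}
    return {
        model: 100.0 * (raw - lo) / (hi - lo)
        for model, raw in scores.items()
    }
```

For example, raw scores of 0, 5, and 10 would map to 0, 50, and 100, so the radar chart compares relative standing per benchmark rather than raw units.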
DeepSeek R1 (DeepSeek) · Imagen 4 (Google)

| Benchmark | DeepSeek R1 | Imagen 4 |
|---|---|---|
| Chatbot Arena Elo | — | — |
| AIME 2024 | — | — |
| HumanEval | — | — |
| LiveCodeBench | — | — |
| RULER 128k | — | — |
| Image Arena Elo | — | — |
| Prompt Adherence | — | — |
| Output Speed | — | — |
| Time to First Token | — | — |
| SimpleQA Verified | — | — |
| OTIS Mock AIME 2024-2025 | — | — |
| ARC-AGI 2 | — | — |
| Aider Polyglot | — | — |
| Frontier Composite | — | — |
| Output Stability | — | — |
| Format Adherence | — | — |
| Recovery Rate | — | — |
| Safety Handling | — | — |
Methodology follows the main AI Model Analyzer About page.