Head-to-head
Claude Opus 4.7 vs FLUX.1.1 [pro]
Normalized scores are min-max normalized per benchmark across all models we track, on a 0–100 scale. Open the interactive compare view to add benchmarks to the radar chart or pull in more models.
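The per-benchmark min-max scaling described above can be sketched as follows. This is a minimal illustration; `min_max_normalize` and its tie handling are assumptions, not the analyzer's actual code:

```python
def min_max_normalize(scores):
    """Scale one benchmark's raw scores to 0-100 across models.

    `scores` maps model name -> raw score on a single benchmark.
    Hypothetical helper, not the analyzer's real pipeline.
    """
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        # All models tied on this benchmark: place everyone at 100.
        return {m: 100.0 for m in scores}
    return {m: 100.0 * (v - lo) / (hi - lo) for m, v in scores.items()}
```

For example, raw Elo scores of 10, 15, and 20 across three tracked models would map to 0, 50, and 100 respectively.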
Claude Opus 4.7 (Anthropic)
FLUX.1.1 [pro] (Black Forest Labs)

| Benchmark | Claude Opus 4.7 | FLUX.1.1 [pro] |
|---|---|---|
| Chatbot Arena Elo | — | — |
| Image Arena Elo | — | — |
| Prompt Adherence | — | — |
| FrontierMath Tiers 1-3 | — | — |
| SimpleQA Verified | — | — |
| OTIS Mock AIME 2024-2025 | — | — |
| Humanity's Last Exam | — | — |
| ARC-AGI 2 | — | — |
| Frontier Composite | — | — |
| Output Stability | — | — |
| Format Adherence | — | — |
| Recovery Rate | — | — |
| Safety Handling | — | — |
Methodology matches the main AI Model Analyzer About page.